Multiple algorithms are known for efficiently calculating the prefix probability of a string under a probabilistic context-free grammar (PCFG). Good algorithms for the problem have a runtime cubic in the length of the input string. However, some proposed algorithms are suboptimal with respect to the size of the grammar. This paper proposes a novel speed-up of Jelinek and Lafferty's (1991) algorithm, whose original formulation runs in $\mathcal{O}(N^3 |\mathcal{N}|^3 + |\mathcal{N}|^4)$ time, where $N$ is the input length and $|\mathcal{N}|$ is the number of non-terminals in the grammar. In contrast, our speed-up runs in $\mathcal{O}(N^2 |\mathcal{N}|^3 + N^3 |\mathcal{N}|^2)$.
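The speed-up itself is paper-specific, but the cubic-in-$N$ dynamic program it builds on is the classical inside (CKY) algorithm. Below is a minimal sketch in Python, using a hypothetical two-rule grammar in Chomsky normal form; the grammar and all names are illustrative, not taken from the paper.

```python
from collections import defaultdict

# Toy PCFG in Chomsky normal form (hypothetical, for illustration only):
#   S -> S S  (0.3)
#   S -> 'a'  (0.7)
binary_rules = {("S", "S", "S"): 0.3}   # (lhs, B, C) -> probability
lexical_rules = {("S", "a"): 0.7}       # (lhs, terminal) -> probability

def inside_probability(tokens, start="S"):
    """CKY-style inside algorithm: O(N^3) in the input length."""
    n = len(tokens)
    # chart[i][j][A] = P(A derives tokens[i:j])
    chart = [[defaultdict(float) for _ in range(n + 1)] for _ in range(n + 1)]
    for i, tok in enumerate(tokens):
        for (lhs, word), p in lexical_rules.items():
            if word == tok:
                chart[i][i + 1][lhs] += p
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):
                for (lhs, b, c), p in binary_rules.items():
                    chart[i][j][lhs] += p * chart[i][k][b] * chart[k][j][c]
    return chart[0][n][start]
```

For the string "aaa" the two binary bracketings contribute $2 \cdot 0.3^2 \cdot 0.7^3 = 0.06174$ in total.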
Gaussian graphical models are nowadays commonly applied to the comparison of groups sharing the same variables, by jointly learning their independence structures. We consider the case where there are exactly two dependent groups and the association structure is represented by a family of coloured Gaussian graphical models suited to deal with paired data problems. To learn the two dependent graphs, together with their across-graph association structure, we implement a fused graphical lasso penalty. We carry out a comprehensive analysis of this approach, with special attention to the role played by some relevant submodel classes. In this way, we provide a broad set of tools for the application of Gaussian graphical models to paired data problems. These include results useful for the specification of penalty values in order to obtain a path of lasso solutions, and an ADMM algorithm that solves the fused graphical lasso optimization problem. Finally, we present an application of our method to cancer genomics, where it is of interest to compare cancer cells with a control sample from histologically normal tissues adjacent to the tumor. All the methods described in this article are implemented in the $\texttt{R}$ package $\texttt{pdglasso}$ available at: //github.com/savranciati/pdglasso.
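For reference, the objective that a fused graphical lasso optimizes for two groups can be written down directly. The sketch below (plain NumPy) evaluates it for candidate concentration matrices; penalizing only off-diagonal entries is one common convention and an assumption here, not necessarily the paper's exact formulation.

```python
import numpy as np

def fused_glasso_objective(S1, S2, K1, K2, lam1, lam2):
    """Penalized negative log-likelihood for two Gaussian graphical models.

    K1, K2 are candidate precision (concentration) matrices; S1, S2 are the
    sample covariance matrices of the two groups.  Off-diagonal entries carry
    a lasso penalty (lam1) plus a fusion penalty (lam2) on across-group
    differences.
    """
    def nll(S, K):
        sign, logdet = np.linalg.slogdet(K)
        assert sign > 0, "precision matrix must be positive definite"
        return -logdet + np.trace(S @ K)
    off = ~np.eye(S1.shape[0], dtype=bool)   # mask for off-diagonal entries
    penalty = (lam1 * (np.abs(K1[off]).sum() + np.abs(K2[off]).sum())
               + lam2 * np.abs((K1 - K2)[off]).sum())
    return nll(S1, K1) + nll(S2, K2) + penalty
```

A large `lam2` drives the two estimated precision matrices toward sharing entries, which is how the across-group structure is coupled.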
We introduce Scylla, a primal heuristic for mixed-integer optimization problems. It exploits approximate solutions of the linear programming (LP) relaxations, computed with the matrix-free Primal-Dual Hybrid Gradient (PDHG) algorithm under specialized termination criteria, and derives integer-feasible solutions via fix-and-propagate procedures and feasibility-pump-like updates to the objective function. Computational experiments show that the method is particularly suited to instances with hard linear relaxations.
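PDHG touches the constraint matrix only through matrix-vector products, which is what makes it matrix-free. The sketch below runs plain PDHG with fixed step sizes on an equality-form LP; it has none of Scylla's specialized termination criteria or rounding machinery and is only meant to show the core iteration.

```python
import numpy as np

def pdhg_lp(c, A, b, iters=20000):
    """Primal-dual hybrid gradient for  min c@x  s.t.  A@x = b, x >= 0.

    The loop uses A only via matrix-vector products (A @ v, A.T @ y),
    so A never needs to be factorized or even stored densely.
    """
    m, n = A.shape
    norm_a = np.linalg.norm(A, 2)
    tau = sigma = 0.9 / norm_a        # step sizes with tau*sigma*||A||^2 < 1
    x, y = np.zeros(n), np.zeros(m)
    for _ in range(iters):
        # primal step: gradient on the Lagrangian, projected onto x >= 0
        x_new = np.maximum(0.0, x - tau * (c + A.T @ y))
        # dual step with extrapolated primal iterate
        y = y + sigma * (A @ (2 * x_new - x) - b)
        x = x_new
    return x
```

On the tiny LP $\min x_1 + 2x_2$ subject to $x_1 + x_2 = 1$, $x \geq 0$, the iterates approach the optimal vertex $(1, 0)$.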
Three variants of the statistical complexity function, which is used as a criterion for detecting a useful signal in a signal-noise mixture, are considered. The probability distributions maximizing each of the considered variants of statistical complexity are obtained analytically, and conclusions are drawn about the efficiency of each variant for the detection problem. The considered information characteristics are compared, and the analytical results are illustrated on synthesized signals. A method is proposed for selecting the threshold of the information criterion, to be used in a decision rule for detecting a useful signal in the signal-noise mixture; the choice of threshold relies a priori on the analytically obtained maximum values. As a result, the complexity based on total variation demonstrates the best ability to detect the useful signal.
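The paper's exact complexity variants are not reproduced here. As an illustration of the general shape of such criteria, the sketch below computes an LMC-style complexity, the product of a normalized Shannon entropy and a total-variation disequilibrium; these specific definitions are assumptions for illustration only.

```python
import math

def shannon_entropy(p):
    """Normalized Shannon entropy, in [0, 1]."""
    h = -sum(pi * math.log(pi) for pi in p if pi > 0)
    return h / math.log(len(p))

def tv_disequilibrium(p):
    """Total-variation distance from p to the uniform distribution."""
    n = len(p)
    return 0.5 * sum(abs(pi - 1 / n) for pi in p)

def statistical_complexity(p):
    """LMC-style complexity: entropy times disequilibrium."""
    return shannon_entropy(p) * tv_disequilibrium(p)
```

Both extremes, the uniform and the degenerate distribution, have zero complexity; the maximum lies strictly in between, which is what makes an analytically known maximum usable as a detection threshold.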
The nonlocal Allen-Cahn equation, with its nonlocal diffusion operator, is a generalization of the classical Allen-Cahn equation. It satisfies the energy dissipation law and the maximum bound principle (MBP), and is important for simulating a range of physical and biological phenomena involving long-range interactions in space. In this paper, we construct first- and second-order (in time) accurate, unconditionally energy stable and MBP-preserving schemes for the nonlocal Allen-Cahn type model based on the stabilized exponential scalar auxiliary variable (sESAV) approach. On the one hand, we prove the MBP and unconditional energy stability carefully and rigorously at the fully discrete level. On the other hand, we adopt an efficient FFT-based fast solver to handle the nearly full coefficient matrix generated by the spatial discretization, which improves computational efficiency. Finally, typical numerical experiments are presented to demonstrate the performance of the proposed schemes.
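The FFT trick applies whenever the discretized nonlocal operator is a circulant convolution on a periodic grid. A one-dimensional sketch is below, with a dense reference implementation for comparison; the discretization details are generic assumptions, not the paper's scheme.

```python
import numpy as np

def nonlocal_diffusion(u, kernel):
    """Apply L u = J*u - (sum J) u on a periodic grid, computing the
    circular convolution J*u in O(N log N) via the FFT."""
    conv = np.real(np.fft.ifft(np.fft.fft(kernel) * np.fft.fft(u)))
    return conv - kernel.sum() * u

def nonlocal_diffusion_dense(u, kernel):
    """Reference: explicit circulant matrix-vector product, O(N^2)."""
    n = len(u)
    conv = np.array([sum(kernel[j] * u[(i - j) % n] for j in range(n))
                     for i in range(n)])
    return conv - kernel.sum() * u
```

Note that constants lie in the kernel of the operator (the convolution of a constant exactly cancels the $(\sum J)u$ term), mirroring the continuous nonlocal diffusion operator.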
Let $0\leq\tau_{1}\leq\tau_{2}\leq\cdots\leq\tau_{m}\leq1$ be the sorted values of a sample drawn from the uniform distribution on $[0,1]$. Let also $\epsilon,\delta\in\mathbb{R}$ and $d\in\mathbb{N}$. What is the probability that more than $d$ pairs of adjacent $\tau_{i}$ lie at distance $\delta$ from each other, up to an error $\epsilon$? In this paper we show how this previously untreated probabilistic problem arises naturally from the analysis of a simple asynchronous algorithm for the detection of signals with a known frequency, using the novel technology of an event camera.
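The probability in question is easy to estimate by direct Monte Carlo simulation, which also makes the problem statement concrete; all parameter names in the sketch are illustrative.

```python
import random

def spacing_exceedance_probability(m, d, delta, eps, trials=2000, seed=1):
    """Monte Carlo estimate of
    P( #{ i : |tau_{i+1} - tau_i - delta| <= eps } > d )
    for m sorted uniform points on [0, 1]."""
    random.seed(seed)
    hits = 0
    for _ in range(trials):
        taus = sorted(random.random() for _ in range(m))
        close = sum(1 for a, b in zip(taus, taus[1:])
                    if abs(b - a - delta) <= eps)
        if close > d:
            hits += 1
    return hits / trials
```

Sanity checks: with $\epsilon \geq 1$ every adjacent gap qualifies, so the event is certain whenever $d < m-1$; with $\epsilon = 0$ the event has probability zero for a continuous distribution.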
Consistent hashing is a technique that can minimize key remapping when the number of hash buckets changes. This paper proposes a fast consistent hash algorithm (called power consistent hash) that has $O(1)$ expected time for key lookup, independent of the number of buckets. Hash values are computed in real time; no search data structure is constructed to store bucket ranges or key mappings. The algorithm has a lightweight design using $O(1)$ space with superior scalability. In particular, it uses two auxiliary hash functions to achieve distribution uniformity and $O(1)$ expected time for key lookup. Furthermore, it performs consistent hashing such that only a minimal number of keys are remapped when the number of buckets changes. Consistent hashing has a wide range of use cases, including load balancing, distributed caching, and distributed key-value stores. The proposed algorithm is faster than well-known consistent hash algorithms with $O(\log n)$ lookup time.
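Power consistent hash itself is not reproduced here. For contrast, a well-known baseline with the same minimal-remapping property but $O(\log n)$ expected lookup time is Lamping and Veach's jump consistent hash:

```python
def jump_hash(key: int, num_buckets: int) -> int:
    """Jump consistent hash (Lamping & Veach): maps a 64-bit key to a
    bucket in [0, num_buckets) using O(1) space and no lookup tables."""
    b, j = -1, 0
    while j < num_buckets:
        b = j
        # 64-bit linear congruential step on the key
        key = (key * 2862933555777941757 + 1) & 0xFFFFFFFFFFFFFFFF
        j = int((b + 1) * (1 << 31) / ((key >> 33) + 1))
    return b
```

When the bucket count grows from $n$ to $n+1$, each key either keeps its bucket or moves to the new bucket $n$; this is exactly the consistency property, and only an expected $1/(n+1)$ fraction of keys move.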
The Independent Cutset problem asks whether a given graph contains a set of vertices that is both independent and a cutset. The problem is $\textsf{NP}$-complete even when the input graph is planar and has maximum degree five. In this paper, we first present an $\mathcal{O}^*(1.4423^{n})$-time algorithm for the problem, and show how to compute a minimum independent cutset (if any) within the same running time. Since the property of having an independent cutset is MSO$_1$-expressible, our main results concern structural parameterizations of the problem by parameters that are not bounded by a function of the clique-width of the input. We present $\textsf{FPT}$-time algorithms for the problem under the following parameters: the dual of the maximum degree, the dual of the solution size, the size of a dominating set (where a dominating set is given as an additional input), the size of an odd cycle transversal, the distance to chordal graphs, and the distance to $P_5$-free graphs. We close by introducing the notion of $\alpha$-domination, which allows us to identify more fixed-parameter tractable and polynomial-time solvable cases.
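For intuition about the problem itself (not the paper's branching algorithm), a brute-force $\mathcal{O}^*(2^n)$ checker is easy to state:

```python
from itertools import combinations

def find_independent_cutset(adj):
    """Brute force over all vertex subsets: O*(2^n), versus the paper's
    O*(1.4423^n).  `adj` maps each vertex to its set of neighbours."""
    vertices = list(adj)
    for size in range(1, len(vertices) - 1):   # a cutset leaves >= 2 vertices
        for s in combinations(vertices, size):
            sset = set(s)
            # independence: no edge inside the candidate set
            if any(v in adj[u] for u, v in combinations(s, 2)):
                continue
            # cutset: removing the set disconnects the remaining graph
            rest = [v for v in vertices if v not in sset]
            seen, stack = {rest[0]}, [rest[0]]
            while stack:
                u = stack.pop()
                for v in adj[u]:
                    if v not in sset and v not in seen:
                        seen.add(v)
                        stack.append(v)
            if len(seen) < len(rest):
                return sset
    return None
```

On the path $a$-$b$-$c$ the middle vertex is an independent cutset; a triangle has none, since its only independent sets are single vertices, whose removal leaves the graph connected.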
Recently, Chen and Gao~\cite{ChenGao2017} proposed a new quantum algorithm for Boolean polynomial system solving, motivated by the cryptanalysis of some post-quantum cryptosystems. The key idea of their approach is to apply a Quantum Linear System (QLS) algorithm to a Macaulay linear system over $\mathbb{C}$, which is derived from the Boolean polynomial system. The efficiency of their algorithm depends on the condition number of the Macaulay matrix. In this paper, we give a strong lower bound on the condition number as a function of the Hamming weight of the Boolean solution, and show that in many (if not all) cases a Grover-based exhaustive search algorithm outperforms their algorithm. We then improve upon Chen and Gao's algorithm by introducing the Boolean Macaulay linear system over $\mathbb{C}$, obtained by reducing the original Macaulay linear system. This improved algorithm could significantly outperform the brute-force algorithm when the Hamming weight of the solution is logarithmic in the number of Boolean variables. Furthermore, we provide a simpler and more elementary proof of correctness for our improved algorithm, using a reduction based on the Valiant-Vazirani affine hashing method, and extend the result to polynomial systems over $\mathbb{F}_q$, improving on subsequent work by Chen, Gao and Yuan~\cite{ChenGao2018}. Finally, we suggest a new approach for extracting the solution of the Boolean polynomial system via a generalization of the quantum coupon collector problem~\cite{arunachalam2020QuantumCouponCollector}.
The expectation-maximization (EM) algorithm and its variants are widely used in statistics. In high-dimensional mixture linear regression, the model is assumed to be a finite mixture of linear regressions and the number of predictors is much larger than the sample size. The standard EM algorithm, which attempts to find the maximum likelihood estimator, becomes infeasible for such a model. We devise a group lasso penalized EM algorithm and study its statistical properties. Existing theoretical results for regularized EM algorithms often rely on dividing the sample into many independent batches and employing a fresh batch in each iteration of the algorithm. Our algorithm and theoretical analysis do not require sample-splitting, and they extend to multivariate response cases. The proposed methods also show encouraging performance in numerical studies.
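The unpenalized two-component EM that a penalized variant builds on can be sketched in a few lines. The sketch below is a low-dimensional simplification with no group-lasso penalty and a fixed noise scale; it is not the proposed method, only the classical baseline it regularizes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data from a two-component mixture of regressions (slopes +2, -2).
n = 200
x = rng.uniform(-1, 1, size=n)
z = rng.integers(0, 2, size=n)             # latent component labels
slopes_true = np.array([2.0, -2.0])
y = slopes_true[z] * x + 0.1 * rng.normal(size=n)

def em_mixture_regression(x, y, n_iter=100, sigma=0.3):
    beta = np.array([1.0, -1.0])   # informative init; EM is only locally convergent
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibilities from Gaussian residual likelihoods
        res = y[:, None] - x[:, None] * beta[None, :]
        logw = np.log(pi)[None, :] - res**2 / (2 * sigma**2)
        w = np.exp(logw - logw.max(axis=1, keepdims=True))
        r = w / w.sum(axis=1, keepdims=True)
        # M-step: weighted least squares per component
        for k in range(2):
            beta[k] = (r[:, k] * x * y).sum() / (r[:, k] * x * x).sum()
        pi = r.mean(axis=0)
    return beta, pi
```

The high-dimensional setting replaces the weighted least-squares M-step with a group-lasso-penalized fit, which is where the paper's analysis departs from this sketch.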
A large body of work constructs hashmaps that minimize the number of collisions. However, to the best of our knowledge, no known hashing technique guarantees group fairness among different groups of items. We are given a set $P$ of $n$ tuples in $\mathbb{R}^d$, for a constant dimension $d$, and a set of groups $\mathcal{G}=\{\mathbf{g}_1,\ldots, \mathbf{g}_k\}$ such that every tuple belongs to a unique group. We formally define the fair hashing problem, introducing the notions of single fairness ($Pr[h(p)=h(x)\mid p\in \mathbf{g}_i, x\in P]$ for every $i=1,\ldots, k$), pairwise fairness ($Pr[h(p)=h(q)\mid p,q\in \mathbf{g}_i]$ for every $i=1,\ldots, k$), and the well-known collision probability ($Pr[h(p)=h(q)\mid p,q\in P]$). The goal is to construct a hashmap such that the collision probability, the single fairness, and the pairwise fairness are all close to $1/m$, where $m$ is the number of buckets in the hashmap. We propose two families of algorithms to design fair hashmaps. First, we focus on hashmaps with optimum memory consumption that minimize unfairness. We model the input tuples as points in $\mathbb{R}^d$, and the goal is to find a vector $w$ such that the projection of $P$ onto $w$ creates an ordering that can conveniently be split into a fair hashmap. For each projection we design efficient algorithms that find near-optimum partitions into exactly (or at most) $m$ buckets. Second, we focus on hashmaps with optimum fairness ($0$-unfairness) that minimize memory consumption. We make the important observation that the fair hashmap problem reduces to the necklace splitting problem. By carefully implementing algorithms for solving the necklace splitting problem, we propose faster algorithms that construct hashmaps with $0$-unfairness using $2(m-1)$ boundary points when $k=2$ and $k(m-1)(4+\log_2 (3mn))$ boundary points for $k>2$.
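The quantities being controlled are simple pair-counting statistics, and evaluating them empirically for a concrete bucket assignment clarifies the definitions; all names in the sketch are illustrative.

```python
from itertools import combinations

def pairwise_fairness(assignment, groups):
    """Empirical Pr[h(p)=h(q) | p,q in g] per group, where `assignment`
    maps item -> bucket and `groups` maps group id -> list of items."""
    out = {}
    for g, items in groups.items():
        pairs = list(combinations(items, 2))
        same = sum(1 for p, q in pairs if assignment[p] == assignment[q])
        out[g] = same / len(pairs)
    return out

def collision_probability(assignment):
    """Empirical Pr[h(p)=h(q)] over all unordered pairs of items."""
    items = list(assignment)
    pairs = list(combinations(items, 2))
    return sum(1 for p, q in pairs if assignment[p] == assignment[q]) / len(pairs)
```

For four items split evenly over two buckets, $2$ of the $6$ unordered pairs collide, giving $1/3$ rather than the ideal $1/m = 1/2$; the gap is the kind of unfairness the proposed constructions minimize.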