We consider the fundamental problem of constructing fast and small circuits for binary addition. We propose a new algorithm with running time $\mathcal O(n \log_2 n)$ for constructing linear-size $n$-bit adder circuits with a significantly better depth guarantee than previous approaches: our circuits have a depth of at most $\log_2 n + \log_2 \log_2 n + \log_2 \log_2 \log_2 n + \text{const}$, improving upon the previously best circuits by [12], which have a depth of at most $\log_2 n + 8 \sqrt{\log_2 n} + 6 \log_2 \log_2 n + \text{const}$. Hence, we decrease the gap to the lower bound of $\log_2 n + \log_2 \log_2 n + \text{const}$ by [5] significantly, from $\mathcal O (\sqrt{\log_2 n})$ to $\mathcal O(\log_2 \log_2 \log_2 n)$. Our core routine is a new algorithm for constructing a circuit for a single carry bit or, more generally, for an And-Or path, i.e., a Boolean function of the form $t_0 \lor ( t_1 \land (t_2 \lor ( \dots t_{m-1}) \dots ))$. We compute linear-size And-Or path circuits with a depth of at most $\log_2 m + \log_2 \log_2 m + 0.65$ in time $\mathcal O(m \log_2 m)$. These are the first known And-Or path circuits that match the lower bound by [5] up to an additive constant and at the same time have linear size. The previously best And-Or path circuits in terms of depth are worse only by an additive constant, but have a much larger size of $\mathcal O (m \log_2 m)$.
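To make the target function concrete, the following minimal sketch evaluates an And-Or path directly; the straightforward nested evaluation corresponds to a circuit of depth $m - 1$, whereas the circuits described above realize the same function with depth roughly $\log_2 m + \log_2 \log_2 m$ and linear size. The function name and input encoding are illustrative, not taken from the paper.

```python
def and_or_path(t):
    """Evaluate t[0] or (t[1] and (t[2] or (t[3] and ...))) for Boolean inputs t.

    This naive right-to-left evaluation corresponds to a circuit of depth
    len(t) - 1; the construction in the paper computes the same function
    with depth about log2(m) + log2(log2(m)) + 0.65 and linearly many gates.
    """
    out = t[-1]
    for i in range(len(t) - 2, -1, -1):
        out = (t[i] or out) if i % 2 == 0 else (t[i] and out)
    return out
```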
In this contribution we study the formal ability of a multiple-relaxation-times lattice Boltzmann scheme to approximate the isothermal and thermal compressible Navier-Stokes equations with a single particle distribution. More precisely, we consider a total of 12 classical square lattice Boltzmann schemes with prescribed sets of conserved and nonconserved moments. The question is to determine the algebraic expressions of the equilibrium functions for the nonconserved moments and the relaxation parameters associated with each scheme. We compare the fluid equations with the result of the Taylor expansion method at second-order accuracy for two-dimensional examples with a maximum of 17 velocities and for three-dimensional schemes with at most 33 velocities. In some cases, it is not possible to fit the physical model exactly. For several examples, we adjust the Navier-Stokes equations and propose nontrivial expressions for the equilibria.
We address the problem of the best uniform approximation of a continuous function on a convex domain. The approximation is by linear combinations of a finite system of functions (not necessarily a Chebyshev system) under arbitrary linear constraints. By modifying the concept of alternance and the Remez iterative procedure, we present a method that demonstrates its efficiency in numerical problems. A linear rate of convergence is proved under certain favourable assumptions. Special attention is paid to systems of complex exponentials, Gaussian functions, and lacunary algebraic and trigonometric polynomials. Applications to signal processing, linear ODEs, switching dynamical systems, and Markov-Bernstein-type inequalities are considered.
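As a point of reference for the modified alternance-based procedure mentioned above, here is a heavily simplified sketch of the classical Remez exchange for best uniform approximation by ordinary polynomials on an interval; it is an illustration under strong assumptions (Chebyshev system, no constraints, single-point exchange), not the generalized method of the paper, and all names are ours.

```python
import numpy as np

def remez_poly(f, a, b, deg, iters=30, grid=4000):
    """Classical Remez exchange for the best uniform approximation of f on
    [a, b] by polynomials of degree deg (illustrative simplification only)."""
    xs = np.linspace(a, b, grid)
    # initial reference: deg + 2 Chebyshev-like points
    ref = a + (b - a) * 0.5 * (1.0 - np.cos(np.pi * np.arange(deg + 2) / (deg + 1)))
    c, E = np.zeros(deg + 1), 0.0
    for _ in range(iters):
        # solve  sum_j c_j x_i^j + (-1)^i E = f(x_i)  for coefficients c and level E
        A = np.hstack([np.vander(ref, deg + 1, increasing=True),
                       ((-1.0) ** np.arange(deg + 2))[:, None]])
        sol = np.linalg.solve(A, f(ref))
        c, E = sol[:-1], sol[-1]
        err = f(xs) - np.polyval(c[::-1], xs)
        k = int(np.argmax(np.abs(err)))
        if np.abs(err[k]) - np.abs(E) < 1e-12 * max(np.abs(E), 1.0):
            break
        # single-point exchange: swap the new extremum for the nearest
        # reference point carrying the same error sign (keeps the alternation)
        ref_err = f(ref) - np.polyval(c[::-1], ref)
        same = np.where(np.sign(ref_err) == np.sign(err[k]))[0]
        j = same[np.argmin(np.abs(ref[same] - xs[k]))]
        ref[j] = xs[k]
        ref = np.sort(ref)
    return c, float(np.abs(E))
```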
This paper proposes an adaptive numerical method for stochastic delay differential equations (SDDEs) with a non-globally Lipschitz drift term and a non-constant delay, building upon the work of Wei Fang and others. The method adapts the step size based on the growth of the drift term. Differing slightly from the conventional Euler-Maruyama scheme, the method estimates the delay term by substituting for it the numerical solution at the grid point closest to the delayed time from the left. This approach overcomes the difficulty that the delayed time points generally do not coincide with the numerical grid points. The paper proves the convergence of the numerical method for a class of non-globally Lipschitz continuous SDDEs under the assumption that the step-size function satisfies certain conditions.
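The following is a minimal sketch of the kind of scheme described: an Euler-Maruyama step whose size shrinks with the drift, and a delayed argument taken from the already computed value at the grid point closest to $t - \tau(t)$ from the left. All names (drift, diffusion, delay, step_rule, history) are illustrative placeholders, not the paper's notation.

```python
import numpy as np

def adaptive_em_sdde(x0, T, drift, diffusion, delay, step_rule, history, seed=0):
    """Sketch of an adaptive Euler-Maruyama method for the scalar SDDE
        dX(t) = f(X(t), X(t - tau(t))) dt + g(X(t), X(t - tau(t))) dW(t),
    where f may be non-globally Lipschitz.  step_rule(x) returns a step size
    that decreases when the drift grows; the delayed argument is replaced by
    the numerical value at the grid point closest to t - tau(t) from the left."""
    rng = np.random.default_rng(seed)
    ts, xs = [0.0], [x0]
    t, x = 0.0, x0
    while t < T:
        h = min(step_rule(x), T - t)
        td = t - delay(t)
        if td < 0.0:
            x_del = history(td)                        # prescribed initial segment
        else:
            k = np.searchsorted(ts, td, side="right") - 1
            x_del = xs[k]                              # closest grid value to the left of td
        dW = rng.normal(0.0, np.sqrt(h))
        x = x + drift(x, x_del) * h + diffusion(x, x_del) * dW
        t += h
        ts.append(t)
        xs.append(x)
    return np.array(ts), np.array(xs)
```

For instance, one could try the purely illustrative choices drift(x, y) = -x**3 + y (non-globally Lipschitz), diffusion(x, y) = 0.2 * y, delay(t) = 1 + 0.5 * np.sin(t), and step_rule(x) = h0 / (1 + x**2) for some base step size h0.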
We connect the mixing behaviour of random walks over a graph to the power of the local-consistency algorithm for the solution of the corresponding constraint satisfaction problem (CSP). We extend this connection to arbitrary CSPs and their promise variant. In this way, we establish a linear-level (and, thus, optimal) lower bound against the local-consistency algorithm applied to the class of aperiodic promise CSPs. The proof is based on a combination of the probabilistic method for random Erd\H{o}s-R\'enyi hypergraphs and a structural result on the number of fibers (i.e., long chains of hyperedges) in sparse hypergraphs of large girth. As a corollary, we completely classify the power of local consistency for the approximate graph homomorphism problem by establishing that, in the nontrivial cases, the problem has linear width.
This paper considers the problem of minimizing a differentiable function with locally Lipschitz continuous gradient on the algebraic variety of real matrices of upper-bounded rank. This problem is known to enable the formulation of several machine learning and signal processing tasks such as collaborative filtering and signal recovery. Several definitions of stationarity exist for this nonconvex problem. Among them, Bouligand stationarity is the strongest first-order necessary condition for local optimality. This paper proposes a first-order algorithm that combines the well-known projected-projected gradient descent map with a rank reduction mechanism and generates a sequence in the variety whose accumulation points are Bouligand stationary. This algorithm compares favorably with the three other algorithms known in the literature to enjoy this stationarity property, regarding both the typical computational cost per iteration and empirically observed numerical performance. A framework to design hybrid algorithms enjoying the same property is proposed and illustrated through an example.
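As an informal illustration of the kind of iteration considered (not the paper's algorithm, its P2GD map, or its rank-reduction mechanism), one can combine a projected gradient step, realized by a truncated SVD, with a heuristic that drops very small singular values; all names and thresholds below are assumptions.

```python
import numpy as np

def project_rank(X, r):
    """Metric projection onto {X : rank(X) <= r} via truncated SVD."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]

def pgd_with_rank_reduction(grad_f, X0, r, step=1e-2, iters=500, drop_tol=1e-8):
    """Projected gradient descent on the variety of rank-<= r matrices,
    followed in each iteration by a crude rank-reduction heuristic that sets
    singular values below drop_tol * s_max to zero (a stand-in for the
    mechanism analysed in the paper)."""
    X = project_rank(X0, r)
    for _ in range(iters):
        Y = project_rank(X - step * grad_f(X), r)       # projected gradient step
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        if s.size and s[0] > 0:
            s = np.where(s >= drop_tol * s[0], s, 0.0)  # rank reduction
        X = (U * s) @ Vt
    return X
```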
This paper considers both least squares and quasi-maximum likelihood estimation for the recently proposed scalable ARMA model, a parametric infinite-order vector AR model, and establishes the asymptotic normality of both estimators. This makes inference feasible for this computationally efficient model, especially for economic and financial time series. An efficient block coordinate descent algorithm is further introduced to search for the estimates, and a Bayesian information criterion with selection consistency is suggested for model selection. Simulation experiments are conducted to illustrate the finite-sample performance of the proposed methods, and a real application to six macroeconomic indicators illustrates the usefulness of the proposed methodology.
We propose a variational symplectic numerical method for the time integration of dynamical systems derived from the least action principle. We assume a quadratic internal interpolation of the state between two time steps and approximate the action over one time step by Simpson's quadrature formula. The resulting scheme is nonlinear and symplectic. First numerical experiments, performed on a nonlinear pendulum, show very good convergence properties.
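In notation not taken from the paper, the discrete action on one step $[t_k, t_k + h]$ can be sketched as follows: the state is the quadratic interpolant through $q_k$, an internal midpoint value $q_{k+1/2}$, and $q_{k+1}$, and the action integral of the Lagrangian $L(q, \dot q)$ is replaced by Simpson's rule,
\[
S_k \;\approx\; \frac{h}{6}\Big( L\big(q_k, \dot q_k\big) + 4\,L\big(q_{k+1/2}, \dot q_{k+1/2}\big) + L\big(q_{k+1}, \dot q_{k+1}\big) \Big),
\qquad
\dot q_k = \frac{-3q_k + 4q_{k+1/2} - q_{k+1}}{h},\quad
\dot q_{k+1/2} = \frac{q_{k+1} - q_k}{h},\quad
\dot q_{k+1} = \frac{q_k - 4q_{k+1/2} + 3q_{k+1}}{h},
\]
where the $\dot q$ values are the exact derivatives of the quadratic interpolant. Requiring the total discrete action $\sum_k S_k$ to be stationary with respect to the internal values $q_{k+1/2}$ and the grid values $q_k$ then yields a nonlinear update of this variational type; the exact parametrization used in the paper may differ.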
Stable infiniteness, strong finite witnessability, and smoothness are model-theoretic properties relevant to theory combination in satisfiability modulo theories. Theories that are strongly finitely witnessable and smooth are called strongly polite and can be effectively combined with other theories. Toledo, Zohar, and Barrett conjectured that stably infinite and strongly finitely witnessable theories are smooth and therefore strongly polite. They called counterexamples to this conjecture unicorn theories, as their existence seemed unlikely. We prove that, indeed, unicorns do not exist. We also prove versions of the L\"owenheim-Skolem theorem and the {\L}o\'s-Vaught test for many-sorted logic.
This paper studies the influence of probabilism and non-determinism on some quantitative aspect X of the execution of a system modeled as a Markov decision process (MDP). To this end, the novel notion of demonic variance is introduced: for a random variable X in an MDP M, it is defined as 1/2 times the maximal expected squared distance between the values of X in two independent executions of M in which also the non-deterministic choices are resolved independently by two distinct schedulers. It is shown that the demonic variance is between 1 and 2 times as large as the maximal variance of X in M that can be achieved by a single scheduler. This allows defining a non-determinism score for M and X, measuring how strongly the difference of X between two executions of M can be influenced by the non-deterministic choices. Properties of MDPs M with extremal values of the non-determinism score are established. Further, the algorithmic problems of computing the maximal variance and the demonic variance are investigated for two random variables, namely weighted reachability and accumulated rewards. In the process, the structure of schedulers maximizing the variance and of scheduler pairs realizing the demonic variance is also analyzed.
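In symbols (notation ours, not necessarily the paper's): writing $X^{\mathfrak S}$ for the value of $X$ under scheduler $\mathfrak S$ and taking the two executions to be independent, the demonic variance and the comparison stated above read
\[
\mathbb{V}^{\mathrm{dem}}_{\mathcal M}(X) \;=\; \sup_{\mathfrak S_1, \mathfrak S_2} \tfrac{1}{2}\,\mathbb E\big[\big(X^{\mathfrak S_1} - X^{\mathfrak S_2}\big)^2\big],
\qquad
\sup_{\mathfrak S} \mathbb{V}^{\mathfrak S}_{\mathcal M}(X) \;\le\; \mathbb{V}^{\mathrm{dem}}_{\mathcal M}(X) \;\le\; 2\, \sup_{\mathfrak S} \mathbb{V}^{\mathfrak S}_{\mathcal M}(X).
\]
The factor $\tfrac12$ is natural because, for a single scheduler $\mathfrak S_1 = \mathfrak S_2 = \mathfrak S$ and independent executions, $\tfrac12\,\mathbb E[(X^{\mathfrak S_1} - X^{\mathfrak S_2})^2]$ equals the ordinary variance $\mathbb{V}^{\mathfrak S}_{\mathcal M}(X)$.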
Due to their inherent capability of semantically aligning aspects with their context words, attention mechanisms and Convolutional Neural Networks (CNNs) are widely applied for aspect-based sentiment classification. However, these models lack a mechanism to account for relevant syntactical constraints and long-range word dependencies, and hence may mistakenly recognize syntactically irrelevant contextual words as clues for judging aspect sentiment. To tackle this problem, we propose to build a Graph Convolutional Network (GCN) over the dependency tree of a sentence to exploit syntactical information and word dependencies. Based on it, a novel aspect-specific sentiment classification framework is developed. Experiments on three benchmark collections illustrate that our proposed model has comparable effectiveness to a range of state-of-the-art models, and further demonstrate that both syntactical information and long-range word dependencies are properly captured by the graph convolution structure.
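A minimal sketch of the building block meant here, a generic graph-convolution layer over the sentence's dependency graph, follows; it is not the paper's exact architecture, and all names are illustrative.

```python
import torch

def gcn_layer(H, A, W):
    """One graph convolution over a sentence's dependency graph.

    H: (n_words, d_in)   word representations from the previous layer,
    A: (n_words, n_words) adjacency matrix of the dependency tree,
       made symmetric and including self-loops,
    W: (d_in, d_out)     learnable weight matrix.

    Each word aggregates its syntactic neighbours (normalised by degree),
    which lets sentiment clues reach the aspect along dependency edges
    rather than only through linear proximity in the sentence.
    """
    deg = A.sum(dim=1, keepdim=True).clamp(min=1.0)
    return torch.relu(((A @ H) / deg) @ W)
```

Stacking a few such layers over contextual word representations (e.g. from a BiLSTM), then masking out non-aspect words and attending back over the context, is the general recipe that aspect-specific GCN models of this kind follow.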