The problem of constructing optimal factoring automata arises in the context of unification factoring for the efficient execution of logic programs. Given an ordered set of $n$ strings of length $m$, the problem is to construct a trie-like tree structure of minimum size in which the leaves in left-to-right order represent the input strings in the given order. In contrast to standard tries, the order in which the characters of a string are encountered can differ between root-to-leaf paths. Dawson et al. [ACM Trans. Program. Lang. Syst. 18(5):528--563, 1996] gave an algorithm that solves the problem in time $O(n^2 m (n+m))$. In this paper, we present an improved algorithm with running time $O(n^2m)$.
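As a point of reference for the size measure being minimized, the following is a minimal sketch of the standard-trie baseline, in which the character order is fixed; the factoring automata studied in the paper generalize this structure by allowing the character order to vary per path, which can yield smaller trees. This is an illustration only, not the paper's algorithm.

```python
def build_trie(strings):
    """Build a standard fixed-order trie as nested dicts."""
    root = {}
    for s in strings:
        node = root
        for ch in s:
            node = node.setdefault(ch, {})
        node["$"] = {}  # leaf marker terminating the string
    return root

def trie_size(node):
    """Count the nodes of the trie, a simple size measure."""
    return 1 + sum(trie_size(child) for child in node.values())

t = build_trie(["abc", "abd", "acd"])
print(trie_size(t))  # node count of the fixed-order trie
```

A factoring automaton may inspect, say, the third character before the second on some branch, merging subtrees that the fixed-order trie keeps separate.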
We propose a coefficient that measures dependence in paired samples of functions. It has properties similar to the Pearson correlation, but differs in two significant ways: 1) it is designed to measure dependence between curves, and 2) it focuses only on extreme curves. The new coefficient is derived within the framework of regular variation in Banach spaces. A consistent estimator is proposed and justified by an asymptotic analysis and a simulation study. The usefulness of the new coefficient is illustrated on financial and climate functional data.
For boundary value problems of elliptic equations with variable coefficients describing the physical field distribution in inhomogeneous media, the Levi function can represent the solution in terms of volume and surface potentials, with the drawback that the volume potential involved in the solution expression incurs heavy computational costs, in addition to requiring the solvability of the integral equations with respect to the density pair. We introduce a modified integral expression for the solution to an elliptic equation in divergence form within the Levi function framework. The well-posedness of the linear integral system with respect to the density functions to be determined is rigorously proved. Based on the singularity decomposition of the Levi function, we propose two schemes to deal with the volume integrals so that the density functions can be computed efficiently. One method is an adaptive discretization scheme (ADS) for computing the integrals with continuous integrands, leading to uniform accuracy of the integrals in the whole domain and consequently to efficient computation of the density functions. The other is the dual reciprocity method (DRM), a meshless approach that converts the volume integrals into equivalent boundary integrals by expressing the volume density as a combination of radial basis functions determined by the interior grids. The proposed schemes are shown numerically to have satisfactory computational costs. Numerical examples in two and three dimensions are presented to show the validity of the proposed schemes.
Many unconventional computing models, including some that appear to be quite different from traditional ones such as Turing machines, happen to characterise either the complexity class P or PSPACE when working in deterministic polynomial time (and in the maximally parallel way, where this applies). We discuss variants of cellular automata and membrane systems that escape this dichotomy and characterise intermediate complexity classes, usually defined in terms of Turing machines with oracles, as well as some possible reasons why this happens.
This work is concerned with the construction and analysis of structure-preserving Galerkin methods for computing the dynamics of rotating Bose-Einstein condensates (BECs) based on the Gross-Pitaevskii equation with angular momentum rotation. Due to the presence of the rotation term, constructing finite element methods (FEMs) that preserve both mass and energy remains an unresolved issue, particularly in the context of nonconforming FEMs. Furthermore, in comparison to existing works, we provide a comprehensive convergence analysis, offering a thorough demonstration of the methods' optimal and high-order convergence properties. Finally, extensive numerical results are presented to verify the theoretical analysis of the structure-preserving numerical method for rotating BECs, and the behavior of the quantized vortex lattice is scrutinized through a series of numerical tests.
Score-based diffusion models (SDMs) offer a flexible approach to sampling from the posterior distribution in a variety of Bayesian inverse problems. In the literature, the prior score is utilized to sample from the posterior by different methods that require multiple evaluations of the forward mapping in order to generate a single posterior sample. These methods are often designed with the objective of enabling the direct use of the unconditional prior score and, therefore, task-independent training. In this paper, we focus on linear inverse problems in which evaluation of the forward mapping is computationally expensive and frequent posterior sampling is required for new measurement data, such as in medical imaging. We demonstrate that the evaluation of the forward mapping can be entirely bypassed during posterior sample generation. Instead, without introducing any error, the computational effort can be shifted to an offline task of training the score of a specific diffusion-like random process. In particular, the training is task-dependent, requiring information about the forward mapping but not about the measurement data. It is shown that the conditional score corresponding to the posterior can be obtained from the auxiliary score by suitable affine transformations. We prove that this observation generalizes to the recently introduced framework of infinite-dimensional diffusion models and provide a numerical analysis of the method. Moreover, we validate our findings with numerical experiments.
Approximating invariant subspaces of generalized eigenvalue problems (GEPs) is a fundamental computational problem at the core of machine learning and scientific computing. It lies, for example, at the root of Principal Component Analysis (PCA) for dimensionality reduction, data visualization, and noise filtering, and of Density Functional Theory (DFT), arguably the most popular method to calculate the electronic structure of materials. For a Hermitian definite GEP $HC=SC\Lambda$, let $\Pi_k$ be the true spectral projector on the invariant subspace that is associated with the $k$ smallest (or largest) eigenvalues. Given $H,$ $S$, an integer $k$, and accuracy $\varepsilon\in(0,1)$, we show that we can compute a matrix $\widetilde\Pi_k$ such that $\lVert\Pi_k-\widetilde\Pi_k\rVert_2\leq \varepsilon$, in $O\left( n^{\omega+\eta}\mathrm{polylog}(n,\varepsilon^{-1},\kappa(S),\mathrm{gap}_k^{-1}) \right)$ bit operations in the floating point model with probability $1-1/n$. Here, $\eta>0$ is arbitrarily small, $\omega\lesssim 2.372$ is the matrix multiplication exponent, $\kappa(S)=\lVert S\rVert_2\lVert S^{-1}\rVert_2$, and $\mathrm{gap}_k$ is the gap between eigenvalues $k$ and $k+1$. To the best of our knowledge, this is the first end-to-end analysis achieving such "forward-error" approximation guarantees with nearly $O(n^{\omega+\eta})$ bit complexity, improving on classical $\widetilde O(n^3)$ eigensolvers, even for the regular case $(S=I)$. Our methods rely on a new $O(n^{\omega+\eta})$ stability analysis for the Cholesky factorization, and a new smoothed analysis for computing spectral gaps, which may be of independent interest. Ultimately, we obtain new matrix multiplication-type bit complexity upper bounds for PCA problems, including classical PCA and (randomized) low-rank approximation.
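To fix the objects involved, the following is a minimal dense-solver sketch (not the paper's fast algorithm) of the spectral projector for a Hermitian definite GEP $HC=SC\Lambda$: the eigenvectors for the $k$ smallest eigenvalues span the invariant subspace, and one conventional choice of $\Pi_k$ is the orthogonal projector onto that span.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
n, k = 6, 2
A = rng.standard_normal((n, n))
H = (A + A.T) / 2                     # Hermitian (here real symmetric)
B = rng.standard_normal((n, n))
S = B @ B.T + n * np.eye(n)           # symmetric positive definite

# Generalized eigenproblem H C = S C diag(w), with C.T @ S @ C = I
w, C = eigh(H, S)
Ck = C[:, :k]                          # eigenvectors of the k smallest eigenvalues

# Orthogonal projector onto span(Ck): P = Q Q^T for an orthonormal basis Q
Q, _ = np.linalg.qr(Ck)
Pk = Q @ Q.T
```

The projector satisfies $\Pi_k^2=\Pi_k=\Pi_k^*$ and has rank $k$; this $O(n^3)$ reference computation is exactly what the paper's $O(n^{\omega+\eta})$-bit-operation analysis improves on.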
This work proposes a novel variational approximation of partial differential equations on moving geometries determined by explicit boundary representations. The benefits of the proposed formulation are the ability to handle large displacements of explicitly represented domain boundaries without generating body-fitted meshes and remeshing techniques. For the space discretization, we use a background mesh and an unfitted method that relies on integration on cut cells only. We perform this intersection by using clipping algorithms. To deal with the mesh movement, we pullback the equations to a reference configuration (the spatial mesh at the initial time slab times the time interval) that is constant in time. This way, the geometrical intersection algorithm is only required in 3D, another key property of the proposed scheme. At the end of the time slab, we compute the deformed mesh, intersect the deformed boundary with the background mesh, and consider an exact transfer operator between meshes to compute jump terms in the time discontinuous Galerkin integration. The transfer is also computed using geometrical intersection algorithms. We demonstrate the applicability of the method to fluid problems around rotating (2D and 3D) geometries described by oriented boundary meshes. We also provide a set of numerical experiments that show the optimal convergence of the method.
Classical algorithms for market equilibrium computation such as proportional response dynamics face scalability issues in Internet-based applications such as auctions, recommender systems, and fair division, despite having an almost linear runtime in the product of the numbers of buyers and goods. In this work, we provide the first quantum algorithm for market equilibrium computation with sublinear runtime. Our algorithm provides a polynomial speedup in terms of the product of the number of buyers and goods while reaching the same optimization objective value as the classical algorithm. Numerical simulations of a system with 16384 buyers and goods support our theoretical result that our quantum algorithm provides a significant speedup.
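For concreteness, the classical baseline mentioned above can be sketched for a linear Fisher market: each buyer splits bids across goods, prices are the total spend on each good, allocations are proportional to bids, and each buyer then respends their budget in proportion to the utility received. This is an illustrative implementation of the standard dynamics, not the paper's quantum algorithm.

```python
import numpy as np

def proportional_response(u, budgets, iters=200):
    """Proportional response dynamics for a linear Fisher market.

    u[i, j]: buyer i's valuation of good j; budgets[i]: buyer i's budget.
    Returns prices p and the allocation matrix x (rows: buyers).
    """
    n_buyers, n_goods = u.shape
    # Start from uniform bids summing to each buyer's budget.
    b = np.outer(budgets, np.ones(n_goods)) / n_goods
    for _ in range(iters):
        p = b.sum(axis=0)                  # price = total spend on each good
        x = b / p                          # allocation proportional to bids
        util = (u * x).sum(axis=1)         # utility each buyer receives
        # Respend the budget in proportion to utility contributed per good.
        b = budgets[:, None] * (u * x) / util[:, None]
    return p, x

# Two buyers with mirrored preferences and unit budgets.
u = np.array([[2.0, 1.0], [1.0, 2.0]])
p, x = proportional_response(u, np.array([1.0, 1.0]))
```

In this symmetric example the dynamics converge to equal prices, with each buyer allocated essentially all of their preferred good; the per-iteration cost is proportional to the number of buyer-good pairs, which is the bottleneck the quantum algorithm targets.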
Characteristic formulae give a complete logical description of the behaviour of processes modulo some chosen notion of behavioural semantics. They allow one to reduce equivalence or preorder checking to model checking, and are exactly the formulae in the modal logics characterizing classic behavioural equivalences and preorders for which model checking can be reduced to equivalence or preorder checking. This paper studies the complexity of determining whether a formula is characteristic for some finite, loop-free process in each of the logics providing modal characterizations of the simulation-based semantics in van Glabbeek's branching-time spectrum. Since characteristic formulae in each of those logics are exactly the consistent and prime ones, it presents complexity results for the satisfiability and primality problems, and investigates the boundary between modal logics for which those problems can be solved in polynomial time and those for which they become computationally hard. Amongst other contributions, this article also studies the complexity of constructing characteristic formulae in the modal logics characterizing simulation-based semantics, both when such formulae are presented in explicit form and via systems of equations.
This work presents several new results concerning the analysis of the convergence of binary, univariate, linear subdivision schemes, all related to the {\it contractivity factor} of a convergent scheme. First, we prove that a convergent scheme cannot have a contractivity factor lower than $\frac{1}{2}$. Since a lower contractivity factor implies faster convergence of the scheme, schemes with contractivity factor $\frac{1}{2}$, such as those generating spline functions, have the optimal convergence rate. Additionally, we provide further insights and conditions for the convergence of linear schemes and demonstrate their applicability in an improved algorithm for determining the convergence of such subdivision schemes.
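The contractivity factor can be observed numerically on a classical example: Chaikin's corner-cutting scheme (which generates quadratic splines) contracts the maximum difference of consecutive control points by a factor of $\frac{1}{2}$ per refinement step, matching the optimal rate stated above. The sketch below is an illustration, not the paper's algorithm.

```python
import numpy as np

def chaikin(points):
    """One step of Chaikin's corner-cutting subdivision (open polygon)."""
    p = np.asarray(points, dtype=float)
    q = np.empty((2 * (len(p) - 1), p.shape[1]))
    q[0::2] = 0.75 * p[:-1] + 0.25 * p[1:]   # point at parameter 1/4
    q[1::2] = 0.25 * p[:-1] + 0.75 * p[1:]   # point at parameter 3/4
    return q

pts = np.array([[0.0, 0.0], [1.0, 2.0], [3.0, -1.0], [4.0, 1.0]])
ratios = []
for _ in range(5):
    new = chaikin(pts)
    d_old = np.abs(np.diff(pts, axis=0)).max()   # sup-norm of differences
    d_new = np.abs(np.diff(new, axis=0)).max()
    ratios.append(d_new / d_old)                 # observed contractivity
    pts = new
```

Each new difference is either $\frac{1}{2}d_i$ or $\frac{1}{4}(d_i+d_{i+1})$, so the observed per-step ratio never exceeds $\frac{1}{2}$, the bound the paper proves no convergent scheme can beat.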