It is known that when the diffuse interface thickness $\epsilon$ vanishes, the sharp interface limit of the stochastic reaction-diffusion equation is formally a stochastic geometric flow. To capture and simulate such a geometric flow, it is crucial to develop numerical approximations whose error bounds depend on $\frac{1}{\epsilon}$ polynomially. However, due to the loss of the spectral estimate for the linearized stochastic reaction-diffusion equation, obtaining such error bounds for numerical approximations has been an open problem. In this paper, we solve this weak error bound problem for stochastic reaction-diffusion equations near the sharp interface limit. We first introduce a regularized problem which enjoys exponential ergodicity. Then we present a regularity analysis of the regularized Kolmogorov and Poisson equations whose bounds depend on $\frac{1}{\epsilon}$ only polynomially. Furthermore, we establish the desired weak error bound. This phenomenon can be viewed as a regularization effect of noise on the numerical approximation of stochastic partial differential equations (SPDEs). As a by-product, a central limit theorem for the weak approximation is shown near the sharp interface limit. Our method of proof can be extended to a number of other spatial and temporal numerical approximations for semilinear SPDEs.
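For concreteness, a prototypical instance of such an equation (our choice of illustration; the abstract does not fix a specific model) is the stochastic Allen-Cahn equation
$$\mathrm{d}u^{\epsilon} = \Big(\Delta u^{\epsilon} + \frac{1}{\epsilon^{2}}\big(u^{\epsilon} - (u^{\epsilon})^{3}\big)\Big)\,\mathrm{d}t + \mathrm{d}W(t),$$
whose sharp interface limit as $\epsilon \to 0$ is formally a stochastically perturbed mean curvature flow.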
Neural operators such as the Fourier Neural Operator (FNO) have been shown to provide resolution-independent deep learning models that can learn mappings between function spaces. For example, an initial condition can be mapped to the solution of a partial differential equation (PDE) at a future time step using a neural operator. Despite the popularity of neural operators, their use for predicting solution functions over a domain given only data on the boundary (such as a spatially varying Dirichlet boundary condition) remains unexplored. In this paper, we refer to such problems as boundary-to-domain problems; they have a wide range of applications in areas such as fluid mechanics, solid mechanics, and heat transfer. We present a novel FNO-based architecture, named the Lifting Product FNO (LP-FNO), which can map arbitrary boundary functions defined on the lower-dimensional boundary to a solution in the entire domain. Specifically, two FNOs defined on the lower-dimensional boundary are lifted into the higher-dimensional domain using our proposed lifting product layer. We demonstrate the efficacy and resolution independence of the proposed LP-FNO for the 2D Poisson equation.
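As a rough illustration of how two boundary FNOs might be fused into a domain field, here is a minimal sketch of a lifting-product-style layer; the class name, the einsum-based channel mixing, and all shapes below are our assumptions, not the paper's implementation.

```python
import torch


class LiftingProduct2D(torch.nn.Module):
    """Hypothetical lifting-product layer (a sketch, not the paper's code):
    features produced by two 1D FNOs on the boundary are combined into a
    2D field by an outer product over the spatial axes, with a learned
    mixing tensor over channels."""

    def __init__(self, channels: int):
        super().__init__()
        self.mix = torch.nn.Parameter(
            torch.randn(channels, channels, channels) / channels
        )

    def forward(self, fx: torch.Tensor, fy: torch.Tensor) -> torch.Tensor:
        # fx: (batch, c, nx) features from the FNO along one boundary direction
        # fy: (batch, c, ny) features from the FNO along the other direction
        # -> (batch, c, nx, ny) field over the 2D domain
        return torch.einsum("bix,bjy,ijo->boxy", fx, fy, self.mix)
```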
Bayesian multidimensional scaling (BMDS) is a probabilistic dimension reduction tool that allows one to model and visualize data consisting of dissimilarities between pairs of objects. Although BMDS has proven useful within, e.g., Bayesian phylogenetic inference, its likelihood and gradient calculations require a burdensome order of $N^2$ floating-point operations, where $N$ is the number of data points. Thus, BMDS becomes impractical as $N$ grows large. We propose and compare two sparse versions of BMDS (sBMDS) that apply log-likelihood and gradient computations to subsets of the observed dissimilarity matrix. Landmark sBMDS (L-sBMDS) extracts columns, while banded sBMDS (B-sBMDS) extracts diagonals of the data. These sparse variants let one specify a time complexity between $N$ and $N^2$. Under simplified settings, we prove posterior consistency for subsampled distance matrices. Through simulations, we examine the accuracy and computational efficiency of all models using both the Metropolis-Hastings and Hamiltonian Monte Carlo algorithms. We observe approximately 3-fold, 10-fold, and 40-fold speedups with negligible loss of accuracy when applying the sBMDS likelihoods and gradients to 500, 1,000, and 5,000 data points with 50 bands (landmarks); these speedups only increase with the size of the data considered. Finally, we apply the sBMDS variants to the phylogeographic modeling of multiple influenza subtypes to better understand how these strains spread through global air transportation networks.
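A minimal sketch of the banded idea, assuming a plain Gaussian error model (the actual sBMDS likelihood may use a different error distribution; the function name is ours):

```python
import numpy as np


def banded_sbmds_loglik(X, D, bands, sigma2):
    """Sketch of a banded sBMDS-style log-likelihood: only the first
    `bands` off-diagonals of the dissimilarity matrix D contribute, so
    the cost is O(N * bands) rather than O(N^2). X holds the latent
    low-dimensional coordinates, one row per object."""
    n = X.shape[0]
    ll = 0.0
    for k in range(1, bands + 1):              # k-th off-diagonal of D
        i = np.arange(n - k)
        latent = np.linalg.norm(X[i] - X[i + k], axis=1)
        resid = D[i, i + k] - latent
        ll += -0.5 * np.sum(resid**2) / sigma2
        ll += -0.5 * (n - k) * np.log(2.0 * np.pi * sigma2)
    return ll
```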
We consider the problem of low-rank rectangular matrix completion in the regime where the matrix $M$ of size $n\times m$ is ``long'', i.e., the aspect ratio $m/n$ diverges to infinity. Such matrices are of particular interest in the study of tensor completion, where they arise from the unfolding of a low-rank tensor. In the case where the sampling probability is $\frac{d}{\sqrt{mn}}$, we propose a new spectral algorithm for recovering the singular values and left singular vectors of the original matrix $M$ based on a variant of the standard non-backtracking operator of a suitably defined bipartite weighted random graph, which we call a \textit{non-backtracking wedge operator}. When $d$ is above a Kesten-Stigum-type sampling threshold, our algorithm recovers a correlated version of the singular value decomposition of $M$ with quantifiable error bounds. This is the first result in the regime of bounded $d$ for weak recovery and the first for weak consistency when $d\to\infty$ arbitrarily slowly without any polylog factors. As an application, for low-CP-rank orthogonal $k$-tensor completion, we efficiently achieve weak recovery with sample size $O(n^{k/2})$ and weak consistency with sample size $\omega(n^{k/2})$. A similar result is obtained for low-multilinear-rank tensor completion with $O(n^{k/2})$ samples.
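For orientation (a standard definition, not the paper's wedge variant), the non-backtracking operator $B$ of a graph acts on directed edges via
$$B_{(u \to v),(v' \to w)} = \mathbb{1}\{v = v'\}\,\mathbb{1}\{w \neq u\},$$
i.e., it propagates mass along walks that never immediately reverse an edge; the non-backtracking wedge operator adapts this idea to the bipartite weighted random graph built from the observed entries.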
In various stereological problems, an $n$-dimensional convex body is intersected with an $(n-1)$-dimensional Isotropic Uniformly Random (IUR) hyperplane. In this paper, the cumulative distribution function associated with the $(n-1)$-dimensional volume of such a random section is studied. This distribution is known as the chord length distribution in the planar case and as the cross-section area distribution in the spatial case. For various classes of convex bodies, it is shown that these distribution functions are absolutely continuous with respect to Lebesgue measure. A Monte Carlo simulation scheme is proposed for approximating the corresponding probability density functions.
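As a minimal illustration of such a scheme in the planar case, the chord length distribution of a disk can be sampled directly: by symmetry, an IUR line is determined by a signed distance from the centre that is uniform on $[-r, r]$. The function below is our sketch, not the paper's proposed scheme.

```python
import numpy as np


def chord_lengths_disk(n_samples, radius=1.0, seed=None):
    """Monte Carlo sketch of the chord length distribution of a disk
    under IUR lines: the direction can be fixed by symmetry, and the
    signed distance of the line from the centre is uniform on [-r, r]."""
    rng = np.random.default_rng(seed)
    p = rng.uniform(-radius, radius, n_samples)   # offset of the random line
    return 2.0 * np.sqrt(radius**2 - p**2)        # resulting chord length


# A histogram of chord_lengths_disk(10**6) approximates the density
# f(l) = l / (2r * sqrt(4r^2 - l^2)) on (0, 2r).
```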
Learning unknown stochastic differential equations (SDEs) from observed data is a significant and challenging task with applications in various fields. Current approaches often use neural networks to represent the drift and diffusion functions and construct a likelihood-based loss, obtained by approximating the transition density, to train these networks. However, these methods often rely on one-step stochastic numerical schemes, necessitating data with sufficiently high time resolution. In this paper, we introduce novel approximations to the transition density of the parameterized SDE: a Gaussian density approximation inspired by the random perturbation theory of dynamical systems, and its extension, the dynamical Gaussian mixture approximation (DynGMA). Benefiting from the robust density approximation, our method exhibits superior accuracy compared to baseline methods in learning the fully unknown drift and diffusion functions and in computing the invariant distribution from trajectory data. Moreover, it can handle trajectory data with low time resolution and variable, even uncontrollable, time step sizes, such as data generated from Gillespie's stochastic simulations. We conduct several experiments across various scenarios to verify the advantages and robustness of the proposed method.
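A minimal sketch of the baseline one-step Gaussian approximation that such methods build upon (function names and shapes are ours; the paper's approximation derived from random perturbation theory, and its mixture extension, are more refined):

```python
import torch


def gaussian_transition_logpdf(x0, x1, h, drift, diffusion):
    """One-step Gaussian approximation of an SDE transition density,
    p(x1 | x0) ~ N(x0 + h * f(x0), h * sigma(x0) sigma(x0)^T).
    `drift` and `diffusion` are callables (e.g., neural networks)
    returning f(x0) of shape (..., d) and sigma(x0) of shape (..., d, d)."""
    mean = x0 + h * drift(x0)                  # Euler-Maruyama mean
    sig = diffusion(x0)                        # diffusion matrix at x0
    cov = h * sig @ sig.transpose(-1, -2)      # one-step covariance
    dist = torch.distributions.MultivariateNormal(mean, covariance_matrix=cov)
    return dist.log_prob(x1)                   # maximize over observed pairs
```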
Let $G$ be a group with undecidable domino problem (such as $\mathbb{Z}^2$). We prove that all nontrivial dynamical properties of sofic $G$-subshifts are undecidable, that the analogous result fails for SFTs, and an undecidability result for dynamical properties of $G$-SFTs in the spirit of the Adian-Rabin theorem. For amenable $G$ we prove that topological entropy is not computable from presentations of SFTs, together with a more general result for dynamical invariants taking values in partially ordered sets.
Complex conjugate matrix equations (CCME) have attracted the interest of many researchers owing to their roles in computation and in antilinear systems. Existing research is dominated by time-invariant solution methods and lacks theory for solving the time-variant version. Moreover, artificial neural networks have rarely been studied for solving CCME. In this paper, starting from the earliest CCME, we apply zeroing neural dynamics (ZND) to solve its time-variant version. First, vectorization and the Kronecker product in the complex field are defined uniformly. Second, the Con-CZND1 and Con-CZND2 models are proposed, and their convergence and effectiveness are proved theoretically. Third, three numerical experiments are designed to illustrate the effectiveness of the two models, compare their differences, highlight the significance of neural dynamics in the complex field, and refine the theory related to ZND.
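The general ZND design recipe can be sketched as follows (our illustration on a representative time-variant CCME, not necessarily the exact equation treated in the paper). Given $A(t)X(t) + B(t)\overline{X(t)} = C(t)$, one defines an error function and imposes exponential decay:
$$E(t) := A(t)X(t) + B(t)\overline{X(t)} - C(t), \qquad \dot{E}(t) = -\gamma\,\Phi\big(E(t)\big), \quad \gamma > 0,$$
where $\Phi$ is an odd, monotonically increasing activation function; solving the resulting implicit dynamics for $\dot{X}(t)$ yields the neural model, and $E(t) \to 0$ drives $X(t)$ to the time-variant solution.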
We consider Newton's method for finding zeros of mappings from a manifold $\mathcal{X}$ into a vector bundle $\mathcal{E}$. In this setting, a connection on $\mathcal{E}$ is required to render the Newton equation well defined, and a retraction on $\mathcal{X}$ is needed to compute a Newton update. We discuss local convergence in terms of suitable differentiability concepts, using a Banach space variant of a Riemannian distance. We also carry over an affine covariant damping strategy to our setting. Finally, we discuss some applications of our approach, namely finding fixed points of vector fields, solving variational problems on manifolds, and finding critical points of functionals.
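In the notation suggested by the abstract, one Newton step for a mapping $F\colon \mathcal{X} \to \mathcal{E}$ reads
$$\nabla F(x_k)\,\xi_k = -F(x_k), \qquad x_{k+1} = R_{x_k}(\xi_k),$$
where $\nabla F(x_k)\colon T_{x_k}\mathcal{X} \to \mathcal{E}_{x_k}$ is the covariant derivative induced by the chosen connection on $\mathcal{E}$, and $R$ is the retraction on $\mathcal{X}$.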
We prove a characterization of first-order string-to-string transductions via $\lambda$-terms typed in non-commutative affine logic that compute with Church encodings, extending the analogous known characterization of star-free languages. We show that every first-order transduction can be computed by a $\lambda$-term using a known Krohn-Rhodes-style decomposition lemma. The converse direction is given by compiling $\lambda$-terms into two-way reversible planar transducers. The soundness of this translation involves showing that the transition functions of these transducers live in a monoidal closed category of diagrams in which purely affine $\lambda$-terms can be interpreted. One challenge is that the unit of the tensor product in the category in question is not a terminal object. As a result, our interpretation does not identify $\beta$-equivalent terms, but it does turn $\beta$-reductions into inequalities in a poset-enrichment of the category of diagrams.
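For context, the standard Church encoding of strings over, say, $\Sigma = \{a, b\}$ types a word as
$$\mathrm{Str}_{\Sigma} = \forall \alpha.\ (\alpha \to \alpha) \to (\alpha \to \alpha) \to \alpha \to \alpha,$$
so that, e.g., the word $abb$ becomes $\lambda a.\,\lambda b.\,\lambda e.\ a\,(b\,(b\,e))$; a string-to-string function is then computed by a $\lambda$-term mapping such encodings to such encodings.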
We show that it is undecidable whether a system of linear equations over the Laurent polynomial ring $\mathbb{Z}[X^{\pm}]$ admits solutions where a specified subset of variables take values in the set of monomials $\{X^z \mid z \in \mathbb{Z}\}$. In particular, we construct a finitely presented $\mathbb{Z}[X^{\pm}]$-module in which it is undecidable whether a linear equation $X^{z_1} \boldsymbol{f}_1 + \cdots + X^{z_n} \boldsymbol{f}_n = \boldsymbol{f}_0$ has solutions $z_1, \ldots, z_n \in \mathbb{Z}$. This contrasts with the decidability of the case $n = 1$, which can be deduced from Noskov's Lemma. We apply this result to settle a number of problems in computational group theory. We show that it is undecidable whether a system of equations has solutions in the wreath product $\mathbb{Z} \wr \mathbb{Z}$, providing a negative answer to an open problem of Kharlampovich, L\'{o}pez and Miasnikov (2020). We show that there exists a finitely generated abelian-by-cyclic group in which the problem of solving a single quadratic equation is undecidable. We also construct a finitely generated abelian-by-cyclic group, different from that of Mishchenko and Treier (2017), in which the Knapsack Problem is undecidable. In contrast, we show that the problem of Coset Intersection is decidable in all finitely generated abelian-by-cyclic groups.