We investigate the global existence of solutions for the stochastic fractional nonlinear Schr\"odinger equation with radially symmetric initial data in a suitable energy space $H^{\alpha}$. Using a variational principle, we demonstrate that the stochastic fractional nonlinear Schr\"odinger equation in the Stratonovich sense forms an infinite-dimensional stochastic Hamiltonian system whose phase flow preserves symplecticity. From the perspective of symplectic geometry, we develop a structure-preserving algorithm for the equation and establish that the stochastic midpoint scheme satisfies the corresponding symplectic law in the discrete sense. Since the midpoint scheme is implicit, we also develop a more efficient mass-preserving splitting scheme, whose convergence order is shown to be $1$. Two numerical examples are presented to validate the theory.
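In the deterministic, finite-dimensional analogue, symplecticity of the midpoint scheme can be checked directly: for a linear Hamiltonian system the one-step map is a Cayley transform whose Jacobian has determinant one, and in two dimensions area preservation is equivalent to symplecticity. A minimal sketch for the harmonic oscillator, not the stochastic fractional equation itself:

```python
import numpy as np

# Implicit midpoint step for the harmonic oscillator H(q, p) = (q^2 + p^2)/2,
# written as z' = J z with z = (q, p) and J = [[0, 1], [-1, 0]].
# The one-step map is the Cayley transform
#   z_{n+1} = (I - h/2 J)^{-1} (I + h/2 J) z_n.
def midpoint_matrix(h):
    J = np.array([[0.0, 1.0], [-1.0, 0.0]])
    I = np.eye(2)
    return np.linalg.solve(I - 0.5 * h * J, I + 0.5 * h * J)

h = 0.1
A = midpoint_matrix(h)
# In two dimensions, symplecticity is equivalent to det(A) = 1 (area preservation).
print(np.linalg.det(A))
```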
For factor analysis, many estimators, beginning with the maximum likelihood estimator, have been developed, and the statistical properties of most of them are well understood. In the early 2000s, a new estimator based on matrix factorization, called Matrix Decomposition Factor Analysis (MDFA), was proposed. Although it is obtained by minimizing a principal component analysis-like loss function, this estimator empirically behaves like other consistent estimators of factor analysis, not like principal component analysis. Because the MDFA estimator cannot be formulated as a classical M-estimator, its statistical properties have not yet been studied. To explain this unexpected behavior theoretically, we establish the consistency of the MDFA estimator for factor analysis; that is, we show that the MDFA estimator has the same limit as other consistent estimators of factor analysis.
It is well known that the quasi-optimality of the Galerkin finite element method for the Helmholtz equation depends on the mesh size and the wavenumber. In the literature, different criteria have been proposed to ensure quasi-optimality; these criteria are often difficult to obtain and depend on wavenumber-explicit regularity estimates. In the present work, we focus on criteria based on T-coercivity and weak T-coercivity, which highlight the dependence of the mesh size on the gap between the square of the wavenumber and the Laplace eigenvalues. We also propose an adaptive scheme, coupled with a residual-based indicator, for optimal mesh generation with minimal degrees of freedom.
The worst-case complexity of group-theoretic algorithms has been studied for a long time. Generic-case complexity, or complexity on random inputs, was introduced and studied relatively recently. In this paper, we address the average-case time complexity of the word problem in several classes of groups and show that it is often linear with respect to the length of an input word. The classes of groups that we consider include groups of matrices over the rationals (in particular, polycyclic groups), some classes of solvable groups, as well as free products. Along the way, we improve several bounds for the worst-case complexity of the word problem in groups of matrices, in particular in nilpotent groups. For free products, we also address the average-case complexity of the subgroup membership problem and show that it is often linear, too. Finally, we discuss the complexity of the identity problem, which has not been considered before.
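For matrix groups, the word problem reduces to matrix multiplication: a word in the generators represents the identity element iff its product evaluates to the identity matrix. A minimal sketch in the integer Heisenberg group (a nilpotent group), with a hypothetical encoding where uppercase letters denote inverses:

```python
import numpy as np

# Generators of the integer Heisenberg group and their (exact, unipotent) inverses.
X = np.array([[1, 1, 0], [0, 1, 0], [0, 0, 1]])
Y = np.array([[1, 0, 0], [0, 1, 1], [0, 0, 1]])
Xi = np.array([[1, -1, 0], [0, 1, 0], [0, 0, 1]])
Yi = np.array([[1, 0, 0], [0, 1, -1], [0, 0, 1]])
GENS = {"x": X, "y": Y, "X": Xi, "Y": Yi}

def is_identity_word(word):
    """Multiply out the word and compare with the identity matrix."""
    M = np.eye(3, dtype=int)
    for letter in word:
        M = M @ GENS[letter]
    return np.array_equal(M, np.eye(3, dtype=int))

print(is_identity_word("xX"))    # x x^{-1}: trivially the identity -> True
print(is_identity_word("xyXY"))  # the commutator [x, y] is nontrivial -> False
```

Integer-entry growth under repeated multiplication is precisely what drives the worst-case bounds for matrix groups; this sketch ignores that issue.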
We consider Gibbs distributions, which are families of probability distributions over a discrete space $\Omega$ with probability mass function of the form $\mu^\Omega_\beta(\omega) \propto e^{\beta H(\omega)}$ for $\beta$ in an interval $[\beta_{\min}, \beta_{\max}]$ and $H( \omega ) \in \{0 \} \cup [1, n]$. The partition function is the normalization factor $Z(\beta)=\sum_{\omega \in\Omega}e^{\beta H(\omega)}$. Two important parameters of these distributions are the log partition ratio $q = \log \tfrac{Z(\beta_{\max})}{Z(\beta_{\min})}$ and the counts $c_x = |H^{-1}(x)|$. These are correlated with system parameters in a number of physical applications and sampling algorithms. Our first main result is an algorithm that estimates the counts $c_x$ using roughly $\tilde O( \frac{q}{\varepsilon^2})$ samples for general Gibbs distributions and $\tilde O( \frac{n^2}{\varepsilon^2} )$ samples for integer-valued distributions (ignoring some second-order terms and parameters), and we show this is optimal up to logarithmic factors. We illustrate with improved algorithms for counting connected subgraphs, independent sets, and perfect matchings. As a key subroutine, we also develop algorithms to compute the partition function $Z$ using $\tilde O(\frac{q}{\varepsilon^2})$ samples for general Gibbs distributions and using $\tilde O(\frac{n^2}{\varepsilon^2})$ samples for integer-valued distributions.
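The identity $Z(\beta_2)/Z(\beta_1) = \mathbb{E}_{\mu_{\beta_1}}\!\left[e^{(\beta_2-\beta_1)H}\right]$ underlies sample-based estimation of the partition function, and telescoping it over a schedule of inverse temperatures keeps each factor low-variance. A toy sketch with a hypothetical Hamiltonian $H(\omega)=\omega$ on a small state space, using exact sampling by enumeration (not the paper's algorithm):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy Gibbs distribution: Omega = {0, ..., n}, H(w) = w (hypothetical Hamiltonian).
n = 10
H = np.arange(n + 1)

def gibbs_sample(beta, size):
    w = np.exp(beta * H)
    return rng.choice(H, size=size, p=w / w.sum())

def log_ratio_estimate(b1, b2, samples=20000):
    # Z(b2)/Z(b1) = E_{mu_{b1}}[ exp((b2 - b1) H) ]  (importance-sampling identity)
    s = gibbs_sample(b1, samples)
    return np.log(np.mean(np.exp((b2 - b1) * s)))

# Telescoping product over a schedule of inverse temperatures.
betas = np.linspace(0.0, 1.0, 11)
est = sum(log_ratio_estimate(a, b) for a, b in zip(betas, betas[1:]))
exact = np.log(np.exp(betas[-1] * H).sum()) - np.log(np.exp(betas[0] * H).sum())
print(est, exact)  # the estimate of q tracks the exact log partition ratio
```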
The exponential trapezoidal rule is proposed and analyzed for the numerical integration of semilinear integro-differential equations. Although the method is implicit, the numerical solution is easily obtained by standard fixed-point iteration, making its implementation straightforward. Second-order convergence in time is shown in an abstract Hilbert space framework under reasonable assumptions on the problem. Numerical experiments illustrate the proven order of convergence.
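The fixed-point treatment of the implicit stage can be illustrated on a scalar stand-in: each trapezoidal step solves $u_{n+1} = u_n + \tfrac{h}{2}\bigl(f(u_n)+f(u_{n+1})\bigr)$ by simple iteration, which contracts whenever $\tfrac{h}{2}\,\mathrm{Lip}(f) < 1$. A hypothetical scalar ODE example (not an integro-differential equation) that also exhibits the second-order convergence:

```python
import numpy as np

# One implicit trapezoidal step for u' = f(u), solved by fixed-point iteration.
def f(u):
    return -u + np.sin(u)

def trapezoidal_step(u0, h, tol=1e-12, max_iter=100):
    rhs = u0 + 0.5 * h * f(u0)   # the explicit part stays fixed
    u1 = u0                      # initial guess for the implicit stage
    for _ in range(max_iter):
        u_new = rhs + 0.5 * h * f(u1)
        if abs(u_new - u1) < tol:
            return u_new
        u1 = u_new
    return u1

# March to T = 1 and compare step sizes: halving h should quarter the error.
def solve(h, T=1.0):
    u = 1.0
    for _ in range(round(T / h)):
        u = trapezoidal_step(u, h)
    return u

u_h, u_h2, u_ref = solve(0.1), solve(0.05), solve(0.00125)
print((u_h - u_ref) / (u_h2 - u_ref))  # approx 4, consistent with order 2
```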
Mendelian randomization is an instrumental variable method that utilizes genetic information to investigate the causal effect of a modifiable exposure on an outcome. In most cases, the exposure changes over time. Understanding the time-varying causal effect of the exposure can yield detailed insights into mechanistic effects and the potential impact of public health interventions. Recently, a growing number of Mendelian randomization studies have attempted to explore time-varying causal effects. However, the proposed approaches oversimplify temporal information and rely on overly restrictive structural assumptions, limiting their reliability in addressing time-varying causal problems. This paper considers a novel approach to estimate time-varying effects through continuous-time modelling by combining functional principal component analysis and weak-instrument-robust techniques. Our method effectively utilizes available data without making strong structural assumptions and can be applied in general settings where the exposure measurements occur at different timepoints for different individuals. We demonstrate through simulations that our proposed method performs well in estimating time-varying effects and provides reliable inference results when the time-varying effect form is correctly specified. The method could theoretically be used to estimate arbitrarily complex time-varying effects. However, there is a trade-off between model complexity and instrument strength. Estimating complex time-varying effects requires instruments that are unrealistically strong. We illustrate the application of this method in a case study examining the time-varying effects of systolic blood pressure on urea levels.
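The functional principal component step can be sketched on a grid: discretized trajectories are centered, their sample covariance is eigendecomposed, and each subject is summarized by a few scores on the leading eigenfunctions. A toy illustration with synthetic (hypothetical) exposure curves, not the paper's full estimation procedure:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical noisy exposure trajectories built from two smooth modes.
t = np.linspace(0, 1, 50)
n = 300
scores_true = rng.normal(size=(n, 2)) * np.array([2.0, 0.5])
basis = np.vstack([np.sin(np.pi * t), np.cos(np.pi * t)])
X = scores_true @ basis + 0.1 * rng.normal(size=(n, len(t)))

Xc = X - X.mean(axis=0)
C = Xc.T @ Xc / n                 # sample covariance on the time grid
evals, evecs = np.linalg.eigh(C)  # ascending eigenvalues
phi = evecs[:, ::-1][:, :2]       # leading eigenfunctions (grid values)
scores = Xc @ phi                 # FPC scores: one low-dim summary per subject
var_explained = evals[::-1][:2].sum() / evals.sum()
print(var_explained)              # close to 1: two components capture the signal
```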
This work proposes a discretization of the acoustic wave equation with possibly oscillatory coefficients, based on a superposition of discrete solutions to spatially localized subproblems computed with an implicit time discretization. Based on exponentially decaying entries of the global system matrices and an appropriate partition of unity, it is proved that the superposition of localized solutions is appropriately close to the solution of the (global) implicit scheme. This justifies the localized (and, in particular, parallel) computation on multiple overlapping subdomains. Moreover, a restart is introduced after a certain number of time steps to maintain a moderate overlap of the subdomains. Overall, the approach may be understood as a domain decomposition strategy in space on successive short time intervals that completely avoids inner iterations. Numerical examples are presented.
Functional Differential Equations (FDEs) play a fundamental role in many areas of mathematical physics, including fluid dynamics (Hopf characteristic functional equation), quantum field theory (Schwinger-Dyson equation), and statistical physics. Despite their significance, computing solutions to FDEs remains a longstanding challenge in mathematical physics. In this paper we address this challenge by introducing new approximation theory and high-performance computational algorithms designed for solving FDEs on tensor manifolds. Our approach involves approximating FDEs using high-dimensional partial differential equations (PDEs), and then solving such high-dimensional PDEs on a low-rank tensor manifold leveraging high-performance parallel tensor algorithms. The effectiveness of the proposed approach is demonstrated through its application to the Burgers-Hopf FDE, which governs the characteristic functional of the stochastic solution to the Burgers equation evolving from a random initial state.
We propose and analyze a novel approach to construct structure-preserving approximations for the Poisson-Nernst-Planck equations, focusing on the positivity-preserving and mass-conservation properties. The strategy consists of a standard time-marching step followed by a projection (or correction) step that enforces the desired physical constraints (positivity and mass conservation). Based on the $L^2$ projection, we construct a second-order Crank-Nicolson type finite difference scheme, which is linear (excluding the very efficient $L^2$ projection step), positivity preserving, and mass conserving. Rigorous error estimates in the $L^2$ norm are established, which are second-order accurate in both space and time. Other choices of projection, e.g., the $H^1$ projection, are also discussed. Numerical examples are presented to verify the theoretical results and demonstrate the efficiency of the proposed method.
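The correction step can be illustrated in a simplified finite-dimensional form: the $L^2$ projection onto $\{\rho \ge 0,\ \sum_i \rho_i\,\Delta x = m\}$ amounts to a shift-and-clip $\rho_i = \max(\rho_i^* + \lambda, 0)$ with a single scalar $\lambda$ fixed by the mass constraint. A sketch that finds $\lambda$ by bisection (the paper's actual scheme and norms are not reproduced here):

```python
import numpy as np

# L2 projection of a grid function onto { rho >= 0, sum(rho) * dx = mass }.
# The KKT conditions give rho = max(rho_star + lam, 0) for a scalar lam.
def l2_projection(rho_star, dx, mass):
    target = mass / dx
    n = len(rho_star)
    lo = target / n - rho_star.max()   # at this lam, sum <= target
    hi = target / n - rho_star.min()   # at this lam, sum >= target
    for _ in range(100):               # bisection on the multiplier lam
        lam = 0.5 * (lo + hi)
        if np.maximum(rho_star + lam, 0.0).sum() > target:
            hi = lam
        else:
            lo = lam
    return np.maximum(rho_star + 0.5 * (lo + hi), 0.0)

rho_star = np.array([0.4, -0.1, 0.8, -0.05, 0.2])  # raw update with negative values
rho = l2_projection(rho_star, dx=0.2, mass=0.25)
print(rho)  # nonnegative, and rho.sum() * dx recovers the prescribed mass
```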
The minimum covariance determinant (MCD) estimator is a popular method for robustly estimating the mean and covariance of multivariate data. We extend the MCD to the setting where the observations are matrices rather than vectors and introduce the matrix minimum covariance determinant (MMCD) estimators for robust parameter estimation. These estimators possess equivariance properties, achieve a high breakdown point, and are consistent under elliptical matrix-variate distributions. We also develop an efficient algorithm with convergence guarantees to compute the MMCD estimators. Using the MMCD estimators, we can compute robust Mahalanobis distances that can be used for outlier detection. Those distances can be decomposed into outlyingness contributions from each cell, row, or column of a matrix-variate observation using Shapley values, a concept for outlier explanation recently introduced in the multivariate setting. Simulations and examples reveal the excellent properties and usefulness of the robust estimators.
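For the classical vector-valued MCD, the standard computational tool is the C-step: refit location and scatter on the $h$ observations with the smallest Mahalanobis distances, and repeat until the retained subset stabilizes. A hypothetical vector-case sketch (the MMCD extends this idea to matrix-variate observations):

```python
import numpy as np

rng = np.random.default_rng(1)

# C-step iteration for the vector-case MCD: alternate between fitting
# (mean, covariance) on a subset and keeping the h closest points.
def mcd(X, h, n_iter=50):
    n = X.shape[0]
    idx = rng.choice(n, size=h, replace=False)   # random initial subset
    for _ in range(n_iter):
        mu = X[idx].mean(axis=0)
        S = np.cov(X[idx], rowvar=False)
        # Squared Mahalanobis distances of all points to the current fit.
        d2 = np.einsum('ij,jk,ik->i', X - mu, np.linalg.inv(S), X - mu)
        new_idx = np.argsort(d2)[:h]
        if set(new_idx) == set(idx):             # subset stabilized
            break
        idx = new_idx
    return mu, S, d2   # robust location, scatter, squared distances

# Clean Gaussian data with a few gross outliers shifted far away.
X = rng.normal(size=(200, 3))
X[:10] += 10.0
mu, S, d2 = mcd(X, h=150)
print(np.abs(mu).max())        # near 0: the outliers do not pull the mean
print(np.argsort(-d2)[:10])    # the largest distances flag the outliers
```

The robust distances `d2` then play the role of the Mahalanobis distances used for outlier detection in the abstract above.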