It is known that when the diffuse interface thickness $\epsilon$ vanishes, the sharp interface limit of the stochastic reaction-diffusion equation is formally a stochastic geometric flow. To capture and simulate such geometric flows, it is crucial to develop numerical approximations whose error bounds depend polynomially on $\frac{1}{\epsilon}$. However, due to the loss of the spectral estimate for the linearized stochastic reaction-diffusion equation, obtaining such error bounds for numerical approximations has been an open problem. In this paper, we solve this weak error bound problem for stochastic reaction-diffusion equations near the sharp interface limit. We first introduce a regularized problem which enjoys exponential ergodicity. Then we present a regularity analysis of the regularized Kolmogorov and Poisson equations whose bounds depend only polynomially on $\frac{1}{\epsilon}$. Furthermore, we establish the desired weak error bound. This phenomenon can be viewed as a kind of regularization effect of the noise on the numerical approximation of stochastic partial differential equations (SPDEs). As a by-product, a central limit theorem for the weak approximation is shown near the sharp interface limit. Our method of proof can be extended to a number of other spatial and temporal numerical approximations for semilinear SPDEs.
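For concreteness, a representative model in this setting (an illustrative assumption on our part, not necessarily the exact equation treated in the paper) is the stochastic Allen-Cahn equation with diffuse interface thickness $\epsilon$,
\[
  \mathrm{d}u = \Big(\Delta u - \frac{1}{\epsilon^{2}}\,(u^{3}-u)\Big)\,\mathrm{d}t + \mathrm{d}W(t), \qquad u(0)=u_0,
\]
driven by a $Q$-Wiener process $W$; in the deterministic case its sharp interface limit as $\epsilon \to 0$ is mean curvature flow.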
A novel overlapping domain decomposition splitting algorithm based on a Crank-Nicolson method is developed for the stochastic nonlinear Schrödinger equation driven by multiplicative noise with non-periodic boundary conditions. The proposed algorithm can significantly reduce the computational cost while maintaining similar conservation laws. Numerical experiments illustrate the capability of the algorithm in different spatial dimensions and for various initial conditions. In particular, we compare the performance of the overlapping domain decomposition splitting algorithm with the stochastic multi-symplectic method in [S. Jiang, L. Wang and J. Hong, Commun. Comput. Phys., 2013] and the finite difference splitting scheme in [J. Cui, J. Hong, Z. Liu and W. Zhou, J. Differ. Equ., 2019]. We observe that our proposed algorithm has excellent computational efficiency and is highly competitive. It provides a useful tool for solving stochastic partial differential equations.
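As a point of reference (a typical form in this literature; the precise noise and nonlinearity may differ in the paper), the stochastic cubic nonlinear Schrödinger equation with Stratonovich multiplicative noise reads
\[
  \mathrm{i}\,\mathrm{d}u + \big(\Delta u + \lambda\,|u|^{2}u\big)\,\mathrm{d}t = u \circ \mathrm{d}W(t),
\]
posed on a bounded domain with non-periodic (e.g. homogeneous Dirichlet) boundary conditions; for real-valued noise this form preserves the mass $\|u(t)\|_{L^2}$ almost surely, which is one of the conservation laws referred to above.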
The nonlocality of the fractional operator causes numerical difficulties for long-time computation of time-fractional evolution equations. This paper develops a high-order fast time-stepping discontinuous Galerkin finite element method for time-fractional diffusion equations, which saves storage and computational time. The optimal error estimate $O(N^{-p-1} + h^{m+1} + \varepsilon N^{r\alpha})$ of the current time-stepping discontinuous Galerkin method is rigorously proved, where $N$ denotes the number of time intervals, $p$ is the degree of polynomial approximation on each time subinterval, $h$ is the maximum space step, $r\ge1$, $m$ is the order of the finite element space, and $\varepsilon>0$ can be arbitrarily small. Numerical simulations verify the theoretical analysis.
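Presumably the model problem is the standard time-fractional diffusion equation with a Caputo derivative of order $\alpha\in(0,1)$ (an assumption on our part, stated here only to fix notation),
\[
  {}^{C}\!D_{t}^{\alpha}u(x,t) - \Delta u(x,t) = f(x,t), \qquad
  {}^{C}\!D_{t}^{\alpha}u(t) = \frac{1}{\Gamma(1-\alpha)}\int_{0}^{t}(t-s)^{-\alpha}\,u'(s)\,\mathrm{d}s,
\]
whose memory integral over the whole history $[0,t]$ is precisely the nonlocality that makes long-time computation costly without fast algorithms.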
We present difference schemes for stochastic transport equations with low-regularity velocity fields. We establish $L^2$ stability and convergence of the difference approximations under conditions that are less strict than those required for deterministic transport equations. The $L^2$ estimate, crucial for the analysis, is obtained through a discrete duality argument and a comprehensive examination of a class of backward parabolic difference schemes.
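A typical equation in this setting (our illustration; the paper's precise assumptions on the data may differ) is the stochastic transport equation with Stratonovich transport noise,
\[
  \mathrm{d}u + b(t,x)\cdot\nabla u\,\mathrm{d}t + \sigma\,\nabla u \circ \mathrm{d}W(t) = 0, \qquad u(0)=u_0,
\]
where $b$ is the low-regularity velocity field; it is the presence of the noise term that allows weaker assumptions on $b$ than in the deterministic theory.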
A pair of linear codes whose intersection is of dimension $\ell$, where $\ell$ is a non-negative integer, is called an $\ell$-intersection pair of codes. This paper focuses on studying $\ell$-intersection pairs of $\lambda_i$-constacyclic codes, $i=1,2$, and of conjucyclic codes. We first characterize an $\ell$-intersection pair of $\lambda_i$-constacyclic codes. A formula for $\ell$ is established in terms of the degrees of the generator polynomials of the $\lambda_i$-constacyclic codes. This allows us to obtain a condition for $\ell$-linear complementary pairs (LCP) of constacyclic codes. Later, we introduce and characterize $\ell$-intersection pairs of conjucyclic codes over $\mathbb{F}_{q^2}$. The first observation in the process is that there are no non-trivial linear conjucyclic codes over finite fields, so we focus on the characterization of additive conjucyclic (ACC) codes. We show that the largest $\mathbb{F}_q$-subcode of an ACC code over $\mathbb{F}_{q^2}$ is cyclic and obtain its generating polynomial. This enables us to find the size of an ACC code. Furthermore, we discuss the trace code of an ACC code and show that it is cyclic. Finally, we determine $\ell$-intersection pairs of trace codes of ACC codes over $\mathbb{F}_4$.
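For orientation, in the classical cyclic special case ($\lambda_1=\lambda_2=1$; this is only an illustration of the kind of formula involved, not the paper's general constacyclic result) one has the well-known description: if $C_i=\langle g_i(x)\rangle \subseteq \mathbb{F}_q[x]/(x^n-1)$, then
\[
  C_1\cap C_2=\langle \operatorname{lcm}(g_1(x),g_2(x))\rangle, \qquad
  \ell=\dim(C_1\cap C_2)=n-\deg \operatorname{lcm}(g_1(x),g_2(x)).
\]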
We show that for log-concave real random variables with fixed variance the Shannon differential entropy is minimized for an exponential random variable. We apply this result to derive upper bounds on capacities of additive noise channels with log-concave noise. We also improve constants in the reverse entropy power inequalities for log-concave random variables.
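Written out (our restatement of the main result together with the standard formula for the entropy of an exponential law, in nats): if $X$ is log-concave with $\operatorname{Var}(X)=\sigma^2$, then
\[
  h(X)\;\ge\; h\big(\mathrm{Exp}(1/\sigma)\big)\;=\;1+\log\sigma,
\]
with equality when $X$ itself is exponential.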
Challenges with data in the big-data era include (i) that the dimension $p$ is often larger than the sample size $n$, and (ii) that outliers or contaminated points are frequently hidden and difficult to detect. Challenge (i) renders most conventional methods inapplicable and has therefore attracted tremendous attention from the statistics, computer science, and biomedical communities. Numerous penalized regression methods have been introduced as modern methods for analyzing high-dimensional data. Comparatively little attention has been paid to challenge (ii), though. Penalized regression methods can do their job very well and are expected to handle challenge (ii) simultaneously. Most of them, however, can break down under a single outlier (or a single adversarially contaminated point), as revealed in this article. The article systematically examines leading penalized regression methods in the literature in terms of their robustness, provides a quantitative assessment, and reveals that most of them can break down under a single outlier. Consequently, a novel robust penalized regression method based on the least sum of squares of depth-trimmed residuals is proposed and studied carefully. Experiments with simulated and real data reveal that the newly proposed method can outperform some leading competitors in estimation and prediction accuracy in the cases considered.
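To make the flavor of such an estimator concrete, here is a minimal, heavily simplified Python sketch of an iterative trim-then-refit penalized fit that discards observations with the largest residuals. It trims by residual magnitude, so it is only a stand-in in the spirit of sparse least trimmed squares, not the depth-trimmed-residual estimator proposed in the article; the function and parameter names are our own.

# Simplified trim-then-refit penalized regression (illustrative only; trims by
# residual magnitude, not by residual depth as in the article).
import numpy as np
from sklearn.linear_model import Lasso

def trimmed_penalized_fit(X, y, keep_frac=0.8, alpha=0.1, n_iter=20, seed=0):
    rng = np.random.default_rng(seed)
    n = len(y)
    h = int(keep_frac * n)                  # observations kept at each step
    subset = rng.choice(n, h, replace=False)
    model = Lasso(alpha=alpha)
    for _ in range(n_iter):
        model.fit(X[subset], y[subset])     # penalized fit on the trimmed sample
        resid = np.abs(y - model.predict(X))
        new_subset = np.argsort(resid)[:h]  # keep the h smallest residuals
        if set(new_subset) == set(subset):  # concentration step has converged
            break
        subset = new_subset
    return model

# Toy data with p > n and a few gross outliers in the response
rng = np.random.default_rng(1)
n, p = 100, 200
X = rng.standard_normal((n, p))
beta = np.zeros(p); beta[:5] = 3.0
y = X @ beta + 0.1 * rng.standard_normal(n)
y[:5] += 50.0                               # contaminated points
fit = trimmed_penalized_fit(X, y)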
We extend the error bounds from [SIMAX, Vol. 43, Iss. 2, pp. 787-811 (2022)] for the Lanczos method for matrix function approximation to the block algorithm. Numerical experiments suggest that our bounds are fairly robust to changing block size and have the potential for use as a practical stopping criterion. Further experiments work towards a better understanding of how certain hyperparameters should be chosen in order to maximize the quality of the error bounds, even in the previously studied block-size-one case.
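For orientation, here is a minimal sketch (block size one, our own illustration rather than the algorithm analyzed above) of the Lanczos approximation of $f(A)b$: build an orthonormal Krylov basis $Q_k$ and tridiagonal $T_k$, then return $\|b\|\,Q_k f(T_k)e_1$.

# Lanczos approximation of f(A)b for symmetric A (illustrative sketch).
import numpy as np
from scipy.linalg import expm

def lanczos_fA_b(A, b, k, f=expm):
    n = len(b)
    Q = np.zeros((n, k + 1))
    alpha, beta = np.zeros(k), np.zeros(k + 1)
    Q[:, 0] = b / np.linalg.norm(b)
    for j in range(k):
        w = A @ Q[:, j]
        if j > 0:
            w = w - beta[j] * Q[:, j - 1]
        alpha[j] = Q[:, j] @ w
        w = w - alpha[j] * Q[:, j]
        beta[j + 1] = np.linalg.norm(w)
        if beta[j + 1] < 1e-14:             # Krylov space is invariant: stop early
            k = j + 1
            break
        Q[:, j + 1] = w / beta[j + 1]
    T = np.diag(alpha[:k]) + np.diag(beta[1:k], 1) + np.diag(beta[1:k], -1)
    e1 = np.zeros(k); e1[0] = 1.0
    return np.linalg.norm(b) * Q[:, :k] @ (f(T) @ e1)   # ||b|| Q_k f(T_k) e_1

# Example: approximate exp(A) b for a random symmetric A
rng = np.random.default_rng(0)
M = rng.standard_normal((200, 200)); A = (M + M.T) / 20
b = rng.standard_normal(200)
approx = lanczos_fA_b(A, b, k=30)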
Multivariate imputation by chained equations (MICE) is one of the most popular approaches to addressing missing values in a data set. This approach requires specifying a univariate imputation model for every variable under imputation. The specification of which predictors should be included in these univariate imputation models can be a daunting task. Principal component analysis (PCA) can simplify this process by replacing all of the potential imputation model predictors with a few components summarizing their variance. In this article, we extend the use of PCA with MICE to include a supervised aspect whereby information from the variables under imputation is incorporated into the principal component estimation. We conducted an extensive simulation study to assess the statistical properties of MICE with different versions of supervised dimensionality reduction, and we compared them with classical unsupervised PCA as a simpler dimensionality reduction technique.
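A minimal sketch of the unsupervised baseline (PCA on complete auxiliary predictors feeding a chained-equations-style imputer), assuming scikit-learn; the supervised variants studied in the article additionally let the variables under imputation inform the component estimation, and the names below are our own.

# PCA-reduced predictors + iterative (chained-equations-style) imputation.
# Unsupervised-PCA variant only; illustrative, not the article's exact workflow.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

def pca_then_impute(X_target, X_aux, n_components=5, random_state=0):
    """X_target: columns with missing values; X_aux: complete auxiliary predictors."""
    pcs = PCA(n_components=n_components).fit_transform(X_aux)  # summarize predictors
    Z = np.column_stack([X_target, pcs])                       # targets + components
    imputer = IterativeImputer(sample_posterior=True, random_state=random_state)
    Z_imp = imputer.fit_transform(Z)
    return Z_imp[:, :X_target.shape[1]]                        # imputed target columns

# Toy usage
rng = np.random.default_rng(0)
X_aux = rng.standard_normal((200, 30))
X_target = X_aux[:, :3] @ rng.standard_normal((3, 2)) + 0.1 * rng.standard_normal((200, 2))
X_target[rng.random((200, 2)) < 0.2] = np.nan                  # 20% missing at random
X_imputed = pca_then_impute(X_target, X_aux)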
Binary responses arise in a multitude of statistical problems, including binary classification, bioassay, current status data problems and sensitivity estimation. There has been an interest in such problems in the Bayesian nonparametrics community since the early 1970s, but inference given binary data is intractable for a wide range of modern simulation-based models, even when employing MCMC methods. Recently, Christensen (2023) introduced a novel simulation technique based on counting permutations, which can estimate both posterior distributions and marginal likelihoods for any model from which a random sample can be generated. However, the accompanying implementation of this technique struggles when the sample size is too large ($n > 250$). Here we present perms, a new implementation of said technique which is substantially faster and able to handle larger data problems than the original implementation. It is available both as an R package and a Python library. The basic usage of perms is illustrated via two simple examples: a tractable toy problem and a bioassay problem. A more complex example involving changepoint analysis is also considered. We also cover the details of the implementation and illustrate the computational speed gain of perms via a simple simulation study.
We consider the minimal thermodynamic cost of an individual computation, where a single input $x$ is mapped to a single output $y$. In prior work, Zurek proposed that this cost was given by $K(x\vert y)$, the conditional Kolmogorov complexity of $x$ given $y$ (up to an additive constant which does not depend on $x$ or $y$). However, this result was derived from an informal argument, applied only to deterministic computations, and had an arbitrary dependence on the choice of protocol (via the additive constant). Here we use stochastic thermodynamics to derive a generalized version of Zurek's bound from a rigorous Hamiltonian formulation. Our bound applies to all quantum and classical processes, whether noisy or deterministic, and it explicitly captures the dependence on the protocol. We show that $K(x\vert y)$ is a minimal cost of mapping $x$ to $y$ that must be paid using some combination of heat, noise, and protocol complexity, implying a tradeoff between these three resources. Our result is a kind of "algorithmic fluctuation theorem" with implications for the relationship between the Second Law and the Physical Church-Turing thesis.
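Schematically (our paraphrase of the three-way tradeoff stated above, not the paper's exact theorem, with the heat $Q(x\to y)$ measured in units of $k_B T\ln 2$), the bound can be read as
\[
  \underbrace{\frac{Q(x\to y)}{k_B T\ln 2}}_{\text{heat}}
  \;+\;\underbrace{\log_2\frac{1}{P(y\,\vert\,x)}}_{\text{noise}}
  \;+\;\underbrace{K(\text{protocol})}_{\text{protocol complexity}}
  \;\gtrsim\; K(x\,\vert\,y),
\]
so a simple, deterministic protocol must pay essentially all of $K(x\vert y)$ as heat.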