A fully discrete finite difference scheme for stochastic reaction-diffusion equations driven by a $1+1$-dimensional white noise is studied. The optimal strong rate of convergence is proved without imposing any regularity assumption on the non-linear reaction term. The proof relies on stochastic sewing techniques.
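To make the setting concrete, here is a minimal sketch, not the paper's scheme and without its convergence guarantees: an explicit fully discrete finite difference approximation of $\mathrm{d}u = (\partial_{xx} u + f(u))\,\mathrm{d}t + \mathrm{d}W$ on $[0,1]$ with space-time white noise. The initial condition, boundary conditions, and reaction term $f$ are chosen only for illustration.

```python
# A minimal sketch, not the paper's scheme: explicit Euler in time plus a
# central second difference in space for du = (u_xx + f(u)) dt + dW on [0, 1]
# with space-time white noise and homogeneous Dirichlet boundary conditions.
import numpy as np

def simulate(f, T=1.0, N_x=64, N_t=16384, seed=0):
    dx, dt = 1.0 / N_x, T / N_t            # dt <= dx^2 / 2 for stability
    rng = np.random.default_rng(seed)
    x = np.linspace(0.0, 1.0, N_x + 1)
    u = np.sin(np.pi * x)                   # illustrative initial condition
    for _ in range(N_t):
        lap = np.zeros_like(u)
        lap[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
        # white-noise increment: i.i.d. N(0, dt/dx) per grid cell
        xi = rng.normal(scale=np.sqrt(dt / dx), size=u.shape)
        u = u + dt * (lap + f(u)) + xi
        u[0] = u[-1] = 0.0                   # enforce Dirichlet boundary
    return x, u

x, u = simulate(lambda v: v - v**3)          # illustrative reaction term
```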
Austrin showed that the approximation ratio $\beta\approx 0.94016567$ obtained by the MAX 2-SAT approximation algorithm of Lewin, Livnat and Zwick (LLZ) is optimal modulo the Unique Games Conjecture (UGC) and modulo a Simplicity Conjecture stating that the worst performance of the algorithm is obtained on so-called simple configurations. We prove Austrin's conjecture, thereby showing the optimality of the LLZ approximation algorithm, relying only on the Unique Games Conjecture. Our proof uses a combination of analytic and computational tools. We also present new approximation algorithms for two restrictions of the MAX 2-SAT problem. For MAX HORN-$\{1,2\}$-SAT, i.e., MAX CSP$(\{x\lor y,\bar{x}\lor y,x,\bar{x}\})$, in which clauses are not allowed to contain two negated literals, we obtain an approximation ratio of $0.94615981$. For MAX CSP$(\{x\lor y,x,\bar{x}\})$, i.e., when 2-clauses are not allowed to contain negated literals, we obtain an approximation ratio of $0.95397990$. By adapting Austrin's and our arguments for the MAX 2-SAT problem, we show that these two approximation ratios are also tight, modulo only the UGC. This completes a full characterization of the approximability of the MAX 2-SAT problem and its restrictions.
High-dimensional central limit theorems have been intensively studied, with most of the focus on the case where the data are sub-Gaussian or sub-exponential. However, heavier tails are omnipresent in practice. In this article, we study the critical growth rates of dimension $d$ below which Gaussian approximations are asymptotically valid but beyond which they are not. We are particularly interested in how these thresholds depend on the number of moments $m$ that the observations possess. For every $m\in(2,\infty)$, we construct i.i.d. random vectors $\textbf{X}_1,\dots,\textbf{X}_n$ in $\mathbb{R}^d$, the entries of which are independent and have a common distribution (independent of $n$ and $d$) with finite $m$th absolute moment, and such that the following holds: if there exists an $\varepsilon\in(0,\infty)$ such that $d/n^{m/2-1+\varepsilon}\not\to 0$, then the Gaussian approximation error (GAE) satisfies $$ \limsup_{n\to\infty}\sup_{t\in\mathbb{R}}\left[\mathbb{P}\left(\max_{1\leq j\leq d}\frac{1}{\sqrt{n}}\sum_{i=1}^n\textbf{X}_{ij}\leq t\right)-\mathbb{P}\left(\max_{1\leq j\leq d}\textbf{Z}_j\leq t\right)\right]=1,$$ where $\textbf{Z} \sim \mathsf{N}_d(\textbf{0}_d,\mathbf{I}_d)$. On the other hand, a result in Chernozhukov et al. (2023a) implies that the left-hand side above is zero if just $d/n^{m/2-1-\varepsilon}\to 0$ for some $\varepsilon\in(0,\infty)$. In this sense, there is a moment-dependent phase transition at the threshold $d=n^{m/2-1}$ above which the limiting GAE jumps from zero to one.
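For intuition, the following hedged Monte Carlo sketch estimates the quantity inside the supremum above for one heavy-tailed choice of entry distribution (a standardized Student-$t$ with three degrees of freedom, chosen purely for illustration and not taken from the paper's construction).

```python
# A hedged illustration, not the paper's construction: Monte Carlo estimate of
# sup_t [ P(max_j n^{-1/2} sum_i X_ij <= t) - P(max_j Z_j <= t) ] for entries
# drawn from a standardized Student-t(3) distribution (finite absolute moments
# only of order < 3).
import numpy as np

rng = np.random.default_rng(0)
n, d, reps, df = 200, 1000, 400, 3.0
max_S, max_Z = np.empty(reps), np.empty(reps)
for r in range(reps):
    X = rng.standard_t(df, size=(n, d)) / np.sqrt(df / (df - 2.0))  # unit variance
    max_S[r] = (X.sum(axis=0) / np.sqrt(n)).max()
    max_Z[r] = rng.standard_normal(d).max()

ts = np.linspace(min(max_S.min(), max_Z.min()), max(max_S.max(), max_Z.max()), 400)
gae = max(np.mean(max_S <= t) - np.mean(max_Z <= t) for t in ts)
print(f"estimated Gaussian approximation error: {gae:.3f}")
```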
We present an energy/entropy stable and high order accurate finite difference method for solving the linear/nonlinear shallow water equations (SWE) in vector invariant form using the newly developed dual-pairing (DP) and dispersion-relation preserving (DRP) summation by parts (SBP) finite difference operators. We derive new well-posed boundary conditions for the SWE in one space dimension, formulated in terms of fluxes and applicable to both linear and nonlinear problems. For nonlinear problems, entropy stability ensures the boundedness of numerical solutions; however, it does not guarantee convergence. An adequate amount of numerical dissipation is necessary to control high frequency errors which could ruin numerical simulations. Using the dual-pairing SBP framework, we derive a high order accurate nonlinear hyper-viscosity operator which dissipates entropy and enstrophy. The hyper-viscosity operator effectively tames oscillations from shocks and discontinuities, and eliminates spurious high frequency grid-scale errors. The numerical method is most suitable for simulations of sub-critical flows typically observed in atmospheric and geostrophic flow problems. We prove a priori error estimates for the semi-discrete approximations of both the linear and nonlinear SWE. We verify convergence, accuracy and the well-balanced property via the method of manufactured solutions (MMS) and canonical test problems such as the dam break, lake at rest, and a two-dimensional rotating and merging vortex problem.
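As background for readers unfamiliar with SBP operators, the sketch below builds the classical second-order accurate SBP first-derivative pair $(H, Q)$ on a uniform grid and checks the summation-by-parts identity; the dual-pairing and dispersion-relation preserving operators used in the paper are more elaborate and are not reproduced here.

```python
# Minimal sketch: a classical second-order SBP first-derivative operator
# D = H^{-1} Q on a uniform grid, illustrating the summation-by-parts
# property H D + (H D)^T = B = diag(-1, 0, ..., 0, 1). This is generic
# background, not the paper's dual-pairing / DRP construction.
import numpy as np

def sbp_second_order(n, h):
    H = h * np.eye(n)
    H[0, 0] = H[-1, -1] = h / 2.0          # boundary quadrature weights
    Q = np.zeros((n, n))
    for i in range(n - 1):                  # interior: central differences
        Q[i, i + 1], Q[i + 1, i] = 0.5, -0.5
    Q[0, 0], Q[-1, -1] = -0.5, 0.5          # one-sided boundary closures
    D = np.linalg.solve(H, Q)
    return D, H

n, h = 21, 1.0 / 20
D, H = sbp_second_order(n, h)
B = np.zeros((n, n)); B[0, 0], B[-1, -1] = -1.0, 1.0
print(np.allclose(H @ D + (H @ D).T, B))    # SBP property holds
```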
A new mechanical model for noncircular shallow tunnelling that accounts for the initial stress field is proposed in this paper by constraining the far-field ground surface to eliminate the displacement singularity at infinity, turning the originally unbalanced tunnel excavation problem in existing solutions into an equilibrium problem with mixed boundaries. By applying analytic continuation, the mixed boundaries are transformed into a homogeneous Riemann-Hilbert problem, which is subsequently solved via an efficient and accurate iterative method with boundary conditions of static equilibrium, displacement single-valuedness, and traction along the tunnel periphery. The Lanczos filtering technique is applied to the final stress and displacement solution to reduce the Gibbs phenomenon caused by the constrained far-field ground surface and thereby obtain more accurate results. Several numerical cases are conducted to thoroughly verify the proposed solution by examining boundary conditions and comparing with existing solutions, and all the results are in good agreement. Further numerical cases are then conducted to investigate the stress and deformation distributions along the ground surface and tunnel periphery, and several engineering recommendations are given. The limitations of the proposed solution are also discussed for objectivity.
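The Lanczos filtering step can be illustrated on a toy problem: the sketch below applies Lanczos sigma factors to the truncated Fourier sine series of a square wave, where the Gibbs overshoot is easy to see. This is only a generic illustration, not the paper's stress or displacement series.

```python
# Minimal sketch: Lanczos sigma filtering of a truncated Fourier series of a
# square wave, showing how the filter damps Gibbs oscillations.
import numpy as np

N = 40                                                  # truncation order (assumed)
x = np.linspace(-np.pi, np.pi, 2001)
k = np.arange(1, N + 1)
coeff = np.where(k % 2 == 1, 4.0 / (np.pi * k), 0.0)    # square-wave sine series
sigma = np.sinc(k / N)                                  # Lanczos factors sin(pi k/N)/(pi k/N)
partial = (coeff[:, None] * np.sin(np.outer(k, x))).sum(axis=0)
filtered = ((sigma * coeff)[:, None] * np.sin(np.outer(k, x))).sum(axis=0)
print(partial.max(), filtered.max())                    # filtered overshoot is much smaller
```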
The distribution-free chain ladder of Mack justified the use of the chain ladder predictor and enabled Mack to derive an estimator of the conditional mean squared error of prediction for the chain ladder predictor. Classical insurance loss models, i.e., models of compound Poisson type, are not consistent with Mack's distribution-free chain ladder. However, for a sequence of compound Poisson loss models indexed by exposure (e.g. number of contracts), we show that the chain ladder predictor and Mack's estimator of conditional mean squared error of prediction can be derived by considering large exposure asymptotics. Hence, quantifying chain ladder prediction uncertainty can be done with Mack's estimator without relying on the validity of the model assumptions of the distribution-free chain ladder.
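For readers outside actuarial science, the sketch below computes the chain ladder development factors and predictor on a small synthetic run-off triangle (the numbers are invented for illustration); Mack's estimator of the conditional mean squared error of prediction builds on these same quantities but is omitted here.

```python
# Minimal sketch: the chain ladder predictor on a synthetic run-off triangle.
# Development factors are the volume-weighted ratios used in Mack's
# distribution-free chain ladder.
import numpy as np

# cumulative claims C[i, j]; np.nan marks not-yet-observed cells
C = np.array([
    [100., 160., 190., 200.],
    [110., 175., 210., np.nan],
    [120., 185., np.nan, np.nan],
    [130., np.nan, np.nan, np.nan],
])
n = C.shape[0]
f = np.empty(n - 1)
for j in range(n - 1):
    rows = ~np.isnan(C[:, j + 1])                      # rows with development year j+1 observed
    f[j] = C[rows, j + 1].sum() / C[rows, j].sum()     # development factor f_j

pred = C.copy()
for i in range(n):
    for j in range(n - 1):
        if np.isnan(pred[i, j + 1]):
            pred[i, j + 1] = pred[i, j] * f[j]         # chain ladder prediction
print(np.round(pred, 1))
```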
Rational function approximations provide a simple but flexible alternative to polynomial approximation, allowing one to capture complex non-linearities without oscillatory artifacts. However, there have been few attempts to use rational functions on noisy data because of the likelihood of creating spurious singularities. To avoid the creation of singularities, we use Bernstein polynomials and appropriate conditions on their coefficients to force the denominator to be strictly positive. While this reduces the range of rational functions that can be expressed, it keeps all the benefits of rational functions while maintaining the robustness of polynomial approximation in noisy data scenarios. Our numerical experiments on noisy data show that existing rational approximation methods frequently produce spurious poles inside the approximation domain. This contrasts with our method, which cannot create poles in the approximation domain and provides better fits than polynomial approximation, and even than penalized splines, on functions of multiple variables. Moreover, guaranteeing a pole-free approximation on an interval is critical for estimating non-constant coefficients when numerically solving differential equations using spectral methods. This provides a compact representation of the original differential equation, allowing numerical solvers to achieve high accuracy quickly, as seen in our experiments.
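A hedged sketch of the core idea follows: numerator and denominator are expanded in Bernstein bases on $[0,1]$, and the denominator coefficients are bounded below by a small positive constant, which keeps the denominator strictly positive (hence pole-free) on the interval because Bernstein basis functions are nonnegative and sum to one. The degrees, the lower bound, and the nonlinear least-squares fit are illustrative choices, not the paper's exact estimator.

```python
# Illustrative sketch of a pole-free rational fit on [0, 1]: both p and q are
# expanded in Bernstein bases, and the coefficients of q are constrained to be
# at least delta > 0, so q(x) >= delta on [0, 1].
import numpy as np
from scipy.optimize import least_squares
from scipy.stats import binom

def bernstein_basis(x, deg):
    k = np.arange(deg + 1)
    return binom.pmf(k[None, :], deg, x[:, None])      # B_{k,deg}(x) for x in [0, 1]

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 200)
y = np.tanh(10 * (x - 0.5)) + 0.05 * rng.standard_normal(x.size)   # assumed noisy data

deg_p, deg_q, delta = 5, 5, 1e-3
Bp, Bq = bernstein_basis(x, deg_p), bernstein_basis(x, deg_q)

def residuals(theta):
    p = Bp @ theta[:deg_p + 1]
    q = Bq @ theta[deg_p + 1:]
    return p / q - y

theta0 = np.concatenate([np.zeros(deg_p + 1), np.ones(deg_q + 1)])
lb = np.concatenate([-np.inf * np.ones(deg_p + 1), delta * np.ones(deg_q + 1)])
fit = least_squares(residuals, theta0, bounds=(lb, np.inf))
yhat = (Bp @ fit.x[:deg_p + 1]) / (Bq @ fit.x[deg_p + 1:])
```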
We investigate the randomized decision tree complexity of a specific class of read-once threshold functions. A read-once threshold formula can be defined by a rooted tree, every internal node of which is labeled by a threshold function $T_k^n$ (which outputs 1 if and only if at least $k$ of its $n$ input bits are 1) and each leaf by a distinct variable. Such a tree defines a Boolean function in a natural way. We focus on the randomized decision tree complexity of such functions when the underlying tree is a uniform tree with all its internal nodes labeled by the same threshold function. We prove lower bounds of the form $c(k,n)^d$, where $d$ is the depth of the tree. We also treat trees with alternating levels of AND and OR gates separately and show asymptotically optimal bounds, extending the known bounds for the binary case.
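To fix notation, the sketch below evaluates a uniform read-once threshold tree by recursively querying subtrees in a uniformly random order and stopping as soon as the threshold outcome at a node is determined. This is a natural randomized evaluation strategy given here only for illustration, not necessarily the algorithm achieving the stated bounds.

```python
# Illustrative only: randomized evaluation of a uniform read-once threshold
# tree. Internal nodes are tuples of children, leaves are variable indices,
# and every internal node computes T_k^n (1 iff at least k of its n children
# evaluate to 1). Children are queried in random order and evaluation stops
# once the node's value is determined.
import random

def evaluate(node, assignment, k, n, counter):
    if isinstance(node, int):                 # leaf: query its variable
        counter[0] += 1
        return assignment[node]
    ones = zeros = 0
    children = list(node)
    random.shuffle(children)                  # randomized child order
    for child in children:
        if evaluate(child, assignment, k, n, counter):
            ones += 1
        else:
            zeros += 1
        if ones >= k:                          # threshold already reached
            return 1
        if zeros > n - k:                      # threshold can no longer be reached
            return 0
    return 0

# depth-2 uniform tree, every internal node labeled by T_2^3
tree = ((0, 1, 2), (3, 4, 5), (6, 7, 8))
assignment = [1, 0, 1, 0, 0, 1, 1, 1, 0]
queries = [0]
print(evaluate(tree, assignment, k=2, n=3, counter=queries), queries[0])
```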
Polar slice sampling, a Markov chain construction for approximate sampling, provably performs, under suitable assumptions on the target and initial distribution, independently of the state space dimension. We extend this result of Roberts & Rosenthal (2002) by developing a theory which identifies conditions, in terms of a generalized level set function, that imply an explicit lower bound on the spectral gap even in a general slice sampling context. Verifying the identified conditions for polar slice sampling yields a lower bound of 1/2 on the spectral gap in arbitrary dimension if the target density is rotationally invariant, log-concave along rays emanating from the origin, and sufficiently smooth. The general theoretical result is potentially applicable beyond the polar slice sampling framework.
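A heavily simplified sketch of the mechanism follows, restricted to the rotationally invariant case and not reflecting the paper's general framework: for a target $\varrho(x)\propto g(|x|)$ on $\mathbb{R}^d$, polar slice sampling reduces to a chain on the radius, since the direction is updated uniformly on the sphere. The Gaussian profile, dimension, and bisection-based level set search are assumptions made only for this example.

```python
# Simplified illustration: the radius chain of polar slice sampling for a
# rotationally invariant target rho(x) ∝ g(|x|). Given radius r, draw a level
# t uniform on (0, r^{d-1} g(r)); the next radius is uniform on the interval
# {s : s^{d-1} g(s) > t}, located here by bisection around the profile's mode.
import numpy as np

def polar_slice_radii(g, d, r0, r_mode, n_steps, rng, r_max=50.0, tol=1e-10):
    h = lambda s: s ** (d - 1) * g(s)          # transformed radial profile
    rs = [r0]
    for _ in range(n_steps):
        t = rng.uniform(0.0, h(rs[-1]))
        lo, hi = 0.0, r_mode                   # left endpoint: h increasing here
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            lo, hi = (mid, hi) if h(mid) < t else (lo, mid)
        left = hi
        lo, hi = r_mode, r_max                 # right endpoint: h decreasing here
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            lo, hi = (lo, mid) if h(mid) < t else (mid, hi)
        right = lo
        rs.append(rng.uniform(left, right))
    return np.array(rs)

rng = np.random.default_rng(0)
d = 100
g = lambda s: np.exp(-0.5 * s ** 2)            # standard Gaussian radial profile (assumed)
radii = polar_slice_radii(g, d, r0=np.sqrt(d), r_mode=np.sqrt(d - 1.0),
                          n_steps=1000, rng=rng)
```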
A linearly implicit conservative difference scheme is applied to discretize the attractive coupled nonlinear Schr\"odinger equations with fractional Laplacian. The resulting complex symmetric linear systems have system matrices that are indefinite and Toeplitz-plus-diagonal, and neither an efficient preconditioned iteration method nor a fast direct method is available to deal with them. In this paper, we propose a novel matrix splitting iteration method based on a normal splitting of an equivalent real block form of the complex linear systems. This new iteration method converges unconditionally, and the quasi-optimal iteration parameter is deduced. The corresponding new preconditioner is obtained naturally; it can be constructed easily and implemented efficiently by the fast Fourier transform. Theoretical analysis indicates that the eigenvalues of the preconditioned system matrix are tightly clustered. Numerical experiments show that the new preconditioner significantly accelerates the convergence rate of Krylov subspace iteration methods. Specifically, the convergence behavior of the related preconditioned GMRES iteration method is independent of the spatial mesh size and almost insensitive to the fractional order. Moreover, the linearly implicit conservative difference scheme in conjunction with the preconditioned GMRES iteration method conserves the discrete mass and energy to a given precision.
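Since the efficiency rests on FFT-based implementation, the sketch below shows the generic ingredients for Toeplitz systems: an $O(n\log n)$ matrix-vector product via circulant embedding and a Strang circulant preconditioner, wired into SciPy's GMRES. The matrix, right-hand side, and preconditioner here are illustrative and are not the paper's normal-splitting construction.

```python
# Generic illustration, not the paper's normal-splitting preconditioner:
# FFT-based matvecs with a symmetric Toeplitz(-plus-diagonal) matrix via
# circulant embedding, and a Strang circulant preconditioner, used in GMRES.
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

n = 1024
c = 1.0 / (1.0 + np.arange(n)) ** 1.5            # assumed first column of the Toeplitz part
c[0] += 4.0                                      # diagonal shift ("plus-diagonal" part)

# embed T into a 2n x 2n circulant so that T @ x = (C @ [x, 0])[:n]
v = np.concatenate([c, [0.0], c[-1:0:-1]])
fv = np.fft.fft(v)
def matvec(x):
    y = np.fft.ifft(fv * np.fft.fft(np.concatenate([x, np.zeros(n)])))
    return y[:n].real

# Strang circulant preconditioner: copy the central diagonals of T
s = c.copy()
s[n // 2 + 1:] = c[1:n // 2][::-1]
fs = np.fft.fft(s)

A = LinearOperator((n, n), matvec=matvec)
M = LinearOperator((n, n), matvec=lambda x: np.fft.ifft(np.fft.fft(x) / fs).real)

b = np.ones(n)
x, info = gmres(A, b, M=M)
print(info, np.linalg.norm(matvec(x) - b))       # 0 means convergence; small residual
```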
Over the last decade, approximating functions in infinite dimensions from samples has gained increasing attention in computational science and engineering, especially in computational uncertainty quantification. This is primarily due to the relevance of functions that are solutions to parametric differential equations in various fields, e.g. chemistry, economics, engineering, and physics. While acquiring accurate and reliable approximations of such functions is inherently difficult, current benchmark methods exploit the fact that such functions often belong to certain classes of holomorphic functions to obtain algebraic convergence rates in infinite dimensions with respect to the number of (potentially adaptive) samples $m$. Our work focuses on providing theoretical approximation guarantees for the class of $(\boldsymbol{b},\varepsilon)$-holomorphic functions, demonstrating that these algebraic rates are the best possible for Banach-valued functions in infinite dimensions. We establish lower bounds using a reduction to a discrete problem in combination with the theory of $m$-widths, Gelfand widths and Kolmogorov widths. We study two cases, known and unknown anisotropy, in which the relative importance of the variables is known and unknown, respectively. A key conclusion of our paper is that in the latter setting, approximation from finite samples is impossible without some inherent ordering of the variables, even if the samples are chosen adaptively. Finally, in both cases, we demonstrate near-optimal, non-adaptive (random) sampling and recovery strategies which achieve rates close to the lower bounds.
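As a drastically simplified finite-dimensional analogue of such non-adaptive random sampling and least-squares recovery (not the paper's infinite-dimensional, Banach-valued setting), the following sketch fits a Chebyshev expansion to random samples of a smooth univariate function; the target function, sample size, and basis dimension are assumptions made only for this example.

```python
# Simplified one-dimensional analogue: non-adaptive recovery of a smooth
# function from m random samples by least squares in a Chebyshev basis.
import numpy as np

rng = np.random.default_rng(0)
f = lambda x: 1.0 / (1.0 + 25.0 * x ** 2)      # assumed target (Runge function)
m, s = 200, 20                                  # number of samples and basis size, m > s
x = rng.uniform(-1.0, 1.0, m)                   # non-adaptive random samples
A = np.polynomial.chebyshev.chebvander(x, s - 1)
coef, *_ = np.linalg.lstsq(A, f(x), rcond=None)

xx = np.linspace(-1.0, 1.0, 2001)
err = np.max(np.abs(np.polynomial.chebyshev.chebval(xx, coef) - f(xx)))
print(f"sup-norm error: {err:.2e}")
```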