We study the sharp interface limit of the stochastic Cahn-Hilliard equation with cubic double-well potential and additive space-time white noise $\epsilon^{\sigma}\dot{W}$, where $\epsilon>0$ is an interfacial width parameter. We prove that, for a sufficiently large scaling constant $\sigma >0$, the stochastic Cahn-Hilliard equation converges to the deterministic Mullins-Sekerka/Hele-Shaw problem as $\epsilon\rightarrow 0$. The convergence is shown in suitable fractional Sobolev norms as well as in the $L^p$-norm for $p\in (2, 4]$ in spatial dimension $d=2,3$. This generalizes the existing result for space-time white noise to dimension $d=3$ and improves the existing results for smooth noise, which were so far limited to $p\in \left(2, \frac{2d+8}{d+2}\right]$ in spatial dimension $d=2,3$. As a byproduct of the analysis of the stochastic problem with space-time white noise, we identify minimal regularity requirements on the noise under which convergence to the sharp interface limit holds in the $\mathbb{H}^1$-norm, and we also provide improved convergence estimates for the sharp interface limit of the deterministic problem.
Grover's algorithm can solve NP-complete problems on quantum computers faster than all known algorithms on classical computers. However, Grover's algorithm still needs exponential time. By the BBBV theorem, Grover's algorithm is optimal for searching the domain of a function when the function is used as a black box. We analyze the NP-complete set \[\{ (\langle M \rangle, 1^n, 1^t ) \mid \text{ TM }M\text{ accepts an }x\in\{0,1\}^n\text{ within }t\text{ steps}\}.\] If $t$ is large enough, then $M$ accepts each word in $L(M)$ of length $n$ within $t$ steps. Hence, one can use methods from computability theory to show that black-box searching is the fastest way to find a solution. Therefore, Grover's algorithm is optimal for NP-complete problems.
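As a rough classical illustration (not part of the paper's argument), the amplitude dynamics behind Grover's quadratic speedup can be simulated directly with a statevector; the search-space size, the marked index, and the iteration count below are arbitrary choices:

```python
import numpy as np

def grover_search(n_qubits, marked, n_iters=None):
    """Classically simulate Grover's algorithm with one marked item."""
    N = 2 ** n_qubits
    # Start in the uniform superposition over all N basis states.
    state = np.full(N, 1.0 / np.sqrt(N))
    if n_iters is None:
        # Optimal iteration count is about (pi/4) * sqrt(N).
        n_iters = int(np.floor(np.pi / 4 * np.sqrt(N)))
    for _ in range(n_iters):
        # Oracle: flip the sign of the marked amplitude.
        state[marked] *= -1.0
        # Diffusion operator: inversion about the mean amplitude.
        state = 2.0 * state.mean() - state
    return state

state = grover_search(6, marked=13)   # N = 64, ~6 Grover iterations
prob = abs(state[13]) ** 2
print(f"success probability after O(sqrt(N)) steps: {prob:.3f}")
```

After only $\lfloor\frac{\pi}{4}\sqrt{N}\rfloor$ iterations the marked item is measured with probability close to one, whereas a classical black-box search needs $\Theta(N)$ queries on average.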
Methods and algorithms that work with data on nonlinear manifolds are collectively summarised under the term `Riemannian computing'. In practice, curvature can be a key limiting factor for the performance of Riemannian computing methods. Yet, curvature can also be a powerful tool in the theoretical analysis of Riemannian algorithms. In this work, we investigate the sectional curvature of the Stiefel and Grassmann manifolds from the quotient space viewpoint. On the Grassmannian, tight curvature bounds have been known since the late 1960s. On the Stiefel manifold under the canonical metric, it was believed that the sectional curvature does not exceed 5/4. For both of these manifolds, the sectional curvature is given by the Frobenius norm of certain structured commutator brackets of skew-symmetric matrices. We provide refined inequalities for such terms and pay special attention to the maximizers of the curvature bounds. In this way, we prove that the global bound of 5/4 for the Stiefel manifold does indeed hold. With this addition, a complete account of the curvature bounds in all admissible dimensions is obtained. We observe that `high curvature means low rank'; more precisely, for the Stiefel and Grassmann manifolds, the global curvature maximum is attained at tangent plane sections that are spanned by rank-two matrices. Numerical examples are included for illustration purposes.
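For a flavor of the commutator terms in question, the following sketch empirically probes the classical Böttcher-Wenzel inequality $\|[A,B]\|_F \le \sqrt{2}\,\|A\|_F\,\|B\|_F$ on random skew-symmetric matrices (the paper's refined, structure-dependent inequalities are sharper; the matrix size and sample count here are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

def rand_skew(n, rng):
    # Random skew-symmetric matrix: A^T = -A.
    M = rng.standard_normal((n, n))
    return (M - M.T) / 2.0

# Empirical check of ||[A, B]||_F <= sqrt(2) ||A||_F ||B||_F
# (Boettcher-Wenzel bound) on skew-symmetric samples.
max_ratio = 0.0
for _ in range(1000):
    A, B = rand_skew(5, rng), rand_skew(5, rng)
    comm = A @ B - B @ A
    ratio = np.linalg.norm(comm) / (np.linalg.norm(A) * np.linalg.norm(B))
    max_ratio = max(max_ratio, ratio)

print(f"largest sampled ratio ||[A,B]||_F / (||A||_F ||B||_F): {max_ratio:.3f}")
```

Sectional curvature bounds on these manifolds reduce to sharp versions of exactly this kind of estimate, restricted to the structured skew-symmetric blocks arising from the quotient geometry.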
In this paper, we propose an RADI-type method for large-scale stochastic continuous-time algebraic Riccati equations with sparse and low-rank structures. The so-called ISC method is developed by combining an Incorporation idea with different Shifts to accelerate the convergence and with Compressions to reduce the storage and complexity. Numerical experiments are given to show its efficiency.
Of all the possible projection methods for solving large-scale Lyapunov matrix equations, Galerkin approaches remain much more popular than minimal-residual ones. This is mainly due to the different nature of the projected problems stemming from these two families of methods. While a Galerkin approach leads to the solution of a low-dimensional matrix equation per iteration, a minimal-residual setting requires the solution of a matrix least-squares problem per iteration. The significant computational cost of these least-squares problems has steered researchers towards Galerkin methods in spite of the appealing properties of minimal-residual schemes. In this paper we introduce a framework that modifies the Galerkin approach by low-rank, additive corrections to the projected matrix equation, with the twofold goal of attaining monotonic convergence rates similar to those of minimal-residual schemes while maintaining essentially the same computational cost as the original Galerkin method. We analyze the well-posedness of our framework and determine possible scenarios where we expect the residual norm attained by two low-rank-modified variants to behave similarly to the one computed by a minimal-residual technique. A panel of diverse numerical examples shows the behavior and potential of the new approach.
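The baseline being modified can be made concrete with a minimal Galerkin sketch (a generic textbook Krylov projection for $AX + XA^T + bb^T = 0$, not the low-rank-modified variants proposed in the paper; the test matrix, Krylov dimension, and seed are arbitrary):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

rng = np.random.default_rng(1)
n, m = 200, 30

# Stable test matrix A and a rank-one right-hand side b b^T.
A = -np.diag(np.linspace(1.0, 10.0, n)) + 0.1 * rng.standard_normal((n, n)) / np.sqrt(n)
b = rng.standard_normal((n, 1))

# Orthonormal basis of the Krylov subspace span{b, Ab, ..., A^{m-1} b},
# built by Arnoldi with modified Gram-Schmidt.
V = np.zeros((n, m))
V[:, 0] = b[:, 0] / np.linalg.norm(b)
for j in range(1, m):
    w = A @ V[:, j - 1]
    for i in range(j):
        w -= (V[:, i] @ w) * V[:, i]
    V[:, j] = w / np.linalg.norm(w)

# Galerkin step: solve the small projected Lyapunov equation
#   (V^T A V) Y + Y (V^T A V)^T + (V^T b)(V^T b)^T = 0
# and lift the solution back as X = V Y V^T.
H = V.T @ A @ V
c = V.T @ b
Y = solve_continuous_lyapunov(H, -c @ c.T)
X = V @ Y @ V.T

res = np.linalg.norm(A @ X + X @ A.T + b @ b.T) / np.linalg.norm(b @ b.T)
print(f"relative residual of the Galerkin solution: {res:.2e}")
```

The per-iteration work is dominated by the small $m\times m$ Lyapunov solve; a minimal-residual variant would instead solve a matrix least-squares problem at each step, which is the cost the paper's low-rank corrections aim to avoid.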
In this paper, we consider the task of efficiently computing the numerical solution of evolutionary complex Ginzburg--Landau equations. To this aim, we employ high-order exponential methods of splitting and Lawson type for the time integration. These schemes enjoy favorable stability properties and, in particular, do not suffer restrictions on the time step size due to the underlying stiffness of the models. The needed actions of matrix exponentials are efficiently realized with pointwise operations in Fourier space (when the model is considered with periodic boundary conditions) or by using a tensor-oriented approach that suitably employs the so-called $\mu$-mode products (when the semidiscretization in space is performed with finite differences). The overall effectiveness of the approach is demonstrated by running simulations on a variety of two- and three-dimensional (systems of) complex Ginzburg--Landau equations with cubic and cubic-quintic nonlinearities, which are widely considered in the literature to model relevant physical phenomena. Indeed, in all instances the high-order exponential-type schemes outperform standard techniques for integrating the models in time, namely the well-known split-step method and the explicit fourth-order Runge--Kutta integrator.
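A minimal one-dimensional sketch of the Fourier-based machinery (plain second-order Strang splitting rather than the paper's high-order schemes, with arbitrary parameters) looks as follows; both substeps of the cubic equation $u_t = (1+i\alpha)u_{xx} + u - (1+i\beta)|u|^2u$ are advanced exactly, the linear one pointwise in Fourier space:

```python
import numpy as np

# Strang splitting for the 1D cubic complex Ginzburg-Landau equation
# with periodic boundary conditions (parameters chosen arbitrarily).
alpha, beta = 1.0, -1.3
N, L = 256, 100.0
x = L * np.arange(N) / N
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)

def nonlinear_step(u, t):
    # Exact flow of u' = u - (1+i*beta)|u|^2 u:
    #   u(t) = u0 * e^t * (1 + |u0|^2 (e^{2t} - 1))^{-(1+i*beta)/2}
    g = 1.0 + np.abs(u) ** 2 * (np.exp(2 * t) - 1.0)
    return u * np.exp(t) * g ** (-(1 + 1j * beta) / 2)

def linear_step(u, t):
    # Exact flow of u_t = (1+i*alpha) u_xx, diagonal in Fourier space.
    return np.fft.ifft(np.exp(-(1 + 1j * alpha) * k ** 2 * t) * np.fft.fft(u))

def strang_step(u, dt):
    u = nonlinear_step(u, dt / 2)
    u = linear_step(u, dt)
    return nonlinear_step(u, dt / 2)

u = np.exp(-((x - L / 2) ** 2) / 10) * (1 + 0j)   # smooth initial pulse
dt = 0.05
for _ in range(200):
    u = strang_step(u, dt)
print(f"max |u| at t = 10: {np.max(np.abs(u)):.3f}")
```

Because the stiff linear part is applied exactly through the exponential in Fourier space, the step size is not constrained by the $k^2$ stiffness of the diffusion term, which is the stability property exploited by the higher-order splitting and Lawson schemes.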
Deep learning methods have come to be employed for solving physical systems governed by parametric partial differential equations (PDEs), owing to the availability of massive scientific data. This line of work has been refined into operator learning, which focuses on learning nonlinear mappings between infinite-dimensional function spaces, offering an interface from observations to solutions. However, state-of-the-art neural operators are limited to constant, uniform discretizations, and therefore generalize poorly to arbitrary discretization schemes for the computational domain. In this work, we propose a novel operator learning algorithm, referred to as the Dynamic Gaussian Graph Operator (DGGO), which extends neural operators to learning parametric PDEs in arbitrarily discretized mechanics problems. The Dynamic Gaussian Graph (DGG) kernel learns to map observation vectors defined in general Euclidean space to metric vectors defined in a high-dimensional uniform metric space. The DGG integral kernel is parameterized by a Gaussian-kernel-weighted Riemann sum approximation and uses a dynamic message-passing graph to depict the interrelations within the integral term. A Fourier neural operator is selected to localize the metric vectors in the spatial and frequency domains. The metric vectors are regarded as lying on a latent uniform domain, wherein the spatial and spectral transformations offer highly regular constraints on the solution space. The efficiency and robustness of DGGO are validated by applying it to arbitrarily discretized mechanics problems in comparison with mainstream neural operators. Ablation experiments demonstrate the effectiveness of the spatial transformation in the DGG kernel. As an engineering application, the proposed method is used to forecast the stress field of a hyperelastic material with a geometrically variable void.
This paper develops a two-stage stochastic model to investigate the evolution of random fields on the unit sphere $\bS^2$ in $\R^3$. The model is defined by a time-fractional stochastic diffusion equation on $\bS^2$ governed by a diffusion operator with the time-fractional derivative defined in the Riemann-Liouville sense. In the first stage, the model is characterized by a homogeneous problem with an isotropic Gaussian random field on $\bS^2$ as an initial condition. In the second stage, the model becomes an inhomogeneous problem driven by a time-delayed Brownian motion on $\bS^2$. The solution to the model is given in the form of an expansion in terms of complex spherical harmonics. An approximation to the solution is obtained by truncating the expansion of the solution at degree $L\geq1$. The rates of convergence of the truncation errors as a function of $L$ and of the mean square errors as a function of time are also derived. It is shown that the convergence rates depend not only on the decay of the angular power spectrum of the driving noise and the initial condition, but also on the order of the fractional derivative. We study sample properties of the stochastic solution and show that the solution is an isotropic H\"{o}lder continuous random field. Numerical examples and simulations inspired by the cosmic microwave background (CMB) are given to illustrate the theoretical findings.
This study explores the sample complexity for two-layer neural networks to learn a generalized linear target function under Stochastic Gradient Descent (SGD), focusing on the challenging regime where many flat directions are present at initialization. It is well established that in this scenario $n=O(d \log d)$ samples are typically needed. However, we provide precise results concerning the pre-factors in high-dimensional contexts and for varying widths. Notably, our findings suggest that overparameterization can enhance convergence only by a constant factor within this problem class. These insights are grounded in the reduction of the SGD dynamics to a stochastic process in lower dimensions, where escaping mediocrity equates to calculating an exit time. Yet, we demonstrate that a deterministic approximation of this process adequately represents the escape time, implying that the role of stochasticity may be minimal in this scenario.
The acoustic wave equation is a partial differential equation (PDE) that describes the propagation of acoustic waves through a material. In general, the solution to this PDE is nonunique. Therefore, it is necessary to impose initial conditions in the form of Cauchy conditions to obtain a unique solution. Theoretically, solving the wave equation is equivalent to representing the wavefield in terms of a radiation source which possesses finite energy over space and time. The radiation source is represented by a forcing term on the right-hand side of the wave equation. In practice, the source may be represented in terms of the normal derivative of pressure or the normal velocity over a surface. The pressure wavefield is then calculated by solving an associated boundary-value problem by imposing conditions on the boundary of a chosen solution space. From an analytic point of view, this manuscript aims to review typical approaches for obtaining a unique solution to the acoustic wave equation in terms of either a volumetric radiation source, or a surface source given by the normal derivative of pressure or the normal velocity. A numerical approximation of the derived formulae is then explained. The key step in numerically approximating the derived analytic formulae is the inclusion of the source, which is studied carefully in this manuscript.
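To sketch the numerical side (a generic one-dimensional finite-difference example, not the manuscript's derived formulae; the grid, the Ricker wavelet, and the source position are arbitrary choices), a volumetric source enters as a forcing term scaled to approximate a spatial delta:

```python
import numpy as np

# Leapfrog finite differences for the forced 1D acoustic wave equation
#   p_tt = c^2 p_xx + s(t) delta(x - x_s), with zero Cauchy data.
c, L, N = 1.0, 1.0, 401
dx = L / (N - 1)
dt = 0.5 * dx / c          # CFL-stable time step
steps = 400
src_idx = N // 2           # source location x_s at the domain center

def ricker(t, f0=20.0, t0=0.05):
    # Ricker wavelet: a standard band-limited source time function.
    a = (np.pi * f0 * (t - t0)) ** 2
    return (1 - 2 * a) * np.exp(-a)

p_prev = np.zeros(N)
p = np.zeros(N)
for n in range(steps):
    lap = np.zeros(N)
    lap[1:-1] = (p[2:] - 2 * p[1:-1] + p[:-2]) / dx ** 2
    p_next = 2 * p - p_prev + dt ** 2 * (c ** 2 * lap)
    # Inject the forcing term; the 1/dx factor makes the discrete
    # point source approximate a spatial delta at x_s.
    p_next[src_idx] += dt ** 2 * ricker(n * dt) / dx
    p_prev, p = p, p_next

print(f"max |p| after {steps} steps: {np.max(np.abs(p)):.4f}")
```

With zero initial conditions, the entire wavefield is generated by the injected source term, mirroring the analytic representation of the wavefield in terms of a radiation source of finite energy.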
The theory of mixed finite element methods for solving different types of elliptic partial differential equations in saddle point formulation has been well established for many decades. This topic has mostly been studied for variational formulations defined upon the same product spaces for both the trial (shape) and test pairs of primal variable and multiplier. Whenever either these spaces or the two bilinear forms involving the multiplier are distinct, the saddle point problem is asymmetric. The three inf-sup conditions stipulated in work on the subject, which the product spaces must satisfy in order to guarantee well-posedness, are well known. However, the material encountered in the literature addressing the approximation of this class of problems leaves room for improvement and clarification. After a brief review of the existing contributions to the topic that justifies this assertion, in this paper we establish finer global error bounds for the pair primal variable-multiplier solving an asymmetric saddle point problem. Besides well-posedness, the three constants in the aforementioned inf-sup conditions are identified as all that is needed to determine the stability constant appearing therein, whose expression is exhibited. As a complement, refined error bounds depending only on these three constants are given for both unknowns separately.