An adaptive method for parabolic partial differential equations that combines sparse wavelet expansions in time with adaptive low-rank approximations in the spatial variables is constructed and analyzed. The method is shown to converge and to satisfy complexity bounds similar to those of existing adaptive low-rank methods for elliptic problems, establishing its suitability for parabolic problems on high-dimensional spatial domains. The construction also yields computable rigorous a posteriori error bounds for such problems. The results are illustrated by numerical experiments.
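Not the paper's algorithm, but as a minimal sketch of the recompression step that adaptive low-rank methods build on, one can truncate an SVD to the smallest rank meeting a prescribed tolerance (all names below are illustrative):

```python
import numpy as np

def truncate_low_rank(A, tol):
    """Recompress A to the smallest rank r with ||A - A_r||_F <= tol;
    this truncation is the basic building block of adaptive low-rank
    solvers."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    err = np.sqrt(np.cumsum(s[::-1] ** 2))[::-1]  # err[r] = ||A - A_r||_F
    r = int(np.count_nonzero(err > tol))          # smallest admissible rank
    return U[:, :r] * s[:r], Vt[:r]

# Example: rank-adaptive approximation of a smooth kernel matrix.
x = np.linspace(0.0, 1.0, 200)
A = np.exp(-np.abs(x[:, None] - x[None, :]))
B, C = truncate_low_rank(A, tol=1e-6)             # A is approximately B @ C
```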
The threshold theorem is a fundamental result in the theory of fault-tolerant quantum computation stating that arbitrarily long quantum computations can be performed with a polylogarithmic overhead provided the noise level is below a constant threshold. A recent work by Fawzi, Grospellier and Leverrier (FOCS 2018), building on a result by Gottesman (QIC 2013), has shown that the space overhead can be asymptotically reduced to a constant independent of the circuit provided we only consider circuits with a length bounded by a polynomial in the width. In this work, using a minimal model for quantum fault tolerance, we establish a general lower bound on the space overhead required to achieve fault tolerance. For any non-unitary qubit channel $\mathcal{N}$ and any quantum fault-tolerance scheme against $\mathrm{i.i.d.}$ noise modeled by $\mathcal{N}$, we prove a lower bound of $\max\left\{\mathrm{Q}(\mathcal{N})^{-1}n,\alpha_\mathcal{N} \log T\right\}$ on the number of physical qubits, for circuits of length $T$ and width $n$. Here, $\mathrm{Q}(\mathcal{N})$ denotes the quantum capacity of $\mathcal{N}$ and $\alpha_\mathcal{N}>0$ is a constant depending only on the channel $\mathcal{N}$. In our model, we allow qubits to be replaced by fresh ones during the execution of the circuit, and we allow classical computation to be free and perfect. This improves upon results that assumed classical computation to also be affected by noise, and that sometimes did not allow for fresh qubits to be added. Along the way, we prove an exponential upper bound on the maximal length of fault-tolerant quantum computation with amplitude damping noise, resolving a conjecture by Ben-Or, Gottesman, and Hassidim (2013).
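To make the first term of the bound concrete: for amplitude damping noise with parameter $\gamma \le 1/2$ the quantum capacity has a known closed variational form (the channel is degradable), so $\mathrm{Q}(\mathcal{N})^{-1}n$ can be evaluated numerically. A sketch with hypothetical values of the width $n$ and noise rate $\gamma$ (the constant $\alpha_\mathcal{N}$ is not explicit here):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def h2(x):
    """Binary entropy in bits."""
    if x <= 0.0 or x >= 1.0:
        return 0.0
    return -x * np.log2(x) - (1.0 - x) * np.log2(1.0 - x)

def q_amp_damp(gamma):
    """Quantum capacity of the amplitude damping channel:
    max_p h2((1-gamma) p) - h2(gamma p), which vanishes for
    gamma >= 1/2."""
    if gamma >= 0.5:
        return 0.0
    res = minimize_scalar(lambda p: -(h2((1.0 - gamma) * p) - h2(gamma * p)),
                          bounds=(0.0, 1.0), method="bounded")
    return -res.fun

n, gamma = 1000, 0.1              # hypothetical circuit width and noise rate
print(n / q_amp_damp(gamma))      # the Q(N)^{-1} n term of the lower bound
```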
In the present work, a new methodology is proposed for building surrogate parametric models of engineering systems based on modular assembly of pre-solved modules. Each module is a generic parametric solution accounting for parametric geometry, material and boundary conditions. By assembling these modules and satisfying continuity constraints at the interfaces, a parametric surrogate model of the full problem can be obtained. Here, the Proper Generalized Decomposition (PGD) technique in connection with a NURBS geometry representation is used to create a parametric model for each module. In this technique, the NURBS objects make it possible to map the governing boundary value problem from a parametric non-regular domain onto a regular reference domain, and the PGD is used to create a reduced model in the reference domain. In the assembly stage, an optimization problem is solved to satisfy the continuity constraints at the interfaces. The proposed procedure is based on the offline--online paradigm: the offline stage consists of creating multiple pre-solved modules, which can afterwards be assembled in almost real time during the online stage, enabling quick evaluations of the full system response. To show the potential of the proposed approach, some numerical examples in heat conduction and structural plates under bending are presented.
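As a hedged toy illustration of the offline--online idea, assume each pre-solved module reduces to a parametric interface response (here, the heat flux of a 1D conduction rod as a function of its end temperatures); the online assembly then solves a small optimization problem enforcing flux continuity at the interface. All functions and parameters below are illustrative stand-ins for the actual PGD modules:

```python
import numpy as np
from scipy.optimize import least_squares

def module_flux(k, length, t_left, t_right):
    """Hypothetical 'pre-solved module': for a 1D conduction rod, the
    offline parametric solution reduces to the heat flux through the
    module as a function of its end temperatures and conductivity k."""
    return k * (t_left - t_right) / length

# Online assembly: two modules in series with outer temperatures fixed;
# find the interface temperature that makes the flux continuous.
T0, T2 = 100.0, 20.0
k1, L1, k2, L2 = 2.0, 0.5, 1.0, 0.3

def continuity_residual(ti):
    return [module_flux(k1, L1, T0, ti[0]) - module_flux(k2, L2, ti[0], T2)]

ti = least_squares(continuity_residual, x0=[50.0]).x[0]  # approx. 63.6
```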
It is known from the monograph [1, Chapter 5] that the weak convergence analysis of numerical schemes for stochastic Maxwell equations is an unsolved problem. This paper aims to fill this gap by establishing the long-time weak convergence analysis of the semi-implicit Euler scheme for stochastic Maxwell equations. By analyzing the regularity of the transformed Kolmogorov equation associated with the stochastic Maxwell equations and constructing a proper continuous adapted auxiliary process for the semi-implicit scheme, we present the long-time weak convergence analysis for this scheme and prove that the weak convergence order is one, which is twice the strong convergence order. As applications of this result, we obtain the convergence order of the numerical invariant measure, the strong law of large numbers and the central limit theorem for the numerical solution, and the error estimate of the multilevel Monte Carlo estimator. As far as we know, this is the first result on the weak convergence order for stochastic Maxwell equations.
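Schematically, and on a finite-dimensional stand-in rather than the actual Maxwell operator, a semi-implicit Euler step treats the drift implicitly and the noise increment explicitly, solving $(I - \tau A)X_{n+1} = X_n + \Delta W_n$ at each step. A minimal sketch under these assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
d, tau, steps = 4, 1e-2, 10_000
S = rng.standard_normal((d, d))
A = (S - S.T) - 0.1 * np.eye(d)   # skew part mimics the Maxwell group;
                                  # small damping adds dissipation
X = rng.standard_normal(d)
solve = np.linalg.inv(np.eye(d) - tau * A)   # (I - tau A) X_{n+1} = X_n + dW_n
for _ in range(steps):
    dW = np.sqrt(tau) * rng.standard_normal(d)
    X = solve @ (X + dW)
```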
Explicit time integration schemes coupled with Galerkin discretizations of time-dependent partial differential equations require solving a linear system with the mass matrix at each time step. For applications in structural dynamics, the solution of the linear system is frequently approximated through so-called mass lumping, which consists in replacing the mass matrix by some diagonal approximation. Mass lumping has been widely used in engineering practice for decades and is supported by a sound mathematical theory for finite element methods using the classical Lagrange basis. However, the theory for more general basis functions is still missing. Our paper partly addresses this shortcoming. We prove some special and practically relevant properties of lumped mass matrices and discuss how these properties naturally extend to banded and Kronecker product matrices, whose structure allows linear systems to be solved very efficiently. Our theoretical results are applied to isogeometric discretizations but are not restricted to them.
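As a baseline illustration (the paper's results concern more general bases and the banded and Kronecker-product structures mentioned above), the classical row-sum variant of mass lumping looks as follows:

```python
import numpy as np
import scipy.sparse as sp

def lump_mass(M):
    """Row-sum lumping: replace the consistent mass matrix by the
    diagonal matrix of its row sums; M_lumped @ ones equals M @ ones,
    so the total mass is preserved."""
    return sp.diags(np.asarray(M.sum(axis=1)).ravel())

# 1D linear elements on a uniform mesh: consistent mass h/6 * tridiag(1,4,1).
n, h = 10, 0.1
M = sp.diags([1.0, 4.0, 1.0], [-1, 0, 1], shape=(n, n)) * (h / 6.0)
Ml = lump_mass(M)   # diagonal, so each explicit step costs only a division
```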
It is well known that the quasi-optimality of the Galerkin finite element method for the Helmholtz equation depends on the mesh size and the wavenumber. In the literature, different criteria have been proposed to ensure quasi-optimality. These criteria are often difficult to verify and depend on wavenumber-explicit regularity estimates. In the present work, we focus on criteria based on T-coercivity and weak T-coercivity, which highlight the dependence of the mesh size on the gap between the square of the wavenumber and the Laplace eigenvalues. We also propose an adaptive scheme, coupled with a residual-based indicator, for optimal mesh generation with minimal degrees of freedom.
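The adaptive scheme alluded to above follows the usual solve-estimate-mark-refine cycle. As one standard choice of marking step (a sketch, not necessarily the paper's strategy), Dörfler marking selects a minimal set of elements carrying a fixed fraction of the estimated error:

```python
import numpy as np

def dorfler_mark(eta, theta=0.5):
    """Given nonnegative element-wise error indicators eta, mark a
    minimal set of elements whose squared indicators account for a
    fraction theta of the total estimated error (Doerfler marking)."""
    order = np.argsort(eta)[::-1]            # largest indicators first
    csum = np.cumsum(eta[order] ** 2)
    m = int(np.searchsorted(csum, theta * csum[-1])) + 1
    return order[:m]                         # indices of elements to refine
```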
In this work, we derive a $\gamma$-robust a posteriori error estimator for finite element approximations of the Allen-Cahn equation with variable non-degenerate mobility. The estimator utilizes spectral estimates for the linearized steady part of the differential operator as well as a conditional stability estimate derived from a weighted sum of Bregman distances associated with the energy and a functional related to the mobility. A suitable reconstruction of the numerical solution in the stability estimate leads to a fully computable estimator.
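For reference, the Bregman distance of a differentiable convex functional $F$, instances of which (for the energy and the mobility-related functional) enter the weighted sum in the stability estimate, is \[ D_F(u,v) \;=\; F(u) - F(v) - \langle F'(v),\, u - v \rangle \;\ge\; 0. \]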
The non-linear collision-induced breakage equation has significant applications in particulate processes. Two semi-analytical techniques, namely the homotopy analysis method (HAM) and the accelerated homotopy perturbation method (AHPM), are investigated along with the well-known finite volume method (FVM) to capture the dynamical behavior of the non-linear system, i.e., the concentration function, the total number and the total mass of the particles in the system. The theoretical convergence analyses of the series solutions of HAM and AHPM are discussed. In addition, error estimates for the truncated solutions of both methods provide maximum absolute error bounds. To justify the applicability and accuracy of these methods, numerical simulations are compared with the findings of FVM and with analytical solutions for three physical problems.
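For orientation, a commonly used form of the collision-induced breakage equation for the concentration $c(x,t)$ of particles of size $x$ reads (notation may differ from the paper's) \[ \frac{\partial c(x,t)}{\partial t} = \int_0^\infty\!\!\int_x^\infty K(y,z)\, b(x \mid y;z)\, c(y,t)\, c(z,t)\, \mathrm{d}y\, \mathrm{d}z \;-\; \int_0^\infty K(x,y)\, c(x,t)\, c(y,t)\, \mathrm{d}y, \] where $K$ is the collision kernel and $b(x \mid y;z)$ is the distribution of fragments of size $x$ produced from a particle of size $y$ colliding with one of size $z$.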
Solutions to many important partial differential equations satisfy bounds constraints, but approximations computed by finite element or finite difference methods typically fail to respect the same conditions. Chang and Nakshatrala enforce such bounds in finite element methods through the solution of variational inequalities rather than linear variational problems. Here, we provide a theoretical justification for this method, including higher-order discretizations. We prove an abstract best approximation result for the linear variational inequality and estimates showing that bounds-constrained polynomials provide comparable approximation power to standard spaces. For any unconstrained approximation to a function, there exists a constrained approximation which is comparable in the $W^{1,p}$ norm. In practice, one cannot efficiently represent and manipulate the entire family of bounds-constrained polynomials, but applying bounds constraints to the coefficients of a polynomial in the Bernstein basis guarantees those constraints on the polynomial. Although our theoretical results do not guarantee high accuracy for this subset of bounds-constrained polynomials, numerical results indicate optimal orders of accuracy for smooth solutions and sharp resolution of features in convection-diffusion problems, all subject to bounds constraints.
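The coefficient-clipping argument is short enough to state in code: since the Bernstein basis functions are nonnegative and sum to one, the polynomial value is a convex combination of its coefficients, so clipping the coefficients into $[0,1]$ forces the polynomial into $[0,1]$. A small sketch (illustrative, not the paper's implementation):

```python
import numpy as np

def bernstein_eval(coeffs, x):
    """Evaluate p(x) = sum_k c_k B_{k,n}(x) on [0, 1] by de Casteljau."""
    b = np.array(coeffs, dtype=float)
    for _ in range(len(b) - 1):
        b = (1.0 - x) * b[:-1] + x * b[1:]
    return b[0]

# Clipping the Bernstein coefficients into [0, 1] guarantees the
# polynomial itself stays in [0, 1], as p(x) is a convex combination
# of the coefficients.
coeffs = np.array([-0.2, 0.4, 1.3, 0.9])
p = np.clip(coeffs, 0.0, 1.0)
assert 0.0 <= bernstein_eval(p, 0.37) <= 1.0
```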
A preconditioning strategy is proposed for the iterative solution of large numbers of linear systems with variable matrix and right-hand side which arise during the computation of solution statistics of stochastic elliptic partial differential equations with random variable coefficients sampled by Monte Carlo. Assuming that a truncated Karhunen-Lo\`{e}ve expansion of a known transform of the random variable coefficient is available, we introduce a compact representation of the random coefficient in the form of a Voronoi quantizer. The number of Voronoi cells, each of which is represented by a centroidal variable coefficient, is set to the prescribed number $P$ of preconditioners. Upon sampling the random variable coefficient, the linear system assembled with a given realization of the coefficient is solved with the preconditioner whose centroidal variable coefficient is the closest to the realization. We consider different ways to define and obtain the centroidal variable coefficients, and we investigate the properties of the induced preconditioning strategies in terms of the average number of solver iterations for sequential simulations, and of load balancing for parallel simulations. Another approach, based on deterministic grids in the stochastic coordinates of the truncated representation of the random variable coefficient, is proposed; its stochastic dimension increases with the number $P$ of preconditioners. This approach bypasses the need for preliminary computations to determine the optimal stochastic dimension of the truncated approximation of the random variable coefficient for a given number of preconditioners.
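The online selection step admits a very small sketch: given the truncated KL coordinates of a sampled coefficient and the $P$ centroids (all names and the centroid construction below are illustrative), each realization is routed to the preconditioner of its Voronoi cell:

```python
import numpy as np

def select_preconditioner(xi, centroids):
    """Return the index of the Voronoi cell containing the sampled KL
    coordinates xi, i.e. which of the P precomputed preconditioners to
    apply to the linear system assembled from this realization."""
    return int(np.argmin(np.sum((centroids - xi) ** 2, axis=1)))

# Offline (hypothetical): P centroids in the truncated KL coordinates,
# e.g. from clustering coefficient samples, one preconditioner each.
P, dim = 8, 5
rng = np.random.default_rng(1)
centroids = rng.standard_normal((P, dim))
# Online, per Monte Carlo sample: pick and apply the nearest one.
xi = rng.standard_normal(dim)
idx = select_preconditioner(xi, centroids)
```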
We consider a statistical model for symmetric matrix factorization with additive Gaussian noise in the high-dimensional regime where the rank $M$ of the signal matrix to be inferred scales with its size $N$ as $M = o(N^{1/10})$. Allowing for an $N$-dependent rank offers new challenges and requires new methods. Working in the Bayesian-optimal setting, we show that whenever the signal has i.i.d. entries the limiting mutual information between signal and data is given by a variational formula involving a rank-one replica symmetric potential. In other words, from the information-theoretic perspective, the case of a (slowly) growing rank is the same as when $M = 1$ (namely, the standard spiked Wigner model). The proof is primarily based on a novel multiscale cavity method allowing for growing rank, along with some information-theoretic identities on worst noise for the Gaussian vector channel. We believe that the cavity method developed here will play a role in the analysis of a broader class of inference and spin models where the degrees of freedom are large arrays instead of vectors.
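For concreteness, a sketch of the observation model under one common convention (a Rademacher prior and an SNR parameter; scalings vary across papers):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 2000
M = max(1, int(N ** 0.09))     # rank growing slowly, within M = o(N^{1/10})
snr = 2.0
X = rng.choice([-1.0, 1.0], size=(N, M))            # i.i.d.-entry signal
G = rng.standard_normal((N, N))
Z = (G + G.T) / np.sqrt(2.0)                        # GOE-like noise
Y = np.sqrt(snr / N) * (X @ X.T) + Z                # observed data matrix
```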