We prove sharp bounds on certain impedance-to-impedance maps (and their compositions) for the Helmholtz equation with large wavenumber (i.e., in the high-frequency regime) using semiclassical defect measures. The paper [GGGLS] (Gong-Gander-Graham-Lafontaine-Spence, 2022) recently showed that the behaviour of these impedance-to-impedance maps (and their compositions) dictates the convergence of the parallel overlapping Schwarz domain-decomposition method with impedance boundary conditions on the subdomain boundaries. For a model decomposition with two subdomains and sufficiently large overlap, the results of this paper combined with those in [GGGLS] show that the parallel Schwarz method is power contractive, independent of the wavenumber. For strip-type decompositions with many subdomains, the results of this paper show that the composite impedance-to-impedance maps, in general, behave "badly" with respect to the wavenumber; nevertheless, by proving results about the composite maps applied to a restricted class of data, we give insight into the wavenumber-robustness of the parallel Schwarz method observed in the numerical experiments in [GGGLS].
The ParaDiag family of algorithms solves differential equations by using preconditioners that can be inverted in parallel through diagonalization. In the context of optimal control of linear parabolic PDEs, the state-of-the-art ParaDiag method is limited to solving self-adjoint problems with a tracking objective. We propose three improvements to the ParaDiag method: the use of alpha-circulant matrices to construct an alternative preconditioner, a generalization of the algorithm for solving non-self-adjoint equations, and the formulation of an algorithm for terminal-cost objectives. We present novel analytic results about the eigenvalues of the preconditioned systems for all discussed ParaDiag algorithms in the case of self-adjoint equations, which prove the favorable properties of the alpha-circulant preconditioner. We use these results to perform a theoretical parallel-scaling analysis of ParaDiag for self-adjoint problems. Numerical tests confirm our findings and suggest that the self-adjoint behavior, which is backed by theory, generalizes to the non-self-adjoint case. We provide a sequential, open-source reference solver in Matlab for all discussed algorithms.
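The key structural fact behind the alpha-circulant preconditioner is that an alpha-circulant matrix is diagonalized by a Fourier transform scaled by powers of alpha, so applying its inverse costs only FFTs and a diagonal solve. Below is a minimal Python/NumPy sketch of this property; the function names and the dense reference constructor are illustrative and are not taken from the paper's Matlab solver.

```python
import numpy as np

def alpha_circulant(c, alpha):
    """Dense reference: C[i, j] = c[(i - j) % n], with the strictly
    upper triangle (i < j) additionally multiplied by alpha."""
    n = len(c)
    return np.array([[c[(i - j) % n] * (alpha if i < j else 1.0)
                      for j in range(n)] for i in range(n)])

def solve_alpha_circulant(c, alpha, b):
    """Solve C_alpha x = b in O(n log n): with G = diag(alpha^(j/n)),
    G C_alpha G^{-1} is an ordinary circulant with first column G c,
    so the system decouples in Fourier space."""
    n = len(c)
    g = alpha ** (np.arange(n) / n)
    d = np.fft.fft(g * c)                  # eigenvalues of G C_alpha G^{-1}
    return np.fft.ifft(np.fft.fft(g * b) / d) / g

# sanity check against a dense solve
rng = np.random.default_rng(0)
n, alpha = 16, 0.1
c, b = rng.standard_normal(n), rng.standard_normal(n)
x = solve_alpha_circulant(c, alpha, b)
assert np.allclose(alpha_circulant(c, alpha) @ x, b)
```

In a ParaDiag-type method this solve is applied blockwise in time, with the scalar divisions replaced by a spatial solve for each Fourier mode, which is what enables the parallelism across modes.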
The threshold theorem is a fundamental result in the theory of fault-tolerant quantum computation stating that arbitrarily long quantum computations can be performed with a polylogarithmic overhead provided the noise level is below a constant threshold. A recent work by Fawzi, Grospellier and Leverrier (FOCS 2018), building on a result by Gottesman (QIC 2013), has shown that the space overhead can be asymptotically reduced to a constant independent of the circuit provided we only consider circuits with a length bounded by a polynomial in the width. In this work, using a minimal model for quantum fault tolerance, we establish a general lower bound on the space overhead required to achieve fault tolerance. For any non-unitary qubit channel $\mathcal{N}$ and any quantum fault-tolerance scheme against $\mathrm{i.i.d.}$ noise modeled by $\mathcal{N}$, we prove a lower bound of $\max\left\{\mathrm{Q}(\mathcal{N})^{-1}n,\alpha_\mathcal{N} \log T\right\}$ on the number of physical qubits, for circuits of length $T$ and width $n$. Here, $\mathrm{Q}(\mathcal{N})$ denotes the quantum capacity of $\mathcal{N}$ and $\alpha_\mathcal{N}>0$ is a constant depending only on the channel $\mathcal{N}$. In our model, we allow for qubits to be replaced by fresh ones during the execution of the circuit, and we allow classical computation to be free and perfect. This improves upon results that assumed classical computation to also be affected by noise and that sometimes did not allow fresh qubits to be added. Along the way, we prove an exponential upper bound on the maximal length of fault-tolerant quantum computation with amplitude damping noise, resolving a conjecture by Ben-Or, Gottesman, and Hassidim (2013).
Explicit time integration schemes coupled with Galerkin discretizations of time-dependent partial differential equations require solving a linear system with the mass matrix at each time step. For applications in structural dynamics, the solution of the linear system is frequently approximated through so-called mass lumping, which consists in replacing the mass matrix by some diagonal approximation. Mass lumping has been widely used in engineering practice for decades and has a sound mathematical theory supporting it for finite element methods using the classical Lagrange basis. However, the theory for more general basis functions is still missing. Our paper partly addresses this shortcoming. Some special and practically relevant properties of lumped mass matrices are proved, and we discuss how these properties naturally extend to banded and Kronecker product matrices, whose structure allows linear systems to be solved very efficiently. Our theoretical results are applied to isogeometric discretizations but are not restricted to them.
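For the classical Lagrange basis the most common variant is row-sum lumping; since such bases form a partition of unity, each row sum equals the integral of the corresponding basis function and is therefore positive. A minimal illustration in Python, using a 1D linear-FEM mass matrix (not tied to the paper's isogeometric setting):

```python
import numpy as np

def row_sum_lump(M):
    """Row-sum lumping: replace the mass matrix M by the diagonal
    matrix of its row sums. For a partition-of-unity basis the row
    sums are the integrals of the basis functions, hence positive."""
    return M.sum(axis=1)

# 1D linear-FEM mass matrix on a uniform mesh of width h
# (interior rows are h/6 * [1, 4, 1], whose sum is h)
n, h = 5, 0.2
M = h / 6 * (4 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1))
d = row_sum_lump(M)
b = np.ones(n)
x = b / d   # "inverting" the lumped mass is an entrywise division
```

Banded and Kronecker-product approximations retain a similarly cheap solve (banded elimination, or a sequence of small solves along each factor), which is the structural property the paper exploits for efficient explicit time stepping.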
In this work, we derive a $\gamma$-robust a posteriori error estimator for finite element approximations of the Allen-Cahn equation with variable non-degenerate mobility. The estimator utilizes spectral estimates for the linearized steady part of the differential operator as well as a conditional stability estimate built from a weighted sum of Bregman distances associated with the energy and a functional related to the mobility. A suitable reconstruction of the numerical solution in the stability estimate leads to a fully computable estimator.
Backtracking linesearch is the de facto approach for minimizing continuously differentiable functions with locally Lipschitz gradient. In recent years, it has been shown that in the convex setting it is possible to avoid linesearch altogether and to allow the stepsize to adapt based on a local smoothness estimate, without any backtracks or evaluations of the function value. In this work we propose an adaptive proximal gradient method, adaPG, that uses novel estimates of the local smoothness modulus which lead to less conservative stepsize updates and that can additionally cope with nonsmooth terms. This idea is extended to the primal-dual setting, where an adaptive three-term primal-dual algorithm, adaPD, is proposed which can be viewed as an extension of the PDHG method. Moreover, in this setting the "essentially" fully adaptive variant adaPD$^+$ is proposed, which avoids evaluating the linear operator norm by invoking a backtracking procedure that, remarkably, does not require extra gradient evaluations. Numerical simulations demonstrate the effectiveness of the proposed algorithms compared to the state of the art.
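To make the idea concrete, here is a schematic Python sketch of a linesearch-free proximal gradient iteration in the spirit of adaPG; the stepsize rule shown (a capped geometric increase safeguarded by a local Lipschitz estimate from consecutive gradients) is illustrative only and is not the paper's exact update.

```python
import numpy as np

def prox_grad_adaptive(grad_f, prox_g, x0, gamma0=1e-3, iters=500):
    """Schematic linesearch-free proximal gradient method in the
    spirit of adaPG (not the paper's exact rule): the stepsize is
    adapted from a local Lipschitz estimate of grad f computed from
    consecutive iterates, with no function-value evaluations."""
    x_old, g_old, gamma = x0, grad_f(x0), gamma0
    x = prox_g(x_old - gamma * g_old, gamma)
    for _ in range(iters):
        g = grad_f(x)
        den = np.linalg.norm(x - x_old)
        ell = np.linalg.norm(g - g_old) / den if den > 0 else 0.0
        # grow the stepsize cautiously, capped at roughly 1/(2 ell)
        gamma = gamma * 1.2 if ell == 0 else min(gamma * 1.2, 0.5 / ell)
        x_old, g_old = x, g
        x = prox_g(x - gamma * g, gamma)
    return x

# example: lasso, f(x) = 0.5 ||A x - b||^2, g(x) = lam ||x||_1
rng = np.random.default_rng(1)
A, b, lam = rng.standard_normal((40, 100)), rng.standard_normal(40), 0.1
soft = lambda z, t: np.sign(z) * np.maximum(np.abs(z) - t, 0.0)
x = prox_grad_adaptive(lambda x: A.T @ (A @ x - b),
                       lambda z, gam: soft(z, lam * gam),
                       np.zeros(100))
```

Note that the loop uses one gradient evaluation per iteration and never evaluates the objective value, which is the defining feature of this family of methods.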
We consider fully discrete finite element approximations for a semilinear optimal control system of partial differential equations in two cases: distributed and Robin boundary control. The ecological predator-prey optimal control model is discretized by conforming finite element methods in space, while a discontinuous Galerkin method is used for the time discretization. We investigate the sensitivity of the distance between the solution and the target function in cases with smooth and rough initial data. We employ low- and higher-order polynomials in time and space whenever sufficient regularity is present. We consider approximation schemes both with and without control constraints, and drive the system efficiently to the desired states using nonlinear gradient methods.
Solutions to many important partial differential equations satisfy bounds constraints, but approximations computed by finite element or finite difference methods typically fail to respect the same conditions. Chang and Nakshatrala enforce such bounds in finite element methods through the solution of variational inequalities rather than linear variational problems. Here, we provide a theoretical justification for this method, including higher-order discretizations. We prove an abstract best-approximation result for the linear variational inequality and estimates showing that bounds-constrained polynomials provide comparable approximation power to standard spaces. For any unconstrained approximation to a function, there exists a constrained approximation which is comparable in the $W^{1,p}$ norm. In practice, one cannot efficiently represent and manipulate the entire family of bounds-constrained polynomials, but applying bounds constraints to the coefficients of a polynomial in the Bernstein basis guarantees those constraints on the polynomial. Although our theoretical results do not guarantee high accuracy for this subset of bounds-constrained polynomials, numerical results indicate optimal orders of accuracy for smooth solutions and sharp resolution of features in convection-diffusion problems, all subject to bounds constraints.
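The Bernstein trick rests on the fact that Bernstein basis polynomials are nonnegative and sum to one, so the value of the polynomial at any point of the reference interval is a convex combination of its coefficients; clipping the coefficients therefore clips the polynomial. A small self-contained Python check (illustrative, in one space dimension):

```python
import numpy as np
from math import comb

def bernstein_eval(coeffs, x):
    """Evaluate p(x) = sum_k coeffs[k] * B_{k,n}(x) on [0, 1]."""
    n = len(coeffs) - 1
    x = np.asarray(x, dtype=float)
    B = np.array([comb(n, k) * x**k * (1 - x)**(n - k) for k in range(n + 1)])
    return coeffs @ B

# The basis is nonnegative and a partition of unity, so clipping the
# Bernstein coefficients to [0, 1] forces 0 <= p(x) <= 1 on [0, 1].
coeffs = np.array([-0.3, 1.5, 0.2, 0.9])
clipped = np.clip(coeffs, 0.0, 1.0)
xs = np.linspace(0.0, 1.0, 1001)
vals = bernstein_eval(clipped, xs)
assert vals.min() >= 0.0 and vals.max() <= 1.0
```

The converse fails, which is exactly the gap the abstract mentions: a polynomial can satisfy the bounds while some of its Bernstein coefficients violate them, so coefficient-wise constraints select only a subset of the bounds-constrained polynomials.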
A preconditioning strategy is proposed for the iterative solution of the large number of linear systems with variable matrix and right-hand side that arise during the computation of solution statistics of stochastic elliptic partial differential equations with random variable coefficients sampled by Monte Carlo. Assuming that a truncated Karhunen-Lo\`{e}ve expansion of a known transform of the random variable coefficient is available, we introduce a compact representation of the random coefficient in the form of a Voronoi quantizer. The number of Voronoi cells, each of which is represented by a centroidal variable coefficient, is set to the prescribed number $P$ of preconditioners. Upon sampling the random variable coefficient, the linear system assembled with a given realization of the coefficient is solved with the preconditioner whose centroidal variable coefficient is closest to the realization. We consider different ways to define and obtain the centroidal variable coefficients, and we investigate the properties of the induced preconditioning strategies in terms of the average number of solver iterations for sequential simulations, and of load balancing for parallel simulations. Another approach, based on deterministic grids in the stochastic coordinates of the truncated representation of the random variable coefficient, is proposed, with a stochastic dimension that increases with the number $P$ of preconditioners. This approach bypasses the need for preliminary computations to determine the optimal stochastic dimension of the truncated approximation of the random variable coefficient for a given number of preconditioners.
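Operationally, once the $P$ centroidal coefficients have been computed offline, the online step is a nearest-centroid lookup in the truncated Karhunen-Loève coordinates. A minimal sketch of that selection step in Python; the clustering set-up and all names are hypothetical placeholders, not the paper's implementation:

```python
import numpy as np

def select_preconditioner(xi, centroids):
    """Return the index of the Voronoi cell containing xi, i.e. the
    nearest centroidal coordinate vector in Euclidean distance."""
    return int(np.argmin(np.linalg.norm(centroids - xi, axis=1)))

# offline (hypothetical): cluster KL coordinate samples into P cells
# and assemble one preconditioner per centroidal coefficient
P, dim = 8, 5
rng = np.random.default_rng(2)
centroids = rng.standard_normal((P, dim))  # stand-ins for computed centroids
xi = rng.standard_normal(dim)              # KL coordinates of one realization
k = select_preconditioner(xi, centroids)   # reuse preconditioner k for this solve
```

The load-balancing question studied in the paper then amounts to how evenly the sampled realizations distribute over the $P$ cells.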
The transparent boundary condition for the free Schr\"{o}dinger equation on a rectangular computational domain requires implementation of an operator of the form $\sqrt{\partial_t-i\triangle_{\Gamma}}$, where $\triangle_{\Gamma}$ is the Laplace-Beltrami operator. It is known that this operator is nonlocal in time as well as space, which poses a significant challenge in developing an efficient numerical method of solution. The computational complexity of the existing methods scales with the number of time-steps, which can be attributed to the nonlocal nature of the boundary operator. In this work, we report an effectively local approximation for the boundary operator such that the resulting complexity remains independent of the number of time-steps. At the heart of this algorithm is a Pad\'e-approximant-based rational approximation of certain fractional operators that handles the corners of the domain adequately. For the spatial discretization, we use a Legendre-Galerkin spectral method with a new boundary-adapted basis which ensures that the resulting linear system is banded. A compatible boundary-lifting procedure is also presented which accommodates the segments as well as the corners of the boundary. The proposed novel scheme can be implemented within the framework of any one-step time-marching scheme. In particular, we demonstrate these ideas for two one-step methods, namely, the backward-differentiation formula of order 1 (BDF1) and the trapezoidal rule (TR). For the sake of comparison, we also present a convolution-quadrature-based scheme conforming to the one-step methods, which is computationally expensive but serves as a gold standard. Finally, several numerical tests are presented to demonstrate the effectiveness of our novel method as well as to verify the order of convergence empirically.
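As a scalar illustration of the rational-approximation ingredient, the snippet below computes a diagonal Padé approximant of $\sqrt{1+z}$ from its Taylor coefficients using SciPy; replacing a square root by such a rational function is the standard way to turn a half-order (nonlocal) operator into a short sum of local resolvent problems. This illustrates only the scalar approximation, not the paper's boundary-operator scheme:

```python
import numpy as np
from scipy.interpolate import pade
from scipy.special import binom

m = 4                                  # [m/m] diagonal Pade approximant
a = binom(0.5, np.arange(2 * m + 1))   # Taylor coefficients of sqrt(1 + z)
p, q = pade(a, m)                      # numerator and denominator (np.poly1d)

z = np.linspace(0.0, 3.0, 7)
err = np.abs(p(z) / q(z) - np.sqrt(1.0 + z))
print(err.max())   # accurate near z = 0, degrading as z grows
```

Roughly speaking, each simple fraction of $p/q$ contributes a resolvent that is local and cheap to apply at every time step, which is the mechanism that allows the overall complexity to stay independent of the number of time-steps.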
We consider a statistical model for symmetric matrix factorization with additive Gaussian noise in the high-dimensional regime where the rank $M$ of the signal matrix to infer scales with its size $N$ as $M = o(N^{1/10})$. Allowing for an $N$-dependent rank offers new challenges and requires new methods. Working in the Bayesian-optimal setting, we show that whenever the signal has i.i.d. entries the limiting mutual information between signal and data is given by a variational formula involving a rank-one replica symmetric potential. In other words, from the information-theoretic perspective, the case of a (slowly) growing rank is the same as when $M = 1$ (namely, the standard spiked Wigner model). The proof is primarily based on a novel multiscale cavity method allowing for growing rank, along with some information-theoretic identities on the worst noise for the Gaussian vector channel. We believe that the cavity method developed here will play a role in the analysis of a broader class of inference and spin models where the degrees of freedom are large arrays instead of vectors.
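For readers who want to experiment, here is a minimal Python generator for the observation model described above; the notation and scaling are assumed for illustration, as conventions vary across the literature:

```python
import numpy as np

# Rank-M signal X X^T observed through additive Gaussian noise,
# Y = X X^T / sqrt(N) + Z, with X having i.i.d. entries and Z a
# symmetric Gaussian (GOE-like) noise matrix.
rng = np.random.default_rng(3)
N, M = 500, 3                      # the regime of interest: M = o(N^(1/10))
X = rng.standard_normal((N, M))    # i.i.d. signal entries
Z = rng.standard_normal((N, N))
Z = (Z + Z.T) / np.sqrt(2)         # symmetrize the noise
Y = X @ X.T / np.sqrt(N) + Z       # data available to the Bayesian observer
```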