This manuscript presents an efficient solver for the linear system that arises from the Hierarchical Poincar\'e-Steklov (HPS) discretization of three-dimensional variable-coefficient Helmholtz problems. Previous work on the HPS method has paired it with a direct solver; this work presents the first efficient iterative solver for the linear system that results from the HPS discretization. The solution technique utilizes GMRES coupled with an exact block-Jacobi preconditioner. The construction of the block-Jacobi preconditioner involves two nested local solves that are accelerated by local homogenization. The local nature of the discretization and preconditioner naturally yields a matrix-free application of the linear system. A distributed-memory implementation allows the solution technique to tackle problems approximately $50$ wavelengths in each direction, requiring more than a billion unknowns, to approximately 7 digits of accuracy in less than an hour. Additional numerical results illustrate the performance of the solution technique.
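As a rough illustration of the preconditioning strategy described above, the following sketch applies GMRES with an exact block-Jacobi preconditioner to a generic sparse system. The matrix, block structure, and sizes are placeholders, not the HPS discretization itself.

```python
# Hypothetical sketch: GMRES with an exact block-Jacobi preconditioner on a
# generic sparse system. Matrix, block sizes, and tolerances are placeholders;
# this is not the HPS discretization itself.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n, nb = 512, 8                       # unknowns and number of diagonal blocks
bs = n // nb                         # block size
A = (sp.random(n, n, density=0.01, format="csr") + 4.0 * sp.identity(n)).tocsr()

# "Exact" block Jacobi: invert each diagonal block independently.
blocks = [np.linalg.inv(A[i*bs:(i+1)*bs, i*bs:(i+1)*bs].toarray())
          for i in range(nb)]

def apply_prec(r):
    # Apply the block-diagonal inverse to a residual, block by block.
    return np.concatenate([blocks[i] @ r[i*bs:(i+1)*bs] for i in range(nb)])

M = spla.LinearOperator((n, n), matvec=apply_prec)
b = np.ones(n)
x, info = spla.gmres(A, b, M=M)
print("converged:", info == 0, "residual:", np.linalg.norm(b - A @ x))
```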
In $d$ dimensions, approximating an arbitrary function oscillating with frequency $\lesssim k$ requires $\sim k^d$ degrees of freedom. A numerical method for solving the Helmholtz equation (with wavenumber $k$) suffers from the pollution effect if, as $k\to \infty$, the total number of degrees of freedom needed to maintain accuracy grows faster than this natural threshold. While the $h$-version of the finite element method (FEM) (where accuracy is increased by decreasing the meshwidth $h$ and keeping the polynomial degree $p$ fixed) suffers from the pollution effect, the celebrated papers [Melenk, Sauter 2010], [Melenk, Sauter 2011], [Esterhazy, Melenk 2012], and [Melenk, Parsania, Sauter 2013] showed that the $hp$-FEM (where accuracy is increased by decreasing the meshwidth $h$ and increasing the polynomial degree $p$) applied to a variety of constant-coefficient Helmholtz problems does not suffer from the pollution effect. The heart of the proofs of these results is a PDE result splitting the solution of the Helmholtz equation into "high" and "low" frequency components. The main novelty of the present paper is that we prove this splitting for the constant-coefficient Helmholtz equation in full-space (i.e., in $\mathbb{R}^d$) using only integration by parts and elementary properties of the Fourier transform (this is in contrast to the proof for this set-up in [Melenk, Sauter 2010], which uses somewhat-involved bounds on Bessel and Hankel functions). We combine this splitting with (i) standard arguments about convergence of the FEM applied to the Helmholtz equation (the so-called "Schatz argument", which we reproduce here) and (ii) polynomial-approximation results (which we quote from the literature without proof) to give a simple proof that the $hp$-FEM does not suffer from the pollution effect for the constant-coefficient full-space Helmholtz equation.
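For orientation, the pollution-free regime identified in those papers can be summarized schematically as follows (the constants $C_1$, $C_2$ depend on the problem but not on $k$; this is a paraphrase, not a statement of the precise theorems):
\[
\frac{hk}{p} \le C_1
\qquad\text{and}\qquad
p \ge C_2 \log k,
\]
in which case the total number of degrees of freedom scales as $N \sim (p/h)^d \sim k^d$, matching the natural threshold above.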
This paper proposes a regularization of the Monge-Amp\`ere equation in planar convex domains through uniformly elliptic Hamilton-Jacobi-Bellman equations. The regularized problem possesses a unique strong solution $u_\varepsilon$ and is amenable to discretization with finite elements. This work establishes locally uniform convergence of $u_\varepsilon$ to the convex Alexandrov solution $u$ of the Monge-Amp\`ere equation as the regularization parameter $\varepsilon$ approaches $0$. A mixed finite element method for the approximation of $u_\varepsilon$ is proposed, and the regularized finite element scheme is shown to be locally uniformly convergent. Numerical experiments provide empirical evidence for the efficient approximation of singular solutions $u$.
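The type of reformulation underlying such HJB regularizations rests on the classical identity below (stated here for symmetric positive definite $A$ in two dimensions); the precise regularized operator in the paper may differ:
\[
(\det A)^{1/2} \;=\; \frac{1}{2}\,\min_{\substack{B \ \text{SPD} \\ \det B = 1}} \operatorname{tr}(AB).
\]
Applied with $A = D^2 u$, this rewrites $\det D^2 u = f$ as the Bellman-type equation $\min_{\det B = 1}\{\tfrac12\operatorname{tr}(B\,D^2 u) - f^{1/2}\} = 0$; restricting $B$ to a compact set of uniformly positive definite matrices yields a uniformly elliptic HJB equation.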
A singularly perturbed parabolic problem of convection-diffusion type with a discontinuous initial condition is examined. An analytic function is identified that matches the discontinuity in the initial condition and also satisfies the homogeneous parabolic differential equation associated with the problem. The difference between this analytical function and the solution of the parabolic problem is approximated numerically, using an upwind finite difference operator combined with an appropriate layer-adapted mesh. The numerical method is shown to be parameter-uniform. Numerical results are presented to illustrate the theoretical error bounds established in the paper.
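To make the layer-adapted-mesh idea concrete, here is a minimal 1-D steady analogue (not the parabolic problem of the paper): an upwind scheme on a piecewise-uniform Shishkin-type mesh for $-\varepsilon u'' + u' = 1$, whose solution has a boundary layer at $x=1$. Problem, mesh parameters, and sizes are illustrative choices.

```python
# Hypothetical 1-D steady analogue (not the parabolic problem of the paper):
# upwind finite differences on a piecewise-uniform Shishkin mesh for
#   -eps * u'' + u' = 1 on (0,1),  u(0) = u(1) = 0,
# whose solution has a boundary layer of width O(eps) at x = 1.
import numpy as np

def shishkin_solve(eps, N):
    tau = min(0.5, 2.0 * eps * np.log(N))        # mesh transition point
    x = np.concatenate([np.linspace(0.0, 1.0 - tau, N // 2 + 1),
                        np.linspace(1.0 - tau, 1.0, N // 2 + 1)[1:]])
    h = np.diff(x)
    n = len(x)
    A, f = np.zeros((n, n)), np.zeros(n)
    A[0, 0] = A[-1, -1] = 1.0                    # Dirichlet boundary rows
    for i in range(1, n - 1):
        hl, hr = h[i - 1], h[i]
        # -eps * u'' via the standard nonuniform three-point stencil,
        # plus u' via a backward (upwind) difference since the speed is +1
        A[i, i - 1] = -2.0 * eps / (hl * (hl + hr)) - 1.0 / hl
        A[i, i]     =  2.0 * eps / (hl * hr)      + 1.0 / hl
        A[i, i + 1] = -2.0 * eps / (hr * (hl + hr))
        f[i] = 1.0
    return x, np.linalg.solve(A, f)

x, u = shishkin_solve(eps=1e-6, N=64)
print(u.max())  # the layer at x = 1 is resolved even though eps << 1/N
```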
We develop the first fast spectral algorithm to decompose a random third-order tensor over $\mathbb{R}^d$ of rank up to $O(d^{3/2}/\operatorname{polylog}(d))$. Our algorithm involves only simple linear-algebra operations and can recover all components in time $O(d^{6.05})$ under the current matrix multiplication time. Prior to this work, comparable guarantees could only be achieved via sum-of-squares [Ma, Shi, Steurer 2016]; in contrast, fast algorithms [Hopkins, Schramm, Shi, Steurer 2016] could only decompose tensors of rank at most $O(d^{4/3}/\operatorname{polylog}(d))$. Our algorithmic result rests on two key ingredients: a clean lifting of the third-order tensor to a sixth-order tensor, which can be expressed in the language of tensor networks, and a careful decomposition of the tensor network into a sequence of rectangular matrix multiplications, which enables a fast implementation of the algorithm.
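The general principle behind the second ingredient can be seen in a toy example (this is not the specific network from the paper): any pairwise tensor contraction can be carried out as a rectangular matrix multiplication on reshaped tensors.

```python
# Hypothetical illustration of the general principle (not the specific network
# from the paper): a tensor-network contraction carried out as a single
# rectangular matrix multiplication on reshaped tensors.
import numpy as np

d, r = 6, 4
A = np.random.randn(d, r)
T = np.einsum('ia,ja,ka->ijk', A, A, A)      # a rank-r third-order tensor

# Contract two copies of T over their last leg:
#   S[i,j,k,l] = sum_m T[i,j,m] * T[k,l,m]
S_einsum = np.einsum('ijm,klm->ijkl', T, T)

# The same contraction as one (d^2 x d) times (d x d^2) matrix multiplication.
Tm = T.reshape(d * d, d)
S_matmul = (Tm @ Tm.T).reshape(d, d, d, d)

print(np.allclose(S_einsum, S_matmul))       # True
```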
We prove a central limit theorem for the empirical optimal transport cost, $\sqrt{\frac{nm}{n+m}}\{\mathcal{T}_c(P_n,Q_m)-\mathcal{T}_c(P,Q)\}$, in the semi-discrete case, i.e., when the distribution $P$ is supported on $N$ points, but without assumptions on $Q$. We show that the asymptotic distribution is the supremum of a centered Gaussian process, which is Gaussian under some additional conditions on the probability $Q$ and on the cost. Such results imply a central limit theorem for the $p$-Wasserstein distance, for $p\geq 1$. This means that, for fixed $N$, the curse of dimensionality is avoided. To better understand the influence of $N$, we provide bounds on $\mathbb{E}|\mathcal{W}_1(P,Q_m)-\mathcal{W}_1(P,Q)|$ depending on $m$ and $N$. Finally, the semi-discrete framework provides control on the second derivative of the dual formulation, which yields the first central limit theorem for the optimal transport potentials. The results are supported by simulations that help to visualize the given limits and bounds. We also analyse the cases in which the classical bootstrap works.
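A Monte Carlo sketch of the semi-discrete setting (a hypothetical 1-D check, not the paper's experiments): $P$ sits on $N$ fixed atoms, $Q$ is standard Gaussian, and we look at the distribution of $\sqrt{m}\,(\mathcal{W}_1(P,Q_m)-\mathcal{W}_1(P,Q))$.

```python
# Hypothetical 1-D Monte Carlo sketch of the semi-discrete CLT; the sizes and
# distributions are illustrative choices, not the paper's experiments.
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)
N, m, reps = 5, 2000, 500
atoms = rng.normal(size=N)                   # fixed support of P (uniform weights)

# Proxy for the population value W1(P, Q): one very large sample from Q.
w_pop = wasserstein_distance(atoms, rng.normal(size=10**6))

stats = [np.sqrt(m) * (wasserstein_distance(atoms, rng.normal(size=m)) - w_pop)
         for _ in range(reps)]
print(np.mean(stats), np.std(stats))         # roughly centered; CLT-like spread
```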
In this paper, we examine the structure of systems that are weighted homogeneous for several systems of weights, and how it impacts Gr\"obner basis computations. We present different ways to compute Gr\"obner bases for systems with this structure, either directly or by reducing to existing structures. We also present optimization techniques suitable for this structure. The most natural orderings for computing a Gr\"obner basis of systems with this structure are weighted orderings following the systems of weights, and we discuss the possibility of using the algorithms to compute a basis for such an order directly, regardless of the structure of the system. We discuss applicable notions of regularity that could be used to evaluate the complexity of the algorithms, and prove that they are generic if non-empty. Finally, we present experimental data from a prototype implementation of the algorithms in SageMath.
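For flavor, here is a hypothetical SageMath snippet (Sage is Python-based) computing a Gr\"obner basis directly for a weighted ordering that follows one system of weights; the ring, weights, and polynomials are made up, not taken from the paper.

```python
# Hypothetical SageMath snippet (run inside Sage): a Groebner basis with
# respect to a weighted degree ordering for the weight system W.
from sage.all import PolynomialRing, QQ, TermOrder

W = (1, 2, 3)                                        # system of weights
R = PolynomialRing(QQ, 'x,y,z', order=TermOrder('wdegrevlex', W))
x, y, z = R.gens()

# Both generators are weighted homogeneous of W-degree 6.
f = x**6 + y**3 + z**2
g = x**4 * y + x * y * z + z**2
print(R.ideal(f, g).groebner_basis())
```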
We present a polynomial-time exact quantum algorithm for the hidden subgroup problem in $\mathbb{Z}_{m^k}^n$. The algorithm uses the quantum Fourier transform modulo $m$ and does not require factorization of $m$. For smooth $m$, i.e., when the prime factors of $m$ are of size $\operatorname{poly}(\log m)$, the quantum Fourier transform can be computed exactly using the method discovered independently by Cleve and Coppersmith, while for general $m$, the algorithm of Mosca and Zalka is available. Even for $m=3$ and $k=1$ our result appears to be new. We also present applications to computing the structure of abelian and solvable groups whose order has the same (but possibly unknown) prime factors as $m$. The applications to solvable groups also rely on an exact version of a technique proposed by Watrous for computing the uniform superposition of elements of a subgroup.
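For concreteness, the basic building block is the standard quantum Fourier transform modulo $m$,
\[
\mathrm{QFT}_m:\ |x\rangle \ \longmapsto\ \frac{1}{\sqrt{m}}\sum_{y=0}^{m-1} e^{2\pi i x y/m}\,|y\rangle, \qquad x\in\mathbb{Z}_m.
\]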
Solutions of the Helmholtz equation are known to be well approximated by superpositions of propagative plane waves. This observation is the foundation of successful Trefftz methods. However, when too many plane waves are used, the computation of the expansion is known to be numerically unstable. We explain how this effect is due to the presence of exponentially large coefficients in the expansion and can drastically limit the efficiency of the approach. In this work, we show that the Helmholtz solutions on a disk can be exactly represented by a continuous superposition of evanescent plane waves, generalizing the standard Herglotz representation. Here, by evanescent plane waves, we mean exponential plane waves with complex-valued propagation vector, whose absolute value decays exponentially in one direction. In addition, the density in this representation is proved to be uniformly bounded in a suitable weighted Lebesgue norm, hence overcoming the instability observed with propagative plane waves and paving the way for stable discrete expansions. In view of practical implementations, discretization strategies are investigated. We construct suitable finite-dimensional sets of evanescent plane waves using sampling strategies in a parametric domain. Provided one uses sufficient oversampling and regularization, the resulting approximations are shown to be both controllably accurate and numerically stable, as supported by numerical evidence.
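To fix notation, a (possibly evanescent) plane wave with wavenumber $k$ can be written in the standard parametrization below; the paper's normalization may differ in details:
\[
\phi_{\mathbf d}(\mathbf x) = e^{i k\,\mathbf d\cdot\mathbf x},
\qquad \mathbf d = \mathbf d_R + i\,\mathbf d_I \in \mathbb{C}^2,
\qquad \mathbf d\cdot\mathbf d = 1,
\]
so that $\Delta\phi_{\mathbf d} + k^2\phi_{\mathbf d} = 0$. The constraint forces $\mathbf d_R\cdot\mathbf d_I = 0$ and $|\mathbf d_R|^2 - |\mathbf d_I|^2 = 1$; for $\mathbf d_I = 0$ the wave is propagative, while for $\mathbf d_I \neq 0$ it oscillates along $\mathbf d_R$ and decays exponentially in the direction $\mathbf d_I$.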
Physical systems are usually modeled by differential equations, but solving these differential equations analytically is often intractable. Instead, the differential equations can be solved numerically by discretization in a finite computational domain. The discretized equation reduces to a large linear system, whose solution is typically found using an iterative solver. We start with an initial guess, $x_0$, and iterate the algorithm to obtain a sequence of solution vectors, $x_m$. The iterative algorithm is said to converge to the solution $x$ if and only if $x_m$ converges to $x$. Accuracy of the numerical solutions is important, especially in the design of safety-critical systems such as airplanes, cars, or nuclear power plants. It is therefore important to formally guarantee that the iterative solvers converge to the "true" solution of the original differential equation. In this paper, we first formalize the necessary and sufficient conditions for iterative convergence in the Coq proof assistant. We then extend this result to two classical iterative methods: Gauss--Seidel iteration and Jacobi iteration. We formalize conditions for the convergence of the Gauss--Seidel method, based on the positive definiteness of the iteration matrix. We then formally state conditions for the convergence of Jacobi iteration and instantiate them with an example to demonstrate convergence of the iterative solution to the direct solution of the linear system. We leverage recent developments in Coq's linear algebra and MathComp libraries for our formalization.
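As a plain numerical counterpart of the formalized statement (ordinary Python, not Coq), Jacobi iteration converges when the spectral radius of its iteration matrix is below one:

```python
# Plain numerical sketch (not the Coq formalization): Jacobi iteration
# converges when the spectral radius of M = D^{-1}(D - A) is below 1.
import numpy as np

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 4.0, 1.0],
              [0.0, 1.0, 4.0]])      # strictly diagonally dominant
b = np.array([1.0, 2.0, 3.0])

D = np.diag(np.diag(A))
M = np.linalg.inv(D) @ (D - A)       # Jacobi iteration matrix
print("spectral radius:", max(abs(np.linalg.eigvals(M))))   # < 1 here

x = np.zeros(3)
for _ in range(50):
    x = M @ x + np.linalg.solve(D, b)    # x_{m+1} = M x_m + D^{-1} b
print("error vs direct solve:", np.linalg.norm(x - np.linalg.solve(A, b)))
```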
While Generative Adversarial Networks (GANs) have empirically produced impressive results on learning complex real-world distributions, recent work has shown that they suffer from a lack of diversity, or mode collapse. The theoretical work of Arora et al.~\cite{AroraGeLiMaZh17} suggests a dilemma about GANs' statistical properties: powerful discriminators cause overfitting, whereas weak discriminators cannot detect mode collapse. In contrast, we show in this paper that GANs can in principle learn distributions in Wasserstein distance (or KL divergence in many cases) with polynomial sample complexity, if the discriminator class has strong distinguishing power against the particular generator class (instead of against all possible generators). For various generator classes, such as mixtures of Gaussians, exponential families, and invertible neural network generators, we design corresponding discriminators (which are often neural nets of specific architectures) such that the Integral Probability Metric (IPM) induced by the discriminators can provably approximate the Wasserstein distance and/or KL divergence. This implies that if the training is successful, then the learned distribution is close to the true distribution in Wasserstein distance or KL divergence, and thus cannot drop modes. Our preliminary experiments show that on synthetic datasets the test IPM correlates well with KL divergence, indicating that the lack of diversity may be caused by sub-optimality in optimization rather than statistical inefficiency.
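Recall the Integral Probability Metric induced by a discriminator class $\mathcal F$:
\[
d_{\mathcal F}(P,Q) \;=\; \sup_{f\in\mathcal F}\ \big|\,\mathbb{E}_{x\sim P}[f(x)] - \mathbb{E}_{x\sim Q}[f(x)]\,\big|,
\]
which recovers the Wasserstein-1 distance when $\mathcal F$ is the class of $1$-Lipschitz functions.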