This paper presents a novel approach to constructing regularizing operators for severely ill-posed Fredholm integral equations of the first kind by introducing a parametrized discretization. The optimal values of the discretization and regularization parameters are computed simultaneously by solving a minimization problem based on a regularization-parameter search criterion. The effectiveness of the proposed approach is demonstrated through examples of noisy Laplace transform inversion and the deconvolution of nuclear magnetic resonance relaxation data.
We present a new high-order accurate spectral element solution to the two-dimensional scalar Poisson equation subject to a general Robin boundary condition. The solution is based on a simplified version of the shifted boundary method, employing a continuous arbitrary-order $hp$-Galerkin spectral element method as the numerical discretization procedure. The simplification relies on a polynomial correction that avoids explicitly evaluating the high-order partial derivatives from the Taylor series expansion which have traditionally been used within the shifted boundary method. In this setting, we apply an extrapolation and a novel interpolation approach to project the basis functions from the true domain onto the approximate surrogate domain. The resulting method naturally incorporates curved geometrical features of the domain, overcomes complex and cumbersome mesh generation, and avoids small-cut-cell problems. Dirichlet, Neumann, and general Robin boundary conditions are enforced weakly through i) a generalized Nitsche's method and ii) a generalized Aubin's method, for which a consistent asymptotic-preserving formulation of the embedded Robin conditions is presented. We present several numerical experiments and an analysis of the algorithmic properties of the different weak formulations, including convergence studies under polynomial-order ($p$) increase of the basis functions and mesh ($h$) refinement, together with matrix conditioning, to highlight the spectral and algebraic convergence features, respectively. This assesses the influence of errors across variational formulations, polynomial order, mesh size, and mappings between the true and surrogate boundaries.
For problems of time-harmonic scattering by rational polygonal obstacles, embedding formulae express the far-field pattern induced by any incident plane wave in terms of the far-field patterns for a relatively small (frequency-independent) set of canonical incident angles. Although these remarkable formulae are exact in theory, here we demonstrate that: (i) they are highly sensitive to numerical errors in practice; and (ii) direct calculation of the coefficients in these formulae may be impossible for particular sets of canonical incident angles, even in exact arithmetic. Only by overcoming these practical issues can embedding formulae provide a highly efficient approach to computing the far-field pattern induced by a large number of incident angles. Here we propose solutions for problems (i) and (ii), backed up by theory and numerical experiments. Problem (i) is solved using techniques from computational complex analysis: we reformulate the embedding formula as a complex contour integral and prove that this is much less sensitive to numerical errors; in practice, the contour integral can be efficiently evaluated by residue calculus. Problem (ii) is addressed using techniques from numerical linear algebra: we oversample, considering more canonical incident angles than are necessary, thus expanding the space of valid coefficient vectors. A coefficient vector can then be selected using either a least-squares approach or column subset selection.
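The linear-algebra remedy for problem (ii) can be illustrated generically. In the sketch below the matrix `M` is a random rank-deficient stand-in (an assumption: the actual system comes from the embedding-formula coefficients, which the abstract does not specify); the point is only the mechanics of oversampling followed by either least squares or column subset selection via a pivoted QR.

```python
import numpy as np
from scipy.linalg import qr

# Generic sketch of oversampling + column subset selection. M is a random
# rank-r matrix standing in for an embedding-formula coefficient system;
# its k > r columns play the role of oversampled canonical incident angles.
rng = np.random.default_rng(1)
r, m, k = 5, 12, 9                         # rank, equations, oversampled columns
M = rng.standard_normal((m, r)) @ rng.standard_normal((r, k))
b = M @ rng.standard_normal(k)             # consistent right-hand side

# Option 1: least squares over all oversampled columns (min-norm solution)
c_ls = np.linalg.lstsq(M, b, rcond=None)[0]

# Option 2: column subset selection via rank-revealing pivoted QR
_, _, piv = qr(M, pivoting=True)
cols = piv[:r]                             # the r most independent columns
c_sub = np.zeros(k)
c_sub[cols] = np.linalg.lstsq(M[:, cols], b, rcond=None)[0]
```

Both routes yield a valid coefficient vector; the pivoted-QR route additionally avoids the ill-conditioned column combinations that make direct calculation fail for unlucky sets of canonical angles.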
We study parallel fault-tolerant quantum computing for families of homological quantum low-density parity-check (LDPC) codes defined on 3-manifolds with constant or almost-constant encoding rate. We derive a generic formula for a transversal $T$ gate of color codes on general 3-manifolds, which acts as collective non-Clifford logical CCZ gates on any triplet of logical qubits whose logical-$X$ membranes have a $\mathbb{Z}_2$ triple intersection at a single point. The triple intersection number is a topological invariant, which also arises in the path integral of the emergent higher symmetry operator in a topological quantum field theory: the $\mathbb{Z}_2^3$ gauge theory. Moreover, the transversal $S$ gate of the color code corresponds to a higher-form symmetry supported on a codimension-1 submanifold, giving rise to exponentially many addressable and parallelizable logical CZ gates. We develop a generic formalism to compute the triple intersection invariants for 3-manifolds and also study the scaling of the Betti numbers and systoles with volume for various 3-manifolds, which translates into the encoding rate and distance. We further develop three types of LDPC codes supporting such logical gates: (1) a quasi-hyperbolic code from the product of a 2D hyperbolic surface and a circle, with almost-constant rate $k/n=O(1/\log(n))$ and $O(\log(n))$ distance; (2) a homological fibre bundle code with $O(1/\log^{\frac{1}{2}}(n))$ rate and $O(\log^{\frac{1}{2}}(n))$ distance; (3) a specific family of 3D hyperbolic codes, the Torelli mapping torus codes, constructed from mapping tori of a pseudo-Anosov element in the Torelli subgroup, which have constant rate while the distance scaling is currently unknown. We then show a generic constant-overhead scheme for applying a parallelizable universal gate set with the aid of logical-$X$ measurements.
We develop a novel discontinuous Galerkin method for solving the thermal rotating shallow water equations (TRSW) on a curvilinear mesh. Our method is provably entropy stable and conserves mass, buoyancy, and vorticity, while also semi-discretely conserving energy. This is achieved by using novel numerical fluxes and splitting the pressure and convection operators. We implement our method on a cubed-sphere mesh and numerically verify our theoretical results. Our experiments demonstrate the robustness of the method in a regime of well-developed turbulence, where it can be run stably without any dissipation. The entropy stable fluxes are sufficient to control the grid-scale noise generated by geostrophic turbulence, eliminating the need for artificial stabilization.
The multispecies Landau collision operator describes the two-particle, small-scattering-angle (grazing) collisions in a plasma composed of different particle species, such as electrons and ions. Recently, a structure-preserving deterministic particle method (arXiv:1910.03080) was developed for the single-species spatially homogeneous Landau equation. This method relies on a regularization of the Landau collision operator so that an approximate solution, which is a linear combination of Dirac delta distributions, is well defined. Based on a weak form of the regularized Landau equation, the time-dependent locations of the Dirac delta functions satisfy a system of ordinary differential equations. In this work, we extend this particle method to the multispecies case and examine its conservation of mass, momentum, and energy and its entropy-decay properties. We show that the equilibrium distribution of the regularized multispecies Landau equation is a Maxwellian distribution, and we state a critical condition on the regularization parameters that guarantees a species-independent equilibrium temperature. A convergence study comparing an exact multispecies BKW solution to the particle solution shows approximately second-order accuracy. Important physical properties of the particle method, such as conservation, entropy decay, and the equilibrium distribution, are demonstrated with several numerical examples.
This paper presents a new algorithm for generating random inverse-Wishart matrices that directly generates the Cholesky factor of the matrix without computing a factorization. When parameterized in terms of a precision matrix $\Omega=\Sigma^{-1}$, or its Cholesky factor, instead of a covariance matrix $\Sigma$, the new algorithm is more efficient than the current standard algorithm.
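For context, the classical Bartlett construction already generates the Cholesky factor of a Wishart draw directly; the sketch below shows that standard construction (an assumption: the paper's algorithm differs in its details and in the inverse-Wishart and precision-matrix handling).

```python
import numpy as np

# Classical Bartlett decomposition: sample the lower Cholesky factor of a
# Wishart(df, I_p) matrix directly, with no factorization step.
def wishart_cholesky_factor(df, p, rng):
    """Lower-triangular L such that L @ L.T ~ Wishart(df, I_p)."""
    L = np.zeros((p, p))
    for i in range(p):
        L[i, i] = np.sqrt(rng.chisquare(df - i))   # chi draws on the diagonal
        L[i, :i] = rng.standard_normal(i)          # N(0,1) strictly below
    return L

# An inverse-Wishart(df, I_p) draw is then (L @ L.T)^{-1}; in a precision
# parameterization one can keep working with L via triangular solves instead
# of forming and inverting the dense matrix.
rng = np.random.default_rng(0)
p, df = 4, 10.0
samples = [wishart_cholesky_factor(df, p, rng) for _ in range(4000)]
mean_W = np.mean([L @ L.T for L in samples], axis=0)   # approaches df * I_p
```

The Monte Carlo mean of `L @ L.T` approaching $\nu I_p$ is the standard sanity check, since $\mathbb{E}[W] = \nu \Sigma$ for $W \sim \mathrm{Wishart}(\nu, \Sigma)$.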
Meta-analysis is the aggregation of data from multiple studies to identify patterns across studies of a particular subject. It is becoming increasingly useful for summarizing studies conducted across various fields. In meta-analysis, it is common to compare studies using the mean and standard deviation reported by each. While many studies report the mean and standard deviation as their summary statistics, some report other values, including the minimum, maximum, median, and first and third quartiles. The quartiles and median are often reported when the data are skewed and do not follow a normal distribution. To correctly summarize the data and draw conclusions from multiple studies, it is necessary to estimate the mean and standard deviation from each study while accounting for the variation and skewness within each study. In past literature, methods have been proposed to estimate the mean and standard deviation, but they do not consider negative values. Data that include negative values are common, and accommodating them would increase the accuracy and impact of a meta-analysis. We propose a method that uses a generalized Box-Cox transformation to estimate the mean and standard deviation in the presence of such negative values while maintaining similar accuracy.
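The idea can be sketched as follows. The estimator details here are assumptions for illustration (a shifted log transform, i.e. the Box-Cox $\lambda \to 0$ limit, with a Wan et al.-style quartile formula and simulation-based back-transform), not the paper's exact method.

```python
import numpy as np
from scipy import stats

# Illustrative sketch: estimate a study's mean and SD from reported quartiles
# by shifting the data into positive territory (so negatives are allowed),
# log-transforming, applying normal-theory quantile estimates, and
# back-transforming by simulation.
def mean_sd_from_quartiles(q1, med, q3, n, shift):
    tq1, tmed, tq3 = (np.log(q + shift) for q in (q1, med, q3))
    z = stats.norm.ppf((0.75 * n - 0.125) / (n + 0.25))   # Wan et al.-style
    mu_t = (tq1 + tmed + tq3) / 3.0
    sd_t = (tq3 - tq1) / (2.0 * z)
    sims = np.exp(stats.norm.rvs(mu_t, sd_t, size=200_000, random_state=0))
    return sims.mean() - shift, sims.std()

# Demo on skewed data containing negative values
rng = np.random.default_rng(0)
x = rng.lognormal(0.0, 0.5, size=2000) - 2.0      # skewed, many negatives
q1, med, q3 = np.percentile(x, [25, 50, 75])
est_mean, est_sd = mean_sd_from_quartiles(q1, med, q3, len(x), shift=2.0)
```

In practice the shift and the Box-Cox exponent would themselves be estimated from the reported summaries; here the shift is fixed for the demo.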
Boundary value problems involving elliptic PDEs such as the Laplace and the Helmholtz equations are ubiquitous in physics and engineering. Many such problems have alternative formulations as integral equations that are mathematically more tractable than their PDE counterparts. However, the integral equation formulation poses a challenge in solving the dense linear systems that arise upon discretization. In cases where iterative methods converge rapidly, existing methods that draw on fast summation schemes such as the Fast Multipole Method are highly efficient and well established. More recently, linear complexity direct solvers that sidestep convergence issues by directly computing an invertible factorization have been developed. However, storage and compute costs are high, which limits their ability to solve large-scale problems in practice. In this work, we introduce a distributed-memory parallel algorithm based on an existing direct solver named ``strong recursive skeletonization factorization.'' The analysis of its parallel scalability applies generally to a class of existing methods that exploit the so-called strong admissibility. Specifically, we apply low-rank compression to certain off-diagonal matrix blocks in a way that minimizes data movement. Given a compression tolerance, our method constructs an approximate factorization of a discretized integral operator (dense matrix), which can be used to solve linear systems efficiently in parallel. Compared to iterative algorithms, our method is particularly suitable for problems involving ill-conditioned matrices or multiple right-hand sides. Large-scale numerical experiments are presented to demonstrate the performance of our implementation using the Julia language.
We propose a method for computing the Lyapunov exponents of renewal equations (delay equations of Volterra type) and of coupled systems of renewal and delay differential equations. The method consists of reformulating the delay equation as an abstract differential equation, reducing the latter to a system of ordinary differential equations via pseudospectral collocation, and applying the standard discrete QR method. The effectiveness of the method is shown experimentally, and a MATLAB implementation is provided.
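The final step, the standard discrete QR method, is sketched below in Python on a plain ODE system (the Lorenz equations, an illustrative stand-in: the paper first obtains its ODE system by pseudospectral collocation of the delay equation, and provides a MATLAB rather than Python implementation).

```python
import numpy as np
from scipy.integrate import solve_ivp

# Discrete QR method on the Lorenz system: propagate a frame of tangent
# vectors, re-orthonormalize by QR at fixed intervals, and average the
# logs of the diagonal of R to estimate the Lyapunov exponents.
SIGMA, RHO, BETA = 10.0, 28.0, 8.0 / 3.0

def rhs(t, u):
    x, y, z = u[:3]
    Q = u[3:].reshape(3, 3)                  # tangent vectors as columns
    J = np.array([[-SIGMA, SIGMA, 0.0],
                  [RHO - z, -1.0, -x],
                  [y, x, -BETA]])
    du = [SIGMA * (y - x), x * (RHO - z) - y, x * y - BETA * z]
    return np.concatenate([du, (J @ Q).ravel()])

def lyapunov_qr(dt=0.5, n_steps=240, discard=40):
    u, Q, logs = np.array([1.0, 1.0, 1.0]), np.eye(3), np.zeros(3)
    for step in range(n_steps):
        sol = solve_ivp(rhs, (0.0, dt), np.concatenate([u, Q.ravel()]),
                        rtol=1e-9, atol=1e-9)
        u = sol.y[:3, -1]
        Q, R = np.linalg.qr(sol.y[3:, -1].reshape(3, 3))
        s = np.sign(np.diag(R))              # enforce positive diagonal of R
        Q, R = Q * s, R * s[:, None]
        if step >= discard:                  # skip the initial transient
            logs += np.log(np.diag(R))
    return logs / ((n_steps - discard) * dt)

exponents = lyapunov_qr()
```

The periodic QR factorizations keep the tangent frame numerically orthonormal, so the exponential growth rates along each Oseledets direction accumulate in the diagonal of R rather than being swamped by the dominant direction.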
We derive information-theoretic generalization bounds for supervised learning algorithms based on the information contained in predictions rather than in the output of the training algorithm. These bounds improve on existing information-theoretic bounds, are applicable to a wider range of algorithms, and address two key challenges: (a) they give meaningful results for deterministic algorithms, and (b) they are significantly easier to estimate. We show experimentally that the proposed bounds closely follow the generalization gap in practical scenarios for deep learning.