G\'acs' coarse-grained algorithmic entropy leverages universal computation to quantify the information content of any given physical state. Unlike the Boltzmann and Shannon-Gibbs entropies, it requires no prior commitment to macrovariables or probabilistic ensembles. Whereas earlier work had made loose connections between the entropy of thermodynamic systems and information-processing systems, the algorithmic entropy formally unifies them both. After adapting G\'acs' definition to Markov processes, we prove a very general second law of thermodynamics, and discuss its advantages over previous formulations. Finally, taking inspiration from Maxwell's demon, we model an information engine powered by compressible data.
For problems of time-harmonic scattering by rational polygonal obstacles, embedding formulae express the far-field pattern induced by any incident plane wave in terms of the far-field patterns for a relatively small (frequency-independent) set of canonical incident angles. Although these remarkable formulae are exact in theory, here we demonstrate that: (i) they are highly sensitive to numerical errors in practice; and (ii) direct calculation of the coefficients in these formulae may be impossible for particular sets of canonical incident angles, even in exact arithmetic. Only by overcoming these practical issues can embedding formulae provide a highly efficient approach to computing the far-field pattern induced by a large number of incident angles. Here we propose solutions for problems (i) and (ii), backed up by theory and numerical experiments. Problem (i) is solved using techniques from computational complex analysis: we reformulate the embedding formula as a complex contour integral and prove that this is much less sensitive to numerical errors. In practice, this contour integral can be efficiently evaluated by residue calculus. Problem (ii) is addressed using techniques from numerical linear algebra: we oversample, considering more canonical incident angles than are necessary, thus expanding the space of valid coefficient vectors. The coefficient vectors can then be selected using either a least squares approach or column subset selection.
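To make the oversampling remedy for problem (ii) concrete, the following sketch (our illustration, with a random placeholder matrix rather than actual far-field data) shows the two selection strategies on an underdetermined linear system: the minimum-norm least squares solution, and column subset selection via pivoted QR.

```python
import numpy as np
from scipy.linalg import qr

# Placeholder setup (not actual far-field data): rows of A encode the
# linear conditions a valid coefficient vector c must satisfy; columns
# correspond to canonical incident angles.  Oversampling means more
# angles (columns) than conditions (rows), so many valid c exist.
rng = np.random.default_rng(0)
m, n = 8, 12                         # 8 conditions, 12 canonical angles
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)

# Strategy 1: least squares -- the minimum-norm solution of the
# underdetermined system A c = b.
c_ls, *_ = np.linalg.lstsq(A, b, rcond=None)

# Strategy 2: column subset selection -- QR with column pivoting picks m
# well-conditioned columns (i.e., a good subset of canonical angles).
_, _, piv = qr(A, pivoting=True)
cols = piv[:m]
c_css = np.zeros(n)
c_css[cols] = np.linalg.solve(A[:, cols], b)

print(np.linalg.norm(A @ c_ls - b), np.linalg.norm(A @ c_css - b))  # both ~0
```

Pivoted QR selects the best-conditioned square subsystem, mirroring the choice of a well-behaved subset of canonical incident angles.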
We study parallel fault-tolerant quantum computing for families of homological quantum low-density parity-check (LDPC) codes defined on 3-manifolds with constant or almost-constant encoding rate. We derive a generic formula for a transversal $T$ gate of color codes on general 3-manifolds, which acts as a collective non-Clifford logical CCZ gate on any triplet of logical qubits whose logical-$X$ membranes have a $\mathbb{Z}_2$ triple intersection at a single point. The triple intersection number is a topological invariant, which also arises in the path integral of the emergent higher symmetry operator in a topological quantum field theory: the $\mathbb{Z}_2^3$ gauge theory. Moreover, the transversal $S$ gate of the color code corresponds to a higher-form symmetry supported on a codimension-1 submanifold, giving rise to exponentially many addressable and parallelizable logical CZ gates. We develop a generic formalism to compute the triple intersection invariants for 3-manifolds, and we also study the scaling of the Betti numbers and systoles with volume for various 3-manifolds, which translates into the encoding rate and distance. We further develop three types of LDPC codes supporting such logical gates: (1) a quasi-hyperbolic code from the product of a 2D hyperbolic surface and a circle, with almost-constant rate $k/n=O(1/\log(n))$ and $O(\log(n))$ distance; (2) a homological fibre bundle code with $O(1/\log^{\frac{1}{2}}(n))$ rate and $O(\log^{\frac{1}{2}}(n))$ distance; (3) a specific family of 3D hyperbolic codes, the Torelli mapping torus code, constructed from mapping tori of a pseudo-Anosov element in the Torelli subgroup, which has constant rate while its distance scaling is currently unknown. We then show a generic constant-overhead scheme for applying a parallelizable universal gate set with the aid of logical-$X$ measurements.
We construct a monotone continuous $Q^1$ finite element method on uniform meshes for the anisotropic diffusion problem with a diagonally dominant diffusion coefficient matrix. The monotonicity implies the discrete maximum principle. Convergence of the new scheme is rigorously proven. On quadrilateral meshes, the conditions on the coefficient matrix translate into a specific mesh constraint.
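As a minimal illustration of the monotonicity property (not the paper's 2D anisotropic scheme), the sketch below assembles the 1D $Q^1$ stiffness matrix for $-u''$ on a uniform mesh and verifies that its inverse is entrywise nonnegative, which is what yields the discrete maximum principle.

```python
import numpy as np

# Monotonicity of a matrix A means A is invertible with A^{-1} >= 0
# entrywise.  We check this for the 1D Q^1 (= P^1) stiffness matrix of
# -u'' on a uniform mesh with homogeneous Dirichlet boundary conditions.
n = 20                          # number of interior nodes
h = 1.0 / (n + 1)               # uniform mesh size
A = (np.diag(2 * np.ones(n))
     - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h

Ainv = np.linalg.inv(A)
print("monotone:", np.all(Ainv >= -1e-12))  # True: all entries nonnegative
```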
We develop a novel discontinuous Galerkin method for solving the thermal rotating shallow water (TRSW) equations on a curvilinear mesh. Our method is provably entropy stable and conserves mass, buoyancy, and vorticity, while also semi-discretely conserving energy. This is achieved by using novel numerical fluxes and splitting the pressure and convection operators. We implement our method on a cubed-sphere mesh and numerically verify our theoretical results. Our experiments demonstrate the robustness of the method in a regime of well-developed turbulence, where it can be run stably without any dissipation. The entropy stable fluxes are sufficient to control the grid-scale noise generated by geostrophic turbulence, eliminating the need for artificial stabilization.
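As a simplified stand-in for the entropy stable fluxes used here, the sketch below implements the classical entropy-conservative flux of Fjordholm, Mishra, and Tadmor for the 1D (non-thermal, non-rotating) shallow water equations; the TRSW fluxes of the paper are more involved.

```python
import numpy as np

G = 9.81  # gravitational acceleration

def ec_flux(hL, uL, hR, uR):
    """Fjordholm-Mishra-Tadmor entropy-conservative flux for the 1D
    shallow water equations, built from arithmetic means {a} of the
    left/right states (water depth h, velocity u)."""
    h_avg = 0.5 * (hL + hR)
    u_avg = 0.5 * (uL + uR)
    h2_avg = 0.5 * (hL**2 + hR**2)
    f_mass = h_avg * u_avg                       # flux for h
    f_mom = h_avg * u_avg**2 + 0.5 * G * h2_avg  # flux for hu
    return f_mass, f_mom

# Consistency check: for equal left/right states the flux reduces to the
# exact physical flux (h u, h u^2 + g h^2 / 2).
print(ec_flux(2.0, 1.0, 2.0, 1.0))  # (2.0, 21.62)
```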
The multispecies Landau collision operator describes two-particle, small-scattering-angle (grazing) collisions in a plasma made up of different species of particles, such as electrons and ions. Recently, a structure-preserving deterministic particle method (arXiv:1910.03080) was developed for the single-species spatially homogeneous Landau equation. This method relies on a regularization of the Landau collision operator so that an approximate solution, which is a linear combination of Dirac delta distributions, is well-defined. Based on a weak form of the regularized Landau equation, the time-dependent locations of the Dirac delta functions satisfy a system of ordinary differential equations. In this work, we extend this particle method to the multispecies case and examine its conservation of mass, momentum, and energy, as well as its entropy-decay properties. We show that the equilibrium distribution of the regularized multispecies Landau equation is a Maxwellian distribution, and we state a critical condition on the regularization parameters that guarantees a species-independent equilibrium temperature. A convergence study comparing an exact multispecies BKW solution to the particle solution shows approximately second-order accuracy. Important physical properties of the particle method, such as conservation, entropy decay, and the equilibrium distribution, are demonstrated with several numerical examples.
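For reference, the observed order in such a convergence study is the log-log slope of the error versus the discretization parameter; the sketch below uses placeholder error values, not the paper's data.

```python
import numpy as np

# Convergence-study bookkeeping: the observed order is the log-log slope
# of the error against the spacing h.  The numbers below are placeholders
# chosen to mimic second-order behavior, not results from the paper.
h = np.array([0.4, 0.2, 0.1])               # successive spacings
err = np.array([3.2e-2, 8.0e-3, 2.0e-3])    # errors vs. the exact BKW solution
orders = np.log(err[:-1] / err[1:]) / np.log(h[:-1] / h[1:])
print(orders)  # ~[2.0, 2.0] -> approximately second-order accuracy
```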
Boundary value problems involving elliptic PDEs such as the Laplace and the Helmholtz equations are ubiquitous in physics and engineering. Many such problems have alternative formulations as integral equations that are mathematically more tractable than their PDE counterparts. However, the integral equation formulation poses a challenge in solving the dense linear systems that arise upon discretization. In cases where iterative methods converge rapidly, existing methods that draw on fast summation schemes such as the Fast Multipole Method are highly efficient and well established. More recently, linear-complexity direct solvers that sidestep convergence issues by directly computing an invertible factorization have been developed. However, their storage and compute costs are high, which limits their ability to solve large-scale problems in practice. In this work, we introduce a distributed-memory parallel algorithm based on an existing direct solver named ``strong recursive skeletonization factorization.'' The analysis of its parallel scalability applies generally to a class of existing methods that exploit so-called strong admissibility. Specifically, we apply low-rank compression to certain off-diagonal matrix blocks in a way that minimizes data movement. Given a compression tolerance, our method constructs an approximate factorization of a discretized integral operator (dense matrix), which can be used to solve linear systems efficiently in parallel. Compared with iterative algorithms, our method is particularly suitable for problems involving ill-conditioned matrices or multiple right-hand sides. Large-scale numerical experiments are presented to demonstrate the performance of our implementation using the Julia language.
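The compression step at the heart of such solvers can be sketched in isolation: an off-diagonal block coupling two well-separated (strongly admissible) point clusters is numerically low-rank. The snippet below uses a truncated SVD as a generic stand-in for the compression schemes used in practice, with a Laplace-kernel block built from random points.

```python
import numpy as np

def compress(B, tol):
    """Truncated SVD of an off-diagonal block: keep singular values above
    tol * s_max, so that B ~= U @ V to the requested tolerance."""
    U, s, Vt = np.linalg.svd(B, full_matrices=False)
    k = int(np.sum(s > tol * s[0]))
    return U[:, :k] * s[:k], Vt[:k, :]

rng = np.random.default_rng(1)
x = rng.random((500, 3))            # source cluster near the origin
y = rng.random((500, 3)) + 5.0      # well-separated target cluster
# Laplace kernel 1/|x - y| between the two clusters: numerically low-rank.
B = 1.0 / np.linalg.norm(x[:, None, :] - y[None, :, :], axis=2)

U, V = compress(B, tol=1e-8)
print(U.shape[1], np.linalg.norm(B - U @ V) / np.linalg.norm(B))
```

The printed rank is far below 500, which is exactly the structure the factorization exploits to keep storage and data movement low.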
We consider various iterative algorithms for solving the linear equation $ax=b$ using a quantum computer operating on the principle of quantum annealing. Assuming that the computer's output is described by the Boltzmann distribution, we show under which conditions the equation-solving algorithms converge and provide an estimate of their convergence rate. We discuss the application of this approach to algorithms using both an infinite number of qubits and a small number of qubits.
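To illustrate the flavor of such algorithms (a toy construction of ours, not the paper's specific scheme), the sketch below models the annealer's output as a Boltzmann sample over a discretized search interval with energy $(ax-b)^2$, and iteratively contracts the interval around each sample.

```python
import numpy as np

# Toy model: each iteration queries an "annealer" that returns a sample
# from the Boltzmann distribution p(x) ~ exp(-E(x)/T), E(x) = (a*x - b)^2,
# over a discretized interval; the interval then shrinks around the sample.
rng = np.random.default_rng(0)
a, b = 3.0, 7.5                      # solve a*x = b  (exact solution 2.5)
center, width, T = 0.0, 10.0, 0.5

for k in range(20):
    xs = np.linspace(center - width, center + width, 256)  # search grid
    E = (a * xs - b) ** 2
    p = np.exp(-(E - E.min()) / T)
    p /= p.sum()
    center = rng.choice(xs, p=p)     # Boltzmann sample = annealer output
    width *= 0.7                     # contract the search interval

print(center, b / a)                 # converges toward 2.5
```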
We propose a method for computing the Lyapunov exponents of renewal equations (delay equations of Volterra type) and of coupled systems of renewal and delay differential equations. The method consists of reformulating the delay equation as an abstract differential equation, reducing the latter to a system of ordinary differential equations via pseudospectral collocation, and applying the standard discrete QR method. The effectiveness of the method is shown experimentally, and a MATLAB implementation is provided.
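The last step of the method is the standard discrete QR algorithm, shown below for a linear test ODE whose Lyapunov exponents are known exactly (the real parts $-1$ and $-3$ of the eigenvalues); the paper applies the same step to the ODE system obtained by pseudospectral collocation, in MATLAB rather than the Python used here.

```python
import numpy as np

# Discrete QR method for Lyapunov exponents, on the matrix ODE Y' = A Y:
# evolve an orthonormal frame, re-orthonormalize by QR at each step, and
# accumulate the logs of the diagonal of R.
A = np.array([[-1.0, 2.0],
              [ 0.0, -3.0]])
dt, steps = 0.01, 20000
Q = np.eye(2)
lyap_sums = np.zeros(2)

for _ in range(steps):
    # one RK4 step for Y' = A Y starting from the current frame Q
    k1 = A @ Q
    k2 = A @ (Q + 0.5 * dt * k1)
    k3 = A @ (Q + 0.5 * dt * k2)
    k4 = A @ (Q + dt * k3)
    Y = Q + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    Q, R = np.linalg.qr(Y)                 # re-orthonormalize the frame
    lyap_sums += np.log(np.abs(np.diag(R)))

print(lyap_sums / (steps * dt))  # ~[-1, -3]
```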
The monotonicity of the discrete Laplacian implies the discrete maximum principle, which in general does not hold for high-order schemes. The $Q^2$ spectral element method has been proven monotone on a uniform rectangular mesh. In this paper we prove the monotonicity of the $Q^2$ spectral element method on quasi-uniform rectangular meshes under certain mesh constraints. In particular, we propose a relaxed version of Lorenz's condition for proving monotonicity.
We derive information-theoretic generalization bounds for supervised learning algorithms based on the information contained in predictions rather than in the output of the training algorithm. These bounds improve upon the existing information-theoretic bounds, are applicable to a wider range of algorithms, and address two key challenges: (a) they give meaningful results for deterministic algorithms, and (b) they are significantly easier to estimate. We show experimentally that the proposed bounds closely follow the generalization gap in practical scenarios for deep learning.