We propose and analyze a space-time virtual element method for the discretization of the heat equation in a space-time cylinder, based on a standard Petrov-Galerkin formulation. Local discrete functions are solutions to a heat equation problem with polynomial data. Global virtual element spaces are nonconforming in space, so that the analysis and the design of the method are independent of the spatial dimension. The information between time slabs is transmitted by means of upwind terms involving polynomial projections of the discrete functions. We prove well-posedness and optimal error estimates for the scheme, and validate them with several numerical tests.
Symmetry is a cornerstone of much of mathematics, and many probability distributions possess symmetries characterized by their invariance to a collection of group actions. Thus, many mathematical and statistical methods rely on such symmetry holding and ostensibly fail if symmetry is broken. This work considers under what conditions a sequence of probability measures asymptotically gains such symmetry or invariance to a collection of group actions. Considering the many symmetries of the Gaussian distribution, this work effectively proposes a non-parametric type of central limit theorem. That is, a Lipschitz function of a high dimensional random vector will be asymptotically invariant to the actions of certain compact topological groups. Applications of this include a partial law of the iterated logarithm for uniformly random points in an $\ell_p^n$-ball and an asymptotic equivalence between classical parametric statistical tests and their randomization counterparts even when invariance assumptions are violated.
A numerical method is proposed for the simulation of composite open quantum systems. It is based on Lindblad master equations and adiabatic elimination: each subsystem is assumed to converge exponentially towards a stationary subspace, slightly impacted by some decoherence channels and weakly coupled to the other subsystems. The method rests on a perturbation analysis with an asymptotic expansion, exploits the resulting formulation of the slow dynamics with reduced dimension, and relies on the invariant operators of the local and nominal dissipative dynamics attached to each subsystem. The second-order expansion can be computed with only local numerical calculations, avoiding computations on the tensor-product Hilbert space attached to the full system. This numerical method is particularly well suited for autonomous quantum error correction schemes. Simulations of such reduced models agree with full-model simulations for typical gates acting on one and two cat-qubits (Z, ZZ and CNOT) when the mean photon number of each cat-qubit is less than 8. For larger mean photon numbers and gates with three cat-qubits (ZZZ and CCNOT), full-model simulations are almost impossible, whereas reduced-model simulations remain accessible. In particular, they capture both the dominant phase-flip error rate and the very small bit-flip error rate with its exponential suppression versus the mean photon number.
Directional interpolation is a fast and efficient compression technique for high-frequency Helmholtz boundary integral equations, but it requires a very large amount of storage in its original form. Algebraic recompression can significantly reduce the storage requirements and speed up the solution process accordingly. During the recompression process, weight matrices are required to correctly measure the influence of different basis vectors on the final result, and for highly accurate approximations, these weight matrices require more storage than the final compressed matrix. We present a compression method for the weight matrices and demonstrate that it introduces only a controllable error to the overall approximation. Numerical experiments show that the new method leads to a significant reduction in storage requirements.
Differential geometric approaches are ubiquitous in several fields of mathematics, physics and engineering, and their discretizations enable the development of network-based mathematical and computational frameworks, which are essential for large-scale data science. The Forman-Ricci curvature (FRC) - a statistical measure based on Riemannian geometry and designed for networks - is known for its high capacity for extracting geometric information from complex networks. However, extracting information from dense networks remains challenging due to the combinatorial explosion of high-order network structures. Motivated by this challenge, we develop a set-theoretic representation theory for high-order network cells and FRC, together with their associated concepts and properties, which provides an alternative and efficient formulation for computing high-order FRC in complex networks. We provide pseudo-code, a software implementation named FastForman, and a benchmark comparison with alternative implementations. Crucially, our representation theory reveals previous computational bottlenecks and accelerates the computation of FRC. As a consequence, our findings open new research possibilities in complex systems where higher-order geometric computations are required.
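As an illustration of the kind of quantity involved (a toy sketch, not the FastForman implementation, whose representation theory covers high-order cells), in the simplest setting of an unweighted, undirected simple graph with no higher-order contributions the edge-level Forman-Ricci curvature reduces to $F(u,v) = 4 - \deg(u) - \deg(v)$:

```python
# Toy sketch (not FastForman): classical edge-level Forman-Ricci curvature
# for an unweighted, undirected simple graph with no higher-order cells,
# where the formula reduces to F(u, v) = 4 - deg(u) - deg(v).
from collections import defaultdict

def forman_curvature(edges):
    deg = defaultdict(int)
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return {(u, v): 4 - deg[u] - deg[v] for u, v in edges}

# On a 4-cycle every vertex has degree 2, so every edge has curvature 0.
print(forman_curvature([(0, 1), (1, 2), (2, 3), (3, 0)]))
```

The combinatorial explosion mentioned in the abstract arises when higher-order cells (triangles, tetrahedra, ...) enter the formula, which this sketch deliberately omits.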
We present a semi-Lagrangian characteristic mapping method for the incompressible Euler equations on a rotating sphere. The numerical method uses a spatio-temporal discretization of the inverse flow map generated by the Eulerian velocity as a composition of sub-interval flows formed by $C^1$ spherical spline interpolants. This approximation technique is capable of resolving sub-grid scales generated over time without increasing the spatial resolution of the computational grid. The numerical method is analyzed and validated using standard test cases, yielding third-order accuracy in the supremum norm. Numerical experiments illustrating the unique resolution properties of the method are performed and demonstrate the ability to reproduce the forward energy cascade at sub-grid scales by upsampling the numerical solution.
We present an energy/entropy-stable and high-order accurate finite difference method for solving the linear/nonlinear shallow water equations (SWE) in vector invariant form, using the newly developed dual-pairing (DP) and dispersion-relation-preserving (DRP) summation-by-parts (SBP) finite difference operators. We derive new well-posed boundary conditions for the SWE in one space dimension, formulated in terms of fluxes and applicable to linear and nonlinear problems. For nonlinear problems, entropy stability ensures the boundedness of numerical solutions, but it does not guarantee convergence. An adequate amount of numerical dissipation is necessary to control high-frequency errors, which could otherwise ruin numerical simulations. Using the dual-pairing SBP framework, we derive a high-order accurate nonlinear hyper-viscosity operator which dissipates entropy and enstrophy. The hyper-viscosity operator effectively tames oscillations from shocks and discontinuities, and eliminates spurious high-frequency grid-scale errors. The numerical method is most suitable for simulations of the sub-critical flows typically observed in atmospheric and geostrophic flow problems. We prove a priori error estimates for the semi-discrete approximations of both the linear and nonlinear SWE. We verify convergence, accuracy and the well-balanced property via the method of manufactured solutions (MMS) and canonical test problems such as the dam break, lake at rest, and a two-dimensional rotating and merging vortex problem.
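For readers unfamiliar with the SBP framework: a first-derivative SBP operator has the form $D = H^{-1}Q$ with $H$ symmetric positive definite and $Q + Q^T = B = \mathrm{diag}(-1, 0, \dots, 0, 1)$, the discrete analogue of integration by parts. The following sketch builds the classical second-order operator (not the DP/DRP operators used in the paper) and checks the SBP property numerically:

```python
import numpy as np

def sbp_d1(n, h):
    # Classical second-order first-derivative SBP operator D = H^{-1} Q:
    # central differences in the interior, one-sided at the boundaries.
    H = h * np.eye(n)
    H[0, 0] = H[-1, -1] = h / 2.0
    Q = np.zeros((n, n))
    for i in range(n - 1):
        Q[i, i + 1] = 0.5
        Q[i + 1, i] = -0.5
    Q[0, 0] = -0.5
    Q[-1, -1] = 0.5
    return H, Q, np.linalg.solve(H, Q)

n, h = 11, 0.1
H, Q, D = sbp_d1(n, h)
B = np.zeros((n, n)); B[0, 0] = -1.0; B[-1, -1] = 1.0
# SBP property: Q + Q^T = B mimics integration by parts discretely.
print(np.allclose(Q + Q.T, B))  # True
# D differentiates linear functions exactly.
x = np.linspace(0.0, 1.0, n)
print(np.allclose(D @ x, np.ones(n)))  # True
```

The SBP property is what makes discrete energy/entropy estimates mimic the continuous ones; the DP/DRP operators of the paper share this structure but with different coefficient stencils.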
The equioscillation theorem interleaves the Haar condition, the existence, uniqueness and strong uniqueness of the optimal Chebyshev approximation, and its characterization by the equioscillation condition in a way that cannot extend to multivariate approximation: Rice~[\emph{Transactions of the AMS}, 1963] says ``A form of alternation is still present for functions of several variables. However, there is apparently no simple method of distinguishing between the alternation of a best approximation and the alternation of other approximating functions. This is due to the fact that there is no natural ordering of the critical points.'' In addition, in the context of multivariate approximation the Haar condition is typically not satisfied and strong uniqueness may or may not hold. The present paper proposes a multivariate equioscillation theorem, which includes such a simple alternation condition on error extrema, existence, and a sufficient condition for strong uniqueness. To this end, the relationship between the properties interleaved in the univariate equioscillation theorem is clarified. First, a weak Haar condition is proposed, which simply implies existence. Second, the equioscillation condition is shown to be equivalent to the optimality condition of convex optimization, hence characterizing optimality independently from uniqueness. It is reformulated as the synchronized oscillations between the error extrema and the components of a related Haar matrix kernel vector, in a way that applies to multivariate approximation. Third, an additional requirement on the involved Haar matrix and its kernel vector, called strong equioscillation, is proved to be sufficient for the strong uniqueness of the solution. These three disconnected conditions give rise to a multivariate equioscillation theorem, where existence, characterization by equioscillation, and strong uniqueness are separated, without involving the too restrictive Haar condition.
Remarkably, relying on the optimality condition of convex optimization gives rise to a quite simple proof. Instances of multivariate problems with strongly unique, unique but not strongly unique, and non-unique solutions are presented to illustrate the scope of the theorem.
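The univariate equioscillation behind the theorem can be recalled with a classical example: the best affine Chebyshev approximation of $f(x) = x^2$ on $[-1,1]$ is the constant $p(x) = 1/2$, whose error equioscillates with amplitude $1/2$ at the three points $-1, 0, 1$. A quick numerical check (illustrative only, not taken from the paper):

```python
import numpy as np

# Classical example: the best degree-1 (affine) Chebyshev approximation of
# f(x) = x^2 on [-1, 1] is p(x) = 1/2. The error e(x) = x^2 - 1/2
# equioscillates at x = -1, 0, 1 with alternating signs and amplitude 1/2.
x = np.linspace(-1.0, 1.0, 2001)
err = x**2 - 0.5
print(np.max(np.abs(err)))         # 0.5, attained at the three extrema
print(err[0], err[1000], err[-1])  # approx 0.5, -0.5, 0.5 (alternating)
```

With $n+2 = 3$ alternating extrema for a degree-$n=1$ approximation, this is exactly the sign pattern that the univariate theorem certifies as optimal.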
In this paper, we view the statistical inverse problems of partial differential equations (PDEs) as PDE-constrained regression and focus on learning the prediction function of the prior probability measures. From this perspective, we propose general generalization bounds for learning infinite-dimensionally defined prior measures in the style of probably approximately correct (PAC) Bayesian learning theory. The theoretical framework is rigorously defined on an infinite-dimensional separable function space, which makes the theory intimately connected to the usual infinite-dimensional Bayesian inverse approach. Inspired by the concept of $\alpha$-differential privacy, we propose a generalized condition (covering the Gaussian measures widely employed in the statistical inverse problems of PDEs) that allows the learned prior measures to depend on the measured data; in other words, a prediction function with the measured data as input and the prior measure as output can be introduced. After presenting the general theory, we give the specific settings of linear and nonlinear problems, which can easily be cast into our general framework to obtain concrete generalization bounds. Based on these generalization bounds, we formulate infinite-dimensionally well-defined practical algorithms. Finally, numerical examples of the backward diffusion and Darcy flow problems are provided to demonstrate the potential applications of the proposed approach in learning the prediction function of the prior probability measures.
In this paper, we consider an inverse space-dependent source problem for a time-fractional diffusion equation. To deal with the ill-posedness of the problem, we transform it into an optimal control problem with total variation (TV) regularization. In contrast to the classical Tikhonov model incorporating $L^2$ penalty terms, the inclusion of a TV term proves advantageous for reconstructing solutions that exhibit discontinuities or piecewise constancy. The control problem is approximated by a fully discrete scheme, and convergence results are provided within this framework. Furthermore, a linearized primal-dual iterative algorithm is proposed to solve the discrete control model based on an equivalent saddle-point reformulation, and several numerical experiments are presented to demonstrate the efficiency of the algorithm.
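The paper's linearized primal-dual algorithm targets the fully discrete fractional control model; as a generic illustration of the primal-dual mechanism for a TV-regularized problem, the following toy sketch applies a Chambolle-Pock style iteration to one-dimensional TV denoising (function names and step sizes are hypothetical choices, not the paper's scheme):

```python
import numpy as np

def tv_denoise_pd(f, lam, n_iter=500, tau=0.25, sigma=0.25):
    # Primal-dual iteration for min_u 0.5*||u - f||^2 + lam*||D u||_1,
    # where D is the forward-difference operator. The dual variable p
    # lives on the edges; its feasible set is the box [-lam, lam].
    n = len(f)
    u = f.copy()
    p = np.zeros(n - 1)
    u_bar = u.copy()
    for _ in range(n_iter):
        # Dual ascent followed by projection onto the box.
        p = np.clip(p + sigma * np.diff(u_bar), -lam, lam)
        # Negative divergence: (D^T p)_j = p_{j-1} - p_j.
        dtp = np.zeros(n)
        dtp[:-1] -= p
        dtp[1:] += p
        # Primal descent: proximal step of the quadratic data term.
        u_new = (u - tau * dtp + tau * f) / (1.0 + tau)
        u_bar = 2.0 * u_new - u   # over-relaxation with theta = 1
        u = u_new
    return u

# Piecewise-constant signal plus noise: TV keeps the jump, flattens the noise.
rng = np.random.default_rng(0)
f = np.concatenate([np.zeros(50), np.ones(50)]) + 0.05 * rng.standard_normal(100)
u = tv_denoise_pd(f, lam=0.5)
```

The step-size condition $\tau\sigma\|D\|^2 \le 1$ holds here since $\|D\|^2 \le 4$; the same saddle-point structure underlies the paper's saddle-point reformulation, with the PDE-constrained data term in place of the quadratic.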
Covariance matrices of random vectors contain information that is crucial for modelling. Certain structures and patterns of the covariances (or correlations) may be used to justify parametric models, e.g., autoregressive models. Until now, there have been only a few approaches for testing such covariance structures systematically and in a unified way. In the present paper, we propose such a unified testing procedure and exemplify the approach with a large variety of covariance structure models. This includes common structures such as diagonal matrices, Toeplitz matrices, and compound symmetry, but also the more involved autoregressive matrices. We propose hypothesis tests for these structures and use bootstrap techniques for better small-sample approximation. The structure of the proposed tests invites adaptations to other covariance patterns through an appropriate choice of the hypothesis matrix. We prove the correctness of the tests for large sample sizes, and the proposed methods require only weak assumptions. With the help of a simulation study, we assess the small-sample properties of the tests. We also analyze a real data set to illustrate the application of the procedure.
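To make the idea concrete (a toy illustration, not the paper's procedure, which encodes structures through a hypothesis matrix): testing whether a covariance matrix is diagonal can be done with a simple statistic on the off-diagonal sample covariances, together with a bootstrap that resamples coordinates independently to enforce the null. All function names here are hypothetical:

```python
import numpy as np

def diag_cov_test(X, n_boot=500, seed=0):
    # Toy test of H0: Cov(X) is diagonal. The statistic is n times the
    # squared norm of the off-diagonal sample covariances; the bootstrap
    # resamples each coordinate independently, which makes the covariance
    # diagonal under the null while preserving the marginals.
    rng = np.random.default_rng(seed)
    n, d = X.shape
    S = np.cov(X, rowvar=False)
    mask = ~np.eye(d, dtype=bool)
    stat = n * np.sum(S[mask] ** 2)
    boot = np.empty(n_boot)
    for b in range(n_boot):
        Xb = np.column_stack([rng.choice(X[:, j], size=n) for j in range(d)])
        Sb = np.cov(Xb, rowvar=False)
        boot[b] = n * np.sum(Sb[mask] ** 2)
    return stat, np.mean(boot >= stat)  # statistic and bootstrap p-value

rng = np.random.default_rng(1)
# Independent coordinates: the statistic stays small.
X0 = rng.standard_normal((200, 3))
stat0, p0 = diag_cov_test(X0)
# Strongly correlated coordinates: the statistic blows up.
X1 = rng.standard_normal((200, 3))
X1[:, 1] = X1[:, 0] + 0.1 * rng.standard_normal(200)
stat1, p1 = diag_cov_test(X1)
```

Swapping the diagonality mask for a different pattern (Toeplitz bands, compound symmetry) changes which entries of the vectorized covariance are constrained, which is the adaptability the abstract refers to.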