Implicit solvers for atmospheric models are often accelerated via the solution of a preconditioned system. For block preconditioners this typically involves the factorisation of the (approximate) Jacobian resulting from linearisation of the coupled system into a Helmholtz equation for some function of the pressure. Here we present a preconditioner for the compressible Euler equations with a flux-form representation of the potential temperature on the Lorenz grid using mixed finite elements. This formulation allows for spatial discretisations that conserve both energy and potential temperature variance. By introducing the dry thermodynamic entropy as an auxiliary variable for the solution of the algebraic system, the resulting preconditioner is shown to have a block structure similar to that of an existing preconditioner for the material-form transport of potential temperature on the Charney-Phillips grid. For a 1D thermal bubble configuration, this new formulation is also shown to be more efficient and stable than both the material-form transport of potential temperature on the Charney-Phillips grid and a previous Helmholtz preconditioner for the flux-form transport of density-weighted potential temperature on the Lorenz grid. The new preconditioner is further verified against standard two-dimensional test cases in a vertical slice geometry.
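As a schematic illustration of the block-preconditioning idea described above (a minimal sketch; the operator symbols are generic placeholders, not the paper's exact blocks), eliminating the velocity unknowns from a linearized two-by-two block system yields a Helmholtz-type Schur complement for the pressure variable:

```latex
\[
\begin{pmatrix} M_u & G \\ D & M_\pi \end{pmatrix}
\begin{pmatrix} \delta u \\ \delta \pi \end{pmatrix}
=
\begin{pmatrix} r_u \\ r_\pi \end{pmatrix}
\quad\Longrightarrow\quad
\underbrace{\left( M_\pi - D\, M_u^{-1} G \right)}_{\text{Helmholtz operator}}
\delta \pi = r_\pi - D\, M_u^{-1} r_u .
\]
```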
The local dependence function is important in many applications of probability and statistics. We extend the bivariate local dependence function introduced by Bairamov and Kotz (2000) and further developed by Bairamov et al. (2003) to a three-variate and, more generally, multivariate local dependence function characterizing the dependence between three or more random variables at a given point. The definition and properties of the three-variate local dependence function are discussed. An example of a three-variate local dependence function for an underlying three-variate normal distribution is presented. Graphs and tables with numerical values are provided. The multivariate extension of the local dependence function, which can characterize the dependence between multiple random variables at a specific point, is also discussed.
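For concreteness, one form of the bivariate local dependence function in this line of work (recalled here as a sketch; the exact normalization should be checked against Bairamov and Kotz (2000)) is a correlation localized at the point $(x,y)$ through the conditional means:

```latex
\[
H(x,y) = \frac{\mathbb{E}\big[(X - m_1(y))(Y - m_2(x))\big]}
              {\sqrt{\mathbb{E}\big[(X - m_1(y))^2\big]\,
                     \mathbb{E}\big[(Y - m_2(x))^2\big]}},
\qquad m_1(y) = \mathbb{E}[X \mid Y = y],\quad m_2(x) = \mathbb{E}[Y \mid X = x],
\]
```

with the three-variate version localizing the dependence analogously at a point $(x,y,z)$.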
The unpredictability of random numbers is fundamental to both digital security and applications that fairly distribute resources. However, existing random number generators have limitations: the generation processes cannot be fully traced, audited, and certified to be unpredictable. The algorithmic steps used in pseudorandom number generators are auditable, but they cannot guarantee that their outputs were a priori unpredictable given knowledge of the initial seed. Device-independent quantum random number generators can ensure that the source of randomness was unknown beforehand, but the steps used to extract the randomness are vulnerable to tampering. Here, for the first time, we demonstrate a fully traceable random number generation protocol based on device-independent techniques. Our protocol extracts randomness from unpredictable non-local quantum correlations, and uses distributed intertwined hash chains to cryptographically trace and verify the extraction process. This protocol is at the heart of a public traceable and certifiable quantum randomness beacon that we have launched. Over the first 40 days of operation, we completed the protocol 7434 out of 7454 attempts, a success rate of 99.7%. Each time the protocol succeeded, the beacon emitted a pulse of 512 bits of traceable randomness. The bits are certified to be uniform with error times actual success probability bounded by $2^{-64}$. The generation of certifiable and traceable randomness represents one of the first public services that operates with an entanglement-derived advantage over comparable classical approaches.
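To make the hash-chain tracing concrete, here is a minimal single-chain sketch in Python (the deployed beacon uses distributed, intertwined chains across multiple parties; the record fields and hash choice below are illustrative assumptions, not the protocol's actual format):

```python
import hashlib
import json
import time

def record_hash(record: dict) -> str:
    """Canonical SHA-512 digest of a pulse record (illustrative only)."""
    return hashlib.sha512(json.dumps(record, sort_keys=True).encode()).hexdigest()

def emit_pulse(prev_hash: str, randomness_hex: str) -> dict:
    """Chain a new pulse to its predecessor: tampering with any past
    pulse invalidates every later digest in the chain."""
    pulse = {
        "timestamp": time.time(),
        "randomness": randomness_hex,
        "prev": prev_hash,
    }
    pulse["hash"] = record_hash(pulse)  # digest over the fields above
    return pulse

# genesis pulse followed by a short chain; a verifier recomputes digests
p0 = emit_pulse("0" * 128, "deadbeef")
p1 = emit_pulse(p0["hash"], "cafef00d")
assert p1["prev"] == p0["hash"]
```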
We consider quantum circuit models where the gates are drawn from arbitrary gate ensembles given by probabilistic distributions over certain gate sets and circuit architectures, which we call stochastic quantum circuits. Of main interest in this work is the speed of convergence of stochastic circuits with different gate ensembles and circuit architectures to unitary t-designs. A key motivation for this theory is the varying preference for different gates and circuit architectures in different practical scenarios. In particular, it provides a versatile framework for devising efficient circuits for implementing $t$-designs and relevant applications including random circuit and scrambling experiments, as well as benchmarking the performance of gates and circuit architectures. We examine various important settings in depth. A key aspect of our study is an "ironed gadget" model, which allows us to systematically evaluate and compare the convergence efficiency of entangling gates and circuit architectures. Particularly notable results include i) gadgets of two-qubit gates with KAK coefficients $\left(\frac{\pi}{4}-\frac{1}{8}\arccos(\frac{1}{5}),\frac{\pi}{8},\frac{1}{8}\arccos(\frac{1}{5})\right)$ (which we call $\chi$ gates) directly form exact 2- and 3-designs; ii) the iSWAP gate family achieves the best efficiency for convergence to 2-designs under mild conjectures with numerical evidence, even outperforming the Haar-random gate, for generic many-body circuits; iii) iSWAP + complete graph achieves the best efficiency for convergence to 2-designs among all graph circuits. A variety of numerical results are provided to complement our analysis. We also derive robustness guarantees for our analysis against gate perturbations. Additionally, we provide a cursory analysis of gates with higher locality and find that the Margolus gate outperforms various other well-known gates.
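The $\chi$ gate quoted above can be written down directly from its KAK (canonical) coefficients. A minimal numerical sketch, assuming the common convention $\exp\!\left(-i(c_x XX + c_y YY + c_z ZZ)\right)$ for the canonical two-qubit interaction (sign and ordering conventions vary, and single-qubit factors are omitted):

```python
import numpy as np
from scipy.linalg import expm

# Pauli matrices
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def canonical_gate(cx, cy, cz):
    """Two-qubit gate exp(-i (cx XX + cy YY + cz ZZ)) built from its
    KAK (canonical) coefficients; single-qubit factors omitted."""
    H = cx * np.kron(X, X) + cy * np.kron(Y, Y) + cz * np.kron(Z, Z)
    return expm(-1j * H)

# the chi-gate coefficients quoted in the abstract
cx = np.pi / 4 - np.arccos(1 / 5) / 8
cy = np.pi / 8
cz = np.arccos(1 / 5) / 8
chi = canonical_gate(cx, cy, cz)
assert np.allclose(chi @ chi.conj().T, np.eye(4))  # unitarity check
```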
In decision-making, maxitive functionals are used for worst-case and best-case evaluations. Maxitivity gives rise to a rich structure that is well studied in the context of the pointwise order. In this article, we investigate maxitivity with respect to general preorders and provide a representation theorem for such functionals. The results are illustrated for different stochastic orders from the literature, including the usual stochastic order, the increasing convex/concave order, and the dispersive order.
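To fix ideas in the classical pointwise-order case (the article's general-preorder setting goes beyond this), the prototypical maxitive functional is a supremum weighted by a "possibility density" $\pi$:

```latex
\[
\phi(f) = \sup_{x} \min\big(f(x), \pi(x)\big), \qquad
\phi(f \vee g) = \phi(f) \vee \phi(g),
\]
```

which is maxitive because the pointwise minimum distributes over the pointwise maximum and suprema commute with maxima.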
In this work we consider an elliptic problem, referred to as the Ventcel problem, involving a second order term on the domain boundary (the Laplace-Beltrami operator). A variational formulation of the Ventcel problem is studied, leading to a finite element discretization. The focus is on the construction of high order curved meshes for the discretization of the physical domain and on the definition of the lift operator, which is aimed to transform a function defined on the mesh domain into a function defined on the physical one. This lift is defined in such a way as to satisfy adapted properties on the boundary, relative to the trace operator. The Ventcel problem approximation is investigated both in terms of geometrical error and of finite element approximation error. Error estimates are obtained in terms of both the mesh order r $\ge$ 1 and the finite element degree k $\ge$ 1, whereas such estimates have usually been considered in the isoparametric case so far, involving a single parameter k = r. The numerical experiments we conducted, in dimensions 2 and 3, allow us to validate the results obtained and proved on the a priori error estimates depending on the two parameters k and r. A numerical comparison is made between the errors using the former lift definition and the lift defined in this work, establishing an improvement in the convergence rate of the error in the latter case.
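For readers unfamiliar with the Ventcel condition, one common model form of such a problem (coefficients and lower-order terms vary in the literature; this is a representative sketch, not necessarily the exact problem studied here) reads

```latex
\[
-\Delta u = f \ \text{in } \Omega, \qquad
-\Delta_\Gamma u + \partial_n u + u = g \ \text{on } \Gamma = \partial\Omega,
\]
```

with $\Delta_\Gamma$ the Laplace-Beltrami operator, and its variational formulation seeks $u \in H^1(\Omega)$ with trace in $H^1(\Gamma)$ such that

```latex
\[
\int_\Omega \nabla u \cdot \nabla v \,\mathrm{d}x
+ \int_\Gamma \nabla_\Gamma u \cdot \nabla_\Gamma v \,\mathrm{d}\sigma
+ \int_\Gamma u\, v \,\mathrm{d}\sigma
= \int_\Omega f v \,\mathrm{d}x + \int_\Gamma g v \,\mathrm{d}\sigma
\]
```

for all test functions $v$ in the same space.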
We consider linear models with scalar responses and covariates from a separable Hilbert space. The aim is to detect change points in the error distribution, based on sequential residual empirical distribution functions. Expansions for those estimated functions are more challenging in models with infinite-dimensional covariates than in regression models with scalar or vector-valued covariates, due to a slower rate of convergence of the parameter estimators. Yet the suggested change point test is asymptotically distribution-free and consistent for one-change-point alternatives. In the latter case we also show consistency of a change point estimator.
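A minimal numerical sketch of the kind of statistic involved (a CUSUM functional of sequential residual empirical distribution functions; the actual test statistic, weighting, and critical values in the paper may differ):

```python
import numpy as np

def cusum_residual_statistic(residuals):
    """Kolmogorov-Smirnov-type CUSUM statistic built from sequential
    residual empirical distribution functions (illustrative sketch)."""
    e = np.asarray(residuals, dtype=float)
    n = len(e)
    z = np.sort(e)                             # evaluation grid
    ind = e[:, None] <= z[None, :]             # (i, j): 1{e_i <= z_j}
    partial = np.cumsum(ind, axis=0)           # row k: k * F_k(z)
    full = partial[-1] / n                     # full-sample EDF
    k = np.arange(1, n + 1)[:, None]
    return np.abs((partial - k * full) / np.sqrt(n)).max()

rng = np.random.default_rng(1)
e = np.concatenate([rng.normal(0, 1, 100), rng.normal(1, 1, 100)])
print(f"statistic under a mid-sample mean shift: {cusum_residual_statistic(e):.3f}")
```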
This work deals with the numerical approximation of plasmas that are confined by the effect of a fast oscillating magnetic field (see \cite{Bostan2012}) in the Vlasov model. The presence of this magnetic field induces oscillations in time in the solution of the characteristic equations. Due to its multiscale character, a standard time discretization would lead to an inefficient solver. In this work, time integrators are derived and analyzed for a class of highly oscillatory differential systems. We prove the uniform accuracy property of these time integrators, meaning that the accuracy does not depend on the small parameter $\varepsilon$. Moreover, we construct an extension of the scheme which degenerates, as $\varepsilon\to 0$, into an energy-preserving numerical scheme for the averaged model. Several numerical results illustrate the capabilities of the method.
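As a toy illustration of why uniform accuracy matters (this only demonstrates the multiscale difficulty, not the integrators constructed in the paper), a standard explicit scheme applied to the model oscillatory problem $y' = (i/\varepsilon)\, y$ loses all accuracy at a fixed step size as $\varepsilon \to 0$:

```python
import numpy as np

def euler_error(eps, h=1e-3, T=1.0):
    """Global error of explicit Euler for y' = (i/eps) y, y(0) = 1:
    at a fixed step size the error degrades as eps -> 0."""
    n = int(T / h)
    y = 1.0 + 0.0j
    for _ in range(n):
        y += h * (1j / eps) * y
    return abs(y - np.exp(1j * n * h / eps))

for eps in [1.0, 1e-1, 1e-2, 1e-3]:
    print(f"eps={eps:8.0e}  error={euler_error(eps):.3e}")
```

A uniformly accurate integrator, by contrast, would keep the error at a fixed order in $h$ independently of $\varepsilon$.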
In the present work, strong approximation errors are analyzed for both the spatial semi-discretization and the spatio-temporal full discretization of stochastic wave equations (SWEs) with cubic polynomial nonlinearities and additive noise. The full discretization is achieved by the standard Galerkin finite element method in space and a novel exponential time integrator combined with the averaged vector field approach. The newly proposed scheme is proved to exactly satisfy a trace formula based on an energy functional. Recovering the convergence rates of the scheme, however, meets essential difficulties due to the lack of a global monotonicity condition. To overcome this issue, we derive the exponential integrability property of the considered numerical approximations via the energy functional. Armed with these properties, we obtain the strong convergence rates of the approximations in both the spatial and temporal directions. Finally, numerical results are presented to verify the theoretical findings.
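The averaged vector field (AVF) ingredient mentioned above has a compact generic form. A minimal sketch for a toy Hamiltonian system with a cubic nonlinearity (the Duffing-type oscillator below is an illustrative stand-in, not the SWE discretization of the paper):

```python
import numpy as np

def avf_step(u, h, f, tol=1e-14, maxit=100):
    """One averaged vector field (AVF) step
       u_new = u + h * int_0^1 f((1 - s) u + s u_new) ds,
    with Simpson's rule (exact for cubic f) and fixed-point iteration."""
    v = u.copy()
    for _ in range(maxit):
        avg = (f(u) + 4.0 * f(0.5 * (u + v)) + f(v)) / 6.0
        v_new = u + h * avg
        if np.linalg.norm(v_new - v) < tol:
            return v_new
        v = v_new
    return v

# Duffing-type oscillator: H(q, p) = p^2/2 + q^2/2 + q^4/4
f = lambda u: np.array([u[1], -u[0] - u[0] ** 3])
H = lambda u: 0.5 * u[1] ** 2 + 0.5 * u[0] ** 2 + 0.25 * u[0] ** 4

u, h = np.array([1.0, 0.0]), 0.05
E0 = H(u)
for _ in range(2000):
    u = avf_step(u, h, f)
print(f"energy drift after 2000 steps: {abs(H(u) - E0):.2e}")
```

For a canonical Hamiltonian vector field, the AVF update conserves the energy exactly (up to quadrature and solver tolerance), which is the discrete analogue of the trace formula discussed above.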
When studying the dynamics of incompressible fluids in bounded domains, the only available data often provide average flow rate conditions on portions of the domain's boundary. In engineering applications, a common practice to complete these conditions is to prescribe a Dirichlet condition by assuming a priori a spatial profile for the velocity field. However, this strongly influences the accuracy of the numerical solution. A more mathematically sound approach is to prescribe the flow rate conditions using Lagrange multipliers, resulting in an augmented weak formulation of the Navier-Stokes problem. In this paper, we start from the SIMPLE preconditioner, originally introduced for the standard Navier-Stokes equations, and derive two preconditioners for the monolithic solution of the augmented problem. This can be useful in complex applications where splitting the computation of the velocity/pressure and Lagrange multiplier solutions can be very expensive. In particular, we investigate the numerical performance of the preconditioners in both idealized and real-life scenarios. Finally, we highlight the advantages of treating flow rate conditions with a Lagrange multiplier approach instead of prescribing a Dirichlet condition.
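Schematically (a sketch of one natural extension; the paper derives two specific preconditioners whose exact form may differ), the augmented system couples velocity, pressure, and flow-rate multipliers, and a SIMPLE-type preconditioner replaces $A^{-1}$ by $D^{-1} = \operatorname{diag}(A)^{-1}$ in the Schur complement:

```latex
\[
\mathcal{A} =
\begin{pmatrix}
A & B^T & \Phi^T \\
B & 0 & 0 \\
\Phi & 0 & 0
\end{pmatrix},
\qquad
\widehat{S} = -
\begin{pmatrix} B \\ \Phi \end{pmatrix}
D^{-1}
\begin{pmatrix} B^T & \Phi^T \end{pmatrix},
\]
```

where $\Phi$ encodes the flow rate constraints enforced by the Lagrange multipliers.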
We propose a novel, highly efficient, second-order accurate, long-time unconditionally stable numerical scheme for a class of finite-dimensional nonlinear models that are of importance in geophysical fluid dynamics. The scheme is highly efficient in the sense that only a (fixed) symmetric positive definite linear problem (with varying right-hand sides) is involved at each time step. The solutions to the scheme are uniformly bounded for all time. We show that the scheme is able to capture the long-time dynamics of the underlying geophysical model, with the global attractors as well as the invariant measures of the scheme converging to those of the original model as the step size approaches zero. In our numerical experiments, we take an indirect approach, using long-term statistics to approximate the invariant measures. Our results suggest that the convergence rate of the long-term statistics, as a function of terminal time, is approximately first order using the Jensen-Shannon metric and half order using the L1 metric. This implies that a very long time simulation is needed in order to capture the long-time statistics (climate) correctly to a few significant digits. Nevertheless, the second-order scheme's performance remains superior to that of the first-order one, requiring significantly less time to reach a small neighborhood of statistical equilibrium for a given step size.
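For reference, the Jensen-Shannon comparison of long-run histograms can be sketched in a few lines (the observable, binning, and data below are placeholders; the square root of the divergence is the actual metric):

```python
import numpy as np

def jensen_shannon(p, q, eps=1e-12):
    """Jensen-Shannon divergence between two discrete distributions;
    its square root is the metric used to compare histograms."""
    p = p / p.sum()
    q = q / q.sum()
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log((a + eps) / (b + eps)))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# long-run histograms of one observable from two runs (placeholder data)
rng = np.random.default_rng(0)
x_ref = rng.standard_normal(100_000)  # stand-in for a long reference run
x_new = rng.standard_normal(10_000)   # stand-in for a shorter run
bins = np.linspace(-4, 4, 81)
h_ref, _ = np.histogram(x_ref, bins=bins)
h_new, _ = np.histogram(x_new, bins=bins)
js = jensen_shannon(h_ref.astype(float), h_new.astype(float))
print(f"JS divergence: {js:.4e}, JS metric: {np.sqrt(js):.4e}")
```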