We construct an interpolatory high-order cubature rule to compute integrals of smooth functions over self-affine sets with respect to an invariant measure. The main difficulty is the computation of the cubature weights, which we characterize algebraically, by exploiting a self-similarity property of the integral. We propose an $h$-version and a $p$-version of the cubature, present an error analysis and conduct numerical experiments.
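To illustrate the self-similarity behind the weight computation, consider (as a toy setting for illustration, not the paper's construction) an iterated function system of affine contractions $S_i$ with probability weights $p_i$; its invariant measure $\mu$ satisfies
\[
\int f \,\mathrm{d}\mu \;=\; \sum_i p_i \int f\bigl(S_i(x)\bigr)\,\mathrm{d}\mu(x),
\]
so for polynomial $f$ the right-hand side involves only moments of equal or lower degree, and the moments of $\mu$ solve a linear system. For the middle-thirds Cantor measure ($S_1(x)=x/3$, $S_2(x)=x/3+2/3$, $p_1=p_2=\tfrac12$) the first moment obeys $m_1=\tfrac12\cdot\tfrac{m_1}{3}+\tfrac12\bigl(\tfrac{m_1}{3}+\tfrac23\bigr)$, hence $m_1=\tfrac12$; interpolatory cubature weights can then be obtained by imposing exactness on a polynomial basis.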
In the recent breakthrough work \cite{xu2023lack}, a rigorous numerical analysis was conducted for the numerical solution of a scalar ODE containing a cubic polynomial derived from the Allen-Cahn equation. It was found that, under unique solvability and energy stability, only the implicit Euler method converges to the correct steady state for every initial value $u_0$. All the other commonly used second-order numerical schemes, by contrast, exhibit sensitivity to the initial condition and may converge to an incorrect equilibrium state as $t_n\to\infty$. This indicates that energy stability may not be decisive for the long-term qualitative correctness of numerical solutions. We find that another fundamental property of the solution, namely monotonicity rather than energy stability, is sufficient to ensure that many common numerical schemes converge to the correct equilibrium state. This leads us to introduce a critical step size $h^*=h^*(u_0,\epsilon)$ that ensures the monotonicity and unique solvability of the numerical solutions, where $\epsilon\in(0,1)$ is the scaling parameter. We prove that for the implicit Euler scheme $h^*=h^*(\epsilon)$, i.e., the critical step size is independent of $u_0$ and depends only on $\epsilon$. Hence, regardless of the initial value, the simulation is guaranteed to be correct whenever $h<h^*$. For various other numerical methods, however, no matter how small the step size $h$ is chosen in advance, there are always initial values that cause simulation errors. In fact, for these numerical methods, we prove that $\inf_{u_0\in \mathbb{R}}h^*(u_0,\epsilon)=0$. Various numerical experiments confirm the theoretical analysis.
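As a rough illustration of the kind of computation involved (a minimal sketch assuming the common scalar form $u'=(u-u^3)/\epsilon^2$, which may differ from the exact ODE studied in \cite{xu2023lack}), the following implements the implicit Euler step by solving its cubic update equation with Newton's method:
\begin{verbatim}
# Minimal sketch (assumed model form, not necessarily the paper's exact ODE):
# implicit Euler for u' = (u - u^3)/eps^2. Each step solves the cubic
# (h/eps^2) u^3 + (1 - h/eps^2) u - u_prev = 0 by Newton's method; for
# h < eps^2 this cubic is strictly increasing, so the step is uniquely solvable.

def implicit_euler_step(u_prev, h, eps, tol=1e-14, max_iter=100):
    a = h / eps**2
    u = u_prev                      # Newton initial guess
    for _ in range(max_iter):
        f  = a * u**3 + (1.0 - a) * u - u_prev
        df = 3.0 * a * u**2 + (1.0 - a)
        u_next = u - f / df
        if abs(u_next - u) < tol:
            break
        u = u_next
    return u_next

def simulate(u0, h, eps, n_steps):
    u = u0
    for _ in range(n_steps):
        u = implicit_euler_step(u, h, eps)
    return u

# For u0 > 1 the exact solution decreases monotonically to the stable
# steady state +1; with a sufficiently small h the scheme reproduces this.
print(simulate(u0=10.0, h=0.01, eps=0.5, n_steps=2000))
\end{verbatim}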
We discuss nonparametric estimation of the trend coefficient in models governed by a stochastic differential equation driven by a multiplicative stochastic volatility.
Robust and stable high-order numerical methods for solving partial differential equations are attractive because they are efficient on modern and next-generation hardware architectures. However, the design of provably stable numerical methods for nonlinear hyperbolic conservation laws poses a significant challenge. We present the dual-pairing (DP) and upwind summation-by-parts (SBP) finite difference (FD) framework for accurate and robust numerical approximations of nonlinear conservation laws. The framework has an inbuilt "limiter" whose goal is to detect and effectively resolve regions where the solution is poorly resolved and/or discontinuities are present. The DP SBP FD operators are a dual pair of backward and forward FD stencils, which together preserve the SBP property. In addition, the DP SBP FD operators are designed to be upwind, that is, they carry some innate dissipation everywhere, as opposed to traditional SBP and collocated discontinuous Galerkin spectral element methods, which can induce dissipation only through numerical fluxes acting at element interfaces. We combine the DP SBP operators with skew-symmetric and upwind flux splittings of nonlinear hyperbolic conservation laws. Our semi-discrete approximation is provably entropy-stable for arbitrary nonlinear hyperbolic conservation laws. The framework is high-order accurate, provably entropy-stable, convergent, and avoids several pitfalls of current state-of-the-art high-order methods. We give specific examples using the inviscid Burgers' equation, the nonlinear shallow water equations, and the compressible Euler equations of gas dynamics. Numerical experiments are presented to verify accuracy and demonstrate the robustness of our numerical framework.
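A minimal periodic-grid illustration of the dual-pairing idea (first-order stencils with a trivial norm matrix; these are not the authors' boundary closures or high-order operators) is sketched below; it checks the SBP-type compatibility of the pair and the built-in dissipation of their difference:
\begin{verbatim}
# Minimal periodic-grid sketch (not the paper's operators or boundary closures):
# forward/backward first-order differences form a dual pair (D+, D-) with the
# trivial norm matrix H = h*I. On a periodic grid the boundary matrix vanishes,
# so the dual-pairing SBP property reads H D+ + D-^T H = 0, and the symmetric
# part of D+, namely S = (D+ - D-)/2, is negative semidefinite (dissipative).
import numpy as np

n = 32
h = 1.0 / n
I = np.eye(n)
Dp = (np.roll(I, 1, axis=1) - I) / h    # forward difference (upwind for leftward waves)
Dm = (I - np.roll(I, -1, axis=1)) / h   # backward difference (upwind for rightward waves)
H = h * I                               # diagonal norm (quadrature) matrix

print(np.allclose(H @ Dp + Dm.T @ H, 0.0))          # dual-pairing SBP property: True
S = (Dp - Dm) / 2.0                                  # innate dissipation operator
print(np.max(np.linalg.eigvalsh(H @ S)) <= 1e-10)    # negative semidefinite: True
\end{verbatim}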
Models implicitly defined through a random simulator of a process have become widely used in scientific and industrial applications in recent years. However, simulation-based inference methods for such implicit models, like approximate Bayesian computation (ABC), often scale poorly as data size increases. We develop a scalable inference method for implicitly defined models using a metamodel for the Monte Carlo log-likelihood estimator derived from simulations. This metamodel characterizes both statistical and simulation-based randomness in the distribution of the log-likelihood estimator across different parameter values. Our metamodel-based method quantifies uncertainty in parameter estimation in a principled manner, leveraging the local asymptotic normality of the mean function of the log-likelihood estimator. We apply this method to construct accurate confidence intervals for parameters of partially observed Markov process models where the Monte Carlo log-likelihood estimator is obtained using the bootstrap particle filter. We numerically demonstrate that our method enables accurate and highly scalable parameter inference across several examples, including a mechanistic compartment model for infectious diseases.
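A schematic of the metamodel idea (a deliberately simplified sketch with a synthetic quadratic mean function and Gaussian noise standing in for simulator output; the authors' estimator additionally accounts for the simulation-based variance) is:
\begin{verbatim}
# Schematic sketch (synthetic data, not the authors' metamodel): fit a quadratic
# mean function to noisy Monte Carlo log-likelihood estimates over a parameter
# grid, then read off a point estimate and a curvature-based confidence interval.
import numpy as np

rng = np.random.default_rng(0)
theta = np.linspace(0.5, 1.5, 21)
# Stand-in for simulator output: smooth log-likelihood plus Monte Carlo noise.
loglik_hat = -50.0 * (theta - 1.0) ** 2 + rng.normal(0.0, 0.8, theta.size)

a, b, _ = np.polyfit(theta, loglik_hat, deg=2)   # metamodel ell(t) ~ a t^2 + b t + c
theta_hat = -b / (2.0 * a)                       # maximiser of the fitted mean function
se = np.sqrt(-1.0 / (2.0 * a))                   # normal-approximation standard error
print(f"theta_hat = {theta_hat:.3f}, "
      f"95% CI = ({theta_hat - 1.96 * se:.3f}, {theta_hat + 1.96 * se:.3f})")
\end{verbatim}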
This short note introduces a novel diagnostic tool for evaluating the convection boundedness properties of numerical schemes across discontinuities. The proposed method is based on the convection boundedness criterion and the normalised variable diagram. By utilising this tool, we can determine the CFL conditions for numerical schemes to satisfy the convection boundedness criterion, identify the locations of over- and under-shoots, optimise the free parameters in the schemes, and develop strategies to prevent numerical oscillations across the discontinuity. We apply the diagnostic tool to assess representative discontinuity-capturing schemes, including THINC, fifth-order WENO, and fifth-order TENO, and validate the conclusions drawn through numerical tests. We further demonstrate the application of the proposed method by formulating a new THINC scheme with less stringent CFL conditions.
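As a rough illustration of an NVD-based boundedness check (the classical, CFL-independent criterion only; the scheme and thresholds below are generic examples, not the diagnostic proposed in the note):
\begin{verbatim}
# Illustrative NVD check against the classical convection boundedness criterion
# (CFL-independent version only; the note's tool adds CFL-dependent bounds).
# A scheme given in normalised-variable form phi_f = nvf(phi_c) is flagged where
# it leaves the bounded region: phi_c <= phi_f <= 1 for phi_c in [0, 1], and
# phi_f = phi_c (first-order upwind) outside [0, 1].
import numpy as np

def nvf_quick(phi_c):
    # Normalised-variable form of the QUICK scheme, used here as a generic example.
    return 3.0 / 8.0 + 3.0 / 4.0 * phi_c

def cbc_violations(nvf, n=2001):
    phi_c = np.linspace(-0.5, 1.5, n)
    phi_f = nvf(phi_c)
    inside = (phi_c >= 0.0) & (phi_c <= 1.0)
    bad_in = inside & ((phi_f < phi_c) | (phi_f > 1.0))   # risk of over/under-shoots
    bad_out = ~inside & ~np.isclose(phi_f, phi_c)         # should reduce to upwind
    return phi_c[bad_in | bad_out]

viol = cbc_violations(nvf_quick)
print(viol.size > 0, viol.min(), viol.max())   # nonempty: QUICK does not satisfy the CBC
\end{verbatim}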
Fairness holds a pivotal role in the realm of machine learning, particularly when it comes to addressing groups categorised by protected attributes, e.g., gender or race. Prevailing algorithms in fair learning predominantly hinge on access to, or estimates of, these protected attributes, at least during the training process. We design a single group-blind projection map that aligns the feature distributions of both groups in the source data, achieving (demographic) group parity without requiring values of the protected attribute for individual samples, either in the computation of the map or in its application. Instead, our approach utilises the feature distributions of the privileged and unprivileged groups in a broader population, together with the essential assumption that the source data are an unbiased representation of that population. We present numerical results on synthetic and real data.
The unpredictability of random numbers is fundamental to both digital security and applications that fairly distribute resources. However, existing random number generators have limitations: the generation processes cannot be fully traced, audited, and certified to be unpredictable. The algorithmic steps used in pseudorandom number generators are auditable, but they cannot guarantee that their outputs were a priori unpredictable given knowledge of the initial seed. Device-independent quantum random number generators can ensure that the source of randomness was unknown beforehand, but the steps used to extract the randomness are vulnerable to tampering. Here, for the first time, we demonstrate a fully traceable random number generation protocol based on device-independent techniques. Our protocol extracts randomness from unpredictable non-local quantum correlations, and uses distributed intertwined hash chains to cryptographically trace and verify the extraction process. This protocol is at the heart of a public traceable and certifiable quantum randomness beacon that we have launched. Over the first 40 days of operation, we completed the protocol 7434 out of 7454 attempts -- a success rate of 99.7%. Each time the protocol succeeded, the beacon emitted a pulse of 512 bits of traceable randomness. The bits are certified to be uniform with error times actual success probability bounded by $2^{-64}$. The generation of certifiable and traceable randomness represents one of the first public services that operates with an entanglement-derived advantage over comparable classical approaches.
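The hash-chaining mechanism, in minimal schematic form (an illustration of the general idea only, not the beacon's actual protocol, record format, or hash choices):
\begin{verbatim}
# Minimal schematic of hash chaining (not the beacon's actual protocol): each
# record commits to its payload and to the previous record's digest, so altering
# any earlier step of the trace invalidates every subsequent digest.
import hashlib
import json

def build_chain(payloads, seed=b"genesis"):
    prev = hashlib.sha512(seed).hexdigest()
    chain = []
    for payload in payloads:
        entry = {"payload": payload, "prev": prev}
        digest = hashlib.sha512(json.dumps(entry, sort_keys=True).encode()).hexdigest()
        chain.append({**entry, "hash": digest})
        prev = digest
    return chain

def verify_chain(chain, seed=b"genesis"):
    prev = hashlib.sha512(seed).hexdigest()
    for entry in chain:
        recomputed = hashlib.sha512(
            json.dumps({"payload": entry["payload"], "prev": prev},
                       sort_keys=True).encode()).hexdigest()
        if recomputed != entry["hash"] or entry["prev"] != prev:
            return False
        prev = entry["hash"]
    return True

log = build_chain(["measurement settings", "raw outcomes", "extracted 512 bits"])
print(verify_chain(log))                 # True
log[0]["payload"] = "tampered settings"
print(verify_chain(log))                 # False: tampering is detected
\end{verbatim}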
Statistical learning under distribution shift is challenging when neither prior knowledge nor fully accessible data from the target distribution is available. Distributionally robust learning (DRL) aims to control the worst-case statistical performance within an uncertainty set of candidate distributions, but how to properly specify the set remains challenging. To enable distributional robustness without being overly conservative, in this paper, we propose a shape-constrained approach to DRL, which incorporates prior information about the way in which the unknown target distribution differs from its estimate. More specifically, we assume the unknown density ratio between the target distribution and its estimate is isotonic with respect to some partial order. At the population level, we provide a solution to the shape-constrained optimization problem that does not involve the isotonic constraint. At the sample level, we provide consistency results for an empirical estimator of the target in a range of different settings. Empirical studies on both synthetic and real data examples demonstrate the improved accuracy of the proposed shape-constrained approach.
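A schematic finite-sample version of the inner worst-case problem (an assumed formulation for illustration: weights bounded, averaging to one, and nondecreasing in a scalar score that encodes the partial order; the paper's precise constraints and estimator may differ) can be posed as a small linear program, solved here with scipy.optimize.linprog:
\begin{verbatim}
# Schematic sketch (assumed finite-sample formulation, not the paper's estimator):
# worst-case average loss over density ratios w that are bounded, average to one,
# and are nondecreasing in a scalar score representing the partial order.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
n, B = 200, 5.0
score = np.sort(rng.normal(size=n))          # samples ordered by the score
loss = (score - 0.5) ** 2                    # per-sample losses under a fitted model

# Maximise mean(w * loss)  <=>  minimise -(1/n) loss . w
c = -loss / n
A_eq = np.ones((1, n)) / n                   # (1/n) sum_i w_i = 1
b_eq = np.array([1.0])
# Isotonic constraint in sorted order: w_i - w_{i+1} <= 0
A_ub = np.zeros((n - 1, n))
A_ub[np.arange(n - 1), np.arange(n - 1)] = 1.0
A_ub[np.arange(n - 1), np.arange(1, n)] = -1.0
b_ub = np.zeros(n - 1)

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0.0, B)] * n, method="highs")
print("worst-case risk:", -res.fun, "vs empirical risk:", loss.mean())
\end{verbatim}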
In decision-making, maxitive functions are used for worst-case and best-case evaluations. Maxitivity gives rise to a rich structure that is well-studied in the context of the pointwise order. In this article, we investigate maxitivity with respect to general preorders and provide a representation theorem for such functionals. The results are illustrated for different stochastic orders in the literature, including the usual stochastic order, the increasing convex/concave order, and the dispersive order.
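For orientation (the classical pointwise-order case only; the article's generalisation to preorders is more involved), a functional $\phi$ on bounded random variables is maxitive when
\[
\phi(X \vee Y) \;=\; \max\{\phi(X), \phi(Y)\},
\]
where $\vee$ denotes the pointwise maximum; the essential supremum $X \mapsto \operatorname{ess\,sup} X$, the canonical worst-case evaluation, is the prototypical example.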
In the present work, strong approximation errors are analyzed for both the spatial semi-discretization and the spatio-temporal full discretization of stochastic wave equations (SWEs) with cubic polynomial nonlinearities and additive noises. The full discretization is achieved by the standard Galerkin finite element method in space and a novel exponential time integrator combined with the averaged vector field approach. The newly proposed scheme is proved to exactly satisfy a trace formula based on an energy functional. Recovering the convergence rates of the scheme, however, meets essential difficulties due to the lack of a global monotonicity condition. To overcome this issue, we derive the exponential integrability property of the considered numerical approximations with the aid of the energy functional. Armed with these properties, we obtain the strong convergence rates of the approximations in both the spatial and temporal directions. Finally, numerical results are presented to verify the theoretical findings.
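For context, one step of the standard averaged vector field (AVF) method for an abstract evolution equation $\dot{u}=F(u)$ reads (this is the classical AVF discrete gradient; how it is combined with the exponential integrator in the SWE setting is specific to the paper and not reproduced here)
\[
\frac{u^{n+1}-u^{n}}{\tau} \;=\; \int_{0}^{1} F\bigl(\theta\, u^{n+1} + (1-\theta)\, u^{n}\bigr)\,\mathrm{d}\theta,
\]
and for a Hamiltonian system $\dot{u}=J\nabla H(u)$ with constant skew-symmetric $J$ it conserves $H$ exactly, which is the kind of structure preservation that underlies energy-based trace formulas at the discrete level.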