In 1976, Lai constructed a nontrivial confidence sequence for the mean $\mu$ of a Gaussian distribution with unknown variance $\sigma^2$. Curiously, he employed both an improper (right Haar) mixture over $\sigma$ and an improper (flat) mixture over $\mu$. Here, we elaborate carefully on the details of his construction, which uses generalized nonintegrable martingales and an extended Ville's inequality. While this does yield a sequential t-test, it does not yield an "e-process" (due to the nonintegrability of his martingale). In this paper, we develop two new e-processes and confidence sequences for the same setting: one is a test martingale in a reduced filtration, while the other is an e-process in the canonical data filtration. These are obtained, respectively, by swapping Lai's flat mixture for a Gaussian mixture, and by replacing the right Haar mixture over $\sigma$ with the maximum likelihood estimate under the null, as done in universal inference. We also analyze the width of the resulting confidence sequences, which have a curious polynomial dependence on the error probability $\alpha$ that we prove to be not only unavoidable, but (for universal inference) even better than that of the classical fixed-sample t-test. Numerical experiments are provided along the way to compare and contrast the various approaches, including some recent suboptimal ones.
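For concreteness, the universal-inference route can be sketched in a few lines: an e-process divides a prequential plug-in Gaussian likelihood (our illustrative choice of alternative) by the likelihood maximized over $\sigma$ under the null $\mu = \mu_0$. The sketch below assumes exactly this structure; the plug-in fallbacks used before enough data has arrived are ad hoc choices of ours, not the paper's.

import numpy as np

def universal_inference_e_process(x, mu0=0.0):
    # E_n = (prequential plug-in Gaussian likelihood) /
    #       (Gaussian likelihood maximized over sigma under H0: mu = mu0).
    # Plug-in fallbacks before two observations are ad hoc choices.
    x = np.asarray(x, dtype=float)
    e_vals = np.empty(len(x))
    log_num = 0.0
    for i in range(len(x)):
        past = x[:i]
        mu_hat = past.mean() if i >= 1 else mu0
        var_hat = past.var() if i >= 2 and past.var() > 0 else 1.0
        log_num += -0.5 * np.log(2 * np.pi * var_hat) - (x[i] - mu_hat) ** 2 / (2 * var_hat)
        var0 = np.mean((x[:i + 1] - mu0) ** 2)  # null MLE of sigma^2
        log_den = (i + 1) * (-0.5 * np.log(2 * np.pi * var0) - 0.5)
        e_vals[i] = np.exp(log_num - log_den)
    return e_vals

rng = np.random.default_rng(0)
print(universal_inference_e_process(rng.normal(0.0, 2.0, 500))[-1])  # stays small under H0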
We propose a method utilizing physics-informed neural networks (PINNs) to solve Poisson equations that serve as control variates in the computation of transport coefficients via fluctuation formulas, such as the Green--Kubo and generalized Einstein-like formulas. By leveraging approximate solutions to the Poisson equation constructed through neural networks, our approach significantly reduces the variance of the estimator at hand. We provide an extensive numerical analysis of the estimators and detail a methodology for training neural networks to solve these Poisson equations. The approximate solutions are then incorporated into Monte Carlo simulations as effective control variates, demonstrating the suitability of the method for moderately high-dimensional problems where fully deterministic solutions are computationally infeasible.
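As a toy illustration of the underlying control-variate mechanism (not of the PINN pipeline itself): any auxiliary statistic g with known mean that is correlated with the observable f can be subtracted with an optimally scaled coefficient, leaving the estimator unbiased while shrinking its variance; in the paper this role is played by an approximate Poisson solution, whereas the g below is hand-made purely for illustration.

import numpy as np

rng = np.random.default_rng(1)

x = rng.normal(size=100_000)
f = np.cos(x) ** 2          # observable of interest
g = np.cos(2 * x)           # control variate; E[g] = exp(-2) under N(0,1)
g_mean = np.exp(-2.0)

plain = f.mean()
c = np.cov(f, g)[0, 1] / g.var()        # optimal coefficient c* = Cov(f,g)/Var(g)
cv = (f - c * (g - g_mean)).mean()      # control-variate estimator, still unbiased

print(plain, cv)
print(f.var(), (f - c * (g - g_mean)).var())  # large variance reduction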
We propose a method for estimating a log-concave density on $\mathbb R^d$ from samples, under the assumption that there exists an orthogonal transformation that makes the components of the random vector independent. While log-concave density estimation is hard both computationally and statistically, the independent components assumption alleviates both issues, while still maintaining a large non-parametric class. We prove that under mild conditions, at most $\tilde{\mathcal{O}}(\epsilon^{-4})$ samples (suppressing constants and log factors) suffice for our proposed estimator to be within $\epsilon$ of the original density in squared Hellinger distance. On the computational front, while the usual log-concave maximum likelihood estimate can be obtained via a finite-dimensional convex program, it is slow to compute -- especially in higher dimensions. We demonstrate through numerical experiments that our estimator can be computed efficiently, making it more practical to use.
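Concretely, the independent-components assumption says that the density factorizes after an orthogonal change of basis: in our notation (the symbols $U$ and $g_j$ are ours),

$$f(x) \;=\; \prod_{j=1}^{d} g_j\big((U^\top x)_j\big), \qquad U \in \mathrm{O}(d), \quad g_j:\mathbb{R}\to[0,\infty) \ \text{log-concave},$$

so estimation reduces, in principle, to recovering the orthogonal matrix $U$ and solving $d$ univariate log-concave maximum likelihood problems, rather than one $d$-dimensional one.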
The Hartman-Watson distribution with density $f_r(t)$ is a probability distribution defined on $t \geq 0$ which appears in several problems of applied probability. The density of this distribution is expressed in terms of an integral $\theta(r,t)$ which is difficult to evaluate numerically for small $t$. Using saddle point methods, we obtain the first two terms of the $t\to 0$ expansion of $\theta(\rho/t,t)$ at fixed $\rho >0$. An error bound, uniform in $\rho$, is obtained from numerical estimates of the integrand. As an application we obtain the leading asymptotics of the density of the time average of the geometric Brownian motion as $t\to 0$. This has the form $\mathbb{P}(\frac{1}{t} \int_0^t e^{2(B_s+\mu s)} ds \in da) \sim (2\pi t)^{-1/2} g(a,\mu) e^{-\frac{1}{t} J(a)} da/a$, with an exponent $J(a)$ which reproduces the known result obtained previously using large deviations theory.
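For reference, in the standard normalization (Yor, 1980) the integral in question reads

$$\theta(r,t) \;=\; \frac{r}{\sqrt{2\pi^3 t}}\, e^{\pi^2/(2t)} \int_0^\infty e^{-\xi^2/(2t)}\, e^{-r\cosh\xi}\, \sinh(\xi)\, \sin\!\Big(\frac{\pi\xi}{t}\Big)\, d\xi,$$

where the increasingly rapid oscillation of $\sin(\pi\xi/t)$ set against the large prefactor $e^{\pi^2/(2t)}$ is precisely what makes direct numerical evaluation delicate as $t\to 0$.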
We propose and analyse a novel, fully discrete numerical algorithm for the approximation of the generalised Stokes system forced by transport noise -- a prototype model for non-Newtonian fluids including turbulence. Utilising the Gradient Discretisation Method, we show that the algorithm is long-term stable for a broad class of Gradient Discretisations. Building on the long-term stability and the derived continuity of the algorithm's solution operator, we construct two sequences of approximate invariant measures. On its own, each sequence lacks one important property: either the existence of a limit measure, or invariance with respect to the discrete semigroup. We derive an abstract condition under which both properties hold simultaneously, recovering the existence of an invariant measure. We provide an example for which invariance and existence hold simultaneously, and characterise the invariant measure completely. We close the article by conducting two numerical experiments that show the influence of transport noise on the dynamics of power-law fluids; in particular, we find that transport noise enhances the dissipation of kinetic energy, the mixing of particles, as well as the size of vortices.
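For orientation, one standard way to produce approximate invariant measures from a long-term stable discrete semigroup $(S^n)_{n \ge 0}$ is the Krylov--Bogoliubov time average (whether this matches the authors' two sequences is our assumption):

$$\mu_N \;:=\; \frac{1}{N}\sum_{n=0}^{N-1} (S^n)^{*}\mu_0,$$

where $\mu_0$ is an initial law and $(S^n)^{*}$ denotes the pushforward. Long-term stability yields tightness of $(\mu_N)_N$ and hence limit points, while invariance of a limit point additionally requires continuity of the solution operator -- the two properties the abstract condition above is designed to reconcile.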
This paper addresses the problem of adaptively controlling the bias parameter in nonlinear opinion dynamics (NOD) to allocate agents into groups of arbitrary sizes for the purpose of maximizing collective rewards. In previous work, an algorithm based on the coupling of NOD with a multi-objective behavior optimization was successfully deployed as part of a multi-robot system in an autonomous task allocation field experiment. Motivated by the field results, in this paper we propose and analyze a new task allocation model that synthesizes NOD with an evolutionary game framework. We prove sufficient conditions under which it is possible to control the opinion state of the group to a desired allocation of agents between two tasks through an adaptive bias using decentralized feedback. We then verify the theoretical results with a simulation study of a collaborative evolutionary division of labor game.
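To convey the flavor of bias-regulated allocation, here is a toy simulation. The saturated opinion model, the heterogeneous damping, and the proportional bias update below are our illustrative choices, not the exact model analyzed in the paper.

import numpy as np

rng = np.random.default_rng(2)

# Toy bias-regulated nonlinear opinion dynamics: n agents, opinion x_i > 0
# means "task 1". Heterogeneous damping d_i makes agents flip at different
# bias levels, so an integral-style bias update can steer the group toward
# a target allocation.
n, steps, dt = 20, 4000, 0.01
target = 0.7                                  # desired fraction on task 1
u, alpha, gamma, k = 2.0, 1.2, 0.1, 0.5
d = rng.uniform(0.8, 1.2, n)                  # heterogeneous damping
A = np.ones((n, n)) / n                       # all-to-all averaging

x = 0.01 * rng.standard_normal(n)             # initial opinions
b = np.zeros(n)                               # adaptive bias

for _ in range(steps):
    frac = np.mean(x > 0)
    b += dt * k * (target - frac)             # each agent nudges its own bias
    x += dt * (-d * x + u * np.tanh(alpha * x + gamma * A @ x) + b)

print(np.mean(x > 0))  # fraction on task 1; should end up near the target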
We leverage the proximal Galerkin algorithm (Keith and Surowiec, Foundations of Computational Mathematics, 2024, DOI: 10.1007/s10208-024-09681-8), a recently introduced mesh-independent algorithm, to obtain a high-order finite element solver for variational problems with pointwise inequality constraints. This is achieved by discretizing the saddle point systems, arising from the latent variable proximal point method, with the hierarchical $p$-finite element basis. This results in discretized sparse Newton systems that admit a simple and effective block preconditioner. The solver can handle both obstacle-type, $u \leq \varphi$, and gradient-type, $|\nabla u| \leq \varphi$, constraints. We apply the resulting algorithm to solve obstacle problems with $hp$-adaptivity, a gradient-type constrained problem, and the thermoforming problem, an example of an obstacle-type quasi-variational inequality. We observe $hp$-robustness in the number of Newton iterations and only mild growth in the number of inner Krylov iterations required to solve the Newton systems. Crucially, we also report wall-clock timings that are faster than those of low-order discretization counterparts.
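Schematically, for the obstacle constraint $u \leq \varphi$, the latent variable proximal point subproblem couples the PDE residual to a latent variable $\psi$ through an entropic change of variables (our paraphrase; signs and scalings depend on the entropy chosen and may differ from the paper's exact formulation): given $\psi^{k-1}$ and a step size $\alpha_k$, find $(u^k, \psi^k)$ with

$$\alpha_k(\nabla u^k, \nabla v) + (\psi^k - \psi^{k-1}, v) = \alpha_k(f, v) \quad \forall v, \qquad u^k = \varphi - e^{-\psi^k},$$

where the exponential enforces $u^k < \varphi$ automatically. Discretizing this smooth saddle point system with the hierarchical $p$-basis and applying Newton's method yields the sparse block systems mentioned above.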
We construct fully-discrete schemes for the Benjamin-Ono, Calogero-Sutherland DNLS, and cubic Szeg\H{o} equations on the torus, which are $\textit{exact in time}$ with $\textit{spectral accuracy}$ in space. We prove spectral convergence for the first two equations, of order $K^{-s+1}$ for initial data in $H^s(\mathbb T)$, with an error constant depending $\textit{linearly}$ on the final time instead of exponentially. These schemes are based on $\textit{explicit formulas}$, which have recently emerged in the theory of nonlinear integrable equations. Numerical simulations show the strength of the newly designed methods at both short and long time scales. These schemes open the door to understanding the long-time dynamics of integrable equations.
We consider the quantum query complexity of local search as a function of graph geometry. Given a graph $G = (V,E)$ with $n$ vertices and black box access to a function $f : V \to \mathbb{R}$, the goal is to find a vertex $v$ that is a local minimum, i.e. one with $f(v) \leq f(u)$ for all $(u,v) \in E$, using as few oracle queries as possible. We show that the quantum query complexity of local search on $G$ is $\Omega\bigl( \frac{n^{\frac{3}{4}}}{\sqrt{g}} \bigr)$, where $g$ is the vertex congestion of the graph. For a $\beta$-expander with maximum degree $\Delta$, this implies a lower bound of $ \Omega\bigl(\frac{\sqrt{\beta} \; n^{\frac{1}{4}}}{\sqrt{\Delta} \; \log{n}} \bigr)$. We obtain these bounds by applying the strong weighted adversary method to a construction by Br\^anzei, Choo, and Recker (2024). As a corollary, on constant degree expanders, we derive a lower bound of $\Omega\bigl(\frac{n^{\frac{1}{4}}}{ \sqrt{\log{n}}} \bigr)$. This improves upon the best prior quantum lower bound of $\Omega\bigl( \frac{n^{\frac{1}{8}}}{\log{n}}\bigr) $ by Santha and Szegedy (2004). In contrast to the classical setting, a gap remains in the quantum case between our lower bound and the best-known upper bound of $O\bigl( n^{\frac{1}{3}} \bigr)$ for such graphs.
Insurance losses due to flooding can be estimated by simulating and then summing a large number of losses for each year in a large set of hypothetical years of flood events. Replicated realisations lead to Monte Carlo return-level estimates and associated uncertainty. The procedure, however, is highly computationally intensive. We develop and use a new, Bennett-like concentration inequality to provide conservative but relatively accurate estimates of return levels. Bennett's inequality accounts for the different variances of each of the variables in a sum but uses a uniform upper bound on their support. Motivated by the variability in the total insured value of risks within a portfolio, we incorporate both individual upper bounds and variances and obtain tractable concentration bounds. Simulation studies and application to a representative portfolio demonstrate a substantial tightening compared with Bennett's bound. We then develop an importance-sampling procedure that repeatedly samples the loss for each year from the distribution implied by the concentration inequality, leading to conservative estimates of the return levels and their uncertainty using orders of magnitude less computation. This enables a simulation study of the sensitivity of the predictions to perturbations in quantities that are usually assumed fixed and known but, in truth, are not.
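To make the mechanism concrete, here is a sketch using the classical Bennett bound, which combines the total variance $V$ with a single uniform bound $b$ on the centered summands; the paper's refinement replaces the uniform bound with per-variable bounds, which we do not reproduce here. The function names and the bisection scheme are ours.

import numpy as np

def bennett_tail(t, variances, b):
    # Classical Bennett bound on P(S - E[S] >= t) for independent summands
    # with total variance V and uniform upper bound b on X_i - E[X_i]:
    # exp(-(V / b^2) * h(b t / V)), with h(u) = (1 + u) log(1 + u) - u.
    V = np.sum(variances)
    u = b * t / V
    return np.exp(-V / b**2 * ((1 + u) * np.log1p(u) - u))

rng = np.random.default_rng(3)
var_i = rng.uniform(0.1, 4.0, size=10_000)  # toy portfolio variances
b = 50.0                                    # uniform bound on centered losses

# conservative exceedance level t with bound(t) = alpha, found by bisection
alpha = 1e-3
lo, hi = 1e-6, 1e6
for _ in range(200):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if bennett_tail(mid, var_i, b) > alpha else (lo, mid)
print(hi)  # conservative return-level offset above the mean loss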
Two common methods for solving absolute value equations (AVE) are the SOR-like iteration method and the fixed point iteration (FPI) method. In this paper, novel convergence analyses of the SOR-like iteration and the FPI are given, which yield wider convergence ranges. Based on the new analysis, a new optimal iterative parameter with an analytical form is obtained for the SOR-like iteration. In addition, an optimal iterative parameter with an analytical form is also obtained for the FPI. Surprisingly, the SOR-like iteration and the FPI coincide whenever they are equipped with our optimal iterative parameters. As a by-product, we give two new constructive proofs of a well-known sufficient condition under which the AVE has a unique solution for any right-hand side. Numerical results demonstrate our claims.
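A minimal numerical sketch may help fix ideas. We use the two-block form of the SOR-like iteration with an auxiliary variable $y \approx |x|$ commonly attributed to Ke and Ma (2017); whether this matches the paper's exact variant, and the choice $\omega = 1$ below (which collapses to a plain fixed point iteration), are our illustrative assumptions.

import numpy as np

def sor_like_ave(A, b, omega, tol=1e-10, max_iter=10_000):
    # Two-block SOR-like iteration for Ax - |x| = b with auxiliary y ~ |x|;
    # the stopping rule is illustrative.
    x = np.zeros(len(b))
    y = np.zeros(len(b))
    Ainv = np.linalg.inv(A)  # small dense demo; factorize and solve in practice
    for k in range(max_iter):
        x = (1 - omega) * x + omega * Ainv @ (y + b)
        y = (1 - omega) * y + omega * np.abs(x)
        if np.linalg.norm(A @ x - np.abs(x) - b) < tol:
            return x, k
    return x, max_iter

rng = np.random.default_rng(4)
n = 50
A = rng.standard_normal((n, n)) + 3 * np.sqrt(n) * np.eye(n)  # sigma_min(A) > 1
x_true = rng.standard_normal(n)
b = A @ x_true - np.abs(x_true)
x, iters = sor_like_ave(A, b, omega=1.0)  # omega = 1 recovers the plain FPI
print(iters, np.linalg.norm(x - x_true))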