Twenty years after the discovery of the F5 algorithm, Gr\"obner bases with signatures are still challenging to understand and to adapt to different settings. This contrasts with Buchberger's algorithm, which we can bend in many directions while keeping correctness and termination obvious. I propose an axiomatic approach to Gr\"obner bases with signatures, with the purpose of uncoupling the theory from the algorithms and of giving general results applicable in many different settings (e.g., Gr\"obner bases for submodules, F4-style reduction, noncommutative rings, non-Noetherian settings, etc.).
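Since the abstract takes Buchberger's algorithm as the flexible baseline, here is a minimal toy sketch of the classical (non-signature) algorithm using SymPy. The function name, monomial order, and example ideal are our illustrative choices; none of this reflects the paper's signature-based machinery.

```python
# Toy Buchberger: repeatedly reduce S-polynomials and enlarge the basis.
# Not the signature-based (F5-style) setting the abstract concerns.
from sympy import symbols, expand, lcm, groebner, LT, reduced

x, y = symbols("x y")

def buchberger(F, gens, order="grevlex"):
    G = list(F)
    pairs = [(i, j) for i in range(len(G)) for j in range(i)]
    while pairs:
        i, j = pairs.pop()
        lt_i = LT(G[i], *gens, order=order)
        lt_j = LT(G[j], *gens, order=order)
        m = lcm(lt_i, lt_j)
        s = expand(m / lt_i * G[i] - m / lt_j * G[j])  # S-polynomial
        _, r = reduced(s, G, *gens, order=order)       # multivariate division
        if r != 0:                                     # new basis element
            pairs += [(len(G), k) for k in range(len(G))]
            G.append(r)
    return G

G = buchberger([x**2 + y, x*y - 1], (x, y))
print(G)
# compare with SymPy's built-in (reduced) Groebner basis
print(groebner([x**2 + y, x*y - 1], x, y, order="grevlex"))
```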
For the stochastic heat equation with multiplicative noise, we consider the problem of estimating the diffusivity parameter in front of the Laplace operator. Based on local observations in space, we first study an estimator that was originally derived for additive noise. A stable central limit theorem shows that this estimator is consistent and asymptotically mixed normal. By taking the quadratic variation into account, we propose two new estimators. Their limiting distributions exhibit a smaller (conditional) variance, and the second one also works for vanishing noise levels. The proofs are based on local approximation results that overcome the intricate nonlinearities and on a stable central limit theorem for stochastic integrals with respect to cylindrical Brownian motion. Simulation results illustrate the theoretical findings.
We study the performance of stochastic first-order methods for finding saddle points of convex-concave functions. A notorious challenge faced by such methods is that the gradients can grow arbitrarily large during optimization, which may result in instability and divergence. In this paper, we propose a simple and effective regularization technique that stabilizes the iterates and yields meaningful performance guarantees even if the domain and the gradient noise scale linearly with the size of the iterates (and are thus potentially unbounded). Besides providing a set of general results, we also apply our algorithm to a specific problem in reinforcement learning, where it leads to performance guarantees for finding near-optimal policies in an average-reward MDP without prior knowledge of the bias span.
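As a concrete illustration of the kind of stabilization at stake, the sketch below runs stochastic gradient descent-ascent on a toy bilinear saddle-point problem whose gradient noise scales with the iterate norm, adding a strongly-convex-strongly-concave quadratic regularizer to keep the iterates bounded. The objective, step size, and regularization weight are our choices; this is a generic device, not the paper's algorithm.

```python
# SGDA on f(x, y) = x^T A y + (lam/2)||x||^2 - (lam/2)||y||^2, with
# gradient noise whose magnitude grows linearly with ||(x, y)||. The
# regularizer makes the deterministic dynamics contract, so the noisy
# iterates stay bounded instead of diverging.
import numpy as np

rng = np.random.default_rng(0)
d, lam, eta, T = 5, 0.5, 0.005, 20000
A = rng.standard_normal((d, d))
x, y = rng.standard_normal(d), rng.standard_normal(d)

for t in range(T):
    scale = 1 + np.linalg.norm(np.r_[x, y])    # noise ~ size of iterate
    noise = scale * rng.standard_normal(2 * d)
    gx = A @ y + lam * x + noise[:d]           # noisy gradient in x (descent)
    gy = A.T @ x - lam * y + noise[d:]         # noisy gradient in y (ascent)
    x, y = x - eta * gx, y + eta * gy

print("final ||x||, ||y||:", np.linalg.norm(x), np.linalg.norm(y))
```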
In this work, we present a simple and unified analysis of the Johnson-Lindenstrauss (JL) lemma, a cornerstone of dimensionality reduction for high-dimensional data. Our approach both simplifies the analysis and unifies various constructions under the JL framework, including the spherical, binary-coin, sparse JL, Gaussian, and sub-Gaussian models. These constructions preserve the intrinsic geometry of data and are essential across diverse applications, from streaming algorithms to reinforcement learning. Notably, we deliver the first rigorous proof of the spherical construction's effectiveness and provide a general class of sub-Gaussian constructions within this simplified framework. At the heart of our contribution is an extension of the Hanson-Wright inequality to high dimensions, complete with explicit constants. By employing simple yet powerful probabilistic tools and analytical techniques, such as an enhanced diagonalization process, our analysis solidifies the JL lemma's theoretical foundation and extends its practical reach in contemporary computational algorithms.
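A quick empirical illustration of the lemma for two of the constructions mentioned (Gaussian and binary-coin): project random points into $k = O(\log(n)/\varepsilon^2)$ dimensions and check the pairwise distance distortion. The constant in the target dimension is a conservative illustrative choice.

```python
# Empirical JL check: pairwise squared distances should be preserved up
# to a factor (1 +/- eps) after projecting from d to k dimensions.
import numpy as np

rng = np.random.default_rng(1)
n, d, eps = 50, 1000, 0.25
k = int(np.ceil(8 * np.log(n) / eps**2))      # conservative target dimension
X = rng.standard_normal((n, d))

for name, P in [
    ("gaussian", rng.standard_normal((k, d)) / np.sqrt(k)),
    ("binary-coin", rng.choice([-1.0, 1.0], size=(k, d)) / np.sqrt(k)),
]:
    Y = X @ P.T
    i, j = np.triu_indices(n, 1)              # all pairs of points
    ratio = (np.linalg.norm(Y[i] - Y[j], axis=1) /
             np.linalg.norm(X[i] - X[j], axis=1)) ** 2
    print(name, "distortion range:", ratio.min(), ratio.max())
```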
Handling an infinite number of inequality constraints in infinite-dimensional spaces arises in many fields, from global optimization to optimal transport. Such problems have been tackled individually in several previous articles through kernel sum-of-squares (kSoS) approximations. We propose here a unified theorem establishing convergence guarantees for these schemes. Pointwise inequalities are turned into equalities within a class of nonnegative kSoS functions. Assuming further that the functions appearing in the problem are smooth, focusing on pointwise equality constraints enables the use of scattering inequalities to mitigate the curse of dimensionality in sampling the constraints. Our approach is illustrated on learning vector fields with side information, here the invariance of a set.
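The following toy sketch shows the core device on a one-dimensional problem: a pointwise inequality $g \geq 0$ is replaced by the equality $g(x_i) = h(x_i)$ at sampled points, where $h$ ranges over nonnegative kSoS functions $h(x) = k_x^\top B\, k_x$ with $B \succeq 0$. The kernel, sample set, and trace penalty are our illustrative choices (via cvxpy), not the paper's full scheme.

```python
# kSoS reformulation sketch: enforce g(x_i) = h(x_i) with h a nonnegative
# kernel sum-of-squares function h(x) = k_x^T B k_x, B PSD.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(2)
xs = np.sort(rng.uniform(0, 1, 30))                    # sampled constraint points
g = (xs - 0.3) ** 2 + 0.05                             # a nonnegative target
K = np.exp(-(xs[:, None] - xs[None, :]) ** 2 / 0.02)   # Gaussian kernel matrix

B = cp.Variable((len(xs), len(xs)), PSD=True)          # SoS Gram matrix
constraints = [cp.quad_form(K[:, i], B) == g[i] for i in range(len(xs))]
prob = cp.Problem(cp.Minimize(cp.trace(B)), constraints)
prob.solve()
print("status:", prob.status, " trace(B) =", prob.value)
```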
Faraday's law of electromagnetic induction, one of the most fundamental laws of nature, states that a change of the magnetic flux through a coil of wire induces a current in the wire. Electromagnetic induction has many important technological applications today and provides the link between electric and magnetic fields that is crucial to explaining the existence of electromagnetic waves. In our quest to replicate Faraday's law for mechanical systems, we design an infinite mass-spring ``helix-like structure'', which consists of a helix and a central line, and implement Bloch-Floquet conditions to obtain travelling wave solutions to the proposed problem. The structure's geometrical chirality is considered in conjunction with a dynamic chirality, introduced by placing gyroscopes along its central line. It is shown that the interplay between these two chiralities acts as a mechanical analogue of Faraday's law, breaking the symmetry of the associated dispersion diagram.
In a recent work (Dick et al., arXiv:2310.06187), we considered a linear stochastic elasticity equation with random Lam\'e parameters which are parameterized by a countably infinite number of terms in separate expansions. We estimated the expected values, over the infinite-dimensional parametric space, of linear functionals ${\mathcal L}$ acting on the continuous solution $\vu$ of the elasticity equation. This was achieved by truncating the expansions of the random parameters, then using a high-order quasi-Monte Carlo (QMC) method to approximate the resulting high-dimensional integral, combined with the conforming Galerkin finite element method (FEM) to approximate the displacement over the physical domain $\Omega.$ In this work, as a further development of the aforementioned article, we focus on the case of a nearly incompressible linear stochastic elasticity equation. To this end, in the presence of a stochastic, inhomogeneous (variable Lam\'e parameters), nearly incompressible material, we develop a new locking-free symmetric nonconforming Galerkin FEM that handles the inhomogeneity. For nearly incompressible materials, one known important advantage of nonconforming approximations is that they yield optimal-order convergence rates that are uniform in the Poisson ratio. Proving the convergence of the nonconforming FEM leads to another challenge, which amounts to showing the needed regularity properties of $\vu$. For the error estimates of the high-order QMC method, which are needed to estimate the expected value of ${\mathcal L}\vu$ over the infinite-dimensional parametric space, we rely on (Dick et al. 2022) and are required to show certain regularity properties of $\vu$ with respect to the random coefficients. Numerical results are presented at the end.
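To make the QMC ingredient concrete, here is a minimal sketch of a randomly shifted rank-1 lattice rule for an expectation over a truncated parametric space. The generating vector and the toy integrand (standing in for $y \mapsto {\mathcal L}u(\cdot,y)$) are naive illustrative choices; in practice the vector is constructed by component-by-component algorithms tailored to the paper's weighted function-space setting.

```python
# Randomly shifted rank-1 lattice rule: points {i*z/N + shift mod 1},
# averaged over a few independent random shifts to get an error estimate.
import numpy as np

rng = np.random.default_rng(3)
s, N, n_shifts = 10, 2 ** 10, 8                 # truncation dim, points, shifts
z = np.array([1, 182667, 469891, 498753, 110745, 446247, 250185,
              118627, 245333, 283199])[:s]      # illustrative generating vector
# toy integrand with decaying coordinate weights (a stand-in for L(u(.,y)))
F = lambda y: np.exp(np.sum((y - 0.5) / 2.0 ** np.arange(1, s + 1), axis=1))

estimates = []
for _ in range(n_shifts):
    shift = rng.uniform(size=s)
    pts = (np.outer(np.arange(N), z) / N + shift) % 1.0   # shifted lattice
    estimates.append(F(pts).mean())
print("QMC estimate:", np.mean(estimates),
      "+/-", np.std(estimates) / np.sqrt(n_shifts))
```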
We investigate a filtered Lie-Trotter splitting scheme for the ``good'' Boussinesq equation and derive an error estimate for initial data with very low regularity. Through the use of discrete Bourgain spaces, our analysis extends to initial data in $H^{s}$ for $0<s\leq 2$, overcoming the constraint $s>1/2$ imposed by the bilinear estimate in smooth Sobolev spaces. We establish convergence rates of order $\tau^{s/2}$ in $L^2$ for such levels of regularity. Our analytical findings are supported by numerical experiments.
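A minimal sketch of one filtered Lie-Trotter step for the ``good'' Boussinesq equation $u_{tt} = u_{xx} - u_{xxxx} - (u^2)_{xx}$ on a periodic grid: the linear flow is solved exactly mode by mode in Fourier space, the nonlinear flow is exact as well (it only updates $u_t$), and a frequency projection $P_\tau$ with cutoff $\sim \tau^{-1/2}$ is applied, in the spirit of filtered splitting schemes. Filter placement and constants are illustrative and may differ from the paper's scheme.

```python
# Filtered Lie-Trotter step for the "good" Boussinesq equation,
# written as u_t = v, v_t = u_xx - u_xxxx - (u^2)_xx, periodic in x.
import numpy as np

M = 256                                    # Fourier modes
x = 2 * np.pi * np.arange(M) / M
k = np.fft.fftfreq(M, d=1.0 / M)           # integer wave numbers
w = np.abs(k) * np.sqrt(k**2 + 1)          # linear dispersion relation

def step(u_hat, v_hat, tau):
    P = np.abs(k) <= tau ** -0.5           # frequency filter P_tau
    u_hat, v_hat = P * u_hat, P * v_hat
    # nonlinear flow (exact): v_t = -(u^2)_xx with u frozen
    u = np.fft.ifft(u_hat)
    v_hat = v_hat + tau * k**2 * P * np.fft.fft(u**2)
    # linear flow (exact): rotation with frequency w in each mode
    c, s = np.cos(w * tau), np.sin(w * tau)
    sw = np.where(w > 0, s / np.where(w > 0, w, 1.0), tau)  # sinc at k = 0
    return c * u_hat + sw * v_hat, -w * s * u_hat + c * v_hat

tau = 1e-3
u_hat = np.fft.fft(np.cos(x))              # smooth toy initial data, u_t = 0
v_hat = np.zeros_like(u_hat)
for _ in range(1000):                      # integrate to t = 1
    u_hat, v_hat = step(u_hat, v_hat, tau)
print("max |u| at t = 1:", np.abs(np.fft.ifft(u_hat)).max())
```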
In this article, we aim to obtain the Fisher-Riemann geodesics for nonparametric families of probability densities as a weak limit of the parametric case as the number of parameters increases.
The variation of empirical Fr\'echet means on a metric space with curvature bounded above is encoded via random fields indexed by unit tangent vectors. A central limit theorem shows that these random tangent fields converge to a Gaussian field of the same kind, laying the foundation for more traditionally formulated central limit theorems in subsequent work.
We consider the Sherrington-Kirkpatrick model of spin glasses at high temperature and with no external field, and study the problem of sampling from the Gibbs distribution $\mu$ in polynomial time. We prove that, for any inverse temperature $\beta<1/2$, there exists an algorithm with complexity $O(n^2)$ that samples from a distribution $\mu^{alg}$ which is close in normalized Wasserstein distance to $\mu$. Namely, there exists a coupling of $\mu$ and $\mu^{alg}$ such that if $(x,x^{alg})\in\{-1,+1\}^n\times \{-1,+1\}^n$ is a pair drawn from this coupling, then $n^{-1}\mathbb{E}\{\|x-x^{alg}\|_2^2\}=o_n(1)$. The best previous results, by Bauerschmidt and Bodineau and by Eldan, Koehler, and Zeitouni, implied efficient algorithms to approximately sample (under a stronger metric) for $\beta<1/4$. We complement this result with a negative one, by introducing a suitable ``stability'' property for sampling algorithms, which is verified by many standard techniques. We prove that no stable algorithm can approximately sample for $\beta>1$, even under the normalized Wasserstein metric. Our sampling method is based on an algorithmic implementation of stochastic localization, which progressively tilts the measure $\mu$ towards a single configuration, together with an approximate message passing (AMP) algorithm that is used to approximate the mean of the tilted measure.
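A heavily simplified sketch of the strategy in the last sentence: discretize the stochastic localization dynamics, approximating the mean of the tilted measure at each step by an AMP/TAP iteration with Onsager correction. Step counts, the temperature, and the sign rounding at the end are illustrative choices; see the paper for the actual algorithm and its guarantees.

```python
# Stochastic localization + AMP sampler sketch for the SK Gibbs measure
# mu(x) ~ exp(beta * x^T A x / 2) on {-1,+1}^n, in the high-temperature
# regime beta < 1/2.
import numpy as np

rng = np.random.default_rng(4)
n, beta = 500, 0.3
A = rng.standard_normal((n, n)) / np.sqrt(n)
A = (A + A.T) / np.sqrt(2)                     # GOE-normalized couplings

def amp_mean(y, iters=30):
    """AMP/TAP approximation of the mean of exp(beta x.Ax/2 + y.x)."""
    m_prev, m = np.zeros(n), np.tanh(y)
    for _ in range(iters):
        q = np.mean(m**2)
        m_new = np.tanh(y + beta * A @ m - beta**2 * (1 - q) * m_prev)
        m_prev, m = m, m_new
    return m

# localization: the tilt y_t drifts along the (approximate) tilted mean,
# progressively concentrating the measure on a single configuration
T, steps = 5.0, 200
dt = T / steps
y = np.zeros(n)
for _ in range(steps):
    y = y + amp_mean(y) * dt + np.sqrt(dt) * rng.standard_normal(n)

x_alg = np.sign(amp_mean(y))                   # round the localized mean
print("mean |m_i| after localization:", np.abs(amp_mean(y)).mean())
```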