In this paper, we investigate the strong convergence of the semi-implicit Euler-Maruyama (EM) method for stochastic differential equations driven by a class of L\'evy processes. Compared with existing results, we obtain the convergence order of the numerical scheme and characterize its dependence on the parameters of the class of L\'evy processes. In addition, the existence and uniqueness of the numerical invariant measure of the semi-implicit EM method are studied, and its convergence to the underlying invariant measure is proved. Numerical examples are provided to confirm our theoretical results.
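As a point of reference, here is a minimal sketch of a semi-implicit (drift-implicit) EM step for a toy model: a scalar SDE with linear drift driven by Brownian motion plus compound Poisson jumps, one simple instance of a L\'evy process. The model, parameters, and jump distribution are illustrative choices, not those analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def semi_implicit_em(x0, T, n_steps, theta=2.0, sigma=0.5,
                     jump_rate=1.0, jump_scale=0.1):
    """Drift-implicit EM for dX = -theta*X dt + sigma dL_t, where L is
    Brownian motion plus a compound Poisson process (a Levy process).
    With linear drift, the implicit step has a closed-form solution."""
    h = T / n_steps
    x = np.empty(n_steps + 1)
    x[0] = x0
    for n in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(h))
        n_jumps = rng.poisson(jump_rate * h)
        dL = dW + rng.normal(0.0, jump_scale, size=n_jumps).sum()
        # implicit in the drift: x_{n+1} = x_n - theta*x_{n+1}*h + sigma*dL
        x[n + 1] = (x[n] + sigma * dL) / (1.0 + theta * h)
    return x

path = semi_implicit_em(x0=1.0, T=10.0, n_steps=1000)
print(path[-1])
```

Making only the drift implicit keeps each step explicit in the noise, so no nonlinear solve involving the L\'evy increment is needed.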
A central issue in machine learning is how to train models on sensitive user data. Industry has widely adopted a simple algorithm: Stochastic Gradient Descent with noise (a.k.a. Stochastic Gradient Langevin Dynamics). However, foundational theoretical questions about this algorithm's privacy loss remain open -- even in the seemingly simple setting of smooth convex losses over a bounded domain. Our main result resolves these questions: for a large range of parameters, we characterize the differential privacy up to a constant factor. This result reveals that all previous analyses for this setting have the wrong qualitative behavior. Specifically, while previous privacy analyses increase ad infinitum in the number of iterations, we show that after a small burn-in period, running SGD longer leaks no further privacy. Our analysis departs from previous approaches based on fast mixing, instead using techniques based on optimal transport (namely, Privacy Amplification by Iteration) and the Sampled Gaussian Mechanism (namely, Privacy Amplification by Sampling). Our techniques readily extend to other settings, e.g., strongly convex losses, non-uniform stepsizes, arbitrary batch sizes, and random or cyclic choice of batches.
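For concreteness, here is a minimal sketch of the algorithm under study: projected SGD with Gaussian noise on a smooth convex loss over a bounded domain. The toy loss, the unit-ball domain, and all constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def noisy_sgd(grad, project, x0, steps, eta=0.1, sigma=1.0):
    """Noisy projected SGD (SGLD-style iteration):
    x_{t+1} = Pi_K( x_t - eta * (grad(x_t) + N(0, sigma^2 I)) )."""
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        g = grad(x) + sigma * rng.normal(size=x.shape)
        x = project(x - eta * g)
    return x

# toy smooth convex loss f(x) = ||x - c||^2 / 2 over the unit ball
c = np.array([2.0, 0.0])
grad = lambda x: x - c
project = lambda x: x / max(1.0, np.linalg.norm(x))  # Euclidean projection
print(noisy_sgd(grad, project, x0=np.zeros(2), steps=500))
```

In this notation, the result above says that the privacy loss of the map from the data (hidden inside `grad`) to the final iterate stops growing once `steps` exceeds a burn-in threshold.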
We adopt an information-theoretic framework to analyze the generalization behavior of the class of iterative, noisy learning algorithms. This class is particularly suitable for study under information-theoretic metrics as the algorithms are inherently randomized, and it includes commonly used algorithms such as Stochastic Gradient Langevin Dynamics (SGLD). Herein, we use the maximal leakage (equivalently, the Sibson mutual information of order infinity) metric, as it is simple to analyze, and it implies bounds both on the probability of a large generalization error and on its expected value. We show that, if the update function (e.g., gradient) is bounded in $L_2$-norm, then adding isotropic Gaussian noise leads to optimal generalization bounds: indeed, the input and output of the learning algorithm in this case are asymptotically statistically independent. Furthermore, we demonstrate how the assumptions on the update function affect the optimal (in the sense of minimizing the induced maximal leakage) choice of the noise. Finally, we compute explicit tight upper bounds on the induced maximal leakage for several scenarios of interest.
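The following minimal sketch shows the kind of update this framework covers: a gradient step clipped to bounded $L_2$-norm, followed by isotropic Gaussian noise. The toy objective and all constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

def noisy_clipped_update(x, grad, eta=0.1, clip=1.0, sigma=1.0):
    """One step of an iterative, noisy learner: the update direction is
    rescaled so that ||g||_2 <= clip, then isotropic Gaussian noise is
    added; this bounded-update-plus-Gaussian-noise structure is what the
    maximal-leakage analysis above exploits."""
    g = grad(x)
    g = g * min(1.0, clip / (np.linalg.norm(g) + 1e-12))
    return x - eta * (g + sigma * rng.normal(size=x.shape))

x = np.zeros(3)
grad = lambda x: 4.0 * x - 1.0  # toy gradient with a priori unbounded norm
for _ in range(200):
    x = noisy_clipped_update(x, grad)
print(x)
```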
Single-call stochastic extragradient methods, like stochastic past extragradient (SPEG) and stochastic optimistic gradient (SOG), have gained a lot of interest in recent years and are among the most efficient algorithms for solving large-scale min-max optimization and variational inequality problems (VIPs) arising in various machine learning tasks. However, despite their undoubted popularity, current convergence analyses of SPEG and SOG require a bounded variance assumption. In addition, several important questions regarding the convergence properties of these methods are still open, including mini-batching, efficient step-size selection, and convergence guarantees under different sampling strategies. In this work, we address these questions and provide convergence guarantees for two large classes of structured non-monotone VIPs: (i) quasi-strongly monotone problems (a generalization of strongly monotone problems) and (ii) weak Minty variational inequalities (a generalization of monotone and Minty VIPs). We introduce the expected residual condition, explain its benefits, and show how it can be used to obtain a strictly weaker bound than previously used growth conditions, expected co-coercivity, or bounded variance assumptions. Equipped with this condition, we provide theoretical guarantees for the convergence of single-call extragradient methods for different step-size selections, including constant, decreasing, and step-size-switching rules. Furthermore, our convergence analysis holds under the arbitrary sampling paradigm, which includes importance sampling and various mini-batching strategies as special cases.
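As an illustration, here is a minimal sketch of SPEG (Popov's past extragradient) on a toy strongly monotone operator; the operator, noise model, and constant step size are illustrative choices rather than the setting of this work.

```python
import numpy as np

rng = np.random.default_rng(3)

def speg(F, z0, steps, gamma=0.1, noise=0.1):
    """Stochastic past extragradient: a single-call scheme that makes one
    fresh stochastic operator evaluation per iteration and reuses the
    previous one:
        w_t     = z_t - gamma * F_hat(w_{t-1})
        z_{t+1} = z_t - gamma * F_hat(w_t)
    """
    z = np.array(z0, dtype=float)
    g_prev = F(z) + noise * rng.normal(size=z.shape)
    for _ in range(steps):
        w = z - gamma * g_prev
        g = F(w) + noise * rng.normal(size=z.shape)  # the single call
        z = z - gamma * g
        g_prev = g
    return z

# strongly monotone toy operator from the saddle problem
# min_x max_y (x^2 - y^2)/2 + 2xy, i.e. F(x, y) = (x + 2y, y - 2x)
A = np.array([[1.0, 2.0], [-2.0, 1.0]])
F = lambda z: A @ z
print(speg(F, z0=np.array([5.0, -3.0]), steps=2000))  # near the solution 0
```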
This article presents a general approximation-theoretic framework to analyze measure-transport algorithms for sampling and characterizing probability measures. Sampling is a task that frequently arises in data science and uncertainty quantification. We provide error estimates in the continuum limit, i.e., when the measures (or their densities) are given, but when the transport map is discretized or approximated using a finite-dimensional function space. Our analysis relies on the regularity theory of transport maps, as well as on classical approximation theory for high-dimensional functions. A third element of our analysis, which is of independent interest, is the development of new stability estimates that relate the normed distance between two maps to the divergence between the pushforward measures they define. We further present a series of applications where quantitative convergence rates are obtained for practical problems using Wasserstein metrics, maximum mean discrepancy, and Kullback-Leibler divergence. Specialized rates for approximations of the popular triangular Knothe-Rosenblatt maps are obtained, followed by numerical experiments that demonstrate and extend our theory.
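As a concrete illustration of triangular maps, the following minimal sketch builds a two-dimensional Knothe-Rosenblatt map from conditional quantile functions; the target distribution is an illustrative choice, not one from the article.

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(4)
Phi_inv = NormalDist().inv_cdf  # standard normal quantile function

def kr_map(u):
    """Triangular (Knothe-Rosenblatt) map pushing Uniform(0,1)^2 forward
    to the target X1 ~ Exp(1), X2 | X1 ~ N(X1, 1): component k depends
    only on the first k inputs, and each component is a conditional
    quantile function."""
    u1, u2 = np.clip(u, 1e-12, 1.0 - 1e-12)
    x1 = -np.log1p(-u1)       # inverse CDF of Exp(1)
    x2 = x1 + Phi_inv(u2)     # conditional quantile of N(x1, 1)
    return np.array([x1, x2])

samples = np.array([kr_map(rng.uniform(size=2)) for _ in range(10000)])
print(samples.mean(axis=0))   # roughly [1.0, 1.0]
```

Approximating such a map in a finite-dimensional function space, rather than evaluating it exactly, is the discretization step whose error the article quantifies.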
We study the problem of designing adaptive multi-armed bandit algorithms that perform optimally in both the stochastic setting and the adversarial setting simultaneously (often known as a best-of-both-world guarantee). A line of recent works shows that when configured and analyzed properly, the Follow-the-Regularized-Leader (FTRL) algorithm, originally designed for the adversarial setting, can in fact optimally adapt to the stochastic setting as well. Such results, however, critically rely on an assumption that there exists one unique optimal arm. Recently, Ito (2021) took the first step to remove such an undesirable uniqueness assumption for one particular FTRL algorithm with the $\frac{1}{2}$-Tsallis entropy regularizer. In this work, we significantly improve and generalize this result, showing that uniqueness is unnecessary for FTRL with a broad family of regularizers and a new learning rate schedule. For some regularizers, our regret bounds also improve upon prior results even when uniqueness holds. We further provide an application of our results to the decoupled exploration and exploitation problem, demonstrating that our techniques are broadly applicable.
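For concreteness, here is a minimal sketch of FTRL with the $\frac{1}{2}$-Tsallis entropy regularizer in a bandit loop, with two equally optimal arms to echo the non-uniqueness setting. The Bernoulli loss model and the standard schedule $\eta_t = 1/\sqrt{t}$ are illustrative; they are not the new learning rate schedule proposed in this work.

```python
import numpy as np

rng = np.random.default_rng(5)

def tsallis_probs(L, eta, n_bisect=60):
    """FTRL step with the 1/2-Tsallis entropy psi(p) = -(2/eta)*sum_i sqrt(p_i):
    stationarity of min_p <L, p> + psi(p) over the simplex gives
    p_i = 1 / (eta * (L_i + lam))^2, with lam found by bisection so that
    the probabilities sum to one."""
    K = len(L)
    lo = 1.0 / eta - L.min()           # here sum_i p_i >= 1
    hi = np.sqrt(K) / eta - L.min()    # here sum_i p_i <= 1
    for _ in range(n_bisect):
        lam = 0.5 * (lo + hi)
        if np.sum(1.0 / (eta * (L + lam)) ** 2) > 1.0:
            lo = lam
        else:
            hi = lam
    return 1.0 / (eta * (L + lam)) ** 2

K, T = 5, 20000
mean_loss = np.array([0.2, 0.2, 0.4, 0.5, 0.6])  # two optimal arms
L_hat = np.zeros(K)                  # cumulative importance-weighted losses
for t in range(1, T + 1):
    p = tsallis_probs(L_hat, eta=1.0 / np.sqrt(t))
    arm = rng.choice(K, p=p / p.sum())
    loss = rng.binomial(1, mean_loss[arm])
    L_hat[arm] += loss / p[arm]      # unbiased loss estimate
print(p.round(3))                    # mass concentrates on the two best arms
```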
We introduce the Weak-form Estimation of Nonlinear Dynamics (WENDy) method for estimating model parameters in nonlinear systems of ODEs. The core mathematical idea involves an efficient conversion of the strong form representation of a model to its weak form, followed by solving a regression problem to perform parameter inference. The core statistical idea rests on the Errors-In-Variables framework, which necessitates the use of the iteratively reweighted least squares algorithm. Further improvements are obtained by using orthonormal test functions, created from a set of $C^{\infty}$ bump functions of varying support sizes. We demonstrate that WENDy is a highly robust and efficient method for parameter inference in differential equations. Without relying on any numerical differential equation solvers, WENDy computes accurate estimates and is robust to large (biologically relevant) levels of measurement noise. For low-dimensional systems with modest amounts of data, WENDy is competitive with conventional forward solver-based nonlinear least squares methods in terms of speed and accuracy. For both higher-dimensional systems and stiff systems, WENDy is typically both faster (often by orders of magnitude) and more accurate than forward solver-based approaches. We illustrate the method and its performance in some common population and neuroscience models, including logistic growth, Lotka-Volterra, FitzHugh-Nagumo, Hindmarsh-Rose, and a Protein Transduction Benchmark model. Software and code for reproducing the examples are available at https://github.com/MathBioCU/WENDy.
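A minimal sketch of the weak-form idea on logistic growth: integrating against compactly supported bump test functions and moving the derivative onto the test function turns $\dot{x} = \theta_1 x - \theta_2 x^2$ into a linear regression for $(\theta_1, \theta_2)$. Plain least squares stands in for WENDy's iteratively reweighted scheme, and the grid, test functions, and noise level are illustrative.

```python
import numpy as np

rng = np.random.default_rng(6)

# noisy data from logistic growth x' = x - x^2, x(0) = 0.1
t = np.linspace(0.0, 10.0, 401)
x_true = 0.1 * np.exp(t) / (1.0 + 0.1 * (np.exp(t) - 1.0))
x = x_true + 0.02 * rng.normal(size=t.size)
dt = t[1] - t[0]

def bump(t, c, r):
    """C-infinity bump supported on (c - r, c + r)."""
    s = np.clip((t - c) / r, -1 + 1e-9, 1 - 1e-9)
    phi = np.exp(-1.0 / (1.0 - s**2))
    phi[np.abs(t - c) >= r] = 0.0
    return phi

# weak form: integral(phi * x') = -integral(phi' * x)
#          = theta1 * integral(phi * x) - theta2 * integral(phi * x^2)
rows, rhs = [], []
for c in np.linspace(1.0, 9.0, 25):
    phi = bump(t, c, r=1.0)
    dphi = np.gradient(phi, t)
    rhs.append(-np.sum(dphi * x) * dt)
    rows.append([np.sum(phi * x) * dt, -np.sum(phi * x**2) * dt])
theta, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
print(theta)  # roughly [1.0, 1.0]
```

Note that no ODE solver is needed and the noisy data are never differentiated: the only derivative taken is of the smooth test function.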
In this work, we extend the data-driven It\^{o} stochastic differential equation (SDE) framework for the pathwise assessment of short-term forecast errors to account for the time-dependent upper bound that naturally constrains the observable historical data and forecast. We propose a new nonlinear and time-inhomogeneous SDE model with a Jacobi-type diffusion term for the phenomenon of interest, simultaneously driven by the forecast and the constraining upper bound. We rigorously demonstrate the existence and uniqueness of a strong solution to the SDE model by imposing a condition on the time-varying mean-reversion parameter appearing in the drift term. The normalized forecast function is thresholded to keep this mean-reversion parameter bounded. The SDE model parameter calibration also covers the thresholding parameter of the normalized forecast by applying a novel iterative two-stage optimization procedure to user-selected approximations of the likelihood function. Another novel contribution is estimating the transition density of the forecast error process, which is not known analytically in closed form, through a tailored kernel smoothing technique with the control variate method. We fit the model to the 2019 photovoltaic (PV) solar power daily production and forecast data in Uruguay, and compute an estimate of the daily maximum solar PV production. Two statistical versions of the constrained SDE model are fitted, with the beta and truncated normal distributions as proxies for the transition density. Empirical results include simulations of the normalized solar PV power production and pathwise confidence bands generated through an indirect inference method. An objective comparison of the optimal parametric points associated with the two selected statistical approximations is provided by applying the proposed kernel density estimation technique for the transition function of the forecast error process.
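As a minimal sketch, the snippet below simulates a Jacobi-type mean-reverting diffusion with an Euler scheme, assuming an illustrative sinusoidal normalized forecast and a constant upper bound $B = 1$; the model in the paper is time-inhomogeneous with a time-dependent bound and calibrated parameters.

```python
import numpy as np

rng = np.random.default_rng(7)

def simulate_jacobi(x0, T, n, theta=4.0, sigma=0.6, B=1.0,
                    forecast=lambda t: 0.5 + 0.3 * np.sin(t)):
    """Euler scheme for a Jacobi-type diffusion on [0, B]:
        dX_t = theta * (p(t) - X_t) dt + sigma * sqrt(X_t (B - X_t)) dW_t,
    with p(t) playing the role of the normalized forecast. The diffusion
    coefficient vanishes at 0 and B, which keeps paths inside the bound;
    the clip only guards against discretization overshoot."""
    h = T / n
    x = np.empty(n + 1)
    x[0] = x0
    for k in range(n):
        drift = theta * (forecast(k * h) - x[k])
        diff = sigma * np.sqrt(max(x[k] * (B - x[k]), 0.0))
        x[k + 1] = np.clip(x[k] + drift * h + diff * np.sqrt(h) * rng.normal(),
                           0.0, B)
    return x

path = simulate_jacobi(x0=0.5, T=10.0, n=2000)
print(path.min(), path.max())  # the path stays within [0, 1]
```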
Time-dependent Maxwell's equations govern electromagnetics. Under certain conditions, we can rewrite these equations as a second-order partial differential equation, in this case the vectorial wave equation. For the vectorial wave equation, we investigate the numerical treatment and the challenges in its implementation. For this purpose, we consider a space-time variational setting, i.e., time is treated as just another dimension. More specifically, we apply integration by parts in time as well as in space, leading to a space-time variational formulation with different trial and test spaces. Conforming discretizations of tensor-product type result in a Galerkin--Petrov finite element method that requires a CFL condition for stability. For this Galerkin--Petrov variational formulation, we study the CFL condition and its sharpness. To overcome the CFL condition, we use a Hilbert-type transformation that leads to a variational formulation with equal trial and test spaces. Conforming space-time discretizations then result in a new Galerkin--Bubnov finite element method that is unconditionally stable. In numerical examples, we demonstrate the effectiveness of this Galerkin--Bubnov finite element method. Furthermore, we investigate different projections of the right-hand side and their influence on the convergence rates. This paper is the first step towards a more stable computation and a better understanding of vectorial wave equations in a conforming space-time approach.
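For reference, the elimination step referred to above, written for constant scalar permittivity $\varepsilon$ and permeability $\mu$ (a simplifying assumption for this illustration): differentiating Amp\`ere's law in time and substituting Faraday's law,
\[
\varepsilon\, \partial_t \mathbf{E} - \nabla \times \mathbf{H} = -\mathbf{J}, \qquad \mu\, \partial_t \mathbf{H} + \nabla \times \mathbf{E} = \mathbf{0},
\]
yields the vectorial wave equation
\[
\varepsilon\, \partial_{tt} \mathbf{E} + \nabla \times \left( \mu^{-1}\, \nabla \times \mathbf{E} \right) = -\partial_t \mathbf{J}.
\]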
Consider the problem of solving systems of linear algebraic equations $Ax=b$ with a real symmetric positive definite matrix $A$ using the conjugate gradient (CG) method. To stop the algorithm at the appropriate moment, it is important to monitor the quality of the approximate solution. One of the most relevant quantities for measuring the quality of the approximate solution is the $A$-norm of the error. This quantity cannot be easily computed; however, it can be estimated. In this paper we discuss and analyze the behaviour of the Gauss-Radau upper bound on the $A$-norm of the error, based on viewing CG as a procedure for approximating a certain Riemann-Stieltjes integral. This upper bound depends on a prescribed underestimate $\mu$ of the smallest eigenvalue of $A$. We concentrate on explaining a phenomenon observed during computations: in later CG iterations, the upper bound loses its accuracy and becomes almost independent of $\mu$. We construct a model problem that is used to demonstrate and study the behaviour of the upper bound in dependence on $\mu$, and we develop formulas that are helpful in understanding this behaviour. We show that the above-mentioned phenomenon is closely related to the convergence of the smallest Ritz value to the smallest eigenvalue of $A$. It occurs when the smallest Ritz value is a better approximation to the smallest eigenvalue than the prescribed underestimate $\mu$. We also suggest an adaptive strategy for improving the accuracy of the upper bounds in the preceding iterations.
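To fix ideas, the sketch below monitors CG with an upper bound that uses an underestimate $\mu \leq \lambda_{\min}(A)$. It implements only the crude inequality $\|x - x_k\|_A^2 = r_k^T A^{-1} r_k \leq \|r_k\|^2 / \mu$; the Gauss-Radau bound analyzed in this paper is a much sharper quadrature-based refinement of the same idea. The test matrix and $\mu$ are illustrative.

```python
import numpy as np

rng = np.random.default_rng(8)

# SPD test matrix with known spectrum; lambda_min = 0.01
n = 100
Q, _ = np.linalg.qr(rng.normal(size=(n, n)))
A = Q @ np.diag(np.linspace(0.01, 10.0, n)) @ Q.T
b = rng.normal(size=n)
x_star = np.linalg.solve(A, b)       # reference solution (for the true error)
mu = 0.005                           # prescribed underestimate of lambda_min

x = np.zeros(n)
r = b - A @ x
p = r.copy()
for k in range(60):
    Ap = A @ p
    alpha = (r @ r) / (p @ Ap)
    x += alpha * p
    r_new = r - alpha * Ap
    p = r_new + ((r_new @ r_new) / (r @ r)) * p
    r = r_new
    e = x_star - x
    err_A = np.sqrt(e @ A @ e)          # true A-norm of the error
    bound = np.sqrt((r @ r) / mu)       # crude mu-based upper bound
    if (k + 1) % 10 == 0:
        print(f"iter {k+1:2d}: error {err_A:.3e} <= bound {bound:.3e}")
```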
Uncertain fractional differential equations (UFDEs) are differential equations driven by an uncertain process. As a significant mathematical tool for describing the evolution of dynamic systems, UFDEs improve upon ordinary differential equations with integer-order derivatives thanks to their hereditary and memory properties. In most instances, however, precise analytical solutions of UFDEs are difficult to obtain due to the complex form of the equations themselves. To date, research on numerical methods for UFDEs remains scarce, and the accuracy of existing algorithms is limited. In this work, building on the interval weighting method, a class of fractional Adams methods is proposed to solve UFDEs; it extends the traditional predictor-corrector method to higher-order cases. The stability and truncation error of the proposed algorithm are analyzed. As applications, several numerical simulations (including the $\alpha$-path, extreme values, and the first hitting time of the UFDE) are provided to demonstrate the accuracy and efficiency of the proposed numerical method.
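For orientation, here is a minimal sketch of the classical fractional Adams (predictor-corrector) scheme of Diethelm-Ford-Freed for a deterministic Caputo equation $D^{\alpha} y = f(t, y)$ with $0 < \alpha < 1$; in the uncertain setting, a scheme of this type would be run along each $\alpha$-path. The test equation and parameters are illustrative.

```python
import numpy as np
from math import gamma

def fractional_adams(f, y0, alpha, T, n):
    """Fractional Adams-Bashforth-Moulton predictor-corrector for the
    Caputo initial value problem D^alpha y = f(t, y), y(0) = y0."""
    h = T / n
    t = np.linspace(0.0, T, n + 1)
    y = np.empty(n + 1); y[0] = y0
    fy = np.empty(n + 1); fy[0] = f(t[0], y0)
    c_b = h**alpha / alpha
    c_a = h**alpha / (alpha * (alpha + 1.0))
    for k in range(n):
        j = np.arange(k + 1)
        # rectangle-rule (predictor) weights
        b = c_b * ((k + 1 - j)**alpha - (k - j)**alpha)
        y_pred = y0 + (b * fy[:k + 1]).sum() / gamma(alpha)
        # trapezoid-rule (corrector) weights
        a = c_a * ((k - j + 2)**(alpha + 1) + (k - j)**(alpha + 1)
                   - 2.0 * (k - j + 1)**(alpha + 1))
        a[0] = c_a * (k**(alpha + 1) - (k - alpha) * (k + 1)**alpha)
        y[k + 1] = y0 + ((a * fy[:k + 1]).sum()
                         + c_a * f(t[k + 1], y_pred)) / gamma(alpha)
        fy[k + 1] = f(t[k + 1], y[k + 1])
    return t, y

# D^0.7 y = -y, y(0) = 1; the exact solution is Mittag-Leffler E_0.7(-t^0.7)
t, y = fractional_adams(lambda t, y: -y, y0=1.0, alpha=0.7, T=5.0, n=500)
print(y[-1])
```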