
We consider a generic and explicit tamed Euler--Maruyama scheme for multidimensional time-inhomogeneous stochastic differential equations with multiplicative Brownian noise. The diffusion coefficient is uniformly elliptic, H\"older continuous and weakly differentiable in the spatial variables, while the drift satisfies the Ladyzhenskaya--Prodi--Serrin condition, as considered by Krylov and R\"ockner (2005). In the discrete scheme, the drift is tamed by replacing it with an approximation. A strong rate of convergence of the scheme is provided in terms of the approximation error of the drift in a suitable, possibly very weak topology. A few examples of approximating drifts are discussed in detail. The parameters of the approximating drifts can be varied and fine-tuned to achieve the standard $1/2$ strong convergence rate with a logarithmic factor.
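As a minimal illustration of the taming idea, the sketch below simulates one path of an Euler--Maruyama scheme in which the (possibly singular) drift is replaced by a user-supplied approximation. The names `b_n` and `sigma` are illustrative; the choice of approximating drift, and how its parameter is tied to the step size, follows the paper's framework only loosely.

```python
import numpy as np

def tamed_euler_maruyama(b_n, sigma, x0, T, num_steps, rng=None):
    """One path of the Euler-Maruyama scheme for
        dX_t = b(t, X_t) dt + sigma(t, X_t) dW_t,
    with the singular drift b replaced by a smoothed approximation
    b_n, e.g. a mollification whose parameter is tied to the step
    size h (illustrative choice, not the paper's prescription).
    """
    rng = rng or np.random.default_rng()
    h = T / num_steps
    x = np.asarray(x0, dtype=float)
    m = sigma(0.0, x).shape[1]          # dimension of the Brownian motion
    for k in range(num_steps):
        t = k * h
        dW = rng.normal(scale=np.sqrt(h), size=m)
        x = x + b_n(t, x) * h + sigma(t, x) @ dW
    return x
```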

Related Content

We consider Broyden's method and some accelerated schemes for nonlinear equations having a strongly regular singularity of first order with a one-dimensional nullspace. Our two main results are as follows. First, we show that the use of a preceding Newton-like step ensures convergence for starting points in a starlike domain with density 1, which extends the domain of convergence of these methods significantly. Second, we establish that the matrix updates of Broyden's method converge q-linearly with the same asymptotic factor as the iterates. This contributes to the long-standing question of whether the Broyden matrices converge, by showing that this is indeed the case for the setting at hand. Furthermore, we prove that the Broyden directions violate uniform linear independence, which implies that existing results for convergence of the Broyden matrices cannot be applied. High-precision numerical experiments confirm the enlarged domain of convergence, the q-linear convergence of the matrix updates, and the lack of uniform linear independence. In addition, they suggest that these results can be extended to singularities of higher order and that Broyden's method can converge r-linearly without converging q-linearly. The underlying code is freely available.
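For reference, here is a minimal sketch of classical ("good") Broyden's method, in which a preceding Newton-like step corresponds to initializing the matrix with the exact (or finite-difference) Jacobian at the starting point. This is the textbook iteration, not the accelerated schemes studied in the paper.

```python
import numpy as np

def broyden(F, x0, J0, max_iter=50, tol=1e-12):
    """Classical Broyden's method for F(x) = 0.

    J0 is the initial Jacobian approximation; taking J0 = F'(x0)
    (or a finite-difference estimate) plays the role of the
    preceding Newton-like step.
    """
    x, J = np.asarray(x0, float), np.asarray(J0, float)
    for _ in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        s = np.linalg.solve(J, -Fx)          # quasi-Newton step
        x_new = x + s
        y = F(x_new) - Fx
        # rank-one secant update enforcing J_new @ s = y
        J = J + np.outer(y - J @ s, s) / (s @ s)
        x = x_new
    return x, J
```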

An increasing number of machine learning problems, such as robust or adversarial variants of existing algorithms, require minimizing a loss function that is itself defined as a maximum. Carrying out a loop of stochastic gradient ascent (SGA) steps on the (inner) maximization problem, followed by an SGD step on the (outer) minimization, is known as Epoch Stochastic Gradient \textit{Descent Ascent} (ESGDA). While successful in practice, the theoretical analysis of ESGDA remains challenging, with no clear guidance on the choice of inner loop size or on the interplay between inner and outer step sizes. We propose RSGDA (Randomized SGDA), a variant of ESGDA with a stochastic loop size that admits a simpler theoretical analysis. RSGDA comes with the first (among SGDA algorithms) almost-sure convergence rates in the nonconvex-min/strongly-concave-max setting. RSGDA can be parameterized with optimal loop sizes that guarantee the best convergence rates known to hold for SGDA. We test RSGDA on toy and larger-scale problems, using distributionally robust optimization and single-cell data matching via optimal transport as testbeds.
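One natural way to realize a stochastic loop size, sketched below, is to flip a coin at each iteration: always take an ascent step, and with probability $p$ also take a descent step, so the inner-loop length is geometric with mean $1/p$. This is a hedged reading of the randomization idea; the oracle names and step-size schedule are illustrative.

```python
import numpy as np

def rsgda(grad_x, grad_y, x, y, eta_x, eta_y, p, num_iters, rng=None):
    """Sketch of randomized SGDA for min_x max_y f(x, y).

    grad_x(x, y), grad_y(x, y): stochastic gradient oracles.
    Each iteration takes an ascent step in y; with probability p it
    also takes a descent step in x (expected inner-loop size 1/p).
    """
    rng = rng or np.random.default_rng()
    for _ in range(num_iters):
        y = y + eta_y * grad_y(x, y)       # inner (ascent) step
        if rng.random() < p:               # geometric loop size
            x = x - eta_x * grad_x(x, y)   # outer (descent) step
    return x, y
```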

We develop a rapid and accurate contour method for the solution of time-fractional PDEs. The method inverts the Laplace transform via an optimised stable quadrature rule, suitable for infinite-dimensional operators, whose error decreases like $\exp(-cN/\log(N))$ for $N$ quadrature points. The method is parallelisable, avoids having to resolve singularities of the solution as $t\downarrow 0$, and avoids the large memory consumption that can be a challenge for time-stepping methods applied to time-fractional PDEs. The ODEs resulting from the quadrature are solved using adaptive sparse spectral methods that converge exponentially with optimal linear complexity, and their solutions are reused across different times. We provide a complete analysis of our approach for fractional beam equations used to model small-amplitude vibration of viscoelastic materials with a fractional Kelvin-Voigt stress-strain relationship. We calculate the system's energy evolution over time and the surface deformation in cases of both constant and non-constant viscoelastic parameters. An infinite-dimensional ``solve-then-discretise'' approach considerably simplifies the analysis, which studies the generalisation of the numerical range of a quasi-linearisation of a suitable operator pencil. This allows us to build an efficient algorithm with explicit error control. The approach can be readily adapted to other time-fractional PDEs and is not constrained to fractional parameters in the range $0<\nu<1$.
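The structure of such contour methods can be seen already in the scalar case. The sketch below inverts a scalar Laplace transform on Weideman's version of the classical Talbot contour (not the paper's optimised rule); in the operator-valued setting each evaluation $F(z_k)$ becomes an elliptic solve, performed once per node and reused for every time $t$.

```python
import numpy as np

def talbot_laplace_inversion(F, t, N=32):
    """Invert a scalar Laplace transform F(z) at time t > 0 via the
    midpoint rule on Weideman's Talbot contour with N nodes.  This
    illustrates the contour-quadrature structure only; the paper
    uses a different, optimised quadrature rule.
    """
    theta = (np.arange(N) + 0.5) * np.pi / N     # midpoint nodes in (0, pi)
    cot = 1.0 / np.tan(0.6407 * theta)
    # contour z(theta) and its derivative (Weideman's parameters)
    z = (N / t) * (-0.6122 + 0.5017 * theta * cot + 0.2645j * theta)
    dz = (N / t) * (0.5017 * cot
                    - 0.5017 * 0.6407 * theta / np.sin(0.6407 * theta) ** 2
                    + 0.2645j)
    # u(t) = (1/2*pi*i) * contour integral of e^{zt} F(z) dz;
    # conjugate symmetry halves the work for real-valued u.
    return np.sum(np.imag(np.exp(z * t) * F(z) * dz)) / N

# sanity check: F(z) = 1/(z+1) has inverse transform e^{-t}
print(talbot_laplace_inversion(lambda z: 1.0 / (z + 1.0), t=1.0))  # ~0.3679
```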

The simultaneous estimation of many parameters based on data collected from corresponding studies is a key research problem that has received renewed attention in the high-dimensional setting. Many practical situations involve heterogeneous data where heterogeneity is captured by a nuisance parameter. Effectively pooling information across samples while correctly accounting for heterogeneity presents a significant challenge in large-scale estimation problems. We address this issue by introducing the "Nonparametric Empirical Bayes Structural Tweedie" (NEST) estimator, which efficiently estimates the unknown effect sizes and properly adjusts for heterogeneity via a generalized version of Tweedie's formula. For the normal means problem, NEST simultaneously handles the two main selection biases introduced by heterogeneity: first, selection bias in the mean, which cannot be effectively corrected without also correcting for, second, selection bias in the variance. Our theoretical results show that NEST has strong asymptotic properties without requiring explicit assumptions about the prior. Extensions to other two-parameter members of the exponential family are discussed. Simulation studies show that NEST outperforms competing methods, with substantial efficiency gains in many settings. The proposed method is demonstrated by estimating the batting averages of baseball players and the Sharpe ratios of mutual fund returns.
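To fix ideas, the classical homoscedastic Tweedie formula for normal means reads $\mathbb{E}[\theta \mid x] = x + \sigma^2 \, \frac{d}{dx} \log f(x)$, where $f$ is the marginal density of the observations. The sketch below implements this base case with a kernel density estimate of $f$; NEST generalizes it to heterogeneous variances, which this sketch does not attempt.

```python
import numpy as np
from scipy.stats import gaussian_kde

def tweedie_estimates(x, sigma, bandwidth=None):
    """Classical Tweedie estimate for homoscedastic normal means:
        E[theta | x] = x + sigma^2 * d/dx log f(x),
    with the marginal density f estimated by a Gaussian KDE and its
    log-derivative approximated by a centred finite difference.
    """
    kde = gaussian_kde(x, bw_method=bandwidth)
    eps = 1e-4 * (x.max() - x.min())
    score = (np.log(kde(x + eps)) - np.log(kde(x - eps))) / (2 * eps)
    return x + sigma**2 * score
```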

In this paper, we consider a boundary value problem (BVP) for a fourth-order nonlinear functional integro-differential equation. We establish the existence and uniqueness of a solution and construct a numerical method for solving the problem. We prove that the method is of second-order accuracy and obtain an estimate for the total error. Several examples demonstrate the validity of the theoretical results and the efficiency of the numerical method.

Given a real-valued hypothesis class $\mathcal{H}$, we investigate under what conditions there is a differentially private algorithm which learns an optimal hypothesis from $\mathcal{H}$ given i.i.d. data. Inspired by recent results for the related setting of binary classification (Alon et al., 2019; Bun et al., 2020), where it was shown that online learnability of a binary class is necessary and sufficient for its private learnability, Jung et al. (2020) showed that in the setting of regression, online learnability of $\mathcal{H}$ is necessary for private learnability. Here online learnability of $\mathcal{H}$ is characterized by the finiteness of its $\eta$-sequential fat shattering dimension, ${\rm sfat}_\eta(\mathcal{H})$, for all $\eta > 0$. In terms of sufficient conditions for private learnability, Jung et al. (2020) showed that $\mathcal{H}$ is privately learnable if $\lim_{\eta \downarrow 0} {\rm sfat}_\eta(\mathcal{H})$ is finite, which is a fairly restrictive condition. We show that under the relaxed condition $\liminf_{\eta \downarrow 0} \eta \cdot {\rm sfat}_\eta(\mathcal{H}) = 0$, $\mathcal{H}$ is privately learnable, establishing the first nonparametric private learnability guarantee for classes $\mathcal{H}$ with ${\rm sfat}_\eta(\mathcal{H})$ diverging as $\eta \downarrow 0$. Our techniques involve a novel filtering procedure to output stable hypotheses for nonparametric function classes.

This paper considers a novel multi-agent linear stochastic approximation algorithm driven by Markovian noise and general consensus-type interaction, in which each agent evolves according to its local stochastic approximation process depending on information from its neighbors. The interconnection structure among the agents is described by a time-varying directed graph. While the convergence of consensus-based stochastic approximation algorithms has been studied when the interconnection among the agents is described by doubly stochastic matrices (at least in expectation), less is known about the case when the interconnection matrix is simply stochastic. For any uniformly strongly connected graph sequences whose associated interaction matrices are stochastic, the paper derives finite-time bounds on the mean-square error, defined as the deviation of the output of the algorithm from the unique equilibrium point of the associated ordinary differential equation. When the interconnection matrices are stochastic, the equilibrium point can be any unspecified convex combination of the local equilibria of all the agents in the absence of communication. Both constant and time-varying step sizes are considered. In the case when the convex combination is required to be a straight average and interaction between any pair of neighboring agents may be unidirectional, so that doubly stochastic matrices cannot be implemented in a distributed manner, the paper proposes a push-sum-type distributed stochastic approximation algorithm. A finite-time bound for this algorithm in the time-varying step-size case is obtained by leveraging the analysis of the consensus-type algorithm with stochastic matrices and by developing novel properties of the push-sum algorithm.
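The basic iteration has a simple shape, sketched below with a row-stochastic (not doubly stochastic) mixing matrix and i.i.d. noise standing in for the Markovian noise analysed in the paper; `W`, `A` and `b` are illustrative placeholders, with `A` assumed Hurwitz so the underlying ODE has a stable equilibrium.

```python
import numpy as np

def consensus_linear_sa(W, A, b, x0, steps, alpha0, noise_std=0.1, rng=None):
    """Consensus-type linear stochastic approximation:
        x_i(k+1) = sum_j W_ij(k) x_j(k) + alpha_k * (A x_i(k) + b + noise).
    W(k) returns the row-stochastic interaction matrix at time k.
    """
    rng = rng or np.random.default_rng()
    X = np.array(x0, dtype=float)          # shape (n_agents, d)
    for k in range(steps):
        alpha_k = alpha0 / (k + 1)         # diminishing step size
        noise = noise_std * rng.standard_normal(X.shape)
        X = W(k) @ X + alpha_k * (X @ A.T + b + noise)
    return X
```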

We analyze the convergence of the harmonic balance method for computing isolated periodic solutions of a large class of continuously differentiable Hilbert-space-valued differential-algebraic equations (DAEs). We establish asymptotic convergence estimates for (i) the approximate periodic solution in terms of the number of approximated harmonics and (ii) the inexact Newton method used to compute the approximate Fourier coefficients. The convergence estimates are determined by the rate of convergence of the Fourier series of the exact solution and the structure of the DAE. Both the case of a known period and that of an unknown period are analyzed; in the latter case an appropriately defined phase condition must be enforced. The theoretical results are illustrated with several numerical experiments from circuit modeling and structural dynamics.
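For intuition, the sketch below applies harmonic balance to the forced Duffing oscillator, a simple ODE stand-in for the DAE setting: the periodic solution is sought as a truncated Fourier series with $H$ harmonics, the residual is collocated at equispaced points over one (known) period, and a Newton-type solver (here scipy's `fsolve`, playing the role of the inexact Newton method) drives it to zero.

```python
import numpy as np
from scipy.optimize import fsolve

def harmonic_balance_duffing(alpha, beta, delta, gamma, omega, H=7):
    """Harmonic balance for x'' + delta x' + alpha x + beta x^3
    = gamma cos(omega t).  Returns the Fourier coefficients
    [a0, a_1..a_H, b_1..b_H] of the approximate periodic solution."""
    M = 2 * H + 1                           # collocation points = unknowns
    t = 2 * np.pi / omega * np.arange(M) / M
    k = np.arange(1, H + 1)
    C = np.cos(np.outer(t, k) * omega)      # cos(k w t) basis values
    S = np.sin(np.outer(t, k) * omega)      # sin(k w t) basis values

    def residual(c):
        a0, a, b = c[0], c[1:H + 1], c[H + 1:]
        x = a0 + C @ a + S @ b
        dx = (-S * (k * omega)) @ a + (C * (k * omega)) @ b
        ddx = (-C * (k * omega) ** 2) @ a + (-S * (k * omega) ** 2) @ b
        return ddx + delta * dx + alpha * x + beta * x**3 \
            - gamma * np.cos(omega * t)

    c0 = np.zeros(M)
    c0[1] = gamma                            # crude initial guess: x ~ gamma cos
    return fsolve(residual, c0)
```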

We propose a numerical method based on physics-informed Random Projection Neural Networks for the solution of Initial Value Problems (IVPs) of Ordinary Differential Equations (ODEs), with a focus on stiff problems. We employ an Extreme Learning Machine with a single hidden layer of radial basis functions whose widths are uniformly distributed random variables, while the weights between the input and the hidden layer are set equal to one. The numerical solution of the IVPs is obtained by constructing a system of nonlinear algebraic equations, which is solved with respect to the output weights by the Gauss-Newton method, using a simple adaptive scheme for adjusting the time interval of integration. To assess its performance, we apply the proposed method to four benchmark stiff IVPs, namely the Prothero-Robinson, van der Pol, ROBER and HIRES problems. Our method is compared with an adaptive Runge-Kutta method based on the Dormand-Prince pair and a variable-step, variable-order multistep solver based on numerical differentiation formulas, as implemented in the \texttt{ode45} and \texttt{ode15s} MATLAB functions, respectively. We show that the proposed scheme yields good approximation accuracy, outperforming \texttt{ode45} and \texttt{ode15s}, especially in cases where steep gradients arise. Furthermore, the computational times of our approach are comparable with those of the two MATLAB solvers for practical purposes.
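The following sketch shows the core mechanism for a scalar IVP on a single time interval: random (then frozen) RBF widths, and output weights fitted by Gauss-Newton on the collocation residuals. The names, width distribution, and initial-condition handling are illustrative, not the paper's exact scheme, and `f` is assumed vectorized over arrays.

```python
import numpy as np
from scipy.optimize import least_squares

def rpnn_ode_solve(f, u0, t_span, n_hidden=40, n_colloc=80, rng=None):
    """Random-projection network for the scalar IVP u' = f(t, u),
    u(t0) = u0.  Only the output weights w are trained, by
    Gauss-Newton (scipy's least_squares) on collocation residuals."""
    rng = rng or np.random.default_rng(0)
    t0, t1 = t_span
    t = np.linspace(t0, t1, n_colloc)
    centres = np.linspace(t0, t1, n_hidden)                 # fixed centres
    widths = rng.uniform(1.0, 10.0 / (t1 - t0), n_hidden)   # random widths

    def basis(tt):
        z = (tt[:, None] - centres) * widths
        phi = np.exp(-z**2)                      # RBF activations
        dphi = -2 * z * widths * np.exp(-z**2)   # time derivative
        return phi, dphi

    phi, dphi = basis(t)

    def residual(w):
        u = u0 + (t - t0) * (phi @ w)            # hard-wires u(t0) = u0
        du = phi @ w + (t - t0) * (dphi @ w)
        return du - f(t, u)

    w = least_squares(residual, np.zeros(n_hidden), method="lm").x
    return lambda tt: u0 + (np.atleast_1d(tt) - t0) * (basis(np.atleast_1d(tt))[0] @ w)
```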

We introduce a new family of deep neural network models. Instead of specifying a discrete sequence of hidden layers, we parameterize the derivative of the hidden state using a neural network. The output of the network is computed using a black-box differential equation solver. These continuous-depth models have constant memory cost, adapt their evaluation strategy to each input, and can explicitly trade numerical precision for speed. We demonstrate these properties in continuous-depth residual networks and continuous-time latent variable models. We also construct continuous normalizing flows, a generative model that can be trained by maximum likelihood, without partitioning or ordering the data dimensions. For training, we show how to scalably backpropagate through any ODE solver, without access to its internal operations. This allows end-to-end training of ODEs within larger models.
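A minimal forward pass makes the idea concrete: the hidden state evolves under an MLP-parameterized vector field and is read off at the final time by an off-the-shelf adaptive solver. The sketch below omits training via the adjoint method, and all sizes and initializations are illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp

class NeuralODE:
    """Forward pass of a neural ODE: dh/dt = f(h, t; theta), with f
    a small tanh MLP; a black-box adaptive solver replaces a fixed
    stack of discrete layers."""
    def __init__(self, dim, hidden=16, rng=None):
        rng = rng or np.random.default_rng(0)
        self.W1 = rng.standard_normal((hidden, dim + 1)) / np.sqrt(dim + 1)
        self.W2 = rng.standard_normal((dim, hidden)) / np.sqrt(hidden)

    def f(self, t, h):
        z = np.concatenate([h, [t]])        # time-dependent dynamics
        return self.W2 @ np.tanh(self.W1 @ z)

    def forward(self, h0):
        sol = solve_ivp(self.f, (0.0, 1.0), h0, rtol=1e-6, atol=1e-8)
        return sol.y[:, -1]                 # hidden state at t = 1

model = NeuralODE(dim=2)
print(model.forward(np.array([1.0, -1.0])))
```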
