
A class of implicit Milstein type methods is introduced and analyzed in the present article for stochastic differential equations (SDEs) with non-globally Lipschitz drift and diffusion coefficients. By incorporating a pair of method parameters $\theta, \eta \in [0, 1]$ into both the drift and diffusion parts, the new schemes form a family of drift-diffusion double implicit methods. Within a general framework, we offer upper mean-square error bounds for the proposed schemes, based on certain error terms that involve only the exact solution processes. Such error bounds make it easy to analyze the mean-square convergence rates of the schemes without relying on a priori high-order moment estimates of the numerical approximations. Under an additional globally polynomial growth condition, we recover the expected mean-square convergence rate of order one for the considered schemes with $\theta \in [\tfrac12, 1], \eta \in [0, 1]$. Moreover, some of the proposed schemes are applied to three SDE models evolving in the positive domain $(0, \infty)$. More specifically, the particular drift-diffusion implicit Milstein method ($\theta = \eta = 1$) is used to approximate the Heston $\tfrac32$-volatility model and the stochastic Lotka-Volterra competition model, and the semi-implicit Milstein method ($\theta = 1, \eta = 0$) is used to solve the Ait-Sahalia interest rate model. Thanks to the previously obtained error bounds, we establish the optimal mean-square convergence rate of these positivity-preserving schemes under conditions more relaxed than those in existing relevant results in the literature. Numerical examples are also reported to confirm these findings.
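To make the construction concrete, here is a minimal Python sketch of one common form of such a drift-diffusion double implicit Milstein scheme for a scalar SDE $dX_t = f(X_t)\,dt + g(X_t)\,dW_t$, where $\theta$ weights the implicitness of the drift and $\eta$ that of the deterministic part of the Milstein correction. This is an assumed formulation; the paper's exact scheme may differ, and the fixed-point treatment of the implicit equation is only one possible choice.

```python
import numpy as np

def theta_eta_milstein(f, g, dg, x0, T, N, theta=1.0, eta=0.0,
                       fp_iters=50, tol=1e-12, rng=None):
    """Sketch of a theta-eta double implicit Milstein scheme (assumed
    form).  f, g: drift and diffusion; dg: derivative of g.  The
    implicit equation is solved by simple fixed-point iteration."""
    rng = np.random.default_rng() if rng is None else rng
    h = T / N
    x = np.empty(N + 1)
    x[0] = x0
    for n in range(N):
        dW = np.sqrt(h) * rng.standard_normal()
        xn = x[n]
        # explicit part of the update
        expl = (xn + (1 - theta) * f(xn) * h + g(xn) * dW
                + 0.5 * dg(xn) * g(xn) * dW**2
                - (1 - eta) * 0.5 * dg(xn) * g(xn) * h)
        # fixed-point iteration for the implicit part
        y = xn
        for _ in range(fp_iters):
            y_new = expl + theta * f(y) * h - eta * 0.5 * dg(y) * g(y) * h
            if abs(y_new - y) < tol:
                y = y_new
                break
            y = y_new
        x[n + 1] = y
    return x

# toy usage: geometric Brownian motion dX = mu X dt + sigma X dW
mu, sigma = 0.05, 0.2
path = theta_eta_milstein(lambda x: mu * x, lambda x: sigma * x,
                          lambda x: sigma, x0=1.0, T=1.0, N=200,
                          theta=1.0, eta=1.0)
```

Setting $\theta = \eta = 0$ recovers the classical explicit Milstein scheme, so the two parameters interpolate continuously between the explicit and the fully double implicit method.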

Related content

The ACM/IEEE 23rd International Conference on Model Driven Engineering Languages and Systems (MODELS) is the premier conference series for model-driven software and systems engineering, organized with the support of ACM SIGSOFT and IEEE TCSE. Since 1998, MODELS has covered all aspects of modeling, from languages and methods to tools and applications. Its participants come from diverse backgrounds, including researchers, academics, engineers, and industry professionals. MODELS 2019 is a forum in which participants can exchange cutting-edge research results and innovative practical experience around modeling and model-driven software and systems. This year's edition will provide the modeling community with opportunities to further advance the foundations of modeling and to propose innovative applications of modeling in emerging areas such as cyber-physical systems, embedded systems, socio-technical systems, cloud computing, big data, machine learning, security, open source, and sustainability.
May 9, 2023

Ensuring safety of reinforcement learning (RL) algorithms is crucial to unlock their potential for many real-world tasks. However, vanilla RL does not guarantee safety. In recent years, several methods have been proposed to provide safety guarantees for RL by design. Yet, there is no comprehensive comparison of these provably safe RL methods. We therefore introduce a categorization of existing provably safe RL methods, present the theoretical foundations for both continuous and discrete action spaces, and benchmark the methods' performance empirically. The methods are categorized based on how the action is adapted by the safety method: action replacement, action projection, and action masking. Our experiments on an inverted pendulum and quadrotor stabilization task show that all provably safe methods are indeed always safe. Furthermore, their trained performance is comparable to unsafe baselines. The benchmarking suggests that different provably safe RL approaches should be selected depending on safety specifications, RL algorithms, and type of action space.
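As a toy illustration of two of these categories, the sketch below (our own minimal example, not taken from the benchmark; all names are hypothetical) contrasts action projection onto a box-shaped safe set with action replacement via a verified fallback. Real provably safe RL methods derive the safe set and the safety check from the system dynamics, e.g. via reachability analysis.

```python
import numpy as np

def project_action(action, a_low, a_high):
    """Action projection onto a box-shaped safe set: return the closest
    safe action in the Euclidean sense (for a box this is clipping)."""
    return np.clip(action, a_low, a_high)

def replace_action(action, safe_fallback, is_safe):
    """Action replacement: keep the agent's action if verified safe,
    otherwise substitute a known-safe fallback action."""
    return action if is_safe(action) else safe_fallback

# toy usage
a = np.array([1.7, -0.3])
print(project_action(a, -1.0, 1.0))                       # -> [ 1.  -0.3]
print(replace_action(0.9, 0.0, lambda x: abs(x) <= 0.5))  # -> 0.0
```

Action masking, the third category, instead restricts the set of actions the policy may sample from before an action is ever chosen.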

Overparameterized models may have many interpolating solutions; implicit regularization refers to the hidden preference of a particular optimization method for a certain interpolating solution among the many. An established line of work has shown that (stochastic) gradient descent tends to have an implicit bias towards low-rank and/or sparse solutions when used to train deep linear networks, explaining to some extent why overparameterized neural network models trained by gradient descent tend to generalize well in practice. However, existing theory for square-loss objectives often requires very small initialization of the trainable weights, which is at odds with the larger scale at which weights are initialized in practice for faster convergence and better generalization. In this paper, we aim to close this gap by incorporating and analyzing gradient descent with weight normalization, where the weight vector is reparametrized in terms of polar coordinates and gradient descent is applied to the polar coordinates. By analyzing key invariants of the gradient flow and using Lojasiewicz's theorem, we show that weight normalization also has an implicit bias towards sparse solutions in the diagonal linear model, but that, in contrast to plain gradient descent, weight normalization yields a robust bias that persists even if the weights are initialized at practically large scale. Experiments suggest that both the convergence speed and the robustness of the implicit bias improve dramatically when weight normalization is used in overparameterized diagonal linear network models.
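The following toy sketch (our own; all names are hypothetical) runs gradient descent under the weight-normalized reparametrization $w = g\,v/\|v\|_2$ on an underdetermined least-squares problem, illustrating the mechanics of differentiating through the polar-style reparametrization. It does not reproduce the paper's diagonal linear model or its theory.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 50, 100
X = rng.standard_normal((n, d))
w_star = np.zeros(d); w_star[:3] = [2.0, -1.5, 1.0]   # sparse ground truth
y = X @ w_star

# weight normalization: w = g * v / ||v||, gradient descent on (g, v)
v = rng.standard_normal(d)          # large-scale initialization
g = 1.0
lr = 1e-2
for _ in range(5000):
    norm_v = np.linalg.norm(v)
    w = g * v / norm_v
    grad_w = X.T @ (X @ w - y) / n  # gradient wrt effective weights
    # chain rule through the reparametrization
    grad_g = grad_w @ v / norm_v
    grad_v = (g / norm_v) * (grad_w - (grad_w @ v) * v / norm_v**2)
    g -= lr * grad_g
    v -= lr * grad_v

w = g * v / np.linalg.norm(v)
print("final training loss:", np.mean((X @ w - y) ** 2))
```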

Let a polytope $P$ be defined by a system $A x \leq b$. We consider the problem of counting the number of integer points inside $P$, assuming that $P$ is $\Delta$-modular, where a polytope is called $\Delta$-modular if all rank-size sub-determinants of $A$ are bounded by $\Delta$ in absolute value. We present a new FPT-algorithm, parameterized by $\Delta$ and by the maximal number of vertices of $P$, where the maximum is taken over all right-hand-side vectors $b$. We show that our algorithm is more efficient for $\Delta$-modular problems than the approach of A. Barvinok et al. To this end, we do not directly compute the short rational generating function for $P \cap \mathbb{Z}^n$, which is commonly used for the considered problem. Instead, we use the dynamic programming principle to compute its particular representation in the form of an exponential series that depends on a single variable. We do not rely on Barvinok's unimodular sign-decomposition technique at all. Using our new complexity bound, we consider different special cases that may be of independent interest. For example, we give FPT-algorithms for counting the number of integer points in $\Delta$-modular simplices and similar polytopes that have $n + O(1)$ facets. As a special case, for any fixed $m$, we give an FPT-algorithm to count the solutions of the unbounded $m$-dimensional $\Delta$-modular subset-sum problem.
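For intuition about the parameter, here is a brute-force Python check of the $\Delta$-modularity property itself. It is exponential in the matrix size and purely illustrative; the paper's counting algorithms never enumerate sub-determinants this way.

```python
import numpy as np
from itertools import combinations

def is_delta_modular(A, delta):
    """Check that all rank-size sub-determinants of A are bounded by
    delta in absolute value (brute force, small toy matrices only)."""
    A = np.asarray(A, dtype=float)
    r = np.linalg.matrix_rank(A)
    m, n = A.shape
    for rows in combinations(range(m), r):
        for cols in combinations(range(n), r):
            sub = A[np.ix_(rows, cols)]
            if abs(round(np.linalg.det(sub))) > delta:
                return False
    return True

# toy usage: a totally unimodular matrix is 1-modular
A = np.array([[1, -1, 0], [0, 1, -1]])
print(is_delta_modular(A, 1))   # True
```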

We show that the sparsified block elimination algorithm for solving undirected Laplacian linear systems from [Kyng-Lee-Peng-Sachdeva-Spielman STOC'16] directly works for directed Laplacians. Given access to a sparsification algorithm that, on graphs with $n$ vertices and $m$ edges, takes time $\mathcal{T}_{\rm S}(m)$ to output a sparsifier with $\mathcal{N}_{\rm S}(n)$ edges, our algorithm solves a directed Eulerian system on $n$ vertices and $m$ edges to $\epsilon$ relative accuracy in time $$ O(\mathcal{T}_{\rm S}(m) + {\mathcal{N}_{\rm S}(n)\log {n}\log(n/\epsilon)}) + \tilde{O}(\mathcal{T}_{\rm S}(\mathcal{N}_{\rm S}(n)) \log n), $$ where the $\tilde{O}(\cdot)$ notation hides $\log\log(n)$ factors. By previous results, this implies improved runtimes for linear systems in strongly connected directed graphs, PageRank matrices, and asymmetric M-matrices. When combined with slower constructions of smaller Eulerian sparsifiers based on short cycle decompositions, it also gives a solver that runs in $O(n \log^{5}n \log(n / \epsilon))$ time after $O(n^2 \log^{O(1)} n)$ pre-processing. At the core of our analyses are constructions of augmented matrices whose Schur complements encode error matrices.
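As background, the sketch below (our own toy code, using one common convention from this literature) builds the directed Laplacian $L = D_{\rm out} - A^{\top}$ and checks the Eulerian property: column sums of $L$ vanish by construction, while row sums vanish exactly when every vertex has equal weighted in- and out-degree.

```python
import numpy as np

def directed_laplacian(A):
    """L = D_out - A^T for a weighted adjacency matrix A, where
    A[i, j] is the weight of the edge i -> j (one common convention)."""
    d_out = A.sum(axis=1)
    return np.diag(d_out) - A.T

def is_eulerian(A, tol=1e-12):
    """Eulerian: weighted in-degree equals out-degree at every vertex,
    i.e. the row sums of L vanish (column sums are always zero)."""
    L = directed_laplacian(A)
    return np.abs(L.sum(axis=1)).max() < tol

# toy usage: a directed 3-cycle is Eulerian
A = np.array([[0, 1, 0], [0, 0, 1], [1, 0, 0]], dtype=float)
print(is_eulerian(A))   # True
```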

Motivated by applications in distributed storage, distributed computing, and homomorphic secret sharing, we study communication-efficient schemes for computing linear combinations of coded symbols. Specifically, we design low-bandwidth schemes that evaluate the weighted sum of $\ell$ coded symbols in a codeword $\pmb{c}\in\mathbb{F}^n$, when we are given access to $d$ of the remaining components of $\pmb{c}$. Formally, suppose that $\mathbb{F}$ is a field extension of $\mathbb{B}$ of degree $t$, and let $\pmb{c}$ be a codeword in a Reed-Solomon code of dimension $k$. In this paper, for some $s<t$, we provide an explicit scheme that computes the weighted sum by downloading $d(t-s)$ sub-symbols in $\mathbb{B}$ from $d$ available nodes, whenever $d\geq \ell|\mathbb{B}|^s-\ell+k$. In many cases, our scheme outperforms previous schemes in the literature. Furthermore, we provide a characterization of evaluation schemes for general linear codes. In the special case of Reed-Solomon codes, we use this characterization to derive a lower bound on the evaluation bandwidth.
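For contrast with the low-bandwidth scheme, the following toy Python baseline (our own; it works over a prime field and ignores the extension-field sub-symbol structure entirely) computes the weighted sum naively: download $k$ full symbols, Lagrange-interpolate the message polynomial, and evaluate it at the erased positions.

```python
# Naive baseline over a prime field GF(p): interpolate from k full
# symbols, then evaluate the weighted sum at the erased positions.
p = 97

def lagrange_eval(xs, ys, x):
    """Evaluate the unique degree < len(xs) interpolant at x, mod p."""
    total = 0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        num, den = 1, 1
        for j, xj in enumerate(xs):
            if j != i:
                num = num * (x - xj) % p
                den = den * (xi - xj) % p
        total = (total + yi * num * pow(den, p - 2, p)) % p
    return total

# Reed-Solomon codeword: c_i = m(alpha_i) with deg(m) < k
k, n = 3, 8
alphas = list(range(1, n + 1))
m = [5, 2, 7]                                   # message polynomial coeffs
c = [sum(mj * pow(a, j, p) for j, mj in enumerate(m)) % p for a in alphas]

erased, weights = [0, 1], [3, 4]                # want 3*c_0 + 4*c_1
avail = [i for i in range(n) if i not in erased][:k]   # any k helpers
xs, ys = [alphas[i] for i in avail], [c[i] for i in avail]
wsum = sum(w * lagrange_eval(xs, ys, alphas[i])
           for w, i in zip(weights, erased)) % p
assert wsum == (3 * c[0] + 4 * c[1]) % p
```

Measured in sub-symbols of $\mathbb{B}$, this baseline costs $k\,t$ downloads, whereas the scheme in the paper achieves $d(t-s)$ under the stated condition on $d$.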

Profile likelihoods are rarely used in geostatistical models due to the computational burden imposed by repeated decompositions of large variance matrices. Accounting for uncertainty in covariance parameters can be highly consequential in geostatistical models, as some covariance parameters are poorly identified; the problem is severe enough that the differentiability parameter of the Mat\'ern correlation function is typically treated as fixed. The problem is compounded in anisotropic spatial models, which involve two additional parameters. In this paper, we make the following contributions: (1) we develop a methodology for profile likelihoods in Gaussian spatial models with the Mat\'ern family of correlation functions, including anisotropic models; it adopts a novel reparametrization for the generation of representative points and uses GPUs for parallel computation of profile likelihoods in the software implementation; (2) we show that the profile likelihood of the Mat\'ern shape parameter is often quite flat but still identifiable: it can usually rule out very small values; (3) simulation studies and applications to real data show that profile-based confidence intervals for covariance and regression parameters have superior coverage compared with traditional Wald-type confidence intervals.
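As a simplified illustration of profiling over the Mat\'ern shape parameter, consider the sketch below (our own: zero mean, known range, variance profiled out analytically; it has none of the paper's reparametrization or GPU parallelism).

```python
import numpy as np
from scipy.special import kv, gamma

def matern_corr(D, nu, rho):
    """Matern correlation matrix for a distance matrix D."""
    R = np.ones_like(D)
    pos = D > 0
    z = np.sqrt(2 * nu) * D[pos] / rho
    R[pos] = 2 ** (1 - nu) / gamma(nu) * z ** nu * kv(nu, z)
    return R

def profile_loglik(nu, y, D, rho):
    """Gaussian log-likelihood with the variance profiled out
    analytically (zero mean, fixed range rho, for brevity)."""
    n = len(y)
    R = matern_corr(D, nu, rho) + 1e-10 * np.eye(n)   # jitter
    L = np.linalg.cholesky(R)
    alpha = np.linalg.solve(L, y)
    sigma2_hat = alpha @ alpha / n                    # profiled variance MLE
    logdet = 2 * np.sum(np.log(np.diag(L)))
    return -0.5 * (n * np.log(sigma2_hat) + logdet + n)

# toy usage: profile over the shape parameter nu on a grid
rng = np.random.default_rng(1)
s = rng.uniform(0, 1, (60, 2))                        # spatial locations
D = np.linalg.norm(s[:, None] - s[None, :], axis=-1)
R0 = matern_corr(D, 1.5, 0.3) + 1e-10 * np.eye(60)
y = np.linalg.cholesky(R0) @ rng.standard_normal(60)
nus = np.linspace(0.3, 3.0, 10)
prof = [profile_loglik(nu, y, D, rho=0.3) for nu in nus]
print("profile maximizer:", nus[int(np.argmax(prof))])
```

Even this toy version shows the cost issue the paper addresses: every grid value of $\nu$ requires a fresh Cholesky decomposition of the correlation matrix.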

It is known that standard stochastic Galerkin methods encounter challenges when solving partial differential equations with high dimensional random inputs, which are typically caused by the large number of stochastic basis functions required. It becomes crucial to properly choose effective basis functions, such that the dimension of the stochastic approximation space can be reduced. In this work, we focus on the stochastic Galerkin approximation associated with generalized polynomial chaos (gPC), and explore the gPC expansion based on the analysis of variance (ANOVA) decomposition. A concise form of the gPC expansion is presented for each component function of the ANOVA expansion, and an adaptive ANOVA procedure is proposed to construct the overall stochastic Galerkin system. Numerical results demonstrate the efficiency of our proposed adaptive ANOVA stochastic Galerkin method.
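The sketch below (our own simplified stand-in, not the paper's algorithm) conveys the flavor of adaptive ANOVA screening: each first-order component is expanded in a short 1D Hermite gPC series, and its variance contribution decides whether the corresponding dimension should enter the approximation space.

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval

def first_order_scores(f, dim, order=4):
    """Anchored-ANOVA-style screening: for each coordinate j, expand
    f(0,...,x_j,...,0) in probabilists' Hermite polynomials and score
    the dimension by the variance of the non-constant part."""
    xq, wq = hermegauss(2 * order)       # Gauss quadrature, weight e^{-x^2/2}
    wq = wq / np.sqrt(2 * np.pi)         # normalize to the N(0,1) density
    scores = np.zeros(dim)
    for j in range(dim):
        pts = np.zeros((len(xq), dim))
        pts[:, j] = xq
        fvals = np.array([f(p) for p in pts])
        for n_deg in range(1, order + 1):
            He_n = hermeval(xq, np.eye(order + 1)[n_deg])
            c_n = np.sum(wq * fvals * He_n) / math.factorial(n_deg)
            scores[j] += c_n ** 2 * math.factorial(n_deg)  # Var contribution
    return scores

# toy usage: only x0 and x2 are active
f = lambda x: x[0] ** 2 + 0.1 * x[2]
scores = first_order_scores(f, dim=5)
active = np.where(scores > 1e-3 * scores.sum())[0]
print(active)    # -> [0 2]
```

An adaptive procedure of this kind keeps the stochastic Galerkin basis restricted to the dimensions (and, at the next level, dimension pairs) that actually contribute variance.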

A modification of Newton's method for solving systems of $n$ nonlinear equations is presented. The new matrix-free method relies on a given decomposition of the invertible Jacobian of the residual into invertible sparse local Jacobians according to the chain rule of differentiation. It is motivated in the context of local Jacobians with bandwidth $2m+1$ for $m\ll n$, where a reduction of the computational cost by a factor of $\mathcal{O}(\frac{n}{m})$ can be observed. Supporting run-time measurements are presented for the tridiagonal case, showing a reduction of the computational cost by a factor of $\mathcal{O}(n)$. Generalization yields the combinatorial Matrix-Free Newton Step problem. We prove its NP-completeness, and we present algorithmic components for building methods for its approximate solution. Inspired by adjoint Algorithmic Differentiation, the new method shares several challenges with the latter, including the DAG Reversal problem. Further challenges are due to combinatorial problems in sparse linear algebra, such as Bandwidth or Directed Elimination Ordering.
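To see where the savings come from in the banded setting, the following sketch (our own; it illustrates the cost structure rather than the paper's chain-rule decomposition) runs Newton's method on a system with a tridiagonal Jacobian, so each Newton step costs $\mathcal{O}(n)$ via a banded solve instead of $\mathcal{O}(n^3)$ via a dense one.

```python
import numpy as np
from scipy.linalg import solve_banded

def newton_tridiagonal(F, band, x0, tol=1e-10, max_iter=50):
    """Newton's method with a tridiagonal Jacobian supplied in
    scipy's banded storage; each step is an O(n) banded solve."""
    x = x0.copy()
    for _ in range(max_iter):
        r = F(x)
        if np.linalg.norm(r) < tol:
            break
        x -= solve_banded((1, 1), band(x), r)
    return x

# toy usage: discretized u'' = u^3 on [0, 1] with u(0) = 0, u(1) = 1
n = 1000
h = 1.0 / (n + 1)
def F(u):
    up = np.concatenate(([0.0], u, [1.0]))       # attach boundary values
    return (up[:-2] - 2 * u + up[2:]) / h**2 - u ** 3
def band(u):
    ab = np.empty((3, n))                        # banded Jacobian storage
    ab[0, :] = 1 / h**2; ab[0, 0] = 0.0          # superdiagonal
    ab[1, :] = -2 / h**2 - 3 * u ** 2            # main diagonal
    ab[2, :] = 1 / h**2; ab[2, -1] = 0.0         # subdiagonal
    return ab
u = newton_tridiagonal(F, band, np.linspace(h, 1 - h, n))
print("residual norm:", np.linalg.norm(F(u)))
```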

In 1971, Peter Andrews proposed the problem of finding an analog of the Skolem theorem for Simple Type Theory. A first idea led to a naive rule that worked only for Simple Type Theory with the axiom of choice; the general case was only solved, more than ten years later, by Dale Miller. More recently, together with Th{\'e}r{\`e}se Hardin and Claude Kirchner, we have proposed a new way to prove analogs of Miller's theorem for different, but equivalent, formulations of Simple Type Theory. In this paper, which does not contain new technical results, I try to show that the history of the Skolemization problem and of its various solutions illustrates a tension between two points of view on Simple Type Theory: the logical and the theoretical points of view.

Training a neural network (NN) typically relies on some type of curve-following method, such as gradient descent (GD) (and stochastic gradient descent (SGD)), ADADELTA, ADAM, or limited-memory algorithms. Convergence for these algorithms usually requires access to a large quantity of observations to achieve a high level of accuracy and, for certain classes of functions, these algorithms may take multiple epochs over the data to converge. Herein, a different technique with the potential for dramatically faster convergence, especially for shallow networks, is explored: it does not follow a curve but instead relies on 'decoupling' hidden layers and on updating their weighted connections through bootstrapping, resampling, and linear regression. By utilizing resampled observations, the convergence of this process is empirically shown to be remarkably fast and to require fewer data points: in particular, our experiments show that one needs only a fraction of the observations required by traditional neural network training methods to approximate various classes of functions.
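A minimal sketch in the spirit of this idea follows (our own simplification: only the output weights are 'decoupled' and refit, whereas the paper's procedure also updates hidden connections). It fixes a random hidden layer and estimates the output weights by linear regression on bootstrap resamples, with no gradient steps at all.

```python
import numpy as np

rng = np.random.default_rng(0)

# target function and data
f = lambda x: np.sin(3 * x) + 0.5 * x
X = rng.uniform(-2, 2, (200, 1))
y = f(X[:, 0])

# fixed random hidden layer; only output weights are learned
H = 64
W1 = rng.standard_normal((1, H))
b1 = rng.standard_normal(H)
Phi = np.tanh(X @ W1 + b1)                    # hidden activations

# fit output weights by linear regression on bootstrap resamples
w_out = np.zeros(H)
B = 20                                        # bootstrap rounds
for _ in range(B):
    idx = rng.integers(0, len(X), len(X))     # resample with replacement
    w_b, *_ = np.linalg.lstsq(Phi[idx], y[idx], rcond=None)
    w_out += w_b / B

x_test = np.linspace(-2, 2, 5)[:, None]
pred = np.tanh(x_test @ W1 + b1) @ w_out
print("max abs error on test points:", np.max(np.abs(pred - f(x_test[:, 0]))))
```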
