
We correct two errors in our paper [4]. The first error concerns the definition of the SVI solution, where a boundary term arising from the Dirichlet boundary condition was not included. The second error concerns the discrete estimate [4, Lemma 4.4], which involves the discrete Laplace operator. We provide an alternative proof of the estimate in spatial dimension $d=1$ by using a mass-lumped version of the discrete Laplacian. Hence, after a minor modification of the fully discrete numerical scheme, the convergence in $d=1$ follows along the lines of the original proof. The convergence proof of the time semi-discrete scheme, which relies on the continuous counterpart of the estimate [4, Lemma 4.4], remains valid in higher spatial dimensions. The convergence of the fully discrete finite element scheme from [4] in any spatial dimension is shown in [3] using a different approach.
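
The mass-lumping idea can be illustrated concretely: on a uniform 1D mesh with piecewise-linear (P1) elements and homogeneous Dirichlet conditions, replacing the consistent mass matrix by its lumped (diagonal) counterpart turns the discrete Laplacian into the familiar three-point finite-difference stencil. The sketch below (not the paper's scheme, just the generic construction) checks this on a quadratic function, for which the stencil is exact.

```python
import numpy as np

def lumped_p1_laplacian(n, h):
    """Mass-lumped P1 discrete Laplacian on a uniform 1D mesh with
    homogeneous Dirichlet BCs: M_L^{-1} K with lumped mass M_L = h*I
    and stiffness K, which reduces to the 3-point FD stencil."""
    K = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h            # stiffness matrix
    M_lumped_inv = np.eye(n) / h                       # lumped mass: row sums of M
    return M_lumped_inv @ K                            # discrete (-Laplacian)

# Applied to samples of u(x) = x(1-x), which vanishes at the boundary,
# the operator returns -u'' = 2 at every interior node (exact for quadratics).
n = 9
h = 1.0 / (n + 1)
x = np.linspace(h, 1 - h, n)
Lu = lumped_p1_laplacian(n, h) @ (x * (1 - x))
```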

Related content

This paper considers the numerical discretization of a nonlocal conservation law modeling vehicular traffic flows with nonlocal inter-vehicle interactions. The nonlocal model involves an integral over a range measured by a horizon parameter, and it recovers the local Lighthill-Whitham-Richards model as the nonlocal horizon parameter goes to zero. Good numerical schemes for simulating these parameterized nonlocal traffic flow models should be robust with respect to changes in the model parameters, but this has not been systematically investigated in the literature. We fill this gap through a careful study of a class of finite volume numerical schemes with suitable discretizations of the nonlocal integral, which include several schemes proposed in the literature and their variants. Our main contribution is to demonstrate the asymptotic compatibility of the schemes, which includes both the uniform convergence of the numerical solutions to the unique solution of the nonlocal continuum model for a given positive horizon parameter and the convergence to the unique entropy solution of the local model as the mesh size and the nonlocal horizon parameter go to zero simultaneously. It is shown that, thanks to this asymptotic compatibility, the schemes provide robust numerical computation under changes of the nonlocal horizon parameter.
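
A minimal member of this class of schemes can be sketched as follows. The sketch assumes the common choices of a linear velocity $v(\rho)=1-\rho$, a uniform kernel averaging the density over the downstream horizon $\delta$, and periodic boundary conditions; it is a generic upwind discretization, not necessarily any of the specific schemes analyzed in the paper.

```python
import numpy as np

def nonlocal_lwr_step(rho, dx, dt, delta):
    """One upwind finite-volume step for rho_t + (rho * v(W))_x = 0,
    where W_i is the uniform average of rho over the downstream horizon
    [x_i, x_i + delta] and v(W) = 1 - W (periodic BCs).
    Generic sketch; requires the CFL condition dt <= dx since v <= 1."""
    m = max(1, int(round(delta / dx)))          # number of cells in the horizon
    # nonlocal downstream average W_i = (1/m) * sum_{k=1..m} rho_{i+k}
    W = sum(np.roll(rho, -k) for k in range(1, m + 1)) / m
    v = 1.0 - W                                 # nonlocal velocity
    flux = rho * v                              # upwind flux at the right cell edge
    return rho - (dt / dx) * (flux - np.roll(flux, 1))
```

With periodic boundaries the fluxes telescope, so the scheme conserves total mass exactly, and under the CFL condition the update is a convex combination that preserves nonnegativity.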

We introduce a new tool for stochastic convex optimization (SCO): a Reweighted Stochastic Query (ReSQue) estimator for the gradient of a function convolved with a (Gaussian) probability density. Combining ReSQue with recent advances in ball oracle acceleration [CJJJLST20, ACJJS21], we develop algorithms achieving state-of-the-art complexities for SCO in parallel and private settings. For an SCO objective constrained to the unit ball in $\mathbb{R}^d$, we obtain the following results (up to polylogarithmic factors). We give a parallel algorithm obtaining optimization error $\epsilon_{\text{opt}}$ with $d^{1/3}\epsilon_{\text{opt}}^{-2/3}$ gradient oracle query depth and $d^{1/3}\epsilon_{\text{opt}}^{-2/3} + \epsilon_{\text{opt}}^{-2}$ gradient queries in total, assuming access to a bounded-variance stochastic gradient estimator. For $\epsilon_{\text{opt}} \in [d^{-1}, d^{-1/4}]$, our algorithm matches the state-of-the-art oracle depth of [BJLLS19] while maintaining the optimal total work of stochastic gradient descent. We give an $(\epsilon_{\text{dp}}, \delta)$-differentially private algorithm which, given $n$ samples of Lipschitz loss functions, obtains near-optimal optimization error and makes $\min(n, n^2\epsilon_{\text{dp}}^2 d^{-1}) + \min(n^{4/3}\epsilon_{\text{dp}}^{1/3}, (nd)^{2/3}\epsilon_{\text{dp}}^{-1})$ queries to the gradients of these functions. In the regime $d \le n \epsilon_{\text{dp}}^{2}$, where privacy comes at no cost in terms of the optimal loss up to constants, our algorithm uses $n + (nd)^{2/3}\epsilon_{\text{dp}}^{-1}$ queries and improves upon recent advances of [KLL21, AFKT21]. In the moderately low-dimensional setting $d \le \sqrt n \epsilon_{\text{dp}}^{3/2}$, our query complexity is near-linear.
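
The basic object behind ReSQue, the gradient of a Gaussian convolution, admits a simple unbiased Monte Carlo estimator via the identity $\nabla (f * \gamma_\rho)(x) = \mathbb{E}_{z \sim \mathcal N(0,I)}[f(x + \rho z)\, z] / \rho$. The sketch below shows only this classical zeroth-order estimator; the distinctive reweighting across nearby query centers that defines ReSQue is not reproduced here.

```python
import numpy as np

def smoothed_grad_estimate(f, x, rho, n_samples, rng):
    """Monte Carlo estimate of the gradient of the Gaussian convolution
    f_rho(x) = E_z[f(x + rho*z)], z ~ N(0, I), using the identity
    grad f_rho(x) = E_z[f(x + rho*z) * z] / rho.
    (A basic estimator only; ReSQue additionally reweights queries.)"""
    d = x.shape[0]
    z = rng.standard_normal((n_samples, d))
    fz = np.array([f(x + rho * zi) for zi in z])
    return (fz[:, None] * z).mean(axis=0) / rho
```

For a linear function $f(x) = a^\top x$ the smoothed gradient is exactly $a$, which gives a convenient sanity check.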

With the emergence of energy communities, where a number of prosumers invest in shared generation and storage, the issue of fair allocation of benefits is increasingly important. The Shapley value has attracted increasing interest for redistribution in energy settings; however, computing it exactly is intractable beyond a few dozen prosumers. In this paper, we first conduct a systematic review of the literature on the use of the Shapley value in energy-related applications, as well as efforts to compute or approximate it. Next, we formalise the main methods for approximating the Shapley value in community energy settings and propose a new one, which we call the stratified expected value approximation. To compare the performance of these methods, we design a novel method for exact Shapley value computation, which can be applied to communities of up to several hundred agents by clustering the prosumers into a smaller number of demand profiles. We perform a large-scale experimental comparison of the proposed methods, for communities of up to 200 prosumers, using publicly available data from two large energy trials in the UK (UKERC Energy Data Centre, 2017; UK Power Networks Innovation, 2021). Our analysis shows that, as the number of agents in the community increases, the relative difference to the exact Shapley value converges to under 1% for all the approximation methods considered. In particular, for most experimental scenarios, we show that there is no statistical difference between the newly proposed stratified expected value method and the existing state-of-the-art method that uses adaptive sampling (O'Brien et al., 2015), while the cost of computation for large communities is an order of magnitude lower.
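
To make the computational issue concrete, here is exact Shapley computation by enumerating all $n!$ orderings next to the standard permutation-sampling approximation (one of the baseline methods the paper formalises, not its new stratified estimator). The coalition value function is a toy energy-community game invented for illustration: each prosumer's benefit is its demand, capped by a shared capacity.

```python
import itertools
import math
import numpy as np

def shapley_exact(n, value):
    """Exact Shapley values by enumerating all n! player orderings."""
    phi = np.zeros(n)
    for perm in itertools.permutations(range(n)):
        coalition = set()
        for player in perm:
            before = value(frozenset(coalition))
            coalition.add(player)
            phi[player] += value(frozenset(coalition)) - before
    return phi / math.factorial(n)

def shapley_mc(n, value, n_perms, rng):
    """Permutation-sampling approximation: average marginal contributions
    over randomly drawn orderings instead of all n! of them."""
    phi = np.zeros(n)
    for _ in range(n_perms):
        coalition = set()
        for player in rng.permutation(n):
            before = value(frozenset(coalition))
            coalition.add(player)
            phi[player] += value(frozenset(coalition)) - before
    return phi / n_perms

# Toy community game (hypothetical numbers): benefit = total demand served,
# capped by a shared storage capacity of 6.
demand = [1.0, 2.0, 3.0, 4.0]
v = lambda S: min(sum(demand[i] for i in S), 6.0)
phi_exact = shapley_exact(4, v)
phi_approx = shapley_mc(4, v, 2000, np.random.default_rng(0))
```

The exact values satisfy the efficiency axiom (they sum to the grand-coalition value), and the sampled values approach them at the usual $O(1/\sqrt{\text{samples}})$ Monte Carlo rate.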

General log-linear models specified by non-negative integer design matrices have a potentially wide range of applications, although using models without the genuine overall effect, that is, ones which cannot be reparameterized to include a normalizing constant, is still rare. The log-linear models without the overall effect arise naturally in practice, and can be handled in a similar manner to models with the overall effect. A novel iterative scaling procedure for the MLE computation under such models is proposed, and its convergence is proved. The results are illustrated using data from a recent clinical study.
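
For orientation, the sketch below implements the classical Darroch–Ratcliff generalized iterative scaling for the Poisson log-linear MLE on a nonnegative integer design matrix with constant row sums, applied to the standard two-way independence model (which does contain the overall effect). The paper's novel procedure for models without the overall effect is different; this is only the textbook baseline.

```python
import numpy as np

def generalized_iterative_scaling(A, y, n_iter=2000):
    """Darroch-Ratcliff generalized iterative scaling for the Poisson
    log-linear MLE: find theta such that mu = exp(A @ theta) matches the
    sufficient statistics, A.T @ mu = A.T @ y.  Assumes a nonnegative
    design matrix A whose rows all sum to the same constant C; each sweep
    is the multiplicative MM update theta_j += log(t_j / m_j) / C."""
    C = A.sum(axis=1)[0]
    assert np.all(A.sum(axis=1) == C)
    target = A.T @ y                       # observed sufficient statistics
    theta = np.zeros(A.shape[1])
    for _ in range(n_iter):
        mu = np.exp(A @ theta)
        theta += np.log(target / (A.T @ mu)) / C
    return theta

# 2x2 independence model: features are row and column indicators,
# so every row of A sums to C = 2.
A = np.array([[1, 0, 1, 0],
              [1, 0, 0, 1],
              [0, 1, 1, 0],
              [0, 1, 0, 1]], dtype=float)
y = np.array([10.0, 20.0, 30.0, 40.0])
mu = np.exp(A @ generalized_iterative_scaling(A, y))
```

At convergence the fitted values reproduce the row and column margins, i.e. the familiar (row total × column total)/grand total cell estimates.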

We study a 1D geometry of a plasma confined between two floating conducting walls, with applications to laboratory plasmas. These plasmas are characterized by a quasi-neutral bulk joined to the wall by a thin, positively charged boundary layer called the sheath. Although analytical solutions are available in the sheath and the pre-sheath, joining the two regions by a single analytical solution is still an open problem, which requires the numerical resolution of the fluid equations coupled to the Poisson equation. Current numerical schemes use high-order discretizations to correctly capture the electron current in the sheath, yet they produce unsatisfactory results in the boundary layer and are not adapted to all possible collisional regimes. In this work, we identify the main numerical challenges that arise when attempting to simulate such a configuration and propose explanations for the observed phenomena via numerical analysis. We propose a numerical scheme with controlled diffusion, as well as new discrete boundary conditions, that address the identified issues.
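
One elementary ingredient of any such fluid–Poisson solver is the 1D Poisson solve with Dirichlet wall potentials. The sketch below shows only this building block with a standard second-order finite-difference discretization, verified on a manufactured solution; it says nothing about the controlled-diffusion scheme or the discrete boundary conditions developed in the paper.

```python
import numpy as np

def solve_poisson_dirichlet(rhs, phi_left, phi_right, L=1.0):
    """Solve -phi'' = rhs on (0, L) with Dirichlet data phi(0) = phi_left,
    phi(L) = phi_right, using the second-order 3-point finite-difference
    scheme.  In the sheath problem, rhs would be the normalized charge
    density n_i - n_e; here it is just a given grid function."""
    n = len(rhs)                        # number of interior nodes
    h = L / (n + 1)
    A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h**2
    b = rhs.astype(float).copy()
    b[0] += phi_left / h**2             # fold boundary values into the rhs
    b[-1] += phi_right / h**2
    return np.linalg.solve(A, b)
```

Against the manufactured solution $\phi(x) = \sin(\pi x)$, whose source is $\pi^2 \sin(\pi x)$, the scheme exhibits its expected $O(h^2)$ accuracy.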

We consider the problem of estimating a multivariate function $f_0$ of bounded variation (BV), from noisy observations $y_i = f_0(x_i) + z_i$ made at random design points $x_i \in \mathbb{R}^d$, $i=1,\ldots,n$. We study an estimator that forms the Voronoi diagram of the design points, and then solves an optimization problem that regularizes according to a certain discrete notion of total variation (TV): the sum of weighted absolute differences of parameters $\theta_i,\theta_j$ (which estimate the function values $f_0(x_i),f_0(x_j)$) at all neighboring cells $i,j$ in the Voronoi diagram. This is seen to be equivalent to a variational optimization problem that regularizes according to the usual continuum (measure-theoretic) notion of TV, once we restrict the domain to functions that are piecewise constant over the Voronoi diagram. The regression estimator under consideration hence performs (shrunken) local averaging over adaptively formed unions of Voronoi cells, and we refer to it as the Voronoigram, following the ideas in Koenker (2005), and drawing inspiration from Tukey's regressogram (Tukey, 1961). Our contributions in this paper span both the conceptual and theoretical frontiers: we discuss some of the unique properties of the Voronoigram in comparison to TV-regularized estimators that use other graph-based discretizations; we derive the asymptotic limit of the Voronoi TV functional; and we prove that the Voronoigram is minimax rate optimal (up to log factors) for estimating BV functions that are essentially bounded.
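
The discrete TV functional at the heart of the Voronoigram can be assembled directly from the Voronoi diagram: neighboring cells are exactly the pairs listed in `ridge_points` of SciPy's `Voronoi` object. The sketch below evaluates the unweighted penalty for a given parameter vector; the length-based edge weights and the optimization step of the full estimator are omitted.

```python
import numpy as np
from scipy.spatial import Voronoi

def voronoi_tv(theta, points):
    """Discrete total-variation penalty over the Voronoi diagram:
    sum of |theta_i - theta_j| over all pairs of neighboring cells.
    Neighbor pairs are read off scipy's ridge_points array; the
    ridge-length weights of the full Voronoigram are omitted here."""
    vor = Voronoi(points)
    i, j = vor.ridge_points[:, 0], vor.ridge_points[:, 1]
    return np.abs(theta[i] - theta[j]).sum()
```

The penalty vanishes exactly on constant vectors, which is the basic property that lets the estimator perform local averaging over unions of cells.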

We consider a mixed dimensional elliptic partial differential equation posed in a bulk domain with a large number of embedded interfaces. In particular, we study well-posedness of the problem and regularity of the solution. We also propose a fitted finite element approximation and prove an a priori error bound. For the solution of the arising linear system we propose and analyze an iterative method based on subspace decomposition. Finally, we present numerical experiments and achieve rapid convergence using the proposed preconditioner, confirming our theoretical findings.
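
A generic subspace-decomposition preconditioner of the kind analyzed here can be sketched as an additive Schwarz method: the residual is restricted to (possibly overlapping) index sets, solved locally, and the corrections are summed. The decomposition below is an arbitrary overlapping split of a 1D Laplacian, not the interface-aware splitting constructed in the paper.

```python
import numpy as np

def additive_schwarz(A, subdomains):
    """Return the action r -> sum_k R_k^T (R_k A R_k^T)^{-1} R_k r
    for a list of (possibly overlapping) index arrays."""
    inv_blocks = [np.linalg.inv(A[np.ix_(s, s)]) for s in subdomains]
    def apply(r):
        z = np.zeros_like(r)
        for s, Ainv in zip(subdomains, inv_blocks):
            z[s] += Ainv @ r[s]          # local solve, prolongated back
        return z
    return apply

def pcg(A, b, precond, tol=1e-10, maxit=500):
    """Preconditioned conjugate gradient method."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = precond(r)
    p = z.copy()
    rz = r @ z
    for _ in range(maxit):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = precond(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x
```

Since the additive Schwarz operator is symmetric positive definite, it is a valid CG preconditioner, and the preconditioned solve reproduces the direct solution.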

Equilibrium properties in statistical physics are obtained by computing averages with respect to Boltzmann-Gibbs measures, sampled in practice using ergodic dynamics such as the Langevin dynamics. Some quantities however cannot be computed by simply sampling the Boltzmann-Gibbs measure, in particular transport coefficients, which relate the current of some physical quantity of interest to the forcing needed to induce it. For instance, a temperature difference induces an energy current, the proportionality factor between these two quantities being the thermal conductivity. From an abstract point of view, transport coefficients can also be considered as some form of sensitivity analysis with respect to an added forcing to the baseline dynamics. There are various numerical techniques to estimate transport coefficients, which all suffer from large errors, in particular large statistical errors. This contribution reviews the most popular methods, namely the Green-Kubo approach, where the transport coefficient is expressed as some time-integrated correlation function, and the approach based on long-time averages of the stochastic dynamics perturbed by an external driving (so-called nonequilibrium molecular dynamics). In each case, the various sources of errors are made precise, in particular the bias related to the time discretization of the underlying continuous dynamics, and the variance of the associated Monte Carlo estimators. Some recent alternative techniques to estimate transport coefficients are also discussed.
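
The Green-Kubo approach can be sketched in a few lines: estimate the empirical autocorrelation of a current-like time series and integrate it up to a cutoff lag (the cutoff trades truncation bias against statistical variance, exactly the error sources discussed above). The toy check uses an Euler-Maruyama discretization of the Langevin velocity $dv = -\gamma v\,dt + \sqrt{2\gamma}\,dW$, whose autocorrelation is $e^{-\gamma t}$, so the integrated coefficient is $1/\gamma$.

```python
import numpy as np

def green_kubo(v, dt, n_lags):
    """Green-Kubo estimate from a stationary time series: integrate the
    empirical autocorrelation C(t) = <v(0) v(t)> up to a cutoff lag,
    using the trapezoidal rule."""
    v = v - v.mean()
    n = len(v)
    C = np.array([np.dot(v[:n - k], v[k:]) / (n - k) for k in range(n_lags)])
    return dt * (0.5 * C[0] + C[1:-1].sum() + 0.5 * C[-1])

# Toy trajectory: Euler-Maruyama for dv = -gamma*v dt + sqrt(2*gamma) dW;
# the exact transport coefficient (integral of exp(-gamma*t)) is 1/gamma.
rng = np.random.default_rng(0)
gamma, dt, n = 1.0, 0.01, 300_000
v = np.empty(n)
v[0] = 0.0
noise = np.sqrt(2 * gamma * dt) * rng.standard_normal(n - 1)
for i in range(n - 1):
    v[i + 1] = v[i] - gamma * v[i] * dt + noise[i]
D = green_kubo(v, dt, n_lags=int(6.0 / dt))
```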

Stochastic gradient methods have enabled variational inference for high-dimensional models. However, the steepest ascent direction in the parameter space of a statistical model is actually given by the natural gradient, which premultiplies the widely used Euclidean gradient by the inverse Fisher information. Use of natural gradients can improve convergence, but inverting the Fisher information matrix is daunting in high dimensions. In Gaussian variational approximation, natural gradient updates of the mean and precision of the normal distribution can be derived analytically, but do not ensure that the precision matrix remains positive definite. To tackle this issue, we consider Cholesky decomposition of the covariance or precision matrix, and derive analytic natural gradient updates of the Cholesky factor, which depend on either the first or second derivative of the log posterior density. Efficient natural gradient updates of the Cholesky factor are also derived under sparsity constraints representing different posterior correlation structures. As Adam's adaptive learning rate does not work well with natural gradients, we propose stochastic normalized natural gradient ascent with momentum. The efficiency of the proposed methods is demonstrated using logistic regression and generalized linear mixed models.
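
The analytic mean/precision updates mentioned above can be sketched in the standard natural-parameter form (as in the Khan-style derivations), ignoring the paper's Cholesky-factor parameterization: the precision moves toward the expected negative Hessian of the log posterior, and the mean takes a step preconditioned by the new covariance. For a Gaussian target the expectations are available in closed form and a full step is exact, which gives a clean check.

```python
import numpy as np

def ngd_step(mu, P, grad_logp, hess_logp, rho):
    """One natural-gradient ELBO step for q = N(mu, P^{-1}):
        P  <- (1 - rho) * P - rho * E_q[hessian of log p]
        mu <- mu + rho * P_new^{-1} @ E_q[gradient of log p]
    Expectations are passed in already evaluated.  Full-matrix sketch;
    the paper instead parameterizes and updates a Cholesky factor."""
    P_new = (1 - rho) * P - rho * hess_logp
    mu_new = mu + rho * np.linalg.solve(P_new, grad_logp)
    return mu_new, P_new

# Gaussian target N(m, S): E_q[grad log p] = -S^{-1} (mu - m) and
# E_q[hess log p] = -S^{-1}, so one full step (rho = 1) is exact.
m = np.array([1.0, -2.0])
S = np.array([[2.0, 0.5], [0.5, 1.0]])
S_inv = np.linalg.inv(S)
mu, P = np.zeros(2), np.eye(2)
mu, P = ngd_step(mu, P, -S_inv @ (mu - m), -S_inv, rho=1.0)
```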

This paper focuses on the expected difference in a borrower's repayment when there is a change in the lender's credit decisions. Classical estimators overlook the confounding effects, and hence the estimation error can be substantial. We therefore propose an alternative approach to constructing the estimators such that the error can be greatly reduced. The proposed estimators are shown to be unbiased, consistent, and robust through a combination of theoretical analysis and numerical testing. Moreover, we compare the power of estimating the causal quantities between the classical and the proposed estimators. The comparison is tested across a wide range of models, including linear regression models, tree-based models, and neural network-based models, under different simulated datasets that exhibit different levels of causality, degrees of nonlinearity, and distributional properties. Most importantly, we apply our approaches to a large observational dataset provided by a global technology firm that operates in both the e-commerce and the lending business. We find that the relative reduction in estimation error is strikingly substantial when the causal effects are accounted for correctly.
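
The abstract does not spell out the proposed estimators, but the confounding problem it starts from can be illustrated with a standard correction, inverse-propensity weighting, on simulated lending-style data with hypothetical variable names: a risk score drives both the approval decision and the repayment, so the naive difference of group means is badly biased while the weighted estimator recovers the true effect.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Simulated data (hypothetical): a risk score confounds both the credit
# decision and the repayment amount.
risk = rng.normal(size=n)
p_approve = 1.0 / (1.0 + np.exp(-risk))   # higher score -> more likely approved
approve = rng.random(n) < p_approve
repay = 1.0 * approve + 2.0 * risk + rng.normal(size=n)
# True average effect of approval on repayment is 1.0 by construction.

# Naive estimator: difference of group means, biased by confounding.
naive = repay[approve].mean() - repay[~approve].mean()

# Self-normalized inverse-propensity-weighted estimator, using the
# (here, known) propensity scores.
w1 = approve / p_approve
w0 = (~approve) / (1.0 - p_approve)
ipw = (w1 * repay).sum() / w1.sum() - (w0 * repay).sum() / w0.sum()
```

In practice the propensity scores would themselves be estimated (e.g., by logistic regression of the decision on the observed covariates), which is one reason such corrections interact with the model classes compared in the paper.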
