
We present a priori and a posteriori error analyses of a high-order hybridizable discontinuous Galerkin (HDG) method applied to a semi-linear elliptic problem posed on a piecewise-curved, non-polygonal domain $\Omega$. We approximate $\Omega$ by a polygonal subdomain $\Omega_h$ and propose an HDG discretization, which is shown to be optimal under mild assumptions on the non-linear source term and on the distance between the boundaries of the polygonal subdomain $\Omega_h$ and the true domain $\Omega$. Moreover, a local non-linear post-processing of the scalar unknown is proposed and shown to provide an additional order of convergence. A reliable and locally efficient a posteriori error estimator that accounts for the error in the approximation of the boundary data of $\Omega_h$ is also provided.


In this article, we focus on extending the notion of lattice linearity to self-stabilizing programs. Lattice linearity allows a node to execute its actions with old information about the state of other nodes and still preserve correctness; it increases the concurrency of the program execution by eliminating the need for synchronization among its nodes. The extension -- denoted as eventually lattice linear algorithms -- is demonstrated on the service-demand based minimal dominating set (SDDS) problem, a generalization of the dominating set problem; the resulting algorithm converges in $2n$ moves. Subsequently, we show that the same approach can be used for various other problems, including minimal vertex cover, maximal independent set and graph coloring.
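The move-based style of computation described above can be illustrated with a minimal sketch for the plain minimal dominating set problem. This is a simplification for illustration only: the paper's SDDS algorithm, its demand model, and its $2n$-move bound are not reproduced here, and the sequential sweep below stands in for the distributed scheduler.

```python
# Hedged sketch: a generic move-based algorithm for a minimal dominating
# set (not the paper's SDDS algorithm). Nodes repeatedly make local moves:
# an undominated node enters the set; a set member leaves if doing so
# keeps its whole closed neighbourhood dominated.

def dominated(v, in_set, adj):
    """A node is dominated if it or one of its neighbours is in the set."""
    return in_set[v] or any(in_set[u] for u in adj[v])

def minimal_dominating_set(adj):
    n = len(adj)
    in_set = [False] * n
    moves, moved = 0, True
    while moved:
        moved = False
        for v in range(n):
            if not in_set[v] and not dominated(v, in_set, adj):
                in_set[v] = True                 # enter: v was undominated
                moved, moves = True, moves + 1
            elif in_set[v]:
                in_set[v] = False                # tentatively leave
                safe = dominated(v, in_set, adj) and all(
                    dominated(u, in_set, adj) for u in adj[v])
                if safe:
                    moved, moves = True, moves + 1
                else:
                    in_set[v] = True             # undo: leaving breaks domination
    return in_set, moves
```

On the path graph 0-1-2, for instance, the sweep produces a set that no node can enter or safely leave, i.e. a minimal dominating set.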

We introduce the multivariate decomposition finite element method for elliptic PDEs with lognormal diffusion coefficient $a=\exp(Z)$ where $Z$ is a Gaussian random field defined by an infinite series expansion $Z(\boldsymbol{y}) = \sum_{j\ge1} y_j\,\phi_j$ with $y_j\sim\mathcal{N}(0,1)$ and a given sequence of functions $\{\phi_j\}_{j\ge1}$. We use the MDFEM to approximate the expected value of a linear functional of the solution of the PDE, which is an infinite-dimensional integral over the parameter space. The proposed algorithm uses the multivariate decomposition method (MDM) to compute the infinite-dimensional integral by a decomposition into finite-dimensional integrals, which we resolve using quasi-Monte Carlo (QMC) methods, and for which we use the finite element method (FEM) to solve different instances of the PDE. We develop higher-order quasi-Monte Carlo rules for integration over the finite-dimensional Euclidean space with respect to the Gaussian distribution by use of a truncation strategy. By linear transformations of interlaced polynomial lattice rules from the unit cube to a multivariate box of the Euclidean space, we achieve higher-order convergence rates for functions belonging to a class of anchored Gaussian Sobolev spaces, taking into account the truncation error. Under appropriate conditions, the MDFEM achieves higher-order convergence rates in terms of error versus cost, i.e., to achieve an accuracy of $O(\epsilon)$ the computational cost is $O(\epsilon^{-1/\lambda-d'/\lambda}) = O(\epsilon^{-(p^*+d'/\tau)/(1-p^*)})$ where $\epsilon^{-1/\lambda}$ and $\epsilon^{-d'/\lambda}$ are respectively the cost of the quasi-Monte Carlo cubature and the finite element approximations, with $d' = d \, (1+\delta')$ for some $\delta' \ge 0$ and $d$ the physical dimension, and $0 < p^* \le (2+d'/\tau)^{-1}$ is a parameter representing the sparsity of $\{\phi_j\}_{j\ge1}$.
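The decomposition step at the heart of the MDM can be sketched as follows, with notation that is illustrative rather than taken verbatim from the paper: the integrand $F$ is written as a sum of anchored terms $F_{\mathfrak u}$, each depending only on the finitely many coordinates in the index set $\mathfrak u$, the sum is truncated to an active set $\mathcal U(\epsilon)$, and a finite-dimensional (here higher-order QMC) cubature $Q_{\mathfrak u}$ is applied term by term:

```latex
% Illustrative MDM sketch; the active set and the cubatures are chosen to
% balance truncation and cubature errors against computational cost.
I(F) \;=\; \sum_{\substack{\mathfrak{u} \subset \mathbb{N} \\ |\mathfrak{u}| < \infty}}
  I_{\mathfrak{u}}(F_{\mathfrak{u}})
\;\approx\; \sum_{\mathfrak{u} \in \mathcal{U}(\epsilon)} Q_{\mathfrak{u}}(F_{\mathfrak{u}}).
```

Each term $Q_{\mathfrak{u}}(F_{\mathfrak{u}})$ requires PDE solves, which is where the FEM enters and where the cost factor $\epsilon^{-d'/\lambda}$ in the abstract's complexity bound originates.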

Representing a sparse histogram, or more generally a sparse vector, is a fundamental task in differential privacy. An ideal solution would use space close to information-theoretic lower bounds, have an error distribution that depends optimally on the desired privacy level, and allow fast random access to entries in the vector. However, existing approaches have only achieved two of these three goals. In this paper we introduce the Approximate Laplace Projection (ALP) mechanism for approximating $k$-sparse vectors. This mechanism is shown to simultaneously have information-theoretically optimal space (up to constant factors), fast access to vector entries, and error of the same magnitude as the Laplace mechanism applied to dense vectors. A key new technique is a unary representation of small integers, which we show to be robust against ``randomized response'' noise. This representation is combined with hashing, in the spirit of Bloom filters, to obtain a space-efficient, differentially private representation. Our theoretical performance bounds are complemented by simulations which show that the constant factors on the main performance parameters are quite small, suggesting that the technique is practical.
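The building block of a unary encoding perturbed by randomized response can be sketched as follows. This is a minimal illustration of the robustness claim only; the names, the simple mean-based debiasing, and the parameters are assumptions, and the ALP mechanism's actual prefix-style decoding and hashing are not reproduced.

```python
import random

def unary_rr(x, m, p, rng):
    """Encode an integer 0 <= x <= m in unary (x ones then m - x zeros),
    then flip each bit independently with probability p (randomized response)."""
    bits = [1] * x + [0] * (m - x)
    return [b ^ (rng.random() < p) for b in bits]

def debias(noisy, p):
    """Unbiased estimate of x from the flipped encoding:
    E[sum of bits] = x*(1 - 2p) + m*p, so invert that affine map."""
    m = len(noisy)
    return (sum(noisy) - m * p) / (1 - 2 * p)
```

Averaging the debiased estimate over repeated encodings recovers the original integer, which is the sense in which the unary representation tolerates randomized-response noise.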

We consider a stochastic version of the proximal point algorithm for optimization problems posed on a Hilbert space. A typical application of this is supervised learning. While the method is not new, it has not been extensively analyzed in this form. Indeed, most related results are confined to the finite-dimensional setting, where error bounds could depend on the dimension of the space. On the other hand, the few existing results in the infinite-dimensional setting only prove very weak types of convergence, owing to weak assumptions on the problem. In particular, there are no results that show convergence with a rate. In this article, we bridge these two worlds by assuming more regularity of the optimization problem, which allows us to prove convergence with an (optimal) sub-linear rate also in an infinite-dimensional setting. In particular, we assume that the objective function is the expected value of a family of convex differentiable functions. While we require that the full objective function is strongly convex, we do not assume that its constituent parts are so. Further, we require that the gradient satisfies a weak local Lipschitz continuity property, where the Lipschitz constant may grow polynomially given certain guarantees on the variance and higher moments near the minimum. We illustrate these results by discretizing a concrete infinite-dimensional classification problem with varying degrees of accuracy.
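For a concrete finite-dimensional instance, the stochastic proximal point iteration admits a closed-form update when each constituent function is a least-squares term $f_i(x) = \tfrac12 (a_i^\top x - b_i)^2$. The sketch below uses the sub-linear step-size schedule $\eta_k = c/(k+1)$; the problem, schedule constant, and sampling scheme are illustrative assumptions, not the paper's infinite-dimensional setting.

```python
import numpy as np

def stochastic_prox_point(A, b, steps=3000, c=2.0, seed=0):
    """Stochastic proximal point for f(x) = E_i [ 0.5*(a_i.x - b_i)^2 ].
    Each step solves argmin_x f_i(x) + ||x - x_k||^2 / (2*eta) exactly:
    the residual shrinks by 1/(1 + eta*||a_i||^2), then x moves along a_i."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)
    for k in range(steps):
        i = rng.integers(n)
        eta = c / (k + 1)                      # sub-linear step-size schedule
        a, bi = A[i], b[i]
        r = (a @ x - bi) / (1 + eta * (a @ a))  # closed-form prox residual
        x = x - eta * r * a
    return x
```

Unlike a plain stochastic gradient step, the proximal update contracts the sampled residual for every step size, which is one reason the method is attractive when step sizes are hard to tune.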

In contrast to the typical assumption of independent and identically distributed (IID) matrix elements, this paper studies the estimation of a random matrix with IID rows in a generalized linear model constructed from a linear mixing space and a row-wise mapping channel. This inference problem arises in many engineering fields, such as wireless communications, compressed sensing, and phase retrieval. We apply the replica method from statistical mechanics to analyze the exact minimum mean square error (MMSE) under the Bayes-optimal setting, obtaining the explicit replica symmetric solution of the exact MMSE estimator. Meanwhile, the input-output mutual information relation between the objective model and an equivalent single-vector system is established. To estimate the signal, we also propose a computationally efficient message-passing algorithm from an expectation propagation (EP) perspective and analyze its dynamics. We verify that the asymptotic MSE of the proposed algorithm, predicted by its state evolution (SE), matches the exact MMSE predicted by the replica method. This indicates that the optimal MSE can be attained by the proposed algorithm whenever its state evolution has a unique fixed point.
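The flavour of a state-evolution analysis can be conveyed with a scalar sketch for the simplest related setting, a linear model $y = Ax + w$ with an IID Gaussian sensing matrix, unit Gaussian prior, and sampling ratio $\delta$. The recursion and its parameters are a textbook-style illustration, not the paper's row-wise model or its EP algorithm.

```python
def state_evolution(delta, sigma2, iters=100):
    """Scalar state evolution for y = Ax + w with x_i ~ N(0,1):
    the effective noise variance tau2 evolves as
        tau2 <- sigma2 + mmse(tau2) / delta,
    where mmse(t) = t / (1 + t) is the MMSE of estimating a unit Gaussian
    scalar observed in additive Gaussian noise of variance t."""
    tau2 = 1e6  # start from an uninformative (very noisy) state
    for _ in range(iters):
        tau2 = sigma2 + (tau2 / (1.0 + tau2)) / delta
    return tau2
```

When the recursion has a unique fixed point, the algorithmic MSE read off from it coincides with the information-theoretic prediction, mirroring the matching result stated in the abstract.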

We obtain explicit $p$-Wasserstein distance error bounds between the distribution of the multi-parameter MLE and the multivariate normal distribution. Our general bounds are given for possibly high-dimensional, independent and identically distributed random vectors. Our general bounds are of the optimal $\mathcal{O}(n^{-1/2})$ order. Explicit numerical constants are given when $p\in(1,2]$, and in the case $p>2$ the bounds are explicit up to a constant factor that only depends on $p$. We apply our general bounds to derive Wasserstein distance error bounds for the multivariate normal approximation of the MLE in several settings; these being single-parameter exponential families, the normal distribution under canonical parametrisation, and the multivariate normal distribution under non-canonical parametrisation. In addition, we provide upper bounds with respect to the bounded Wasserstein distance when the MLE is implicitly defined.
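The kind of approximation these bounds quantify is easy to observe numerically in a single-parameter exponential family. The sketch below is an illustrative simulation chosen by us, not an experiment from the paper: for $X_1,\dots,X_n \sim \mathrm{Exp}(\lambda)$ the MLE is $1/\bar X$, and classical asymptotics give $\sqrt{n}\,(\hat\lambda - \lambda)/\lambda \approx \mathcal{N}(0,1)$, with the quality of that approximation decaying at the $\mathcal{O}(n^{-1/2})$ rate the abstract's bounds make explicit.

```python
import numpy as np

def standardised_mle_errors(lam=2.0, n=400, reps=2000, seed=1):
    """Simulate reps datasets of n Exp(rate=lam) samples, compute the
    MLE 1/mean(X) for each, and return the standardised errors
    sqrt(n)*(MLE - lam)/lam, which should be approximately N(0, 1)."""
    rng = np.random.default_rng(seed)
    mles = np.array([1.0 / rng.exponential(1.0 / lam, size=n).mean()
                     for _ in range(reps)])
    return np.sqrt(n) * (mles - lam) / lam
```

The empirical mean and standard deviation of the returned errors sit close to 0 and 1, as the normal approximation predicts.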

Under some regularity assumptions, we report an a priori error analysis of a dG scheme for the Poisson and Stokes flow problems in their dual mixed formulation. Both formulations satisfy a Babu\v{s}ka-Brezzi type condition within the space $H(\mathrm{div}) \times L^2$. It is well known that the lowest order Crouzeix-Raviart element paired with piecewise constants satisfies such a condition on (broken) $H^1 \times L^2$ spaces. In the present article, we use this pair. The continuity of the normal component is weakly imposed by penalizing jumps of the broken $H(\mathrm{div})$ component. For the resulting methods, we prove well-posedness and convergence with constants independent of data and mesh size. We report error estimates in the methods' natural norms and optimal local error estimates for the divergence error. In fact, our finite element solution shares for each triangle one DOF with the CR interpolant, and the divergence is locally the best approximation for any regularity. Numerical experiments support the findings and suggest that the other errors converge optimally even for the lowest regularity solutions and a crack problem, as long as the crack is resolved by the mesh.

We present a framework that allows for the non-asymptotic study of the $2$-Wasserstein distance between the invariant distribution of an ergodic stochastic differential equation and the distribution of its numerical approximation in the strongly log-concave case. This allows us to study in a unified way a number of different integrators proposed in the literature for the overdamped and underdamped Langevin dynamics. In addition, we analyse a novel splitting method for the underdamped Langevin dynamics which only requires one gradient evaluation per time step. Under an additional smoothness assumption on a $d$-dimensional strongly log-concave distribution with condition number $\kappa$, the algorithm is shown to produce, with complexity $\mathcal{O}\big(\kappa^{5/4} d^{1/4}\epsilon^{-1/2} \big)$, samples from a distribution that, in Wasserstein distance, is at most $\epsilon>0$ away from the target distribution.
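The simplest member of the integrator family discussed above is the Euler-Maruyama discretisation of the overdamped Langevin dynamics (the unadjusted Langevin algorithm). The sketch below shows that baseline only; the paper's one-gradient splitting scheme for the underdamped dynamics, and its step-size choices, are not reproduced here.

```python
import numpy as np

def ula(grad_U, x0, h, n_steps, rng):
    """Unadjusted Langevin algorithm: Euler-Maruyama discretisation of
    dX = -grad U(X) dt + sqrt(2) dW, whose continuous-time limit has
    invariant density proportional to exp(-U); the discretisation carries
    an O(h) bias in Wasserstein distance for strongly log-concave targets."""
    x = np.array(x0, dtype=float)
    traj = []
    for _ in range(n_steps):
        x = x - h * grad_U(x) + np.sqrt(2.0 * h) * rng.standard_normal(x.shape)
        traj.append(x.copy())
    return np.array(traj)
```

Run against the standard Gaussian potential $U(x) = x^2/2$ (so `grad_U` is the identity), the chain's long-run mean and variance sit close to 0 and 1, up to the step-size bias.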

We present an a posteriori error estimate based on equilibrated stress reconstructions for the finite element approximation of a unilateral contact problem with weak enforcement of the contact conditions. We start by proving a guaranteed upper bound for the dual norm of the residual. This norm is shown to control the natural energy norm up to a boundary term, which can be removed under a saturation assumption. The basic estimate is then refined to distinguish the different components of the error, and is used as a starting point to design an algorithm including adaptive stopping criteria for the nonlinear solver and automatic tuning of a regularization parameter. We then discuss an actual way of computing the stress reconstruction based on the Arnold-Falk-Winther finite elements. Finally, after briefly discussing the efficiency of our estimators, we showcase their performance on a panel of numerical tests.

Many resource allocation problems in the cloud can be described as a basic Virtual Network Embedding Problem (VNEP): finding mappings of request graphs (describing the workloads) onto a substrate graph (describing the physical infrastructure). In the offline setting, the two natural objectives are profit maximization, i.e., embedding a maximal number of request graphs subject to the resource constraints, and cost minimization, i.e., embedding all requests at minimal overall cost. The VNEP can be seen as a generalization of classic routing and call admission problems, in which requests are arbitrary graphs whose communication endpoints are not fixed. Due to its applications, the problem has been studied intensively in the networking community. However, the underlying algorithmic problem is hardly understood. This paper presents the first fixed-parameter tractable approximation algorithms for the VNEP. Our algorithms are based on randomized rounding. Due to the flexible mapping options and the arbitrary request graph topologies, we show that a novel linear programming formulation is required: only this formulation, which accounts for the structure of the request graphs, enables the computation of convex combinations of valid mappings. Accordingly, to capture the structure of request graphs, we introduce the graph-theoretic notions of extraction orders and extraction width, and show that our algorithms have runtime exponential in the request graphs' maximal extraction width. Hence, for request graphs of fixed extraction width, we obtain the first polynomial-time approximations. Studying the new notion of extraction orders, we show that (i) computing extraction orders of minimal width is NP-hard and (ii) computing decomposable LP solutions is in general NP-hard, even when restricting request graphs to planar ones.
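The rounding step itself is simple once a convex combination of valid mappings is available. The toy sketch below shows only that sampling step, with assumed data structures: each request carries a list of `(probability, mapping)` pairs summing to at most 1, with the remaining mass interpreted as rejecting the request; the paper's LP formulation, decomposition, and resource-violation analysis are not reproduced.

```python
import random

def round_requests(decompositions, rng):
    """Randomized rounding over convex combinations of valid mappings:
    for each request, draw one mapping (or None, i.e. rejection) with the
    probabilities given by its LP decomposition [(prob, mapping), ...]."""
    chosen = []
    for combo in decompositions:
        u, acc, pick = rng.random(), 0.0, None
        for prob, mapping in combo:
            acc += prob
            if u < acc:
                pick = mapping
                break
        chosen.append(pick)
    return chosen
```

Repeating the draw and keeping the best feasible outcome is the usual way such a rounding step is embedded in an approximation algorithm.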
