
We introduce a local adaptive discontinuous Galerkin method for convection-diffusion-reaction equations. The proposed method is based on a coarse grid and iteratively improves the accuracy of the solution by solving local elliptic problems in refined subdomains. For purely diffusive problems, we have already proved that this scheme converges under minimal regularity assumptions [A. Abdulle and G. Rosilho de Souza, ESAIM: M2AN, 53(4):1269--1303, 2019]. In this paper, we provide an algorithm for the automatic identification of the subdomains of the local elliptic problems, employing a flux reconstruction strategy. Reliable error estimators are derived for the local adaptive method. Numerical comparisons with a classical nonlocal adaptive algorithm illustrate the efficiency of the method.
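
As a rough illustration of the solve-and-patch idea (a finite-difference stand-in, not the authors' discontinuous Galerkin scheme), the sketch below corrects a coarse solution of a 1D Poisson problem by re-solving on a refined subdomain, with the coarse solution supplying Dirichlet data on the artificial boundary; the source term and the subdomain $[0.4, 0.6]$ are hypothetical choices.

```python
import numpy as np

def solve_poisson(x, f, u_left, u_right):
    """Finite-difference solve of -u'' = f on the uniform grid x with Dirichlet data."""
    n, h = len(x), x[1] - x[0]
    A = 2 * np.eye(n - 2) - np.eye(n - 2, k=1) - np.eye(n - 2, k=-1)
    b = h**2 * f(x[1:-1])
    b[0] += u_left
    b[-1] += u_right
    u = np.empty(n)
    u[0], u[-1] = u_left, u_right
    u[1:-1] = np.linalg.solve(A, b)
    return u

f = lambda x: 100.0 * np.exp(-200.0 * (x - 0.5) ** 2)   # sharp source near x = 0.5

# Global solve on a coarse grid.
xc = np.linspace(0.0, 1.0, 21)
uc = solve_poisson(xc, f, 0.0, 0.0)

# Local elliptic solve on the refined subdomain [0.4, 0.6], with boundary
# values taken from the current (coarse) iterate.
xl = np.linspace(0.4, 0.6, 81)
ul = solve_poisson(xl, f, np.interp(0.4, xc, uc), np.interp(0.6, xc, uc))
# In the adaptive method such local corrections are patched back and iterated,
# with the subdomains selected automatically by a flux-reconstruction estimator.
```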

Related content

We present a novel energy-based numerical analysis of semilinear diffusion-reaction boundary value problems. Based on a suitable variational setting, the proposed computational scheme can be seen as an energy minimisation approach. More specifically, this procedure aims to generate a sequence of numerical approximations, which results from the iterative solution of related (stabilised) linearised discrete problems, and tends to a local minimum of the underlying energy functional. Simultaneously, the finite-dimensional approximation spaces are adaptively refined; this is implemented in terms of a new mesh refinement strategy in the context of finite element discretisations, which again relies on the energy structure of the problem under consideration, and does not involve any a posteriori error indicators. In combination, the resulting adaptive algorithm consists of an iterative linearisation procedure on a sequence of hierarchically refined discrete spaces, which we prove to converge towards a solution of the continuous problem in an appropriate sense. Numerical experiments demonstrate the robustness and reliability of our approach for a series of examples.
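
To make the interplay concrete, the sketch below (a finite-difference stand-in, not the authors' finite element scheme) runs a damped, preconditioned linearised iteration for a 1D semilinear problem $-u'' + u^3 = f$ that decreases a discrete energy, and refines the mesh (here uniformly, for simplicity) whenever the energy decrease per step stalls; the right-hand side, damping parameter, and tolerance are hypothetical.

```python
import numpy as np

def energy(u, x, f):
    """Discrete energy E(u) = 1/2*int |u'|^2 + 1/4*int u^4 - int f*u."""
    h = x[1] - x[0]
    return (0.5 * np.sum(np.diff(u) ** 2) / h
            + h * np.sum(0.25 * u[1:-1] ** 4 - f(x[1:-1]) * u[1:-1]))

def gradient(u, x, f):
    """Gradient of the discrete energy with respect to the interior values."""
    h = x[1] - x[0]
    L = 2 * np.eye(len(x) - 2) - np.eye(len(x) - 2, k=1) - np.eye(len(x) - 2, k=-1)
    return L @ u[1:-1] / h + h * (u[1:-1] ** 3 - f(x[1:-1]))

f = lambda x: 10.0 * np.sin(np.pi * x)      # hypothetical right-hand side
x = np.linspace(0.0, 1.0, 17)               # coarse initial mesh
u = np.zeros_like(x)
delta, stall_tol = 0.5, 1e-8                # damping and refinement tolerance

for level in range(4):                      # a few refinement levels
    E_old = energy(u, x, f)
    while True:
        g = gradient(u, x, f)
        L = 2 * np.eye(len(x) - 2) - np.eye(len(x) - 2, k=1) - np.eye(len(x) - 2, k=-1)
        u[1:-1] -= delta * (x[1] - x[0]) * np.linalg.solve(L, g)   # linearised (preconditioned) step
        E_new = energy(u, x, f)
        if E_old - E_new < stall_tol:       # energy decay stalled: refine instead of iterating further
            break
        E_old = E_new
    x_fine = np.linspace(0.0, 1.0, 2 * len(x) - 1)
    u = np.interp(x_fine, x, u)             # transfer the iterate to the refined mesh
    x = x_fine
```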

This paper proposes a regularization of the Monge-Amp\`ere equation in planar convex domains through uniformly elliptic Hamilton-Jacobi-Bellman equations. The regularized problem possesses a unique strong solution $u_\varepsilon$ and is amenable to discretization with finite elements. This work establishes locally uniform convergence of $u_\varepsilon$ to the convex Alexandrov solution $u$ of the Monge-Amp\`ere equation as the regularization parameter $\varepsilon$ approaches $0$. A mixed finite element method for the approximation of $u_\varepsilon$ is proposed, and the regularized finite element scheme is shown to be locally uniformly convergent. Numerical experiments provide empirical evidence for the efficient approximation of singular solutions $u$.
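
The passage from Monge-Amp\`ere to a Bellman-type operator can be motivated by a standard linear-algebra identity (stated here for the two-dimensional case; the precise control set and normalisation used in the paper may differ): for symmetric positive semidefinite $2\times 2$ matrices $A$ and $B$ one has $B:A \ge 2\sqrt{\det A\,\det B}$, with equality for a suitable $B$, so that, at least formally, $\det D^2u = f$ with $u$ convex and $f \ge 0$ can be recast as the Hamilton-Jacobi-Bellman equation
$$\sup_{B\in\mathbb{S}}\bigl(2\sqrt{f\,\det B}-B:D^2u\bigr)=0,\qquad \mathbb{S}=\{B\in\mathbb{R}^{2\times 2}: B=B^\top,\ B\ge 0,\ \operatorname{tr}B=1\}.$$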

Low-rank matrix approximation is a popular topic in machine learning. In this paper, we propose a new algorithm for this problem by minimizing a least-squares objective over the Riemannian manifold of fixed-rank matrices. The algorithm is an adaptation of classical gradient descent within the framework of optimization on manifolds. In particular, we reformulate the unconstrained optimization problem on the low-rank manifold as a differential dynamical system and develop a splitting numerical integration method by applying a splitting integration scheme to this system. We conduct a convergence analysis of the splitting numerical integration algorithm and guarantee that the Frobenius-norm error between the recovered matrix and the true matrix decreases monotonically. Moreover, our splitting numerical integration can be adapted to matrix completion scenarios. Experimental results show that our approach scales well to large-scale problems with satisfactory accuracy.
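
A minimal NumPy sketch of the underlying idea, with a plain gradient step followed by a truncated-SVD retraction onto the fixed-rank manifold (a simple stand-in for the splitting integration scheme of the paper); the problem sizes, sampling rate, and step size are hypothetical.

```python
import numpy as np

def retract(X, r):
    """Retract X onto the manifold of rank-r matrices via a truncated SVD."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r, :]

rng = np.random.default_rng(0)
m, n, r = 60, 50, 3
M = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))   # ground-truth low-rank matrix
mask = rng.random((m, n)) < 0.4                                  # observed entries (completion setting)

X = retract(mask * M, r)                   # rank-r initial guess
step = 0.8
for _ in range(500):
    grad = mask * (X - M)                  # Euclidean gradient of 0.5*||P_Omega(X - M)||_F^2
    X = retract(X - step * grad, r)        # gradient step + retraction back to rank r

print(np.linalg.norm(X - M) / np.linalg.norm(M))   # relative Frobenius error
```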

This article proposes a Convolutional Neural Network-based Autoencoder (CNN-AE) to predict the location-dependent rate and coverage probability of a network from its topology. We train the CNN-AE using base station (BS) location data from India, Brazil, Germany, and the USA and compare its performance with stochastic geometry (SG) based analytical models. In comparison to the best-fitted SG-based model, the CNN-AE reduces the coverage and rate prediction errors by margins as large as $40\%$ and $25\%$, respectively. As an application, we propose a low-complexity, provably convergent algorithm that, using the trained CNN-AE, computes the locations of new BSs that need to be deployed in a network in order to satisfy pre-defined, spatially heterogeneous performance goals.
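
For concreteness, a toy encoder-decoder of the kind described above might look as follows in PyTorch; the channel counts, input resolution, and the random training batch are placeholders, not the architecture or data used in the article.

```python
import torch
from torch import nn

# Toy CNN autoencoder: a 64x64 BS-density map in, a 64x64 coverage/rate map out.
model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1),            # 64 -> 32
    nn.ReLU(),
    nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1),           # 32 -> 16
    nn.ReLU(),
    nn.ConvTranspose2d(32, 16, kernel_size=4, stride=2, padding=1),  # 16 -> 32
    nn.ReLU(),
    nn.ConvTranspose2d(16, 1, kernel_size=4, stride=2, padding=1),   # 32 -> 64
    nn.Sigmoid(),                                                    # coverage probability in [0, 1]
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Placeholder batch: random "topology" maps and "coverage" targets.
topology = torch.rand(8, 1, 64, 64)
coverage = torch.rand(8, 1, 64, 64)

for _ in range(10):
    optimizer.zero_grad()
    loss = loss_fn(model(topology), coverage)
    loss.backward()
    optimizer.step()
```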

This paper considers a two-user non-orthogonal multiple access (NOMA) based infrastructure-to-vehicle (I2V) network, where one user requires reliable safety-critical data transmission and the other pursues high-capacity services. Leveraging only slow-fading channel state information, we aim to maximize the expected sum throughput of the capacity-hungry user subject to a constraint on the payload delivery success probability of the reliability-sensitive user, by jointly optimizing the transmit powers, target rates, and decoding order. We introduce a dual variable and formulate the optimization as an unconstrained single-objective sequential decision problem. We then design a dynamic programming based algorithm to derive the optimal policy that maximizes the Lagrangian, and propose a bisection search based method to find the optimal dual variable. Numerical results show that the proposed strategy is superior to the baseline approaches in terms of expected return, performance region, and objective value.
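
The dual update is a standard bisection on the Lagrange multiplier; a schematic version with a purely hypothetical inner solver (standing in for the dynamic-programming step that returns the reliability achieved by the Lagrangian-optimal policy) is sketched below.

```python
import math

def reliability_of_optimal_policy(lam):
    """Hypothetical inner solver: payload-delivery success probability achieved by the
    policy maximizing throughput - lam * (reliability shortfall). In the paper this is
    computed by dynamic programming; here it is a toy monotone surrogate."""
    return 1.0 - 0.5 * math.exp(-lam)

target = 0.99          # required success probability for the reliability-sensitive user
lo, hi = 0.0, 100.0    # bracket for the dual variable
for _ in range(50):    # bisection on the dual variable
    lam = 0.5 * (lo + hi)
    if reliability_of_optimal_policy(lam) >= target:
        hi = lam       # constraint met: try a smaller penalty
    else:
        lo = lam       # constraint violated: increase the penalty
print(hi)              # smallest multiplier (within tolerance) meeting the constraint
```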

Smooth minimax games often proceed by simultaneous or alternating gradient updates. Although algorithms with alternating updates are commonly used in practice, the majority of existing theoretical analyses focus on simultaneous algorithms for convenience of analysis. In this paper, we study alternating gradient descent-ascent (Alt-GDA) in minimax games and show that Alt-GDA is superior to its simultaneous counterpart (Sim-GDA) in many settings. We prove that Alt-GDA achieves a near-optimal local convergence rate for strongly convex-strongly concave (SCSC) problems, while Sim-GDA converges at a much slower rate. To our knowledge, this is the first result in any setting showing that Alt-GDA converges faster than Sim-GDA by more than a constant factor. We further adapt the theory of integral quadratic constraints (IQC) and show that Alt-GDA attains the same rate globally for a subclass of SCSC minimax problems. Empirically, we demonstrate that alternating updates speed up GAN training significantly and that optimism helps only for simultaneous algorithms.
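
The two update rules are easy to compare on a toy SCSC quadratic $f(x,y) = \tfrac{\alpha}{2}x^2 + \beta xy - \tfrac{\alpha}{2}y^2$; the NumPy sketch below, with hypothetical parameter choices, tracks the distance of both iterates to the saddle point $(0,0)$.

```python
import numpy as np

alpha, beta, eta = 0.1, 1.0, 0.1        # f(x, y) = alpha/2*x^2 + beta*x*y - alpha/2*y^2
gx = lambda x, y: alpha * x + beta * y  # df/dx
gy = lambda x, y: beta * x - alpha * y  # df/dy

x_sim = y_sim = x_alt = y_alt = 1.0
for _ in range(1000):
    # Simultaneous GDA: both players use the same (old) iterate.
    x_sim, y_sim = x_sim - eta * gx(x_sim, y_sim), y_sim + eta * gy(x_sim, y_sim)
    # Alternating GDA: the ascent player sees the fresh descent iterate.
    x_alt = x_alt - eta * gx(x_alt, y_alt)
    y_alt = y_alt + eta * gy(x_alt, y_alt)

print(np.hypot(x_sim, y_sim), np.hypot(x_alt, y_alt))  # distances to the saddle (0, 0)
```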

The aim of this work is to devise and analyse an accurate numerical scheme for the Erd\'elyi-Kober fractional diffusion equation. Its solution can be thought of as the marginal probability density function of the stochastic process called the generalized grey Brownian motion (ggBm). The ggBm includes some well-known stochastic processes: Brownian motion, fractional Brownian motion, and grey Brownian motion. To obtain a convergent numerical scheme, we transform the fractional diffusion equation into its weak form and apply a discretization of the Erd\'elyi-Kober fractional derivative. We prove the stability of the solution of the semi-discrete problem and its convergence to the exact solution. Due to the singular-in-time term appearing in the main equation, the proposed method converges more slowly than first order. Finally, we provide a numerical analysis of the fully discrete problem using an orthogonal expansion in terms of Hermite functions.
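
As a small stand-alone illustration of the last ingredient, the orthogonal expansion in Hermite functions, one can project a function onto the orthonormal Hermite basis with Gauss-Hermite quadrature; the target function and truncation level below are hypothetical, and the sketch is not coupled to the fractional evolution.

```python
import math
import numpy as np
from numpy.polynomial.hermite import hermgauss, hermval

def hermite_function(n, x):
    """Orthonormal Hermite function psi_n(x) = H_n(x) exp(-x^2/2) / sqrt(2^n n! sqrt(pi))."""
    coeffs = np.zeros(n + 1)
    coeffs[n] = 1.0
    norm = math.sqrt(2.0 ** n * math.factorial(n) * math.sqrt(math.pi))
    return hermval(x, coeffs) * np.exp(-x ** 2 / 2.0) / norm

# Coefficients c_n = int f(x) psi_n(x) dx, computed with Gauss-Hermite nodes/weights
# (weight exp(-x^2), hence the exp(x^2) correction folded into the integrand).
f = lambda x: np.exp(-x ** 2)                  # hypothetical target function
nodes, weights = hermgauss(60)
coeffs = [np.sum(weights * np.exp(nodes ** 2) * f(nodes) * hermite_function(n, nodes))
          for n in range(12)]

x = np.linspace(-5.0, 5.0, 401)
approx = sum(c * hermite_function(n, x) for n, c in enumerate(coeffs))
print(np.max(np.abs(approx - f(x))))           # uniform error of the truncated expansion
```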

We develop a novel a posteriori error estimator for the $L^2$ error committed by the finite element discretization of the solution of the fractional Laplacian. Our a posteriori error estimator takes advantage of a semi-discretization scheme based on a rational approximation, which allows us to reformulate the fractional problem into a family of non-fractional parametric problems. The estimator involves applying the implicit Bank-Weiser error estimation strategy to each parametric non-fractional problem and reconstructing the fractional error through the same rational approximation used to compute the solution of the original fractional problem. We provide several numerical examples in both two and three dimensions demonstrating the effectivity of our estimator for varying fractional powers and its ability to drive an adaptive mesh refinement strategy.
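
One common rational (sinc-quadrature) approximation of this kind, not necessarily the one used in the paper, replaces the fractional solve by a sum of shifted non-fractional solves, $A^{-s} \approx \frac{2k\sin(\pi s)}{\pi}\sum_{\ell=-M}^{M} e^{2s\ell k}(I + e^{2\ell k}A)^{-1}$; the sketch below checks this against an eigendecomposition for a small 1D discrete Laplacian.

```python
import numpy as np

# Small 1D discrete Laplacian with homogeneous Dirichlet conditions.
n, s = 99, 0.5
h = 1.0 / (n + 1)
A = (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
f = np.sin(np.pi * h * np.arange(1, n + 1))

# Rational (sinc-quadrature) approximation of u = A^{-s} f:
# each term is an ordinary (non-fractional) shifted solve.
k, M = 0.3, 60
u_rat = np.zeros(n)
for ell in range(-M, M + 1):
    y = ell * k
    u_rat += np.exp(2 * s * y) * np.linalg.solve(np.eye(n) + np.exp(2 * y) * A, f)
u_rat *= 2 * k * np.sin(np.pi * s) / np.pi

# Reference solution via the eigendecomposition of A.
lam, V = np.linalg.eigh(A)
u_ref = V @ ((V.T @ f) / lam**s)

print(np.linalg.norm(u_rat - u_ref) / np.linalg.norm(u_ref))   # relative error
```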

Large-scale machine learning systems often involve data distributed across a collection of users. Federated optimization algorithms leverage this structure by communicating model updates to a central server, rather than entire datasets. In this paper, we study stochastic optimization algorithms for a personalized federated learning setting involving local and global models subject to user-level (joint) differential privacy. While learning a private global model induces a cost of privacy, local learning is perfectly private. We show that coordinating local learning with private centralized learning yields a generically useful and improved tradeoff between accuracy and privacy. We illustrate our theoretical results with experiments on synthetic and real-world datasets.
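
A schematic round of privately aggregated global training in this spirit (a generic clip-and-noise construction, not the paper's exact mechanism; the clipping bound and noise scale are arbitrary and not calibrated to a specific $(\epsilon,\delta)$) could look as follows, with each user also keeping a purely local, noiseless model.

```python
import numpy as np

rng = np.random.default_rng(0)
num_users, dim = 20, 10
clip, noise_std, lr = 1.0, 0.5, 0.1

global_model = np.zeros(dim)
local_models = [np.zeros(dim) for _ in range(num_users)]
user_data = [rng.standard_normal(dim) for _ in range(num_users)]   # toy per-user targets

for _ in range(30):
    updates = []
    for u in range(num_users):
        # Local learning on the user's own data (kept exactly, perfectly private).
        local_models[u] += lr * (user_data[u] - local_models[u])
        # Update proposed to the server, clipped to bound each user's influence.
        delta = local_models[u] - global_model
        delta *= min(1.0, clip / (np.linalg.norm(delta) + 1e-12))
        updates.append(delta)
    # Server: average clipped updates and add Gaussian noise (user-level privacy).
    noisy_mean = np.mean(updates, axis=0) + rng.normal(0.0, noise_std * clip / num_users, dim)
    global_model += noisy_mean
```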

We develop an approach to risk minimization and stochastic optimization that provides a convex surrogate for variance, allowing near-optimal and computationally efficient trading between approximation and estimation error. Our approach builds on techniques for distributionally robust optimization and Owen's empirical likelihood, and we provide a number of finite-sample and asymptotic results characterizing the theoretical performance of the estimator. In particular, we show that our procedure comes with certificates of optimality, achieving (in some scenarios) faster rates of convergence than empirical risk minimization by virtue of automatically balancing bias and variance. We give corroborating empirical evidence showing that, in practice, the estimator indeed trades between variance and absolute performance on a training sample, improving out-of-sample (test) performance over standard empirical risk minimization for a number of classification problems.
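
The behaviour can be mimicked with an explicit mean-plus-standard-deviation penalty, for which the distributionally robust objective serves as a convex surrogate; the sketch below directly minimizes such a (generally nonconvex) penalized empirical objective on a toy logistic-regression problem, with a hypothetical penalty weight $\rho$.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n, d, rho = 200, 5, 2.0                        # sample size, dimension, penalty weight
X = rng.standard_normal((n, d))
y = np.sign(X @ rng.standard_normal(d) + 0.3 * rng.standard_normal(n))

def varreg_risk(theta):
    losses = np.logaddexp(0.0, -y * (X @ theta))       # logistic losses
    return losses.mean() + np.sqrt(2.0 * rho * losses.var() / n)

theta_erm = minimize(lambda t: np.logaddexp(0.0, -y * (X @ t)).mean(), np.zeros(d)).x
theta_var = minimize(varreg_risk, np.zeros(d)).x        # variance-regularized fit

print(varreg_risk(theta_erm), varreg_risk(theta_var))   # penalized risk at the two fits
```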
