
We introduce two new lowest order methods, a mixed method and a hybrid Discontinuous Galerkin (HDG) method, for the approximation of incompressible flows. Both methods use the divergence-conforming linear Brezzi-Douglas-Marini space to approximate the velocity and the lowest order Raviart-Thomas space to approximate the vorticity. Our methods are based on the physically correct viscous stress tensor of the fluid, involving the symmetric gradient of the velocity (rather than the full gradient), provide exactly divergence-free discrete velocity solutions, and satisfy optimal error estimates that are also pressure robust. We explain how the methods are constructed using the minimal number of coupling degrees of freedom per facet. The stability analysis of both methods is based on a Korn-like inequality for vector finite elements with continuous normal component. Numerical examples illustrate the theoretical findings and offer comparisons of condition numbers between the two new methods.
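
For reference, the distinction the abstract draws is between a simplified viscous term built from the full velocity gradient and the physically correct one built from the symmetric gradient. A minimal statement of the latter, in standard notation rather than the paper's (here $\nu$ is the kinematic viscosity):

```latex
\[
\varepsilon(u) = \tfrac{1}{2}\big(\nabla u + (\nabla u)^{\mathsf{T}}\big),
\qquad
-\nabla\cdot\big(2\nu\,\varepsilon(u)\big) + \nabla p = f,
\qquad
\nabla\cdot u = 0 .
\]
```

Working with $\varepsilon(u)$ instead of $\nabla u$ is precisely what makes a Korn-type inequality necessary for coercivity, since $\|\varepsilon(u)\|$ alone does not obviously control $\|\nabla u\|$.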

Related content

The subgradient method is one of the most fundamental algorithmic schemes for nonsmooth optimization. The existing complexity and convergence results for this algorithm are mainly derived for Lipschitz continuous objective functions. In this work, we first extend the typical complexity results for the subgradient method to convex and weakly convex minimization without assuming Lipschitz continuity. Specifically, we establish an $\mathcal{O}(1/\sqrt{T})$ bound in terms of the suboptimality gap ``$f(x) - f^*$'' in the convex case and an $\mathcal{O}(1/T^{1/4})$ bound in terms of the gradient of the Moreau envelope function in the weakly convex case. Furthermore, we provide convergence results for non-Lipschitz convex and weakly convex objective functions using proper diminishing rules on the step sizes. In particular, when $f$ is convex, we show an $\mathcal{O}(\log(k)/\sqrt{k})$ rate of convergence in terms of the suboptimality gap. With an additional quadratic growth condition, the rate improves to $\mathcal{O}(1/k)$ in terms of the squared distance to the optimal solution set. When $f$ is weakly convex, asymptotic convergence is derived. The central idea is that the dynamics induced by a properly chosen step-size rule fully control the movement of the subgradient method, which leads to boundedness of the iterates; a trajectory-based analysis can then be conducted to establish the desired results. To further illustrate the wide applicability of our framework, we extend the complexity results to the truncated subgradient, the stochastic subgradient, the incremental subgradient, and the proximal subgradient methods for non-Lipschitz functions.
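
As a concrete reference point, here is a minimal numpy sketch of the plain subgradient method with a diminishing step-size rule of the kind the abstract analyzes; the function names and the $\ell_1$ example are illustrative, not the paper's code:

```python
import numpy as np

def subgradient_method(f, subgrad, x0, step, T):
    """Subgradient method x_{k+1} = x_k - t_k g_k with g_k in the subdifferential of f.

    f: objective, used only to track the best iterate (the method is not
       a descent method, so f(x_k) need not decrease monotonically);
    subgrad(x): any subgradient of f at x;
    step(k): step-size rule, e.g. a diminishing rule t_k = c / sqrt(k + 1).
    """
    x = np.asarray(x0, dtype=float)
    best_x, best_f = x.copy(), f(x)
    for k in range(T):
        x = x - step(k) * subgrad(x)
        fx = f(x)
        if fx < best_f:
            best_x, best_f = x.copy(), fx
    return best_x, best_f

# Example: f(x) = ||x||_1 is convex and nonsmooth; sign(x) is a valid
# subgradient. Non-Lipschitz objectives (the focus of the paper) would be
# run through the same template with a properly chosen step-size rule.
x, fx = subgradient_method(
    f=lambda x: np.abs(x).sum(),
    subgrad=np.sign,
    x0=np.array([3.0, -2.0]),
    step=lambda k: 1.0 / np.sqrt(k + 1),
    T=1000,
)
```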

The solution of computational fluid dynamics problems is one of the most computationally demanding tasks, especially in the case of complex geometries and turbulent flow regimes. We propose to use Tensor Train (TT) methods, which possess logarithmic complexity in problem size and have great similarities with quantum algorithms in the structure of their data representation. We develop the Tensor train Finite Element Method (TetraFEM) and an explicit numerical scheme for the solution of the incompressible Navier-Stokes equations via Tensor Trains. We test this approach on the simulation of liquid mixing in a T-shaped mixer, which, to our knowledge, is the first time tensor methods have been applied in such a non-trivial geometry. As expected, we achieve exponential compression in memory of all FEM matrices and demonstrate an exponential speed-up compared to a conventional FEM implementation on dense meshes. In addition, we discuss the possibility of extending this method to a quantum computer to solve more complex problems. This paper is based on work we conducted for Evonik Industries AG.
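
TetraFEM itself is not reproduced here; as a hedged illustration of the compression mechanism the abstract relies on, below is the standard TT-SVD algorithm (successive truncated SVDs) in plain numpy:

```python
import numpy as np

def tt_svd(tensor, eps=1e-10):
    """Standard TT-SVD: factor a d-way tensor into Tensor Train cores by
    successive truncated SVDs. Core k has shape (r_{k-1}, n_k, r_k),
    with boundary ranks r_0 = r_d = 1."""
    shape, d = tensor.shape, tensor.ndim
    cores, r, C = [], 1, tensor
    for k in range(d - 1):
        C = C.reshape(r * shape[k], -1)
        U, s, Vt = np.linalg.svd(C, full_matrices=False)
        rank = max(1, int(np.sum(s > eps * s[0])))  # drop tiny singular values
        cores.append(U[:, :rank].reshape(r, shape[k], rank))
        C = s[:rank, None] * Vt[:rank]
        r = rank
    cores.append(C.reshape(r, shape[-1], 1))
    return cores

# Smooth, separable fields compress extremely well: a rank-1 tensor yields
# all TT ranks equal to 1, so storage drops from prod(n_k) to sum(n_k * r^2).
x = np.linspace(0, 1, 16)
T = np.einsum('i,j,k->ijk', np.sin(x), np.cos(x), x)  # 16^3 entries
cores = tt_svd(T)
print([c.shape for c in cores])  # [(1, 16, 1), (1, 16, 1), (1, 16, 1)]
```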

We consider the problem of unconstrained minimization of finite sums of functions. We propose a simple yet practical way to incorporate variance reduction techniques into SignSGD, guaranteeing convergence similar to that of full sign gradient descent. The core idea is first instantiated on the problem of minimizing sums of convex and Lipschitz functions and is then extended to the smooth case via variance reduction. Our analysis is elementary and much simpler than the typical proof for variance reduction methods. We show that for smooth functions our method gives an $\mathcal{O}(1/\sqrt{T})$ rate for the expected norm of the gradient and an $\mathcal{O}(1/T)$ rate in the case of smooth convex functions, recovering the convergence results of deterministic methods while preserving the computational advantages of SignSGD.
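
A hedged sketch of the general idea, combining sign-based updates with an SVRG-style control variate; the structure and names below are our illustration of how such a method can look, not the paper's estimator:

```python
import numpy as np

def signsgd_vr(grads, x0, lr, epochs, m, rng):
    """SignSGD with a variance-reduced gradient estimate:
    v = grad_i(x) - grad_i(x_ref) + full_grad(x_ref), then x -= lr * sign(v).

    grads: list of per-component gradient functions of f = (1/n) sum_i f_i.
    As the variance of v shrinks near x_ref, sign(v) increasingly agrees
    with the sign of the full gradient, mimicking deterministic sign descent.
    """
    n = len(grads)
    x = np.asarray(x0, dtype=float)
    for _ in range(epochs):
        x_ref = x.copy()
        full = sum(g(x_ref) for g in grads) / n  # anchor (full) gradient
        for _ in range(m):
            i = rng.integers(n)
            v = grads[i](x) - grads[i](x_ref) + full
            x = x - lr * np.sign(v)
    return x

# Toy usage: minimize (1/n) sum_i ||x - a_i||^2, whose minimizer is mean(a_i).
rng = np.random.default_rng(0)
a = rng.normal(size=(5, 2))
grads = [lambda x, ai=ai: 2 * (x - ai) for ai in a]
x = signsgd_vr(grads, np.zeros(2), lr=1e-2, epochs=50, m=20, rng=rng)
print(x, a.mean(axis=0))  # x should approach the mean of the a_i
```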

Mirror descent value iteration (MDVI), an abstraction of Kullback-Leibler (KL) and entropy-regularized reinforcement learning (RL), has served as the basis for recent high-performing practical RL algorithms. However, despite the use of function approximation in practice, the theoretical understanding of MDVI has been limited to tabular Markov decision processes (MDPs). We study MDVI with linear function approximation through its sample complexity required to identify an $\varepsilon$-optimal policy with probability $1-\delta$ under the settings of an infinite-horizon linear MDP, generative model, and G-optimal design. We demonstrate that least-squares regression weighted by the variance of an estimated optimal value function of the next state is crucial to achieving minimax optimality. Based on this observation, we present Variance-Weighted Least-Squares MDVI (VWLS-MDVI), the first theoretical algorithm that achieves nearly minimax optimal sample complexity for infinite-horizon linear MDPs. Furthermore, we propose a practical VWLS algorithm for value-based deep RL, Deep Variance Weighting (DVW). Our experiments demonstrate that DVW improves the performance of popular value-based deep RL algorithms on a set of MinAtar benchmarks.
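
At a high level, the variance weighting amounts to a weighted least-squares regression in which samples whose target values are noisier receive less weight. A minimal illustrative sketch of that regression step only (not the full VWLS-MDVI algorithm; names are ours):

```python
import numpy as np

def variance_weighted_lsq(Phi, targets, var, reg=1e-8):
    """Weighted least squares: theta = argmin sum_i (phi_i^T theta - y_i)^2 / var_i.

    Phi: (n, d) feature matrix; targets: (n,) regression targets;
    var: (n,) per-sample variance estimates (e.g., of the estimated optimal
    value of the next state); reg: small ridge term for numerical stability.
    """
    W = 1.0 / np.asarray(var)
    A = Phi.T @ (W[:, None] * Phi) + reg * np.eye(Phi.shape[1])
    b = Phi.T @ (W * targets)
    return np.linalg.solve(A, b)
```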

This paper analyses the problem of a semi-infinite fluid-driven fracture propagating through multiple stress layers in a permeable elastic medium. Such a problem represents the tip region of a planar hydraulic fracture. When the hydraulic fracture crosses a stress layer, the use of a standard tip asymptotic solution may lead to a considerable reduction in accuracy, even for the simplest case of a height-contained fracture. In this study, we propose three approaches to incorporate the effect of stress layers into the tip asymptote: a non-singular integral formulation, a toughness-corrected asymptote, and an ordinary differential equation approximation of the aforementioned non-singular integral formulation. As illustrated in the paper, these stress-corrected asymptotes differ in computational complexity, complexity of implementation, and accuracy of approximation. In addition, the size of the validity region of the stress-corrected asymptote is evaluated and shown to be greatly reduced relative to the case without layers. To address this issue, a stress relaxation factor is introduced. This, in turn, enhances the accuracy of the layer-crossing computation on a relatively coarse mesh and makes it possible to use the stress-corrected asymptote for front tracking in hydraulic fracturing simulators.
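
For context, the classical zero-leak-off tip asymptotes that such stress corrections modify are the toughness- and viscosity-dominated solutions; these are standard results from the hydraulic fracture tip literature, written here in our notation (with $s$ the distance to the tip, $V$ the tip velocity, and $\beta_m \approx 3.15$ a known constant), not formulas taken from this paper:

```latex
\[
w(s) \sim \frac{K'}{E'}\, s^{1/2} \quad\text{(toughness-dominated)},
\qquad
w(s) \sim \beta_m \left(\frac{\mu' V}{E'}\right)^{1/3} s^{2/3} \quad\text{(viscosity-dominated)},
\]
\[
K' = \sqrt{32/\pi}\, K_{Ic}, \qquad E' = \frac{E}{1-\nu^{2}}, \qquad \mu' = 12\mu .
\]
```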

This paper introduces a formulation of the variable density incompressible Navier-Stokes equations obtained by modifying the nonlinear terms in a consistent way. For Galerkin discretizations, the formulation leads to fully discrete conservation of mass, squared density, momentum, angular momentum, and kinetic energy without the divergence-free constraint being strongly enforced. In addition to favorable conservation properties, the formulation is shown to make the density field invariant to global shifts. The effect of viscous regularizations on conservation properties is also investigated. Numerical tests validate the theory developed in this work. The new formulation shows superior performance compared to other formulations from the literature, both in terms of accuracy for smooth problems and in terms of robustness.
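
For orientation, the baseline system whose nonlinear terms the formulation modifies is the variable density incompressible Navier-Stokes system, written here in the standard conservative form (our notation; $\varepsilon(u)$ denotes the symmetric gradient and $f$ a body force):

```latex
\[
\partial_t \rho + \nabla\cdot(\rho u) = 0,
\qquad
\partial_t(\rho u) + \nabla\cdot(\rho u \otimes u) + \nabla p
= \nabla\cdot\big(2\mu\,\varepsilon(u)\big) + \rho f,
\qquad
\nabla\cdot u = 0 .
\]
```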

In recent years, there has been a significant growth in research focusing on minimum $\ell_2$ norm (ridgeless) interpolation least squares estimators. However, the majority of these analyses have been limited to a simple regression error structure, assuming independent and identically distributed errors with zero mean and common variance, independent of the feature vectors. Additionally, the main focus of these theoretical analyses has been on the out-of-sample prediction risk. This paper breaks away from the existing literature by examining the mean squared error of the ridgeless interpolation least squares estimator, allowing for more general assumptions about the regression errors. Specifically, we investigate the potential benefits of overparameterization by characterizing the mean squared error in a finite sample. Our findings reveal that including a large number of unimportant parameters relative to the sample size can effectively reduce the mean squared error of the estimator. Notably, we establish that the estimation difficulties associated with the variance term can be summarized through the trace of the variance-covariance matrix of the regression errors.
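
For readers unfamiliar with the estimator, the ridgeless (minimum $\ell_2$ norm) interpolator is the limit of ridge regression as the penalty vanishes, which for $p > n$ coincides with the minimum-norm solution of the interpolation constraints; these are standard facts, not results specific to this paper:

```latex
\[
\hat\beta
= \lim_{\lambda \to 0^{+}} \big(X^{\top}X + \lambda I\big)^{-1} X^{\top} y
= \big(X^{\top}X\big)^{+} X^{\top} y
= X^{+} y ,
\]
```

where $X^{+}$ denotes the Moore-Penrose pseudoinverse; when $X$ has full row rank, $\hat\beta$ interpolates the data exactly, $X\hat\beta = y$.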

Physics-informed neural networks (PINNs) [4, 10] are an approach for solving boundary value problems based on partial differential equations (PDEs). The key idea of PINNs is to use a neural network to approximate the solution to the PDE and to incorporate the residual of the PDE as well as the boundary conditions into its loss function during training. This provides a simple, mesh-free approach for solving problems governed by PDEs. However, a key limitation of PINNs is their lack of accuracy and efficiency when solving problems with larger domains and more complex, multi-scale solutions. In a more recent approach, finite basis physics-informed neural networks (FBPINNs) [8] use ideas from domain decomposition to accelerate the learning process of PINNs and improve their accuracy. In this work, we show how Schwarz-like additive, multiplicative, and hybrid iteration methods for training FBPINNs can be developed. We present numerical experiments on the influence of these different training strategies on convergence and accuracy. Furthermore, we propose and evaluate a preliminary implementation of coarse space correction for FBPINNs.
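
To make the basic mechanism concrete, here is a minimal vanilla PINN loss for a 1D Poisson problem in PyTorch. It illustrates only the residual-plus-boundary loss; the FBPINN domain decomposition and the Schwarz-type training iterations studied in the paper are not shown, and the network size, test problem, and loss weights are arbitrary choices of ours:

```python
import torch

# 1D Poisson problem u''(x) = f(x) on (0, 1) with u(0) = u(1) = 0,
# whose exact solution is u(x) = sin(pi x) for the f chosen below.
net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1)
)
f = lambda x: -(torch.pi ** 2) * torch.sin(torch.pi * x)

def pinn_loss(net, n_col=128):
    x = torch.rand(n_col, 1, requires_grad=True)            # collocation points
    u = net(x)
    du = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    d2u = torch.autograd.grad(du, x, torch.ones_like(du), create_graph=True)[0]
    residual = ((d2u - f(x)) ** 2).mean()                   # PDE residual term
    xb = torch.tensor([[0.0], [1.0]])
    boundary = (net(xb) ** 2).mean()                        # boundary condition term
    return residual + boundary

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(2000):
    opt.zero_grad()
    loss = pinn_loss(net)
    loss.backward()
    opt.step()
```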

A nonlinear sea-ice problem is considered in a least-squares finite element setting. The corresponding variational formulation, which approximates the stress tensor and the velocity simultaneously, is analysed. In particular, the least-squares functional is shown to be coercive and continuous in an appropriate solution space, which proves the well-posedness of the problem. As the method does not require a compatibility condition between the finite element spaces, the formulation allows the use of piecewise polynomial spaces of the same approximation order for both the stress and the velocity approximations. A Newton-type iterative method is used to linearize the problem, and numerical tests are provided to illustrate the theory.
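
As a generic illustration of the least-squares structure (shown for a linear model problem, not the nonlinear sea-ice system itself): recasting $-\Delta u = f$ as the first-order system $\sigma = \nabla u$, $-\nabla\cdot\sigma = f$, the least-squares functional reads

```latex
\[
J(\sigma, u) \;=\; \|\sigma - \nabla u\|_{0}^{2} \;+\; \|\nabla\cdot\sigma + f\|_{0}^{2},
\]
```

and coercivity plus continuity of $J$ in the product solution space yield well-posedness of its minimization; the abstract establishes the analogous properties for the nonlinear stress-velocity sea-ice functional.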

Sampling methods (e.g., node-wise, layer-wise, or subgraph sampling) have become an indispensable strategy for speeding up the training of large-scale Graph Neural Networks (GNNs). However, existing sampling methods are mostly based on graph structural information and ignore the dynamics of optimization, which leads to high variance in estimating the stochastic gradients. The high-variance issue can be very pronounced in extremely large graphs, where it results in slow convergence and poor generalization. In this paper, we theoretically analyze the variance of sampling methods and show that, due to the composite structure of the empirical risk, the variance of any sampling method can be decomposed into \textit{embedding approximation variance} in the forward stage and \textit{stochastic gradient variance} in the backward stage, and that mitigating both types of variance is necessary to obtain a faster convergence rate. We propose a decoupled variance reduction strategy that employs (approximate) gradient information to adaptively sample nodes with minimal variance and explicitly reduces the variance introduced by the embedding approximation. We show theoretically and empirically that the proposed method, even with smaller mini-batch sizes, enjoys a faster convergence rate and better generalization than existing methods.
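
A hedged sketch of the forward-stage idea: keep stale "historical" embeddings for all neighbors and use fresh embeddings only on the sampled ones as a control variate, so the aggregation stays unbiased while the embedding-approximation variance shrinks. The names and the uniform-sampling assumption below are ours, not the paper's API:

```python
import numpy as np

def aggregate_with_history(adj_row, H_current, H_hist, sampled):
    """One-layer neighbor aggregation with a historical-embedding control variate:

        h = (n/|S|) * A[v, S] @ (H_current[S] - H_hist[S]) + A[v, :] @ H_hist

    Exact (but stale) historical embeddings are aggregated over all neighbors;
    the correction uses fresh embeddings on the sampled set S only. Under
    uniform sampling of S, the estimator is unbiased for A[v, :] @ H_current,
    and its variance vanishes as H_hist approaches H_current.
    """
    n = adj_row.shape[0]
    scale = n / len(sampled)
    correction = adj_row[sampled] @ (H_current[sampled] - H_hist[sampled])
    return scale * correction + adj_row @ H_hist
```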
