
Related content

We present and implement an algorithm for computing the invariant circle and the corresponding stable manifolds of 2-dimensional maps. The algorithm is based on the parameterization method and is backed by an a-posteriori theorem established in [YdlL21]. It works irrespective of whether the internal dynamics on the invariant circle is a rotation or is phase-locked. The algorithm converges quadratically, and the number of operations and the memory requirements for each step of the iteration are linear in the size of the discretization. We also report results of running the implementation on some standard models to uncover new phenomena. In particular, we explored a bundle-merging scenario in which the invariant circle loses hyperbolicity because the angle between the stable directions and the tangent becomes zero even though the rates of contraction remain separated. We also discuss and implement a generalization of the algorithm to 3 dimensions, apply it to the 3-dimensional Fattened Arnold Family (3D-FAF) map with non-resonant eigenvalues, and present numerical results.

We consider flux-corrected finite element discretizations of 3D convection-dominated transport problems and assess the computational efficiency of algorithms based on such approximations. The methods under investigation include flux-corrected transport schemes and monolithic limiters. We discretize in space using a continuous Galerkin method and $\mathbb{P}_1$ or $\mathbb{Q}_1$ finite elements. Time integration is performed using the Crank-Nicolson method or an explicit strong stability preserving Runge-Kutta method. Nonlinear systems are solved using a fixed-point iteration method, which requires the solution of large linear systems at each iteration or time step. The great variety of options in the choice of discretization methods and solver components calls for a dedicated comparative study of existing approaches. To perform such a study, we define new 3D test problems for time-dependent and stationary convection-diffusion-reaction equations. The results of our numerical experiments illustrate how the limiting technique, the time discretization, and the solver impact the overall performance.
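As a toy illustration of the time discretization discussed above, the sketch below applies Crank-Nicolson stepping to a 1D periodic convection-diffusion problem with central finite differences; the abstract's setting (3D continuous Galerkin with flux limiting) is far richer, and all parameter values here are illustrative assumptions.

```python
import numpy as np

# 1D periodic convection-diffusion u_t + a u_x = d u_xx, Crank-Nicolson in time.
n, a, d = 64, 1.0, 0.01
dx, dt = 1.0 / n, 0.01
x = np.arange(n) * dx

I = np.eye(n)
Sp = np.roll(I, 1, axis=1)    # (Sp @ u)[i] = u[i+1] (periodic shift)
Sm = np.roll(I, -1, axis=1)   # (Sm @ u)[i] = u[i-1]
L = -a * (Sp - Sm) / (2 * dx) + d * (Sp - 2 * I + Sm) / dx**2

A = I - 0.5 * dt * L          # implicit half of the Crank-Nicolson step
B = I + 0.5 * dt * L          # explicit half
u = np.exp(-100 * (x - 0.5) ** 2)
mass0 = u.sum()
for _ in range(50):
    u = np.linalg.solve(A, B @ u)   # one linear solve per time step

# central differencing on a periodic grid conserves the discrete total mass
print(bool(np.isclose(u.sum(), mass0)))
```

The one-linear-solve-per-step structure mirrors the cost pattern mentioned in the abstract, where the fixed-point iteration requires repeated linear solves.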

The primary emphasis of this work is the development of a finite element based space-time discretization for solving the stochastic Lagrangian averaged Navier-Stokes (LANS-$\alpha$) equations of incompressible fluid turbulence with multiplicative random forcing, under nonperiodic boundary conditions within a bounded polygonal (or polyhedral) domain of $\mathbb{R}^d$, $d \in \{2, 3\}$. The convergence analysis of a fully discretized numerical scheme is investigated and split into two cases according to the spatial scale $\alpha$: we first assume $\alpha$ to be controlled by the step size of the space discretization, so that it vanishes when passing to the limit, and then provide an alternative study in which $\alpha$ is fixed. A preparatory analysis of uniform estimates in both $\alpha$ and the discretization parameters is carried out. Starting out from the stochastic LANS-$\alpha$ model, we achieve convergence toward the continuous strong solutions of the stochastic Navier-Stokes equations in 2D when $\alpha$ vanishes in the limit. Additionally, convergence toward the continuous strong solutions of the stochastic LANS-$\alpha$ model is accomplished if $\alpha$ is fixed.

We develop methods for forming prediction sets in an online setting where the data generating distribution is allowed to vary over time in an unknown fashion. Our framework builds on ideas from conformal inference to provide a general wrapper that can be combined with any black-box method that produces point predictions of the unseen label or estimated quantiles of its distribution. While previous conformal inference methods rely on the assumption that the data points are exchangeable, our adaptive approach provably achieves the desired coverage frequency over long time intervals irrespective of the true data generating process. We accomplish this by modelling the distribution shift as a learning problem in a single parameter whose optimal value varies over time and must be continuously re-estimated. We test our method, adaptive conformal inference, on two real world datasets and find that its predictions are robust to visible and significant distribution shifts.
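The single-parameter online update described above can be sketched as follows; the synthetic score sequence, the window-based empirical quantile, and the step size choice are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)

alpha = 0.1      # target miscoverage level
gamma = 0.01     # step size for the online update (illustrative choice)
alpha_t = alpha  # the single parameter that is continuously re-estimated

scores_seen = []  # past nonconformity scores, used for the empirical quantile
errs = []

for t in range(8000):
    s = abs(rng.normal())  # stand-in nonconformity score, e.g. |y_t - yhat_t|
    if len(scores_seen) >= 100:
        # prediction set = scores up to the empirical (1 - alpha_t) quantile
        level = min(max(1.0 - alpha_t, 0.0), 1.0)
        q = np.quantile(scores_seen, level)
        err = 1.0 if s > q else 0.0
        errs.append(err)
        # core adaptive update: a miss widens future sets, coverage shrinks them
        alpha_t += gamma * (alpha - err)
    scores_seen.append(s)

print(round(float(np.mean(errs)), 3))  # long-run miscoverage tracks alpha
```

The coverage guarantee follows from telescoping the update: the average miscoverage equals `alpha` up to a term of order $1/(\gamma T)$, regardless of the data distribution.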

We develop a novel unified randomized block-coordinate primal-dual algorithm to solve a class of nonsmooth constrained convex optimization problems, which covers different existing variants and model settings from the literature. We prove that our algorithm achieves optimal $\mathcal{O}(n/k)$ and $\mathcal{O}(n^2/k^2)$ convergence rates (up to a constant factor) in two cases: general convexity and strong convexity, respectively, where $k$ is the iteration counter and $n$ is the number of block-coordinates. Our convergence rates are obtained through three criteria: primal objective residual and primal feasibility violation, dual objective residual, and primal-dual expected gap. Moreover, our rates for the primal problem hold on the last iterate sequence. Our dual convergence guarantee additionally requires a Lipschitz continuity assumption. We specify our algorithm to handle two important special cases, where our rates still apply. Finally, we verify our algorithm on two well-studied numerical examples and compare it with two existing methods. Our results show that the proposed method has encouraging performance on different experiments.
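For orientation, here is a minimal sketch of a classical full-vector (non-randomized) primal-dual scheme of Chambolle-Pock type on a toy constrained problem, $\min \|x\|_1$ subject to $Ax = b$; the randomized block-coordinate algorithm of the abstract and its rates are substantially more general, and all problem data below are synthetic assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 20, 40
A = rng.normal(size=(m, n))
x_true = np.zeros(n)
x_true[[3, 17, 25]] = [1.5, -2.0, 0.7]   # sparse ground truth (synthetic)
b = A @ x_true

# primal-dual iteration for min ||x||_1 subject to Ax = b
Lnorm = np.linalg.norm(A, 2)
tau = sigma = 0.9 / Lnorm                # step sizes with tau*sigma*||A||^2 < 1
x = np.zeros(n); xbar = x.copy(); y = np.zeros(m)

def shrink(z, t):
    """Proximal operator of t * ||.||_1 (soft-thresholding)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

for _ in range(20000):
    y = y + sigma * (A @ xbar - b)       # dual ascent on the constraint
    x_new = shrink(x - tau * (A.T @ y), tau)
    xbar = 2 * x_new - x                 # extrapolation step
    x = x_new

print(round(float(np.linalg.norm(A @ x - b)), 4))  # feasibility residual
```

A randomized block-coordinate variant would update only a sampled block of `x` per iteration, which is what makes the per-iteration cost scale with the block size rather than with $n$.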

We consider robust variants of the standard optimal transport, named robust optimal transport, where marginal constraints are relaxed via Kullback-Leibler divergence. We show that Sinkhorn-based algorithms can approximate the optimal cost of robust optimal transport in $\widetilde{\mathcal{O}}(\frac{n^2}{\varepsilon})$ time, in which $n$ is the number of supports of the probability distributions and $\varepsilon$ is the desired error. Furthermore, we investigate a fixed-support robust barycenter problem between $m$ discrete probability distributions, each with at most $n$ supports, and develop an approximating algorithm based on iterative Bregman projections (IBP). For the specific case $m = 2$, we show that this algorithm can approximate the optimal barycenter value in $\widetilde{\mathcal{O}}(\frac{mn^2}{\varepsilon})$ time, thus being better than the previous complexity $\widetilde{\mathcal{O}}(\frac{mn^2}{\varepsilon^2})$ of the IBP algorithm for approximating the Wasserstein barycenter.
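As background for the complexity claims above, here is a minimal sketch of the standard (balanced) Sinkhorn iteration for entropic optimal transport; the robust variant relaxes the marginal constraints via KL terms, which modifies these scaling updates. The problem data and regularization level below are illustrative assumptions.

```python
import numpy as np

def sinkhorn(a, b, C, eps=0.5, iters=2000):
    """Entropic optimal transport between histograms a and b with cost C."""
    K = np.exp(-C / eps)                 # Gibbs kernel
    u, v = np.ones_like(a), np.ones_like(b)
    for _ in range(iters):
        v = b / (K.T @ u)                # scale columns to match marginal b
        u = a / (K @ v)                  # scale rows to match marginal a
    return u[:, None] * K * v[None, :]   # transport plan

# tiny example: 3-point histograms on a line with squared-distance cost
x = np.array([0.0, 1.0, 2.0])
a = np.array([0.5, 0.3, 0.2])
b = np.array([0.2, 0.3, 0.5])
C = (x[:, None] - x[None, :]) ** 2
P = sinkhorn(a, b, C)
print(bool(np.allclose(P.sum(axis=1), a)))  # row marginals matched exactly
```

Each iteration costs $\mathcal{O}(n^2)$ for the two matrix-vector products, which is the source of the $n^2$ factor in the complexity bounds above.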

In data analysis problems where we are not able to rely on distributional assumptions, what types of inference guarantees can still be obtained? Many popular methods, such as holdout methods, cross-validation methods, and conformal prediction, are able to provide distribution-free guarantees for predictive inference, but the problem of providing inference for the underlying regression function (for example, inference on the conditional mean $\mathbb{E}[Y|X]$) is more challenging. In the setting where the features $X$ are continuously distributed, recent work has established that any confidence interval for $\mathbb{E}[Y|X]$ must have non-vanishing width, even as sample size tends to infinity. At the other extreme, if $X$ takes only a small number of possible values, then inference on $\mathbb{E}[Y|X]$ is trivial to achieve. In this work, we study the problem in settings in between these two extremes. We find that there are several distinct regimes in between the finite setting and the continuous setting, where vanishing-width confidence intervals are achievable if and only if the effective support size of the distribution of $X$ is smaller than the square of the sample size.
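The two extremes can be illustrated numerically: with a small support, per-value confidence intervals for $\mathbb{E}[Y|X]$ shrink, while with support comparable to the sample size they do not. The data-generating process and the normal-approximation CI construction below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000

def mean_ci_width(k):
    """Average width of per-value normal-approximation CIs for E[Y | X = v]."""
    x = rng.integers(0, k, size=n)
    y = x / k + rng.normal(scale=0.5, size=n)   # here E[Y | X = v] = v / k
    widths = []
    for v in range(k):
        ys = y[x == v]
        if len(ys) > 1:
            widths.append(2 * 1.96 * ys.std(ddof=1) / np.sqrt(len(ys)))
    return float(np.mean(widths))

w_small = mean_ci_width(5)      # support size 5: each bin holds ~4000 points
w_large = mean_ci_width(5000)   # support size 5000: each bin holds ~4 points
print(w_small < 0.1 < w_large)
```

The per-bin width scales like $1/\sqrt{n/k}$, so it vanishes as $n$ grows only when the support size $k$ grows slowly enough relative to $n$, consistent with the regimes described above.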

The Restricted Additive Schwarz method with impedance transmission conditions, also known as the Optimised Restricted Additive Schwarz (ORAS) method, is a simple overlapping one-level parallel domain decomposition method, which has been successfully used as an iterative solver and as a preconditioner for discretized Helmholtz boundary-value problems. In this paper, we give, for the first time, a convergence analysis for ORAS as an iterative solver -- and also as a preconditioner -- for nodal finite element Helmholtz systems of any polynomial order. The analysis starts by showing (for general domain decompositions) that ORAS is an unconventional finite element approximation of a classical parallel iterative Schwarz method, formulated at the PDE (non-discrete) level. This non-discrete Schwarz method was recently analysed in [Gong, Gander, Graham, Lafontaine, Spence, arXiv 2106.05218], and the present paper gives a corresponding discrete version of this analysis. In particular, for domain decompositions in strips in 2-d, we show that, when the mesh size is small enough, ORAS inherits the convergence properties of the Schwarz method, independent of polynomial order. The proof relies on characterising the ORAS iteration in terms of discrete `impedance-to-impedance maps', which we prove (via a novel weighted finite-element error analysis) converge as $h\rightarrow 0$ in the operator norm to their non-discrete counterparts.

Stokes flows are a type of fluid flow in which convective forces are small compared with viscous forces, and momentum transport is entirely due to viscous diffusion. Besides being routinely used as benchmark test cases in numerical fluid dynamics, Stokes flows are relevant in several applications in science and engineering, including porous media flow, biological flows, microfluidics, microrobotics, and hydrodynamic lubrication. The present study concerns the discretization of the equations of motion of Stokes flows in three dimensions using the MINI mixed finite element, focusing on the superconvergence of the method, which we investigated through numerical experiments using five purpose-made benchmark test cases with analytical solutions. Although the MINI element is only linearly convergent according to standard mixed finite element theory, a recent theoretical development proves that, for structured meshes in two dimensions, the pressure superconverges with order 1.5, as does the linear part of the computed velocity with respect to the piecewise-linear nodal interpolation of the exact velocity. The numerical experiments documented herein suggest a more general validity of the superconvergence in pressure, possibly extending to unstructured tetrahedral meshes and even up to quadratic convergence, which was observed in one test problem, thereby indicating that there is scope to further extend the available theoretical results on convergence.

We give lower bounds on the performance of two of the most popular sampling methods in practice, the Metropolis-adjusted Langevin algorithm (MALA) and multi-step Hamiltonian Monte Carlo (HMC) with a leapfrog integrator, when applied to well-conditioned distributions. Our main result is a nearly-tight lower bound of $\widetilde{\Omega}(\kappa d)$ on the mixing time of MALA from an exponentially warm start, matching a line of algorithmic results up to logarithmic factors and answering an open question of Chewi et al. We also show that a polynomial dependence on dimension is necessary for the relaxation time of HMC under any number of leapfrog steps, and bound the gains achievable by changing the step count. Our HMC analysis draws upon a novel connection between leapfrog integration and Chebyshev polynomials, which may be of independent interest.
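A minimal sketch of MALA on a 1D standard Gaussian target, showing the Langevin proposal and the Metropolis correction; the step size and chain length are illustrative assumptions, and the lower bounds above concern the high-dimensional, well-conditioned regime.

```python
import numpy as np

def mala(logpi, grad_logpi, x0, h, n, rng):
    """MALA: a Langevin proposal plus a Metropolis accept/reject correction."""
    def log_q(src, dst):  # log proposal density (up to constants) of dst from src
        mean = src + h * grad_logpi(src)
        return -((dst - mean) ** 2) / (4.0 * h)

    x, xs = x0, np.empty(n)
    for t in range(n):
        prop = x + h * grad_logpi(x) + np.sqrt(2.0 * h) * rng.normal()
        log_acc = logpi(prop) - logpi(x) + log_q(prop, x) - log_q(x, prop)
        if np.log(rng.uniform()) < log_acc:
            x = prop                      # accept the Langevin proposal
        xs[t] = x                         # on rejection, repeat the current state
    return xs

# target: standard Gaussian, log pi(x) = -x^2/2 up to a constant
rng = np.random.default_rng(1)
xs = mala(lambda x: -0.5 * x * x, lambda x: -x, 0.0, 0.5, 20000, rng)
print(round(float(xs.mean()), 2), round(float(xs.var()), 2))
```

The accept/reject step is what distinguishes MALA from the unadjusted Langevin algorithm: it removes the discretization bias of the Langevin proposal at the cost of occasional rejections.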
