When studying the dynamics of incompressible fluids in bounded domains, the only available data often consist of average flow rate conditions on portions of the domain's boundary. In engineering applications, a common practice to complete these conditions is to prescribe a Dirichlet condition by assuming a priori a spatial profile for the velocity field. However, this choice strongly influences the accuracy of the numerical solution. A more mathematically sound approach is to prescribe the flow rate conditions using Lagrange multipliers, resulting in an augmented weak formulation of the Navier-Stokes problem. In this paper, starting from the SIMPLE preconditioner, originally introduced for the standard Navier-Stokes equations, we derive two preconditioners for the monolithic solution of the augmented problem. This is useful in complex applications where splitting the computation of the velocity/pressure and Lagrange multiplier unknowns can be very expensive. In particular, we investigate the numerical performance of the preconditioners in both idealized and real-life scenarios. Finally, we highlight the advantages of treating flow rate conditions with the Lagrange multiplier approach instead of prescribing a Dirichlet condition.
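The SIMPLE mechanism described above can be illustrated on a generic saddle-point system. The sketch below is a hypothetical stand-in, not the paper's actual discretization: the matrices are random, with an SPD block `A` playing the role of the velocity block and `B` the role of the constraint (flow-rate/divergence) block. The preconditioner solves with `A`, corrects the multiplier/pressure unknowns through a diagonal Schur-complement approximation, and is handed to GMRES as a `LinearOperator`.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

rng = np.random.default_rng(0)
n, m = 40, 10
G = rng.standard_normal((n, n))
A = G @ G.T + n * np.eye(n)                     # SPD "velocity" block
B = rng.standard_normal((m, n))                 # constraint block
K = np.block([[A, B.T], [B, np.zeros((m, m))]]) # monolithic saddle-point matrix

D = np.diag(A)                                  # diagonal of A, used by SIMPLE
S_hat = -B @ (B / D).T                          # Schur approximation -B diag(A)^-1 B^T

def apply_simple(r):
    """One SIMPLE-style preconditioner application to a residual (r_u, r_p)."""
    ru, rp = r[:n], r[n:]
    du_star = np.linalg.solve(A, ru)                  # predictor solve
    dp = np.linalg.solve(S_hat, rp - B @ du_star)     # multiplier/pressure correction
    du = du_star - (B.T @ dp) / D                     # velocity correction
    return np.concatenate([du, dp])

M = LinearOperator(K.shape, matvec=apply_simple)
b = rng.standard_normal(n + m)
x, info = gmres(K, b, M=M, atol=1e-12, restart=50)
print(info, np.linalg.norm(K @ x - b))
```

With the preconditioner, GMRES converges on this toy system to a small residual; without it, convergence on saddle-point matrices is typically far slower.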

We study the recovery of one-dimensional semipermeable barriers for a stochastic process in a planar domain. The considered process acts like Brownian motion when away from the barriers and is reflected upon contact until a sufficient but random amount of interaction has occurred, determined by the permeability, after which it passes through. Given a sequence of samples, we ask when one can determine the location and shape of the barriers. This paper identifies several different recovery regimes, determined by the available observation period and the time between samples, with qualitatively different behavior. The observation period $T$ dictates whether the full barriers or only certain pieces can be recovered, and the sampling rate significantly influences the convergence rate as $T\to \infty$. This rate turns out to be polynomial for fixed-frequency data, but exponentially fast in a high-frequency regime. Further, the environment's impact on the difficulty of the problem is quantified using interpretable parameters in the recovery guarantees, and is found to also be regime-dependent. For instance, the curvature of the barriers affects the convergence rate for fixed-frequency data, but becomes irrelevant when $T\to \infty$ with high-frequency data. The results are accompanied by explicit algorithms, and we conclude by illustrating the application to real-life data.
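A crude one-dimensional caricature of such a process can be simulated with an Euler scheme: the path diffuses freely below a barrier, is reflected on contact, and is let through after a random (here geometric) number of crossing attempts. This is only an illustrative discretization, not the paper's model or algorithm; the barrier location, permeability proxy `pass_prob`, and step size are all made-up values.

```python
import numpy as np

rng = np.random.default_rng(1)
barrier, pass_prob, dt, n_steps = 1.0, 0.05, 1e-3, 50_000

x, passed, path = 0.0, False, []
for _ in range(n_steps):
    x += np.sqrt(dt) * rng.standard_normal()   # Brownian increment
    if not passed and x > barrier:
        if rng.random() < pass_prob:           # semipermeability: pass occasionally
            passed = True
        else:
            x = 2 * barrier - x                # reflect back below the barrier
    path.append(x)
path = np.array(path)
print(passed, float(path.max()))
```

Sample paths like this one are the kind of data from which, given many samples and a long enough observation window, the barrier location could in principle be inferred.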

This work studies the parameter-dependent diffusion equation in a two-dimensional domain consisting of locally mirror symmetric layers. It is assumed that the diffusion coefficient is a constant in each layer. The goal is to find approximate parameter-to-solution maps that have a small number of terms. It is shown that in the case of two layers one can find a solution formula consisting of three terms with explicit dependencies on the diffusion coefficient. The formula is based on decomposing the solution into orthogonal parts related to both of the layers and the interface between them. This formula is then extended to an approximate one for the multi-layer case. We give an analytical formula for square layers and use the finite element formulation for more general layers. The results are illustrated with numerical examples and have applications to reduced basis methods through the analysis of the Kolmogorov n-width.
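The low-term structure of the parameter-to-solution map is easy to observe numerically in a one-dimensional analogue (a simplification of the paper's 2D setting, used here only for illustration). For $-(a(x)u')'=1$ with a two-layer piecewise-constant coefficient, a flux argument shows every solution lies in a fixed four-dimensional space, so a snapshot matrix over many parameter values has tiny singular values beyond the first few; the paper's orthogonal decomposition sharpens this to an explicit three-term formula.

```python
import numpy as np

N = 200
x = np.linspace(0.0, 1.0, N + 1)
h = 1.0 / N

def solve(a1, a2):
    """Finite-difference solution of -(a u')' = 1, u(0) = u(1) = 0."""
    mid = (x[:-1] + x[1:]) / 2
    a = np.where(mid < 0.5, a1, a2)                # coefficient on each cell
    main = (a[:-1] + a[1:]) / h**2                 # tridiagonal stiffness matrix
    K = np.diag(main) - np.diag(a[1:-1] / h**2, 1) - np.diag(a[1:-1] / h**2, -1)
    return np.linalg.solve(K, np.ones(N - 1))

# snapshots over a wide range of layer coefficients
snaps = np.column_stack([solve(a1, a2)
                         for a1 in np.logspace(-2, 2, 12)
                         for a2 in np.logspace(-2, 2, 12)])
s = np.linalg.svd(snaps, compute_uv=False)
print(s[:6] / s[0])   # only the first few modes carry energy
```

The rapid singular-value decay is exactly the small Kolmogorov n-width that makes reduced basis methods effective here.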

We propose a method utilizing physics-informed neural networks (PINNs) to solve Poisson equations that serve as control variates in the computation of transport coefficients via fluctuation formulas, such as the Green--Kubo and generalized Einstein-like formulas. By leveraging approximate solutions to the Poisson equation constructed through neural networks, our approach significantly reduces the variance of the estimator at hand. We provide an extensive numerical analysis of the estimators and detail a methodology for training neural networks to solve these Poisson equations. The approximate solutions are then incorporated into Monte Carlo simulations as effective control variates, demonstrating the suitability of the method for moderately high-dimensional problems where fully deterministic solutions are computationally infeasible.
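The control-variate mechanism that the PINN solutions feed into can be shown in miniature. The sketch below is generic and not the paper's construction: in place of a neural-network Poisson solution it uses a cheap analytic approximation (a Taylor polynomial) of the observable, whose expectation is known, to reduce the variance of a plain Monte Carlo estimator of $\mathbb{E}[\cos Z]$ for $Z\sim\mathcal{N}(0,1)$.

```python
import numpy as np

rng = np.random.default_rng(2)
z = rng.standard_normal(100_000)

f = np.cos(z)                 # target observable, E[cos Z] = exp(-1/2)
g = 1 - z**2 / 2              # cheap approximation of cos, with E[g] = 1/2
c = np.cov(f, g)[0, 1] / np.var(g)   # optimal control-variate coefficient
cv = f - c * (g - 0.5)        # unbiased estimator with reduced variance

print(f.mean(), cv.mean(), f.var() / cv.var())
```

The better the approximate solution (here the crude polynomial; in the paper, a trained PINN), the larger the variance-reduction factor in the last printed quantity.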

Preconditioned eigenvalue solvers offer the possibility to incorporate preconditioners for the solution of large-scale eigenvalue problems, as they arise from the discretization of partial differential equations. The convergence analysis of such methods is intricate. Even for the relatively simple preconditioned inverse iteration (PINVIT), which targets the smallest eigenvalue of a symmetric positive definite matrix, the celebrated analysis by Neymeyr is highly nontrivial and only yields convergence if the starting vector is fairly close to the desired eigenvector. In this work, we prove a new non-asymptotic convergence result for a variant of PINVIT. Our proof proceeds by analyzing an equivalent Riemannian steepest descent method and leveraging convexity-like properties. We show a convergence rate that nearly matches the one of PINVIT. As a major benefit, we require a condition on the starting vector that tends to be less stringent. This improved global convergence property is demonstrated for two classes of preconditioners with theoretical bounds and a range of numerical experiments.
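The basic PINVIT update is short enough to state directly. The sketch below is a minimal illustration, not the paper's variant or analysis: it uses a diagonal test matrix (so the smallest eigenvalue is known to be 1.0) and the Jacobi preconditioner, and iterates the preconditioned residual correction.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100
d = np.linspace(1.0, 100.0, n)
A = np.diag(d)            # SPD test matrix with known smallest eigenvalue 1.0
Pinv = 1.0 / d            # Jacobi preconditioner

x = rng.standard_normal(n)
x /= np.linalg.norm(x)
for _ in range(200):
    rho = x @ (A @ x)              # Rayleigh quotient
    r = A @ x - rho * x            # eigenvalue residual
    x = x - Pinv * r               # preconditioned correction step
    x /= np.linalg.norm(x)

rho = x @ (A @ x)
print(rho)   # converges to the smallest eigenvalue, 1.0
```

The convergence question studied in the abstract is precisely when and how fast iterations of this form approach the smallest eigenpair from a given starting vector.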

A statistical network model with overlapping communities can be generated as a superposition of mutually independent random graphs of varying size. The model is parameterized by the number of nodes, the number of communities, and the joint distribution of the community size and the edge probability. This model admits sparse parameter regimes with power-law limiting degree distributions and non-vanishing clustering coefficients. This article presents large-scale approximations of clique and cycle frequencies for graph samples generated by the model, which are valid for regimes with unbounded numbers of overlapping communities. Our results reveal the growth rates of these subgraph frequencies and show that their theoretical densities can be reliably estimated from data.
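A toy sampler makes the superposition construction concrete. The community-size and edge-probability distributions below are arbitrary choices for illustration, not the sparse power-law regimes analyzed in the article: each community is an independent Erdős–Rényi graph on a random node subset, and the layers are superposed by taking the union of edges. Triangles are then counted via $\operatorname{tr}(A^3)/6$.

```python
import numpy as np

rng = np.random.default_rng(4)
n, n_comm = 500, 60

adj = np.zeros((n, n), dtype=bool)
for _ in range(n_comm):
    k = rng.integers(3, 30)                        # community size
    p = rng.uniform(0.2, 0.8)                      # within-community edge probability
    nodes = rng.choice(n, size=k, replace=False)   # nodes joining this community
    block = np.triu(rng.random((k, k)) < p, 1)     # independent G(k, p) layer
    adj[np.ix_(nodes, nodes)] |= block | block.T   # superpose via edge union

n_edges = int(adj.sum()) // 2
A = adj.astype(float)
n_tri = int(round(np.trace(A @ A @ A) / 6))        # triangle (3-clique) count
print(n_edges, n_tri)
```

Subgraph frequencies such as the triangle count above are the quantities whose large-scale growth rates the article characterizes.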

Statistical learning under distribution shift is challenging when neither prior knowledge nor fully accessible data from the target distribution is available. Distributionally robust learning (DRL) aims to control the worst-case statistical performance within an uncertainty set of candidate distributions, but how to properly specify the set remains challenging. To enable distributional robustness without being overly conservative, in this paper, we propose a shape-constrained approach to DRL, which incorporates prior information about the way in which the unknown target distribution differs from its estimate. More specifically, we assume the unknown density ratio between the target distribution and its estimate is isotonic with respect to some partial order. At the population level, we provide a solution to the shape-constrained optimization problem that does not involve the isotonic constraint. At the sample level, we provide consistency results for an empirical estimator of the target in a range of different settings. Empirical studies on both synthetic and real data examples demonstrate the improved accuracy of the proposed shape-constrained approach.
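For a total order, projecting an estimated density ratio onto the isotonic cone reduces to the classical pool-adjacent-violators algorithm (PAVA). The sketch below is a generic PAVA implementation with an invented toy input, not the paper's estimator: it merges adjacent blocks until the fitted values are nondecreasing.

```python
import numpy as np

def pava(y, w=None):
    """Weighted least-squares projection of y onto nondecreasing sequences."""
    y = np.asarray(y, float)
    w = np.ones_like(y) if w is None else np.asarray(w, float)
    vals, wts, sizes = [], [], []
    for yi, wi in zip(y, w):
        vals.append(yi); wts.append(wi); sizes.append(1)
        # merge the last two blocks while they violate monotonicity
        while len(vals) > 1 and vals[-2] > vals[-1]:
            wv = wts[-2] + wts[-1]
            vals[-2] = (wts[-2] * vals[-2] + wts[-1] * vals[-1]) / wv
            wts[-2] = wv; sizes[-2] += sizes[-1]
            vals.pop(); wts.pop(); sizes.pop()
    return np.repeat(vals, sizes)

ratio = np.array([0.5, 0.9, 0.7, 1.1, 1.0, 1.6])   # noisy density-ratio values
fitted = pava(ratio)
print(fitted)
```

Under a general partial order the projection is more involved, but the same idea of averaging over violating sets applies.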

Insurance losses due to flooding can be estimated by simulating and then summing a large number of losses for each year in a large set of hypothetical years of flood events. Replicated realisations lead to Monte Carlo return-level estimates and associated uncertainty. The procedure, however, is highly computationally intensive. We develop and use a new, Bennett-like concentration inequality to provide conservative but relatively accurate estimates of return levels. Bennett's inequality accounts for the different variances of each of the variables in a sum but uses a uniform upper bound on their support. Motivated by the variability in the total insured value of risks within a portfolio, we incorporate both individual upper bounds and variances and obtain tractable concentration bounds. Simulation studies and application to a representative portfolio demonstrate a substantial tightening compared with Bennett's bound. We then develop an importance-sampling procedure that repeatedly samples the loss for each year from the distribution implied by the concentration inequality, leading to conservative estimates of the return levels and their uncertainty using orders of magnitude less computation. This enables a simulation study of the sensitivity of the predictions to perturbations in quantities that are usually assumed fixed and known but, in truth, are not.
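The classical Bennett bound that the paper refines is cheap to evaluate. For independent mean-zero losses $X_i$ with $|X_i|\le b$ and total variance $v$, it states $\mathbb{P}(S \ge t) \le \exp\!\big(-(v/b^2)\,h(bt/v)\big)$ with $h(u)=(1+u)\log(1+u)-u$. The portfolio numbers below are illustrative placeholders, not from the paper.

```python
import numpy as np

def bennett_tail(t, v, b):
    """Bennett upper bound on P(S >= t) for variance v and uniform bound b."""
    u = b * t / v
    return np.exp(-(v / b**2) * ((1 + u) * np.log(1 + u) - u))

# e.g. total variance v from 10_000 unit-variance risks, uniform bound b = 50
v, b = 10_000.0, 50.0
for t in (200.0, 500.0, 1000.0):
    print(t, bennett_tail(t, v, b))
```

The paper's contribution is to replace the single uniform bound $b$ with per-risk bounds and variances, tightening this tail estimate substantially for heterogeneous portfolios.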

The proposed two-dimensional geometrically exact beam element extends our previous work by including the effects of shear distortion, and also of distributed forces and moments acting along the beam. The general flexibility-based formulation exploits the kinematic equations combined with the inverted sectional equations and the integrated form of equilibrium equations. The resulting set of three first-order differential equations is discretized by finite differences and the boundary value problem is converted into an initial value problem using the shooting method. Due to the special structure of the governing equations, the scheme remains explicit even though the first derivatives are approximated by central differences, leading to high accuracy. The main advantage of the adopted approach is that the error can be efficiently reduced by refining the computational grid used for finite differences at the element level while keeping the number of global degrees of freedom low. The efficiency is also increased by dealing directly with the global centerline coordinates and sectional inclination with respect to global axes as the primary unknowns at the element level, thereby avoiding transformations between local and global coordinates. Two formulations of the sectional equations, referred to as the Reissner and Ziegler models, are presented and compared. In particular, stability of an axially loaded beam/column is investigated and the connections to the Haringx and Engesser stability theories are discussed. Both approaches are tested in a series of numerical examples, which illustrate (i) high accuracy with quadratic convergence when the spatial discretization is refined, (ii) easy modeling of variable stiffness along the element (such as rigid joint offsets), (iii) efficient and accurate characterization of the buckling and post-buckling behavior.
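The shooting idea used to convert the boundary value problem into an initial value problem can be illustrated on a much simpler model equation; the sketch below is generic and unrelated to the beam's actual governing equations. For $y''=-y$ with $y(0)=0$, $y(1)=1$, the unknown initial slope $s$ is adjusted by a root finder until the far-end boundary condition is satisfied.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

def endpoint_mismatch(s):
    """Integrate the IVP y'' = -y, y(0) = 0, y'(0) = s; return y(1) - 1."""
    sol = solve_ivp(lambda t, y: [y[1], -y[0]], (0.0, 1.0), [0.0, s],
                    rtol=1e-10, atol=1e-12)
    return sol.y[0, -1] - 1.0

# bracket the root and shoot: the exact slope is 1 / sin(1)
s_star = brentq(endpoint_mismatch, 0.0, 5.0)
print(s_star)
```

In the beam element the same principle applies, except that the IVP is integrated by the paper's explicit central-difference scheme and the "slope" is replaced by the unknown sectional quantities at the starting end.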

This work presents an abstract framework for the design, implementation, and analysis of the multiscale spectral generalized finite element method (MS-GFEM), a particular numerical multiscale method originally proposed in [I. Babuska and R. Lipton, Multiscale Model.\;\,Simul., 9 (2011), pp.~373--406]. MS-GFEM is a partition of unity method employing optimal local approximation spaces constructed from local spectral problems. We establish a general local approximation theory demonstrating exponential convergence with respect to local degrees of freedom under certain assumptions, with explicit dependence on key problem parameters. Our framework applies to a broad class of multiscale PDEs with $L^{\infty}$-coefficients in both continuous and discrete, finite element settings, including highly indefinite problems (convection-dominated diffusion, as well as the high-frequency Helmholtz, Maxwell and elastic wave equations with impedance boundary conditions), and higher-order problems. Notably, we prove a local convergence rate of $O(e^{-cn^{1/d}})$ for MS-GFEM for all these problems, improving upon the $O(e^{-cn^{1/(d+1)}})$ rate shown by Babuska and Lipton. Moreover, based on the abstract local approximation theory for MS-GFEM, we establish a unified framework for showing low-rank approximations to multiscale PDEs. This framework applies to the aforementioned problems, proving that the associated Green's functions admit an $O(|\log\epsilon|^{d})$-term separable approximation on well-separated domains with error $\epsilon>0$. Our analysis improves and generalizes the result in [M. Bebendorf and W. Hackbusch, Numerische Mathematik, 95 (2003), pp.~1--28] where an $O(|\log\epsilon|^{d+1})$-term separable approximation was proved for Poisson-type problems.
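The separability phenomenon behind these low-rank results can be observed empirically in the simplest setting: the Laplace kernel $1/|x-y|$ sampled on two well-separated intervals has rapidly decaying singular values, so the number of terms needed for accuracy $\epsilon$ grows only slowly in $1/\epsilon$. This is a numerical illustration of the general principle, not the paper's proof technique, and the intervals and grid sizes are arbitrary.

```python
import numpy as np

x = np.linspace(0.0, 1.0, 300)          # source interval
y = np.linspace(3.0, 4.0, 300)          # well-separated target interval
K = 1.0 / np.abs(x[:, None] - y[None, :])   # Laplace-type kernel samples

s = np.linalg.svd(K, compute_uv=False)
# numerical separation rank at decreasing tolerances
ranks = [int(np.sum(s / s[0] > eps)) for eps in (1e-4, 1e-8, 1e-12)]
print(ranks)
```

The slow growth of the printed ranks as the tolerance shrinks is the one-dimensional analogue of the $O(|\log\epsilon|^{d})$-term bounds established in the paper.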

We present a novel spatial discretization for the anisotropic heat conduction equation, aimed at improved accuracy at the high levels of anisotropy seen in a magnetized plasma, for example, for magnetic confinement fusion. The new discretization is based on a mixed formulation, introducing a form of the directional derivative along the magnetic field as an auxiliary variable and discretizing both the temperature and auxiliary fields in a continuous Galerkin (CG) space. Both the temperature and auxiliary variable equations are stabilized using the streamline upwind Petrov-Galerkin (SUPG) method, ensuring a better representation of the directional derivatives and therefore an overall more accurate solution. This approach can be seen as the CG-based version of our previous work (Wimmer, Southworth, Gregory, Tang, 2024), where we considered a mixed discontinuous Galerkin (DG) spatial discretization including DG-upwind stabilization. We prove consistency of the novel discretization, and demonstrate its improved accuracy over existing CG-based methods in test cases relevant to magnetic confinement fusion. This includes a long-run tokamak equilibrium sustainment scenario, demonstrating spurious heat losses of 35% and 32% for existing primal and mixed CG-based formulations, respectively, versus 4% for our novel SUPG-stabilized discretization.
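The source of the difficulty is the structure of the conduction tensor itself. For a unit field direction $b$ it takes the form $\kappa = \kappa_\parallel\, b b^{T} + \kappa_\perp (I - b b^{T})$, and in magnetized plasmas the ratio $\kappa_\parallel/\kappa_\perp$ can be enormous, so even tiny numerical misalignment with $b$ produces large spurious perpendicular transport. A small sketch (the anisotropy value is illustrative):

```python
import numpy as np

def conduction_tensor(b, kpar, kperp):
    """Anisotropic tensor kpar * b b^T + kperp * (I - b b^T) for direction b."""
    b = np.asarray(b, float)
    b = b / np.linalg.norm(b)
    P = np.outer(b, b)                       # projector onto the field direction
    return kpar * P + kperp * (np.eye(len(b)) - P)

kappa = conduction_tensor([1.0, 1.0], kpar=1e10, kperp=1.0)
bhat = np.array([1.0, 1.0]) / np.sqrt(2.0)
print(kappa @ bhat)        # parallel flux is amplified by kpar
```

Applied to a perpendicular direction, the same tensor returns the vector essentially unchanged, which is exactly the behavior a discretization must preserve to avoid spurious heat loss.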
