
This work is concerned with the analysis of a space-time finite element discontinuous Galerkin method on polytopal meshes (XT-PolydG) for the numerical discretization of wave propagation in coupled poroelastic-elastic media. The mathematical model consists of the low-frequency Biot's equations in the poroelastic medium and the elastodynamics equation for the elastic one. To realize the coupling, suitable transmission conditions on the interface between the two domains are (weakly) embedded in the formulation. The proposed PolydG discretization in space is then coupled with a dG time integration scheme, resulting in a full space-time dG discretization. We present the stability analysis for both the continuous and the semidiscrete formulations, and we derive error estimates for the semidiscrete formulation in a suitable energy norm. The method is applied to a wide set of numerical test cases to verify the theoretical bounds. Examples of physical interest are also presented to investigate the capability of the proposed method in relevant geophysical scenarios.


In this paper we propose a method to approximate the Gaussian function on ${\mathbb R}$ by a short cosine sum. We extend the differential approximation method proposed in [4,39] to approximate $\mathrm{e}^{-t^{2}/2\sigma}$ in the weighted space $L_2({\mathbb R}, \mathrm{e}^{-t^{2}/2\rho})$, where $\sigma, \rho >0$. We prove that the optimal frequency parameters $\lambda_1, \ldots, \lambda_{N}$ for this method in the approximation problem $\min\limits_{\lambda_{1},\ldots,\lambda_{N},\,\gamma_{1},\ldots,\gamma_{N}}\|\mathrm{e}^{-\cdot^{2}/2\sigma} - \sum\limits_{j=1}^{N} \gamma_{j} \, \mathrm{e}^{\lambda_{j} \cdot}\|_{L_{2}({\mathbb R}, \mathrm{e}^{-t^{2}/2\rho})}$ are zeros of a scaled Hermite polynomial. This observation leads us to a numerically stable approximation method with a low computational cost of $\mathcal{O}(N^{3})$ operations. Furthermore, we derive a direct algorithm to solve this approximation problem based on a matrix pencil method for a special structured matrix whose entries are determined by hypergeometric functions. For the weighted $L_{2}$-norm, we prove that the approximation error decays exponentially with respect to the length $N$ of the sum. An exponentially decaying error in the (unweighted) $L_{2}$-norm is achieved using a truncated cosine sum.
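The construction above can be sketched numerically. The toy below takes the positive zeros of the Hermite polynomial $H_{2N}$ as the cosine frequencies (per the abstract's characterization) but, as a stand-in for the paper's weighted-$L_2$ construction, simply fits the coefficients by ordinary least squares on a grid; the function name `gaussian_cosine_fit` and all parameter choices are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def gaussian_cosine_fit(sigma=1.0, N=6, t_max=6.0, n_grid=1601):
    """Approximate exp(-t^2/(2*sigma)) by sum_j gamma_j * cos(omega_j * t).

    Illustrative sketch: frequencies omega_j are the N positive zeros of
    the Hermite polynomial H_{2N}; coefficients gamma_j come from a plain
    least-squares fit on a grid (not the paper's O(N^3) method).
    """
    herm_coeffs = np.zeros(2 * N + 1)
    herm_coeffs[-1] = 1.0
    roots = np.polynomial.hermite.hermroots(herm_coeffs)
    omegas = np.sort(roots[roots > 0])            # N positive Hermite zeros

    t = np.linspace(-t_max, t_max, n_grid)
    target = np.exp(-t ** 2 / (2.0 * sigma))
    A = np.cos(np.outer(t, omegas))               # n_grid x N design matrix
    gammas, *_ = np.linalg.lstsq(A, target, rcond=None)
    max_err = np.max(np.abs(A @ gammas - target))
    return omegas, gammas, max_err
```

Even this crude least-squares variant already yields a small uniform error with a handful of terms, which is consistent with (though far weaker than) the exponential decay proved in the paper.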

In this paper, we study a general low-rank matrix recovery problem with linear measurements corrupted by some noise. The objective is to understand under what conditions on the restricted isometry property (RIP) of the problem local search methods can find the ground truth with small error. By analyzing the landscape of the non-convex problem, we first propose a global guarantee on the maximum distance between an arbitrary local minimizer and the ground truth under the assumption that the RIP constant is smaller than $1/2$. We show that this distance shrinks to zero as the noise intensity decreases. Our new guarantee is sharp in terms of the RIP constant and is much stronger than the existing results. We then present a local guarantee for problems with an arbitrary RIP constant, which states that any local minimizer is either considerably close to the ground truth or far away from it. Next, we prove the strict saddle property, which guarantees the global convergence of the perturbed gradient descent method in polynomial time. The developed results demonstrate how the noise intensity and the RIP constant affect the landscape of the problem.
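As a concrete instance of this setting, the sketch below runs plain gradient descent (in place of the paper's perturbed variant) on a factored rank-one matrix-sensing objective with symmetric Gaussian measurements, which satisfy an RIP condition with high probability. Everything here (dimensions, step size, spectral initialization) is an illustrative assumption, not the paper's analysis.

```python
import numpy as np

rng = np.random.default_rng(0)
d, m = 5, 60                                   # dimension, #measurements

# Rank-1 ground truth M* = u u^T with ||M*||_F = 1
u = rng.standard_normal(d)
u /= np.linalg.norm(u)
M_star = np.outer(u, u)

# Symmetric Gaussian sensing matrices (RIP holds w.h.p.) and measurements
A = [(G + G.T) / 2 for G in rng.standard_normal((m, d, d))]
noise = 0.0                                    # try > 0 to see its effect
y = np.array([np.sum(Ai * M_star) for Ai in A]) + noise * rng.standard_normal(m)

# Spectral initialization: top eigenpair of (1/m) sum_i y_i A_i
S = sum(yi * Ai for yi, Ai in zip(y, A)) / m
w, V = np.linalg.eigh(S)
U = np.sqrt(max(w[-1], 1e-12)) * V[:, -1:]     # d x 1 factor

def loss_grad(U):
    """f(U) = (1/m) sum_i (<A_i, U U^T> - y_i)^2 and its gradient."""
    r = np.array([np.sum(Ai * (U @ U.T)) for Ai in A]) - y
    g = sum(ri * Ai for ri, Ai in zip(r, A)) @ U * (4.0 / m)
    return np.mean(r ** 2), g

# Plain gradient descent on the factored (Burer-Monteiro) objective
for _ in range(800):
    _, g = loss_grad(U)
    U -= 0.02 * g

rel_err = np.linalg.norm(U @ U.T - M_star) / np.linalg.norm(M_star)
```

With zero noise the iterates recover the ground truth; increasing `noise` lets one observe the distance between local minimizers and the ground truth growing with the noise intensity, as the guarantees describe.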

Let $\hat\Sigma=\frac{1}{n}\sum_{i=1}^n X_i\otimes X_i$ denote the sample covariance operator of centered i.i.d. observations $X_1,\dots,X_n$ in a real separable Hilbert space, and let $\Sigma=\mathbf{E}(X_1\otimes X_1)$. The focus of this paper is to understand how well the bootstrap can approximate the distribution of the operator norm error $\sqrt n\|\hat\Sigma-\Sigma\|_{\text{op}}$, in settings where the eigenvalues of $\Sigma$ decay as $\lambda_j(\Sigma)\asymp j^{-2\beta}$ for some fixed parameter $\beta>1/2$. Our main result shows that the bootstrap can approximate the distribution of $\sqrt n\|\hat\Sigma-\Sigma\|_{\text{op}}$ at a rate of order $n^{-\frac{\beta-1/2}{2\beta+4+\epsilon}}$ with respect to the Kolmogorov metric, for any fixed $\epsilon>0$. In particular, this shows that the bootstrap can achieve near $n^{-1/2}$ rates in the regime of large $\beta$, which substantially improves on previous near $n^{-1/6}$ rates in the same regime. In addition to obtaining faster rates, our analysis leverages a fundamentally different perspective based on coordinate-free techniques. Moreover, our result holds in greater generality, and we propose a new model that is compatible with both elliptical and Mar\v{c}enko-Pastur models in high-dimensional Euclidean spaces, which may be of independent interest.
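The bootstrap scheme being analyzed can be simulated in a finite-dimensional truncation. The sketch below (dimensions, decay exponent, and replication counts are all illustrative choices) draws data with covariance eigenvalues $\lambda_j = j^{-2\beta}$, forms the statistic $\sqrt n\|\hat\Sigma-\Sigma\|_{\text{op}}$, and compares a Monte Carlo reference distribution with the bootstrap distribution built from a single sample.

```python
import numpy as np

rng = np.random.default_rng(1)
d, n, B, reps = 20, 200, 300, 300
beta = 1.0
lam = np.arange(1, d + 1) ** (-2.0 * beta)     # eigenvalues ~ j^(-2*beta)
sqrt_lam = np.sqrt(lam)
Sigma = np.diag(lam)

def op_norm_stat(X, Sigma_ref):
    """sqrt(n) * ||sample covariance - reference covariance||_op."""
    n_loc = X.shape[0]
    S_hat = X.T @ X / n_loc
    return np.sqrt(n_loc) * np.linalg.norm(S_hat - Sigma_ref, ord=2)

# Monte Carlo reference distribution of the statistic
T_true = np.array([op_norm_stat(rng.standard_normal((n, d)) * sqrt_lam, Sigma)
                   for _ in range(reps)])

# Bootstrap distribution from a single observed sample:
# resample rows, recenter at the sample covariance
X = rng.standard_normal((n, d)) * sqrt_lam
S_hat = X.T @ X / n
T_boot = np.array([op_norm_stat(X[rng.integers(0, n, n)], S_hat)
                   for _ in range(B)])

med_true, med_boot = np.median(T_true), np.median(T_boot)
```

For moderately fast eigenvalue decay the two distributions already agree reasonably well at this sample size, in line with (though of course not proving) the improved rates in the paper.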

The main objective of the present paper is to construct a new class of space-time discretizations for the stochastic $p$-Stokes system and analyze its stability and convergence properties. We derive regularity results for the approximation that are similar to the natural regularity of solutions. One of the key arguments relies on a discrete extrapolation argument that allows us to relate lower moments of discrete maximal processes. We show that, if the generic spatial discretization is constraint-conforming, then the velocity approximation satisfies a best-approximation property in the natural distance. Moreover, we present an example such that the resulting velocity approximation converges with rate $1/2$ in time and rate $1$ in space towards the (unknown) target velocity with respect to the natural distance.

We present a machine-learning strategy for finite element analysis of solid mechanics wherein we replace complex portions of a computational domain with a data-driven surrogate. In the proposed strategy, we decompose a computational domain into an "outer" coarse-scale domain that we resolve using a finite element method (FEM) and an "inner" fine-scale domain. We then develop a machine-learned (ML) model for the impact of the inner domain on the outer domain. In essence, for solid mechanics, our machine-learned surrogate performs static condensation of the inner domain degrees of freedom. This is achieved by learning the map from (virtual) displacements on the inner-outer domain interface boundary to forces contributed by the inner domain to the outer domain on the same interface boundary. We consider two such mappings, one that directly maps from displacements to forces without constraints, and one that maps from displacements to forces by virtue of learning a symmetric positive semi-definite (SPSD) stiffness matrix. We demonstrate, in a simplified setting, that learning an SPSD stiffness matrix results in a coarse-scale problem that is well-posed with a unique solution. We present numerical experiments on several exemplars, ranging from finite deformations of a cube to finite deformations with contact of a fastener-bushing geometry. We demonstrate that enforcing an SPSD stiffness matrix is critical for accurate FEM-ML coupled simulations, and that the resulting methods can accurately characterize out-of-sample loading configurations with significant speedups over the standard FEM simulations.
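The map the surrogate learns is, in the linear setting, exactly the Schur complement of the inner-domain stiffness onto the interface degrees of freedom. The toy below (random SPD matrix, illustrative block sizes) computes this static condensation explicitly, checks that the condensed operator is symmetric positive definite (hence SPSD, the property the learned model enforces), and verifies that solving the condensed interface system reproduces the full solve.

```python
import numpy as np

rng = np.random.default_rng(2)
n_inner, n_iface = 8, 4
n = n_inner + n_iface

# A small SPD "fine-scale" stiffness standing in for the inner domain
B = rng.standard_normal((n, n))
K = B.T @ B + n * np.eye(n)

ii = slice(0, n_inner)            # inner dofs
bb = slice(n_inner, n)            # interface dofs
K_ii, K_ib = K[ii, ii], K[ii, bb]
K_bi, K_bb = K[bb, ii], K[bb, bb]

# Static condensation: the interface displacement-to-force map is the
# Schur complement S = K_bb - K_bi K_ii^{-1} K_ib
S = K_bb - K_bi @ np.linalg.solve(K_ii, K_ib)

# S inherits symmetry and positive definiteness from K
eigs = np.linalg.eigvalsh((S + S.T) / 2)

# Solving the condensed system reproduces the full solve on the interface
f_b = rng.standard_normal(n_iface)
f = np.concatenate([np.zeros(n_inner), f_b])
x_full = np.linalg.solve(K, f)
x_b = np.linalg.solve(S, f_b)
```

This is why learning an SPSD stiffness is the natural constraint: the exact condensed operator is itself SPSD, and an unconstrained learned map can destroy the well-posedness of the coarse problem.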

We study the fundamental problem of fairly allocating a set of indivisible goods among $n$ agents with additive valuations using the desirable fairness notion of maximin share (MMS). MMS is the most popular share-based notion, in which an agent finds an allocation fair to her if she receives goods worth at least her MMS value. An allocation is called MMS if all agents receive at least their MMS value. Since MMS allocations need not exist when $n>2$, a series of works showed the existence of approximate MMS allocations with the current best factor of $\frac{3}{4} + O(\frac{1}{n})$. However, a simple example in [DFL82, BEF21, AGST23] showed the limitations of existing approaches and proved that they cannot improve this factor to $\frac{3}{4} + \Omega(1)$. In this paper, we bypass these barriers to show the existence of $(\frac{3}{4} + \frac{3}{3836})$-MMS allocations by developing new reduction rules and analysis techniques.
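The MMS value itself has a direct definition: the best worst-bundle value an agent can guarantee by partitioning the goods into $n$ bundles herself. A brute-force computation (exponential in the number of goods, so only for tiny instances) makes the notion concrete; the helper name `mms_value` is our own.

```python
from itertools import product

def mms_value(values, n):
    """Maximin share of an agent with additive values over m goods:
    max over all partitions into n bundles of the minimum bundle value.

    Brute force over all n^m assignments -- tiny instances only.
    """
    m = len(values)
    best = 0
    for assign in product(range(n), repeat=m):
        bundles = [0] * n
        for good, agent in enumerate(assign):
            bundles[agent] += values[good]
        best = max(best, min(bundles))
    return best
```

For goods worth $(4,3,2,2,1)$ and $n=2$, the split $\{4,2\}$ vs $\{3,2,1\}$ gives MMS value $6$; with $n=3$ the partition $\{4\},\{3,1\},\{2,2\}$ gives $4$, which is best possible since three bundles of value $5$ would need total value $15 > 12$. An $\alpha$-MMS allocation then only has to give each agent $\alpha$ times this benchmark.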

In this work, we propose and computationally investigate a monolithic space-time multirate scheme for coupled problems. The novelty lies in the monolithic formulation of the multirate approach, as this requires a careful design of the functional framework, the corresponding discretization, and the implementation. Our method of choice is a tensor-product Galerkin space-time discretization. The developments are carried out for prototype interface- and volume-coupled problems, such as coupled wave-heat problems and a displacement equation coupled to Darcy flow in a poro-elastic medium. The latter is applied to the well-known Mandel's benchmark. Detailed computational investigations and convergence analyses give evidence that our monolithic multirate framework performs well.
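The core idea of multirate time stepping, advancing a fast component with a finer time step than a slow one, can be seen on a two-variable model problem. The sketch below uses a simple partitioned forward-Euler multirate scheme (a much cruder stand-in for the paper's monolithic tensor-product Galerkin method; system, steps, and substep count are illustrative) and compares it against a single-rate reference.

```python
import numpy as np

def multirate_euler(T=1.0, H=0.01, k=10):
    """Multirate forward Euler for a slow/fast coupled pair:
        u' = -u + v   (slow),   v' = -50 (v - u)   (fast).

    The fast component takes k substeps of size H/k per macro step,
    holding the slow state frozen; the slow component then takes one
    macro step of size H.
    """
    u, v = 1.0, 0.0
    h = H / k
    for _ in range(round(T / H)):
        for _ in range(k):           # fast substeps
            v += h * (-50.0 * (v - u))
        u += H * (-u + v)            # slow macro step
    return u, v

def singlerate_euler(T=1.0, h=0.0005):
    """Single-rate reference: both components with the fine step."""
    u, v = 1.0, 0.0
    for _ in range(round(T / h)):
        du = -u + v
        dv = -50.0 * (v - u)
        u, v = u + h * du, v + h * dv
    return u, v
```

The multirate run takes an order of magnitude fewer slow-component steps yet stays close to the reference, which is the efficiency argument for multirate coupling; the paper's contribution is to achieve this within one monolithic space-time formulation rather than by partitioning.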

The proximal Galerkin finite element method is a high-order, nonlinear numerical method that preserves the geometric and algebraic structure of bound constraints in infinite-dimensional function spaces. This paper introduces the proximal Galerkin method and applies it to solve free-boundary problems, enforce discrete maximum principles, and develop scalable, mesh-independent algorithms for optimal design. The paper begins with a derivation of the latent variable proximal point (LVPP) method: an unconditionally stable alternative to the interior point method. LVPP is an infinite-dimensional optimization algorithm that may be viewed as having an adaptive (Bayesian) barrier function that is updated with a new informative prior at each (outer loop) optimization iteration. One of the main benefits of this algorithm is witnessed when analyzing the classical obstacle problem. Therein, we find that the original variational inequality can be replaced by a sequence of semilinear partial differential equations (PDEs) that are readily discretized and solved with, e.g., high-order finite elements. Throughout this work, we arrive at several unexpected contributions that may be of independent interest. These include (1) a semilinear PDE we refer to as the entropic Poisson equation; (2) an algebraic/geometric connection between high-order positivity-preserving discretizations and infinite-dimensional Lie groups; and (3) a gradient-based, bound-preserving algorithm for two-field density-based topology optimization. The complete latent variable proximal Galerkin methodology combines ideas from nonlinear programming, functional analysis, tropical algebra, and differential geometry and can potentially lead to new synergies among these areas as well as within variational and numerical analysis.
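To make the obstacle problem discussed above concrete, the sketch below solves a 1D instance with classical projected Gauss-Seidel on the variational inequality directly. This is the traditional baseline that the LVPP/proximal Galerkin approach replaces with a sequence of smooth semilinear PDEs; it is emphatically not the paper's method, and the tent-shaped obstacle and grid sizes are our own choices.

```python
import numpy as np

def obstacle_1d(n=29, sweeps=10000):
    """Projected Gauss-Seidel (PSOR with omega = 1) for the 1D obstacle
    problem: -u'' >= 0, u >= phi, u(0) = u(1) = 0, with complementarity.
    A classical baseline, NOT the proximal Galerkin / LVPP scheme.
    """
    x = np.linspace(0.0, 1.0, n + 2)[1:-1]        # interior nodes
    phi = 0.3 - 1.5 * np.abs(x - 0.5)             # tent-shaped obstacle
    u = np.zeros(n)
    for _ in range(sweeps):
        for i in range(n):
            left = u[i - 1] if i > 0 else 0.0
            right = u[i + 1] if i < n - 1 else 0.0
            # Unconstrained Gauss-Seidel update, then project onto u >= phi
            u[i] = max(0.5 * (left + right), phi[i])
    return x, phi, u
```

For this obstacle the exact solution is the least concave majorant, the tent $u(x) = 0.3 - 0.6\,|x-0.5|$, touching the obstacle only at $x = 0.5$. The pointwise `max` in the inner loop is exactly the nonsmooth projection that the latent-variable reformulation eliminates, which is what allows the paper's method to use standard high-order discretizations and Newton-type solvers instead.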

In this work, we consider space-time goal-oriented a posteriori error estimation for parabolic problems. Temporal and spatial discretizations are based on Galerkin finite elements of continuous and discontinuous type. The main objectives are the development and analysis of space-time estimators in which the localization is based on a weak form employing a partition of unity. The resulting error indicators are used for temporal and spatial adaptivity. Our developments are substantiated with several numerical examples.

In inverse problems, one attempts to infer spatially variable functions from indirect measurements of a system. To practitioners of inverse problems, the concept of "information" is familiar when discussing key questions such as which parts of the function can be inferred accurately and which cannot. For example, it is generally understood that we can identify system parameters accurately only close to detectors, or along ray paths between sources and detectors, because we have "the most information" for these places. Although referenced in many publications, the "information" that is invoked in such contexts is not a well understood and clearly defined quantity. Herein, we present a definition of information density that is based on the variance of coefficients as derived from a Bayesian reformulation of the inverse problem. We then discuss three areas in which this information density can be useful in practical algorithms for the solution of inverse problems, and illustrate the usefulness in one of these areas -- how to choose the discretization mesh for the function to be reconstructed -- using numerical experiments.
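For a linear-Gaussian inverse problem this variance-based notion of information density can be written in closed form: with forward operator $G$, noise covariance $\Gamma_{\text{noise}}$, and prior covariance $\Gamma_{\text{prior}}$, the posterior covariance is $\Gamma_{\text{post}} = (G^{T}\Gamma_{\text{noise}}^{-1}G + \Gamma_{\text{prior}}^{-1})^{-1}$, and a natural density is the reciprocal posterior variance per coefficient. The toy below (forward operator, dimensions, and variances are illustrative, not from the paper) mimics "near the detectors" by letting measurements sense only the first half of the coefficients.

```python
import numpy as np

d = 10                      # number of unknown coefficients
sigma_prior, sigma_noise = 1.0, 0.1

# Forward operator G: each of five measurements directly observes one of
# the first five coefficients; the rest are never sensed (a toy stand-in
# for "close to" vs "far from" the detectors)
G = np.zeros((5, d))
G[np.arange(5), np.arange(5)] = 1.0

# Bayesian linear-Gaussian posterior covariance
Gamma_post = np.linalg.inv(G.T @ G / sigma_noise**2
                           + np.eye(d) / sigma_prior**2)

# "Information density": reciprocal posterior variance per coefficient
info = 1.0 / np.diag(Gamma_post)
```

The sensed coefficients end up with sharply reduced posterior variance (high information density), while the unsensed ones retain the prior variance, quantifying the intuition that we can only identify parameters where the data actually constrain them; such a density can then guide, e.g., where to refine the discretization mesh.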
