
We present and analyze a cut finite element method for the weak imposition of the Neumann boundary conditions of the Darcy problem. The Raviart-Thomas mixed element on both triangular and quadrilateral meshes is considered. Our method is based on the Nitsche formulation studied in [10.1515/jnma-2021-0042] and can be viewed as a first attempt at extending that formulation to the unfitted case. The key feature is the addition of two ghost penalty operators that stabilize both the velocity and pressure fields. We rigorously prove that our stabilized formulation is well-posed and derive a priori error estimates for the velocity and pressure fields. We also show an upper bound for the condition number of the stiffness matrix. Numerical examples corroborating the theory are included.

Related content

Features provided by iOS 8 for interaction between apps and between apps and the system:
  • Today (iOS and OS X): widgets for the Today view of Notification Center
  • Share (iOS and OS X): post content to web services or share content with others
  • Actions (iOS and OS X): app extensions to view or manipulate inside another app
  • Photo Editing (iOS): edit a photo or video in Apple's Photos app with extensions from third-party apps
  • Finder Sync (OS X): remote file storage in the Finder with support for Finder content annotation
  • Storage Provider (iOS): an interface between files inside an app and other apps on a user's device
  • Custom Keyboard (iOS): system-wide alternative keyboards


In $d$ dimensions, approximating an arbitrary function oscillating with frequency $\lesssim k$ requires $\sim k^d$ degrees of freedom. A numerical method for solving the Helmholtz equation (with wavenumber $k$ and in $d$ dimensions) suffers from the pollution effect if, as $k\to\infty$, the total number of degrees of freedom needed to maintain accuracy grows faster than this natural threshold (i.e., faster than $k^d$ for domain-based formulations, such as finite element methods, and $k^{d-1}$ for boundary-based formulations, such as boundary element methods). It is well known that the $h$-version of the finite element method (FEM) (where accuracy is increased by decreasing the meshwidth $h$ and keeping the polynomial degree $p$ fixed) suffers from the pollution effect, and research over the last $\sim$ 30 years has resulted in a near-complete rigorous understanding of how quickly the number of degrees of freedom must grow with $k$ (and how this depends on both $p$ and properties of the scatterer). In contrast to the $h$-FEM, at least empirically, the $h$-version of the boundary element method (BEM) does $\textit{not}$ suffer from the pollution effect (recall that in the boundary element method the scattering problem is reformulated as an integral equation on the boundary of the scatterer, with this integral equation then solved numerically using a finite-element-type approximation space). However, the current best results in the literature on how quickly the number of degrees of freedom for the $h$-BEM must grow with $k$ fall short of proving this. In this paper, we prove that the $h$-version of the Galerkin method applied to the standard second-kind boundary integral equations for solving the Helmholtz exterior Dirichlet problem does not suffer from the pollution effect when the obstacle is nontrapping (i.e., does not trap geometric-optic rays).

We demonstrate the effectiveness of an adaptive explicit Euler method for the approximate solution of the Cox-Ingersoll-Ross model. This relies on a class of path-bounded timestepping strategies which work by reducing the stepsize as solutions approach a neighbourhood of zero. The method is hybrid in the sense that a convergent backstop method is invoked if the timestep becomes too small, or to prevent solutions from overshooting zero and becoming negative. Under parameter constraints that imply Feller's condition, we prove that such a scheme is strongly convergent, of order at least 1/2. Control of the strong error is important for multi-level Monte Carlo techniques. Under Feller's condition we also prove that the probability of ever needing the backstop method to prevent a negative value can be made arbitrarily small. Numerically, we compare this adaptive method to fixed step implicit and explicit schemes, and a novel semi-implicit adaptive variant. We observe that the adaptive approach leads to methods that are competitive in a domain that extends beyond Feller's condition, indicating suitability for the modelling of stochastic volatility in Heston-type asset models.
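
As a rough illustration of such a path-bounded timestepping strategy (not the paper's exact scheme; the step-size rule, the choice of backstop, and all parameters below are illustrative assumptions), one can shrink the explicit Euler step as the CIR solution approaches zero and fall back to a projected step when the timestep would become too small or the update would overshoot zero:

```python
import numpy as np

def cir_adaptive_euler(x0, kappa, theta, sigma, T, hmax, hmin, rho=0.25, rng=None):
    """Adaptive explicit Euler for dX = kappa*(theta - X) dt + sigma*sqrt(X) dW.

    Illustrative path-bounded strategy: the step shrinks as X nears zero;
    a 'backstop' (here: explicit Euler projected onto [0, inf)) is invoked
    when the step would fall below hmin or the update would go negative.
    """
    rng = np.random.default_rng() if rng is None else rng
    t, x = 0.0, x0
    ts, xs = [t], [x]
    while t < T - 1e-12:
        # step proportional to distance from zero (illustrative choice)
        h = min(hmax, T - t, rho * x / (abs(kappa * (theta - x)) + sigma**2))
        if h < hmin:
            h = min(hmin, T - t)
            dw = rng.normal(0.0, np.sqrt(h))
            # backstop: projected explicit Euler step
            x = max(x + kappa*(theta - x)*h + sigma*np.sqrt(max(x, 0.0))*dw, 0.0)
        else:
            dw = rng.normal(0.0, np.sqrt(h))
            x_new = x + kappa*(theta - x)*h + sigma*np.sqrt(x)*dw
            x = max(x_new, 0.0)  # backstop also guards against overshooting zero
        t += h
        ts.append(t)
        xs.append(x)
    return np.array(ts), np.array(xs)
```

With 2*kappa*theta >= sigma**2 (Feller's condition), the projected backstop should rarely be triggered, mirroring the paper's probabilistic guarantee.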

We consider fully discrete embedded finite element approximations for a shallow water hyperbolic problem and its reduced-order model. Our approach is based on a fixed background mesh and an embedded reduced basis. The Shifted Boundary Method is used for spatial discretization and is combined with an explicit predictor/multi-corrector scheme for the time integration of the shallow water equations, for both the full- and reduced-order models. To improve the approximation of the solution manifold, also for geometries not tested during the offline stage, the snapshots are pre-processed by an interpolation procedure that precedes the reduced basis computation. The methodology is tested on geometrically parametrized shapes of varying size and position.

We introduce Stochastic Asymptotical Regularization (SAR) methods for the uncertainty quantification of the stable approximate solution of ill-posed linear-operator equations, which are deterministic models for numerous inverse problems in science and engineering. We prove the regularizing properties of SAR with regard to mean-square convergence. We also show that SAR is an optimal-order regularization method for linear ill-posed problems provided that the terminating time of SAR is chosen according to the smoothness of the solution. This result is proven for both a priori and a posteriori stopping rules under general range-type source conditions. Furthermore, some converse results for SAR are verified. Two iterative schemes are developed for the numerical realization of SAR, and convergence analyses of these two numerical schemes are also provided. A toy example and a real-world problem of biosensor tomography are studied to show the accuracy and the advantages of SAR: compared with conventional deterministic regularization approaches for deterministic inverse problems, SAR can provide uncertainty quantification of the quantity of interest, which can in turn be used to reveal and explicate hidden information about real-world problems, usually obscured by incomplete mathematical modeling and the presence of complex-structured noise.
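
To convey the flavour of the approach, the toy sketch below runs an ensemble of noisy Landweber-type flows for x' = Aᵀ(y - Ax), stopped at a "time" T playing the role of the regularization parameter, and reports the ensemble mean and pointwise standard deviation as a crude uncertainty quantification. This is one illustrative reading of the idea under simplifying assumptions, not either of the two iterative schemes analyzed in the paper:

```python
import numpy as np

def sar_toy_ensemble(A, y, T=1.0, n_steps=200, noise=0.01, n_paths=50, rng=None):
    """Toy stochastic asymptotic regularization: Euler steps of
        dx = A^T (y - A x) dt + noise * dW,
    stopped at 'time' T (the regularization parameter). Returns the
    ensemble mean (point estimate) and pointwise std (uncertainty).
    Illustrative only -- not the SAR schemes analyzed in the paper.
    """
    rng = np.random.default_rng() if rng is None else rng
    m, n = A.shape
    dt = T / n_steps
    X = np.zeros((n_paths, n))           # one row per stochastic path
    for _ in range(n_steps):
        resid = y[None, :] - X @ A.T     # residuals, shape (n_paths, m)
        X += dt * (resid @ A) + noise * np.sqrt(dt) * rng.normal(size=X.shape)
    return X.mean(axis=0), X.std(axis=0)
```

With noise set to zero this reduces to the deterministic Landweber flow; the injected noise is what produces a nondegenerate ensemble spread.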

In this paper, we analyse a proximal method based on the idea of forward-backward splitting for sampling from distributions with densities that are not necessarily smooth. In particular, we study the non-asymptotic properties of the Euler-Maruyama discretization of the Langevin equation, where the forward-backward envelope is used to deal with the non-smooth part of the dynamics. An advantage of this envelope, when compared to the widely used Moreau-Yosida envelope and the MYULA algorithm, is that it preserves the MAP estimator of the original non-smooth distribution. We also present a number of numerical experiments that support our theoretical findings.
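
For context, here is a minimal sketch of the MYULA baseline mentioned above, for a one-dimensional target proportional to exp(-f(x) - λ|x|): the non-smooth term is replaced by its Moreau-Yosida envelope, whose gradient is (x - prox(x))/λ_MY. The paper's forward-backward-envelope variant modifies this smoothing; the target, step size, and smoothing parameter here are illustrative assumptions:

```python
import numpy as np

def soft_threshold(x, tau):
    """Proximal operator of tau * |x| (soft thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def myula_sample(grad_f, lam_g, n_iter=20000, gamma=0.01, lam_my=0.1,
                 x0=0.0, burn=2000, rng=None):
    """MYULA-style chain targeting exp(-f(x) - lam_g*|x|): the |x| term is
    smoothed by its Moreau-Yosida envelope with parameter lam_my, and the
    unadjusted Langevin update is applied to the smoothed potential.
    (Sketch of the comparison method, not the forward-backward variant.)"""
    rng = np.random.default_rng() if rng is None else rng
    x = x0
    samples = []
    for k in range(n_iter):
        # gradient of the Moreau-Yosida envelope of lam_g*|x|
        grad_env = (x - soft_threshold(x, lam_g * lam_my)) / lam_my
        x = x - gamma * (grad_f(x) + grad_env) + np.sqrt(2 * gamma) * rng.normal()
        if k >= burn:
            samples.append(x)
    return np.array(samples)
```

Because prox of the Moreau-Yosida-smoothed potential differs from prox of the original one, the smoothed target's mode shifts away from the original MAP estimator; preserving that estimator is exactly the advantage claimed for the forward-backward envelope.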

We study orbit-finite systems of linear equations, in the setting of sets with atoms. Our principal contribution is a decision procedure for solvability of such systems. The procedure works for every field (and even commutative ring) under mild effectiveness assumptions, and reduces a given orbit-finite system to a number of finite ones: exponentially many in general, but polynomially many when the atom dimension of the input systems is fixed. Towards obtaining the procedure we push further the theory of vector spaces generated by orbit-finite sets, and show that each such vector space admits an orbit-finite basis. This fundamental property is a key tool in our development, but should also be of wider interest.

Structured matrix-variate observations routinely arise in diverse fields such as multi-layer network analysis and brain image clustering. While data of this type have been extensively investigated with fruitful outcomes, fundamental questions such as statistical optimality and computational limits remain largely under-explored. In this paper, we propose a low-rank Gaussian mixture model (LrMM) that assumes each matrix-valued observation has a planted low-rank structure. Minimax lower bounds for estimating the underlying low-rank matrix are established over a whole range of sample sizes and signal strengths. Under a minimal condition on the signal strength, referred to as the information-theoretic limit or statistical limit, we prove the minimax optimality of a maximum likelihood estimator which, in general, is computationally infeasible. If the signal is stronger than a certain threshold, called the computational limit, we design a computationally fast estimator based on spectral aggregation and demonstrate its minimax optimality. Moreover, when the signal strength is smaller than the computational limit, we provide evidence based on the low-degree likelihood ratio framework that no polynomial-time algorithm can consistently recover the underlying low-rank matrix. Our results reveal multiple phase transitions in the minimax error rates and the statistical-to-computational gap. Numerical experiments confirm our theoretical findings. We further showcase the merit of our spectral aggregation method on a worldwide food trading dataset.

In this work, two problems associated with a downlink multi-user system aided by an intelligent reflecting surface (IRS) are considered: weighted sum-rate maximization and weighted minimum-rate maximization. For the first problem, a novel DOuble Manifold ALternating Optimization (DOMALO) algorithm is proposed by exploiting matrix manifold theory: the beamforming matrix and the reflection vector are treated as points on a complex sphere manifold and a complex oblique manifold, respectively, which encode the inherent geometric structure and the required constraints. For the second problem, a smooth double manifold alternating optimization (S-DOMALO) algorithm is then developed based on a Dinkelbach-type algorithm and a smooth exponential penalty function. Finally, the possible cooperative beamforming gain between IRSs and IRS phase shifts with limited resolution are studied, providing a reference for practical implementation. Numerical results show that our proposed algorithms can significantly outperform the benchmark schemes.

In this paper, we present numerical procedures to compute solutions of partial differential equations posed on fractals. In particular, we consider the strong form of the equation using standard graph Laplacian matrices and also weak forms of the equation derived using standard length or area measure on a discrete approximation of the fractal set. We then introduce a numerical procedure to normalize the obtained diffusions, that is, a way to compute the renormalization constant needed in the definitions of the actual partial differential equation on the fractal set. A particular case that is studied in detail is the solution of the Dirichlet problem in the Sierpinski triangle. Other examples are also presented including a non-planar Hata tree.
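
As a minimal sketch of the strong-form approach on the Sierpinski triangle (using the plain unnormalized graph Laplacian, i.e. without the renormalization constant discussed above; the level-n graph construction below is a standard approximation, not necessarily the paper's exact discretization), one can build the level-n gasket graph, assemble L = D - A, and solve the Dirichlet problem with the three corners as boundary:

```python
import numpy as np

def sierpinski_edges(level):
    """Edges of the level-`level` graph approximation of the Sierpinski
    gasket: recursively keep the three corner sub-triangles and connect
    the vertices of each smallest cell."""
    p0, p1, p2 = (0.0, 0.0), (1.0, 0.0), (0.5, np.sqrt(3) / 2)
    edges = set()

    def mid(a, b):
        return ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)

    def rec(a, b, c, lvl):
        if lvl == 0:
            for u, v in ((a, b), (b, c), (a, c)):
                edges.add(tuple(sorted((u, v))))
        else:
            ab, bc, ca = mid(a, b), mid(b, c), mid(c, a)
            rec(a, ab, ca, lvl - 1)
            rec(ab, b, bc, lvl - 1)
            rec(ca, bc, c, lvl - 1)

    rec(p0, p1, p2, level)
    verts = sorted({v for e in edges for v in e})
    index = {v: i for i, v in enumerate(verts)}
    return verts, [(index[u], index[v]) for u, v in edges]

def dirichlet_solve(level, boundary_values):
    """Solve L u = 0 on interior vertices, with the three corner vertices
    held at the prescribed boundary values (graph Laplacian L = D - A)."""
    verts, edges = sierpinski_edges(level)
    n = len(verts)
    L = np.zeros((n, n))
    for i, j in edges:
        L[i, i] += 1; L[j, j] += 1
        L[i, j] -= 1; L[j, i] -= 1
    corners = [verts.index(c) for c in
               [(0.0, 0.0), (1.0, 0.0), (0.5, np.sqrt(3) / 2)]]
    u = np.zeros(n)
    for c, val in zip(corners, boundary_values):
        u[c] = val
    interior = [i for i in range(n) if i not in corners]
    A = L[np.ix_(interior, interior)]
    b = -L[np.ix_(interior, corners)] @ np.array(boundary_values)
    u[interior] = np.linalg.solve(A, b)
    return verts, u
```

The solution obeys the discrete maximum principle, so its values stay between the minimum and maximum boundary values; refining `level` approaches the harmonic function on the gasket.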

This paper describes a suite of algorithms for constructing low-rank approximations of an input matrix from a random linear image of the matrix, called a sketch. These methods can preserve structural properties of the input matrix, such as positive-semidefiniteness, and they can produce approximations with a user-specified rank. The algorithms are simple, accurate, numerically stable, and provably correct. Moreover, each method is accompanied by an informative error bound that allows users to select parameters a priori to achieve a given approximation quality. These claims are supported by numerical experiments with real and synthetic data.
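
For orientation, here is a generic sketch-based low-rank approximation in the spirit of this family of methods (a simplified illustration in the style of the randomized range finder, not one of the paper's specific algorithms; note it revisits the input matrix to form the small factor, whereas true single-pass methods work from the sketch alone):

```python
import numpy as np

def sketch_low_rank(A, rank, oversample=10, rng=None):
    """Low-rank approximation from a random linear sketch Y = A @ Omega.
    Returns factors (U, s, Vt) of a rank-`rank` approximation
    A ~= U @ diag(s) @ Vt. Generic illustration only."""
    rng = np.random.default_rng() if rng is None else rng
    m, n = A.shape
    Omega = rng.normal(size=(n, rank + oversample))  # random test matrix
    Y = A @ Omega                                    # the sketch
    Q, _ = np.linalg.qr(Y)                           # orthonormal range basis
    B = Q.T @ A                                      # small (k+p) x n factor
    # truncate to the requested rank via SVD of the small matrix B
    U, s, Vt = np.linalg.svd(B, full_matrices=False)
    return Q @ U[:, :rank], s[:rank], Vt[:rank]
```

The oversampling parameter trades a slightly larger sketch for a sharper, a priori predictable error bound, which is the kind of user-facing guarantee the abstract emphasizes.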
