
In the approximation of functions from point values, least-squares methods provide more stability than interpolation, at the expense of an increased sampling budget. We show that near-optimal approximation error can nevertheless be achieved, in an expected $L^2$ sense, as soon as the sample size $m$ exceeds the dimension $n$ of the approximation space by a constant ratio. For $m=n$, on the other hand, we obtain an interpolation strategy with a stability factor of order $n$. The proposed sampling algorithms are greedy procedures based on arXiv:0808.0163 and arXiv:1508.03261, with polynomial computational complexity.
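
As a rough illustration of the stability gap (not the paper's greedy sampling procedure, which selects points far more carefully), the sketch below compares interpolation ($m=n$) with an oversampled least-squares fit ($m=2n$) in a Legendre polynomial space, using i.i.d. uniform samples; the target function f and all sizes are arbitrary choices for the example.

```python
# Minimal sketch: interpolation vs. oversampled least squares in a Legendre space.
import numpy as np
from numpy.polynomial import legendre

rng = np.random.default_rng(0)
f = lambda x: np.exp(x) * np.sin(5 * x)   # illustrative target function
n = 20                                     # dimension of the approximation space

def fit(m):
    """Least-squares fit of f in span{P_0,...,P_{n-1}} from m random samples."""
    x = rng.uniform(-1.0, 1.0, m)
    V = legendre.legvander(x, n - 1)       # m x n sampled-basis matrix
    coef, *_ = np.linalg.lstsq(V, f(x), rcond=None)
    return coef, np.linalg.cond(V)

for m in (n, 2 * n):                       # m = n: interpolation; m = 2n: oversampled
    coef, cond = fit(m)
    xt = np.linspace(-1, 1, 2000)
    err = np.max(np.abs(f(xt) - legendre.legval(xt, coef)))
    print(f"m = {m:3d}  cond(V) = {cond:9.2e}  sup-error = {err:.2e}")
```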

Related content

Kinetic approaches are generally accurate in dealing with microscale plasma physics problems but are computationally expensive for large-scale or multiscale systems. One of the long-standing problems in plasma physics is the integration of kinetic physics into fluid models, which is often achieved through sophisticated analytical closure terms. In this paper, we successfully construct a multi-moment fluid model whose implicit fluid closure is learned by a neural network. The multi-moment fluid model is trained with a small fraction of sparsely sampled data from kinetic simulations of Landau damping, using the physics-informed neural network (PINN) and the gradient-enhanced physics-informed neural network (gPINN). The multi-moment fluid model constructed with either PINN or gPINN reproduces the time evolution of the electric field energy, including its damping rate, and the plasma dynamics of the kinetic simulations. In addition, we introduce a variant of the gPINN architecture, gPINN$p$, to capture the Landau damping process: instead of including the gradients of all the equation residuals, gPINN$p$ adds only the gradient of the pressure-equation residual as one additional constraint. Among the three approaches, the gPINN$p$-constructed multi-moment fluid model offers the most accurate results. This work sheds light on the accurate and efficient modeling of large-scale systems, and it can be extended to complex multiscale laboratory, space, and astrophysical plasma physics problems.
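
The distinction between the three loss constructions can be made concrete with a small sketch. The code below is not the paper's implementation: the residual arrays r_cont, r_mom, r_pres (continuity, momentum, and pressure-equation residuals at collocation points) and the weight w are hypothetical placeholders, and in practice the residual gradients would come from automatic differentiation of the network rather than np.gradient.

```python
# Schematic comparison of the PINN, gPINN and gPINNp loss constructions,
# assuming equation residuals already evaluated on a uniform 1D grid.
import numpy as np

def pinn_losses(r_cont, r_mom, r_pres, dx, w=1.0):
    mse = lambda r: np.mean(r**2)
    base = mse(r_cont) + mse(r_mom) + mse(r_pres)              # plain PINN loss
    all_grads = sum(mse(np.gradient(r, dx))
                    for r in (r_cont, r_mom, r_pres))
    loss_pinn   = base
    loss_gpinn  = base + w * all_grads                         # gradients of all residuals
    loss_gpinnp = base + w * mse(np.gradient(r_pres, dx))      # pressure residual only
    return loss_pinn, loss_gpinn, loss_gpinnp
```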

The power of Clifford, or geometric, algebra lies in its ability to represent geometric operations in a concise and elegant manner. Clifford algebras provide natural generalizations of complex numbers, dual numbers, and quaternions into non-commutative multivectors. This paper demonstrates an algorithm for computing inverses of such numbers in a non-degenerate Clifford algebra of arbitrary dimension. The algorithm is a variation of the Faddeev-LeVerrier-Souriau algorithm and is implemented in the open-source Computer Algebra System Maxima. Symbolic and numerical examples in different Clifford algebras are presented.
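
For orientation, here is the classical matrix form of the Faddeev-LeVerrier recursion; the paper's contribution is the analogous recursion for multivectors, where the matrix trace is replaced by a suitable Clifford-algebra operation, so this sketch should be read only as the scaffold the algorithm builds on.

```python
# Matrix version of the Faddeev-LeVerrier recursion for the inverse.
import numpy as np

def fls_inverse(A):
    """Inverse of A from the characteristic-polynomial recursion."""
    n = A.shape[0]
    N = np.eye(n)                      # N_0 = I
    for k in range(1, n + 1):
        M = A @ N
        c = -np.trace(M) / k           # k-th characteristic coefficient
        if k == n:
            if c == 0:                 # c_n = (-1)^n det(A): singular if zero
                raise ValueError("matrix is singular")
            return -N / c              # A^{-1} = -N_{n-1} / c_n
        N = M + c * np.eye(n)

A = np.array([[2.0, 1.0], [1.0, 3.0]])
print(fls_inverse(A) @ A)              # ≈ identity
```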

In the general setting of long-memory multivariate time series, the long-memory characteristics are defined by two components: the long-memory parameters, which describe the autocorrelation of each time series, and the long-run covariance, which measures the coupling between time series through general phase parameters. It is of interest to estimate the long-memory, long-run covariance, and general phase parameters of time series generated by this wide class of models, even though they are not necessarily Gaussian or stationary. Such estimation is therefore not directly possible with real wavelet decompositions or Fourier analysis. Our purpose is to define an inference approach based on a representation using quasi-analytic wavelets. We first show that the covariance of the wavelet coefficients provides an adequate estimator of the covariance structure, including the phase term. Consistent estimators based on a local Whittle approximation are then proposed. Simulations highlight the satisfactory behavior of the estimation on finite samples of linear time series and of multivariate fractional Brownian motions. An application to a real neuroscience dataset is presented, in which long memory and brain connectivity are inferred.
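
For intuition about the local Whittle step, here is the classical univariate, Fourier-based estimator of a single memory parameter d; the paper's estimators instead work on quasi-analytic wavelet coefficients and jointly handle the multivariate long-run covariance and phase, so this sketch covers only the simplest ingredient. The bandwidth m and the bounds on d are illustrative.

```python
# Classical univariate local Whittle estimator of the memory parameter d.
import numpy as np
from scipy.optimize import minimize_scalar

def local_whittle_d(x, m):
    """Estimate d from the first m Fourier frequencies of the periodogram."""
    n = len(x)
    lam = 2 * np.pi * np.arange(1, m + 1) / n
    I = np.abs(np.fft.fft(x)[1:m + 1])**2 / (2 * np.pi * n)
    R = lambda d: np.log(np.mean(lam**(2 * d) * I)) - 2 * d * np.mean(np.log(lam))
    return minimize_scalar(R, bounds=(-0.49, 0.99), method="bounded").x

rng = np.random.default_rng(6)
print(local_whittle_d(rng.standard_normal(4096), m=128))   # ≈ 0 for white noise
```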

This paper investigates learning operators for identifying weak solutions to the Navier-Stokes equations. Our objective is to establish a connection between the initial data as input and the weak solution as output. To achieve this, we employ a combination of deep learning methods and compactness arguments to derive learning operators for weak solutions for any large initial data in 2D, and for low-dimensional initial data in 3D. Additionally, we utilize the universal approximation theorem to derive a lower bound on the number of sensors required to achieve accurate identification of weak solutions. Our results demonstrate the potential of deep learning techniques for addressing challenges in fluid mechanics, particularly in identifying weak solutions to the Navier-Stokes equations.
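
A branch-trunk (DeepONet-style) skeleton makes the input-to-output mapping concrete. This is only a hedged sketch, not the paper's construction: all sizes (m_sensors, the latent dimension p, layer widths) are illustrative assumptions, and the network is untrained. The sensor count m_sensors is the quantity that the paper's lower bound concerns.

```python
# Untrained branch-trunk skeleton mapping sensor readings of the initial
# data plus a space-time point to a scalar solution value.
import numpy as np

rng = np.random.default_rng(1)

def init(sizes):
    """Random MLP parameters for layer sizes [in, hidden, ..., out]."""
    return [(rng.normal(0, 1 / np.sqrt(m), (m, n)), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def mlp(params, x):
    for W, b in params[:-1]:
        x = np.tanh(x @ W + b)
    W, b = params[-1]
    return x @ W + b

m_sensors, p = 64, 32                 # sensor count and latent dimension (assumed)
branch = init([m_sensors, 64, p])     # encodes initial data sampled at the sensors
trunk  = init([3, 64, p])             # encodes a space-time point (x1, x2, t) in 2D

def G(u0_at_sensors, xyt):
    """Operator surrogate: G(u0)(x, t) ~ <branch(u0), trunk(x, t)>."""
    return mlp(branch, u0_at_sensors) @ mlp(trunk, xyt)

print(G(rng.standard_normal(m_sensors), np.array([0.5, 0.5, 0.1])))
```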

The aim of this work is to present a model reduction technique in the framework of optimal control problems for partial differential equations. We combine two approaches for reducing the computational cost of numerical models: domain-decomposition (DD) methods and reduced-order modelling (ROM). In particular, we consider an optimisation-based domain-decomposition algorithm for the parameter-dependent stationary incompressible Navier-Stokes equations. First, the problem is described on subdomains coupled at the interface and solved through an optimal control problem, which leads to the complete separation of the subdomain problems in the DD method. On top of that, a reduced model for the resulting optimal control problem is built; the procedure is based on the Proper Orthogonal Decomposition technique and a further Galerkin projection. The presented methodology is tested on two fluid dynamics benchmarks: the stationary backward-facing step and the lid-driven cavity flow. The numerical tests show a significant reduction of the computational costs, in terms of both the problem dimensions and the number of optimisation iterations in the domain-decomposition algorithm.
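
The ROM building block can be sketched generically: collect high-fidelity snapshots, extract a POD basis by SVD, and Galerkin-project the full-order system. The code below is a hedged illustration on a random linear system, not the paper's Navier-Stokes/optimal-control pipeline; the function pod_basis, the tolerance, and all dimensions are assumptions made for the example.

```python
# POD basis extraction and Galerkin projection on a generic linear system.
import numpy as np

rng = np.random.default_rng(5)

def pod_basis(S, tol=1e-6):
    """POD basis from a snapshot matrix S (one high-fidelity solution per
    column); keep the leading modes carrying a 1 - tol fraction of energy."""
    U, s, _ = np.linalg.svd(S, full_matrices=False)
    energy = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(energy, 1.0 - tol)) + 1
    return U[:, :r]

S = rng.standard_normal((1000, 8)) @ rng.standard_normal((8, 40))  # 40 snapshots
V = pod_basis(S)
A = np.eye(1000) + 0.01 * rng.standard_normal((1000, 1000))        # full-order operator
f = rng.standard_normal(1000)
u_r = np.linalg.solve(V.T @ A @ V, V.T @ f)    # small r x r reduced system
u = V @ u_r                                    # lift back to full order
print(V.shape, np.linalg.norm(A @ u - f))
```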

We provide a framework for the numerical approximation of distributed optimal control problems, based on least-squares finite element methods. Our proposed method simultaneously solves the state and adjoint equations and is $\inf$--$\sup$ stable for any choice of conforming discretization spaces. A reliable and efficient a posteriori error estimator is derived for problems where box constraints are imposed on the control. It can be localized and therefore used to steer an adaptive algorithm. For unconstrained optimal control problems, i.e., the set of controls being a Hilbert space, we obtain a coercive least-squares method and, in particular, quasi-optimality for any choice of discrete approximation space. For constrained problems we derive and analyze a variational inequality where the PDE part is tackled by least-squares finite element methods. We show that the abstract framework can be applied to a wide range of problems, including scalar second-order PDEs, the Stokes problem, and parabolic problems on space-time domains. Numerical examples for some selected problems are presented.
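
To see what simultaneously solving the state and adjoint equations means in the simplest unconstrained case, consider the 1D distributed control problem min ½||y − y_d||² + (α/2)||u||² subject to −y'' = u + f with homogeneous Dirichlet conditions; eliminating the control via u = p/α leaves one coupled linear system for the pair (y, p). The finite-difference sketch below is a hedged stand-in: the paper uses least-squares finite elements, not finite differences, and adds constraints and a posteriori estimation on top.

```python
# One-shot solve of the coupled state-adjoint optimality system in 1D.
import numpy as np

n, alpha = 199, 1e-2                     # interior grid points, control cost (assumed)
h = 1.0 / (n + 1)
x = np.linspace(h, 1 - h, n)
L = (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2   # -d^2/dx^2
I = np.eye(n)
f, y_d = np.zeros(n), np.sin(np.pi * x)  # source and target state (illustrative)

# Coupled optimality system: L y - p/alpha = f,  L p + y = y_d.
A = np.block([[L, -I / alpha], [I, L]])
y, p = np.split(np.linalg.solve(A, np.concatenate([f, y_d])), 2)
u = p / alpha                            # optimal control recovered from the adjoint
```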

We consider two-phase fluid deformable surfaces as model systems for biomembranes. Such surfaces are modeled by incompressible surface Navier-Stokes-Cahn-Hilliard-like equations with bending forces. We derive this model using the Lagrange-D'Alembert principle considering various dissipation mechanisms. The highly nonlinear model is solved numerically to explore the tight interplay between surface evolution, surface phase composition, surface curvature and surface hydrodynamics. It is demonstrated that hydrodynamics can enhance bulging and furrow formation, both of which can further develop into pinch-offs. The numerical approach builds on a Taylor-Hood element for the surface Navier-Stokes part, a semi-implicit approach for the Cahn-Hilliard part, higher order surface parametrizations, appropriate approximations of the geometric quantities, and mesh redistribution. We demonstrate convergence properties that are known to be optimal for simplified sub-problems.
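
A flat, one-dimensional periodic toy illustrates the semi-implicit treatment of the Cahn-Hilliard part: the stiff biharmonic term is taken implicitly and the nonlinearity explicitly, here via a pseudospectral discretisation. All parameters are illustrative assumptions, and none of the surface machinery (Taylor-Hood elements, surface parametrizations, mesh redistribution) is reproduced.

```python
# Semi-implicit pseudospectral step for c_t = Lap(c^3 - c) - eps^2 Lap^2 c
# on a flat periodic interval (toy for the Cahn-Hilliard splitting).
import numpy as np

N, eps, tau = 256, 0.05, 1e-4
k = np.fft.fftfreq(N, d=2 * np.pi / N) * 2 * np.pi   # angular wavenumbers
rng = np.random.default_rng(2)
c = 0.1 * rng.standard_normal(N)                     # random initial mixture

for _ in range(2000):
    mu_hat = -k**2 * np.fft.fft(c**3 - c)            # explicit nonlinear part
    c = np.real(np.fft.ifft((np.fft.fft(c) + tau * mu_hat)
                            / (1.0 + tau * eps**2 * k**4)))  # implicit biharmonic part
```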

This paper studies the identification of a linear combination of point sources from a finite number of measurements. Since the data are typically contaminated by Gaussian noise, a statistical framework for their recovery is considered. It relies on two main ingredients: first, a convex but non-smooth Tikhonov point estimator over the space of Radon measures and, second, a suitable mean-squared error based on its Hellinger-Kantorovich distance to the ground truth. Assuming standard non-degenerate source conditions and applying careful linearization arguments, a computable upper bound on the latter is derived. On the one hand, this allows us to derive asymptotic convergence results for the mean-squared error of the estimator in the small-variance case. On the other hand, it paves the way for applying optimal sensor placement approaches to sparse inverse problems.
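
A grid-discretised toy of the non-smooth Tikhonov estimator may help fix ideas: restricting the sources to a fine grid collapses the Radon-measure problem to an l1-regularised least-squares (LASSO) problem solvable with ISTA. All parameters (kernel width, noise level, regularisation weight beta) are illustrative assumptions; the paper itself works in the continuous, grid-free setting with the Hellinger-Kantorovich error.

```python
# Grid LASSO toy for point-source recovery from noisy Gaussian measurements.
import numpy as np

rng = np.random.default_rng(3)
grid = np.linspace(0, 1, 200)
K = np.exp(-(np.linspace(0, 1, 30)[:, None] - grid[None, :])**2 / (2 * 0.05**2))
mu_true = np.zeros(200); mu_true[[50, 130]] = [1.0, -0.7]   # two point sources
y = K @ mu_true + 0.01 * rng.standard_normal(30)            # noisy measurements

beta, step = 0.05, 1.0 / np.linalg.norm(K, 2)**2            # ISTA step = 1/L
mu = np.zeros(200)
for _ in range(5000):
    g = mu - step * K.T @ (K @ mu - y)                      # gradient step
    mu = np.sign(g) * np.maximum(np.abs(g) - step * beta, 0)  # soft threshold
print(np.flatnonzero(np.abs(mu) > 1e-3))                    # recovered support
```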

We develop a numerical method for computing with polynomials that are orthogonal on multiple, disjoint intervals, for which analytical formulae are currently unknown. Our approach exploits the Fokas--Its--Kitaev Riemann--Hilbert representation of the orthogonal polynomials to produce an $\text{O}(N)$ method to compute the first $N$ recurrence coefficients. The method can also be used for pointwise evaluation of the polynomials and their Cauchy transforms throughout the complex plane. The method encodes the singularity behavior of weight functions using weighted Cauchy integrals of Chebyshev polynomials. This greatly improves the efficiency of the method, outperforming other available techniques. We demonstrate the fast convergence of our method and present applications to integrable systems and approximation theory.
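
As a point of reference, recurrence coefficients for such weights can also be computed by a discretised Stieltjes procedure, one of the alternative techniques the Riemann-Hilbert approach is compared against. The sketch below assumes the weight w(x) = 1 on two disjoint intervals chosen for illustration; it is not the paper's method.

```python
# Discretised Stieltjes procedure for recurrence coefficients of polynomials
# orthonormal w.r.t. w(x) = 1 on two disjoint intervals.
import numpy as np

N = 20
nodes, weights = [], []
for a, b in [(-2.0, -1.0), (1.0, 2.0)]:            # two disjoint intervals
    xg, wg = np.polynomial.legendre.leggauss(200)  # Gauss-Legendre per interval
    nodes.append(0.5 * (b - a) * xg + 0.5 * (a + b))
    weights.append(0.5 * (b - a) * wg)
x, w = np.concatenate(nodes), np.concatenate(weights)

alpha, beta = np.zeros(N), np.zeros(N)             # beta[k] = b_{k+1}^2 (off-diagonal)
p_prev, p = np.zeros_like(x), np.ones_like(x) / np.sqrt(w.sum())
for k in range(N):
    alpha[k] = w @ (x * p * p)
    q = (x - alpha[k]) * p - (np.sqrt(beta[k - 1]) if k > 0 else 0.0) * p_prev
    beta[k] = w @ (q * q)
    p_prev, p = p, q / np.sqrt(beta[k])

print(alpha[:3], beta[:3])   # alpha ≈ 0 here by the symmetry of the intervals
```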

Developing an efficient computational scheme for high-dimensional Bayesian variable selection in generalised linear models and survival models has long been a challenging problem, owing to the absence of closed-form solutions for the marginal likelihood. The RJMCMC approach can be employed to sample models and coefficients jointly, but effective design of its transdimensional jumps can be challenging, making it hard to implement. Alternatively, the marginal likelihood can be derived using a data-augmentation scheme (e.g., Polya-gamma data augmentation for logistic regression) or through other estimation methods. However, suitable data-augmentation schemes are not available for every generalised linear or survival model, and using estimates such as the Laplace approximation or correlated pseudo-marginal methods to derive the marginal likelihood within a locally informed proposal can be computationally expensive in "large n, large p" settings. This paper makes three main contributions. First, we present an extended Point-wise implementation of the Adaptive Random Neighbourhood Informed proposal (PARNI) to efficiently sample models directly from the marginal posterior distribution in both generalised linear models and survival models. Second, in the light of the approximate Laplace approximation, we describe an efficient and accurate estimation method for the marginal likelihood that involves adaptive parameters. Third, we describe a new method for adapting the algorithmic tuning parameters of the PARNI proposal, replacing the Rao-Blackwellised estimates with a combination of a warm-start estimate and an ergodic average. We present numerous numerical results from simulated data and eight high-dimensional gene fine-mapping datasets to showcase the efficiency of the novel PARNI proposal compared with the baseline add-delete-swap proposal.
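
For context, the baseline add-delete-swap proposal is easy to sketch as a Metropolis-Hastings sampler on the model space {0,1}^p. Below, log_post is a hypothetical stand-in for a real log marginal posterior (in the paper it would come from a GLM or survival marginal likelihood); only the move mechanics and the Hastings correction for the neighbourhood sizes are the point of the sketch.

```python
# Add-delete-swap Metropolis-Hastings over models gamma in {0,1}^p.
import numpy as np

rng = np.random.default_rng(4)
p = 50
signal = np.zeros(p); signal[:5] = 3.0            # five "true" variables (toy)

def log_post(gamma):                              # placeholder, not the paper's
    return gamma @ signal - 2.0 * gamma.sum()     # reward signal, penalise size

gamma = np.zeros(p, dtype=int)
for it in range(10000):
    prop = gamma.copy()
    move = rng.choice(["add", "delete", "swap"])
    ones, zeros = np.flatnonzero(gamma), np.flatnonzero(1 - gamma)
    log_q_ratio = 0.0                             # Hastings neighbourhood correction
    if move == "add" and len(zeros):
        prop[rng.choice(zeros)] = 1
        log_q_ratio = np.log(len(zeros)) - np.log(len(ones) + 1)
    elif move == "delete" and len(ones):
        prop[rng.choice(ones)] = 0
        log_q_ratio = np.log(len(ones)) - np.log(len(zeros) + 1)
    elif move == "swap" and len(ones) and len(zeros):
        prop[rng.choice(ones)] = 0; prop[rng.choice(zeros)] = 1
    if np.log(rng.uniform()) < log_post(prop) - log_post(gamma) + log_q_ratio:
        gamma = prop

print(np.flatnonzero(gamma))                      # should concentrate on 0..4
```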
