
Most ordinary differential equation (ODE) models used to describe biological or physical systems must be solved approximately using numerical methods. Perniciously, even those solvers which seem sufficiently accurate for the forward problem, i.e., for obtaining an accurate simulation, may not be sufficiently accurate for the inverse problem, i.e., for inferring the model parameters from data. We show that for both fixed step and adaptive step ODE solvers, solving the forward problem with insufficient accuracy can distort likelihood surfaces, which may become jagged, causing inference algorithms to get stuck in local "phantom" optima. We demonstrate that biases in inference arising from numerical approximation of ODEs are potentially most severe in systems involving low noise and rapid nonlinear dynamics. We reanalyze an ODE changepoint model previously fit to the COVID-19 outbreak in Germany and show the effect of the step size on simulation and inference results. We then fit a more complicated rainfall-runoff model to hydrological data and illustrate the importance of tuning solver tolerances to avoid distorted likelihood surfaces. Our results indicate that when performing inference for ODE model parameters, adaptive step size solver tolerances must be set cautiously and likelihood surfaces should be inspected for characteristic signs of numerical issues.
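The distortion described above is easy to reproduce. The following sketch (the logistic model, parameter values, and noise level are illustrative assumptions, not the paper's COVID-19 or rainfall-runoff models) compares a likelihood profile computed with loose versus tight adaptive solver tolerances:

```python
# Sketch: how loose adaptive tolerances can distort an ODE likelihood surface.
# We fit the growth rate r of a logistic ODE dy/dt = r*y*(1 - y/K) to
# synthetic data and evaluate the log-likelihood on a grid of r values
# using a loose and a tight relative tolerance for the forward solves.
import numpy as np
from scipy.integrate import solve_ivp

K, y0, sigma = 10.0, 0.5, 0.05          # assumed carrying capacity, initial state, noise sd
ts = np.linspace(0.0, 8.0, 40)

def simulate(r, rtol):
    sol = solve_ivp(lambda t, y: r * y * (1 - y / K), (0.0, ts[-1]), [y0],
                    t_eval=ts, rtol=rtol, atol=rtol * 1e-2)
    return sol.y[0]

rng = np.random.default_rng(0)
data = simulate(1.3, 1e-12) + sigma * rng.normal(size=ts.size)

def loglik(r, rtol):
    resid = data - simulate(r, rtol)
    return -0.5 * np.sum((resid / sigma) ** 2)

grid = np.linspace(1.25, 1.35, 101)
loose = np.array([loglik(r, 1e-2) for r in grid])   # crude forward solves
tight = np.array([loglik(r, 1e-10) for r in grid])  # accurate forward solves
```

Plotting `loose` and `tight` against `grid` typically shows the characteristic symptom: a jagged profile with spurious local optima under the loose tolerance versus a smooth profile under the tight one.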

Related content

The Bayesian inference approach is widely used to tackle inverse problems due to its versatile and natural ability to handle ill-posedness. However, it often faces challenges when dealing with continuous fields or high-resolution discrete representations (high-dimensional settings). Moreover, the prior distribution of the unknown parameters is often difficult to determine. In this study, an Operator Learning-based Generative Adversarial Network (OL-GAN) is proposed and integrated into the Bayesian inference framework to handle these issues. Unlike most Bayesian approaches, the distinctive characteristic of the proposed method is that it learns the joint distribution of parameters and responses. By leveraging the trained generative model, the posteriors of the unknown parameters can in principle be approximated by any sampling algorithm (e.g., Markov chain Monte Carlo, MCMC) in a low-dimensional latent space shared by the components of the joint distribution. The latent space typically carries a simple, easy-to-sample distribution (e.g., Gaussian, uniform), which significantly reduces the computational cost of Bayesian inference while avoiding prior-selection concerns. Furthermore, incorporating operator learning makes the generator resolution-independent: predictions can be obtained at arbitrary coordinates, and inversions can be performed even if the observation data are misaligned with the training data. Finally, the effectiveness of the proposed method is validated through several numerical experiments.
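The key sampling step can be sketched in a few lines. Below, the "generator" is a dummy linear stand-in for a trained OL-GAN (all names, the linear map, and the noise level are illustrative assumptions): once a generator maps a latent vector to a joint sample of parameters and responses, posterior sampling reduces to random-walk Metropolis-Hastings in the low-dimensional latent space.

```python
# Sketch: MCMC in the latent space of a (dummy) generative model.
# generator(z) plays the role of a trained network mapping latent z
# to a joint sample (parameter, response); here it is a toy linear map.
import numpy as np

rng = np.random.default_rng(0)

def generator(z):                        # dummy G: latent -> (parameter, response)
    theta = 2.0 * z[0]
    response = theta + 0.5 * z[1]
    return theta, response

y_obs, noise_sd = 1.0, 0.1               # assumed observation and noise level

def log_target(z):                       # Gaussian latent prior + data misfit
    _, y = generator(z)
    return -0.5 * np.sum(z ** 2) - 0.5 * ((y - y_obs) / noise_sd) ** 2

z = np.zeros(2)
samples = []
for _ in range(5000):                    # random-walk Metropolis-Hastings
    prop = z + 0.3 * rng.normal(size=2)
    if np.log(rng.uniform()) < log_target(prop) - log_target(z):
        z = prop
    samples.append(generator(z)[0])      # record the implied parameter

post_mean = np.mean(samples[1000:])      # posterior mean after burn-in
```

Because the chain runs in the latent space rather than in the (possibly high-dimensional) parameter field itself, the same recipe applies unchanged when the generator outputs a discretized field.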

This paper proposes a hierarchy of numerical fluxes for the compressible flow equations which are kinetic-energy and pressure equilibrium preserving and asymptotically entropy conservative, i.e., they are able to arbitrarily reduce the numerical error on entropy production due to the spatial discretization. The fluxes are based on the use of the harmonic mean for internal energy and only use algebraic operations, making them less computationally expensive than the entropy-conserving fluxes based on the logarithmic mean. The use of the geometric mean is also explored and identified to be well-suited to reduce errors on entropy evolution. Results of numerical tests confirm the theoretical predictions, and the entropy-conserving capabilities of a selection of schemes are compared.
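For reference, the means in question for two interface states $a_L$ and $a_R$ (notation here is generic, not the paper's) are

```latex
\bar a = \tfrac{1}{2}(a_L + a_R), \qquad
a_{\mathrm{geo}} = \sqrt{a_L a_R}, \qquad
a_{\mathrm{har}} = \frac{2\,a_L a_R}{a_L + a_R}, \qquad
a_{\mathrm{log}} = \frac{a_L - a_R}{\ln a_L - \ln a_R},
```

only the last of which requires transcendental evaluations (and a careful series expansion when $a_L \approx a_R$), which is why harmonic- and geometric-mean fluxes are cheaper than logarithmic-mean entropy-conservative fluxes.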

We discuss a system of stochastic differential equations with a stiff linear term and additive noise driven by fractional Brownian motions (fBms) with Hurst parameter H>1/2, which arise, e.g., from spatial approximations of stochastic partial differential equations. For their numerical approximation, we present an exponential Euler scheme and show that it converges in the strong sense with an exact rate close to the Hurst parameter H. Further, based on [2], we conclude the existence of a unique stationary solution of the exponential Euler scheme that is pathwise asymptotically stable.
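For orientation, a typical exponential Euler step for $\mathrm{d}X_t = \big(-A X_t + f(X_t)\big)\,\mathrm{d}t + \mathrm{d}B^H_t$ with step size $h$ reads (this is one common variant, written schematically; the paper's precise scheme may differ)

```latex
X_{k+1} = e^{-hA}\left(X_k + h\, f(X_k) + \Delta B^H_k\right), \qquad
\Delta B^H_k = B^H_{t_{k+1}} - B^H_{t_k},
```

where the matrix exponential $e^{-hA}$ integrates the stiff linear part exactly, so the admissible step size is not constrained by the stiffness of $A$.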

Polynomial neural networks and polynomial neural ordinary differential equations (ODEs) are two recent and powerful symbolic-regression approaches for equation recovery in many science and engineering problems. However, these methods provide point estimates for the model parameters and are currently unable to accommodate noisy data. We address this challenge by developing and validating the following Bayesian inference methods: the Laplace approximation, Markov chain Monte Carlo (MCMC) sampling, and variational inference. We have found the Laplace approximation to be the best method for this class of problems. Our work can be easily extended to the broader class of symbolic neural networks to which the polynomial neural network belongs.
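The Laplace approximation itself is simple to illustrate. The toy model below (a two-parameter linear fit, not the paper's polynomial neural ODE; all values are assumptions) approximates the posterior by a Gaussian centered at the MAP estimate, with covariance equal to the inverse Hessian of the negative log posterior:

```python
# Minimal Laplace-approximation sketch on a toy linear model
# y = theta0 + theta1 * x with Gaussian noise and a Gaussian prior.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
x = np.linspace(-1, 1, 50)
y = 0.7 + 2.0 * x + 0.1 * rng.normal(size=x.size)
sigma, tau = 0.1, 10.0                   # assumed noise sd and prior sd

def neg_log_post(theta):
    resid = y - (theta[0] + theta[1] * x)
    return 0.5 * np.sum((resid / sigma) ** 2) + 0.5 * np.sum((theta / tau) ** 2)

map_est = minimize(neg_log_post, np.zeros(2)).x   # MAP via quasi-Newton

def hessian_fd(f, t, eps=1e-5):          # central finite-difference Hessian
    n = t.size
    H = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            t1 = t.copy(); t1[i] += eps; t1[j] += eps
            t2 = t.copy(); t2[i] += eps; t2[j] -= eps
            t3 = t.copy(); t3[i] -= eps; t3[j] += eps
            t4 = t.copy(); t4[i] -= eps; t4[j] -= eps
            H[i, j] = (f(t1) - f(t2) - f(t3) + f(t4)) / (4 * eps ** 2)
    return H

cov = np.linalg.inv(hessian_fd(neg_log_post, map_est))
# Posterior ~ N(map_est, cov); the diagonal of cov gives parameter variances.
```

For this linear-Gaussian toy problem the Laplace approximation is exact, which makes it a convenient sanity check before applying the same recipe to nonlinear models.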

Partial differential equations (PDEs) have become an essential tool for modeling complex physical systems. Such equations are typically solved numerically via mesh-based methods, such as finite element methods, the outputs of which consist of the solutions on a set of mesh nodes over the spatial domain. However, these simulations are often prohibitively costly, making it impractical to survey the input space. In this paper, we propose an efficient emulator that simultaneously predicts the outputs on a set of mesh nodes, with theoretical justification of its uncertainty quantification. The novelty of the proposed method lies in the incorporation of the mesh node coordinates into the statistical model. In particular, the proposed method segments the mesh nodes into multiple clusters via a Dirichlet process prior and fits a Gaussian process model in each. Most importantly, by revealing the underlying clustering structures, the proposed method can extract valuable flow physics present in the systems that can be used to guide further investigations. Real examples demonstrate that our proposed method has smaller prediction errors than its main competitors, with competitive computation time, and identifies interesting clusters of mesh nodes that exhibit coherent input-output relationships and possess physical significance, such as satisfying boundary conditions. An R package for the proposed methodology is provided in an open repository.

Geometric quantiles are location parameters which extend classical univariate quantiles to normed spaces (possibly infinite-dimensional) and which include the geometric median as a special case. The infinite-dimensional setting is highly relevant in the modeling and analysis of functional data, as well as for kernel methods. We begin by providing new results on the existence and uniqueness of geometric quantiles. Estimation is then performed with an approximate M-estimator and we investigate its large-sample properties in infinite dimension. When the population quantile is not uniquely defined, we leverage the theory of variational convergence to obtain asymptotic statements on subsequences in the weak topology. When there is a unique population quantile, we show that the estimator is consistent in the norm topology for a wide range of Banach spaces including every separable uniformly convex space. In separable Hilbert spaces, we establish weak Bahadur-Kiefer representations of the estimator, from which $\sqrt n$-asymptotic normality follows.
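The finite-sample objective behind geometric quantiles is concrete enough to sketch. For a direction $u$ with $\|u\|<1$, the $u$-th sample geometric quantile minimizes $\sum_i \big(\|x_i - q\| + \langle u, x_i - q\rangle\big)$; the snippet below (a minimal gradient-descent illustration in $\mathbb{R}^2$, not the paper's approximate M-estimator in infinite dimension) recovers the geometric median at $u=0$ and a directional quantile for $u \neq 0$:

```python
# Sample geometric quantile in R^d via plain gradient descent.
# Objective: sum_i ||x_i - q|| + <u, x_i - q>, with gradient
# sum_i (q - x_i)/||q - x_i|| - n*u.
import numpy as np

def geometric_quantile(X, u, steps=2000, lr=0.05):
    q = X.mean(axis=0)                      # start from the sample mean
    n = X.shape[0]
    u = np.asarray(u, dtype=float)
    for _ in range(steps):
        diff = q - X                        # shape (n, d)
        norms = np.maximum(np.linalg.norm(diff, axis=1), 1e-12)
        grad = (diff / norms[:, None]).sum(axis=0) - n * u
        q = q - lr * grad / n               # averaged gradient step
    return q

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))               # toy data: standard normal cloud
median = geometric_quantile(X, [0.0, 0.0])  # u = 0 gives the geometric median
q_right = geometric_quantile(X, [0.8, 0.0]) # quantile pulled toward +x
```

For a centered cloud, `median` lands near the origin while `q_right` shifts outward in the $+x$ direction, mirroring how classical univariate quantiles move away from the median as the quantile level departs from 1/2.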

We introduce a new class of absorbing boundary conditions (ABCs) for the Helmholtz equation. The proposed ABCs can be derived from a certain simple class of perfectly matched layers using $L$ discrete layers and using the $Q_N$ Lagrange finite element in conjunction with the $N$-point Gauss-Legendre quadrature reduced integration rule. The proposed ABCs are classified by a tuple $(L,N)$, and achieve reflection error of order $O(R^{2LN})$ for some $R<1$. The new ABCs generalise the perfectly matched discrete layers proposed by Guddati and Lim [Int. J. Numer. Meth. Engng 66 (6) (2006) 949-977], including them as type $(L,1)$. An analysis of the proposed ABCs is performed motivated by the work of Ainsworth [J. Comput. Phys. 198 (1) (2004) 106-130]. The new ABCs facilitate numerical implementations of the Helmholtz problem with ABCs if $Q_N$ finite elements are used in the physical domain. Moreover, giving more insight, the analysis presented in this work potentially aids with developing ABCs in related areas.

We consider a linear implicit-explicit (IMEX) time discretization of the Cahn-Hilliard equation with a source term, endowed with Dirichlet boundary conditions. For every time step small enough, we build an exponential attractor of the discrete-in-time dynamical system associated with the discretization. We prove that, as the time step tends to 0, this attractor converges for the symmetric Hausdorff distance to an exponential attractor of the continuous-in-time dynamical system associated with the PDE. We also prove that the fractal dimension of the exponential attractor (and consequently, of the global attractor) is bounded by a constant independent of the time step. The results also apply to the classical Cahn-Hilliard equation with Neumann boundary conditions.
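Schematically, a generic first-order linear IMEX step for a Cahn-Hilliard equation $\partial_t u = \Delta\big(f(u) - \varepsilon^2 \Delta u\big) + g$ with time step $\tau$ treats the stiff fourth-order term implicitly and the nonlinearity explicitly (this is one common variant, shown only for orientation; the paper's exact scheme may differ):

```latex
\frac{u^{n+1} - u^n}{\tau} + \varepsilon^2 \Delta^2 u^{n+1} = \Delta f(u^n) + g^{n+1}.
```

Each step then requires solving only a linear, constant-coefficient problem, while the explicit treatment of $f$ avoids a nonlinear solve.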

We propose a general framework for solving forward and inverse problems constrained by partial differential equations, where we interpolate neural networks onto finite element spaces to represent the (partial) unknowns. The framework overcomes the challenges related to the imposition of boundary conditions, the choice of collocation points in physics-informed neural networks, and the integration of variational physics-informed neural networks. A set of numerical experiments confirms the framework's capability of handling various forward and inverse problems. In particular, the trained neural network generalises well for smooth problems, outperforming finite element solutions by several orders of magnitude. We finally propose an effective one-loop solver with an initial data-fitting step (to obtain a cheap initialisation) for solving inverse problems.

Differential geometric approaches are ubiquitous in several fields of mathematics, physics and engineering, and their discretizations enable the development of network-based mathematical and computational frameworks, which are essential for large-scale data science. The Forman-Ricci curvature (FRC) - a statistical measure based on Riemannian geometry and designed for networks - is known for its high capacity for extracting geometric information from complex networks. However, extracting information from dense networks is still challenging due to the combinatorial explosion of high-order network structures. Motivated by this challenge, we sought a set-theoretic representation theory for high-order network cells and FRC, as well as their associated concepts and properties, which together provide an alternative and efficient formulation for computing high-order FRC in complex networks. We provide pseudo-code, a software implementation coined FastForman, as well as a benchmark comparison with alternative implementations. Crucially, our representation theory reveals previous computational bottlenecks and also accelerates the computation of FRC. As a consequence, our findings open new research possibilities in complex systems where higher-order geometric computations are required.
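To fix ideas, the simplest edge-level version of the Forman-Ricci curvature (for an unweighted graph with no higher-order cells) is $F(e=(u,v)) = 4 - \deg(u) - \deg(v)$. The snippet below implements only this baseline graph formula; the paper's contribution concerns the much harder higher-order case that this sketch does not cover:

```python
# Baseline edge-level Forman-Ricci curvature for an unweighted graph:
# F(e=(u,v)) = 4 - deg(u) - deg(v).
from collections import defaultdict

def forman_ricci(edges):
    deg = defaultdict(int)
    for u, v in edges:                  # count vertex degrees
        deg[u] += 1
        deg[v] += 1
    return {(u, v): 4 - deg[u] - deg[v] for u, v in edges}

# In a triangle every vertex has degree 2, so every edge has curvature 0;
# in a 2-edge path the end edges have curvature 4 - 1 - 2 = 1.
triangle = forman_ricci([("a", "b"), ("b", "c"), ("c", "a")])
```

Negative curvature concentrates on edges joining high-degree vertices, which is exactly where the combinatorial explosion of higher-order cells (and hence the computational bottleneck the paper addresses) arises in dense networks.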
