
In this paper, we propose a generic approach to global sensitivity analysis (GSA) for compartmental models based on continuous-time Markov chains (CTMC). This approach enables a complete GSA for epidemic models, in which not only the effects of uncertain epidemic parameters (transmission rate, mean sojourn duration in compartments) are quantified, but also those of intrinsic randomness and of the interactions between the two. The main step in our approach is to build a deterministic representation of the underlying continuous-time Markov chain by controlling the latent variables modeling intrinsic randomness. The model output can then be written as a deterministic function of both the uncertain parameters and the controlled latent variables, so that standard variance-based sensitivity indices, e.g. the so-called Sobol' indices, can be computed. However, different simulation algorithms lead to different representations. In this work we exhibit three such representations for CTMC stochastic compartmental models and discuss the results obtained by implementing and comparing GSAs based on each of them on a SARS-CoV-2 epidemic model.
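
To make the idea of a deterministic representation concrete, the following minimal sketch (our own illustration, not the paper's SARS-CoV-2 model nor one of its three representations) drives a Gillespie-type SIR simulation with a pre-drawn stream of uniform variates, so that the output becomes a deterministic function of the epidemic parameters and of the controlled latent variables:

```python
import numpy as np

def sir_ctmc(beta, gamma, u, n=100, i0=1, t_max=50.0):
    """Gillespie-type SIR simulation driven by a pre-drawn stream of
    uniforms `u`, so the final epidemic size is a deterministic
    function of (beta, gamma, u).  Hypothetical toy model only."""
    s, i, r, t, k = n - i0, i0, 0, 0.0, 0
    while i > 0 and t < t_max and k + 1 < len(u):
        rate_inf = beta * s * i / n
        rate_rec = gamma * i
        total = rate_inf + rate_rec
        t += -np.log(u[k]) / total          # exponential waiting time
        if u[k + 1] * total < rate_inf:     # event selection
            s, i = s - 1, i + 1
        else:
            i, r = i - 1, r + 1
        k += 2
    return n - s                            # final epidemic size

rng = np.random.default_rng(0)
u = rng.uniform(size=10_000)                # controlled latent variables
y1 = sir_ctmc(0.3, 0.1, u)
y2 = sir_ctmc(0.3, 0.1, u)                  # same (theta, u) -> same output
assert y1 == y2
```

Feeding such a deterministic function into any standard Sobol' index estimator then quantifies the contributions of the parameters, of the latent randomness, and of their interactions.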

Related content

In this paper we propose and analyze a novel multilevel version of Stein variational gradient descent (SVGD). SVGD is a recent particle-based variational inference method. For Bayesian inverse problems with computationally expensive likelihood evaluations, the method can become prohibitive as it requires evolving a discrete dynamical system over many time steps, each of which requires likelihood evaluations at all particle locations. To address this, we introduce a multilevel variant that involves running several interacting particle dynamics in parallel, corresponding to different approximation levels of the likelihood. By carefully tuning the number of particles at each level, we prove that a significant reduction in computational complexity can be achieved. As an application we provide a numerical experiment for a PDE-driven inverse problem, which confirms the speed-up suggested by our theoretical results.
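
For reference, a single-level SVGD update with an RBF kernel looks as follows (a minimal sketch in our own notation; the paper's multilevel variant couples several such particle systems, each using a likelihood approximation of different accuracy and cost):

```python
import numpy as np

def svgd_step(x, grad_log_p, step=1e-1, bandwidth=1.0):
    """One Stein variational gradient descent update for particles x
    (shape n x d) with an RBF kernel.  Single-level sketch only."""
    n = x.shape[0]
    diff = x[:, None, :] - x[None, :, :]              # pairwise differences
    sq = np.sum(diff ** 2, axis=-1)
    k = np.exp(-sq / (2.0 * bandwidth ** 2))          # kernel matrix
    grad_k = -diff / bandwidth ** 2 * k[:, :, None]   # grad wrt first argument
    phi = (k @ grad_log_p(x) + grad_k.sum(axis=0)) / n
    return x + step * phi

# Usage: particles drifting toward a standard normal target.
rng = np.random.default_rng(1)
x = rng.normal(size=(200, 2)) + 3.0
for _ in range(500):
    x = svgd_step(x, lambda z: -z)                    # grad log N(0, I) = -z
```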

In this paper, we consider feature screening for ultrahigh dimensional clustering analyses. Based on the observation that the marginal distribution of any given feature is a mixture of its conditional distributions in the different clusters, we propose to screen clustering features by independently evaluating the homogeneity of each feature's mixture distribution. Important cluster-relevant features have heterogeneous components in their mixture distributions, whereas unimportant features have homogeneous components. The well-known EM-test statistic is used to evaluate the homogeneity. Under general parametric settings, we establish tail probability bounds of the EM-test statistic for homogeneous and heterogeneous features, and further show that the proposed screening procedure achieves the sure independence screening property and even selection consistency. The limiting distribution of the EM-test statistic is also obtained for general parametric distributions. The proposed method is computationally efficient, accurately screens for important cluster-relevant features, and helps to significantly improve clustering, as demonstrated in our extensive simulation and real data analyses.
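
In symbols, the observation underlying the screening step is that, for a feature $X_j$ and $K$ latent clusters with proportions $\pi_1,\dots,\pi_K$, the marginal density factors as

\[
f_j(x) \;=\; \sum_{k=1}^{K} \pi_k\, f_{j\mid k}(x),
\]

so the feature is cluster-relevant exactly when the conditional densities $f_{j\mid k}$ are not all identical (a heterogeneous mixture) and irrelevant when they coincide (a homogeneous mixture); the EM-test assesses this homogeneity feature by feature.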

In this paper, we develop a general framework for the design of arbitrary high-order well-balanced discontinuous Galerkin (DG) methods for hyperbolic balance laws, including the compressible Euler equations with gravitation and the shallow water equations with horizontal temperature gradients (referred to as the Ripa model). Our scheme is well-balanced not only for the hydrostatic equilibrium, including the more complicated isobaric steady state of the Ripa system, but also for the exact preservation of moving equilibrium states. The strategy adopted is to approximate the equilibrium variables, rather than the conservative variables, in the DG piecewise polynomial space, which is pivotal for the well-balanced property. Our approach can be combined with any consistent numerical flux and is free of reference equilibrium-state recovery and special source-term treatments. It also enables the construction of well-balanced methods for non-hydrostatic equilibria in Euler systems. Extensive numerical examples, such as moving or isobaric equilibria, validate the high-order accuracy and exact equilibrium preservation for various flows governed by hyperbolic balance laws. Even with relatively coarse meshes, small perturbations at or close to steady flow are captured without numerical oscillations.
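
For concreteness, and in the usual notation ($h$ water depth, $u$ velocity, $b$ bottom topography, $\theta$ potential temperature, $\rho$ density, $p$ pressure, $\phi$ gravitational potential), the still-water equilibria referred to above take the schematic form

\[
\text{shallow water (lake at rest):}\quad u = 0,\ \ h + b = \mathrm{const};\qquad
\text{Ripa (isobaric, flat bottom):}\quad u = 0,\ \ \tfrac{1}{2} g\,\theta\, h^{2} = \mathrm{const};
\]
\[
\text{Euler with gravitation (hydrostatic):}\quad \mathbf{u} = \mathbf{0},\ \ \nabla p = -\rho\,\nabla\phi,
\]

whereas moving equilibria additionally carry a nonzero constant discharge (e.g. $hu = \mathrm{const}$ and $u^2/2 + g(h+b) = \mathrm{const}$ for shallow water).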

In this paper, we tackle a persistent numerical instability within total Lagrangian smoothed particle hydrodynamics (TLSPH) solid dynamics. Specifically, we address the hourglass modes that may grow and eventually deteriorate the reliability of the simulation, particularly in scenarios characterized by large deformations. We propose a generalized essentially non-hourglass formulation based on volumetric-deviatoric stress decomposition, offering a general solution for elasticity, plasticity, anisotropy, and other material models. In contrast to the standard SPH formulation with the original non-nested Laplacian operator applied in our previous work \cite{wu2023essentially} to handle the hourglass issue in standard elasticity, we introduce a correction to the discretization of the shear stress that relies on the discrepancy produced by a tracing-back prediction of the initial inter-particle direction from the current deformation gradient. When applied to standard elastic materials, the present formulation recovers the original Laplacian operator. Owing to the dimensionless nature of the correction, the formulation handles complex material models in a straightforward way. Furthermore, a magnitude limiter is introduced to minimize the correction in domains where the discrepancy is less pronounced. The present formulation is validated, with a single set of modeling parameters, through a series of benchmark cases, confirming good stability and accuracy across elastic, plastic, and anisotropic materials. To showcase its potential, the formulation is employed to simulate a complex problem involving viscous-plastic Oobleck material, contacts, and very large deformation.
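
The volumetric-deviatoric split referred to above is the standard one (in our notation):

\[
\boldsymbol{\sigma} \;=\; -p\,\mathbf{I} + \mathbf{s},
\qquad p = -\tfrac{1}{3}\operatorname{tr}\boldsymbol{\sigma},
\qquad \operatorname{tr}\mathbf{s} = 0,
\]

with the hourglass correction acting on the discretization of the deviatoric (shear) part $\mathbf{s}$; the specific form of the correction and of the magnitude limiter is given in the paper.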

This paper develops a flexible and computationally efficient multivariate volatility model, which allows for dynamic conditional correlations and volatility spillover effects among financial assets. The new model has desirable properties such as identifiability and computational tractability for many assets. A sufficient condition for strict stationarity is derived for the new process. Two quasi-maximum likelihood estimation methods are proposed for the new model, with and without low-rank constraints on the coefficient matrices respectively, and the asymptotic properties of both estimators are established. Moreover, a Bayesian information criterion with selection consistency is developed for order selection, and testing for volatility spillover effects is carefully discussed. The finite sample performance of the proposed methods is evaluated in simulation studies for small and moderate dimensions. The usefulness of the new model and its inference tools is illustrated by two empirical examples, for 5 stock markets and 17 industry portfolios, respectively.
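
As background only, models of this kind are typically written through a conditional-correlation decomposition of the conditional covariance matrix (the paper's specific parameterization of the volatility and correlation dynamics is not reproduced here):

\[
\boldsymbol{\Sigma}_t \;=\; \mathbf{D}_t\,\mathbf{R}_t\,\mathbf{D}_t,
\qquad \mathbf{D}_t = \operatorname{diag}(\sigma_{1,t},\dots,\sigma_{N,t}),
\]

where $\mathbf{D}_t$ collects the conditional volatilities, whose dynamics may include spillover terms from other assets, and $\mathbf{R}_t$ is the dynamic conditional correlation matrix.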

In this paper, we introduce a general constructive method to compute solutions of initial value problems of semilinear parabolic partial differential equations via semigroup theory and computer-assisted proofs. Once a numerical candidate for the solution is obtained via a finite dimensional projection, Chebyshev series expansions are used to solve the linearized equations about the approximation, from which a solution map operator is constructed. Using this solution operator (which exists by semigroup theory), we define an infinite dimensional contraction operator whose unique fixed point, together with its rigorous bounds, provides a local inclusion of the solution. Applying this technique over multiple time steps leads to constructive proofs of existence of solutions over long time intervals. As applications, we study the 3D/2D Swift-Hohenberg equation, for which we combine our method with explicit constructions of trapping regions to prove global existence of solutions of initial value problems converging asymptotically to nontrivial equilibria. A second application is the 2D Ohta-Kawasaki equation, for which we provide a framework for handling derivatives in the nonlinear terms.
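
Schematically, for a semilinear problem $u' = Lu + N(u)$, $u(0) = u_0$, the semigroup (mild solution) formulation underlying such a construction reads

\[
u(t) \;=\; e^{tL}u_0 + \int_0^t e^{(t-s)L}\, N(u(s))\,ds,
\]

and a fixed point of the corresponding integral operator, obtained via the contraction mapping theorem in a neighborhood of the numerical candidate, yields the rigorous local inclusion; the paper constructs the analogous operator from the linearization about the approximate solution rather than about zero, but the structure is the same.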

Assuming we have iid observations from two unknown probability density functions (pdfs), $p$ and $q$, likelihood-ratio estimation (LRE) is an elegant approach to compare the two pdfs by relying only on the available data. In this paper, we introduce the first, to the best of our knowledge, graph-based extension of this problem, which reads as follows: suppose each node $v$ of a fixed graph has access to observations coming from two unknown node-specific pdfs, $p_v$ and $q_v$, and the goal is to estimate for each node the likelihood-ratio between the two pdfs while also taking into account the information provided by the graph structure. The node-level estimation tasks are expected to exhibit similarities conveyed by the graph, which suggests that the nodes could collaborate to solve them more efficiently. We develop this idea in a concrete non-parametric method that we call Graph-based Relative Unconstrained Least-squares Importance Fitting (GRULSIF). We derive convergence rates for our collaborative approach that highlight the role played by variables such as the number of available observations per node, the size of the graph, and how accurately the graph structure encodes the similarity between tasks. These theoretical results make explicit the situations in which collaborative estimation effectively leads to an improvement in performance over solving each problem independently. Finally, in a series of experiments, we illustrate how GRULSIF infers the likelihood-ratios at the nodes of the graph more accurately than state-of-the-art LRE methods operating independently at each node, and we verify that the behavior of GRULSIF is aligned with our theoretical analysis.
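
For orientation, the single-node, graph-free baseline that GRULSIF builds on is least-squares importance fitting; the sketch below (our own, with hypothetical parameter choices, not the paper's collaborative estimator) fits the likelihood-ratio $r(x) \approx p(x)/q(x)$ with Gaussian basis functions:

```python
import numpy as np

def ulsif(xp, xq, centers=None, sigma=1.0, lam=1e-3):
    """Least-squares importance fitting of r(x) ~ p(x)/q(x) with
    Gaussian basis functions.  Single-node baseline sketch only."""
    if centers is None:
        centers = xp[:min(100, len(xp))]
    def design(x):
        d2 = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma ** 2))
    phi_p, phi_q = design(xp), design(xq)
    H = phi_q.T @ phi_q / len(xq)            # estimate of E_q[phi phi^T]
    h = phi_p.mean(axis=0)                   # estimate of E_p[phi]
    alpha = np.linalg.solve(H + lam * np.eye(len(h)), h)
    return lambda x: design(x) @ alpha       # estimated likelihood-ratio

# Usage: ratio of N(1, 1) to N(0, 1) in 1D.
rng = np.random.default_rng(2)
xp = rng.normal(1.0, 1.0, size=(500, 1))
xq = rng.normal(0.0, 1.0, size=(500, 1))
r = ulsif(xp, xq)
print(r(np.array([[0.0], [1.0]])))
```

GRULSIF additionally couples the node-wise estimates through a penalty encoding the graph structure, which is what enables the collaborative gains discussed above.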

One of the central quantities of probabilistic seismic risk assessment studies is the fragility curve, which represents the probability of failure of a mechanical structure conditional on a scalar measure derived from the seismic ground motion. Estimating such curves is a difficult task because, for many structures of interest, few data are available and the data are only binary; i.e., they indicate the state of the structure, failure or non-failure. This framework concerns complex equipment such as electrical devices encountered in industrial installations. To address this challenging framework, a wide range of methods in the literature rely on a parametric log-normal model. Bayesian approaches allow for efficient learning of the model parameters. However, the choice of the prior distribution has a non-negligible influence on the posterior distribution and, therefore, on any resulting estimate. We propose a thorough study of this parametric Bayesian estimation problem when the data are limited and binary. Using reference prior theory as a support, we suggest an objective approach to the prior choice. This approach leads to the Jeffreys prior, which is derived explicitly for this problem for the first time. The posterior distribution is proven to be proper (i.e., it integrates to unity) with the Jeffreys prior and improper with some classical priors from the literature. The posterior distribution with the Jeffreys prior is also shown to vanish at the boundaries of the parameter domain, so sampling the posterior distribution of the parameters does not produce anomalously small or large values. Consequently, it does not produce degenerate fragility curves such as unit-step functions, and the Jeffreys prior leads to robust credibility intervals. The numerical results obtained on two different case studies, including an industrial case, illustrate the theoretical predictions.
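
In its usual form (notation ours), the parametric log-normal fragility model referred to above is

\[
P_f(a) \;=\; \Phi\!\left(\frac{\ln a - \ln \alpha}{\beta}\right),
\]

where $a$ is the scalar seismic intensity measure, $\alpha$ the median seismic capacity, $\beta$ the log-standard deviation and $\Phi$ the standard normal cdf; the Bayesian analysis then concerns the prior and posterior distributions of $(\alpha,\beta)$ given binary failure/non-failure observations.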

In this paper, we provide an analysis of a recently proposed multicontinuum homogenization technique. The analysis differs from those used in classical homogenization methods for several reasons. First, the cell problems in multicontinuum homogenization are constrained problems and cannot be directly substituted into the differential operator. Second, the problem contains high contrast that remains in the homogenized problem: the homogenized problem averages the microstructure while still containing the small parameter. In this analysis, we first build on our previous technique, CEM-GMsFEM, to define a CEM downscaling operator that maps the multicontinuum quantities to an approximate microscopic solution. Under a regularity assumption on the multicontinuum quantities, we then construct the downscaling operator and the homogenized multicontinuum equations using a linear approximation of the multicontinuum quantities. The error analysis follows from a residual estimate for the homogenized equations together with a well-posedness assumption on them.
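
Schematically, and in our own notation, multicontinuum homogenization seeks an approximation of the microscopic solution of the form

\[
u(x) \;\approx\; \sum_{i} \Big( \varphi_i(x)\, U_i(x) \;+\; \boldsymbol{\psi}_i(x)\cdot\nabla U_i(x) \Big),
\]

where the $U_i$ are smooth macroscopic continuum variables and the auxiliary functions $\varphi_i$, $\boldsymbol{\psi}_i$ solve constrained local cell problems; the CEM downscaling operator plays the role of the map from the $U_i$ (and their linear approximations) to the microscopic approximation.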

We consider a general multivariate model where the univariate marginal distributions are known up to a parameter vector, and we are interested in estimating that parameter vector without specifying the joint distribution beyond the marginals. If we assume independence between the marginals and maximize the resulting quasi-likelihood, we obtain a consistent but inefficient quasi-maximum likelihood estimator (QMLE). If we assume a parametric copula (other than independence), we obtain a full MLE, which is efficient under a correct copula specification but may be biased if the copula is misspecified. Instead, we propose a sieve MLE estimator (SMLE) which improves over the QMLE without the drawbacks of the full MLE. We model the unknown part of the joint distribution using the Bernstein-Kantorovich polynomial copula and assess the resulting improvement over the QMLE and over a misspecified FMLE in terms of relative efficiency and robustness. We derive the asymptotic distribution of the new estimator and show that it attains the relevant semiparametric efficiency bound. Simulations suggest that the sieve MLE can be almost as efficient as the FMLE relative to the QMLE, provided there is enough dependence between the marginals. We demonstrate the practical value of the new estimator with several applications. First, we apply the SMLE in an insurance context, where we build a flexible semi-parametric claim loss model for a scenario in which one of the variables is censored. As in the simulations, the use of the SMLE leads to tighter parameter estimates. Next, we consider financial risk management examples and show how the use of the SMLE leads to superior Value-at-Risk predictions. The paper comes with an online archive which contains all codes and datasets.
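
For orientation, a Bernstein polynomial copula of order $m$, the family on which such a sieve is built (the exact Kantorovich normalization used in the paper may differ), takes the form

\[
C_m(u_1,\dots,u_d) \;=\; \sum_{j_1=0}^{m}\cdots\sum_{j_d=0}^{m}
\theta_{j_1\cdots j_d}\,\prod_{k=1}^{d} \binom{m}{j_k}\, u_k^{\,j_k}\,(1-u_k)^{\,m-j_k},
\]

with coefficients $\theta_{j_1\cdots j_d}$ constrained so that $C_m$ is a valid copula; in the sieve MLE, these coefficients are estimated jointly with the marginal parameter vector, with $m$ growing slowly with the sample size.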
