In this paper, we develop a discrete-time stochastic model under partial information to explain the evolution of the Covid-19 pandemic. Our model is a modification of the well-known SIR model for epidemics that accounts for some peculiar features of Covid-19. In particular, we work with a random transmission rate, and we assume that the true number of infectious people at any observation time is random and not directly observable, to account for asymptomatic and non-tested people. We elaborate a nested particle filtering approach to estimate the reproduction rate and the model parameters. We apply our methodology to Austrian Covid-19 infection data from May 2020 to June 2022. Finally, we discuss forecasts and model tests.
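
As a minimal illustration of the modeling ingredients described above (not the paper's estimation procedure), the following sketch simulates a discrete-time stochastic SIR epidemic in which the transmission rate follows a log-normal random walk and only a binomially thinned fraction of new infections is observed; all parameter values (beta0, gamma, sigma, report_prob) are illustrative assumptions.

```python
import numpy as np

def simulate_stochastic_sir(T, N, beta0=0.3, gamma=0.1, sigma=0.1,
                            report_prob=0.4, i0=100, seed=0):
    """Minimal discrete-time stochastic SIR with a random transmission rate.

    The transmission rate follows a log-normal random walk, and only a
    binomially thinned fraction of new infections is observed, mimicking
    under-reporting of asymptomatic / non-tested cases.
    """
    rng = np.random.default_rng(seed)
    S, I, R = N - i0, i0, 0
    log_beta = np.log(beta0)
    observed = []
    for _ in range(T):
        log_beta += sigma * rng.normal()           # random transmission rate
        beta = np.exp(log_beta)
        p_inf = 1.0 - np.exp(-beta * I / N)        # per-susceptible infection prob.
        new_inf = rng.binomial(S, p_inf)
        new_rec = rng.binomial(I, 1.0 - np.exp(-gamma))
        S, I, R = S - new_inf, I + new_inf - new_rec, R + new_rec
        observed.append(rng.binomial(new_inf, report_prob))  # partial observation
    return np.array(observed)

counts = simulate_stochastic_sir(T=120, N=9_000_000)
```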

Related content

We study the problem of estimating the convex hull of the image $f(X)\subset\mathbb{R}^n$ of a compact set $X\subset\mathbb{R}^m$ with smooth boundary through a smooth function $f:\mathbb{R}^m\to\mathbb{R}^n$. Assuming that $f$ is a diffeomorphism or a submersion, we derive new bounds on the Hausdorff distance between the convex hull of $f(X)$ and the convex hull of the images $f(x_i)$ of $M$ samples $x_i$ on the boundary of $X$. When applied to the problem of geometric inference from random samples, our results give tighter and more general error bounds than the state of the art. We present applications to the problems of robust optimization, of reachability analysis of dynamical systems, and of robust trajectory optimization under bounded uncertainty.
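
A minimal sketch of the sampling-based approximation analyzed above: sample $M$ points on the boundary of $X$, push them through $f$, and take the convex hull of the images. The map $f$ and the choice of $X$ as the unit disk are illustrative assumptions; SciPy's ConvexHull is used only for the hull computation.

```python
import numpy as np
from scipy.spatial import ConvexHull

# Hypothetical smooth map f : R^2 -> R^2 applied to the compact set X = unit disk.
def f(x):
    return np.column_stack([x[:, 0] + 0.3 * x[:, 1] ** 2,
                            x[:, 1] + 0.1 * np.sin(x[:, 0])])

M = 200                                   # number of boundary samples
theta = 2 * np.pi * np.arange(M) / M      # uniform samples on the boundary of X
boundary = np.column_stack([np.cos(theta), np.sin(theta)])

hull = ConvexHull(f(boundary))            # conv{f(x_1), ..., f(x_M)} approximates conv f(X)
print(hull.volume)                        # area of the inner approximation (2-D)
```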

We introduce the Weak-form Estimation of Nonlinear Dynamics (WENDy) method for estimating model parameters in nonlinear systems of ODEs. The core mathematical idea involves an efficient conversion of the strong-form representation of a model to its weak form, followed by solving a regression problem to perform parameter inference. The core statistical idea rests on the Errors-In-Variables framework, which necessitates the use of the iteratively reweighted least squares algorithm. Further improvements are obtained by using orthonormal test functions, created from a set of $C^{\infty}$ bump functions of varying support sizes. We demonstrate that WENDy is a highly robust and efficient method for parameter inference in differential equations. Without relying on any numerical differential equation solvers, WENDy computes accurate estimates and is robust to large (biologically relevant) levels of measurement noise. For low-dimensional systems with modest amounts of data, WENDy is competitive with conventional forward solver-based nonlinear least squares methods in terms of speed and accuracy. For both higher-dimensional systems and stiff systems, WENDy is typically both faster (often by orders of magnitude) and more accurate than forward solver-based approaches. We illustrate the method and its performance on some common population and neuroscience models, including logistic growth, Lotka-Volterra, FitzHugh-Nagumo, Hindmarsh-Rose, and a Protein Transduction Benchmark model. Software and code for reproducing the examples are available at //github.com/MathBioCU/WENDy.
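
The following sketch illustrates the weak-form regression idea for a scalar logistic-growth model using compactly supported $C^{\infty}$ bump test functions: integration by parts moves the derivative onto the test function, after which the parameters follow from linear least squares. It omits WENDy's orthonormalization of test functions and the IRLS step; all data and noise levels are synthetic.

```python
import numpy as np

def bump(t, c, r):
    """A C-infinity bump test function supported on (c - r, c + r) and its derivative."""
    s = (t - c) / r
    phi = np.zeros_like(t)
    dphi = np.zeros_like(t)
    inside = np.abs(s) < 1
    phi[inside] = np.exp(-1.0 / (1.0 - s[inside] ** 2))
    dphi[inside] = phi[inside] * (-2.0 * s[inside] / (1.0 - s[inside] ** 2) ** 2) / r
    return phi, dphi

# noisy logistic-growth data, u' = p1*u + p2*u^2 with true parameters (1, -1)
rng = np.random.default_rng(1)
t = np.linspace(0.0, 10.0, 400)
dt = t[1] - t[0]
u0 = 0.01
u_true = u0 * np.exp(t) / (1.0 + u0 * (np.exp(t) - 1.0))
u = u_true + 0.02 * rng.normal(size=t.size)

# weak form: -int phi' u dt = p1 * int phi u dt + p2 * int phi u^2 dt, one row per test function
G, b = [], []
for c in np.linspace(1.0, 9.0, 30):
    phi, dphi = bump(t, c, r=1.0)
    G.append([np.sum(phi * u) * dt, np.sum(phi * u ** 2) * dt])
    b.append(-np.sum(dphi * u) * dt)
p_hat, *_ = np.linalg.lstsq(np.array(G), np.array(b), rcond=None)
print(p_hat)   # plain least squares; WENDy additionally reweights via IRLS
```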

In this paper we deal with the problem of sequential testing of multiple hypotheses. The main goal is minimising the expected sample size (ESS) under restrictions on the error probabilities. We use a variant of the method of Lagrange multipliers based on the minimisation of an auxiliary objective function (called the Lagrangian). This function is defined as a weighted sum of all the test characteristics we are interested in: the error probabilities and the ESSs evaluated at some points of interest. In this paper, we use a definition of the Lagrangian function involving the ESS evaluated at any finite number of fixed parameter points (not necessarily those representing the hypotheses). We then develop a computer-oriented method of minimisation of the Lagrangian function that provides, depending on the specific choice of the parameter points, optimal tests in different concrete settings, such as the Bayesian and Kiefer-Weiss settings. To exemplify the proposed methods for the particular case of sampling from a Bernoulli population, we develop a set of computer algorithms for designing sequential tests that minimise the Lagrangian function and for the numerical evaluation of test characteristics such as the error probabilities, the ESS, and other related quantities. For the Bernoulli model, we performed a series of computer evaluations related to the optimality of sequential multi-hypothesis tests, in the particular case of three hypotheses. A numerical comparison with the matrix sequential probability ratio test is carried out.
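
The sketch below shows, for a Bernoulli model with three simple hypotheses, how a Lagrangian of the kind described above (a weighted sum of error probabilities and expected sample sizes at chosen parameter points) could be evaluated by Monte Carlo for a given sequential test; the posterior-threshold stopping rule, the threshold, and the weights are illustrative assumptions, not the paper's optimal test.

```python
import numpy as np

def run_test(p_true, p_hyp, thresh, n_max, rng):
    """One run of an illustrative Bayesian stopping rule: stop when the
    posterior of some simple hypothesis in p_hyp exceeds `thresh`."""
    post = np.full(len(p_hyp), 1.0 / len(p_hyp))
    for n in range(1, n_max + 1):
        x = rng.random() < p_true                    # one Bernoulli observation
        post *= np.where(x, p_hyp, 1.0 - p_hyp)      # likelihood update
        post /= post.sum()
        if post.max() >= thresh:
            break
    return post.argmax(), n

def lagrangian(p_hyp, p_eval, lam, cost, thresh=0.95, n_max=200,
               n_rep=5_000, seed=0):
    """Monte Carlo estimate of L = sum_i lam_i * alpha_i + sum_j cost_j * ESS(p_eval_j)."""
    rng = np.random.default_rng(seed)
    p_hyp = np.asarray(p_hyp, dtype=float)
    err = [np.mean([run_test(p, p_hyp, thresh, n_max, rng)[0] != i
                    for _ in range(n_rep)]) for i, p in enumerate(p_hyp)]
    ess = [np.mean([run_test(p, p_hyp, thresh, n_max, rng)[1]
                    for _ in range(n_rep)]) for p in p_eval]
    return float(np.dot(lam, err) + np.dot(cost, ess))

# three simple hypotheses about a Bernoulli success probability
print(lagrangian(p_hyp=[0.3, 0.5, 0.7], p_eval=[0.3, 0.5, 0.7],
                 lam=[10, 10, 10], cost=[1, 1, 1]))
```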

We introduce a PDE-based node-to-element contact formulation as an alternative to classical, purely geometrical formulations. It is challenging to devise solutions to nonsmooth contact problems with a continuous gap using finite element discretizations. We achieve this objective here by constructing an approximate distance function (ADF) to the boundaries of solid objects, and in doing so also obtain universal uniqueness of contact detection. Unilateral constraints are implemented using a mixed model combining the screened Poisson equation and a force element, which has the topology of a continuum element containing an additional incident node. The ADF is obtained by solving the screened Poisson equation with constant essential boundary conditions and a variable transformation. The ADF does not explicitly depend on the number of objects, and a single solution of the partial differential equation for this field uniquely defines the contact conditions for all incident points in the mesh. Having an ADF field to any obstacle circumvents the need for multiple target surfaces and avoids the specific data structures present in traditional contact-impact algorithms. We also relax the interpretation of the Lagrange multipliers as contact forces, and the Courant--Beltrami function is used with a mixed formulation producing the required differentiable result. We demonstrate the advantages of the new approach in two- and three-dimensional problems, which are solved using Newton iterations. Simultaneous constraints for each incident point are considered.
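
As a one-dimensional illustration of the ADF construction discussed above, the sketch below solves the screened Poisson equation with a constant essential boundary condition at the target surface and applies a logarithmic variable transformation to recover an approximate distance; the length-scale parameter ell and the finite-difference discretization are illustrative assumptions, not the paper's finite element formulation.

```python
import numpy as np

# Sketch: approximate distance function (ADF) from a "target surface" at x = 0,
# obtained by solving the screened Poisson equation  u - ell^2 u'' = 0  on (0, L)
# with u(0) = 1, u(L) = 0, then applying the transformation  d = -ell * ln(u).
ell, L, n = 0.2, 2.0, 400
x = np.linspace(0.0, L, n)
h = x[1] - x[0]

# tridiagonal system for the interior nodes
main = np.full(n - 2, 1.0 + 2.0 * ell ** 2 / h ** 2)
off = np.full(n - 3, -ell ** 2 / h ** 2)
A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
rhs = np.zeros(n - 2)
rhs[0] += (ell ** 2 / h ** 2) * 1.0          # essential boundary condition u(0) = 1

u = np.empty(n)
u[0], u[-1] = 1.0, 0.0
u[1:-1] = np.linalg.solve(A, rhs)

d = -ell * np.log(np.clip(u, 1e-12, None))   # ADF; exact in 1-D as ell -> 0
print(d[:5], x[:5])                          # d(x) is close to x near the boundary
```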

In this work, we extend the data-driven It\^{o} stochastic differential equation (SDE) framework for the pathwise assessment of short-term forecast errors to account for the time-dependent upper bound that naturally constrains the observable historical data and forecast. We propose a new nonlinear and time-inhomogeneous SDE model with a Jacobi-type diffusion term for the phenomenon of interest, simultaneously driven by the forecast and the constraining upper bound. We rigorously demonstrate the existence and uniqueness of a strong solution to the SDE model by imposing a condition on the time-varying mean-reversion parameter appearing in the drift term. The normalized forecast function is thresholded to keep this mean-reversion parameter bounded. The SDE model parameter calibration also covers the thresholding parameter of the normalized forecast, through a novel iterative two-stage optimization procedure applied to user-selected approximations of the likelihood function. Another novel contribution is the estimation of the transition density of the forecast error process, which is not known analytically in closed form, through a tailored kernel smoothing technique with the control variate method. We fit the model to the 2019 photovoltaic (PV) solar power daily production and forecast data in Uruguay, computing the daily maximum solar PV production estimate. Two statistical versions of the constrained SDE model are fit, with the beta and truncated normal distributions as proxies for the transition density. Empirical results include simulations of the normalized solar PV power production and pathwise confidence bands generated through an indirect inference method. An objective comparison of the optimal parametric points associated with the two selected statistical approximations is provided by applying the kernel density estimation technique for the transition function of the forecast error process.
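
A minimal Euler-Maruyama sketch of a mean-reverting Jacobi-type diffusion driven by a normalized forecast, illustrating the kind of bounded SDE dynamics described above; the drift and diffusion parameters, the synthetic forecast profile, and the clipping to [0, 1] are illustrative assumptions rather than the paper's calibrated model.

```python
import numpy as np

def simulate_jacobi_paths(p, theta=4.0, sigma=0.6, dt=1 / 288, n_paths=500, seed=0):
    """Euler-Maruyama sketch of a mean-reverting Jacobi-type diffusion on [0, 1]:

        dX_t = -theta * (X_t - p(t)) dt + sigma * sqrt(X_t (1 - X_t)) dW_t,

    where p(t) is a normalized forecast taking values in (0, 1).  The state is
    clipped to [0, 1] after each step; theta, sigma, and p are illustrative.
    """
    rng = np.random.default_rng(seed)
    n_steps = len(p)
    X = np.full(n_paths, p[0])
    paths = np.empty((n_steps, n_paths))
    for k in range(n_steps):
        drift = -theta * (X - p[k])
        diff = sigma * np.sqrt(np.clip(X * (1.0 - X), 0.0, None))
        X = X + drift * dt + diff * np.sqrt(dt) * rng.normal(size=n_paths)
        X = np.clip(X, 0.0, 1.0)
        paths[k] = X
    return paths

# hypothetical normalized PV forecast over one day (288 five-minute steps)
t = np.linspace(0, 1, 288)
forecast = np.clip(np.sin(np.pi * t) ** 2, 1e-3, 1 - 1e-3)
paths = simulate_jacobi_paths(forecast)
lo, hi = np.percentile(paths, [5, 95], axis=1)    # pointwise confidence band
```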

The synthetic control (SC) method is a popular approach for estimating treatment effects from observational panel data. It rests on a crucial assumption that we can write the treated unit as a linear combination of the untreated units. This linearity assumption, however, may be unlikely to hold in practice and, when violated, the resulting SC estimates are incorrect. In this paper we examine two questions: (1) How large can the misspecification error be? (2) How can we limit it? First, we provide theoretical bounds to quantify the misspecification error. The bounds are comforting: small misspecifications induce small errors. With these bounds in hand, we then develop new SC estimators that are specially designed to minimize misspecification error. The estimators are based on additional data about each unit, which is used to produce the SC weights. (For example, if the units are countries, the additional data might be demographic information about each.) We study our estimators on synthetic data and find that they produce more accurate causal estimates than standard synthetic controls. We then re-analyze the California tobacco-program data of the original SC paper, now including additional data from the US census about per-state demographics. Our estimators show that the observations in the pre-treatment period lie within the bounds of misspecification error, and that the post-treatment observations lie outside of those bounds. This is evidence that our SC methods have uncovered a true effect.
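
For reference, a minimal sketch of the standard synthetic-control weight computation (a convex-combination least-squares fit on pre-treatment outcomes); the toy donor pool and outcomes are synthetic, and the sketch does not include the paper's covariate-based estimators designed to limit misspecification error.

```python
import numpy as np
from scipy.optimize import minimize

def sc_weights(y_treated_pre, Y_donors_pre):
    """Standard synthetic-control weights: least-squares fit of the treated
    unit's pre-treatment outcomes by a convex combination of donor units."""
    n = Y_donors_pre.shape[1]
    obj = lambda w: np.sum((y_treated_pre - Y_donors_pre @ w) ** 2)
    res = minimize(obj, np.full(n, 1.0 / n), method="SLSQP",
                   bounds=[(0.0, 1.0)] * n,
                   constraints={"type": "eq", "fun": lambda w: w.sum() - 1.0})
    return res.x

# toy example: 20 pre-treatment periods, 5 donor units
rng = np.random.default_rng(0)
Y0 = rng.normal(size=(20, 5)).cumsum(axis=0)
w_true = np.array([0.5, 0.3, 0.2, 0.0, 0.0])
y1 = Y0 @ w_true + 0.05 * rng.normal(size=20)
w = sc_weights(y1, Y0)
print(np.round(w, 2))        # estimated convex weights; counterfactual = Y0_post @ w
```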

Consider the problem of solving systems of linear algebraic equations $Ax=b$ with a real symmetric positive definite matrix $A$ using the conjugate gradient (CG) method. To stop the algorithm at the appropriate moment, it is important to monitor the quality of the approximate solution. One of the most relevant quantities for measuring the quality of the approximate solution is the $A$-norm of the error. This quantity cannot be easily computed; however, it can be estimated. In this paper we discuss and analyze the behaviour of the Gauss-Radau upper bound on the $A$-norm of the error, based on viewing CG as a procedure for approximating a certain Riemann-Stieltjes integral. This upper bound depends on a prescribed underestimate $\mu$ of the smallest eigenvalue of $A$. We concentrate on explaining a phenomenon observed during computations: in later CG iterations the upper bound loses its accuracy and becomes almost independent of $\mu$. We construct a model problem that is used to demonstrate and study the behaviour of the upper bound in dependence on $\mu$, and we develop formulas that are helpful in understanding this behaviour. We show that the above-mentioned phenomenon is closely related to the convergence of the smallest Ritz value to the smallest eigenvalue of $A$. It occurs when the smallest Ritz value is a better approximation to the smallest eigenvalue than the prescribed underestimate $\mu$. We also suggest an adaptive strategy for improving the accuracy of the upper bounds in previous iterations.
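
The sketch below runs CG on a small diagonal model problem and monitors the squared $A$-norm of the error via the classical identity $\|x-x_k\|_A^2=\sum_{j\ge k}\gamma_j\|r_j\|^2$, truncated with a delay $d$; this gives the familiar lower estimate, not the Gauss-Radau upper bound analyzed in the paper (which additionally uses $\mu$), and the eigenvalue distribution and delay are illustrative assumptions.

```python
import numpy as np

n, d = 100, 4
lam = np.linspace(0.01, 1.0, n)                  # eigenvalues of the model matrix
A = np.diag(lam)
x_true = np.ones(n)
b = A @ x_true

# standard CG, storing the step lengths gamma_k, residual norms, and true errors
x = np.zeros(n)
r = b.copy()
p = r.copy()
gammas, res2, errA2 = [], [], []
for k in range(60):
    Ap = A @ p
    gamma = (r @ r) / (p @ Ap)
    gammas.append(gamma)
    res2.append(r @ r)
    errA2.append((x_true - x) @ A @ (x_true - x))  # true squared A-norm of the error
    x = x + gamma * p
    r_new = r - gamma * Ap
    beta = (r_new @ r_new) / (r @ r)
    p = r_new + beta * p
    r = r_new

# delay-d estimate of ||x - x_k||_A^2 (a lower bound by the Hestenes-Stiefel identity)
for k in range(0, 40, 10):
    estimate = sum(g * s for g, s in zip(gammas[k:k + d], res2[k:k + d]))
    print(k, errA2[k], estimate)
```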

An uncertain fractional differential equation (UFDE) is a type of differential equation driven by an uncertain process. As a significant mathematical tool for describing the evolution of dynamic systems, a UFDE improves on ordinary differential equations with integer-order derivatives because of its hereditary and memory characteristics. In most instances, however, precise analytical solutions of UFDEs are difficult to obtain because of the complex form of the UFDE itself. Up to now there has been little research on numerical methods for UFDEs, and the accuracy of the existing numerical algorithms is not high. In this research, starting from the interval weighting method, we propose a class of fractional Adams methods to solve UFDEs; these methods extend the traditional predictor-corrector method to higher-order cases. The stability and truncation error of the improved algorithm are analyzed. As applications, several numerical simulations (including the $\alpha$-path, extreme values, and the first hitting time of the UFDE) are provided to demonstrate the higher accuracy and efficiency of the proposed numerical method.
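
For context, the sketch below implements the classical fractional Adams (predictor-corrector) scheme of Adams-Bashforth-Moulton type for a Caputo equation $D^\alpha y=f(t,y)$ with $0<\alpha\le 1$; the paper's higher-order corrector and the uncertain ($\alpha$-path) setting are not reproduced, and the test equation is illustrative.

```python
import numpy as np
from math import gamma

def fractional_adams(f, y0, alpha, T, n):
    """Classical Adams-Bashforth-Moulton predictor-corrector for the Caputo
    equation  D^alpha y(t) = f(t, y(t)),  y(0) = y0,  0 < alpha <= 1."""
    h = T / n
    t = h * np.arange(n + 1)
    y = np.empty(n + 1)
    y[0] = y0
    fv = np.empty(n + 1)
    fv[0] = f(t[0], y0)
    for k in range(n):                        # compute y[k + 1]
        j = np.arange(k + 1)
        # predictor (fractional Adams-Bashforth) weights
        b = (h ** alpha / alpha) * ((k + 1 - j) ** alpha - (k - j) ** alpha)
        y_pred = y0 + (b * fv[:k + 1]).sum() / gamma(alpha)
        # corrector (fractional Adams-Moulton) weights
        a = np.empty(k + 1)
        a[0] = k ** (alpha + 1) - (k - alpha) * (k + 1) ** alpha
        jj = j[1:]
        a[1:] = ((k - jj + 2) ** (alpha + 1) + (k - jj) ** (alpha + 1)
                 - 2 * (k - jj + 1) ** (alpha + 1))
        y[k + 1] = y0 + (h ** alpha / gamma(alpha + 2)) * (
            f(t[k + 1], y_pred) + (a * fv[:k + 1]).sum())
        fv[k + 1] = f(t[k + 1], y[k + 1])
    return t, y

# D^alpha y = -y, y(0) = 1: the solution is the Mittag-Leffler function E_alpha(-t^alpha)
t, y = fractional_adams(lambda t, y: -y, y0=1.0, alpha=0.7, T=5.0, n=200)
```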

Branching processes are a class of continuous-time Markov chains (CTMCs) prevalent for modeling stochastic population dynamics in ecology, biology, epidemiology, and many other fields. The transient or finite-time behavior of these systems is fully characterized by their transition probabilities. However, computing them requires marginalizing over all paths between endpoint-conditioned values, which often poses a computational bottleneck. Leveraging recent results that connect generating function methods to a compressed sensing framework, we recast this task through the lens of sparse optimization. We propose a new solution method using variable splitting; in particular, we derive closed-form updates in a highly efficient ADMM algorithm. Notably, no matrix products -- let alone inversions -- are required at any step. This reduces computational cost by orders of magnitude over existing methods, and the resulting algorithm is easily parallelizable and fairly insensitive to tuning parameters. A comparison with prior work is carried out in two applications to models of blood cell production and transposon evolution, showing that the proposed method is orders of magnitude more scalable than existing approaches.
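
The sketch below illustrates ADMM with variable splitting in a setting where both subproblem updates are available in closed form: an $\ell_1$-regularized least-squares problem whose forward operator is the orthonormal DCT. It mirrors the closed-form-update structure described above but is a toy sparse-recovery example, not the paper's endpoint-conditioned branching-process formulation.

```python
import numpy as np
from scipy.fft import dct, idct

def soft_threshold(v, kappa):
    return np.sign(v) * np.maximum(np.abs(v) - kappa, 0.0)

def admm_sparse_recovery(b, lam=0.05, rho=1.0, n_iter=200):
    """ADMM with variable splitting for
            min_x  0.5 * || F x - b ||_2^2 + lam * || x ||_1 ,
    where F is the orthonormal DCT.  Because F^T F = I, both updates are
    closed form and no linear systems are solved."""
    Ftb = idct(b, norm="ortho")                    # F^T b, precomputed once
    x = np.zeros_like(b)
    z = np.zeros_like(b)
    u = np.zeros_like(b)
    for _ in range(n_iter):
        x = (Ftb + rho * (z - u)) / (1.0 + rho)    # closed-form x-update
        z = soft_threshold(x + u, lam / rho)       # closed-form z-update
        u = u + x - z                              # dual update
    return z

# toy example: recover a sparse signal from its noisy DCT coefficients
rng = np.random.default_rng(0)
x_true = np.zeros(256)
x_true[rng.choice(256, 8, replace=False)] = rng.normal(size=8)
b = dct(x_true, norm="ortho") + 0.01 * rng.normal(size=256)
x_hat = admm_sparse_recovery(b)
```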

The receiver operating characteristic (ROC) curve is a powerful statistical tool that has been widely applied in medical research. In ROC curve estimation, a commonly used assumption is that the larger the biomarker value, the greater the severity of the disease. In this paper, we mathematically interpret ``greater severity of the disease'' as ``larger probability of being diseased''. This in turn is equivalent to assuming the likelihood ratio ordering of the biomarker between the diseased and healthy individuals. Under this assumption, we first propose a Bernstein polynomial method to model the distributions of both samples; we then estimate the distributions by the maximum empirical likelihood principle. The ROC curve estimate and the associated summary statistics are obtained subsequently. Theoretically, we establish the asymptotic consistency of our estimators. Via extensive numerical studies, we compare the performance of our method with competitive methods. The application of our method is illustrated by a real-data example.
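
As a simplified illustration of smooth distribution modeling for ROC estimation, the sketch below forms Bernstein-polynomial smoothed CDF estimates for the healthy and diseased samples and traces the ROC curve over a threshold grid; it does not impose the likelihood ratio ordering or use the maximum empirical likelihood estimator of the paper, and the samples are synthetic.

```python
import numpy as np
from scipy.stats import binom

def bernstein_cdf(sample, grid, m=30, lo=None, hi=None):
    """Bernstein-polynomial smoothed CDF estimate of degree m on [lo, hi]:
    a smoothed empirical CDF, not the paper's constrained estimator."""
    lo = sample.min() if lo is None else lo
    hi = sample.max() if hi is None else hi
    knots = lo + (hi - lo) * np.arange(m + 1) / m
    Fk = np.mean(sample[:, None] <= knots[None, :], axis=0)   # empirical CDF at knots
    t = np.clip((grid - lo) / (hi - lo), 0.0, 1.0)
    weights = binom.pmf(np.arange(m + 1)[:, None], m, t[None, :])
    return Fk @ weights

# healthy (H) and diseased (D) biomarker samples; larger values indicate disease
rng = np.random.default_rng(0)
healthy = rng.normal(0.0, 1.0, 200)
diseased = rng.normal(1.2, 1.0, 150)

c = np.linspace(-4, 6, 400)                                   # threshold grid
fpr = 1.0 - bernstein_cdf(healthy, c, lo=-4, hi=6)
tpr = 1.0 - bernstein_cdf(diseased, c, lo=-4, hi=6)
order = np.argsort(fpr)
auc = np.sum(np.diff(fpr[order]) * (tpr[order][:-1] + tpr[order][1:]) / 2.0)
print(auc)                                                    # area under the ROC curve
```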
