
Piecewise deterministic Markov processes (PDMPs) are a class of stochastic processes with applications in several fields of applied mathematics, ranging from mathematical modeling of physical phenomena to computational methods. A PDMP is specified by three characteristic quantities: the deterministic motion, the law of the random event times, and the jump kernels. The applicability of PDMPs to real-world scenarios is currently limited by the fact that these processes can be simulated only when these three characteristics can be simulated exactly. To overcome this problem, we introduce discretisation schemes for PDMPs which make their approximate simulation possible. In particular, we design both first-order and higher-order schemes that rely on approximations of one or more of the three characteristics. For the proposed approximation schemes, we study both pathwise convergence to the continuous PDMP as the step size converges to zero and convergence in law to the invariant measure of the PDMP in the long-time limit. Moreover, we apply our theoretical results to several PDMPs that arise in the computational statistics and mathematical biology literature.
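
A minimal sketch of the kind of scheme the abstract refers to, under made-up choices: a first-order approximate simulation of a one-dimensional Zig-Zag process targeting N(0, 1), where the law of the event times is approximated by a left-point (Euler-type) rule on each step. The target, rate, and step size are illustrative assumptions, not the paper's examples.

```python
# First-order approximate simulation of a 1D Zig-Zag process targeting N(0, 1):
# deterministic motion dx/dt = v, switching rate lambda(x, v) = max(0, v * x),
# jump kernel v -> -v. The random event times are approximated by a Bernoulli
# draw with probability lambda(x, v) * delta on each step (a left-point rule).
import numpy as np

def approximate_zigzag(x0=0.0, v0=1.0, delta=0.01, n_steps=200_000, seed=0):
    rng = np.random.default_rng(seed)
    x, v = x0, v0
    xs = np.empty(n_steps)
    for k in range(n_steps):
        rate = max(0.0, v * x)            # switching intensity for the N(0, 1) target
        if rng.random() < rate * delta:   # first-order approximation of the event law
            v = -v                        # jump kernel: deterministic velocity flip
        x += v * delta                    # deterministic motion between events
        xs[k] = x
    return xs

samples = approximate_zigzag()
print("empirical mean and variance:", samples.mean(), samples.var())  # roughly 0 and 1
```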

Related content

The estimation of wall thermal properties through \emph{in situ} measurement makes it possible to increase the reliability of model predictions for building energy efficiency. Nevertheless, retrieving the unknown parameters carries a significant computational cost, since several computations of the heat transfer problem are required to identify these thermal properties. To address this drawback, an innovative approach is investigated. The first step is to search for the optimal experiment design within an observation sequence spanning several months. A reduced three-day sequence of observations is identified that guarantees estimation of the parameter with maximum accuracy, and the inverse problem is solved only for this short sequence. To further reduce the computational effort, a reduced-order model based on the modal identification method is employed. This \emph{a posteriori} model reduction method approximates the solution with fewer degrees of freedom. The whole methodology is illustrated by estimating the thermal diffusivity of a historical building that has been monitored with temperature sensors for several months. The computational effort is cut by a factor of five, and the estimated parameter improves the reliability of the predictions of the wall's thermal efficiency.
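
A toy sketch of the underlying inverse problem only: the thermal diffusivity of a wall is fitted from a short observation window by repeatedly solving a 1D heat-transfer model. The explicit finite-difference solver, wall thickness, boundary temperatures, sensor position, and diffusivity value below are all made-up assumptions; the paper's modal-identification reduced-order model is not reproduced here.

```python
# Least-squares estimation of a wall's thermal diffusivity from a 3-day window,
# using a plain explicit finite-difference model of dT/dt = alpha * d2T/dx2.
import numpy as np
from scipy.optimize import minimize_scalar

def solve_wall(alpha, T_left, T_right, nx=21, dt=60.0, L=0.3, T0=18.0):
    """Mid-wall temperature trace for prescribed surface temperatures."""
    dx = L / (nx - 1)
    T = np.full(nx, T0)
    out = np.empty(len(T_left))
    for k in range(len(T_left)):
        T[0], T[-1] = T_left[k], T_right[k]
        T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])
        out[k] = T[nx // 2]                                  # sensor in the middle of the wall
    return out

# Synthetic "measurements" over a short three-day sequence, one sample per minute.
n = 3 * 24 * 60
t = np.arange(n) * 60.0
T_out = 10.0 + 5.0 * np.sin(2 * np.pi * t / 86400.0)         # outdoor temperature (assumed)
T_in = np.full(n, 20.0)                                      # indoor temperature (assumed)
alpha_true = 7e-7                                            # m^2/s, assumed value
rng = np.random.default_rng(1)
y_obs = solve_wall(alpha_true, T_out, T_in) + 0.05 * rng.standard_normal(n)

# Inverse problem solved on the short sequence only.
cost = lambda a: np.sum((solve_wall(a, T_out, T_in) - y_obs) ** 2)
res = minimize_scalar(cost, bounds=(1e-7, 1.5e-6), method="bounded")
print("estimated diffusivity:", res.x)                       # close to alpha_true
```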

The Ensemble Kalman inversion (EKI), proposed by Iglesias et al. for the solution of Bayesian inverse problems of type $y=A u^\dagger +\varepsilon$, with $u^\dagger$ being an unknown parameter and $y$ a given datum, is a powerful tool usually derived from a sequential Monte Carlo point of view. It describes the dynamics of an ensemble of particles $\{u^j(t)\}_{j=1}^J$, whose initial empirical measure is sampled from the prior, evolving over an artificial time $t$ towards an approximate solution of the inverse problem. Using spectral techniques, we provide a complete description of the deterministic dynamics of EKI and their asymptotic behavior in parameter space. In particular, we analyze the dynamics of deterministic EKI and mean-field EKI. We demonstrate that the Bayesian posterior can only be recovered in the mean-field limit and not with finite sample sizes or deterministic EKI. Furthermore, we show that -- even in the deterministic case -- residuals in parameter space do not decrease monotonically in the Euclidean norm and suggest a problem-adapted norm in which monotonicity can be proved. Finally, we derive a system of ordinary differential equations governing the spectrum and eigenvectors of the covariance matrix.
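
A minimal sketch of the deterministic (noise-free) EKI dynamics for a linear problem, integrated with forward Euler: each particle follows du^j/dt = -C(u) A^T Gamma^{-1} (A u^j - y), where C(u) is the empirical covariance of the ensemble. The operator A, noise covariance, dimensions, and step size below are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
d, K, J = 10, 6, 50                        # parameter dim, data dim, ensemble size
A = rng.standard_normal((K, d))            # linear forward operator (arbitrary)
u_true = rng.standard_normal(d)
y = A @ u_true + rng.standard_normal(K)    # datum with unit-covariance noise
Gamma_inv = np.eye(K)                      # noise precision (Gamma = I here)

U = rng.standard_normal((J, d))            # ensemble sampled from a N(0, I) prior
dt, n_steps = 0.005, 4000
for _ in range(n_steps):
    C = np.cov(U, rowvar=False)            # empirical covariance of the ensemble, (d, d)
    residual = U @ A.T - y                 # A u^j - y for every particle, (J, K)
    U = U - dt * residual @ Gamma_inv @ A @ C   # Euler step of du^j/dt = -C A^T Gamma^{-1}(A u^j - y)
print("data misfit of the ensemble mean:", np.linalg.norm(A @ U.mean(axis=0) - y))
```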

The method-of-moments implementation of the electric-field integral equation yields many code-verification challenges due to the various sources of numerical error and their possible interactions. Matters are further complicated by singular integrals, which arise from the presence of a Green's function. In this paper, we provide approaches to separately assess the numerical errors arising from the use of basis functions to approximate the solution and the use of quadrature to approximate the integration. Through these approaches, we are able to verify the code and compare the error from different quadrature options.
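
The error-splitting idea can be illustrated in a toy 1D analogue rather than the EFIE itself: for a smooth-kernel second-kind Fredholm equation discretized by midpoint collocation with piecewise-constant basis functions, holding the mesh fixed while varying the quadrature order isolates the quadrature error, and refining the mesh under heavy over-integration isolates the basis (discretization) error. The kernel, manufactured solution, and discretization below are made up for illustration.

```python
# Toy analogue: u(x) + \int_0^1 k(x, y) u(y) dy = f(x), with f manufactured from u_exact.
import numpy as np

k = lambda x, y: np.exp(-(x - y) ** 2)        # smooth, positive-definite toy kernel
u_exact = lambda x: np.cos(np.pi * x)

def solve(n_elem, n_quad):
    """Midpoint collocation, piecewise-constant basis, n_quad-point Gauss rule per element."""
    h = 1.0 / n_elem
    mid = (np.arange(n_elem) + 0.5) * h
    gp, gw = np.polynomial.legendre.leggauss(n_quad)      # rule on [-1, 1]
    K = np.zeros((n_elem, n_elem))
    for j in range(n_elem):
        yq = mid[j] + 0.5 * h * gp                        # nodes mapped to element j
        K[:, j] = k(mid[:, None], yq[None, :]) @ (0.5 * h * gw)
    # Manufactured right-hand side f = u_exact + K u_exact with a near-exact 40-point rule.
    gr, wr = np.polynomial.legendre.leggauss(40)
    yr, wr = 0.5 + 0.5 * gr, 0.5 * wr
    f = u_exact(mid) + k(mid[:, None], yr[None, :]) @ (wr * u_exact(yr))
    return mid, np.linalg.solve(np.eye(n_elem) + K, f)

_, c_ref = solve(64, 20)                      # over-integrated reference on the fixed mesh
for n_quad in (1, 2, 4):                      # quadrature study: mesh fixed, rule varied
    _, c = solve(64, n_quad)
    print(f"quadrature error with {n_quad}-point rule: {np.max(np.abs(c - c_ref)):.2e}")
for n_elem in (16, 32, 64):                   # basis study: accurate rule, mesh varied
    mid, c = solve(n_elem, 20)
    print(f"basis error with {n_elem} elements: {np.max(np.abs(c - u_exact(mid))):.2e}")
```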

The asymptotic behaviour of Linear Spectral Statistics (LSS) of the smoothed periodogram estimator of the spectral coherency matrix of a complex Gaussian high-dimensional time series $(\mathbf{y}_n)_{n \in \mathbb{Z}}$ with independent components is studied under the asymptotic regime where the sample size $N$ converges towards $+\infty$ while the dimension $M$ of $\mathbf{y}$ and the smoothing span of the estimator grow to infinity at the same rate, in such a way that $\frac{M}{N} \rightarrow 0$. It is established that, at each frequency, the estimated spectral coherency matrix is close to the sample covariance matrix of an independent identically $\mathcal{N}_{\mathbb{C}}(0,\mathbf{I}_M)$ distributed sequence, and that its empirical eigenvalue distribution converges towards the Marchenko-Pastur distribution. This allows one to conclude that each LSS has a deterministic behaviour that can be evaluated explicitly. Using concentration inequalities, it is shown that the supremum over the frequencies of the deviation of each LSS from its deterministic approximation is of order $\frac{1}{M} + \frac{\sqrt{M}}{N}+ (\frac{M}{N})^{3}$. Numerical simulations support our results.
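
A minimal numerical illustration for the white-noise case only: the eigenvalues of the smoothed-periodogram spectral coherency matrix at one frequency are compared with the support of the Marchenko-Pastur law with ratio c = M / (smoothing span). The values of N, M, the smoothing span, and the chosen frequency are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, span = 10_000, 200, 801                    # sample size, dimension, smoothing span
Y = (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))) / np.sqrt(2)

X = np.fft.fft(Y, axis=1) / np.sqrt(N)           # normalized DFT of each component
idx = N // 4 + np.arange(span) - span // 2       # Fourier frequencies around nu = 1/4
S = X[:, idx] @ X[:, idx].conj().T / span        # smoothed periodogram estimate at nu
d = np.sqrt(np.real(np.diag(S)))
C = S / np.outer(d, d)                           # estimated spectral coherency matrix
eigs = np.linalg.eigvalsh(C)

c = M / span                                     # Marchenko-Pastur ratio
lo, hi = (1 - np.sqrt(c)) ** 2, (1 + np.sqrt(c)) ** 2
print("eigenvalue range:", eigs.min(), eigs.max())
print("Marchenko-Pastur support: [%.3f, %.3f]" % (lo, hi))
```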

Large health care data repositories such as electronic health records (EHR) open new opportunities to derive individualized treatment strategies that improve disease outcomes. We study the problem of estimating sequential treatment rules tailored to patients' individual characteristics, often referred to as dynamic treatment regimes (DTRs). We seek the optimal DTR that maximizes the discontinuous value function through direct maximization of a Fisher-consistent surrogate loss function. We show that a large class of concave surrogates fails to be Fisher consistent, which differs from the classic setting of binary classification. We further characterize a non-concave family of Fisher-consistent smooth surrogate functions, which can be optimized with gradient descent using off-the-shelf machine learning algorithms. Compared to the existing direct search approach under the support vector machine framework (Zhao et al., 2015), our proposed DTR estimation via surrogate loss optimization (DTRESLO) method is more computationally scalable to large sample sizes and allows for a broader functional class for the predictor effects. We establish theoretical properties for our proposed DTR estimator and obtain a sharp upper bound on the regret corresponding to our DTRESLO method. Finite-sample performance of our proposed estimator is evaluated through extensive simulations and through an application to deriving an optimal DTR for treating sepsis using EHR data from patients admitted to intensive care units.
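
A hedged single-stage illustration of the surrogate idea only: the indicator inside an inverse-probability-weighted value is replaced by a smooth sigmoid (a non-concave surrogate, used here as a stand-in and not necessarily a member of the paper's Fisher-consistent family), and a linear rule is fitted by plain gradient ascent. The data-generating model, scale, and learning rate are made up.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 2000, 3
X = rng.standard_normal((n, p))
A = rng.choice([-1, 1], size=n)                         # randomized treatment, pi(A|X) = 0.5
Y = 1.0 + X[:, 0] * A + 0.5 * rng.standard_normal(n)    # optimal rule: treat if X[:, 0] > 0

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

beta, lr, scale = np.zeros(p), 0.05, 5.0
for _ in range(500):
    z = scale * A * (X @ beta)                          # smoothed agreement between rule and A
    s = sigmoid(z)
    # Gradient of the smoothed IPW value  mean(Y * sigmoid(z) / 0.5)  w.r.t. beta.
    grad = (Y / 0.5 * s * (1 - s) * scale * A) @ X / n
    beta += lr * grad                                   # gradient ascent on the surrogate value
print("fitted rule weights:", beta)                     # weight on X[:, 0] should dominate
```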

We present a novel approach to adaptive optimal design of groundwater surveys - a methodology for choosing the location of the next monitoring well. Our dual-weighted approach borrows ideas from Bayesian optimisation and goal-oriented error estimation to propose the next monitoring well, given that some data are already available from existing wells. Our method is distinct from other optimal design strategies in that it does not rely on Fisher information; instead, it directly exploits the posterior uncertainty and the expected solution to a dual (or adjoint) problem to construct an acquisition function that optimally reduces the uncertainty in the model as a whole and in some engineering quantity of interest in particular. We demonstrate our approach in the context of a 2D groundwater flow example and show that employing the expectation of the dual solution as a weighting function improves the posterior estimate of the quantity of interest on average by a factor of 3, compared to the baseline approach in which only the posterior uncertainty is considered.
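
A schematic sketch of the acquisition step only: the next well is placed where the pointwise posterior standard deviation, weighted by the magnitude of the expected dual (adjoint) solution, is largest. The posterior ensemble and dual solution below are synthetic stand-ins, not outputs of a groundwater flow model.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 201)                       # candidate well locations

# Stand-in posterior ensemble of the unknown field (e.g. samples from MCMC):
# smooth random expansions, purely for illustration.
freqs = np.arange(1, 11)
coef = rng.standard_normal((500, len(freqs))) / freqs
samples = coef @ np.sin(np.pi * np.outer(freqs, x))  # shape (500, 201)

# Stand-in expected dual solution for a quantity of interest localized near x = 0.7.
dual_mean = np.exp(-((x - 0.7) ** 2) / 0.01)

posterior_std = samples.std(axis=0)
acquisition = posterior_std * np.abs(dual_mean)      # dual-weighted acquisition function
print("next monitoring well at x =", x[np.argmax(acquisition)])
```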

In this paper we obtain error bounds for fully discrete approximations of infinite horizon problems via the dynamic programming approach. It is well known that, for a time discretization with a positive step size $h$, an error bound of size $h$ can be proved for the difference between the value function (the viscosity solution of the Hamilton-Jacobi-Bellman equation corresponding to the infinite horizon problem) and the value function of the discrete-time problem. However, when a spatial discretization based on elements of size $k$ is also included, only an error bound of size $O(k/h)$ can be found in the literature for the error between the value functions of the continuous problem and the fully discrete problem. In this paper we revise the error bound of the fully discrete method and prove, under assumptions similar to those of the time-discrete case, that the error of the fully discrete case is in fact $O(h+k)$, which gives first order in time and space for the method. This error bound matches the numerical experiments of many papers in the literature, in which the behaviour $1/h$ from the bound $O(k/h)$ has not been observed.
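
A minimal sketch of the fully discrete scheme behind such bounds: value iteration with time step h, a spatial grid of spacing k, and linear interpolation between grid nodes. The control problem used here (dynamics x' = a, running cost x^2 + a^2, discount rate 1) is an arbitrary test case; its exact value function is ((sqrt(5) - 1)/2) x^2, so the printed value should be close to 0.618.

```python
import numpy as np

h, k, lam = 0.05, 0.02, 1.0
x = np.linspace(-2.0, 2.0, int(4.0 / k) + 1)     # spatial grid with spacing k, clamped domain
controls = np.linspace(-2.0, 2.0, 41)
V = np.zeros_like(x)

for _ in range(2000):                            # fixed-point (value) iteration
    V_new = np.full_like(V, np.inf)
    for a in controls:
        x_next = np.clip(x + h * a, x[0], x[-1])           # Euler step of the dynamics
        cost = h * (x ** 2 + a ** 2) + (1 - lam * h) * np.interp(x_next, x, V)
        V_new = np.minimum(V_new, cost)                    # minimize over the control grid
    delta = np.max(np.abs(V_new - V))
    V = V_new
    if delta < 1e-10:                            # iteration is a contraction with factor 1 - lam*h
        break
print("approximate value at x = 1:", np.interp(1.0, x, V))
```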

We consider off-policy evaluation (OPE) in Partially Observable Markov Decision Processes (POMDPs), where the evaluation policy depends only on observable variables and the behavior policy depends on unobservable latent variables. Existing works either assume no unmeasured confounders, or focus on settings where both the observation and the state spaces are tabular. As such, these methods suffer from either a large bias in the presence of unmeasured confounders, or a large variance in settings with continuous or large observation/state spaces. In this work, we first propose novel identification methods for OPE in POMDPs with latent confounders, by introducing bridge functions that link the target policy's value and the observed data distribution. In fully-observable MDPs, these bridge functions reduce to the familiar value functions and marginal density ratios between the evaluation and the behavior policies. We next propose minimax estimation methods for learning these bridge functions. Our proposal permits general function approximation and is thus applicable to settings with continuous or large observation/state spaces. Finally, we construct three estimators based on these estimated bridge functions, corresponding to a value function-based estimator, a marginalized importance sampling estimator, and a doubly-robust estimator. Their nonasymptotic and asymptotic properties are investigated in detail.
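
A schematic sketch of the three estimator types once bridge functions are available, written for the fully observable (MDP) special case mentioned in the abstract, where the value bridge reduces to a Q-function and the weight bridge to a marginal density ratio. The plug-ins q_hat, v_hat, w_hat and the discounted-return conventions below are hypothetical, standard-OPE stand-ins rather than the paper's estimators.

```python
import numpy as np

def ope_estimators(s0, s, a, r, s_next, q_hat, v_hat, w_hat, gamma=0.99):
    """
    s0: sampled initial states; (s, a, r, s_next): transitions from the behavior policy.
    q_hat(s, a): estimated Q-function of the evaluation policy;
    v_hat(s): E_{a ~ pi_e}[q_hat(s, a)];
    w_hat(s, a): estimated ratio of discounted state-action occupancy measures.
    """
    dm = np.mean(v_hat(s0))                                # 1. value-function-based (direct)
    mis = np.mean(w_hat(s, a) * r) / (1.0 - gamma)         # 2. marginalized importance sampling
    td = r + gamma * v_hat(s_next) - q_hat(s, a)           # temporal-difference residual
    dr = dm + np.mean(w_hat(s, a) * td) / (1.0 - gamma)    # 3. doubly robust combination
    return dm, mis, dr

# Toy usage with made-up arrays and trivial plug-ins, just to show the shapes.
rng = np.random.default_rng(0)
n = 1000
s0, s, s_next = rng.standard_normal(n), rng.standard_normal(n), rng.standard_normal(n)
a, r = rng.integers(0, 2, n), rng.standard_normal(n)
q_hat = lambda s, a: np.zeros_like(s)
v_hat = lambda s: np.zeros_like(s)
w_hat = lambda s, a: np.ones_like(s)
print(ope_estimators(s0, s, a, r, s_next, q_hat, v_hat, w_hat))
```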

In order to avoid the curse of dimensionality frequently encountered in Big Data analysis, there has been vast development of linear and nonlinear dimension reduction techniques in recent years. These techniques (sometimes referred to as manifold learning) assume that the scattered input data lie on a lower-dimensional manifold, so the high-dimensionality problem can be overcome by learning the lower-dimensional behavior. However, in real-life applications, data is often very noisy. In this work, we propose a method to approximate $\mathcal{M}$, a $d$-dimensional $C^{m+1}$ smooth submanifold of $\mathbb{R}^n$ ($d \ll n$), based upon noisy scattered data points (i.e., a data cloud). We assume that the data points are located "near" the lower-dimensional manifold and suggest a non-linear moving least-squares projection onto an approximating $d$-dimensional manifold. Under some mild assumptions, the resulting approximant is shown to be infinitely smooth and of high approximation order (namely, $O(h^{m+1})$, where $h$ is the fill distance and $m$ is the degree of the local polynomial approximation). The method presented here assumes no analytic knowledge of the approximated manifold, and the approximation algorithm is linear in the large dimension $n$. Furthermore, the approximating manifold can serve as a framework for performing operations directly on the high-dimensional data in a computationally efficient manner. In this way, the preparatory step of dimension reduction, which introduces distortions into the data, can be avoided altogether.
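
A simplified sketch of a moving least-squares style projection (not the paper's full iterative construction): around a query point, a local tangent space is found by weighted PCA, a weighted quadratic is fitted to the normal components, and the point is projected onto the resulting local graph. The test data (a noisy circle, d = 1, embedded in R^3), bandwidth, and polynomial degree are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
t = rng.uniform(0, 2 * np.pi, 2000)
P = np.c_[np.cos(t), np.sin(t), np.zeros_like(t)] + 0.02 * rng.standard_normal((2000, 3))

def mls_project(q, P, h=0.3, d=1, degree=2):
    w = np.exp(-np.sum((P - q) ** 2, axis=1) / h ** 2)       # locality weights
    mu = (w[:, None] * P).sum(0) / w.sum()                   # weighted centroid
    C = (w[:, None] * (P - mu)).T @ (P - mu) / w.sum()       # weighted covariance
    _, U = np.linalg.eigh(C)
    T, Nrm = U[:, -d:], U[:, :-d]                            # tangent and normal directions
    x = (P - mu) @ T                                         # local tangent coordinates, (n, d)
    y = (P - mu) @ Nrm                                       # normal components
    V = np.hstack([x ** k for k in range(degree + 1)])       # 1D polynomial basis (d = 1 here)
    A = V.T @ (w[:, None] * V)
    b = V.T @ (w[:, None] * y)
    coef = np.linalg.solve(A, b)                             # weighted least-squares fit
    xq = (q - mu) @ T
    yq = np.hstack([xq ** k for k in range(degree + 1)]) @ coef
    return mu + xq @ T.T + yq @ Nrm.T                        # point on the local graph

q = P[0]
p = mls_project(q, P)
dist = lambda z: np.hypot(np.linalg.norm(z[:2]) - 1.0, z[2])     # distance to the true circle
print("distance to the circle before / after projection:", dist(q), dist(p))
```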

Stochastic gradient Markov chain Monte Carlo (SGMCMC) has become a popular method for scalable Bayesian inference. These methods are based on sampling a discrete-time approximation to a continuous-time process, such as the Langevin diffusion. When applied to distributions defined on a constrained space, such as the simplex, the time-discretisation error can dominate when we are near the boundary of the space. We demonstrate that while current SGMCMC methods for the simplex perform well in certain cases, they struggle with sparse simplex spaces, i.e., when many of the components are close to zero. However, most popular large-scale applications of Bayesian inference on simplex spaces, such as network or topic models, are sparse. We argue that this poor performance is due to the biases of SGMCMC caused by the discretisation error. To get around this, we propose the stochastic Cox-Ingersoll-Ross (CIR) process, which removes all discretisation error, and we prove that samples from the stochastic CIR process are asymptotically unbiased. Use of the stochastic CIR process within an SGMCMC algorithm is shown to give substantially better performance for a topic model and a Dirichlet process mixture model than existing SGMCMC approaches.
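
A minimal sketch of why a CIR process avoids discretisation error: for d theta = (a - theta) dt + sqrt(2 theta) dW, whose stationary law is Gamma(a, 1), the transition over any step h can be sampled exactly as a scaled noncentral chi-squared draw. In the SGMCMC setting the parameter a would be replaced by a stochastic (minibatch) estimate; here it is a fixed constant, and the parameterisation is ours, chosen for illustration.

```python
import numpy as np

def cir_exact_step(theta, a, h, rng):
    """Exact transition of d theta = (a - theta) dt + sqrt(2 theta) dW over a step of size h."""
    c = 0.5 * (1.0 - np.exp(-h))                        # scale of the noncentral chi-squared
    lam = theta * np.exp(-h) / c                        # noncentrality parameter
    return c * rng.noncentral_chisquare(2.0 * a, lam)   # theta(t + h), no discretisation error

rng = np.random.default_rng(0)
a, h = 2.0, 0.1
theta = 1.0
draws = np.empty(100_000)
for j in range(draws.size):
    theta = cir_exact_step(theta, a, h, rng)
    draws[j] = theta
# The stationary law is Gamma(a, 1): mean a and variance a.
print("empirical mean and variance:", draws.mean(), draws.var(), "target:", a, a)
```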
