
We study a nonparametric Bayesian approach to estimating the volatility function of a stochastic differential equation driven by a gamma process. The volatility function is modelled a priori as piecewise constant, and we specify a gamma prior on its values. This leads to a straightforward MCMC procedure for posterior inference. We give theoretical performance guarantees (contraction rates for the posterior) for the Bayesian estimate in terms of the regularity of the unknown volatility function. We illustrate the method on synthetic and real data examples.
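The piecewise-constant prior makes posterior sampling simple. As a minimal sketch (with an illustrative Gaussian toy likelihood standing in for the gamma-process-driven SDE, and all bin counts and hyperparameters assumed), a random-walk Metropolis sampler on the log of each volatility value looks like:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data (assumed, not from the paper): the true volatility is piecewise
# constant on [0, 1) with 4 bins, and we observe Gaussian increments -- a
# simplification of the gamma-process-driven SDE in the abstract.
n, n_bins = 2000, 4
t = rng.uniform(0, 1, n)
true_sigma = np.array([0.5, 1.0, 2.0, 1.5])
bins = np.minimum((t * n_bins).astype(int), n_bins - 1)
x = rng.normal(0.0, true_sigma[bins])

a, b = 2.0, 2.0           # gamma prior (shape, rate) on each volatility value
sigma = np.ones(n_bins)   # current MCMC state
chain = []

def log_post(s):
    # log gamma prior + Gaussian log-likelihood, up to an additive constant
    prior = np.sum((a - 1) * np.log(s) - b * s)
    lik = np.sum(-np.log(s[bins]) - 0.5 * (x / s[bins]) ** 2)
    return prior + lik

for it in range(2000):
    for k in range(n_bins):   # random-walk Metropolis on log sigma_k
        prop = sigma.copy()
        prop[k] *= np.exp(0.3 * rng.normal())
        # Hastings ratio includes the log-scale proposal Jacobian
        log_acc = (log_post(prop) - log_post(sigma)
                   + np.log(prop[k]) - np.log(sigma[k]))
        if np.log(rng.uniform()) < log_acc:
            sigma = prop
    if it >= 500:
        chain.append(sigma.copy())

post_mean = np.mean(chain, axis=0)
```

With enough observations per bin, the posterior mean recovers the piecewise-constant values; the contraction-rate results in the paper make this precise for the actual gamma-process model.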

Related Content

Robust statistical data modelling under potential model mis-specification often requires leaving the parametric world for the nonparametric. In the latter, parameters are infinite-dimensional objects such as functions, probability distributions or infinite vectors. In the Bayesian nonparametric approach, prior distributions are designed for these parameters, which provide a handle to manage the complexity of nonparametric models in practice. However, most modern Bayesian nonparametric models often seem out of reach to practitioners, as inference algorithms need careful design to deal with the infinite number of parameters. The aim of this work is to facilitate the journey by providing computational tools for Bayesian nonparametric inference. The article describes a set of functions available in the R package BNPdensity in order to carry out density estimation with an infinite mixture model, including all types of censored data. The package provides access to a large class of such models based on normalized random measures, which represent a generalization of the popular Dirichlet process mixture. One striking advantage of this generalization is that it offers much more robust priors on the number of clusters than the Dirichlet process. Another crucial advantage is the complete flexibility in specifying the prior for the scale and location parameters of the clusters, because conjugacy is not required. Inference is performed using a theoretically grounded approximate sampling methodology known as the Ferguson & Klass algorithm. The package also offers several goodness-of-fit diagnostics, such as QQ-plots, as well as a cross-validation criterion, the conditional predictive ordinate. The proposed methodology is illustrated on a classical ecological risk assessment method called the Species Sensitivity Distribution (SSD) problem, showcasing the benefits of the Bayesian nonparametric framework.
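For intuition about the random measures involved, here is a sketch of the popular special case that BNPdensity's normalized random measures generalize: a draw from a Dirichlet process mixture of Gaussians via stick-breaking (not the Ferguson & Klass sampler the package actually uses; the truncation level and hyperparameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

# A draw from a Dirichlet process mixture of Gaussians via stick-breaking --
# the popular special case that the normalized random measures in BNPdensity
# generalize. The truncation level and hyperparameters are illustrative.
alpha, trunc = 2.0, 100            # DP concentration and truncation level
v = rng.beta(1.0, alpha, trunc)    # stick-breaking proportions
w = v * np.concatenate(([1.0], np.cumprod(1 - v)[:-1]))   # mixture weights
mu = rng.normal(0.0, 3.0, trunc)                 # locations from base measure
sd = 1.0 / np.sqrt(rng.gamma(2.0, 1.0, trunc))   # scales from base measure

def density(x):
    """Random mixture density evaluated at the points x."""
    x = np.atleast_1d(x)[:, None]
    comp = np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))
    return comp @ w

grid = np.linspace(-30, 30, 2001)
f = density(grid)
```

Replacing the beta stick-breaking weights with normalized jumps of a more general completely random measure is exactly what changes the induced prior on the number of clusters.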

Despite the ubiquity of kernel-based clustering, surprisingly few statistical guarantees exist beyond settings that consider strong structural assumptions on the data generation process. In this work, we take a step towards bridging this gap by studying the statistical performance of kernel-based clustering algorithms under nonparametric mixture models. We provide necessary and sufficient separability conditions under which these algorithms can consistently recover the underlying true clustering. Our analysis provides guarantees for kernel clustering approaches without structural assumptions on the form of the component distributions. Additionally, we establish a key equivalence between kernel-based data-clustering and kernel density-based clustering. This enables us to provide consistency guarantees for kernel-based estimators of nonparametric mixture models. Along with theoretical implications, this connection could have practical implications, including in the systematic choice of the bandwidth of the Gaussian kernel in the context of clustering.
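The kernel density-based side of this equivalence can be sketched with mean-shift clustering: each point ascends the Gaussian-kernel density estimate and is grouped by the mode it reaches, with the bandwidth h as the key tuning parameter (data and h below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

# Mean-shift clustering with a Gaussian kernel: each point ascends the kernel
# density estimate and is grouped by the mode it reaches. Data and bandwidth
# are illustrative: a two-component mixture that is clearly separable.
x = np.concatenate([rng.normal(-3, 0.5, 100), rng.normal(3, 0.5, 100)])

def mean_shift(x, h, n_iter=100):
    z = x.copy()
    for _ in range(n_iter):
        # Gaussian-kernel weighted mean of the data around each current point
        w = np.exp(-0.5 * ((z[:, None] - x[None, :]) / h) ** 2)
        z = (w @ x) / w.sum(axis=1)
    return z

modes = mean_shift(x, h=0.7)

# Group points whose converged positions are close (gap-based labelling)
order = np.argsort(modes)
labels = np.empty(len(x), dtype=int)
labels[order[0]], lab = 0, 0
for i, j in zip(order[:-1], order[1:]):
    if modes[j] - modes[i] > 0.5:   # a large gap separates two modes
        lab += 1
    labels[j] = lab
n_clusters = lab + 1
```

Whether the recovered modes match the true mixture components is exactly what the separability conditions in the abstract govern.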

Nonlinear mixed-effects models are hidden-variable models that are widely used in many fields, such as pharmacometrics. In such models, the distribution of the hidden variables can be specified by including several parameters, such as covariates or correlations, which must be selected. The recent development of pharmacogenomics has brought high-dimensional problems to the field of nonlinear mixed-effects modelling, for which standard covariate selection techniques, like stepwise methods, are not well suited. The selection of covariates and correlation parameters using a penalized likelihood approach is proposed. The penalized likelihood problem is solved using a stochastic proximal gradient algorithm to avoid inner-outer iterations. The speed of convergence of the proximal gradient algorithm is improved using component-wise adaptive gradient step sizes. The practical implementation and tuning of the proximal gradient algorithm are explored using simulations. Calibration of the regularization parameters is performed by minimizing the Bayesian Information Criterion using particle swarm optimization, a zero-order optimization procedure. The use of warm restarts and parallelization allowed computing time to be reduced significantly. The performance of the proposed method compared to the traditional grid-search strategy is explored using simulated data. Finally, an application to real data from two pharmacokinetic studies is provided, one studying an antifibrinolytic and the other an antibiotic.
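The proximal gradient idea with component-wise step sizes can be sketched on a much simpler problem, an L1-penalized least-squares regression (a stand-in for the penalized likelihood; the per-coordinate steps, damping factor, and all data are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)

# A proximal gradient (ISTA-style) solver for an L1-penalized least-squares
# problem -- a simplified stand-in for the penalized likelihood above -- with
# component-wise step sizes scaled by per-coordinate curvature (column norms
# of the design). The 0.5 damping keeps the scaled steps stable. All data
# and tuning values are illustrative.
n, p = 200, 50
X = rng.normal(size=(n, p))
beta_true = np.zeros(p)
beta_true[:5] = [3.0, -2.0, 1.5, -1.0, 2.0]
y = X @ beta_true + rng.normal(size=n)

lam = 0.1 * n                              # L1 penalty strength
step = 0.5 / np.sum(X ** 2, axis=0)        # component-wise step sizes

beta = np.zeros(p)
for _ in range(500):
    grad = X.T @ (X @ beta - y)            # gradient of 0.5 * ||y - X beta||^2
    z = beta - step * grad                 # coordinate-wise gradient step
    beta = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # prox of L1
```

The soft-thresholding step is the proximal operator of the L1 penalty; in the paper the smooth part is a stochastic approximation of the model's log-likelihood rather than a least-squares loss.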

The problem of the mean-square optimal estimation of linear functionals which depend on the unknown values of a stochastic stationary sequence, from observations of the sequence at special sets of points, is considered. Formulas for calculating the mean-square error and the spectral characteristic of the optimal linear estimate of the functionals are derived under the condition of spectral certainty, where the spectral density of the sequence is exactly known. The minimax (robust) method of estimation is applied in the case where the spectral density of the sequence is not known exactly while some sets of admissible spectral densities are given. Formulas that determine the least favourable spectral densities and the minimax spectral characteristics are derived for some special sets of admissible densities.

A new statistical method, Independent Approximates (IAs), is defined and proven to enable closed-form estimation of the parameters of heavy-tailed distributions. Given independent, identically distributed samples from a one-dimensional distribution, IAs are formed by partitioning the samples into pairs, triplets, or nth-order groupings and retaining the medians of those groupings whose members are approximately equal. The pdf of the IAs is proven to be the normalized nth power of the original density. From this property, heavy-tailed distributions are proven to have well-defined means for their IA pairs, finite second moments for their IA triplets, and a finite, well-defined (n-1)th moment for the nth-order grouping. Estimation of the location, scale, and shape (the inverse of the degrees of freedom) of the generalized Pareto and Student's t distributions is possible via a system of three equations. A performance analysis of the IA estimation methodology is conducted for the Student's t distribution using between 1,000 and 100,000 samples. Closed-form estimates of the location and scale are determined from the mean of the IA pairs and the variance of the IA triplets, respectively. For the Student's t distribution, the geometric mean of the original samples provides a third equation to determine the shape, though its nonlinear solution requires an iterative solver. With 10,000 samples, the relative bias of the parameter estimates is less than 0.01 and the relative precision is within +/-0.1. The theoretical precision is finite for a limited range of the shape but can be extended by using higher-order groupings for a given moment.
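The triplet construction can be sketched in a few lines; the "approximately equal" tolerance below is an illustrative choice, not the paper's. Because the IA density is the normalized cube of the original density, the IA medians have a finite variance even though raw t(2) samples do not:

```python
import numpy as np

rng = np.random.default_rng(4)

# Sketch of the Independent Approximates idea for triplets: partition
# heavy-tailed samples into triplets, keep only the triplets whose three
# values are nearly equal, and retain each kept triplet's median. The
# tolerance below is an illustrative choice, not the paper's.
nu = 2.0                                  # Student's t with infinite variance
x = rng.standard_t(nu, size=90_000)
triplets = np.sort(x.reshape(-1, 3), axis=1)
spread = triplets[:, 2] - triplets[:, 0]  # range of each triplet
ias = triplets[spread < 0.3, 1]           # medians of nearly-equal triplets
```

The variance of `ias` is then a usable moment from which scale can be estimated in closed form, as described above.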

Extreme-value copulas arise as the limiting dependence structure of component-wise maxima. Defined in terms of a functional parameter, they are one of the most widespread copula families due to their flexibility and ability to capture asymmetry. Despite this, satisfying the complex analytical constraints on this parameter in an unconstrained setting remains a challenge, restricting most uses to models with very few parameters or to nonparametric models. In this paper, we focus on the bivariate case and propose a novel approach for estimating this functional parameter in a semiparametric manner. Our procedure relies on a series of transformations, including Williamson's transform, starting from a zero-integral spline. The spline coordinates are fit through maximum likelihood estimation, leveraging gradient optimization, without imposing further constraints. Our method produces efficient and wholly compliant solutions. We successfully conducted several experiments on both simulated and real-world data. Specifically, we test our method on scarce data gathered by the LIGO and Virgo gravitational-wave detection collaborations.
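Concretely, the functional parameter in the bivariate case is the Pickands dependence function A(t), which must satisfy A(0) = A(1) = 1, max(t, 1-t) <= A(t) <= 1, and convexity; these are the constraints a compliant estimator must meet. As an illustration (not the paper's spline construction), the Gumbel (logistic) family is one of the few tractable parametric cases:

```python
import numpy as np

# The functional parameter of a bivariate extreme-value copula is its
# Pickands dependence function A(t); it must satisfy A(0) = A(1) = 1,
# max(t, 1 - t) <= A(t) <= 1, and convexity. We evaluate the Gumbel
# (logistic) family, one of the few tractable parametric examples.
def pickands_gumbel(t, theta):
    """Pickands function of the Gumbel copula, theta >= 1."""
    return (t ** theta + (1 - t) ** theta) ** (1.0 / theta)

t = np.linspace(0.0, 1.0, 1001)
A = pickands_gumbel(t, theta=2.5)
```

theta = 1 gives independence (A identically 1) and theta -> infinity gives comonotonicity (A(t) = max(t, 1-t)); the paper's spline-based construction produces valid A functions outside any such parametric family.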

When studying treatment effects in multilevel studies, investigators commonly use (semi-)parametric estimators, which make strong parametric assumptions about the outcome, the treatment, and/or the correlation between individuals. We propose two nonparametric, doubly robust, asymptotically normal estimators of treatment effects that do not make such assumptions. The first estimator is an extension of the cross-fitting estimator to clustered settings. The second estimator is a new estimator that uses conditional propensity scores and an outcome covariance model to improve efficiency. We apply our estimators in simulation and empirical studies and find that they consistently obtain the smallest standard errors.
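The cross-fitting, doubly robust idea can be sketched in a non-clustered toy: fit nuisance models on one fold, evaluate the augmented inverse-propensity-weighted (AIPW) influence function on the other, and average. Here the propensity is a constant (randomized treatment) and the outcome models are linear least-squares fits; the paper's estimators replace these with flexible nonparametric learners and handle clustering:

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy cross-fitted AIPW (doubly robust) estimator of the average treatment
# effect. The data-generating process, constant propensity, and linear
# outcome models are illustrative simplifications.
n = 4000
x = rng.normal(size=n)
a = rng.binomial(1, 0.5, size=n)
y = 2.0 * a + x + rng.normal(size=n)    # true ATE = 2

folds = np.arange(n) % 2
psi = np.empty(n)
for k in (0, 1):
    tr, te = folds != k, folds == k
    pi = a[tr].mean()                   # propensity fit on the training fold
    X = np.column_stack([np.ones(tr.sum()), x[tr]])
    b1 = np.linalg.lstsq(X[a[tr] == 1], y[tr][a[tr] == 1], rcond=None)[0]
    b0 = np.linalg.lstsq(X[a[tr] == 0], y[tr][a[tr] == 0], rcond=None)[0]
    Xt = np.column_stack([np.ones(te.sum()), x[te]])
    m1, m0 = Xt @ b1, Xt @ b0           # outcome predictions on the held-out fold
    psi[te] = (m1 - m0
               + a[te] * (y[te] - m1) / pi
               - (1 - a[te]) * (y[te] - m0) / (1 - pi))

ate = psi.mean()
```

The estimator stays consistent if either the propensity or the outcome model is correct, which is the double robustness the abstract refers to.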

Thanks to their ability to capture complex dependence structures, copulas are frequently used to glue random variables into a joint model with arbitrary marginal distributions. More recently, they have been applied to solve statistical learning problems such as regression or classification. Framing such approaches as solutions of estimating equations, we generalize them in a unified framework. We can then obtain simultaneous, coherent inferences across multiple regression-like problems. We derive consistency, asymptotic normality, and validity of the bootstrap for corresponding estimators. The conditions allow for both continuous and discrete data as well as parametric, nonparametric, and semiparametric estimators of the copula and marginal distributions. The versatility of this methodology is illustrated by several theoretical examples, a simulation study, and an application to financial portfolio allocation.
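A toy instance of the estimating-equation viewpoint is the rank-based moment estimator for a bivariate Gaussian copula: Kendall's tau satisfies tau = (2/pi) arcsin(rho), so inverting the empirical tau estimates the copula parameter without touching the margins (margins and sample size below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(6)

# Rank-based estimating equation: for a bivariate Gaussian copula,
# tau = (2 / pi) * arcsin(rho), so the empirical Kendall's tau gives a
# margin-free estimate of rho. Margins and sample size are illustrative.
rho = 0.6
z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=500)
u = np.column_stack([np.exp(z[:, 0]), z[:, 1] ** 3])  # monotone margins

def kendall_tau(x, y):
    sx = np.sign(x[:, None] - x[None, :])
    sy = np.sign(y[:, None] - y[None, :])
    m = len(x)
    return np.sum(sx * sy) / (m * (m - 1))

tau = kendall_tau(u[:, 0], u[:, 1])
rho_hat = np.sin(np.pi * tau / 2.0)   # invert tau = (2/pi) * arcsin(rho)
```

Because Kendall's tau is invariant under strictly increasing marginal transformations, the estimate is unaffected by the arbitrary margins applied above, which is the kind of coherence across margin and copula estimation that the unified framework formalizes.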

This paper derives the generalized extreme value (GEV) model with implicit availability/perception (IAP) of alternatives and proposes a variational autoencoder (VAE) approach for choice set generation and the implicit perception of alternatives. Specifically, the cross-nested logit (CNL) model with IAP is derived as an example of IAP-GEV models. The VAE approach is adapted to model the choice set generation process, in which the likelihood of perceiving the chosen alternatives in the choice set is maximized. The VAE approach for route choice set generation is exemplified using a real dataset. The estimated IAP-CNL model has the best goodness of fit and prediction performance, compared to multinomial logit models and conventional choice set generation methods.
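For readers unfamiliar with the CNL, its choice probabilities can be computed directly from the GEV generating function; setting an allocation weight alpha[m, j] to zero removes alternative j from nest m, which is one way implicit availability can be encoded. The utilities, nests, and weights below are illustrative, not from the paper:

```python
import numpy as np

# Choice probabilities of a cross-nested logit (CNL) model in the
# lambda-in-(0,1] nest-parameter convention. Setting alpha[m, j] = 0
# removes alternative j from nest m. All numbers are illustrative.
def cnl_probs(V, alpha, mu):
    """V: utilities (J,); alpha: allocations (M, J), columns summing to 1;
    mu: nest parameters in (0, 1] (M,). Returns choice probabilities (J,)."""
    y = (alpha * np.exp(V)[None, :]) ** (1.0 / mu[:, None])  # (M, J)
    nest_sum = y.sum(axis=1)                                 # (M,)
    p_nest = nest_sum ** mu / np.sum(nest_sum ** mu)         # P(nest m)
    p_in_nest = y / nest_sum[:, None]                        # P(j | nest m)
    return p_nest @ p_in_nest

V = np.array([0.5, 0.0, -0.5, 0.2])
alpha = np.array([[0.5, 1.0, 0.0, 0.3],
                  [0.5, 0.0, 1.0, 0.7]])   # alt 3 absent from nest 1, etc.
mu = np.array([0.5, 0.8])
p = cnl_probs(V, alpha, mu)
```

In the paper's setting, the VAE supplies the perception of alternatives that determines which allocations are effectively active for each chooser.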

Discrete random structures are important tools in Bayesian nonparametrics, and the resulting models have proven effective in density estimation, clustering, topic modeling and prediction, among others. In this paper, we consider nested processes and study the dependence structures they induce. Dependence ranges between homogeneity, corresponding to full exchangeability, and maximum heterogeneity, corresponding to (unconditional) independence across samples. The popular nested Dirichlet process is shown to degenerate to the fully exchangeable case when there are ties across samples at the observed or latent level. To overcome this drawback, inherent to nesting general discrete random measures, we introduce a novel class of latent nested processes. These are obtained by adding common and group-specific completely random measures and then normalising to yield dependent random probability measures. We provide results on the partition distributions induced by latent nested processes, and develop a Markov chain Monte Carlo sampler for Bayesian inference. A test for distributional homogeneity across groups is obtained as a by-product. The results and their inferential implications are showcased on synthetic and real data.
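The additive construction can be sketched in a finite-dimensional approximation (fixed atoms, gamma weights; all masses illustrative): each group's random measure is a common gamma measure plus a group-specific one, and normalizing yields dependent random probability vectors whose dependence is controlled by the shared component:

```python
import numpy as np

rng = np.random.default_rng(7)

# Finite-dimensional sketch of the additive construction: each group's
# measure is a common gamma measure mu_0 plus a group-specific gamma measure
# mu_j over a fixed grid of atoms; normalizing gives dependent random
# probability vectors. With c0 = 0 the groups would be independent, and
# dependence grows with c0. All masses are illustrative.
K = 50                        # number of atoms in the finite approximation
c0, c = 5.0, 5.0              # total mass of shared / group-specific parts

def dependent_probs():
    mu0 = rng.gamma(c0 / K, 1.0, K)       # common completely random measure
    p = []
    for _ in range(2):                    # two groups
        muj = rng.gamma(c / K, 1.0, K)    # group-specific measure
        m = mu0 + muj
        p.append(m / m.sum())             # normalize to a probability vector
    return p

draws = [dependent_probs() for _ in range(2000)]
p1 = np.array([d[0] for d in draws])
p2 = np.array([d[1] for d in draws])
# correlation of atom masses across groups, induced by the shared mu0
corr = np.corrcoef(p1[:, 0], p2[:, 0])[0, 1]
```

The positive correlation between the groups' atom masses is the dependence that, in the paper, interpolates between full exchangeability and independence across samples.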
