
In high-dimensional classification problems, a commonly used approach is to first project the high-dimensional features into a lower-dimensional space and base the classification on the resulting lower-dimensional projections. In this paper, we formulate a latent-variable model with a hidden low-dimensional structure to justify this two-step procedure and to guide which projection to choose. We propose a computationally efficient classifier that takes certain principal components (PCs) of the observed features as projections, with the number of retained PCs selected in a data-driven way. A general theory is established for analyzing such two-step classifiers based on any projections. We derive explicit rates of convergence of the excess risk of the proposed PC-based classifier. The obtained rates are further shown to be optimal up to logarithmic factors in the minimax sense. Our theory allows the lower dimension to grow with the sample size and remains valid even when the feature dimension (greatly) exceeds the sample size. Extensive simulations corroborate our theoretical findings. The proposed method also performs favorably relative to other existing discriminant methods on three real data examples.
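
As a minimal sketch of the generic two-step recipe described above (project onto a few principal components, then classify on the projections), the snippet below uses a PCA-plus-LDA pipeline with the number of retained PCs chosen by cross-validation; this selection rule is only an illustrative stand-in for the paper's data-driven criterion.

```python
# Two-step "project, then classify" sketch: top-k PCs followed by LDA.
# The cross-validated choice of k is an assumption for illustration,
# not the paper's selection rule.
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(0)
n, p, r = 200, 500, 3                      # n samples, p >> n features, latent dimension r
U = rng.normal(size=(p, r))                # hidden low-dimensional structure
Z = rng.normal(size=(n, r))                # latent factors
y = (Z[:, 0] > 0).astype(int)              # class label driven by the latent space
X = Z @ U.T + 0.5 * rng.normal(size=(n, p))

two_step = Pipeline([("pca", PCA()), ("clf", LinearDiscriminantAnalysis())])
search = GridSearchCV(two_step, {"pca__n_components": [1, 2, 3, 5, 10]}, cv=5)
search.fit(X, y)
print("retained PCs:", search.best_params_["pca__n_components"])
```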

Related content

This paper considers ranking inference of $n$ items based on the observed data on the top choice among $M$ randomly selected items at each trial. This is a useful modification of the Plackett-Luce model for $M$-way ranking with only the top choice observed, and is an extension of the celebrated Bradley-Terry-Luce model, which corresponds to $M=2$. Under a uniform sampling scheme in which any $M$ distinct items are selected for comparison with probability $p$ and the selected $M$ items are compared $L$ times with multinomial outcomes, we establish the statistical rates of convergence for the underlying $n$ preference scores using both the $\ell_2$-norm and the $\ell_\infty$-norm, with the minimum sampling complexity. In addition, we establish the asymptotic normality of the maximum likelihood estimator, which allows us to construct confidence intervals for the underlying scores. Furthermore, we propose a novel inference framework for ranking items through a sophisticated maximum pairwise difference statistic whose distribution is estimated via a valid Gaussian multiplier bootstrap. The estimated distribution is then used to construct simultaneous confidence intervals for the differences in the preference scores and for the ranks of individual items. These intervals also enable us to address various inference questions on the ranks of these items. Extensive simulation studies lend further support to our theoretical results. A real data application convincingly illustrates the usefulness of the proposed methods.
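
A toy sketch of the top-choice likelihood underlying this setting is given below: preference scores are fitted by maximum likelihood under a multinomial-logit (Luce) choice model for the observed top pick among $M$ randomly selected items. The plain numerical optimization is illustrative only; the paper's sampling scheme, rates, and bootstrap-based inference are not reproduced.

```python
# MLE of preference scores from top-choice-of-M data under a Luce choice model.
# Simulation design and optimizer are illustrative assumptions, not the paper's.
import numpy as np
from scipy.special import logsumexp, softmax
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n_items, M, n_trials = 20, 4, 3000
theta_true = rng.normal(size=n_items)

S_all = np.array([rng.choice(n_items, size=M, replace=False) for _ in range(n_trials)])
winners = np.array([rng.choice(S, p=softmax(theta_true[S])) for S in S_all])

def neg_loglik(theta):
    # multinomial-logit likelihood of the observed top choices
    return -(theta[winners].sum() - logsumexp(theta[S_all], axis=1).sum())

res = minimize(neg_loglik, np.zeros(n_items), method="BFGS")
theta_hat = res.x - res.x.mean()           # scores are identified only up to a common shift
print("correlation with truth:", np.corrcoef(theta_hat, theta_true - theta_true.mean())[0, 1])
```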

We present a new method for modelling numerical systems with two distinct output solution classes, for example tipping points or bifurcations. Gaussian process emulation is a useful tool for understanding these complex systems and provides estimates of uncertainty, but we aim to include systems with discontinuities between the two output solutions. Because standard emulators rely on continuity assumptions, we turn to classification methods to split the input space into two output regions. Current classification and logistic regression methods rely on drawing from an independent Bernoulli distribution, which neglects any information known in the neighbouring area. We build on this by including correlation between input points. Gaussian processes remain a vital element, but are used in a latent space to model the two regions. Using the input values and the associated output class labels, the latent variable is estimated by MCMC sampling with a purpose-built likelihood. A threshold (usually at zero) defines the boundary between the regions. We apply our method to a motivating example provided by the hormones associated with the reproductive system in mammals, where the two solutions correspond to high and low rates of reproduction.
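
The following minimal sketch illustrates the underlying idea of thresholding a latent Gaussian process so that class labels at nearby inputs are correlated rather than independent Bernoulli draws. scikit-learn's Laplace-approximate GP classifier stands in for the MCMC-based latent-variable scheme described above, and the synthetic two-regime function is purely illustrative.

```python
# Latent-GP classification sketch: a GP in latent space, squashed through a link,
# separates the input space into two output regimes with spatial correlation.
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(2)
X = rng.uniform(0, 1, size=(80, 2))                       # input parameter settings
labels = (np.sin(6 * X[:, 0]) + X[:, 1] > 1).astype(int)  # two synthetic solution regimes

gpc = GaussianProcessClassifier(kernel=1.0 * RBF(length_scale=0.2))
gpc.fit(X, labels)

grid = np.array([[0.3, 0.9], [0.8, 0.1]])
print("P(high-output regime):", gpc.predict_proba(grid)[:, 1])
```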

In applications such as gene regulatory network analysis based on single-cell RNA sequencing data, samples often come from a mixture of different populations and each population has its own unique network. Available graphical models often assume that all samples are from the same population and share the same network. One has to first cluster the samples and use available methods to infer the network for every cluster separately. However, this two-step procedure ignores uncertainty in the clustering step and thus could lead to inaccurate network estimation. Motivated by these applications, we consider the mixture Poisson log-normal model for network inference of count data from mixed populations. The latent precision matrices of the mixture model correspond to the networks of different populations and can be jointly estimated by maximizing the lasso-penalized log-likelihood. Under rather mild conditions, we show that the mixture Poisson log-normal model is identifiable and has a positive definite Fisher information matrix. Consistency of the maximum lasso-penalized log-likelihood estimator is also established. To avoid the intractable optimization of the log-likelihood, we develop an algorithm called VMPLN based on the variational inference method. Comprehensive simulation and real single-cell RNA sequencing data analyses demonstrate the superior performance of VMPLN.
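
For orientation, a schematic form of the model and estimator described above, written in our own notation (which need not match the paper's exactly), is as follows: counts arise from latent Gaussian rates drawn from a $K$-component mixture, and the component precision matrices $\Theta_k$ encode the population-specific networks,
\[
x_{ij}\mid z_{ij} \sim \mathrm{Poisson}\bigl(e^{z_{ij}}\bigr), \qquad
z_i \sim \sum_{k=1}^{K} \pi_k\, \mathcal{N}\bigl(\mu_k,\ \Theta_k^{-1}\bigr),
\]
with the parameters estimated by maximizing a lasso-penalized log-likelihood of the form
\[
\sum_{i=1}^{n} \log p(x_i) \;-\; \lambda \sum_{k=1}^{K} \|\Theta_k\|_{1},
\]
where $\|\cdot\|_{1}$ denotes an elementwise $\ell_1$ penalty on the precision matrices.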

This paper offers a qualitative insight into the convergence of Bayesian parameter inference in a setup which mimics the modeling of the spread of a disease with associated disease measurements. Specifically, we are interested in the Bayesian model's convergence with increasing amounts of data under measurement limitations. Depending on how weakly informative the disease measurements are, we offer a kind of `best case' as well as a `worst case' analysis where, in the former case, we assume that the prevalence is directly accessible, while in the latter only a binary signal corresponding to a prevalence detection threshold is available. Both cases are studied under an assumed linear noise approximation to the true dynamics. Numerical experiments test the sharpness of our results when confronted with more realistic situations for which analytical results are unavailable.
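
A toy grid-posterior comparison of the two measurement regimes is sketched below: a `best case' where a noisy prevalence reading is observed directly, and a `worst case' where only a binary exceedance of a detection threshold is seen. The static Gaussian measurement model is our own simplification and does not implement the linear noise approximation of the disease dynamics.

```python
# Grid posterior for a prevalence parameter under two observation models:
# direct noisy readings vs. a binary detection-threshold signal.
# Measurement model and parameter values are illustrative assumptions.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
theta_true, sigma, c, T = 0.30, 0.05, 0.25, 200       # prevalence, noise sd, threshold, #observations
y = theta_true + sigma * rng.normal(size=T)           # best case: direct noisy prevalence
b = (y > c).astype(int)                               # worst case: binary detection signal only

grid = np.linspace(0.0, 1.0, 1001)                    # flat prior on [0, 1]
log_post_direct = norm.logpdf(y[:, None], loc=grid, scale=sigma).sum(axis=0)
p_exceed = norm.sf(c, loc=grid, scale=sigma)          # P(signal = 1 | theta)
log_post_binary = (b.sum() * np.log(p_exceed + 1e-300)
                   + (T - b.sum()) * np.log(1 - p_exceed + 1e-300))

for name, lp in [("direct", log_post_direct), ("binary", log_post_binary)]:
    post = np.exp(lp - lp.max()); post /= post.sum()
    sd = np.sqrt(np.sum(post * (grid - np.sum(post * grid)) ** 2))
    print(f"{name:>6} posterior sd: {sd:.4f}")        # the binary signal is far less informative
```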

We develop a data-driven optimal shrinkage algorithm for matrix denoising in the presence of high-dimensional noise with a separable covariance structure; that is, the noise is colored and dependent. The algorithm, coined extended OptShrink (eOptShrink), involves a novel imputation and rank estimation step and does not require estimating the separable covariance structure of the noise. On the theoretical side, we study the asymptotic behavior of the singular values and singular vectors of the random matrix associated with the noisy data, including the sticking property of non-outlier singular values and the delocalization of non-outlier singular vectors, with convergence rates. We apply these results to establish guarantees for the imputation, rank estimation, and the eOptShrink algorithm, with a convergence rate. On the application side, in addition to a series of numerical simulations comparing eOptShrink with various state-of-the-art optimal shrinkage algorithms, we apply it to extract the fetal electrocardiogram from single-channel trans-abdominal maternal electrocardiogram recordings.
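
The snippet below illustrates only the bare setting: a low-rank signal observed through additive separable-covariance (colored, dependent) noise, denoised by shrinking the singular values of the observation. The oracle-rank hard truncation is a crude placeholder for the data-driven rank estimation and optimal shrinkage that eOptShrink performs.

```python
# Matrix denoising toy: low-rank signal plus separable-covariance noise,
# denoised by singular-value truncation at an oracle rank (placeholder only).
import numpy as np

rng = np.random.default_rng(4)
p, n, r = 100, 300, 3
signal = rng.normal(size=(p, r)) @ rng.normal(size=(r, n)) / np.sqrt(r)

A = np.diag(np.linspace(0.5, 1.5, p))                 # row covariance factor
B = np.diag(np.linspace(0.5, 1.5, n))                 # column covariance factor
noise = A @ rng.normal(size=(p, n)) @ B / np.sqrt(n)  # separable (colored, dependent) noise
Y = signal + noise

U, s, Vt = np.linalg.svd(Y, full_matrices=False)
s_shrunk = np.where(np.arange(s.size) < r, s, 0.0)    # oracle-rank hard truncation
X_hat = (U * s_shrunk) @ Vt
print("relative error:", np.linalg.norm(X_hat - signal) / np.linalg.norm(signal))
```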

The bivariate Gaussian distribution has been a key model for many developments in statistics. However, many real-world phenomena generate data that follow asymmetric distributions, and consequently the bivariate normal model is inappropriate in such situations. Bivariate log-symmetric models have attractive properties and can be considered good alternatives in these cases. In this paper, we discuss bivariate log-symmetric distributions and their characterizations. We establish several distributional properties and obtain the maximum likelihood estimators of the model parameters. A Monte Carlo simulation study is performed to examine the performance of the developed parameter estimation method. A real data set is finally analyzed to illustrate the proposed model and the associated inferential method.
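
For a concrete special case, the sketch below works with the bivariate log-normal, one member of the log-symmetric family: taking logarithms maps the data to a bivariate Gaussian, so maximum-likelihood estimation reduces to the usual Gaussian MLE on the log scale. The broader family with other symmetric generators (e.g., Student-$t$) is not covered by this toy example.

```python
# Bivariate log-normal special case: MLE via the Gaussian MLE on log-transformed data.
import numpy as np

rng = np.random.default_rng(5)
mu = np.array([1.0, 2.0])
Sigma = np.array([[0.5, 0.3], [0.3, 0.4]])
logX = rng.multivariate_normal(mu, Sigma, size=2000)
X = np.exp(logX)                                      # skewed, positive bivariate data

Z = np.log(X)
mu_hat = Z.mean(axis=0)                               # MLE of the location parameters
Sigma_hat = np.cov(Z, rowvar=False, bias=True)        # MLE of the dispersion matrix
print("mu_hat:", mu_hat.round(3))
print("Sigma_hat:\n", Sigma_hat.round(3))
```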

Occam's razor is the guiding principle that models should be no more complex than needed to describe the observed data. While Bayesian model selection (BMS) embodies it through the intrinsic regularization effect (IRE), how the observed data scale the IRE has not been fully understood. In nonlinear regression with conditionally independent observations, we show that the IRE is scaled by the observations' fineness, defined by the amount and quality of the observed data. We introduce an observable that quantifies the IRE, referred to as the Bayes specific heat, inspired by the correspondence between statistical inference and statistical physics. We derive its scaling relation with respect to the observations' fineness. We demonstrate that the optimal model chosen by BMS changes at critical values of the observations' fineness, accompanied by a variation of the IRE. The change is from a coarse-grained model to a fine-grained one as the observations' fineness increases. Our findings expand the understanding of the typical behavior of BMS when the observed data are insufficient.
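
For readers unfamiliar with the statistical-physics dictionary invoked here, the standard tempered-posterior construction reads as follows; this is the textbook correspondence, and the paper's exact definition of the Bayes specific heat may differ. With tempered evidence $Z(\beta) = \int p(D\mid\theta)^{\beta}\,\pi(\theta)\,d\theta$ and free energy $F(\beta) = -\log Z(\beta)$, the specific-heat analogue is
\[
C(\beta) \;=\; \beta^{2}\,\frac{\partial^{2} \log Z(\beta)}{\partial \beta^{2}}
\;=\; \beta^{2}\, \mathrm{Var}_{p_{\beta}(\theta\mid D)}\bigl[\log p(D\mid\theta)\bigr],
\]
where $p_{\beta}(\theta\mid D) \propto p(D\mid\theta)^{\beta}\pi(\theta)$ is the tempered posterior.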

Exploratory factor analysis (EFA) has been widely used to learn the latent structure underlying multivariate data. Rotation and regularised estimation are two classes of methods in EFA that are widely used to find interpretable loading matrices. This paper proposes a new family of oblique rotations based on component-wise $L^p$ loss functions $(0 < p\leq 1)$ that is closely related to an $L^p$ regularised estimator. Model selection and post-selection inference procedures are developed based on the proposed rotation method. When the true loading matrix is sparse, the proposed method tends to outperform traditional rotation and regularised estimation methods in terms of statistical accuracy and computational cost. Since the proposed loss functions are non-smooth, an iteratively reweighted gradient projection algorithm is developed to solve the optimisation problem. Theoretical results are developed that establish the statistical consistency of the estimation, model selection, and post-selection inference. The proposed method is evaluated and compared with regularised estimation and traditional rotation methods via simulation studies. It is further illustrated by an application to the big-five personality assessment.
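
As a concrete reading of the criterion, in one common parameterization of oblique rotation (our notation, not necessarily the paper's): given an initial loading matrix $\widehat{A}$, the rotated loadings are $\Lambda(T) = \widehat{A}(T^{\top})^{-1}$ with $T$ constrained to have unit-length columns, and the proposed family minimizes the component-wise $L^p$ loss
\[
\min_{T:\ \operatorname{diag}(T^{\top}T)=I}\ \sum_{j,k} \bigl|\Lambda(T)_{jk}\bigr|^{p}, \qquad 0 < p \le 1,
\]
which for small $p$ drives many rotated loadings toward zero, much like an $L^p$ regularised estimator.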

We address the problem of integrating data from multiple observational and interventional studies to eventually compute counterfactuals in structural causal models. We derive a likelihood characterisation for the overall data that leads us to extend a previous EM-based algorithm from the case of a single study to that of multiple ones. The new algorithm learns to approximate the (unidentifiability) region of model parameters from such mixed data sources. On this basis, it delivers interval approximations to counterfactual results, which collapse to points in the identifiable case. The algorithm is very general: it works on semi-Markovian models with discrete variables and can compute any counterfactual. Moreover, it automatically determines whether a problem is feasible (the parameter region being nonempty), which is a necessary step to avoid yielding incorrect results. Systematic numerical experiments show the effectiveness and accuracy of the algorithm, while hinting at the benefits of integrating heterogeneous data to obtain informative bounds in the case of unidentifiability.
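
Schematically, and in our own notation, a likelihood for data pooled across mixed observational and interventional studies takes the form
\[
L(\theta) \;=\; \prod_{s=1}^{S}\ \prod_{i=1}^{n_s} p_{\theta}\bigl(v_{si}\,;\,\mathrm{do}(\sigma_s)\bigr),
\]
where study $s$ applies intervention regime $\sigma_s$ (with $\sigma_s = \varnothing$ for a purely observational study) and $p_{\theta}(\cdot\,;\,\mathrm{do}(\sigma_s))$ is the corresponding post-intervention distribution of the structural causal model with parameters $\theta$. The paper's specific characterisation and EM scheme are not reproduced here.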

Causal discovery and causal reasoning are classically treated as separate and consecutive tasks: one first infers the causal graph, and then uses it to estimate causal effects of interventions. However, such a two-stage approach is uneconomical, especially in terms of actively collected interventional data, since the causal query of interest may not require a fully-specified causal model. From a Bayesian perspective, it is also unnatural, since a causal query (e.g., the causal graph or some causal effect) can be viewed as a latent quantity subject to posterior inference -- other unobserved quantities that are not of direct interest (e.g., the full causal model) ought to be marginalized out in this process and contribute to our epistemic uncertainty. In this work, we propose Active Bayesian Causal Inference (ABCI), a fully-Bayesian active learning framework for integrated causal discovery and reasoning, which jointly infers a posterior over causal models and queries of interest. In our approach to ABCI, we focus on the class of causally-sufficient, nonlinear additive noise models, which we model using Gaussian processes. We sequentially design experiments that are maximally informative about our target causal query, collect the corresponding interventional data, and update our beliefs to choose the next experiment. Through simulations, we demonstrate that our approach is more data-efficient than several baselines that only focus on learning the full causal graph. This allows us to accurately learn downstream causal queries from fewer samples while providing well-calibrated uncertainty estimates for the quantities of interest.
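
A deliberately tiny, discrete illustration of the active-learning loop is given below: maintain a posterior over candidate causal structures, score each candidate intervention by its expected information gain about the query (here, the graph itself), and pick the best one. The two hand-specified binary structural models replace the paper's GP-based nonlinear additive noise models and are purely illustrative.

```python
# Expected-information-gain scoring of interventions over two candidate causal graphs.
# All mechanisms and numbers are illustrative assumptions.
import numpy as np

def entropy(p):
    p = np.clip(np.asarray(p, dtype=float), 1e-12, 1.0)
    return float(-(p * np.log(p)).sum())

# Hypothesis G1: X -> Y,  X ~ Bern(0.5),  P(Y=1|X) = 0.9 if X=1 else 0.1
# Hypothesis G2: Y -> X,  Y ~ Bern(0.5),  P(X=1|Y) = 0.7 if Y=1 else 0.3 (weaker mechanism)
# pred[experiment][hypothesis] = P(observed outcome = 1) under that hypothesis;
# intervening on a cause leaves the other variable at its marginal under the rival graph.
pred = {
    "do(X=1)": {"G1": 0.9, "G2": 0.5},
    "do(Y=1)": {"G1": 0.5, "G2": 0.7},
}
prior = {"G1": 0.5, "G2": 0.5}

def expected_info_gain(experiment, prior):
    h_prior = entropy(list(prior.values()))
    eig = 0.0
    for outcome in (1, 0):
        # joint weight of each hypothesis with this outcome, then the posterior
        joint = {g: prior[g] * (pred[experiment][g] if outcome == 1 else 1 - pred[experiment][g])
                 for g in prior}
        p_outcome = sum(joint.values())
        posterior = [joint[g] / p_outcome for g in prior]
        eig += p_outcome * (h_prior - entropy(posterior))
    return eig

for exp_name in pred:
    print(exp_name, "expected information gain:", round(expected_info_gain(exp_name, prior), 4))
```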
