
In this paper we obtain quantitative Bernstein-von Mises type bounds on the normal approximation of the posterior distribution in exponential family models when centering either around the posterior mode or around the maximum likelihood estimator. Our bounds, obtained through a version of Stein's method, are non-asymptotic and data-dependent; they are of the correct order both in the total variation and Wasserstein distances, as well as for approximations of expectations of smooth functions of the posterior. All our results are valid for univariate and multivariate posteriors alike, and do not require a conjugate prior setting. We illustrate our findings on a variety of exponential family distributions, including the Poisson, the multinomial, and the normal distribution with unknown mean and variance. The resulting bounds depend explicitly on the prior distribution and on the sufficient statistics of the sample, and thus provide insight into how these factors may affect the quality of the normal approximation. The performance of the bounds is also assessed with simulations.
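As a quick numerical illustration of the approximation these bounds control, the following sketch compares a conjugate Poisson posterior with its normal approximation centered at the MLE and reports the total variation gap; the Gamma prior, the sample size, and the plug-in asymptotic variance are our own choices, not the paper's.

```python
# Illustrative sketch (not the paper's bounds): compare a Poisson posterior
# under an assumed Gamma(a, b) prior with the Bernstein-von Mises normal
# approximation centered at the MLE.
import numpy as np
from scipy import stats
from scipy.integrate import quad

rng = np.random.default_rng(0)
theta_true, n = 3.0, 200
x = rng.poisson(theta_true, size=n)

a, b = 2.0, 1.0                                       # assumed prior hyperparameters
post = stats.gamma(a + x.sum(), scale=1.0 / (b + n))  # conjugate posterior

theta_mle = x.mean()                   # MLE of the Poisson mean
sd = np.sqrt(theta_mle / n)            # plug-in asymptotic standard deviation
approx = stats.norm(theta_mle, sd)

# Total variation distance: half the integral of the absolute density gap.
tv, _ = quad(lambda t: 0.5 * abs(post.pdf(t) - approx.pdf(t)), 1e-8, 20.0)
print(f"MLE = {theta_mle:.3f}, TV(posterior, normal) = {tv:.4f}")
```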

Related Content

The nonnegative garrote (NNG) is among the first approaches to combine variable selection with shrinkage of regression estimates. When more than the mere derivation of a predictor is of interest, NNG has some conceptual advantages over the popular lasso. Nevertheless, NNG has received little attention. The original NNG relies on ordinary least-squares (OLS) estimates, which are highly variable in data with a high degree of multicollinearity (HDM) and do not exist in high-dimensional data (HDD). This may explain why NNG is not used in such data. Alternative initial estimates have been proposed but are hardly used in practice. Analyzing three structurally different data sets, we demonstrate that NNG can also be applied in HDM and HDD, and we compare its performance with the lasso, the adaptive lasso, the relaxed lasso, and best subset selection in terms of variables selected, regression estimates, and prediction. Replacing OLS with ridge initial estimates in HDM and lasso initial estimates in HDD helped NNG select simpler models than competing approaches without much increase in prediction error. Simpler models are easier to interpret, an important issue in descriptive modelling. Based on the limited experience from three data sets, we suggest that NNG can be a suitable alternative to the lasso and its extensions. Neutral comparison simulation studies are needed to better understand the properties of variable selection methods, compare them, and derive guidance for practice.
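A minimal sketch of the garrote step with ridge initial estimates (one of the OLS replacements discussed above), on a simulated design with penalty values of our own choosing; the nonnegative shrinkage factors are obtained via a positivity-constrained lasso.

```python
# Nonnegative garrote sketch: scale columns by initial estimates, then fit
# nonnegative shrinkage factors c >= 0 with an L1 penalty.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge, Lasso

X, y = make_regression(n_samples=100, n_features=20, n_informative=5,
                       noise=5.0, random_state=0)

beta_init = Ridge(alpha=1.0).fit(X, y).coef_      # ridge initial estimates
Z = X * beta_init                                 # rescale columns by beta_init

# Garrote step: a lasso with a positivity constraint solves the penalized
# nonnegative problem for the shrinkage factors directly.
c = Lasso(alpha=0.5, positive=True).fit(Z, y).coef_
beta_nng = c * beta_init                          # final NNG coefficients

print("selected variables:", np.flatnonzero(beta_nng))
```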

Neyman (1923/1990) introduced the randomization model, which contains the notion of potential outcomes used to define causal effects and a framework for large-sample inference based on the design of the experiment. However, the existing theory for this framework is far from complete, especially when the number of treatment levels diverges and the group sizes vary substantially across treatment levels. We provide a unified discussion of statistical inference under the randomization model with general group sizes across treatment levels. We formulate the estimator in terms of a linear permutational statistic and use results based on Stein's method to derive various Berry--Esseen bounds on linear and quadratic functions of the estimator. These new Berry--Esseen bounds serve as the basis for design-based causal inference with possibly diverging treatment levels and a diverging dimension of causal effects. We also fill an important gap by proposing novel variance estimators for experiments with possibly many treatment levels and no replications. Equipped with the newly developed results, design-based causal inference in general settings becomes more convenient, with stronger theoretical guarantees.
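For concreteness, here is the classical replication-based Neyman estimator for pairwise contrasts across several treatment levels with unequal group sizes; the paper's novel variance estimators for experiments without replications are not reproduced here, and the simulated groups are our own.

```python
# Classical design-based (Neyman) point and variance estimates for pairwise
# contrasts across K treatment levels with unequal group sizes.
import numpy as np

rng = np.random.default_rng(1)
K = 4
groups = [rng.normal(loc=mu, scale=1.0, size=n)            # simulated outcomes
          for mu, n in zip([0.0, 0.5, 1.0, 1.5], [30, 10, 50, 20])]

means = np.array([g.mean() for g in groups])
vars_ = np.array([g.var(ddof=1) / len(g) for g in groups])  # s_k^2 / n_k

for k in range(K):
    for l in range(k + 1, K):
        tau = means[k] - means[l]                 # difference in means
        se = np.sqrt(vars_[k] + vars_[l])         # conservative Neyman SE
        print(f"tau({k},{l}) = {tau:+.3f}  (SE {se:.3f})")
```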

Predictive algorithms, such as deep neural networks (DNNs), are used in many domain sciences to directly estimate internal parameters of interest in simulator-based models, especially in settings where the observations include images or other complex high-dimensional data. In parallel, modern neural density estimators, such as normalizing flows, are becoming increasingly popular for uncertainty quantification, especially when both parameters and observations are high-dimensional. However, parameter inference is an inverse problem and not a prediction task; thus, an open challenge is to construct conditionally valid and precise confidence regions, with a guaranteed probability of covering the true parameters of the data-generating process, no matter what the (unknown) parameter values are, and without relying on large-sample theory. Many simulator-based inference (SBI) methods are indeed known to produce biased or overly confident parameter regions, yielding misleading uncertainty estimates. This paper presents WALDO, a novel method for constructing confidence regions with finite-sample conditional validity by leveraging the prediction algorithms or posterior estimators that are currently widely adopted in SBI. WALDO reframes the well-known Wald test statistic and uses computationally efficient regression-based machinery for the classical Neyman inversion of hypothesis tests. We apply our method to a recent high-energy physics problem, where prediction with DNNs had previously led to biased estimates. We also illustrate how our approach can correct overly confident posterior regions computed with normalizing flows.
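A hedged sketch of the reframed statistic, assuming (as the abstract suggests) that the MLE and Fisher information of the classical Wald statistic are replaced by a posterior mean and covariance estimated from draws, e.g. from a normalizing flow; the Neyman-inversion critical values, which the method estimates by regression, are omitted here.

```python
# Waldo-style test statistic computed from posterior samples: a Wald statistic
# with the posterior mean and posterior covariance in place of the MLE and
# Fisher information. Critical values would be estimated separately.
import numpy as np

def waldo_statistic(posterior_draws: np.ndarray, theta0: np.ndarray) -> float:
    """posterior_draws: (n_draws, d) samples of theta given the observed data."""
    mean = posterior_draws.mean(axis=0)
    cov = np.cov(posterior_draws, rowvar=False)
    diff = mean - theta0
    return float(diff @ np.linalg.solve(cov, diff))

rng = np.random.default_rng(2)
draws = rng.multivariate_normal([1.0, -0.5], [[0.04, 0.01], [0.01, 0.09]], 5000)
print(waldo_statistic(draws, theta0=np.array([1.0, -0.5])))  # small under H0
```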

The Unsplittable Flow on a Path (UFP) problem has garnered considerable attention as a challenging combinatorial optimization problem with notable practical implications. Motivated by its pivotal applications in power engineering, the present work formulates a novel generalization of UFP in which the demands and capacities of the input instance are monotone step functions over the set of edges. As an initial step towards tackling this generalization, we draw on and extend ideas from prior research to devise a quasi-polynomial time approximation scheme (QPTAS), under the premise that the demands and capacities lie in a quasi-polynomial range. Second, retaining the same assumption, we introduce an efficient logarithmic approximation for the single-source variant of the problem. Finally, we round out the contributions by designing a (kind of) black-box reduction that, under some mild conditions, allows one to translate LP-based approximation algorithms for the studied problem into counterparts for the Alternating Current Optimal Power Flow (AC OPF) problem -- a fundamental problem in the operation and control of power systems.
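As a point of reference for the LP-based algorithms mentioned above, the following sketch solves the LP relaxation of the classical UFP, with constant (non-step-function) demands and capacities, on a toy instance of our own making.

```python
# LP relaxation of classical UFP on a path: select tasks (fractionally) to
# maximize profit subject to per-edge capacity constraints.
import numpy as np
from scipy.optimize import linprog

n_edges = 5
capacity = np.array([10.0, 8.0, 8.0, 12.0, 6.0])

# Each task: (first edge, last edge, demand, profit) on the path 0..n_edges-1.
tasks = [(0, 2, 4.0, 5.0), (1, 4, 3.0, 4.0), (2, 3, 6.0, 7.0), (0, 4, 2.0, 3.0)]

A = np.zeros((n_edges, len(tasks)))   # A[e, i] = demand of task i on edge e
for i, (s, t, d, _) in enumerate(tasks):
    A[s:t + 1, i] = d

profits = np.array([w for *_, w in tasks])
res = linprog(-profits, A_ub=A, b_ub=capacity, bounds=[(0, 1)] * len(tasks))
print("fractional selection:", np.round(res.x, 3), " LP value:", -res.fun)
```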

Causal investigations in observational studies pose a great challenge in scientific research, where randomized trials or intervention-based studies are not feasible. Leveraging Shannon's seminal work on information theory, we develop a causal discovery framework based on "predictive asymmetry" for bivariate $(X, Y)$. Predictive asymmetry is a central concept in information-geometric causal inference; it enables assessment of whether $X$ is a stronger predictor of $Y$ or vice versa. We propose a new metric called the Asymmetric Mutual Information ($AMI$) and establish its key statistical properties. The $AMI$ not only detects complex non-linear association patterns in bivariate data but also quantifies predictive asymmetry. Our proposed methodology relies on scalable non-parametric density estimation using the fast Fourier transform. The resulting estimation method is many times faster than classical bandwidth-based density estimation while maintaining comparable mean integrated squared error rates. We investigate key asymptotic properties of the $AMI$ methodology; a new data-splitting technique is developed to make statistical inference on predictive asymmetry using the $AMI$. We illustrate the performance of the $AMI$ methodology through simulation studies as well as multiple real-data examples.
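The exact definition of the $AMI$ is given in the paper; as a stand-in, the toy sketch below illustrates the underlying idea of predictive asymmetry by comparing normalized conditional entropies estimated from a 2-D histogram (the estimator and example are our own, not the authors' method).

```python
# Toy predictive-asymmetry check (NOT the authors' AMI): compare H(Y|X)/H(Y)
# with H(X|Y)/H(X); a lower ratio means the conditioning variable predicts more.
import numpy as np

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

rng = np.random.default_rng(3)
x = rng.uniform(-2, 2, 20_000)
y = x**2 + rng.normal(scale=0.3, size=x.size)    # Y is a noisy function of X

pxy, _, _ = np.histogram2d(x, y, bins=30)
pxy = pxy / pxy.sum()
px, py = pxy.sum(axis=1), pxy.sum(axis=0)

h_y_given_x = entropy(pxy) - entropy(px)         # H(X,Y) - H(X)
h_x_given_y = entropy(pxy) - entropy(py)         # H(X,Y) - H(Y)
print(f"H(Y|X)/H(Y) = {h_y_given_x / entropy(py):.3f}, "
      f"H(X|Y)/H(X) = {h_x_given_y / entropy(px):.3f}")
```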

Bayesian nonparametric mixture models are common for modeling complex data. While these models are well-suited for density estimation, their application to clustering has some limitations. Miller and Harrison (2014) proved that, for Dirichlet process and Pitman--Yor process mixture models, the posterior on the number of clusters is inconsistent when the true number of clusters is finite. In this work, we extend this result to additional Bayesian nonparametric priors, such as Gibbs-type processes and finite-dimensional representations thereof. The latter include the Dirichlet multinomial process and the recently proposed Pitman--Yor and normalized generalized gamma multinomial processes. We show that mixture models based on these processes are also inconsistent in the number of clusters and discuss possible solutions. Notably, we show that a post-processing algorithm introduced by Guha et al. (2021) for the Dirichlet process extends to more general models and provides a consistent method to estimate the number of components.
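As a side illustration of one driver of this phenomenon, the sketch below simulates the Chinese restaurant process prior of a Dirichlet process mixture, whose number of clusters keeps growing (roughly as $\alpha \log n$); this simulates the prior only, not the posterior analysis in the paper.

```python
# Chinese restaurant process: the number of clusters under a DP prior grows
# with the sample size, roughly like alpha * log(n).
import numpy as np

rng = np.random.default_rng(4)
alpha = 1.0

for n in [100, 1_000, 10_000]:
    counts = []                                  # cluster sizes
    for _ in range(n):
        probs = np.array(counts + [alpha], dtype=float)
        k = rng.choice(len(probs), p=probs / probs.sum())
        if k == len(counts):
            counts.append(1)                     # open a new cluster
        else:
            counts[k] += 1
    print(f"n = {n:>6}: {len(counts)} clusters "
          f"(alpha*log n ~ {alpha * np.log(n):.1f})")
```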

Many applications in computational sciences and statistical inference require the computation of expectations with respect to complex high-dimensional distributions with unknown normalization constants, as well as the estimation of these constants. Here we develop a method to perform these calculations based on generating samples from a simple base distribution, transporting them along the flow generated by a velocity field, and performing averages along these flowlines. This non-equilibrium importance sampling (NEIS) strategy is straightforward to implement and can be used for calculations with arbitrary target distributions. On the theory side, we discuss how to tailor the velocity field to the target and establish general conditions under which the proposed estimator is a perfect estimator with zero variance. We also draw connections between NEIS and approaches based on mapping a base distribution onto a target via a transport map. On the computational side, we show how to use deep learning to represent the velocity field by a neural network and train it towards the zero-variance optimum. These results are illustrated numerically on benchmark examples (with dimension up to $10$), where, after training the velocity field, the variance of the NEIS estimator is reduced by up to $6$ orders of magnitude relative to that of a vanilla estimator. We also compare the performance of NEIS with that of Neal's annealed importance sampling (AIS).
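For contrast, here is the "vanilla" importance-sampling estimator of a normalization constant that NEIS is benchmarked against, with a standard normal base distribution and a deliberately mismatched Gaussian target of our own choosing (NEIS itself, which integrates along flowlines, is not reproduced here).

```python
# Vanilla importance sampling for Z = \int exp(-U(x)) dx with base rho_0 = N(0,1).
# The base-target mismatch inflates the variance, which is what NEIS reduces.
import numpy as np
from scipy import stats

def U(x):                                        # unnormalized Gaussian target
    return 0.5 * ((x - 2.0) / 0.5) ** 2          # true Z = 0.5 * sqrt(2*pi)

rng = np.random.default_rng(5)
x = rng.standard_normal(100_000)                 # samples from the base rho_0
w = np.exp(-U(x)) / stats.norm.pdf(x)            # importance weights
print(f"Z_hat = {w.mean():.4f} +/- {w.std(ddof=1) / np.sqrt(x.size):.4f}  "
      f"(true Z = {0.5 * np.sqrt(2 * np.pi):.4f})")
```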

We develop a general method to study the Fisher information distance in the central limit theorem for nonlinear statistics. We first construct completely new representations for the score function. We then use these representations to derive quantitative estimates for the Fisher information distance. To illustrate the applicability of our approach, explicit rates of Fisher information convergence are provided for quadratic forms and functions of sample means. For sums of independent random variables, we obtain Fisher information bounds without requiring finiteness of the Poincar\'e constant. Our method can also be used to bound the Fisher information distance in non-central limit theorems.
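For readers unfamiliar with the distance in question, the standard definitions are as follows (the paper's contribution is new representations for the score $\rho_F$ below).

```latex
% For a standardized statistic $F$ with smooth density $f$ and $Z \sim \mathcal N(0,1)$:
\[
  \rho_F(x) = \frac{f'(x)}{f(x)}, \qquad
  J(F) = \mathbb{E}\bigl[\bigl(\rho_F(F) + F\bigr)^2\bigr].
\]
% $J(F)$ is the Fisher information distance from $F$ to $Z$: since the score of
% $Z$ is $\rho_Z(x) = -x$, $J(F)$ vanishes if and only if $F$ is standard normal.
```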

We derive information-theoretic generalization bounds for supervised learning algorithms based on the information contained in predictions rather than in the output of the training algorithm. These bounds improve on existing information-theoretic bounds, are applicable to a wider range of algorithms, and address two key challenges: (a) they give meaningful results for deterministic algorithms and (b) they are significantly easier to estimate. We show experimentally that the proposed bounds closely follow the generalization gap in practical scenarios for deep learning.
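For context, the classical input-output bound that prediction-based bounds improve on is the Xu-Raginsky (2017) inequality, stated below under the usual subgaussian-loss assumption.

```latex
% For a training set $S$ of $n$ i.i.d.\ examples, learned weights $W$, and a
% $\sigma$-subgaussian loss, the expected generalization gap satisfies
\[
  \bigl|\mathbb{E}[\mathrm{gen}(W, S)]\bigr|
  \le \sqrt{\frac{2\sigma^2}{n}\, I(W; S)}.
\]
% Prediction-based bounds replace $I(W; S)$, which can be infinite for
% deterministic algorithms, with information measured through the predictions.
```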

As soon as abstract mathematical computations were adapted to computation on digital computers, the problem of efficiently representing, manipulating, and communicating the numerical values in those computations arose. Strongly related to the problem of numerical representation is the problem of quantization: in what manner should a set of continuous real-valued numbers be distributed over a fixed discrete set of numbers so as to minimize the number of bits required while maximizing the accuracy of the attendant computations? This perennial problem of quantization is particularly relevant whenever memory and/or computational resources are severely restricted, and it has come to the forefront in recent years due to the remarkable performance of Neural Network models in computer vision, natural language processing, and related areas. Moving from floating-point representations to low-precision fixed integer values represented in four bits or less holds the potential to reduce the memory footprint and latency by a factor of 16; in fact, reductions of 4x to 8x are often realized in practice in these applications. Thus, it is not surprising that quantization has emerged recently as an important and very active sub-area of research in the efficient implementation of computations associated with Neural Networks. In this article, we survey approaches to the problem of quantizing the numerical values in deep Neural Network computations, covering the advantages and disadvantages of current methods. With this survey and its organization, we hope to present a useful snapshot of current research in quantization for Neural Networks and an organization that eases the evaluation of future research in this area.
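As a concrete instance of the basic scheme underlying most surveyed methods, here is a minimal sketch of uniform affine (asymmetric) quantization to b bits; the tensor values and bit-width are our own.

```python
# Uniform affine quantization: map real values to b-bit integers via a scale
# and zero point, then dequantize to measure the rounding error.
import numpy as np

def quantize(x: np.ndarray, bits: int = 4):
    qmax = 2**bits - 1
    scale = (x.max() - x.min()) / qmax           # real-valued step size
    zero_point = np.round(-x.min() / scale)      # integer mapped to real 0.0
    q = np.clip(np.round(x / scale) + zero_point, 0, qmax).astype(np.uint8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    return scale * (q.astype(np.float32) - zero_point)

x = np.random.default_rng(6).normal(size=1000).astype(np.float32)
q, s, z = quantize(x, bits=4)
print("max abs error:", np.abs(x - dequantize(q, s, z)).max())
```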
