
Comparison and contrast are the basic means to unveil causation and learn which treatments work. To build good comparisons that isolate the average effect of treatment from confounding factors, randomization is key, yet often infeasible. In such non-experimental settings, we illustrate and discuss how well the common linear regression approach to causal inference approximates features of randomized experiments, such as covariate balance, study representativeness, sample-grounded estimation, and unweighted analyses. We also discuss alternative regression modeling, weighting, and matching approaches. We argue they should be given strong consideration in empirical work.
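As a concrete illustration of these ideas (not drawn from the paper itself), the following sketch uses simulated data to contrast a naive difference in means with a regression-adjusted estimate of an average treatment effect, and reports standardized mean differences as a simple covariate-balance check; all variable names and numbers are hypothetical.

```python
# Minimal sketch (hypothetical simulated data): naive vs. regression-adjusted
# treatment-effect estimates, plus a standardized-mean-difference balance check.
import numpy as np

rng = np.random.default_rng(0)
n = 5000
x = rng.normal(size=(n, 2))                        # observed confounders
p = 1 / (1 + np.exp(-(x[:, 0] + 0.5 * x[:, 1])))   # treatment depends on x
t = rng.binomial(1, p)                             # treatment indicator
y = 1.0 * t + x @ np.array([2.0, -1.0]) + rng.normal(size=n)   # true effect = 1

# Naive comparison ignores confounding.
naive = y[t == 1].mean() - y[t == 0].mean()

# Regression adjustment: OLS of y on treatment and covariates.
X = np.column_stack([np.ones(n), t, x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
adjusted = beta[1]

# Covariate balance: standardized mean differences between treatment groups.
smd = (x[t == 1].mean(0) - x[t == 0].mean(0)) / x.std(0)

print(f"naive: {naive:.2f}, regression-adjusted: {adjusted:.2f}, SMDs: {smd.round(2)}")
```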


Linear regression is a statistical method, based on regression analysis from mathematical statistics, for determining the quantitative interdependence between two or more variables; it is very widely used. It is expressed as $y = w'x + e$, where the error $e$ follows a normal distribution with mean 0.
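A minimal sketch of fitting the model $y = w'x + e$ by ordinary least squares on synthetic data (all names and values are illustrative):

```python
# Minimal sketch of fitting y = w'x + e by ordinary least squares (synthetic data).
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))                        # predictors x
w_true = np.array([1.5, -2.0, 0.5])
y = X @ w_true + rng.normal(scale=1.0, size=200)     # e ~ N(0, 1)

X1 = np.column_stack([np.ones(len(X)), X])           # add an intercept column
coef, *_ = np.linalg.lstsq(X1, y, rcond=None)
print("intercept and weights:", coef.round(2))
```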


High-dimensional spectral data -- routinely generated in dairy production -- are used to predict a range of traits in milk products. Partial least squares regression (PLSR) is ubiquitously used for these prediction tasks. However, PLSR is not typically viewed as arising from statistical inference of a probabilistic model, and parameter uncertainty is rarely quantified. Additionally, PLSR does not easily lend itself to model-based modifications, coherent prediction intervals are not readily available, and the process of choosing the latent-space dimension, $\mathtt{Q}$, can be subjective and sensitive to data size. We introduce a Bayesian latent-variable model, emulating the desirable properties of PLSR while accounting for parameter uncertainty. The need to choose $\mathtt{Q}$ is eschewed through a nonparametric shrinkage prior. The flexibility of the proposed Bayesian partial least squares regression (BPLSR) framework is exemplified by considering sparsity modifications and allowing for multivariate response prediction. The BPLSR framework is used in two motivating settings: 1) trait prediction from mid-infrared spectral analyses of milk samples, and 2) milk pH prediction from surface-enhanced Raman spectral data. The prediction performance of BPLSR at least matches that of PLSR. Additionally, the provision of correctly calibrated prediction intervals objectively provides richer, more informative inference for stakeholders in dairy production.
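The BPLSR model itself is not reproduced here; as a point of reference, the sketch below fits classical PLSR with scikit-learn on simulated spectra and selects the latent dimension $\mathtt{Q}$ by cross-validation, the step the abstract notes can be subjective (data and names are hypothetical).

```python
# Minimal sketch: classical PLSR on simulated "spectral" data, with the latent
# dimension Q chosen by cross-validation (the step BPLSR avoids via a shrinkage prior).
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n, p = 300, 200                               # many wavelengths, fewer samples
scores = rng.normal(size=(n, 3))              # low-dimensional latent structure
X = scores @ rng.normal(size=(3, p)) + 0.1 * rng.normal(size=(n, p))
y = scores @ np.array([1.0, -0.5, 2.0]) + 0.1 * rng.normal(size=n)

# Cross-validated R^2 for each candidate latent dimension Q.
cv_scores = {q: cross_val_score(PLSRegression(n_components=q), X, y, cv=5).mean()
             for q in range(1, 11)}
best_q = max(cv_scores, key=cv_scores.get)
print("selected Q:", best_q)
pls = PLSRegression(n_components=best_q).fit(X, y)
```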

Approximating significance scans of searches for new particles in high-energy physics experiments as Gaussian fields is a well-established way to estimate the trials factors required to quantify global significances. We propose a novel, highly efficient method to estimate the covariance matrix of such a Gaussian field. The method is based on the linear approximation of statistical fluctuations of the signal amplitude. For one-dimensional searches, the upper bound on the trials factor can then be calculated directly from the covariance matrix. For higher dimensions, the Gaussian process described by this covariance matrix may be sampled to calculate the trials factor directly. This method also serves as the theoretical basis for a recent study of the trials factor with an empirically constructed set of Asimov-like background datasets. We illustrate the method with studies of an $H \rightarrow \gamma \gamma$-inspired model that was used in the empirical paper.
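A toy sketch of the sampling step described above: given a covariance matrix for the significance field over the scan, draw Gaussian-field realizations and estimate the trials factor as the ratio of global to local tail probabilities. The covariance used here is a made-up stationary kernel, not the linear-approximation construction of the paper.

```python
# Toy sketch: estimate a trials factor by sampling a Gaussian significance field
# from a given covariance matrix (a made-up stationary kernel for illustration).
import numpy as np
from scipy.stats import norm

grid = np.linspace(0.0, 1.0, 100)                     # scan positions
cov = np.exp(-0.5 * ((grid[:, None] - grid[None, :]) / 0.05) ** 2)
cov += 1e-9 * np.eye(len(grid))                       # numerical jitter for sampling

rng = np.random.default_rng(3)
fields = rng.multivariate_normal(np.zeros(len(grid)), cov, size=20000)

z_local = 3.0                                         # local significance of interest
p_local = norm.sf(z_local)                            # local p-value
p_global = (fields.max(axis=1) > z_local).mean()      # probability of an excess anywhere
print("estimated trials factor:", p_global / p_local)
```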

In this work we study systems consisting of a group of moving particles. In such systems, some important parameters are often unknown and have to be estimated from observed data. Such parameter estimation problems can often be solved via a Bayesian inference framework. However, in many practical problems only data at the aggregate level are available, and as a result the likelihood function is not available, which poses a challenge for Bayesian methods. In particular, we consider the situation where the distributions of the particles are observed. We propose a Wasserstein distance based sequential Monte Carlo sampler to solve the problem: the Wasserstein distance is used to measure the similarity between the observed and the simulated particle distributions, and the sequential Monte Carlo sampler is used to deal with the sequentially available observations. Two real-world examples are provided to demonstrate the performance of the proposed method.
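The full sequential Monte Carlo sampler is not reproduced here; the sketch below conveys the core idea in a simplified ABC-rejection form, using the one-dimensional Wasserstein distance between observed and simulated particle distributions as the discrepancy (the particle model and all names are hypothetical).

```python
# Simplified sketch of likelihood-free inference with a Wasserstein discrepancy:
# ABC rejection rather than the full sequential Monte Carlo sampler of the paper.
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(4)

def simulate_particles(speed, n=500):
    """Hypothetical particle model: positions after one time step."""
    return speed + 0.5 * rng.normal(size=n)

observed = simulate_particles(speed=2.0)                # aggregate-level observation

candidates = rng.uniform(0.0, 5.0, size=5000)           # prior draws for the speed
dists = np.array([wasserstein_distance(observed, simulate_particles(s))
                  for s in candidates])
accepted = candidates[dists < np.quantile(dists, 0.01)] # keep the closest simulations
print("posterior mean of speed (approx.):", accepted.mean().round(2))
```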

Prior knowledge and symbolic rules in machine learning are often expressed in the form of label constraints, especially in structured prediction problems. In this work, we compare two common strategies for encoding label constraints in a machine learning pipeline, regularization with constraints and constrained inference, by quantifying their impact on model performance. For regularization, we show that it narrows the generalization gap by precluding models that are inconsistent with the constraints. However, its preference for small violations introduces a bias toward a suboptimal model. For constrained inference, we show that it reduces the population risk by correcting a model's violation, and hence turns the violation into an advantage. Given these differences, we further explore using the two approaches together and propose conditions under which constrained inference compensates for the bias introduced by regularization, aiming to improve both model complexity and optimal risk.
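A toy numerical contrast of the two strategies (all numbers are made up): regularization adds a penalty on constraint violations to the training loss, while constrained inference repairs violations at prediction time.

```python
# Toy contrast (made-up numbers) of the two strategies for a single implication
# constraint "label A implies label B" on predicted probabilities.
import numpy as np

p = np.array([[0.9, 0.3],    # violates A => B
              [0.2, 0.8],    # consistent
              [0.7, 0.6]])   # mild violation
base_loss = 0.5              # stand-in for the usual training loss

# Strategy 1: regularization with constraints -- penalize violations during training.
violation = np.maximum(p[:, 0] - p[:, 1], 0.0)   # amount by which A exceeds B
lam = 1.0
regularized_loss = base_loss + lam * violation.mean()

# Strategy 2: constrained inference -- repair violations at prediction time by
# projecting onto the set {p_A <= p_B} (both probabilities set to their average).
repaired = p.copy()
mask = repaired[:, 0] > repaired[:, 1]
repaired[mask] = repaired[mask].mean(axis=1, keepdims=True)

print("penalized loss:", round(regularized_loss, 3))
print("repaired predictions:\n", repaired.round(2))
```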

In this study, a density-on-density regression model is introduced, where the association between densities is elucidated via a warping function. The proposed model has the advantage of providing a straightforward demonstration of how one density transforms into another. Using the Riemannian representation of density functions, namely the square-root function (or half density), the model is defined on the correspondingly constructed Riemannian manifold. To estimate the warping function, it is proposed to minimize the average Hellinger distance, which is equivalent to minimizing the average Fisher-Rao distance between densities. An optimization algorithm is introduced by estimating the smooth monotone transformation of the warping function. Asymptotic properties of the proposed estimator are discussed. Simulation studies demonstrate the superior performance of the proposed approach over competing approaches in predicting outcome density functions. Applied to a proteomic-imaging study from the Alzheimer's Disease Neuroimaging Initiative, the proposed approach illustrates the connection between the distribution of protein abundance in the cerebrospinal fluid and the distribution of brain regional volume. Discrepancies among cognitively normal subjects, patients with mild cognitive impairment, and patients with Alzheimer's disease (AD) are identified, and the findings are in line with existing knowledge about AD.
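A small numerical sketch of the geometric ingredients on hypothetical densities: the square-root (half-density) representation, a simple warping of the domain, and the Hellinger distance that the warping function is chosen to minimize on average.

```python
# Sketch of the geometric ingredients: a simple domain warping of a density and
# the Hellinger distance computed via the square-root (half-density) representation.
import numpy as np

t = np.linspace(0.0, 1.0, 1001)
dt = t[1] - t[0]
f = 2.0 * t                                           # a density on [0, 1]
gamma = t ** 1.5                                      # a hypothetical warping function
g = np.interp(gamma, t, f) * np.gradient(gamma, t)    # warped density (f o gamma) * gamma'

def hellinger(p, q):
    """Hellinger distance via the square-root representation (uniform grid)."""
    bc = np.sum(np.sqrt(p * q)) * dt                  # Bhattacharyya coefficient
    return np.sqrt(max(1.0 - bc, 0.0))

print("Hellinger(f, warped f):", round(hellinger(f, g), 3))
```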

Batch active learning is a popular approach for efficiently training machine learning models on large, initially unlabelled datasets by repeatedly acquiring labels for batches of data points. However, many recent batch active learning methods are white-box approaches and are often limited to differentiable parametric models: they score unlabeled points using acquisition functions based on model embeddings or first- and second-order derivatives. In this paper, we propose black-box batch active learning for regression tasks as an extension of white-box approaches. Crucially, our method only relies on model predictions. This approach is compatible with a wide range of machine learning models, including regular and Bayesian deep learning models and non-differentiable models such as random forests. It is rooted in Bayesian principles and utilizes recent kernel-based approaches. This allows us to extend a wide range of existing state-of-the-art white-box batch active learning methods (BADGE, BAIT, LCMD) to black-box models. We demonstrate the effectiveness of our approach through extensive experimental evaluations on regression datasets, achieving surprisingly strong performance compared to white-box approaches for deep learning models.
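The BADGE, BAIT, and LCMD extensions are not reproduced here; the sketch below illustrates the black-box principle with a simpler acquisition score that, like the proposed method, relies only on model predictions: the disagreement among per-tree predictions of a random forest (data and names are hypothetical).

```python
# Minimal sketch of black-box batch acquisition using only model predictions:
# score unlabelled points by the spread of per-tree predictions of a random forest
# (a simpler heuristic than BADGE/BAIT/LCMD, which are not reproduced here).
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(5)
X_lab = rng.uniform(-3, 3, size=(50, 1))
y_lab = np.sin(X_lab[:, 0]) + 0.1 * rng.normal(size=50)
X_pool = rng.uniform(-3, 3, size=(1000, 1))             # unlabelled pool

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_lab, y_lab)
per_tree = np.stack([tree.predict(X_pool) for tree in model.estimators_])
scores = per_tree.std(axis=0)                            # prediction disagreement

batch_idx = np.argsort(scores)[-10:]                     # acquire the 10 highest-scoring points
print("acquired x values:", np.sort(X_pool[batch_idx, 0]).round(2))
```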

Polytomous categorical data are frequent in studies and can be obtained with an individual or a grouped structure. In both structures, the generalized logit model is commonly used to relate the covariates to the response variable. After fitting a model, one of the challenges is defining an appropriate residual and choosing suitable diagnostic techniques. Since the polytomous variable is multivariate, raw, Pearson, or deviance residuals are vectors and their asymptotic distribution is generally unknown, which leads to difficulties in graphical visualization and interpretation. Therefore, the definition of appropriate residuals and the choice of the correct diagnostic analysis are important, especially for nominal data, where the available methods are more restricted. This paper proposes the use of randomized quantile residuals associated with individual and grouped nominal data, as well as Euclidean and Mahalanobis distance measures, as an alternative to reduce the dimension of the residuals. We developed simulation studies with both data structures. Half-normal plots with simulation envelopes were used to assess model performance. These studies demonstrated good performance of the quantile residuals, and the distance measures allowed a better interpretation of the graphical techniques. We illustrate the proposed procedures with two applications to real data.
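The sketch below illustrates randomized quantile residuals in the simplest grouped setting, binomial counts per group; the multinomial (nominal) case and the Euclidean/Mahalanobis dimension reduction proposed in the paper are not shown.

```python
# Sketch of randomized quantile residuals for grouped binary data (binomial counts);
# the multinomial case and the distance-based dimension reduction are omitted.
import numpy as np
from scipy.stats import binom, norm

rng = np.random.default_rng(6)
m = np.full(50, 30)                          # group sizes
x = rng.normal(size=50)
p = 1 / (1 + np.exp(-(0.5 + x)))             # fitted success probabilities (assumed given)
y = rng.binomial(m, p)                       # observed counts

# Randomized quantile residual: draw u uniformly between F(y-1) and F(y),
# then map through the standard normal quantile function.
lower = binom.cdf(y - 1, m, p)
upper = binom.cdf(y, m, p)
u = rng.uniform(lower, upper)
r = norm.ppf(u)

print("residual mean/sd (should be near 0 and 1):", r.mean().round(2), r.std().round(2))
```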

The expected shortfall is defined as the average over the tail below (or above) a certain quantile of a probability distribution. Expected shortfall regression provides powerful tools for learning the relationship between a response variable and a set of covariates while exploring the heterogeneous effects of the covariates. In health disparity research, for example, the lower/upper tail of the conditional distribution of a health-related outcome, given high-dimensional covariates, is often of importance. Under sparse models, we propose the lasso-penalized expected shortfall regression and establish non-asymptotic error bounds for the proposed estimator that depend explicitly on the sample size, dimension, and sparsity. To perform statistical inference on a covariate of interest, we propose a debiased estimator and establish its asymptotic normality, from which asymptotically valid tests can be constructed. We illustrate the finite-sample performance of the proposed method through numerical studies and a data application on health disparity.
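The paper's joint penalized estimator and debiased inference are not reproduced here; the sketch below uses a common two-step surrogate on simulated data: fit a quantile regression at the chosen tail level, then lasso-regress a tail-adjusted response whose conditional mean is the expected shortfall (all names and tuning values are illustrative).

```python
# Sketch of a two-step surrogate for lower-tail expected shortfall regression:
# (1) fit a quantile regression at level tau, (2) lasso-regress a tail-adjusted
# response. The paper's joint penalized estimator and debiasing are not shown.
import numpy as np
from sklearn.linear_model import QuantileRegressor, Lasso

rng = np.random.default_rng(7)
n, d, tau = 2000, 20, 0.1
X = rng.normal(size=(n, d))
y = X[:, 0] - X[:, 1] + (1 + 0.5 * np.abs(X[:, 0])) * rng.normal(size=n)  # heteroscedastic

q_fit = QuantileRegressor(quantile=tau, alpha=1e-4, solver="highs").fit(X, y)
q_hat = q_fit.predict(X)

# Tail-adjusted response whose conditional mean equals the tau-level expected shortfall.
z = q_hat + (y - q_hat) * (y <= q_hat) / tau
es_fit = Lasso(alpha=0.05).fit(X, z)
print("indices of nonzero ES coefficients:", np.flatnonzero(es_fit.coef_))
```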

Gaussian graphical models are graphs that represent the conditional relationships among multivariate normal variables. The process of uncovering the structure of these graphs is known as structure learning. Despite the fact that Bayesian methods in structure learning offer intuitive and well-founded ways to measure model uncertainty and integrate prior information, frequentist methods are often preferred due to the computational burden of the Bayesian approach. Over the last decade, Bayesian methods have seen substantial improvements, with some now capable of generating accurate estimates of graphs up to a thousand variables in mere minutes. Despite these advancements, a comprehensive review or empirical comparison of all cutting-edge methods has not been conducted. This paper delves into a wide spectrum of Bayesian approaches used in structure learning, evaluates their efficacy through a simulation study, and provides directions for future research. This study gives an exhaustive overview of this dynamic field for both newcomers and experts.
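None of the Bayesian samplers reviewed in the paper are reproduced here; purely as a point of reference for what structure learning returns, the sketch below recovers a sparse conditional-independence graph with the frequentist graphical lasso (simulated data, hypothetical settings).

```python
# Point-of-reference sketch (not one of the Bayesian methods reviewed): recover a
# sparse Gaussian graphical model structure with the frequentist graphical lasso.
import numpy as np
from sklearn.covariance import GraphicalLassoCV

rng = np.random.default_rng(8)
# Chain-structured true precision matrix over 5 variables.
prec = np.eye(5) + np.diag([0.4] * 4, k=1) + np.diag([0.4] * 4, k=-1)
cov = np.linalg.inv(prec)
X = rng.multivariate_normal(np.zeros(5), cov, size=2000)

model = GraphicalLassoCV().fit(X)
# Edges correspond to nonzero off-diagonal entries of the estimated precision matrix.
edges = (np.abs(model.precision_) > 1e-3) & ~np.eye(5, dtype=bool)
print("estimated adjacency:\n", edges.astype(int))
```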

This paper focuses on the expected difference in a borrower's repayment when there is a change in the lender's credit decisions. Classical estimators overlook confounding effects, and hence the estimation error can be considerable. As such, we propose another approach to constructing the estimators such that the error can be greatly reduced. The proposed estimators are shown to be unbiased, consistent, and robust through a combination of theoretical analysis and numerical testing. Moreover, we compare the power of the classical and the proposed estimators in estimating the causal quantities. The comparison is tested across a wide range of models, including linear regression models, tree-based models, and neural network-based models, under different simulated datasets that exhibit different levels of causality, different degrees of nonlinearity, and different distributional properties. Most importantly, we apply our approaches to a large observational dataset provided by a global technology firm that operates in both the e-commerce and the lending business. We find that the relative reduction of estimation error is strikingly substantial if the causal effects are accounted for correctly.
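The abstract does not specify the proposed estimators, so the sketch below is only a generic illustration of the underlying issue: on simulated, confounded lending data, a naive comparison is biased while an inverse-propensity-weighted estimate recovers the true effect (all variable names and numbers are hypothetical).

```python
# Hypothetical illustration (not the paper's own estimators): naive vs.
# inverse-propensity-weighted estimates of a credit decision's effect on repayment,
# using simulated confounded data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(9)
n = 20000
risk = rng.normal(size=n)                              # confounder: borrower risk score
p_limit = 1 / (1 + np.exp(risk))                       # safer borrowers get higher limits
higher_limit = rng.binomial(1, p_limit)
repay = 0.5 * higher_limit - 1.0 * risk + rng.normal(size=n)   # true effect = 0.5

naive = repay[higher_limit == 1].mean() - repay[higher_limit == 0].mean()

ps_model = LogisticRegression().fit(risk.reshape(-1, 1), higher_limit)
ps = ps_model.predict_proba(risk.reshape(-1, 1))[:, 1]         # estimated propensity scores
w = higher_limit / ps + (1 - higher_limit) / (1 - ps)
ipw = (np.sum(w * repay * higher_limit) / np.sum(w * higher_limit)
       - np.sum(w * repay * (1 - higher_limit)) / np.sum(w * (1 - higher_limit)))
print(f"naive: {naive:.2f}, IPW-adjusted: {ipw:.2f} (true effect 0.5)")
```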
