
When are inferences (whether Direct-Likelihood, Bayesian, or Frequentist) obtained from partial data valid? This paper answers this question by offering a new theory about inference with missing data. It proves that, as the sample size increases and the extent of missingness decreases, the mean log-likelihood function generated from partial data while ignoring the missingness mechanism almost surely converges uniformly to the one that would have been generated by complete data; and if the data are Missing at Random, this convergence depends only on sample size. Thus, inferences on partial data, such as posterior modes, uncertainty estimates, confidence intervals, likelihood ratios, and indeed all quantities or features derived from the partial-data log-likelihood function, will approximate their true values (what they would have been given complete data). This adds to previous research, which has only proved the consistency of the posterior mode. Practical implications of this result are discussed, and the theory is tested on a previous study of International Human Rights Law.
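In symbols (notation assumed here for illustration, not taken from the paper): writing $\ell_n^{\mathrm{obs}}(\theta)$ for the partial-data log-likelihood that ignores the missingness mechanism and $\ell_n^{\mathrm{full}}(\theta)$ for the log-likelihood that complete data would have produced, the claimed convergence is of the form

$$
\sup_{\theta \in \Theta} \left| \tfrac{1}{n}\,\ell_n^{\mathrm{obs}}(\theta) - \tfrac{1}{n}\,\ell_n^{\mathrm{full}}(\theta) \right| \xrightarrow{\ \mathrm{a.s.}\ } 0
$$

as the sample size $n$ grows and the fraction of missing values shrinks, with the Missing-at-Random case requiring only $n \to \infty$.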

Related Content

The question whether inputs are valid for the problem a neural network is trying to solve has sparked interest in out-of-distribution (OOD) detection. It is widely assumed that Bayesian neural networks (BNNs) are well suited for this task, as the endowed epistemic uncertainty should lead to disagreement in predictions on outliers. In this paper, we question this assumption and show that proper Bayesian inference with function space priors induced by neural networks does not necessarily lead to good OOD detection. To circumvent the use of approximate inference, we start by studying the infinite-width case, where Bayesian inference can be exact due to the correspondence with Gaussian processes. Strikingly, the kernels derived from common architectural choices lead to function space priors which induce predictive uncertainties that do not reflect the underlying input data distribution and are therefore unsuited for OOD detection. Importantly, we find the OOD behavior in this limiting case to be consistent with the corresponding finite-width case. To overcome this limitation, useful function space properties can also be encoded in the prior in weight space; however, this can currently only be applied to a specified subset of the domain and thus does not inherently extend to OOD data. Finally, we argue that a trade-off between generalization and OOD capabilities might render the application of BNNs for OOD detection undesirable in practice. Overall, our study discloses fundamental problems when naively using BNNs for OOD detection and opens interesting avenues for future research.
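As a rough illustration of the infinite-width setup (a minimal sketch on synthetic data, assuming a one-hidden-layer ReLU network whose NNGP kernel is the order-1 arc-cosine kernel, up to weight-scaling constants; this is not the paper's experimental protocol), one can probe how the exact GP predictive variance behaves at in- versus out-of-distribution inputs:

```python
import numpy as np

def arccos_kernel(X, Z):
    """Order-1 arc-cosine kernel: NNGP kernel of a one-hidden-layer ReLU network
    (up to weight-variance constants)."""
    nx = np.linalg.norm(X, axis=1)[:, None]
    nz = np.linalg.norm(Z, axis=1)[None, :]
    cos = np.clip(X @ Z.T / (nx * nz), -1.0, 1.0)
    theta = np.arccos(cos)
    return (nx * nz / np.pi) * (np.sin(theta) + (np.pi - theta) * np.cos(theta))

rng = np.random.default_rng(0)
X_train = rng.normal(size=(100, 5))        # "in-distribution" training inputs
y_train = np.sin(X_train[:, 0])            # arbitrary regression targets
X_in = rng.normal(size=(5, 5))             # test points from the same distribution
X_out = 20.0 * rng.normal(size=(5, 5))     # far-away "OOD" test points

noise = 1e-2
K = arccos_kernel(X_train, X_train) + noise * np.eye(len(X_train))
K_inv = np.linalg.inv(K)

def predictive_var(X_test):
    """Exact GP posterior predictive variance; it depends only on the kernel
    geometry of the test point, not on the density of the input distribution."""
    Ks = arccos_kernel(X_train, X_test)
    Kss = np.diag(arccos_kernel(X_test, X_test))
    return Kss - np.sum(Ks * (K_inv @ Ks), axis=0)

print("in-distribution variances :", predictive_var(X_in))
print("out-of-distribution variances:", predictive_var(X_out))
```

The sketch only shows how such predictive uncertainties are computed; whether they separate OOD from in-distribution inputs depends entirely on the induced kernel, which is the paper's point.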

The premise of independence among subjects in the same cluster/group often fails in practice, and models that rely on such an untenable assumption can produce misleading results. To overcome this severe deficiency, we introduce a new regression model to handle overdispersed and correlated clustered counts. To account for correlation within clusters, we propose a Poisson regression model where the observations within the same cluster are driven by the same latent random effect that follows the Birnbaum-Saunders distribution with a parameter that controls the strength of dependence among the individuals. This novel multivariate count model is called Clustered Poisson Birnbaum-Saunders (CPBS) regression. As illustrated in this paper, the CPBS model is analytically tractable, and its moment structure can be explicitly obtained. Estimation of parameters is performed through the maximum likelihood method, and an Expectation-Maximization (EM) algorithm is also developed. Simulation results to evaluate the finite-sample performance of our proposed estimators are presented. We also discuss diagnostic tools for checking model adequacy. An empirical application concerning the number of inpatient admissions by individuals to hospital emergency rooms, from the Medical Expenditure Panel Survey (MEPS) conducted by the United States Agency for Health Research and Quality, illustrates the usefulness of our proposed methodology.
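A hedged sketch of the hierarchical structure described above (notation and parameterization assumed here for illustration; the paper's exact specification may differ): for member $j$ of cluster $i$,

$$
Y_{ij} \mid Z_i \overset{\mathrm{ind}}{\sim} \mathrm{Poisson}\!\left(Z_i \exp(\mathbf{x}_{ij}^{\top}\boldsymbol{\beta})\right),
\qquad
Z_i \overset{\mathrm{iid}}{\sim} \mathrm{BS}(\alpha),
$$

with the Birnbaum-Saunders scale fixed for identifiability, so that all counts in cluster $i$ share the latent multiplicative effect $Z_i$, whose shape parameter $\alpha$ governs both the overdispersion and the strength of within-cluster dependence.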

This paper studies the problem of statistical inference for genetic relatedness between binary traits based on individual-level genome-wide association data. Specifically, under the high-dimensional logistic regression model, we define parameters characterizing the cross-trait genetic correlation, the genetic covariance and the trait-specific genetic variance. A novel weighted debiasing method is developed for the logistic Lasso estimator and computationally efficient debiased estimators are proposed. The rates of convergence for these estimators are studied and their asymptotic normality is established under mild conditions. Moreover, we construct confidence intervals and statistical tests for these parameters, and provide theoretical justifications for the methods, including the coverage probability and expected length of the confidence intervals, as well as the size and power of the proposed tests. Numerical studies are conducted under both model-generated data and simulated genetic data to show the superiority of the proposed methods and their applicability to the analysis of real genetic data. Finally, by analyzing a real data set on autoimmune diseases, we demonstrate the ability to obtain novel insights about the shared genetic architecture between ten pediatric autoimmune diseases.
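For concreteness, one common way such parameters are formalized (assumed here for illustration; the paper's exact definitions may differ in weighting or normalization): with regression coefficient vectors $\boldsymbol{\beta}$ and $\boldsymbol{\gamma}$ for the two binary traits and covariance matrix $\boldsymbol{\Sigma}$ of the genetic variants,

$$
\mathrm{GCov} = \boldsymbol{\beta}^{\top}\boldsymbol{\Sigma}\,\boldsymbol{\gamma},
\qquad
\mathrm{GCor} = \frac{\boldsymbol{\beta}^{\top}\boldsymbol{\Sigma}\,\boldsymbol{\gamma}}
{\sqrt{(\boldsymbol{\beta}^{\top}\boldsymbol{\Sigma}\,\boldsymbol{\beta})\,(\boldsymbol{\gamma}^{\top}\boldsymbol{\Sigma}\,\boldsymbol{\gamma})}},
$$

where the two quadratic forms in the denominator play the role of trait-specific genetic variances. A naive plug-in of logistic Lasso estimates into such functionals is biased in high dimensions, which is the kind of bias a weighted debiasing step is designed to remove.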

Bayesian likelihood-free inference, which is used to perform Bayesian inference when the likelihood is intractable, enjoys an increasing number of important scientific applications. However, many aspects of a Bayesian analysis become more challenging in the likelihood-free setting. One example of this is prior-data conflict checking, where the goal is to assess whether the information in the data and the prior are inconsistent. Conflicts of this kind are important to detect, since they may reveal problems in an investigator's understanding of what are relevant values of the parameters, and can result in sensitivity of Bayesian inferences to the prior. Here we consider methods for prior-data conflict checking which are applicable regardless of whether the likelihood is tractable or not. In constructing our checks, we consider checking statistics based on prior-to-posterior Kullback-Leibler divergences. The checks are implemented using mixture approximations to the posterior distribution and closed-form approximations to Kullback-Leibler divergences for mixtures, which make Monte Carlo approximation of reference distributions for calibration computationally feasible. When prior-data conflicts occur, it is useful to consider weakly informative prior specifications in alternative analyses as part of a sensitivity analysis. As a main application of our methodology, we develop a technique for searching for weakly informative priors in likelihood-free inference, where the notion of a weakly informative prior is formalized using prior-data conflict checks. The methods are demonstrated in three examples.
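The underlying check is easiest to see in a tractable toy problem (a minimal sketch using a conjugate normal model and a Monte Carlo reference distribution; the paper's contribution is making the same idea work when the likelihood is intractable, via mixture posteriors and closed-form KL approximations):

```python
import numpy as np

# Conjugate toy model: y_i ~ N(theta, sigma2) with known sigma2, prior theta ~ N(mu0, tau0sq).
mu0, tau0sq, sigma2, n = 0.0, 1.0, 1.0, 20

def posterior(ybar):
    """Closed-form posterior mean and variance for the normal-normal model."""
    prec = 1.0 / tau0sq + n / sigma2
    var = 1.0 / prec
    mean = var * (mu0 / tau0sq + n * ybar / sigma2)
    return mean, var

def kl_check_stat(ybar):
    """Prior-to-posterior KL divergence KL(posterior || prior); large values flag conflict."""
    m, v = posterior(ybar)
    return 0.5 * np.log(tau0sq / v) + (v + (m - mu0) ** 2) / (2 * tau0sq) - 0.5

rng = np.random.default_rng(1)

# Reference distribution: replicate the check statistic under the prior predictive.
theta_rep = rng.normal(mu0, np.sqrt(tau0sq), size=5000)
ybar_rep = rng.normal(theta_rep, np.sqrt(sigma2 / n))
ref = np.array([kl_check_stat(yb) for yb in ybar_rep])

# Observed data sitting far in the prior's tail -> small calibration p-value signals conflict.
ybar_obs = 4.0
p_value = np.mean(ref >= kl_check_stat(ybar_obs))
print("prior-data conflict p-value:", p_value)
```

In the likelihood-free setting the posterior and the KL divergence are no longer available in closed form, which is where the mixture approximations described above come in.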

An impossibility theorem demonstrates that a particular problem or set of problems cannot be solved as described in the claim. Such theorems put limits on what can be achieved with artificial intelligence, especially superintelligent AI. As such, these results serve as guidelines, reminders, and warnings for AI safety, AI policy, and governance researchers. They might enable progress on some long-standing questions by formalizing them as constraint-satisfaction problems, without committing to a single option. We strongly believe this to be the most prudent approach to long-term AI safety initiatives. In this paper, we have categorized impossibility theorems applicable to AI into five mechanism-based categories: deduction, indistinguishability, induction, tradeoffs, and intractability. We found that certain theorems are too specific or have implicit assumptions that limit their application. We also added new results (theorems), such as the unfairness of explainability, the first explainability-related result in the induction category. The remaining results deal with misalignment between clones and place a limit on the self-awareness of agents. We concluded that deductive impossibilities rule out 100% guarantees of security. Finally, we give some ideas that hold potential in explainability, controllability, value alignment, ethics, and group decision-making; they can be deepened by further investigation.

Ensemble methods based on subsampling, such as random forests, are popular in applications due to their high predictive accuracy. Existing literature views a random forest prediction as an infinite-order incomplete U-statistic to quantify its uncertainty. However, these methods focus on a small subsampling size of each tree, which is theoretically valid but practically limited. This paper develops an unbiased variance estimator based on incomplete U-statistics, which allows the tree size to be comparable with the overall sample size, making statistical inference possible in a broader range of real applications. Simulation results demonstrate that our estimators enjoy lower bias and more accurate confidence interval coverage without additional computational costs. We also propose a local smoothing procedure to reduce the variation of our estimator, which shows improved numerical performance when the number of trees is relatively small. Further, we investigate the ratio consistency of our proposed variance estimator under specific scenarios. In particular, we develop a new "double U-statistic" formulation to analyze the Hoeffding decomposition of the estimator's variance.
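For context, the standard variance decomposition that such estimators target (stated here as textbook background, not as the paper's new estimator): if $U_{n,k}$ is the complete U-statistic with a kernel of order $k$ and $U_{n,k,N}$ averages that kernel over $N$ size-$k$ subsamples drawn independently and uniformly at random, then

$$
\mathrm{Var}\!\left(U_{n,k,N}\right)
= \left(1 - \tfrac{1}{N}\right)\mathrm{Var}\!\left(U_{n,k}\right) + \tfrac{1}{N}\,\zeta_k,
\qquad
\mathrm{Var}\!\left(U_{n,k}\right)
= \binom{n}{k}^{-1}\sum_{c=1}^{k}\binom{k}{c}\binom{n-k}{k-c}\,\zeta_c,
$$

where $\zeta_c$ is the covariance of the kernel evaluated on two subsamples sharing $c$ observations. Estimators built around the $\zeta_1$ term alone are justified when $k$ is small relative to $n$; allowing $k$ to be comparable with $n$, as above, requires handling the full decomposition.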

In observational studies, causal inference relies on several key identifying assumptions. One identifiability condition is the positivity assumption, which requires the probability of treatment be bounded away from 0 and 1. That is, for every covariate combination, it should be possible to observe both treated and control subjects, i.e., the covariate distributions should overlap between treatment arms. If the positivity assumption is violated, population-level causal inference necessarily involves some extrapolation. Ideally, a greater amount of uncertainty about the causal effect estimate should be reflected in such situations. With that goal in mind, we construct a Gaussian process model for estimating treatment effects in the presence of practical violations of positivity. Advantages of our method include minimal distributional assumptions, a cohesive model for estimating treatment effects, and more uncertainty associated with areas in the covariate space where there is less overlap. We assess the performance of our approach with respect to bias and efficiency using simulation studies. The method is then applied to a study of critically ill female patients to examine the effect of undergoing right heart catheterization.
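A minimal sketch of the idea (using sklearn's generic GP regressor and synthetic data for illustration; the paper's model and kernel choices may differ): fit separate GPs to treated and control outcomes and observe that the posterior standard deviation of the implied treatment-effect surface widens in regions of the covariate space where one arm has little or no data.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(2)

# Synthetic data with poor overlap: treatment is nearly impossible for small x.
n = 200
x = rng.uniform(0, 10, n)
propensity = 1 / (1 + np.exp(-(x - 7)))
a = rng.binomial(1, propensity)
y = 1.0 + 0.5 * x + 2.0 * a + rng.normal(0, 0.5, n)

kernel = 1.0 * RBF(length_scale=2.0) + WhiteKernel(noise_level=0.25)
gp1 = GaussianProcessRegressor(kernel=kernel).fit(x[a == 1].reshape(-1, 1), y[a == 1])
gp0 = GaussianProcessRegressor(kernel=kernel).fit(x[a == 0].reshape(-1, 1), y[a == 0])

grid = np.linspace(0, 10, 6).reshape(-1, 1)
m1, s1 = gp1.predict(grid, return_std=True)
m0, s0 = gp0.predict(grid, return_std=True)

# Treatment-effect estimate and an approximate uncertainty on the grid:
# the standard deviation is largest where one arm has essentially no observations.
tau_hat = m1 - m0
tau_sd = np.sqrt(s1**2 + s0**2)
for xi, t, s in zip(grid.ravel(), tau_hat, tau_sd):
    print(f"x={xi:4.1f}  effect={t:5.2f}  sd={s:4.2f}")
```

The point of the illustration is qualitative: where overlap fails, the GP extrapolates and its posterior uncertainty honestly reports the extrapolation.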

When drawing causal inferences about the effects of multiple treatments on clustered survival outcomes using observational data, we need to address implications of the multilevel data structure, multiple treatments, censoring and unmeasured confounding for causal analyses. Few off-the-shelf causal inference tools are available to simultaneously tackle these issues. We develop a flexible random-intercept accelerated failure time model, in which we use Bayesian additive regression trees to capture arbitrarily complex relationships between censored survival times and pre-treatment covariates and use the random intercepts to capture cluster-specific main effects. We develop an efficient Markov chain Monte Carlo algorithm to draw posterior inferences about the population survival effects of multiple treatments and examine the variability in cluster-level effects. We further propose an interpretable sensitivity analysis approach to evaluate the sensitivity of drawn causal inferences about treatment effect to the potential magnitude of departure from the causal assumption of no unmeasured confounding. Expansive simulations empirically validate and demonstrate good practical operating characteristics of our proposed methods. Applying the proposed methods to a dataset on older high-risk localized prostate cancer patients drawn from the National Cancer Database, we evaluate the comparative effects of three treatment approaches on patient survival, and assess the ramifications of potential unmeasured confounding. The methods developed in this work are readily available in the $\textsf{R}$ package $\textsf{riAFTBART}$.
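In outline, the model described above can be written as follows (notation assumed for illustration; see the paper and the $\textsf{riAFTBART}$ package for the exact specification): for subject $i$ in cluster $j$ receiving treatment $A_{ij}$ with pre-treatment covariates $\mathbf{x}_{ij}$,

$$
\log T_{ij} = f(A_{ij}, \mathbf{x}_{ij}) + b_j + \epsilon_{ij},
\qquad
b_j \sim N(0, \tau^2),
$$

where $T_{ij}$ is the (possibly censored) survival time, $f$ receives a BART prior to capture arbitrarily complex covariate-treatment relationships, $b_j$ is the cluster-specific random intercept, and $\epsilon_{ij}$ is the AFT error term.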

Causally identifying the effect of digital advertising is challenging, because experimentation is expensive and observational data lacks random variation. This paper identifies a pervasive source of naturally occurring, quasi-experimental variation in user-level ad-exposure in digital advertising campaigns. It shows how this variation can be utilized by ad-publishers to identify the causal effect of advertising campaigns. The variation pertains to auction throttling, a probabilistic method of budget pacing that is widely used to spread an ad-campaign's budget over its deployed duration, so that the campaign's budget is not exceeded or overly concentrated in any one period. The throttling mechanism is implemented by computing a participation probability based on the campaign's budget spending rate and then including the campaign in a random subset of available ad-auctions each period according to this probability. We show that access to logged participation probabilities enables identifying the local average treatment effect (LATE) of the ad-campaign. We present a new estimator that leverages this identification strategy and outline a bootstrap procedure for quantifying its variability. We apply our method to real-world ad-campaign data from an e-commerce advertising platform that uses such throttling for budget pacing. We show our estimate is statistically different from estimates derived using other standard observational methods such as OLS and two-stage least squares estimators. Our estimated conversion lift is 110%, a far more plausible figure than the 600% lift estimated using naive observational methods.
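One simple way logged participation probabilities could feed into a Wald-type LATE estimate (a sketch with synthetic data and hypothetical variable names, not the paper's estimator): reweight each user-period by the inverse of its known participation probability so that throttled-in and throttled-out periods are comparable, then take the ratio of the reweighted outcome contrast to the reweighted exposure contrast.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000

# Hypothetical logged data per user-period:
#   p = logged participation probability from the budget-pacing system
#   z = 1 if the campaign entered the period's auctions (randomized with prob p)
#   d = 1 if the user was actually exposed to the ad (only possible when z == 1)
#   y = conversion indicator
p = rng.uniform(0.2, 0.9, n)
z = rng.binomial(1, p)
d = z * rng.binomial(1, 0.4, n)               # exposure requires participation
y = rng.binomial(1, 0.02 + 0.01 * d)          # true lift of 0.01 among the exposed

# Inverse-probability weights account for the known, varying throttling probabilities.
w = np.where(z == 1, 1.0 / p, 1.0 / (1.0 - p))

def wmean(v, mask):
    return np.sum(w[mask] * v[mask]) / np.sum(w[mask])

# Wald-type LATE: reweighted outcome contrast over reweighted exposure contrast.
late = (wmean(y, z == 1) - wmean(y, z == 0)) / (wmean(d, z == 1) - wmean(d, z == 0))
print("estimated LATE:", late)
```

With the probabilities logged, the denominator contrast identifies how much throttling shifts exposure, and the ratio recovers the effect for users whose exposure is moved by the throttling decision.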

Consistency is a long-standing issue faced by dialogue models. In this paper, we frame the consistency of dialogue agents as natural language inference (NLI) and create a new natural language inference dataset called Dialogue NLI. We propose a method that uses a model trained on Dialogue NLI to improve the consistency of a dialogue model, and we evaluate the method with human evaluation and with automatic metrics on a suite of evaluation sets designed to measure a dialogue model's consistency.
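As a rough illustration of how such an NLI model can be plugged into a dialogue system (a sketch with a stand-in scoring function; in a real system the score would come from a classifier trained on Dialogue NLI): candidate responses that contradict the agent's persona or its earlier statements are pushed down during re-ranking.

```python
def nli_contradiction_prob(premise: str, hypothesis: str) -> float:
    """Stand-in for an NLI model trained on Dialogue NLI.

    Returns the probability that `hypothesis` contradicts `premise`.
    This is a trivial placeholder; a real system would call the trained classifier.
    """
    return 1.0 if ("cats" in premise and "hate cats" in hypothesis) else 0.0

def rerank(candidates, persona, history, base_scores, penalty=10.0):
    """Down-weight candidates that contradict any persona sentence or prior utterance."""
    premises = list(persona) + list(history)
    reranked = []
    for cand, score in zip(candidates, base_scores):
        worst = max(nli_contradiction_prob(prem, cand) for prem in premises)
        reranked.append((score - penalty * worst, cand))
    return [cand for _, cand in sorted(reranked, reverse=True)]

persona = ["I have two cats.", "I work as a teacher."]
history = ["Do you like animals?"]
candidates = ["I hate cats.", "Yes, I love my two cats!"]
print(rerank(candidates, persona, history, base_scores=[0.9, 0.8]))
```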
