
In causal inference, sensitivity models assess how unmeasured confounders could alter causal analyses. However, the sensitivity parameter in these models -- which quantifies the degree of unmeasured confounding -- is often difficult to interpret. For this reason, researchers will sometimes compare the magnitude of the sensitivity parameter to an estimate for measured confounding. This is known as calibration. We propose novel calibrated sensitivity models, which directly incorporate measured confounding, and bound the degree of unmeasured confounding by a multiple of measured confounding. We illustrate how to construct calibrated sensitivity models via several examples. We also demonstrate their advantages over standard sensitivity analyses and calibration; in particular, the calibrated sensitivity parameter is an intuitive unit-less ratio of unmeasured divided by measured confounding, unlike standard sensitivity parameters, and one can correctly incorporate uncertainty due to estimating measured confounding, which standard calibration methods fail to do. By incorporating uncertainty due to measured confounding, we observe that causal analyses can be less robust or more robust to unmeasured confounding than would have been shown with standard approaches. We develop efficient estimators and methods for inference for bounds on the average treatment effect with three calibrated sensitivity models, and establish that our estimators are doubly robust and attain parametric efficiency and asymptotic normality under nonparametric conditions on their nuisance function estimators. We illustrate our methods with data analyses on the effect of exposure to violence on attitudes towards peace in Darfur and the effect of mothers' smoking on infant birthweight.
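To make the calibrated sensitivity idea concrete, here is a minimal numerical sketch (not the paper's estimators): measured confounding is proxied by how much adjusting for an observed covariate moves a simple regression-based ATE estimate, and the calibrated parameter R bounds unmeasured confounding by R times that quantity. All modelling choices, names, and the helper function below are illustrative assumptions.

```python
# Minimal, illustrative sketch of a calibrated sensitivity analysis (not the
# paper's estimators): the calibrated parameter R bounds unmeasured confounding
# by a multiple R of an estimate of measured confounding.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 5_000
x = rng.normal(size=n)                       # measured confounder
u = rng.normal(size=n)                       # unmeasured confounder (unknown in practice)
a = (0.5 * x + 0.3 * u + rng.normal(size=n) > 0).astype(float)
y = 1.0 * a + 0.8 * x + 0.5 * u + rng.normal(size=n)

def regression_ate(outcome, treat, covariates):
    """ATE coefficient from a linear outcome regression (illustrative helper)."""
    design = np.column_stack([treat] + covariates)
    return LinearRegression().fit(design, outcome).coef_[0]

ate_adjusted = regression_ate(y, a, [x])     # adjusts for the measured confounder x
ate_crude = regression_ate(y, a, [])         # ignores x

# "Measured confounding": how much adjusting for x moves the estimate.
measured_confounding = abs(ate_crude - ate_adjusted)

# Calibrated sensitivity parameter: unmeasured confounding is assumed to be at
# most R times as strong as measured confounding (a unit-less ratio).
R = 1.0
bias_bound = R * measured_confounding
print(f"adjusted ATE estimate: {ate_adjusted:.3f}")
print(f"bounds under R = {R}: [{ate_adjusted - bias_bound:.3f}, {ate_adjusted + bias_bound:.3f}]")
```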

Related content

Frontier artificial intelligence (AI) systems could pose increasing risks to public safety and security. But what level of risk is acceptable? One increasingly popular approach is to define capability thresholds, which describe AI capabilities beyond which an AI system is deemed to pose too much risk. A more direct approach is to define risk thresholds that simply state how much risk would be too much. For instance, they might state that the likelihood of cybercriminals using an AI system to cause X amount of economic damage must not increase by more than Y percentage points. The main upside of risk thresholds is that they are more principled than capability thresholds, but the main downside is that they are more difficult to evaluate reliably. For this reason, we currently recommend that companies (1) define risk thresholds to provide a principled foundation for their decision-making, (2) use these risk thresholds to help set capability thresholds, and then (3) primarily rely on capability thresholds to make their decisions. Regulators should also explore the area because, ultimately, they are the most legitimate actors to define risk thresholds. If AI risk estimates become more reliable, risk thresholds should arguably play an increasingly direct role in decision-making.

Dynamical behaviors of complex interacting systems, including brain activities, financial price movements, and physical collective phenomena, are associated with underlying interactions between the system's components. The issue of uncovering interaction relations in such systems using observable dynamics is called relational inference. In this study, we propose a Diffusion model for Relational Inference (DiffRI), inspired by a self-supervised method for probabilistic time series imputation. DiffRI learns to infer the probability of the presence of connections between components through conditional diffusion modeling.

Precision matrices are crucial in many fields such as social networks, neuroscience, and economics, representing the edge structure of Gaussian graphical models (GGMs), where a zero in an off-diagonal position of the precision matrix indicates conditional independence between nodes. In high-dimensional settings where the dimension of the precision matrix $p$ exceeds the sample size $n$ and the matrix is sparse, methods like graphical Lasso, graphical SCAD, and CLIME are popular for estimating GGMs. While frequentist methods are well-studied, Bayesian approaches for (unstructured) sparse precision matrices are less explored. The graphical horseshoe estimate by \citet{li2019graphical}, applying the global-local horseshoe prior, shows superior empirical performance, but theoretical work for sparse precision matrix estimation using shrinkage priors is limited. This paper addresses these gaps by providing concentration results for the tempered posterior with the fully specified horseshoe prior in high-dimensional settings. Moreover, we also provide novel theoretical results for model misspecification, offering a general oracle inequality for the posterior.
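For readers unfamiliar with the frequentist baselines named above, the following sketch estimates a sparse precision matrix with the graphical lasso in scikit-learn; it only illustrates the GGM setting and is unrelated to the paper's Bayesian horseshoe analysis. The ground-truth graph and tuning parameter are arbitrary choices for illustration.

```python
# Sketch: sparse precision-matrix estimation for a Gaussian graphical model
# with the graphical lasso (a frequentist baseline mentioned in the abstract).
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(1)
p, n = 10, 200

# Sparse ground-truth precision matrix (tridiagonal => a chain graph).
omega = np.eye(p) + 0.4 * (np.eye(p, k=1) + np.eye(p, k=-1))
sigma = np.linalg.inv(omega)
X = rng.multivariate_normal(np.zeros(p), sigma, size=n)

model = GraphicalLasso(alpha=0.1).fit(X)
omega_hat = model.precision_

# Off-diagonal zeros are read as conditional independencies between nodes.
edges = np.abs(omega_hat) > 1e-4
np.fill_diagonal(edges, False)
print("estimated number of edges:", edges.sum() // 2)
```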

Renewed interest in the relationship between artificial and biological neural networks motivates the study of gradient-free methods. Considering the linear regression model with random design, in this work we theoretically analyze the biologically motivated (weight-perturbed) forward gradient scheme, which is based on a random linear combination of the gradient. If $d$ denotes the number of parameters and $k$ the number of samples, we prove that the mean squared error of this method converges for $k\gtrsim d^2\log(d)$ with rate $d^2\log(d)/k.$ Compared to the dimension dependence $d$ for stochastic gradient descent, an additional factor $d\log(d)$ occurs.
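A minimal sketch of the weight-perturbed forward gradient update in this linear regression setting, under the usual reading that the step is taken along the gradient projected onto a random Gaussian direction; the step size and sample size below are illustrative choices, not the paper's.

```python
# Sketch: (weight-perturbed) forward gradient descent for linear regression.
# Each update uses only the directional derivative of the squared loss along a
# random Gaussian direction v, i.e. the step is taken along (grad . v) v, which
# is an unbiased surrogate for the gradient since E[v v^T] = I.
import numpy as np

rng = np.random.default_rng(2)
d, k = 20, 20_000                        # parameters and samples (k well above d^2 log d)
theta_star = rng.normal(size=d)
theta = np.zeros(d)
lr = 0.1 / d**2                          # conservative step size; illustrative choice

for _ in range(k):
    x = rng.normal(size=d)
    y = x @ theta_star + rng.normal()
    grad = 2.0 * (x @ theta - y) * x     # per-sample gradient of the squared loss
    v = rng.normal(size=d)               # random perturbation direction
    theta -= lr * (grad @ v) * v         # forward-gradient step

print("mean squared parameter error:", np.mean((theta - theta_star) ** 2))
```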

Noninformative priors constructed for estimation purposes are usually not appropriate for model selection and testing. The methodology of integral priors was developed to obtain prior distributions for Bayesian model selection when comparing two models, by modifying initial improper reference priors. We propose a generalization of this methodology to more than two models. Our approach adds an artificial copy of each model under comparison by compactifying the parametric space, creating an ergodic Markov chain across all models that returns the integral priors as marginals of its stationary distribution. Besides the guarantee of their existence and the lack of paradoxes attached to estimation reference priors, an additional advantage of this methodology is that simulating this Markov chain is straightforward: it only requires simulating imaginary training samples for all models and drawing from the corresponding posterior distributions. This renders its implementation automatic and generic, in both the nested and the nonnested case.
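As a schematic of the two-model Markov chain that this methodology generalizes, the sketch below alternates imaginary training samples and posterior draws between two location models. The normal-versus-Cauchy pairing is an illustrative choice under which both single-observation posteriors are proper given flat reference priors; the paper's compactification step, which guarantees ergodicity, is not shown.

```python
# Schematic of the two-model Markov chain behind integral priors (the setting
# this paper generalizes to more than two models). Illustrative pairing: M1 is
# a normal location model, M2 a Cauchy location model, both with improper flat
# reference priors, so each posterior given one imaginary observation z is
# proper (N(z, 1) and Cauchy(z, 1), respectively).
import numpy as np

rng = np.random.default_rng(3)
theta1, theta2 = 0.0, 0.0
chain = []

for _ in range(10_000):
    # Imaginary training sample from M2, then draw theta1 from its posterior under M1.
    z = theta2 + rng.standard_cauchy()
    theta1 = rng.normal(loc=z, scale=1.0)
    # Imaginary training sample from M1, then draw theta2 from its posterior under M2.
    z = rng.normal(loc=theta1, scale=1.0)
    theta2 = z + rng.standard_cauchy()
    chain.append((theta1, theta2))

# The marginals of the chain's invariant measure are the integral priors.
print("last draws:", chain[-1])
```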

The Rasch model, a classical model in item response theory, is widely used in psychometrics to model the relationship between individuals' latent traits and their binary responses on assessments or questionnaires. In this paper, we introduce a new likelihood-based estimator -- random pairing maximum likelihood estimator ($\mathsf{RP\text{-}MLE}$) and its bootstrapped variant multiple random pairing MLE ($\mathsf{MRP\text{-}MLE}$) that faithfully estimate the item parameters in the Rasch model. The new estimators have several appealing features compared to existing ones. First, both work for sparse observations, an increasingly important scenario in the big data era. Second, both estimators are provably minimax optimal in terms of finite sample $\ell_{\infty}$ estimation error. Lastly, $\mathsf{RP\text{-}MLE}$ admits precise distributional characterization that allows uncertainty quantification on the item parameters, e.g., construction of confidence intervals of the item parameters. The main idea underlying $\mathsf{RP\text{-}MLE}$ and $\mathsf{MRP\text{-}MLE}$ is to randomly pair user-item responses to form item-item comparisons. This is carefully designed to reduce the problem size while retaining statistical independence. We also provide empirical evidence of the efficacy of the two new estimators using both simulated and real data.
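The following sketch illustrates one reading of the random-pairing idea: within each respondent, responses are paired at random, pairs with exactly one correct answer become item-item comparisons in which the person parameter cancels, and item difficulties are recovered by logistic regression. This is an illustration of the idea only, not the paper's exact RP-MLE construction or its inference procedure.

```python
# Rough sketch of the random-pairing idea: within each respondent, item responses
# are paired at random; a pair with exactly one correct answer yields an item-item
# comparison in which the person parameter cancels (a Rasch-model property), and
# item difficulties are then fit by logistic regression on these comparisons.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
n_users, n_items = 2_000, 30
theta = rng.normal(size=n_users)                  # person abilities
beta = rng.normal(size=n_items)                   # item difficulties (targets)
prob = 1.0 / (1.0 + np.exp(-(theta[:, None] - beta[None, :])))
Y = (rng.uniform(size=prob.shape) < prob).astype(int)

rows, labels = [], []
for i in range(n_users):
    items = rng.permutation(n_items)
    for j, k in zip(items[0::2], items[1::2]):    # disjoint random pairs per user
        if Y[i, j] + Y[i, k] == 1:                # keep only informative comparisons
            row = np.zeros(n_items)
            row[j], row[k] = 1.0, -1.0
            rows.append(row)
            labels.append(Y[i, j])                # P(label = 1) = sigmoid(beta_k - beta_j)

clf = LogisticRegression(penalty=None, fit_intercept=False, max_iter=1000)
clf.fit(np.array(rows), labels)
beta_hat = -clf.coef_.ravel()                     # sign flip puts estimates on the difficulty scale
beta_hat -= beta_hat.mean()                       # difficulties identified only up to a shift
print("correlation with true difficulties:", np.corrcoef(beta_hat, beta - beta.mean())[0, 1])
```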

We propose a test for the identification of causal effects in mediation and dynamic treatment models that is based on two sets of observed variables, namely covariates to be controlled for and suspected instruments, building on the test by Huber and Kueck (2022) for single treatment models. We consider models with a sequential assignment of a treatment and a mediator to assess the direct treatment effect (net of the mediator), the indirect treatment effect (via the mediator), or the joint effect of both treatment and mediator. We establish testable conditions for identifying such effects in observational data. These conditions jointly imply (1) the exogeneity of the treatment and the mediator conditional on covariates and (2) the validity of distinct instruments for the treatment and the mediator, meaning that the instruments do not directly affect the outcome (other than through the treatment or mediator) and are unconfounded given the covariates. Our framework extends to post-treatment sample selection or attrition problems when replacing the mediator by a selection indicator for observing the outcome, enabling joint testing of the selectivity of treatment and attrition. We propose a machine learning-based test to control for covariates in a data-driven manner and analyze its finite sample performance in a simulation study. Additionally, we apply our method to Slovak labor market data and find that our testable implications are not rejected for a sequence of training programs typically considered in dynamic treatment evaluations.

Bayesian forecasting is developed in multivariate time series analysis for causal inference. Causal evaluation of sequentially observed time series data from control and treated units focuses on the impacts of interventions using contemporaneous outcomes in control units. Methodological developments here concern multivariate dynamic models for time-varying effects across multiple treated units with explicit foci on sequential learning and aggregation of intervention effects. Analysis explores dimension reduction across multiple synthetic counterfactual predictors. Computational advances leverage fully conjugate models for efficient sequential learning and inference, including cross-unit correlations and their time variation. This allows full uncertainty quantification on model hyper-parameters via Bayesian model averaging. A detailed case study evaluates interventions in a supermarket promotions experiment, with coupled predictive analyses in selected regions of a large-scale commercial system. Comparisons with existing methods highlight the issues of appropriate uncertainty quantification in causal inference in aggregation across treated units, among other practical concerns.

Making inference with spatial extremal dependence models can be computationally burdensome since they involve intractable and/or censored likelihoods. Building on recent advances in likelihood-free inference with neural Bayes estimators, that is, neural networks that approximate Bayes estimators, we develop highly efficient estimators for censored peaks-over-threshold models that use data augmentation techniques to encode censoring information in the neural network input. Our new method provides a paradigm shift that challenges traditional censored likelihood-based inference methods for spatial extremal dependence models. Our simulation studies highlight significant gains in both computational and statistical efficiency, relative to competing likelihood-based approaches, when applying our novel estimators to make inference with popular extremal dependence models, such as max-stable, $r$-Pareto, and random scale mixture process models. We also illustrate that it is possible to train a single neural Bayes estimator for a general censoring level, precluding the need to retrain the network when the censoring level is changed. We illustrate the efficacy of our estimators by making fast inference on hundreds of thousands of high-dimensional spatial extremal dependence models to assess extreme particulate matter 2.5 microns or less in diameter (${\rm PM}_{2.5}$) concentration over the whole of Saudi Arabia.
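As a small illustration of encoding censoring information in the network input, the sketch below replaces censored values by the censoring threshold and appends a binary censoring indicator channel. The function name and the exact encoding are assumptions made for illustration, not the authors' specification.

```python
# Sketch of one plausible way to encode censoring information in the input of a
# neural Bayes estimator: values below the censoring threshold are set to the
# threshold and flagged in an extra indicator channel, so a single network can
# be trained across censoring levels. The authors' exact encoding may differ.
import numpy as np

def encode_censored_input(data, censor_prob):
    """data: (n_sites,) array of spatial observations at one replicate.
    censor_prob: marginal probability level defining the censoring threshold."""
    threshold = np.quantile(data, censor_prob)
    censored = data < threshold
    values = np.where(censored, threshold, data)
    # Stack the (possibly censored) values with a censoring indicator channel,
    # so the network "sees" both the data and which entries were censored.
    return np.stack([values, censored.astype(float)], axis=-1)

x = np.array([0.2, 3.1, 0.05, 1.7, 0.9])
print(encode_censored_input(x, censor_prob=0.4))
```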

We describe a probabilistic methodology, based on random walk estimates, to obtain exponential upper bounds for the probability of observing unusually small maximal components in two classical (near-)critical random graph models. More specifically, we analyse the near-critical Erd\H{o}s-R\'enyi model $\mathbb{G}(n,p)$ and the random graph $\mathbb{G}(n,d,p)$ obtained by performing near-critical $p$-bond percolation on a simple random $d$-regular graph and show that, for each one of these models, the probability that the size of a largest component is smaller than $n^{2/3}/A$ is at most of order $\exp(-A^{3/2})$. The exponent $3/2$ is known to be optimal for the near-critical $\mathbb{G}(n,p)$ random graph, whereas for the near-critical $\mathbb{G}(n,d,p)$ model the best known upper bound for the above probability was of order $A^{-3/5}$. As a secondary result we show, by means of an optimized version of the martingale method of Nachmias and Peres, that the above probability of observing an unusually small maximal component is at most of order $\exp(-A^{3/5})$ in two other critical models, namely a random intersection graph and the quantum random graph; these stretched-exponential bounds also improve upon the known (polynomial) bounds available for these two other critical models.
