
In cluster randomized trials, patients are recruited after clusters are randomized, and the recruiters and patients may not be blinded to the assignment. This often leads to differential recruitment and systematic differences in baseline characteristics of the recruited patients between intervention and control arms, inducing post-randomization selection bias. We aim to rigorously define causal estimands in the presence of selection bias. We elucidate the conditions under which standard covariate adjustment methods can validly estimate these estimands. We further discuss the additional data and assumptions necessary for estimating causal effects when such conditions are not met. Adopting the principal stratification framework in causal inference, we clarify that there are two average treatment effect (ATE) estimands in cluster randomized trials: one for the overall population and one for the recruited population. We derive the analytical formulas of the two estimands in terms of principal-stratum-specific causal effects. Further, using simulation studies, we assess the empirical performance of the multivariable regression adjustment method under different data-generating processes leading to selection bias. When treatment effects are heterogeneous across principal strata, the ATE on the overall population generally differs from the ATE on the recruited population. A naive intention-to-treat analysis of the recruited sample leads to biased estimates of both ATEs. In the presence of post-randomization selection and without additional data on the non-recruited subjects, the ATE on the recruited population is estimable only when the treatment effects are homogeneous between principal strata, and the ATE on the overall population is generally not estimable. The extent to which covariate adjustment can remove selection bias depends on the degree of effect heterogeneity across principal strata.
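As a rough illustration of the gap between the two estimands, the sketch below simulates a hypothetical data-generating process (not the paper's) in which recruitment responds to the randomized arm through a latent variable that also modifies the treatment effect; the naive contrast among recruited subjects then diverges from the overall-population ATE.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000                          # individuals, pooled across clusters for simplicity
u = rng.normal(size=n)               # latent variable driving recruitment and effect size
z = rng.integers(0, 2, size=n)       # randomized (cluster-level) treatment assignment

# Potential recruitment statuses R(1), R(0) define the principal strata.
# Recruitment responds to assignment: high-u subjects are recruited more under treatment.
r1 = rng.random(n) < 1 / (1 + np.exp(-(0.5 + 1.5 * u)))  # recruited if assigned treatment
r0 = rng.random(n) < 1 / (1 + np.exp(-(0.5 - 0.5 * u)))  # recruited if assigned control

y0 = u + rng.normal(size=n)          # potential outcomes with a heterogeneous effect:
y1 = y0 + 1.0 + 0.8 * u              # the effect is larger for high-u subjects
y = np.where(z == 1, y1, y0)
recruited = np.where(z == 1, r1, r0)

ate_overall = (y1 - y0).mean()
naive_itt = y[(z == 1) & recruited].mean() - y[(z == 0) & recruited].mean()
print(f"overall ATE: {ate_overall:.3f}   naive ITT on recruited: {naive_itt:.3f}")
# The naive contrast compares different mixes of principal strata across arms,
# so it is biased for both the overall and the recruited-population ATE.
```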

Related Content

Stratification is commonly employed in clinical trials to reduce chance covariate imbalances and increase the precision of the treatment effect estimate. We propose a general framework for constructing the confidence interval (CI) for a difference or ratio effect parameter under stratified sampling by the method of variance estimates recovery (MOVER). We consider the additive variance and additive CI approaches for the difference, in which either the CI for the weighted difference, or the CI for the weighted effect in each group, or the variance for the weighted difference is calculated as the weighted sum of the corresponding stratum-specific statistics. The CI for the ratio is derived by the Fieller and log-ratio methods. The weights can be random quantities under the assumption of a constant effect across strata, but this assumption is not needed for fixed weights. These methods can be easily applied to different endpoints in that they require only the point estimate, CI, and variance estimate for the measure of interest in each group across strata. The methods are illustrated with two real examples. In one example, we derive the MOVER CIs for the risk difference and risk ratio for binary outcomes. In the other example, we compare the restricted mean survival time and milestone survival in stratified analysis of time-to-event outcomes. Simulations show that the proposed MOVER CIs generally outperform the standard large sample CIs, and that the additive CI approach performs slightly better than the additive variance approach.
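A minimal sketch of one MOVER-type construction for a stratified risk difference, assuming Wilson intervals per stratum and fixed weights; the paper compares several such variants, and this is only one of them. The data are hypothetical.

```python
import numpy as np
from scipy import stats

def wilson_ci(x, n, alpha=0.05):
    """Point estimate and Wilson score CI for a single proportion."""
    z = stats.norm.ppf(1 - alpha / 2)
    p = x / n
    centre = (p + z**2 / (2 * n)) / (1 + z**2 / n)
    half = z * np.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / (1 + z**2 / n)
    return p, centre - half, centre + half

def mover_weighted(estimates, lowers, uppers, w):
    """MOVER CI for a weighted sum sum_k w_k * theta_k across strata."""
    est = np.dot(w, estimates)
    lo = est - np.sqrt(np.sum((w * (np.asarray(estimates) - np.asarray(lowers))) ** 2))
    hi = est + np.sqrt(np.sum((w * (np.asarray(uppers) - np.asarray(estimates))) ** 2))
    return est, lo, hi

grp1 = [(15, 50), (30, 80)]      # treatment arm: (events, n) in strata 1 and 2
grp2 = [(10, 55), (20, 85)]      # control arm
w = np.array([0.4, 0.6])         # fixed stratum weights summing to 1

p1, l1, u1 = mover_weighted(*zip(*[wilson_ci(x, n) for x, n in grp1]), w)
p2, l2, u2 = mover_weighted(*zip(*[wilson_ci(x, n) for x, n in grp2]), w)

# MOVER for the difference of the two weighted group effects (risk difference).
d = p1 - p2
lo = d - np.sqrt((p1 - l1) ** 2 + (u2 - p2) ** 2)
hi = d + np.sqrt((u1 - p1) ** 2 + (p2 - l2) ** 2)
print(f"stratified risk difference = {d:.3f}, 95% CI = ({lo:.3f}, {hi:.3f})")
```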

In an influential critique of empirical practice, Freedman (2008) showed that the linear regression estimator was biased for the analysis of randomized controlled trials under the randomization model. Under Freedman's assumptions, we derive exact closed-form bias corrections for the linear regression estimator with and without treatment-by-covariate interactions. We show that the limiting distribution of the bias-corrected estimator is identical to that of the uncorrected estimator, implying that the asymptotic gains from adjustment can be attained without introducing any risk of bias. Taken together with results from Lin (2013), our results show that Freedman's theoretical arguments against the use of regression adjustment can be completely resolved with minor modifications to practice.
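The closed-form corrections themselves are in the paper; as a point of reference, here is a minimal sketch of the interacted regression adjustment of Lin (2013) with a centered covariate and HC2 standard errors, on a hypothetical simulated trial.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 500
x = rng.normal(size=n)
z = rng.permutation(np.r_[np.ones(n // 2), np.zeros(n - n // 2)])  # complete randomization
y = 1.0 + 2.0 * z + x + 0.5 * z * x + rng.normal(size=n)           # true ATE = 2.0

xc = x - x.mean()                                       # centre the covariate so that the
X = sm.add_constant(np.column_stack([z, xc, z * xc]))   # coefficient on z estimates the ATE
fit = sm.OLS(y, X).fit(cov_type="HC2")                  # HC2-robust standard errors
print(f"adjusted ATE estimate: {fit.params[1]:.3f} (SE {fit.bse[1]:.3f})")
```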

A notable challenge of leveraging Electronic Health Records (EHR) for treatment effect assessment is the lack of precise information on important clinical variables, including the treatment received and the response. Both treatment information and response often cannot be accurately captured by readily available EHR features and require labor-intensive manual chart review to precisely annotate, which limits the number of available gold standard labels on these key variables. We consider average treatment effect (ATE) estimation under such a semi-supervised setting with a large number of unlabeled samples containing both confounders and imperfect EHR features for treatment and response. We derive the efficient influence function for the ATE and use it to construct a semi-supervised multiple machine learning (SMMAL) estimator. We show that our SMMAL estimator is semiparametric efficient with B-spline regression under low-dimensional smooth models. We develop the adaptive sparsity/model doubly robust estimation under high-dimensional logistic propensity score and outcome regression models. Results from simulation studies support the validity of our SMMAL method and its superiority over supervised benchmarks.
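A toy sketch of the semi-supervised principle SMMAL builds on, shown for a simple mean rather than the ATE: predictions from imperfect surrogate features are averaged over all samples, and the randomly labeled subset debiases them. The data-generating process and feature names are hypothetical.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
N, n = 20_000, 400                        # unlabeled pool and labeled subset sizes
w = rng.normal(size=(N, 3))               # imperfect EHR surrogate features
y = w @ np.array([1.0, -0.5, 0.3]) + rng.normal(size=N)   # gold-standard outcome
labeled = rng.choice(N, size=n, replace=False)            # chart-reviewed labels, at random

f = LinearRegression().fit(w[labeled], y[labeled])        # surrogate-to-outcome model
pred = f.predict(w)

mu_sup = y[labeled].mean()                                   # supervised benchmark
mu_ssl = pred.mean() + (y[labeled] - pred[labeled]).mean()   # debiased semi-supervised
print(f"supervised: {mu_sup:.3f}  semi-supervised: {mu_ssl:.3f}  truth: {y.mean():.3f}")
# The correction term keeps the estimator unbiased even when f is misspecified,
# while the unlabeled pool shrinks the variance.
```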

Standard Monte Carlo computation is widely known to exhibit a canonical square-root convergence speed in terms of sample size. Two recent techniques, one based on control variates and one on importance sampling, both derived from an integration of reproducing kernels and Stein's identity, have been proposed to reduce the error in Monte Carlo computation to supercanonical convergence. This paper presents a more general framework to encompass both techniques that is especially beneficial when the sample generator is biased and noise-corrupted. We show our general estimator, which we call the doubly robust Stein-kernelized estimator, outperforms both existing methods in terms of convergence rates across different scenarios. We also demonstrate the superior performance of our method via numerical examples.
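As background, a minimal sketch of the Stein-identity control variate that underlies such kernelized estimators: for X ~ N(0, 1) and smooth f, E[f'(X) − X f(X)] = 0, so that quantity can be subtracted at no cost in bias. The integrand and f are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.normal(size=100_000)

h = np.cos(x)                       # integrand: E[cos(X)] = exp(-1/2) for X ~ N(0, 1)
g = np.cos(x) - x * np.sin(x)       # Stein control variate with f = sin: f'(x) - x f(x)

beta = np.cov(h, g)[0, 1] / g.var() # estimated optimal control-variate coefficient
print(f"plain MC: {h.mean():.5f}   Stein CV: {(h - beta * g).mean():.5f}   "
      f"truth: {np.exp(-0.5):.5f}")
# The variance reduction depends on how well g correlates with h; the kernelized
# methods in the paper effectively optimize f over a reproducing kernel space.
```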

We consider the optimal decision-making problem in a primary sample of interest with multiple auxiliary sources available. The outcome of interest is limited in the sense that it is only observed in the primary sample. In reality, such multiple data sources may belong to heterogeneous studies and thus cannot be combined directly. This paper proposes a new framework to handle heterogeneous studies and address the limited outcome simultaneously through a novel calibrated optimal decision making (CODA) method, by leveraging the common intermediate outcomes in multiple data sources. Specifically, CODA allows the baseline covariates across different samples to have either homogeneous or heterogeneous distributions. Under a mild and testable assumption that the conditional means of intermediate outcomes in different samples are equal given baseline covariates and the treatment information, we show that the proposed CODA estimator of the conditional mean outcome is asymptotically normal and more efficient than using the primary sample alone. In addition, the variance of the CODA estimator can be easily obtained using the simple plug-in method due to the rate double robustness. Extensive experiments on simulated datasets demonstrate empirical validity and improved efficiency using CODA, followed by a real application to a MIMIC-III dataset as the primary sample with the auxiliary data from eICU.
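A toy sketch of the underlying idea (not the CODA estimator itself): fit the shared intermediate-outcome regression on the pooled samples, fit the outcome regression on the primary sample only, and plug one into the other. Everything is kept linear here so the plug-in step is valid; all names and the data-generating process are hypothetical.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(4)
n, N = 300, 30_000                             # primary and auxiliary sample sizes

def gen(m):
    x = rng.normal(size=(m, 2))                # baseline covariates
    a = rng.integers(0, 2, size=m)             # treatment
    s = x @ np.array([1.0, 0.5]) + 1.5 * a + rng.normal(size=m)  # intermediate outcome
    return x, a, s

xp, ap, sp = gen(n)                            # primary sample (Y observed)
yp = 2.0 * sp + xp[:, 0] + rng.normal(size=n)  # final outcome, linear in S and X
xa, aa, sa = gen(N)                            # auxiliary sample (no Y)

# Fit E[S | X, A] on the pooled data -- the shared, testable component.
mu_s = LinearRegression().fit(
    np.column_stack([np.vstack([xp, xa]), np.r_[ap, aa]]), np.r_[sp, sa])
# Fit E[Y | X, A, S] on the primary sample only.
mu_y = LinearRegression().fit(np.column_stack([xp, ap, sp]), yp)

# Mean outcome under treat-all (A = 1): plug the predicted E[S | X, 1] into the Y model.
s1 = mu_s.predict(np.column_stack([xp, np.ones(n)]))
v1 = mu_y.predict(np.column_stack([xp, np.ones(n), s1])).mean()
print(f"estimated mean outcome under treat-all: {v1:.3f}")
```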

Performing causal inference in observational studies requires us to assume that confounding variables are correctly adjusted for. G-computation methods are often used in these scenarios, with several recent proposals using Bayesian versions of g-computation. In settings with few confounders, standard models can be employed; however, as the number of confounders increases, these models become less feasible, as there are fewer observations available for each unique combination of confounding variables. In this paper we propose a new model for estimating treatment effects in observational studies that incorporates both parametric and nonparametric outcome models. By conceptually splitting the data, we can combine these models while maintaining a conjugate framework, allowing us to avoid the use of MCMC methods. Approximations using the central limit theorem and random sampling allow our method to scale to high-dimensional confounders while maintaining computational efficiency. We illustrate the model using carefully constructed simulation studies, and we compare its computational cost to that of other benchmark models.
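For orientation, a minimal sketch of plain (frequentist) g-computation with a parametric outcome model; the conjugate Bayesian parametric/nonparametric mixture proposed in the paper is not reproduced here. The simulated confounding structure is hypothetical.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(5)
n = 5_000
x = rng.normal(size=(n, 3))                        # confounders
p = 1 / (1 + np.exp(-x[:, 0]))                     # treatment depends on a confounder
a = rng.random(n) < p
y = 1.0 * a + x @ np.array([0.8, -0.4, 0.2]) + rng.normal(size=n)

# Outcome model E[Y | A, X], then standardize over the empirical X distribution.
f = LinearRegression().fit(np.column_stack([a, x]), y)
y1 = f.predict(np.column_stack([np.ones(n), x]))   # predictions setting A = 1
y0 = f.predict(np.column_stack([np.zeros(n), x]))  # predictions setting A = 0
print(f"g-computation ATE estimate: {(y1 - y0).mean():.3f} (truth 1.0)")
```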

Tie-breaker experimental designs are hybrids of Randomized Controlled Trials (RCTs) and Regression Discontinuity Designs (RDDs) in which subjects with moderate scores are placed in an RCT while subjects with extreme scores are deterministically assigned to the treatment or control group. The tie-breaker design (TBD) has practical advantages over the RCT in settings where it is unfair or uneconomical to deny the treatment to the most deserving recipients. Meanwhile, the TBD has statistical benefits due to randomization over the RDD. In this paper we discuss and quantify the statistical benefits of the TBD compared to the RDD. If the goal is estimation of the average treatment effect or of the treatment effect at more than one score value, the statistical benefits of using a TBD over an RDD are apparent. If the goal is estimation of the treatment effect at a single score value, which is typically done by fitting local linear regressions, about 2.8 times more subjects are needed for an RDD in order to achieve the same asymptotic mean squared error. We further demonstrate, using both theoretical results and simulations based on the Angrist and Lavy (1999) classroom size dataset, that larger experimental radii for the TBD lead to greater statistical efficiency.
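A minimal sketch of a tie-breaker assignment with experimental radius delta, followed by estimation under a linear working model; the design parameters and outcome model are hypothetical.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
n, delta = 2_000, 0.5
s = rng.uniform(-1, 1, size=n)                 # running score
z = np.where(s > delta, 1,                     # high scores: always treated
    np.where(s < -delta, 0,                    # low scores: always control
             rng.integers(0, 2, size=n)))      # middle band of radius delta: randomized
y = 0.7 * z + 1.2 * s + rng.normal(size=n)     # true treatment effect = 0.7

X = sm.add_constant(np.column_stack([z, s]))
fit = sm.OLS(y, X).fit(cov_type="HC2")
print(f"estimated treatment effect: {fit.params[1]:.3f} (SE {fit.bse[1]:.3f})")
# delta = 0 recovers a sharp RDD; delta = 1 recovers a full RCT.
```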

Cluster randomized trials (CRTs) randomly assign an intervention to groups of individuals (e.g., clinics or communities) and measure outcomes on individuals in those groups. While offering many advantages, this experimental design introduces challenges that are only partially addressed by existing analytic approaches. First, outcomes are often missing for some individuals within clusters. Failing to appropriately adjust for differential outcome measurement can result in biased estimates and inference. Second, CRTs often randomize limited numbers of clusters, resulting in chance imbalances on baseline outcome predictors between arms. Failing to adaptively adjust for these imbalances and other predictive covariates can result in efficiency losses. To address these methodological gaps, we propose and evaluate a novel two-stage targeted minimum loss-based estimator (TMLE) to adjust for baseline covariates in a manner that optimizes precision, after controlling for baseline and post-baseline causes of missing outcomes. Finite sample simulations illustrate that our approach can nearly eliminate bias due to differential outcome measurement, while existing CRT estimators yield misleading results and inferences. Application to real data from the SEARCH community randomized trial demonstrates the gains in efficiency afforded through adaptive adjustment for baseline covariates, after controlling for missingness on individual-level outcomes.
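For intuition, a minimal single-stage TMLE sketch for a population mean outcome subject to missingness that is random given covariates; the paper's two-stage, cluster-level estimator handles the CRT structure and is considerably more involved. The data-generating process is hypothetical.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(7)
n = 5_000
x = rng.normal(size=(n, 2))
y_full = x @ np.array([1.0, -0.5]) + rng.normal(size=n)   # outcome before missingness
p_meas = 1 / (1 + np.exp(-(0.5 + x[:, 0])))               # measurement probability
m = rng.random(n) < p_meas                                # M = 1 if outcome observed

# Initial outcome regression (complete cases) and measurement mechanism.
q = LinearRegression().fit(x[m], y_full[m]).predict(x)
g = LogisticRegression().fit(x, m).predict_proba(x)[:, 1]

# Targeting step: fluctuate Q along the clever covariate 1/g(X) among observed rows.
h = 1 / g[m]
eps = np.sum((y_full[m] - q[m]) * h) / np.sum(h ** 2)
q_star = q + eps / g                                      # updated predictions, everyone
print(f"TMLE estimate of E[Y]: {q_star.mean():.3f}   truth: {y_full.mean():.3f}")
# A complete-case mean would be biased here because measurement depends on x[:, 0].
```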

In this article, a discrete analogue of the continuous Teissier distribution is presented. Several of its important distributional characteristics are derived. The unknown parameter is estimated by the method of maximum likelihood and the method of moments. Two real data applications are presented to show the applicability of the proposed model.
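A minimal sketch of the survival-discretization construction and the maximum likelihood fit, assuming one common parameterization of the continuous Teissier survival function, S(x) = exp(1 + λx − e^{λx}); the discrete pmf is then p(k) = S(k) − S(k+1).

```python
import numpy as np
from scipy.optimize import minimize_scalar

def surv(x, lam):
    """Assumed continuous Teissier survival S(x) = exp(1 + lam*x - exp(lam*x))."""
    return np.exp(1 + lam * x - np.exp(lam * x))

def pmf(k, lam):
    return surv(k, lam) - surv(k + 1, lam)     # discretization: p(k) = S(k) - S(k+1)

def neg_loglik(lam, data):
    return -np.sum(np.log(pmf(data, lam) + 1e-300))   # small floor guards against log(0)

rng = np.random.default_rng(8)
lam_true = 0.4
grid = np.arange(0, 60)
cdf = 1 - surv(grid + 1, lam_true)             # P(X <= k) = 1 - S(k + 1)
samples = np.searchsorted(cdf, rng.random(2_000))     # inverse-CDF sampling

res = minimize_scalar(neg_loglik, bounds=(0.05, 2.0), args=(samples,), method="bounded")
print(f"MLE of lambda: {res.x:.3f} (truth {lam_true})")
```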

In observational studies, causal inference relies on several key identifying assumptions. One identifiability condition is the positivity assumption, which requires that the probability of treatment be bounded away from 0 and 1. That is, for every covariate combination, it should be possible to observe both treated and control subjects, i.e., the covariate distributions should overlap between treatment arms. If the positivity assumption is violated, population-level causal inference necessarily involves some extrapolation. Ideally, greater uncertainty about the causal effect estimate should be reflected in such situations. With that goal in mind, we construct a Gaussian process model for estimating treatment effects in the presence of practical violations of positivity. Advantages of our method include minimal distributional assumptions, a cohesive model for estimating treatment effects, and greater uncertainty in areas of the covariate space where there is less overlap. We assess the performance of our approach with respect to bias and efficiency using simulation studies. The method is then applied to a study of critically ill female patients to examine the effect of undergoing right heart catheterization.
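A minimal sketch of the per-arm Gaussian process idea using scikit-learn (not the paper's model): posterior uncertainty for the arm-specific regressions, and hence for their difference, widens in regions of the covariate space where an arm has little or no data. The simulated overlap pattern is hypothetical.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(9)
n = 300
x = rng.uniform(0, 1, size=n)
a = rng.random(n) < np.clip(x, 0.02, 0.98)   # treated mostly at high x: weak overlap
y = np.sin(2 * np.pi * x) + 1.0 * a + 0.3 * rng.normal(size=n)

kernel = 1.0 * RBF(length_scale=0.2) + WhiteKernel(noise_level=0.1)
gp1 = GaussianProcessRegressor(kernel=kernel).fit(x[a, None], y[a])     # treated arm
gp0 = GaussianProcessRegressor(kernel=kernel).fit(x[~a, None], y[~a])   # control arm

grid = np.linspace(0, 1, 11)[:, None]
m1, s1 = gp1.predict(grid, return_std=True)
m0, s0 = gp0.predict(grid, return_std=True)
print(np.round(m1 - m0, 2))                  # pointwise treatment-effect estimates
print(np.round(np.sqrt(s1**2 + s0**2), 2))   # SD widens where an arm lacks data
```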
