
Longitudinal studies are often subject to missing data. The ICH E9(R1) addendum addresses the importance of defining a treatment effect estimand with the consideration of intercurrent events. Jump-to-reference (J2R) is one classically envisioned control-based scenario for treatment effect evaluation using the hypothetical strategy, where participants in the treatment group are assumed, after intercurrent events, to have the same disease progression as participants with identical covariates in the control group. We establish new estimators to assess the average treatment effect based on a proposed potential outcomes framework under J2R. Various identification formulas are constructed under the assumptions encoded by J2R, motivating estimators that rely on different parts of the observed data distribution. Moreover, we obtain a novel estimator inspired by the efficient influence function, with multiple robustness in the sense that it achieves $n^{1/2}$-consistency if any pair of the multiple nuisance functions is correctly specified, or if the nuisance functions converge at a rate no slower than $n^{-1/4}$ when flexible modeling approaches are used. The finite-sample performance of the proposed estimators is validated in simulation studies and an antidepressant clinical trial.
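To fix ideas, the J2R assumption lends itself to a simple regression-imputation estimator: fit an outcome model on the control arm and use it to impute post-intercurrent-event outcomes in the treatment arm. The sketch below is an illustrative toy version under strong simplifying assumptions (a single final outcome, a linear outcome model), not the paper's multiply robust estimator:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Toy data: X covariates, A treatment arm (1 = treated), Y final outcome,
# D = 1 if the subject experienced an intercurrent event (ICE).
rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 3))
A = rng.binomial(1, 0.5, n)
D = rng.binomial(1, 0.3, n)
Y = X @ [1.0, -0.5, 0.2] + 0.8 * A * (1 - D) + rng.normal(size=n)

# Under J2R, treated subjects with an ICE are assumed to follow the
# control-arm outcome distribution given covariates, so we impute their
# outcomes from a regression fitted on the control arm.
control_model = LinearRegression().fit(X[A == 0], Y[A == 0])

Y_j2r = Y.copy()
mask = (A == 1) & (D == 1)                 # treated subjects after an ICE
Y_j2r[mask] = control_model.predict(X[mask])

# Plug-in estimate of the average treatment effect under J2R.
ate_j2r = Y_j2r[A == 1].mean() - Y[A == 0].mean()
print(f"J2R ATE estimate: {ate_j2r:.3f}")
```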

Related content

Consider a situation where a new patient arrives in the Intensive Care Unit (ICU) and is monitored by multiple sensors. We wish to assess relevant unmeasured physiological variables (e.g., cardiac contractility, cardiac output, and vascular resistance) that have a strong effect on the patient's diagnosis and treatment. We do not have any information about this specific patient, but extensive offline information is available about previous patients, which may be only partially related to the present patient (a case of dataset shift). This information constitutes our prior knowledge and is both partial and approximate. The basic question is how to best use this prior knowledge, combined with online patient data, to assist in diagnosing the current patient most effectively. Our proposed approach consists of three stages: (i) use the abundant offline data to create both a non-causal and a causal estimator for the relevant unmeasured physiological variables; (ii) based on the constructed non-causal estimator and a set of measurements from a new group of patients, construct a causal filter that provides higher accuracy in predicting the hidden physiological variables for this new set of patients; (iii) for any new patient arriving in the ICU, use the constructed filter to predict the relevant internal variables. Overall, this strategy allows us to exploit the abundantly available offline data to enhance causal estimation for newly arriving patients. We demonstrate the effectiveness of this methodology on a (non-medical) real-world task in situations where the offline data is only partially related to the new observations. We provide a mathematical analysis of the merits of the approach in a linear setting of Kalman filtering and smoothing, demonstrating its utility.
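The paper's linear analysis is set in the classical Kalman filtering framework. For reference, a minimal textbook Kalman filter (standard machinery, not the paper's causal filter):

```python
import numpy as np

def kalman_filter(y, F, H, Q, R, x0, P0):
    """Minimal Kalman filter.

    x_t = F x_{t-1} + w_t,  w_t ~ N(0, Q)   (hidden physiological state)
    y_t = H x_t + v_t,      v_t ~ N(0, R)   (sensor measurement)
    """
    x, P = x0, P0
    estimates = []
    for yt in y:
        # Predict.
        x = F @ x
        P = F @ P @ F.T + Q
        # Update with the new measurement.
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
        x = x + K @ (yt - H @ x)
        P = (np.eye(len(x)) - K @ H) @ P
        estimates.append(x.copy())
    return np.array(estimates)

# Example: noisy observations of a scalar random walk.
rng = np.random.default_rng(0)
x_true = np.cumsum(rng.normal(size=100))
y = x_true[:, None] + rng.normal(0, 0.5, size=(100, 1))
xs = kalman_filter(y, F=np.eye(1), H=np.eye(1), Q=np.eye(1),
                   R=0.25 * np.eye(1), x0=np.zeros(1), P0=np.eye(1))
```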

Dynamic treatment regimens (DTRs), also known as treatment algorithms or adaptive interventions, play an increasingly important role in many health domains. DTRs are designed to address the unique and changing needs of individuals by delivering the type of treatment needed, when needed, while minimizing unnecessary treatment. Practically, a DTR is a sequence of decision rules that specify, for each of several points in time, how available information about the individual should be used in practice to decide which treatment (e.g., type or intensity) to deliver. The sequential multiple assignment randomized trial (SMART) is an experimental design that is widely used to empirically inform the development of DTRs. Existing sample size planning resources for SMART studies are suitable for continuous, binary, or survival outcomes. However, an important gap exists in sample size estimation methodology for planning SMARTs with longitudinal count outcomes. Further, in many health domains, count data are overdispersed; that is, their variance exceeds their mean. To close this gap, this manuscript describes the development of a Monte Carlo-based approach for sample size estimation. Simulation studies were employed to investigate various properties of this approach. Throughout, a SMART for engaging alcohol- and cocaine-dependent patients in treatment is used as motivation.
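The Monte Carlo idea is to repeatedly simulate trials at a candidate sample size and record how often the primary test rejects. The sketch below illustrates this for a single two-group comparison of negative binomial (overdispersed) counts; the simulation model, test, and parameter names are illustrative assumptions, and a real SMART simulation would generate sequential randomizations and longitudinal counts:

```python
import numpy as np
from scipy import stats

def mc_power(n, effect=0.3, mu=2.0, disp=1.5, n_sims=2000, alpha=0.05, seed=1):
    """Monte Carlo power for a two-group comparison of overdispersed counts.

    Counts are negative binomial with mean mu (control) and mu * exp(effect)
    (treatment); disp controls overdispersion (variance = mu + mu**2 / disp).
    """
    rng = np.random.default_rng(seed)
    mu1 = mu * np.exp(effect)
    rejections = 0
    for _ in range(n_sims):
        # numpy's negative_binomial is parameterized by (r, p) with
        # p = r / (r + mean), giving the stated mean and variance.
        y0 = rng.negative_binomial(disp, disp / (disp + mu), n)
        y1 = rng.negative_binomial(disp, disp / (disp + mu1), n)
        # Simple two-sample t-test as a stand-in for the count regression
        # a real SMART analysis would use.
        _, p = stats.ttest_ind(y0, y1)
        rejections += p < alpha
    return rejections / n_sims

# Increase n until the estimated power reaches the target (e.g., 0.8).
for n in (50, 100, 200):
    print(n, mc_power(n))
```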

This paper presents a new dynamic approach to experiment design in settings where, due to interference or other concerns, experimental units are coarse. `Region-split' experiments on online platforms are one example of such a setting. The cost, or regret, of experimentation is a natural concern here. Our new design, dubbed Synthetically Controlled Thompson Sampling (SCTS), minimizes the regret associated with experimentation at no practically meaningful loss to inferential ability. We provide theoretical guarantees characterizing the near-optimal regret of our approach and the error rates achieved by the corresponding treatment effect estimator. Experiments on synthetic and real-world data highlight the merits of our approach relative to both fixed and `switchback' designs common to such experimental settings.
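SCTS builds on Thompson sampling. For readers unfamiliar with that building block, here is a minimal Gaussian Thompson sampling loop over two coarse units (arms); the synthetic-control adjustment that distinguishes SCTS is deliberately omitted:

```python
import numpy as np

rng = np.random.default_rng(0)
true_means = [0.0, 0.2]        # control vs. treatment effect on a region metric
post_mean, post_var = [0.0, 0.0], [1.0, 1.0]   # Gaussian prior per arm
noise_var, counts = 1.0, [0, 0]

for t in range(1000):
    # Sample a mean from each arm's posterior and play the argmax.
    draws = [rng.normal(m, np.sqrt(v)) for m, v in zip(post_mean, post_var)]
    a = int(np.argmax(draws))
    reward = rng.normal(true_means[a], np.sqrt(noise_var))
    # Conjugate Gaussian posterior update for the chosen arm.
    precision = 1 / post_var[a] + 1 / noise_var
    post_mean[a] = (post_mean[a] / post_var[a] + reward / noise_var) / precision
    post_var[a] = 1 / precision
    counts[a] += 1

print(counts)   # most pulls should concentrate on the better arm
```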

Estimation of signal-to-noise ratios (SNRs) and noise variances in high-dimensional linear models has important applications in statistical inference, hyperparameter selection, and heritability estimation in genomics. One common approach in practice is maximum likelihood estimation under random effects models. This paper conducts a model misspecification analysis of the consistency of this method when the true model has only fixed effects. Assuming that the ratio between the number of samples and the number of features converges to a nonzero constant, our results provide conditions on the design matrices under which maximum likelihood estimation based on random effects models is asymptotically consistent for the SNR and the noise variance. Our model misspecification analysis also extends to high-dimensional linear models with feature groups, in which group SNR estimation has important applications such as tuning parameter selection for group ridge regression.
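Concretely, the random effects approach treats the coefficients as Gaussian, so that marginally $y \sim N(0, \sigma_g^2 XX^\top/p + \sigma_e^2 I)$, and maximizes the marginal likelihood. A toy sketch of this marginal MLE via eigendecomposition and a grid search (the parameterization and grid are illustrative choices, not the paper's procedure):

```python
import numpy as np

def snr_mle(X, y, grid=np.linspace(0.01, 5.0, 200)):
    """Maximize the marginal Gaussian likelihood of y ~ N(0, sg2*K + se2*I),
    K = X X^T / p, over a grid of variance pairs (crude but transparent)."""
    n, p = X.shape
    K = X @ X.T / p
    lam, U = np.linalg.eigh(K)           # K = U diag(lam) U^T
    z2 = (U.T @ y) ** 2                  # squared rotated data
    best, best_ll = None, -np.inf
    for sg2 in grid:
        for se2 in grid:
            d = sg2 * lam + se2
            ll = -0.5 * (np.log(d).sum() + (z2 / d).sum())
            if ll > best_ll:
                best_ll, best = ll, (sg2, se2)
    sg2, se2 = best
    return sg2 / se2, se2                # (SNR estimate, noise variance)

# Misspecified setting: the true coefficients are fixed, not random.
rng = np.random.default_rng(0)
n, p = 200, 400
X = rng.normal(size=(n, p))
beta = rng.normal(scale=np.sqrt(1.0 / p), size=p)
y = X @ beta + rng.normal(size=n)
print(snr_mle(X, y))                     # roughly (1.0, 1.0) expected
```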

In animal behavior studies, a common goal is to investigate the causal pathways among an exposure, an outcome, and a mediator that lies between them. Causal mediation analysis provides a principled approach for such studies. Although many applications involve longitudinal data, the existing causal mediation models are not directly applicable to settings where the mediators are measured on irregular time grids. In this paper, we propose a causal mediation model that simultaneously accommodates longitudinal mediators on arbitrary time grids and survival outcomes. We take a functional data analysis perspective and view longitudinal mediators as realizations of underlying smooth stochastic processes. We define causal estimands of direct and indirect effects accordingly and provide the corresponding identification assumptions. We employ a functional principal component analysis approach to estimate the mediator process and propose a Cox proportional hazards model for the survival outcome that flexibly adjusts for the mediator process. We then derive a g-computation formula to express the causal estimands in terms of the model coefficients. The proposed method is applied to a longitudinal data set from the Amboseli Baboon Research Project to investigate the causal relationships between early adversity, adult physiological stress responses, and survival among wild female baboons. We find that adversity experienced in early life has a significant direct effect on females' life expectancy and survival probability, but we find little evidence that these effects are mediated by markers of the stress response in adulthood. We further develop a sensitivity analysis method to assess the impact of potential violations of the key assumption of sequential ignorability.
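As a rough illustration of the pipeline (smooth irregular trajectories, extract functional scores, feed them into a survival model), here is a toy sketch using per-subject polynomial smoothing and ordinary PCA as a crude stand-in for FPCA, with the Cox fit via the lifelines package; all modeling choices here are simplifications of the paper's method:

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n, grid = 150, np.linspace(0, 1, 20)

curves = []
for i in range(n):
    t_obs = np.sort(rng.uniform(0, 1, rng.integers(4, 10)))   # irregular grid
    m_obs = np.sin(2 * np.pi * t_obs) + rng.normal(0, 0.3, t_obs.size)
    coef = np.polyfit(t_obs, m_obs, 2)        # per-subject smoothing
    curves.append(np.polyval(coef, grid))     # evaluate on a common grid

pcs = PCA(n_components=2).fit_transform(np.array(curves))

# Toy survival times depending on exposure and the mediator scores.
exposure = rng.binomial(1, 0.5, n)
hazard = np.exp(0.5 * exposure + 0.3 * pcs[:, 0])
T = rng.exponential(1 / hazard)
C = rng.exponential(2.0, n)                   # independent censoring
df = pd.DataFrame({"time": np.minimum(T, C), "event": (T <= C).astype(int),
                   "exposure": exposure, "pc1": pcs[:, 0], "pc2": pcs[:, 1]})

CoxPHFitter().fit(df, duration_col="time", event_col="event").print_summary()
```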

The synthetic control method offers a way to estimate the effect of an aggregate intervention using weighted averages of untreated units to approximate the counterfactual outcome that the treated unit(s) would have experienced in the absence of the intervention. This method is useful for program evaluation and causal inference in observational studies. We introduce the software package \texttt{scpi} for estimation and inference using synthetic controls, implemented in \texttt{Python}, \texttt{R}, and \texttt{Stata}. For point estimation or prediction of treatment effects, the package offers an array of (possibly penalized) approaches leveraging the latest optimization methods. For uncertainty quantification, the package offers the prediction interval methods introduced by Cattaneo, Feng and Titiunik (2021) and Cattaneo, Feng, Palomba and Titiunik (2022). The discussion contains numerical illustrations and a comparison with other synthetic control software.
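For intuition about what the package estimates, the classic synthetic control weights solve a simplex-constrained least-squares problem over pre-treatment outcomes. A minimal sketch of that problem (not \texttt{scpi}'s implementation, which offers penalized variants and formal inference):

```python
import numpy as np
from scipy.optimize import minimize

def sc_weights(Y0_pre, y1_pre):
    """Classic synthetic control weights: nonnegative, summing to one,
    chosen to match the treated unit's pre-treatment outcomes."""
    J = Y0_pre.shape[1]                      # number of control units
    obj = lambda w: np.sum((y1_pre - Y0_pre @ w) ** 2)
    cons = ({"type": "eq", "fun": lambda w: w.sum() - 1},)
    res = minimize(obj, np.full(J, 1 / J), bounds=[(0, 1)] * J,
                   constraints=cons, method="SLSQP")
    return res.x

rng = np.random.default_rng(0)
Y0_pre = rng.normal(size=(30, 10))           # 30 pre-periods x 10 controls
w_true = np.array([0.5, 0.5] + [0] * 8)
y1_pre = Y0_pre @ w_true + rng.normal(0, 0.05, 30)
print(np.round(sc_weights(Y0_pre, y1_pre), 2))   # recovers ~[0.5, 0.5, 0, ...]
```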

Large longitudinal studies provide a wealth of valuable information, especially in medical applications. To utilize their full potential, one must account for the correlation between intra-subject measurements taken at different times. For data in Euclidean space this can be done with hierarchical models, that is, models that consider intra-subject and between-subject variability in two different stages. Nevertheless, data from medical studies often take values in nonlinear manifolds. Here, as a first step, geodesic hierarchical models have been developed that generalize the linear ansatz by assuming that time-induced intra-subject variations occur along a generalized straight line in the manifold. However, this is often not the case (e.g., for periodic motion or processes with saturation). We propose a hierarchical model for manifold-valued data that extends this approach to include trends along higher-order curves, namely B\'ezier splines in the manifold. To this end, we present a principled way of comparing shape trends in terms of a functional-based Riemannian metric. Remarkably, this metric allows efficient, yet simple, computations by virtue of a variational time discretization requiring only the solution of regression problems. We validate our model on longitudinal data from the osteoarthritis initiative, including classification of disease progression.
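B\'ezier curves are built by iterated interpolation (the De Casteljau algorithm); on a manifold, each linear interpolation step is replaced by a point along a geodesic between control points, which is how such splines generalize beyond Euclidean space. A Euclidean sketch of the construction (illustrative, not the paper's manifold implementation):

```python
import numpy as np

def de_casteljau(control_points, t):
    """Evaluate a Bezier curve at t in [0, 1] by repeated interpolation.

    On a Riemannian manifold, each interpolation step below would be
    replaced by a point along the geodesic joining two control points.
    """
    pts = np.asarray(control_points, dtype=float)
    while len(pts) > 1:
        pts = (1 - t) * pts[:-1] + t * pts[1:]   # straight-line step in R^d
    return pts[0]

# A cubic Bezier segment in the plane.
cp = [(0, 0), (0, 1), (1, 1), (1, 0)]
curve = np.array([de_casteljau(cp, t) for t in np.linspace(0, 1, 50)])
```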

This paper studies the consistency and statistical inference of simulated Ising models in the high-dimensional regime. Our estimators are based on Markov chain Monte Carlo maximum likelihood estimation (MCMC-MLE) penalized by the elastic net. Under mild conditions that ensure a specific convergence rate of the MCMC method, the $\ell_{1}$ consistency of the elastic-net-penalized MCMC-MLE is proved. We further propose a decorrelated score test based on the decorrelated score function and prove the asymptotic normality of the score function, free of the influence of the many nuisance parameters, under an assumption that accelerates the convergence of the MCMC method. A one-step estimator for a single parameter of interest is proposed by linearizing the decorrelated score function and solving for its root, whereby its asymptotic normality and a confidence interval for the true value are established. Finally, we use different algorithms to control the false discovery rate (FDR) via traditional p-values and novel e-values.
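MCMC-MLE sidesteps the intractable Ising normalizing constant by approximating expectations with Monte Carlo samples, typically from a Gibbs sampler. A minimal Gibbs sweep for a zero-field Ising model (illustrative; the paper's estimator adds the elastic-net penalty on top of such samples):

```python
import numpy as np

def ising_gibbs_sweep(s, theta, rng):
    """One Gibbs sweep over the spins s in {-1, +1}^d of an Ising model
    with symmetric, zero-diagonal pairwise parameters theta. Samples like
    these drive MCMC-MLE: the intractable gradient term E_theta[s_j s_k]
    is replaced by a Monte Carlo average over Gibbs draws."""
    d = len(s)
    for j in range(d):
        field = theta[j] @ s - theta[j, j] * s[j]   # local field at spin j
        p_plus = 1 / (1 + np.exp(-2 * field))       # P(s_j = +1 | rest)
        s[j] = 1 if rng.random() < p_plus else -1
    return s

rng = np.random.default_rng(0)
d = 20
theta = 0.1 * np.triu(rng.normal(size=(d, d)), 1)
theta = theta + theta.T                             # symmetric couplings
s = rng.choice([-1, 1], d)
samples = np.array([ising_gibbs_sweep(s, theta, rng).copy()
                    for _ in range(500)])
```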

This paper considers identification and estimation of the causal effect of the time Z until a subject is treated on a survival outcome T. The treatment is not randomly assigned; T is randomly right-censored by a random variable C, and the time to treatment Z is right-censored by min(T, C). The endogeneity issue is addressed using an instrumental variable that explains Z and is independent of the error term of the model. We study identification in a fully nonparametric framework. We show that our specification generates an integral equation of which the regression function of interest is a solution, and we provide identification conditions that rely on this identification equation. For estimation purposes, we assume that the regression function follows a parametric model. We propose an estimation procedure and give conditions under which the estimator is asymptotically normal. The estimators exhibit good finite-sample properties in simulations. Our methodology is applied to find evidence supporting the efficacy of a therapy for burnout.

In this paper we introduce a covariance framework for the analysis of EEG and MEG data that takes into account observed temporal stationarity on small time scales and trial-to-trial variations. We formulate a model for the covariance matrix as a Kronecker product of three components that correspond to space, time, and epochs/trials, and we consider maximum likelihood estimation of the unknown parameter values. An iterative algorithm that finds approximations of the maximum likelihood estimates is proposed. We perform a simulation study to assess the performance of the estimator and investigate the influence of different assumptions about the covariance factors on the estimated covariance matrix and on its components. In addition, we illustrate our method on real EEG and MEG data sets. The proposed covariance model is applicable in a variety of cases where spontaneous EEG or MEG acts as a source of noise and realistic noise covariance estimates are needed for accurate dipole localization, such as in evoked activity studies, or where the properties of spontaneous EEG or MEG are themselves the topic of interest, such as in combined EEG/fMRI experiments in which the correlation between EEG and fMRI signals is investigated.
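Iterative algorithms for Kronecker covariance models typically alternate closed-form updates of the factors ("flip-flop" iterations). A two-factor (space x time) sketch of the classic flip-flop MLE is shown below; the paper's model adds a third epoch/trial factor, and note that Kronecker factors are identified only up to a scaling:

```python
import numpy as np

def flip_flop(X, n_iter=50):
    """MLE of a two-factor Kronecker covariance A (space) x B (time) for
    centered data X of shape (N, p, q), via Dutilleul-style flip-flop
    iterations that alternate closed-form updates of each factor."""
    N, p, q = X.shape
    A, B = np.eye(p), np.eye(q)
    for _ in range(n_iter):
        Binv = np.linalg.inv(B)
        A = sum(Xi @ Binv @ Xi.T for Xi in X) / (N * q)
        Ainv = np.linalg.inv(A)
        B = sum(Xi.T @ Ainv @ Xi for Xi in X) / (N * p)
    return A, B

# Simulated matrix-variate data with row covariance A and column covariance B.
rng = np.random.default_rng(0)
N, p, q = 200, 5, 8
A_true = np.diag(np.linspace(1, 3, p))
L = np.linalg.cholesky(0.5 * np.eye(q) + 0.5)     # constant-correlation B
X = np.array([np.linalg.cholesky(A_true) @ rng.normal(size=(p, q)) @ L.T
              for _ in range(N)])
A_hat, B_hat = flip_flop(X)
```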
