
To draw real-world evidence about the comparative effectiveness of complex time-varying treatment regimens on patient survival, we develop a joint marginal structural proportional hazards model and novel weighting schemes in continuous time to account for time-varying confounding and censoring. Our methods formulate complex longitudinal treatments with multiple "start/stop" switches as recurrent events with discontinuous intervals of treatment eligibility. We derive the weights in continuous time to handle a complex longitudinal dataset on its own terms, without the need to discretize or artificially align the measurement times. We further propose using machine learning models designed for censored survival data with time-varying covariates and the kernel function estimator of the baseline intensity to efficiently estimate the continuous-time weights. Our simulations demonstrate that the proposed methods provide better bias reduction and nominal coverage probability when analyzing observational longitudinal survival data with irregularly spaced time intervals, compared to conventional methods that require aligned measurement time points. We apply the proposed methods to a large-scale COVID-19 dataset to estimate the causal effects of several COVID-19 treatment strategies on in-hospital mortality or ICU admission, and provide new insights relative to findings from randomized trials.
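The abstract mentions a kernel function estimator of the baseline intensity. A minimal sketch of one common construction, smoothing Nelson-Aalen increments with an Epanechnikov kernel, is shown below; the toy event times and risk-set sizes are hypothetical, and the paper's actual estimator may differ in its details.

```python
def kernel_hazard(event_times, at_risk_counts, t_grid, bandwidth):
    """Kernel-smoothed baseline intensity estimate: smooths the
    Nelson-Aalen increments 1/Y(t_i) at each observed event time t_i
    with an Epanechnikov kernel of the given bandwidth."""
    def epanechnikov(u):
        return 0.75 * (1 - u * u) if abs(u) < 1 else 0.0

    estimates = []
    for t in t_grid:
        s = 0.0
        for t_i, y_i in zip(event_times, at_risk_counts):
            if y_i > 0:
                s += epanechnikov((t - t_i) / bandwidth) / y_i
        estimates.append(s / bandwidth)
    return estimates

# Toy data: four event times, with the risk set shrinking by one each time.
events = [1.0, 2.0, 3.0, 4.0]
at_risk = [10, 9, 8, 7]
hazard = kernel_hazard(events, at_risk, t_grid=[2.5], bandwidth=2.0)
```

Because the estimate is a smoothed sum of jump increments, it is defined at every time point, which is what allows weights to be evaluated in continuous time rather than on a discretized grid.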


COVID-19 has likely been the most disruptive event at a global scale that the world has experienced since WWII. Our discipline had never experienced such a phenomenon: software engineers were abruptly forced to work from home. Nearly every developer started new working habits and organizational routines, while trying to stay mentally healthy and productive during the lockdowns. We are now starting to realize that some of these new habits and routines may stick with us in the future. Therefore, it is important to understand how we have worked from home so far. We investigated whether 15 psychological, social, and situational variables, such as quality of social contacts or loneliness, predict software engineers' well-being and productivity across a four-wave longitudinal study spanning over 14 months. Additionally, we tested whether there were changes in any of these variables across time. We found that developers' well-being and quality of social contacts improved between April 2020 and July 2021, while their emotional loneliness went down. Other variables, such as productivity and boredom, did not change. We further found that developers' stress measured in May 2020 negatively predicted their well-being 14 months later, even after controlling for many other variables. Finally, comparisons between women and men, as well as between developers residing in the UK and the USA, revealed no statistically significant differences but substantial similarities.
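The longitudinal prediction described above (early stress predicting later well-being) amounts to a regression slope. A minimal single-predictor least-squares sketch follows; the stress and well-being scores are hypothetical illustration data, not values from the study.

```python
def ols_slope(x, y):
    """Least-squares slope of y on x for a single predictor:
    slope = cov(x, y) / var(x)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var = sum((a - mx) ** 2 for a in x)
    return cov / var

# Hypothetical: stress scores in May 2020 and well-being 14 months later.
stress = [1, 2, 3, 4, 5]
well_being = [9, 7, 6, 4, 3]
slope = ols_slope(stress, well_being)  # negative: higher stress, lower well-being
```

The study controls for many other variables, so its reported coefficient comes from a multiple regression; the sketch shows only the sign and interpretation of the bivariate relation.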

We say that a continuous real-valued function $x$ admits the Hurst roughness exponent $H$ if the $p^{\text{th}}$ variation of $x$ converges to zero if $p>1/H$ and to infinity if $p<1/H$. For the sample paths of many stochastic processes, such as fractional Brownian motion, the Hurst roughness exponent exists and equals the standard Hurst parameter. In our main result, we provide a mild condition on the Faber--Schauder coefficients of $x$ under which the Hurst roughness exponent exists and is given as the limit of the classical Gladyshev estimates $\widehat H_n(x)$. This result can be viewed as a strong consistency result for the Gladyshev estimators in an entirely model-free setting, because no assumption whatsoever is made on the possible dynamics of the function $x$. Nonetheless, our proof is probabilistic and relies on a martingale that is hidden in the Faber--Schauder expansion of $x$. Since the Gladyshev estimators are not scale-invariant, we construct several scale-invariant estimators that are derived from the sequence $(\widehat H_n)_{n\in\mathbb{N}}$. We also discuss how a dynamic change in the Hurst roughness parameter of a time series can be detected. Our results are illustrated by means of high-frequency financial time series.
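One common form of the Gladyshev estimate uses the quadratic variation over dyadic partitions, $\widehat H_n = \tfrac12\bigl(1 - \tfrac1n \log_2 S_n\bigr)$ with $S_n$ the sum of squared increments over a grid of $2^n$ steps. The sketch below applies it to a simulated Brownian path (true Hurst parameter $1/2$); it is illustrative only and omits the scale-invariant refinements the paper constructs.

```python
import math
import random

def gladyshev_estimate(x, n):
    """Gladyshev-type estimate of the Hurst roughness exponent of a
    function x on [0, 1], from the dyadic quadratic variation
    S_n = sum of squared increments over 2**n steps."""
    N = 2 ** n
    s_n = sum((x(k / N) - x((k - 1) / N)) ** 2 for k in range(1, N + 1))
    return 0.5 * (1 - math.log2(s_n) / n)

# Sample a standard Brownian path on the dyadic grid of level n.
random.seed(0)
n = 14
N = 2 ** n
path = [0.0]
for _ in range(N):
    path.append(path[-1] + random.gauss(0, 1 / math.sqrt(N)))

h_hat = gladyshev_estimate(lambda t: path[round(t * N)], n)  # close to 0.5
```

For Brownian motion $S_n$ concentrates around the total variance, so $\log_2 S_n \approx 0$ and the estimate is close to $1/2$; rougher paths have larger dyadic quadratic variation and hence smaller estimates.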

The case-crossover design (Maclure, 1991) is widely used in epidemiology and other fields to study causal effects of transient treatments on acute outcomes. However, its validity and causal interpretation have only been justified under informal conditions. Here, we place the design in a formal counterfactual framework for the first time. Doing so helps to clarify its assumptions and interpretation. In particular, when the treatment effect is non-null, we identify a previously unnoticed bias arising from common causes of the outcome at different person-times. We analytically characterize the direction and size of this bias and demonstrate its potential importance with a simulation. We also use our derivation of the limit of the case-crossover estimator to analyze its sensitivity to treatment effect heterogeneity, a violation of one of the informal criteria for validity. The upshot of this work for practitioners is that, while the case-crossover design can be useful for testing the causal null hypothesis in the presence of baseline confounders, extra caution is warranted when using the case-crossover design for point estimation of causal effects.
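For orientation, the classical case-crossover point estimate with one referent time per case is the matched-pair odds ratio: discordant pairs exposed only at the event time divided by discordant pairs exposed only at the referent time. A minimal sketch with hypothetical exposure indicators:

```python
def case_crossover_or(exposure_at_event, exposure_at_referent):
    """Matched-pair (Mantel-Haenszel) odds-ratio estimate for a 1:1
    case-crossover design: count the two kinds of discordant pairs."""
    pairs = list(zip(exposure_at_event, exposure_at_referent))
    exposed_case_only = sum(1 for e, r in pairs if e and not r)
    exposed_ref_only = sum(1 for e, r in pairs if r and not e)
    return exposed_case_only / exposed_ref_only

# Ten cases: four exposed only at the event time, two only at the referent time.
event = [1, 1, 1, 1, 0, 0, 1, 1, 0, 0]
ref   = [0, 0, 0, 0, 1, 1, 1, 1, 0, 0]
or_hat = case_crossover_or(event, ref)  # 4 / 2 = 2.0
```

The article's point is precisely that the probability limit of this estimator can be biased away from the causal effect when the outcome at different person-times shares common causes, so the sketch should be read as the estimator under study, not as an unbiased effect estimate.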

In randomized experiments, the actual treatments received by some experimental units may differ from their treatment assignments. This non-compliance issue often occurs in clinical trials, social experiments, and the applications of randomized experiments in many other fields. Under certain assumptions, the average treatment effect for the compliers is identifiable and equal to the ratio of the intention-to-treat effects of the potential outcomes to that of the potential treatment received. To improve the estimation efficiency, we propose three model-assisted estimators for the complier average treatment effect in randomized experiments with a binary outcome. We study their asymptotic properties, compare their efficiencies with that of the Wald estimator, and propose the Neyman-type conservative variance estimators to facilitate valid inferences. Moreover, we extend our methods and theory to estimate the multiplicative complier average treatment effect. Our analysis is randomization-based, allowing the working models to be misspecified. Finally, we conduct simulation studies to illustrate the advantages of the model-assisted methods and apply these analysis methods in a randomized experiment to evaluate the effect of academic services or incentives on academic performance.
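The baseline estimator the abstract compares against, the Wald estimator, is the ratio of the two intention-to-treat effects. A self-contained sketch with a hypothetical toy trial (assignment `z`, treatment actually received `d`, binary outcome `y`):

```python
def wald_estimator(assignment, treated, outcome):
    """Wald (instrumental-variable) estimate of the complier average
    treatment effect: ITT effect on the outcome divided by the ITT
    effect on treatment received."""
    def mean(xs):
        return sum(xs) / len(xs)
    y1 = mean([y for z, y in zip(assignment, outcome) if z == 1])
    y0 = mean([y for z, y in zip(assignment, outcome) if z == 0])
    d1 = mean([d for z, d in zip(assignment, treated) if z == 1])
    d0 = mean([d for z, d in zip(assignment, treated) if z == 0])
    return (y1 - y0) / (d1 - d0)

# Hypothetical trial with non-compliance in both arms.
z = [1, 1, 1, 1, 0, 0, 0, 0]
d = [1, 1, 1, 0, 0, 0, 0, 1]
y = [1, 1, 0, 0, 0, 0, 0, 1]
cate = wald_estimator(z, d, y)  # (0.5 - 0.25) / (0.75 - 0.25) = 0.5
```

The model-assisted estimators proposed in the paper aim to improve on this ratio's efficiency by incorporating covariates through working models that are allowed to be misspecified.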

Individual Treatment Effect (ITE) prediction is an important area of research in machine learning which aims at explaining and estimating the causal impact of an action at the granular level. It represents a problem of growing interest in multiple sectors of application such as healthcare, online advertising, or socioeconomics. To foster research on this topic, we release a publicly available collection of 13.9 million samples collected from several randomized controlled trials, scaling up previously available datasets by a factor of 210. We provide details on the data collection and perform sanity checks to validate the use of this data for causal inference tasks. First, we formalize the task of uplift modeling (UM) that can be performed with this data, along with the relevant evaluation metrics. Then, we propose synthetic response surfaces and heterogeneous treatment assignment providing a general set-up for ITE prediction. Finally, we report experiments to validate key characteristics of the dataset, leveraging its size to evaluate and compare, with high statistical significance, a selection of baseline UM and ITE prediction methods.
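The basic quantity behind uplift modeling is the difference in response rates between randomized treatment and control groups. A minimal sketch with hypothetical data (the released dataset's actual schema and metrics are not reproduced here):

```python
def uplift(treatment, response):
    """Average uplift estimate: response rate under treatment minus
    response rate under control. Valid as a causal effect estimate
    only because treatment was randomized."""
    treated = [y for t, y in zip(treatment, response) if t == 1]
    control = [y for t, y in zip(treatment, response) if t == 0]
    return sum(treated) / len(treated) - sum(control) / len(control)

# Hypothetical randomized assignment t and binary response y.
t = [1, 1, 1, 1, 0, 0, 0, 0]
y = [1, 1, 1, 0, 1, 0, 0, 0]
u = uplift(t, y)  # 0.75 - 0.25 = 0.5
```

ITE prediction refines this global average to the individual level, estimating the same contrast conditional on each unit's features.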

Return-to-baseline is an important method to impute missing values or unobserved potential outcomes when certain hypothetical strategies are used to handle intercurrent events in clinical trials. Current return-to-baseline approaches seen in the literature and in practice inflate the variability of the "complete" dataset after imputation and lead to biased mean estimators when the probability of missingness depends on the observed baseline and/or postbaseline intermediate outcomes. In this article, we first provide a set of criteria a return-to-baseline imputation method should satisfy. Under this framework, we propose a novel return-to-baseline imputation method. Simulations show the completed data after the new imputation approach have the proper distribution, and the estimators based on the new imputation method outperform the traditional method in terms of both bias and variance, when missingness depends on the observed values. The new method can be implemented easily with the existing multiple imputation procedures in commonly used statistical packages.
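To fix ideas, the simplest (traditional) return-to-baseline rule carries the baseline value forward whenever the endpoint is missing. The sketch below shows only that naive rule on hypothetical data; the article's proposed method is a refinement designed to avoid exactly the bias and inflated variability this rule can induce.

```python
def return_to_baseline_impute(baseline, endpoint):
    """Naive return-to-baseline imputation: replace a missing endpoint
    (None) with the subject's own baseline value."""
    return [b if y is None else y for b, y in zip(baseline, endpoint)]

# Hypothetical baseline and endpoint values; subject 2's endpoint is missing.
base = [10.0, 12.0, 9.0]
end = [11.5, None, 8.0]
completed = return_to_baseline_impute(base, end)  # [11.5, 12.0, 8.0]
```

In a proper multiple-imputation implementation, the imputed value would instead be a draw that respects the joint distribution of baseline and endpoint, which is the behavior the article's criteria formalize.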

As of December 2020, the COVID-19 pandemic has infected over 75 million people, making it one of the deadliest pandemics in modern history. This study develops a novel compartmental epidemiological model specific to the SARS-CoV-2 virus and analyzes the effect of common preventative measures such as testing, quarantine, social distancing, and vaccination. By accounting for the most prevalent interventions that have been enacted to minimize the spread of the virus, the model establishes a foundation for future mathematical modeling of COVID-19 and other modern pandemics. Specifically, the model expands on the classic SIR model and introduces separate compartments for individuals who are in the incubation period, asymptomatic, tested-positive, quarantined, vaccinated, or deceased. It also accounts for variable infection, testing, and death rates. I first analyze the outbreak in Santa Clara County, California, and later generalize the findings. The results show that, although all preventative measures reduce the spread of COVID-19, quarantine and social distancing mandates reduce the infection rate and subsequently are the most effective policies, followed by vaccine distribution and, finally, public testing. Thus, governments should concentrate resources on enforcing quarantine and social distancing policies. In addition, I find mathematical proof that the relatively high asymptomatic rate and long incubation period are driving factors of COVID-19's rapid spread.
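To illustrate the mechanics of extending SIR with extra compartments, here is a much-reduced sketch with a single quarantine compartment (SIQR) integrated by Euler's method. The parameter values are illustrative and this is not the study's full model, which has many more compartments and time-varying rates.

```python
def simulate_siqr(beta, gamma, q_rate, days, dt=0.1, s0=0.99, i0=0.01):
    """Euler integration of a minimal SIQR model, in population
    fractions: susceptible S, freely-mixing infectious I, quarantined
    Q (no longer transmitting), recovered R."""
    s, i, q, r = s0, i0, 0.0, 0.0
    for _ in range(int(days / dt)):
        ds = -beta * s * i                      # new infections
        di = beta * s * i - (gamma + q_rate) * i  # recovery + quarantine outflow
        dq = q_rate * i - gamma * q             # quarantined until recovery
        dr = gamma * (i + q)
        s, i, q, r = s + dt * ds, i + dt * di, q + dt * dq, r + dt * dr
    return s, i, q, r

s, i, q, r = simulate_siqr(beta=0.5, gamma=0.2, q_rate=0.1, days=60)
```

Because every outflow from one compartment is an inflow to another, the four fractions always sum to one, which is a useful sanity check when adding compartments for incubation, testing, vaccination, or death.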

One of the major research questions regarding human microbiome studies is the feasibility of designing interventions that modulate the composition of the microbiome to promote health and cure disease. This requires extensive understanding of the modulating factors of the microbiome, such as dietary intake, as well as the relation between microbial composition and phenotypic outcomes, such as body mass index (BMI). Previous efforts have modeled these data separately, employing two-step approaches that can produce biased interpretations of the results. Here, we propose a Bayesian joint model that simultaneously identifies clinical covariates associated with microbial composition data and predicts a phenotypic response using information contained in the compositional data. Using spike-and-slab priors, our approach can handle high-dimensional compositional as well as clinical data. Additionally, we accommodate the compositional structure of the data via balances and overdispersion typically found in microbial samples. We apply our model to understand the relations between dietary intake, microbial samples, and BMI. In this analysis, we find numerous associations between microbial taxa and dietary factors that may lead to a microbiome that is generally more hospitable to the development of chronic diseases, such as obesity. Additionally, we demonstrate on simulated data how our method outperforms two-step approaches and also present a sensitivity analysis.
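The balances mentioned above are isometric log-ratio coordinates: each balance contrasts the geometric means of two groups of compositional parts. A minimal sketch on a hypothetical four-taxon composition (the grouping and values are made up for illustration):

```python
import math

def balance(parts_num, parts_den):
    """Isometric log-ratio balance between two groups of compositional
    parts: a scaled log of the ratio of their geometric means."""
    r, s = len(parts_num), len(parts_den)
    def gmean(xs):
        return math.exp(sum(math.log(x) for x in xs) / len(xs))
    return math.sqrt(r * s / (r + s)) * math.log(gmean(parts_num) / gmean(parts_den))

# Relative abundances of four taxa (summing to 1); balance of taxa {1,2} vs {3,4}.
comp = [0.4, 0.3, 0.2, 0.1]
b = balance(comp[:2], comp[2:])  # positive: the first group dominates
```

Working in balance coordinates lets a regression model treat the transformed values as unconstrained real numbers while still respecting the unit-sum constraint of the original composition.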

Modern ocean datasets are large, multi-dimensional, and inherently spatiotemporal. A common oceanographic analysis task is the comparison of such datasets along one or several dimensions of latitude, longitude, depth, and time, as well as across different data modalities. Here, we show that the Wasserstein distance, also known as earth mover's distance, provides a promising optimal transport metric for quantifying differences in ocean spatiotemporal data. The Wasserstein distance complements commonly used point-wise difference methods such as the root mean squared error, by quantifying deviations in terms of apparent displacements (in distance units of space or time) rather than magnitudes of a measured quantity. Using large-scale gridded remote sensing and ocean simulation data of Chlorophyll concentration, a proxy for phytoplankton biomass, in the North Pacific, we show that the Wasserstein distance enables meaningful low-dimensional embeddings of marine seasonal cycles, provides oceanographically relevant summaries of Chlorophyll depth profiles and captures hitherto overlooked trends in the temporal variability of Chlorophyll in a warming climate. We also illustrate how the optimal transport vectors underlying the Wasserstein distance calculation can serve as a novel interpretable visual aid in other exploratory ocean data analysis tasks, e.g., in tracking ocean province boundaries across space and time.
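In one dimension the Wasserstein-1 distance between two equal-size empirical samples has a closed form: the optimal transport plan matches order statistics, so the distance is the mean absolute difference of the sorted values. A minimal sketch with hypothetical values:

```python
def wasserstein_1d(xs, ys):
    """1-D Wasserstein-1 (earth mover's) distance between two
    equal-size empirical samples: mean absolute difference of sorted
    values, since optimal transport in 1-D matches order statistics."""
    assert len(xs) == len(ys)
    return sum(abs(a - b) for a, b in zip(sorted(xs), sorted(ys))) / len(xs)

# A signal shifted by one unit moves every unit of "mass" a distance of 1.
cycle_a = [1, 2, 3, 4]
cycle_b = [2, 3, 4, 5]
d = wasserstein_1d(cycle_a, cycle_b)  # 1.0
```

This makes concrete the contrast drawn in the abstract: a pure shift registers here as a displacement of 1 unit, whereas a point-wise metric like RMSE would report a magnitude error at every point even though the two signals have identical shape.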

Causality can be described in terms of a structural causal model (SCM) that carries information on the variables of interest and their mechanistic relations. For most processes of interest the underlying SCM will only be partially observable, thus causal inference tries to leverage any exposed information. Graph neural networks (GNN) as universal approximators on structured input pose a viable candidate for causal learning, suggesting a tighter integration with SCM. To this end we present a theoretical analysis from first principles that establishes a novel connection between GNN and SCM while providing an extended view on general neural-causal models. We then establish a new model class for GNN-based causal inference that is necessary and sufficient for causal effect identification. Our empirical illustration on simulations and standard benchmarks validates our theoretical proofs.
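As background on the SCM formalism, here is a toy two-variable linear SCM with an explicit do-intervention: the intervention replaces the mechanism for $X$ while leaving the mechanism for $Y$ intact. Everything here (the structural equations, the coefficient 2) is an illustrative assumption, not the paper's model.

```python
import random

def scm_sample(rng, do_x=None):
    """One draw from a toy linear SCM:  X := U_X,  Y := 2*X + U_Y.
    Passing do_x performs the intervention do(X = do_x): the mechanism
    for X is replaced (U_X ignored), the mechanism for Y is kept."""
    u_x, u_y = rng.gauss(0, 1), rng.gauss(0, 1)
    x = do_x if do_x is not None else u_x
    y = 2 * x + u_y
    return x, y

rng = random.Random(0)
ys = [scm_sample(rng, do_x=1.0)[1] for _ in range(20000)]
avg_effect = sum(ys) / len(ys)  # estimates E[Y | do(X=1)] = 2
```

The identification question the paper studies is when such interventional quantities can be computed from the partially observed SCM; the sketch simply makes the do-operator's mechanism-replacement semantics concrete.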
