
Optimal treatment rules can improve health outcomes on average by assigning to each individual the treatment associated with the most desirable outcome. Because the data-generating mechanism is unknown, it is appealing to use flexible models to estimate these rules; however, such models often lead to complex and uninterpretable rules. In this article, we introduce an approach for estimating treatment rules that, within the same simple model family, achieve higher accuracy, higher value, and lower loss. We use a flexible model to estimate the optimal treatment rules and a simple model to derive interpretable treatment rules. We provide an extensible definition of interpretability and present a method that, given a class of simple models, can be used to select a preferred model. We conduct a simulation study to evaluate the performance of our approach relative to treatment rules obtained by fitting the same simple model directly to observed data. The results show that our approach has lower average loss, higher average outcome, and greater power in identifying individuals who can benefit from the treatment. We apply our approach to derive treatment rules for adjuvant chemotherapy in colon cancer patients using cancer registry data. The results show that our approach has the potential to improve treatment decisions.
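
The two-stage idea above can be illustrated with a small sketch. The code below is a simplified stand-in for the authors' procedure, assuming a gradient-boosting T-learner as the flexible model and a depth-2 decision tree as the simple, interpretable rule class; the simulated data and all variable names are hypothetical.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
n, p = 2000, 5
X = rng.normal(size=(n, p))
A = rng.binomial(1, 0.5, size=n)                       # randomized treatment
tau = np.where(X[:, 0] > 0.3, 1.0, -0.5)               # heterogeneous treatment effect
Y = X[:, 1] + A * tau + rng.normal(size=n)             # observed outcome

# Flexible stage: T-learner with gradient boosting estimates individual effects.
m1 = GradientBoostingRegressor().fit(X[A == 1], Y[A == 1])
m0 = GradientBoostingRegressor().fit(X[A == 0], Y[A == 0])
cate_hat = m1.predict(X) - m0.predict(X)

# Simple stage: fit a shallow, interpretable tree to the flexible rule's decisions.
recommended = (cate_hat > 0).astype(int)
rule = DecisionTreeClassifier(max_depth=2).fit(X, recommended)
print(export_text(rule, feature_names=[f"x{j}" for j in range(p)]))
```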

Related content

In this paper, we use a linear birth and death process with immigration to model infectious disease propagation when contamination stems from both person-to-person contact and contact with the environment. Our aim is to estimate the parameters of the process. The main originality and difficulty come from the observation scheme. Counts of the infected population are hidden; the only data available are periodically reported cumulative counts of newly removed (retired) individuals. Although very common in epidemiology, this observation scheme is mathematically challenging even for such a standard stochastic process. We first derive analytic expressions for the unknown parameters as functions of well-chosen discrete-time transition probabilities. Second, we extend and adapt the standard Baum-Welch algorithm to estimate these discrete-time transition probabilities in our hidden-data framework. The performance of our estimators is illustrated on both synthetic data and real data of typhoid fever in Mayotte.
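
To make the observation scheme concrete, the sketch below simulates a linear birth-death process with immigration and records only the periodic cumulative removal counts. It is illustrative only (it is not the authors' estimator), and the rates, horizon, and observation period are assumed values.

```python
import numpy as np

rng = np.random.default_rng(1)
birth, death, immigration = 0.30, 0.35, 0.05   # assumed per-unit-time rates
T_end, obs_step = 50.0, 1.0                    # horizon and observation period

t, infected, cum_removed = 0.0, 5, 0
next_obs, observations = obs_step, []
while t < T_end:
    rate = (birth + death) * infected + immigration
    t += rng.exponential(1.0 / rate)
    while next_obs <= min(t, T_end):           # report the cumulative count each period
        observations.append(cum_removed)
        next_obs += obs_step
    if t >= T_end:
        break
    u = rng.uniform(0.0, rate)
    if u < birth * infected:
        infected += 1                          # person-to-person transmission
    elif u < (birth + death) * infected:
        infected -= 1                          # removal ("retirement") of an infected case
        cum_removed += 1
    else:
        infected += 1                          # contamination from the environment

print("periodic cumulative removal counts:", observations[:10])
```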

In the conventional change detection (CD) pipeline, two manually registered and labeled remote sensing datasets serve as the input of the model for training and prediction. However, in realistic scenarios, data from different periods or sensors may fail to align because they use different coordinate systems. Geometric distortion caused by coordinate shifting remains a thorny issue for CD algorithms. In this paper, we propose a reusable self-supervised framework for handling bitemporal geometric distortion in CD tasks. The whole framework is composed of Pretext Representation Pre-training, Bitemporal Image Alignment, and Downstream Decoder Fine-Tuning. With only single-stage pre-training, the key components of the framework can be reused for assistance in the bitemporal image alignment, while simultaneously enhancing the performance of the CD decoder. Experimental results in two large-scale realistic scenarios demonstrate that our proposed method can alleviate the bitemporal geometric distortion in CD tasks.

An important task in health research is to characterize time-to-event outcomes such as disease onset or mortality in terms of a potentially high-dimensional set of risk factors. For example, prospective cohort studies of Alzheimer's disease typically enroll older adults for observation over several decades to assess the long-term impact of genetic and other factors on cognitive decline and mortality. The accelerated failure time model is particularly well-suited to such studies, structuring covariate effects as 'horizontal' changes to the survival quantiles that conceptually reflect shifts in the outcome distribution due to lifelong exposures. However, this modeling task is complicated by the enrollment of adults at differing ages and by intermittent follow-up visits that yield interval-censored outcome information. Moreover, genetic and clinical risk factors are not only high-dimensional, but characterized by underlying grouping structure, such as by function or gene location. Such grouped high-dimensional covariates require shrinkage methods that directly acknowledge this structure to facilitate variable selection and estimation. In this paper, we address these considerations directly by proposing a Bayesian accelerated failure time model with a group-structured lasso penalty, designed for left-truncated and interval-censored time-to-event data. We develop a custom Markov chain Monte Carlo sampler for efficient estimation, and investigate the impact of various methods of penalty tuning and thresholding for variable selection. We present a simulation study examining the performance of this method relative to models with an ordinary lasso penalty, and apply the proposed method to identify groups of predictive genetic and clinical risk factors for Alzheimer's disease in the Religious Orders Study and Memory and Aging Project (ROSMAP) prospective cohort studies of AD and dementia.
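
For concreteness, the following sketch generates data with the structure this model targets: log event times follow an accelerated failure time model with grouped covariate effects, subjects enroll at varying ages (left truncation), and events are only bracketed by study visits (interval censoring). The group labels, effect sizes, and visit schedule are illustrative assumptions, not values from ROSMAP.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 500
groups = {"geneA": [0, 1, 2], "geneB": [3, 4], "clinical": [5, 6]}   # grouping structure
p = 7
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[groups["geneA"]] = 0.4                       # only one group is truly predictive
T = np.exp(4.3 + X @ beta + 0.4 * rng.normal(size=n))   # AFT event ages (log-linear model)

entry = rng.uniform(65, 85, size=n)               # enrollment ages
keep = T > entry                                  # left truncation: event-free at entry
T, X, entry = T[keep], X[keep], entry[keep]

# Interval censoring: the event is only known to lie between two study visits.
left, right = np.empty(len(T)), np.empty(len(T))
for i, (e, t) in enumerate(zip(entry, T)):
    visits = np.arange(e, e + 20.0 + 1e-9, 2.0)   # biennial visits over 20 years
    later = visits[visits >= t]
    left[i] = visits[visits < t].max()            # last visit before the event
    right[i] = later.min() if later.size else np.inf   # right-censored after last visit
print("example (left, right) intervals:", list(zip(left[:3].round(1), right[:3].round(1))))
```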

The presence of intermediate confounders, also called recanting witnesses, is a fundamental challenge to the investigation of causal mechanisms in mediation analysis, preventing the identification of natural path-specific effects. Proposed alternative parameters (such as randomizational interventional effects) are problematic because they can be non-null even when there is no mediation for any individual in the population; i.e., they are not an average of underlying individual-level mechanisms. In this paper we develop a novel method for mediation analysis in settings with intermediate confounding, with guarantees that the causal parameters are summaries of the individual-level mechanisms of interest. The method is based on recently proposed ideas that view causality as the transfer of information, and thus replace recanting witnesses with draws from their conditional distribution, which we call "recanting twins". We show that, in the absence of intermediate confounding, recanting-twin effects recover natural path-specific effects. We present the assumptions required for identification of recanting-twin effects under a standard structural causal model, as well as the assumptions under which the recanting-twin identification formulas can be interpreted in the context of the recently proposed separable effects models. To estimate recanting-twin effects, we develop efficient semi-parametric estimators that allow the use of data-driven methods in the estimation of the nuisance parameters. We present numerical studies of the methods using synthetic data, as well as an application to evaluate the role of new-onset anxiety and depressive disorder in explaining the relationship between gabapentin/pregabalin prescription and incident opioid use disorder among Medicaid beneficiaries with chronic pain.

Confounding remains one of the major challenges to causal inference with observational data. This problem is paramount in medicine, where we would like to answer causal questions from large observational datasets like electronic health records (EHRs) and administrative claims. Modern medical data typically contain tens of thousands of covariates. Such a large set carries hope that many of the confounders are directly measured, and further hope that others are indirectly measured through their correlation with measured covariates. How can we exploit these large sets of covariates for causal inference? To help answer this question, this paper examines the performance of the large-scale propensity score (LSPS) approach on causal analysis of medical data. We demonstrate that LSPS may adjust for indirectly measured confounders by including tens of thousands of covariates that may be correlated with them. We present conditions under which LSPS removes bias due to indirectly measured confounders, and we show that LSPS may avoid bias when inadvertently adjusting for variables (like colliders) that otherwise can induce bias. We demonstrate the performance of LSPS with both simulated medical data and real medical data.
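
A stylized version of an LSPS-type analysis can be sketched as follows, assuming an L1-regularized logistic regression as the propensity model and inverse probability weighting for the outcome contrast; the simulated data, regularization strength, and weighting estimator are illustrative choices rather than the pipeline evaluated in the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n, p = 5000, 2000                                 # tens of thousands in practice; 2000 here
X = rng.binomial(1, 0.05, size=(n, p)).astype(float)
confounding = X[:, :20].sum(axis=1)               # a handful of (proxy) confounders
treat = rng.binomial(1, 1 / (1 + np.exp(-(-2.0 + 0.3 * confounding))))
y = rng.binomial(1, 1 / (1 + np.exp(-(-1.0 + 0.5 * treat + 0.2 * confounding))))

# Regularized propensity model fit over the full covariate set.
ps_model = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
ps = ps_model.fit(X, treat).predict_proba(X)[:, 1]

# Inverse probability weighting of the outcome contrast.
w = np.where(treat == 1, 1 / ps, 1 / (1 - ps))
risk1 = np.average(y[treat == 1], weights=w[treat == 1])
risk0 = np.average(y[treat == 0], weights=w[treat == 0])
print("IPW risk difference:", round(risk1 - risk0, 3))
```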

When studying the association between treatment and a clinical outcome, a parametric multivariable model of the conditional outcome expectation is often used to adjust for covariates. The treatment coefficient of the outcome model targets a conditional treatment effect. Model-based standardization is typically applied to average the model predictions over the target covariate distribution, and generate a covariate-adjusted estimate of the marginal treatment effect. The standard approach to model-based standardization involves maximum-likelihood estimation and use of the non-parametric bootstrap. We introduce a novel, general-purpose, model-based standardization method based on multiple imputation that is easily applicable when the outcome model is a generalized linear model. We term our proposed approach multiple imputation marginalization (MIM). MIM consists of two main stages: the generation of synthetic datasets and their analysis. MIM accommodates a Bayesian statistical framework, which naturally allows for the principled propagation of uncertainty, integrates the analysis into a probabilistic framework, and allows for the incorporation of prior evidence. We conduct a simulation study to benchmark the finite-sample performance of MIM in conjunction with a parametric outcome model. The simulations provide proof-of-principle in scenarios with binary outcomes, continuous-valued covariates, a logistic outcome model and the marginal log odds ratio as the target effect measure. When parametric modeling assumptions hold, MIM yields unbiased estimation in the target covariate distribution, valid coverage rates, and precision and efficiency similar to the standard approach to model-based standardization.
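
As a point of reference, the sketch below implements ordinary model-based standardization (the standard approach that MIM is benchmarked against, not MIM itself): a logistic outcome model is fit by maximum likelihood, predictions are made for every row of the target covariate distribution under both treatment assignments, and the averaged predictions yield the marginal log odds ratio. The simulated data and coefficients are assumed for illustration.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 4000
x = rng.normal(size=n)
a = rng.binomial(1, 0.5, size=n)
y = rng.binomial(1, 1 / (1 + np.exp(-(-0.5 + 1.0 * a + 0.8 * x))))

design = np.column_stack([np.ones(n), a, x])             # intercept, treatment, covariate
fit = sm.GLM(y, design, family=sm.families.Binomial()).fit()

design1 = np.column_stack([np.ones(n), np.ones(n), x])   # everyone treated
design0 = np.column_stack([np.ones(n), np.zeros(n), x])  # no one treated
p1, p0 = fit.predict(design1).mean(), fit.predict(design0).mean()
marginal_log_or = np.log(p1 / (1 - p1)) - np.log(p0 / (1 - p0))
print("marginal log odds ratio:", round(marginal_log_or, 3))
```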

Respiratory diseases represent one of the most significant economic burdens on healthcare systems worldwide. The variation in the increasing number of cases depends greatly on climatic seasonal effects, socioeconomic factors, and pollution. Therefore, understanding these variations and obtaining precise forecasts allows health authorities to make correct decisions regarding the allocation of limited economic and human resources. This study aims to model and forecast weekly hospitalizations due to respiratory conditions in seven regional hospitals in Costa Rica using four statistical learning techniques (Random Forest, XGBoost, Facebook's Prophet forecasting model, and an ensemble method combining the above methods), along with 22 climate change indices and aerosol optical depth as an indicator of pollution. Models are trained on data from 2000 to 2018 and evaluated on data from 2019 as testing data. Reliable predictions are obtained for each of the seven regional hospitals.
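
A minimal sketch of such a forecasting ensemble is given below, assuming weekly counts with simple lagged features for Random Forest and XGBoost and the raw series for Prophet, with the three forecasts averaged; the synthetic series, feature choices, and one-step-ahead evaluation are illustrative and do not reproduce the study's covariates or tuning.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from xgboost import XGBRegressor
from prophet import Prophet

rng = np.random.default_rng(4)
weeks = pd.date_range("2000-01-02", "2019-12-29", freq="W")
y = 100 + 20 * np.sin(2 * np.pi * np.arange(len(weeks)) / 52) + rng.normal(0, 5, len(weeks))
df = pd.DataFrame({"ds": weeks, "y": y})
for lag in (1, 2, 52):                                  # simple lagged features
    df[f"lag{lag}"] = df["y"].shift(lag)
df = df.dropna()

train, test = df[df.ds < "2019-01-01"], df[df.ds >= "2019-01-01"]
features = [c for c in df.columns if c.startswith("lag")]

rf = RandomForestRegressor(n_estimators=200).fit(train[features], train["y"])
xgb = XGBRegressor(n_estimators=200).fit(train[features], train["y"])
prophet_model = Prophet(weekly_seasonality=False, yearly_seasonality=True)
prophet_model.fit(train[["ds", "y"]])

pred_rf = rf.predict(test[features])
pred_xgb = xgb.predict(test[features])
pred_prophet = prophet_model.predict(test[["ds"]])["yhat"].to_numpy()
ensemble = (pred_rf + pred_xgb + pred_prophet) / 3      # simple average ensemble

mae = np.abs(ensemble - test["y"].to_numpy()).mean()
print("2019 test MAE:", round(mae, 2))
```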

For finite element approximations of transport phenomena, it is often necessary to apply a form of limiting to ensure that the discrete solution remains well-behaved and satisfies physical constraints. However, these limiting procedures are typically performed at discrete nodal locations, which is not sufficient to ensure the robustness of the scheme when the solution must be evaluated at arbitrary locations (e.g., for adaptive mesh refinement, remapping in arbitrary Lagrangian-Eulerian solvers, overset meshes, etc.). In this work, a novel limiting approach for discontinuous Galerkin methods is presented that ensures the solution is continuously bounds-preserving (i.e., across the entire solution polynomial) for any arbitrary choice of basis, approximation order, and mesh element type. Through a modified formulation for the constraint functionals, the proposed approach requires only the solution of a single spatial scalar minimization problem per element, for which a highly efficient numerical optimization procedure is presented. The efficacy of this approach is shown in numerical experiments by enforcing continuous constraints in high-order unstructured discontinuous Galerkin discretizations of hyperbolic conservation laws, ranging from scalar transport with maximum principle preserving constraints to compressible gas dynamics with positivity-preserving constraints.
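
The flavor of continuous (rather than nodal) bounds preservation can be illustrated in one dimension with a classical linear-scaling ("squeeze") limiter, used here as a simpler stand-in for the paper's modified constraint functionals: the continuous minimum of the element polynomial is found by a scalar minimization, and the high-order modes are scaled toward the element mean until the bound holds everywhere. The coefficients and the positivity bound are assumed for illustration.

```python
import numpy as np
from scipy.optimize import minimize_scalar

coeffs = np.array([0.4, 0.9, -0.8])            # Legendre modal coefficients on [-1, 1]
poly = np.polynomial.legendre.Legendre(coeffs)

u_mean = coeffs[0]                             # P0 mode equals the element average
res = minimize_scalar(poly, bounds=(-1.0, 1.0), method="bounded")
u_min = min(res.fun, poly(-1.0), poly(1.0))    # continuous minimum over the element

lower_bound = 0.0                              # positivity constraint; assumes u_mean > bound
theta = 1.0 if u_min >= lower_bound else (u_mean - lower_bound) / (u_mean - u_min)
limited = np.polynomial.legendre.Legendre(np.r_[coeffs[0], theta * coeffs[1:]])
check = min(limited(np.linspace(-1, 1, 1001)))
print("limiting factor theta =", round(theta, 3), "| min after limiting:", round(check, 6))
```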

Patient care may be improved by recommending treatments based on patient characteristics when there is treatment effect heterogeneity. Recently, there has been a great deal of attention focused on the estimation of optimal treatment rules that maximize expected outcomes. However, there has been comparatively less attention given to settings where the outcome is right-censored, especially with regard to the practical use of estimators. In this study, simulations were undertaken to assess the finite-sample performance of estimators for optimal treatment rules and estimators for the expected outcome under treatment rules. The simulations were motivated by the common setting in biomedical and public health research where the data is observational, survival times may be right-censored, and there is interest in estimating baseline treatment decisions to maximize survival probability. A variety of outcome regression and direct search estimation methods were compared for optimal treatment rule estimation across a range of simulation scenarios. Methods that flexibly model the outcome performed comparatively well, including in settings where the treatment rule was non-linear. R code to reproduce this study's results is available on GitHub.

Safety assessment of crash and conflict avoidance systems is important for both the automotive industry and other stakeholders. One type of system that needs such an assessment is a driver monitoring system (DMS) with some intervention (e.g., warning or nudging) when the driver looks off-road for too long. Although using computer simulation to assess safety systems is becoming increasingly common, it is not yet commonly used for systems that affect driver behavior, such as DMSs. Models that generate virtual crashes, taking crash-causation mechanisms into account, are needed to assess these systems. However, few such models exist, and those that do have not been thoroughly validated on real-world data. This study aims to address this research gap by validating a rear-end crash-causation model that is based on four crash-causation mechanisms related to driver behavior: a) off-road glances, b) too-short headway, c) not braking with the maximum deceleration possible, and d) sleepiness (not reacting before the crash). The pre-crash kinematics were obtained from the German GIDAS in-depth crash database. Challenges with the validation process were identified and addressed. Most notably, a process was developed to transform the generated crashes to mimic the crash severity distribution in GIDAS. This step was necessary because GIDAS does not include property-damage-only (PDO) crashes, while the generated crashes cover the full range of severities (including low-severity crashes, many of which are PDOs). Our results indicate that the proposed model is a reasonably good crash generator. We further demonstrated that the model is a valid method for assessing DMSs in virtual simulations; it captures the safety impact of shortening the longest off-road glances. As expected, cutting away long off-road glances substantially reduces both the number of crashes that occur and the average delta-v.
