In the era of fast-paced precision medicine, observational studies play a major role in properly evaluating new treatments in clinical practice. Yet, unobserved confounding can significantly compromise causal conclusions drawn from non-randomized data. We propose a novel strategy that leverages randomized trials to quantify unobserved confounding. First, we design a statistical test to detect unobserved confounding with strength above a given threshold. Then, we use the test to estimate an asymptotically valid lower bound on the unobserved confounding strength. We evaluate the power and validity of our statistical test on several synthetic and semi-synthetic datasets. Further, we show how our lower bound can correctly identify the absence and presence of unobserved confounding in a real-world setting.
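
As an illustration of the general idea, the sketch below contrasts an observational effect estimate with an RCT benchmark: the test rejects when their gap exceeds a given confounding-strength threshold under a normal approximation, and the lower bound is the largest threshold still rejected. The measure of confounding strength (an absolute bias), the function names, and the independence of the two estimates are illustrative assumptions rather than the authors' exact procedure.

```python
import numpy as np
from scipy import stats

def confounding_test(tau_obs, se_obs, tau_rct, se_rct, delta, alpha=0.05):
    """Reject H0: |bias| <= delta, where bias = tau_obs - tau_rct, at level alpha."""
    gap = abs(tau_obs - tau_rct)
    se_gap = np.sqrt(se_obs ** 2 + se_rct ** 2)   # independent estimates assumed
    z = (gap - delta) / se_gap
    p_value = 1.0 - stats.norm.cdf(z)
    return p_value < alpha, p_value

def confounding_lower_bound(tau_obs, se_obs, tau_rct, se_rct, alpha=0.05):
    """Largest delta the test still rejects: an asymptotic lower bound on the bias."""
    gap = abs(tau_obs - tau_rct)
    se_gap = np.sqrt(se_obs ** 2 + se_rct ** 2)
    return max(0.0, gap - stats.norm.ppf(1.0 - alpha) * se_gap)

# Example: observational estimate 0.9 (SE 0.1) vs. RCT benchmark 0.4 (SE 0.15).
print(confounding_test(0.9, 0.1, 0.4, 0.15, delta=0.2))
print(confounding_lower_bound(0.9, 0.1, 0.4, 0.15))
```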

Related Content

A popular method for variance reduction in observational causal inference is propensity-based trimming, the practice of removing units with extreme propensities from the sample. This practice has theoretical grounding when the data are homoscedastic and the propensity model is parametric (Yang and Ding, 2018; Crump et al., 2009), but in modern settings where heteroscedastic data are analyzed with non-parametric models, existing theory fails to support current practice. In this work, we address this challenge by developing new methods and theory for sample trimming. Our contributions are three-fold: first, we describe novel procedures for selecting which units to trim. Our procedures differ from previous work in that we trim not only units with small propensities, but also units with extreme conditional variances. Second, we give new theoretical guarantees for inference after trimming. In particular, we show how to perform inference on the trimmed subpopulation without requiring that our regressions converge at parametric rates. Instead, we make only fourth-root rate assumptions like those in the double machine learning literature. This result applies to conventional propensity-based trimming as well and thus may be of independent interest. Finally, we propose a bootstrap-based method for constructing simultaneously valid confidence intervals for multiple trimmed subpopulations, which are valuable for navigating the trade-off between sample size and variance reduction inherent in trimming. We validate our methods in simulation, on the 2007-2008 National Health and Nutrition Examination Survey, and on a semi-synthetic Medicare dataset, and find promising results in all settings.
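
A minimal sketch of the trimming idea, assuming cross-fitted estimates of the propensity and of the conditional outcome variances are already available: units are dropped when either the propensity or a variance-driven scale is extreme, and the effect on the retained subpopulation is estimated with an AIPW-style plug-in. The thresholds and the exact trimming rule are illustrative assumptions, not the procedures proposed in the paper.

```python
import numpy as np

def trim_indices(e_hat, var1_hat, var0_hat, prop_cut=0.05, var_quantile=0.99):
    """Return indices of units kept after trimming on estimated propensities
    AND on estimated conditional outcome variances (thresholds are illustrative)."""
    ok_prop = (e_hat > prop_cut) & (e_hat < 1.0 - prop_cut)
    # Variance-style scale that blows up for extreme propensities or variances.
    scale = var1_hat / e_hat + var0_hat / (1.0 - e_hat)
    ok_var = scale <= np.quantile(scale, var_quantile)
    return np.where(ok_prop & ok_var)[0]

def trimmed_aipw(y, t, e_hat, mu1_hat, mu0_hat, keep):
    """AIPW-style plug-in for the ATE on the trimmed subpopulation; nuisance
    estimates are assumed to come from cross-fitted (fourth-root-rate) learners."""
    y, t, e, m1, m0 = (a[keep] for a in (y, t, e_hat, mu1_hat, mu0_hat))
    psi = m1 - m0 + t * (y - m1) / e - (1 - t) * (y - m0) / (1 - e)
    return psi.mean()
```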

Faithfully summarizing the knowledge encoded by a deep neural network (DNN) into a few symbolic primitive patterns without losing much information represents a core challenge in explainable AI. To this end, Ren et al. (2023c) have derived a series of theorems to prove that the inference score of a DNN can be explained as a small set of interactions between input variables. However, the limited generalization power of these interactions still makes it hard to regard them as faithful primitive patterns encoded by the DNN. Therefore, given different DNNs trained for the same task, we develop a new method to extract interactions that are shared by these DNNs. Experiments show that the extracted interactions can better reflect common knowledge shared by different DNNs.
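
To make the setting concrete, the sketch below computes Harsanyi-style interaction effects I(S) = sum over T contained in S of (-1)^(|S|-|T|) v(T) for a handful of input variables and then keeps only coalitions that rank among the strongest for every model. This intersection-of-top-k rule is a simple stand-in for the extraction method described above, not the authors' algorithm, and all names are placeholders.

```python
from itertools import chain, combinations

def powerset(s):
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

def interactions(v, n_vars):
    """Harsanyi-style interaction effect for every coalition of input variables;
    v(T) is the model output when only the variables in T are present (others masked).
    Exhaustive enumeration, so only tractable for a small number of variables."""
    return {S: sum((-1) ** (len(S) - len(T)) * v(T) for T in powerset(S))
            for S in powerset(range(n_vars))}

def shared_interactions(models_v, n_vars, top_k=20):
    """Keep coalitions ranked in the top-k by |I(S)| for every model."""
    tops = []
    for v in models_v:
        I = interactions(v, n_vars)
        ranked = sorted(I, key=lambda S: abs(I[S]), reverse=True)[:top_k]
        tops.append(set(ranked))
    return set.intersection(*tops)

# Toy usage: two "models" given as set functions over 4 variables.
v1 = lambda T: len(T) ** 2 + (0 in T) * (1 in T) * 3.0
v2 = lambda T: len(T) ** 2 + (0 in T) * (1 in T) * 2.5
print(shared_interactions([v1, v2], n_vars=4, top_k=5))
```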

While methods for measuring and correcting differential performance in risk prediction models have proliferated in recent years, most existing techniques can only be used to assess fairness across relatively large subgroups. The purpose of algorithmic fairness efforts is often to redress discrimination against groups that are both marginalized and small, so this sample-size limitation often prevents existing techniques from accomplishing their main aim. We take a three-pronged approach to the problem of quantifying fairness with small subgroups. First, we propose new estimands built on the "counterfactual fairness" framework that leverage information across groups. Second, we estimate these quantities using a larger volume of data than existing techniques do. Finally, we propose a novel data-borrowing approach to incorporate "external data" that lacks outcomes and predictions but contains covariate and group membership information. This less stringent requirement on the external data broadens the range of possible external data sources. We demonstrate practical application of our estimators to a risk prediction model used by a major Midwestern health system during the COVID-19 pandemic.

Disaggregated evaluation is a central task in AI fairness assessment, with the goal of measuring an AI system's performance across different subgroups defined by combinations of demographic or other sensitive attributes. The standard approach is to stratify the evaluation data across subgroups and compute performance metrics separately for each group. However, even for moderately sized evaluation datasets, sample sizes quickly become small once intersectional subgroups are considered, which greatly limits the extent to which intersectional groups are included in many disaggregated evaluations. In this work, we introduce a structured regression approach to disaggregated evaluation that we demonstrate can yield reliable system performance estimates even for very small subgroups. We also provide corresponding inference strategies for constructing confidence intervals and explore how goodness-of-fit testing can yield insight into the structure of fairness-related harms experienced by intersectional groups. We evaluate our approach on two publicly available datasets and several variants of semi-synthetic data. The results show that our method is considerably more accurate than the standard approach, especially for small subgroups, and that goodness-of-fit testing helps identify the key factors that drive differences in performance.
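
A minimal sketch of the structured-regression idea, using a logistic model with main effects and an interaction on synthetic data; the column names, attributes, and model specification are hypothetical, and the authors' estimator and inference strategy may differ.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic evaluation data: one row per example, with per-example correctness
# and (hypothetical) sensitive attributes.
rng = np.random.default_rng(0)
n = 400
sex = rng.choice(["F", "M"], size=n)
age_grp = rng.choice(["young", "old"], size=n, p=[0.7, 0.3])
p_correct = 0.85 - 0.10 * (sex == "M") - 0.08 * (age_grp == "old")
df = pd.DataFrame({
    "correct": rng.binomial(1, p_correct),
    "sex": sex,
    "age_grp": age_grp,
})

# Structured specification: main effects plus a two-way interaction. Richer or
# sparser structures can be compared with goodness-of-fit tests.
model = smf.logit("correct ~ C(sex) * C(age_grp)", data=df).fit(disp=0)

# Model-based accuracy estimate for an intersectional subgroup, usable even when
# that cell is small in the raw data; model.conf_int() gives coefficient CIs.
cell = pd.DataFrame({"sex": ["F"], "age_grp": ["old"]})
print(model.predict(cell))
```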

Typical pipelines for model geometry generation in computational biomedicine stem from images, which are usually considered to be at rest, despite the object being in mechanical equilibrium under several forces. We refer to the stress-free geometry computation as the reference configuration problem, and in this work we extend such a formulation to the theory of fully nonlinear poroelastic media. The main steps are (i) writing the equations in terms of the reference porosity and (ii) defining a time-dependent problem whose steady-state solution is the reference porosity. This problem can be computationally challenging, as it can require several hundred iterations to converge, so we propose the use of Anderson acceleration to speed up the procedure. Our evidence shows that this strategy can reduce the number of iterations by up to 80%. In addition, we note that a primal formulation of the nonlinear mass conservation equations is not consistent due to the presence of second-order derivatives of the displacement, an issue we alleviate through adequate mixed formulations. All claims are validated through numerical simulations in both idealized and realistic scenarios.
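
For readers unfamiliar with Anderson acceleration, the sketch below shows a generic depth-m implementation for a fixed-point map x <- g(x), of the kind that could wrap the reference-porosity iteration; the toy map used in the example (cosine) and all parameter choices are placeholders.

```python
import numpy as np

def anderson_fixed_point(g, x0, m=5, tol=1e-10, max_iter=200):
    """Anderson-accelerated fixed-point iteration x <- g(x) with history depth m.
    Returns the approximate fixed point and the number of iterations used."""
    x = np.atleast_1d(np.asarray(x0, dtype=float))
    G, F = [], []                      # histories of g(x_k) and residuals f_k = g(x_k) - x_k
    for k in range(max_iter):
        gx = g(x)
        f = gx - x
        if np.linalg.norm(f) < tol:
            return x, k
        G.append(gx)
        F.append(f)
        if len(F) > m + 1:             # keep at most m residual differences
            G.pop(0); F.pop(0)
        if len(F) == 1:
            x = gx                     # plain Picard step until history builds up
        else:
            dF = np.column_stack([F[i + 1] - F[i] for i in range(len(F) - 1)])
            dG = np.column_stack([G[i + 1] - G[i] for i in range(len(G) - 1)])
            gamma, *_ = np.linalg.lstsq(dF, f, rcond=None)
            x = gx - dG @ gamma        # Anderson mixing of the stored iterates
    return x, max_iter

# Tiny usage example with a contractive map standing in for the porosity update:
sol, iters = anderson_fixed_point(lambda x: np.cos(x), np.array([1.0]))
print(sol, iters)
```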

Since the start of the operational use of ensemble prediction systems, ensemble-based probabilistic forecasting has become the most advanced approach in weather prediction. However, despite persistent development over the last three decades, ensemble forecasts still often suffer from a lack of calibration and may exhibit systematic bias, which calls for some form of statistical post-processing. Nowadays, one can choose from a large variety of post-processing approaches, where parametric methods provide full predictive distributions of the investigated weather quantity. Parameter estimation in these models is based on training data consisting of past forecast-observation pairs; thus post-processed forecasts are usually available only at those locations where training data are accessible. We propose a general clustering-based interpolation technique for extending calibrated predictive distributions from observation stations to any location in the ensemble domain where ensemble forecasts are at hand. Focusing on the ensemble model output statistics (EMOS) post-processing technique, in a case study based on wind speed ensemble forecasts of the European Centre for Medium-Range Weather Forecasts, we demonstrate the predictive performance of various versions of the suggested method and show its superiority over regionally estimated and interpolated EMOS models as well as over the raw ensemble forecasts.
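
The sketch below illustrates the clustering-based interpolation idea in its simplest form: cluster the stations in a feature space, average the station-level EMOS parameters within each cluster, and assign any forecast location the parameters of its nearest cluster. The features, the number of clusters, and the use of a plain within-cluster average are illustrative assumptions, not the exact variants studied in the paper.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Stand-in data: per-station features (coordinates, altitude, local climatology, ...)
# and EMOS parameters already fitted at those stations.
station_features = rng.random((120, 4))
station_emos_params = rng.random((120, 3))        # e.g. location/scale coefficients

# 1. Cluster the observation stations in feature space.
km = KMeans(n_clusters=8, n_init=10, random_state=0).fit(station_features)

# 2. Average the fitted EMOS parameters within each cluster.
cluster_params = np.vstack([
    station_emos_params[km.labels_ == c].mean(axis=0) for c in range(km.n_clusters)
])

# 3. Any location with ensemble forecasts inherits the parameters of the cluster
#    whose centroid is nearest in the same feature space.
grid_features = rng.random((1000, 4))
grid_params = cluster_params[km.predict(grid_features)]
```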

Epilepsy is a clinical neurological disorder characterized by recurrent and spontaneous seizures consisting of abnormal high-frequency electrical activity in the brain. In this condition, the transmembrane potential dynamics are characterized by rapid and sharp wavefronts traveling along the heterogeneous and anisotropic conduction pathways of the brain. This work employs the monodomain model, coupled with specific neuronal ionic models characterizing ion concentration dynamics, to mathematically describe brain tissue electrophysiology in grey and white matter at the organ scale. This multiscale model is discretized in space with the high-order discontinuous Galerkin method on polygonal and polyhedral grids (PolyDG) and advanced in time with a Crank-Nicolson scheme. This ensures, on the one hand, efficient and accurate simulations of the high-frequency electrical activity responsible for epileptic seizures and, on the other hand, keeps the computational costs reasonably low through a suitable combination of high-order approximations and agglomerated polytopal meshes. We numerically investigate synthetic test cases on a two-dimensional heterogeneous square domain discretized with a polygonal grid, and on a two-dimensional brainstem in a sagittal plane with an agglomerated polygonal grid that takes full advantage of the flexibility of the PolyDG approximation of the semidiscrete formulation. Finally, we provide a theoretical stability analysis and an a priori convergence analysis for a simplified mathematical problem.
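
As a schematic of the time discretization, the sketch below performs one Crank-Nicolson step for a semidiscrete system M du/dt = -A u + f(u), with the ionic term handled explicitly for simplicity; the matrices, the toy reaction term, and the semi-implicit treatment are stand-ins and not the paper's PolyDG discretization.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def crank_nicolson_step(M, A, u, f, dt):
    """One step u^n -> u^{n+1} for M du/dt = -A u + f(u), with the diffusion
    treated by Crank-Nicolson and the (nonlinear) ionic term f taken at u^n."""
    lhs = (M + 0.5 * dt * A).tocsc()
    rhs = (M - 0.5 * dt * A) @ u + dt * f(u)
    return spla.spsolve(lhs, rhs)

# Toy usage: a 1D Laplacian stands in for the DG stiffness matrix and a cubic
# FitzHugh-Nagumo-like term stands in for the ionic current.
n = 200
A = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n)) * n ** 2
M = sp.identity(n)
u = np.exp(-100 * (np.linspace(0, 1, n) - 0.5) ** 2)
u = crank_nicolson_step(M, A, u, lambda v: -v * (v - 0.1) * (v - 1.0), dt=1e-4)
```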

We present a pipeline for unbiased and robust multimodal registration of neuroimaging modalities with minimal pre-processing. While typical multimodal studies need to use multiple independent processing pipelines, with diverse options and hyperparameters, we propose a single, structured framework to jointly process different image modalities. The use of state-of-the-art learning-based techniques enables fast inference, which makes the presented method suitable for large-scale and/or multi-cohort datasets with varying numbers of modalities per session. The pipeline currently works with structural MRI, resting-state fMRI, and amyloid PET images. We show the predictive power of the derived biomarkers in a case-control study and study the cross-modal relationship between different image modalities. The code can be found at https://github.com/acasamitjana/JUMP.

Swarms of autonomous interactive drones, with the support of recharging technology, can provide compelling sensing capabilities in Smart Cities, such as traffic monitoring and disaster response. Existing approaches, including distributed optimization and deep reinforcement learning (DRL), aim to coordinate drones to achieve cost-effective, high-quality navigation, sensing, and charging. However, they face grand challenges: short-term optimization is not effective in dynamic environments with unanticipated changes, while long-term learning lacks scalability, resilience, and flexibility. To bridge this gap, this paper introduces a new progressive approach that combines short-term plan generation and selection based on distributed optimization with DRL-based long-term strategic scheduling of flying direction. Extensive experimentation with datasets generated from realistic urban mobility underscores the outstanding performance of the proposed solution compared to the state of the art. We also provide compelling new insights about the role of drone density in different sensing missions, the energy safety of drone operations, and how to prioritize investments in charging infrastructure at key locations.

Time to an event of interest over a lifetime is a central measure of the clinical benefit of an intervention used in a health technology assessment (HTA). Within the same trial, multiple end-points may also be considered, for example, overall and progression-free survival for different drugs in oncology studies. A common challenge arises when an intervention is effective only for some proportion of the population, and this subgroup is not clinically identifiable. Therefore, latent group membership, as well as separate survival models for the identified groups, needs to be estimated. However, follow-up in trials may be relatively short, leading to substantial censoring. We present a general Bayesian hierarchical framework that can handle this complexity by exploiting the similarity of cure fractions between end-points, accounting for the correlation between them, and improving the extrapolation beyond the observed data. Assuming exchangeability between the cure fractions facilitates the borrowing of information between end-points. We show the benefits of using our approach with a motivating example, the CheckMate 067 phase 3 trial, consisting of patients with metastatic melanoma treated with first-line therapy.
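
A simplified sketch of the modelling idea: a mixture cure model per end-point whose cure fractions share an exchangeable normal prior on the logit scale, written as a log-posterior that could be passed to a generic MCMC sampler. The exponential latency distributions, the priors, and the parameter layout are illustrative simplifications rather than the paper's model.

```python
import numpy as np
from scipy.special import expit
from scipy.stats import norm, expon

def log_posterior(params, data):
    """params: 1-D array [logit_pi_1..E, log_rate_1..E, mu, log_sigma];
    data: list over end-points of (t, d) arrays, d = 1 for an event, 0 for censoring."""
    E = len(data)
    logit_pi = params[:E]
    rate = np.exp(params[E:2 * E])
    mu, log_sigma = params[2 * E], params[2 * E + 1]
    sigma = np.exp(log_sigma)

    lp = norm.logpdf(mu, 0.0, 2.0) + norm.logpdf(log_sigma, -1.0, 1.0)  # hyperpriors
    lp += norm.logpdf(logit_pi, mu, sigma).sum()        # exchangeable cure fractions
    lp += norm.logpdf(np.log(rate), 0.0, 2.0).sum()     # priors on latency rates

    for e, (t, d) in enumerate(data):
        pi = expit(logit_pi[e])                         # cured (long-term survivor) fraction
        f = expon.pdf(t, scale=1.0 / rate[e])           # latency density for the uncured
        S = expon.sf(t, scale=1.0 / rate[e])            # latency survival for the uncured
        lik = np.where(d == 1, (1.0 - pi) * f, pi + (1.0 - pi) * S)
        lp += np.log(lik).sum()
    return lp
```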
