
While randomized controlled trials (RCTs) are the gold standard for estimating treatment effects in medical research, real-world data are increasingly used in drug development. One such use case is the construction of external control arms for evaluating efficacy in single-arm trials, particularly where randomization is either infeasible or unethical. However, it is well known that treated patients in non-randomized studies may not be comparable to control patients, on either measured or unmeasured variables, and that the underlying population differences between the two groups may result in biased treatment effect estimates as well as increased variability in estimation. To address these challenges for analyses of time-to-event outcomes, we developed a meta-analytic framework that uses historical reference studies to adjust a log hazard ratio estimate in a new external control study for its additional bias and variability. The set of historical studies is formed by constructing external control arms for historical RCTs, and a meta-analysis compares the trial controls to the external control arms. Importantly, a prospective external control study can be performed independently of the meta-analysis using standard causal inference techniques for observational data. We illustrate our approach with a simulation study and an empirical example based on reference studies for advanced non-small cell lung cancer. In our empirical analysis, external control patients had lower survival than trial controls (hazard ratio: 0.907), but our methodology corrects for this bias. An implementation of our approach is available in the R package ecmeta.
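The adjustment idea can be sketched with a standard random-effects meta-analysis of the historical trial-control vs. external-control comparisons. This is a minimal illustration, not the ecmeta implementation, and all log hazard ratios and standard errors below are invented:

```python
import numpy as np

# Hypothetical log hazard ratios (trial control vs. external control) from
# five historical reference studies, with their standard errors. Values are
# illustrative only, not taken from the paper.
log_hr_ref = np.array([-0.10, -0.05, -0.12, -0.08, -0.02])
se_ref = np.array([0.06, 0.05, 0.07, 0.06, 0.05])

# DerSimonian-Laird style estimates of the mean bias and the
# between-study variance of the bias.
w = 1.0 / se_ref**2
mu_bias = np.sum(w * log_hr_ref) / np.sum(w)          # pooled bias estimate
q = np.sum(w * (log_hr_ref - mu_bias) ** 2)
tau2 = max(0.0, (q - (len(w) - 1)) /
           (np.sum(w) - np.sum(w**2) / np.sum(w)))    # between-study variance

# Adjust a new external-control study's log HR for the estimated bias and
# inflate its standard error for the extra between-study variability.
log_hr_new, se_new = -0.20, 0.08
log_hr_adj = log_hr_new - mu_bias
se_adj = np.sqrt(se_new**2 + tau2)
```

The adjusted estimate shifts by the pooled historical bias, and its standard error can only grow, reflecting the added uncertainty from using an external control arm.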


Representational similarity analysis (RSA) is a multivariate technique to investigate cortical representations of objects or constructs. While avoiding ill-posed matrix inversions that plague multivariate approaches in the presence of many outcome variables, it suffers from the confound arising from the non-orthogonality of the design matrix. Here, a partial correlation approach will be explored to adjust for this source of bias by partialling out this confound in the context of the searchlight method for functional imaging datasets. A formal analysis will show the existence of a dependency of this confound on the temporal correlation model of the sequential observations, motivating a data-driven approach that avoids the problem of misspecification of this model. However, where the autocorrelation locally diverges from its volume estimate, bias may be difficult to control for exactly, given the difficulties of estimating the precise form of the confound at each voxel. Application to real data shows the effectiveness of the partial correlation approach, suggesting the impact of local bias to be minor. However, where the control for bias locally fails, possible spurious associations with the similarity matrix of the stimuli may emerge. This limitation may be intrinsic to RSA applied to non-orthogonal designs. The software implementing the approach is made publicly available (//github.com/roberto-viviani/rsa-rsm.git).
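The partial correlation adjustment can be sketched on vectorized representational dissimilarity matrices (RDMs). This is a toy simulation with invented RDMs, not the searchlight pipeline from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 12  # number of stimuli (illustrative)

def sym(m):
    """Make a random matrix a plausible symmetric dissimilarity matrix."""
    m = np.abs(m)
    return (m + m.T) / 2

def upper(m):
    """Vectorize the upper triangle of an RDM."""
    i, j = np.triu_indices_from(m, k=1)
    return m[i, j]

model = sym(rng.normal(size=(n, n)))     # model RDM of interest
confound = sym(rng.normal(size=(n, n)))  # design non-orthogonality confound
# Simulated "neural" RDM strongly contaminated by the confound.
neural = model + 2.0 * confound + 0.3 * sym(rng.normal(size=(n, n)))

x, y, z = upper(neural), upper(model), upper(confound)

def r(a, b):
    return np.corrcoef(a, b)[0, 1]

r_xy, r_xz, r_yz = r(x, y), r(x, z), r(y, z)
# Partial correlation of neural and model RDMs, controlling for the confound.
r_partial = (r_xy - r_xz * r_yz) / np.sqrt((1 - r_xz**2) * (1 - r_yz**2))
```

Because the confound RDM is partialled out, `r_partial` recovers the neural-model association despite the heavy contamination that biases the raw correlation.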

Current research standards in robotics demand general approaches to the development of robot controllers. In the assistive robotics domain, human-machine interaction plays a substantial role; in particular, humans generate intents that affect the robot control system. This article presents an approach for creating control systems for assistive robots that react to user intents delivered by voice commands, buttons, or an operator console. The approach was applied to a real system consisting of a customised TIAGo robot and additional hardware components. Exemplary experiments performed on the platform illustrate the motivation for diversifying human-machine interfaces in assistive robots.
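The core pattern, normalizing intents from heterogeneous interfaces before dispatching them to controller behaviours, can be sketched as follows. All names here are hypothetical, not taken from the TIAGo system described in the article:

```python
from dataclasses import dataclass

@dataclass
class Intent:
    """An intent normalized from any interface: voice, button, or console."""
    name: str
    source: str  # "voice" | "button" | "console"

# Hypothetical behaviour registry; real controllers would trigger robot skills.
BEHAVIOURS = {
    "bring_water": lambda: "navigating to kitchen",
    "stop": lambda: "halting all motion",
}

def handle(intent):
    """Dispatch an intent to a behaviour, independent of its source."""
    action = BEHAVIOURS.get(intent.name)
    return action() if action else f"unknown intent: {intent.name}"

result = handle(Intent("stop", source="voice"))
```

Because dispatch depends only on the normalized intent, adding a new interface (e.g. a gesture recognizer) requires no change to the behaviours themselves.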

Patients regularly continue assessment or treatment at a facility other than the one where they began, receiving their previous imaging studies on a CD-ROM and requiring clinical staff at the new hospital to import these studies into the local database. However, between different facilities, standards for nomenclature, contents, or even medical procedures may vary, often requiring human intervention to accurately classify the received studies in the context of the recipient hospital's standards. In this study, the authors present MOMO (MOdality Mapping and Orchestration), a deep learning-based approach to automate this mapping process utilizing metadata substring matching and a neural network ensemble, which is trained to recognize the 76 most common imaging studies across seven different modalities. A retrospective study is performed to measure the accuracy that this algorithm can provide. To this end, a set of 11,934 imaging series with existing labels was retrieved from the local hospital's PACS database to train the neural networks. A set of 843 completely anonymized external studies was hand-labeled to assess the performance of our algorithm. Additionally, an ablation study was performed to measure the performance impact of the network ensemble in the algorithm, and a comparative performance test with a commercial product was conducted. In comparison to a commercial product (96.20% predictive power, 82.86% accuracy, 1.36% minor errors), a neural network ensemble alone performs the classification task with less accuracy (99.05% predictive power, 72.69% accuracy, 10.3% minor errors). However, MOMO outperforms either by a large margin in accuracy and with increased predictive power (99.29% predictive power, 92.71% accuracy, 2.63% minor errors).
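The two-stage design, metadata substring matching first and the network ensemble as fallback, can be sketched as follows. The study-name dictionary, labels, and hard-coded "network predictions" are illustrative stand-ins, not MOMO's actual vocabulary or models:

```python
from collections import Counter

# Hypothetical curated dictionary mapping description substrings to labels.
KNOWN_STUDY_NAMES = {"thorax ct": "CT_CHEST", "schaedel mrt": "MR_HEAD"}

def ensemble_vote(predictions):
    """Majority vote over per-network labels (MOMO uses trained CNNs here)."""
    return Counter(predictions).most_common(1)[0][0]

def classify_series(description, network_predictions):
    # Stage 1: metadata substring matching against the curated dictionary.
    desc = description.lower()
    for substring, label in KNOWN_STUDY_NAMES.items():
        if substring in desc:
            return label
    # Stage 2: fall back to the neural network ensemble.
    return ensemble_vote(network_predictions)

label = classify_series("Thorax CT nativ", ["MR_HEAD", "CT_CHEST", "CT_CHEST"])
```

The metadata stage handles unambiguous descriptions cheaply; the ensemble covers series whose metadata does not match any known pattern, which is where the combined system gains its accuracy over either component alone.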

We investigate the optimal design of experimental studies that have pre-treatment outcome data available. The average treatment effect is estimated as the difference between the weighted average outcomes of the treated and control units. A number of commonly used approaches fit this formulation, including the difference-in-means estimator and a variety of synthetic-control techniques. We propose several methods for choosing the set of treated units in conjunction with the weights. Observing the NP-hardness of the problem, we introduce a mixed-integer programming formulation which selects both the treatment and control sets and unit weightings. We prove that these proposed approaches lead to qualitatively different experimental units being selected for treatment. We use simulations based on publicly available data from the US Bureau of Labor Statistics that show improvements in terms of mean squared error and statistical power when compared to simple and commonly used alternatives such as randomized trials.
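The weighted-average formulation can be illustrated with a toy exhaustive search in place of the paper's mixed-integer program; the panel below is simulated, not Bureau of Labor Statistics data:

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)
n_units, n_pre = 8, 10                 # units and pre-treatment periods (toy)
Y = rng.normal(size=(n_units, n_pre))  # simulated pre-treatment outcome panel

def pretreatment_fit(treated):
    """Least-squares weights on control units matching the mean treated path."""
    control = [i for i in range(n_units) if i not in treated]
    target = Y[list(treated)].mean(axis=0)
    w, *_ = np.linalg.lstsq(Y[control].T, target, rcond=None)
    resid = Y[control].T @ w - target
    return resid @ resid, w

# Exhaustive search over treated sets of size 2; the paper replaces this
# with a mixed-integer program that scales to realistic problem sizes.
best = min(itertools.combinations(range(n_units), 2),
           key=lambda s: pretreatment_fit(s)[0])
best_err, best_w = pretreatment_fit(best)
```

Choosing the treated set and the control weights jointly, rather than randomizing the treated set, is exactly what makes the problem combinatorial and motivates the MIP formulation.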

With the advancement of affordable self-driving vehicles using complicated nonlinear optimization but limited computation resources, computation time becomes a matter of concern. Other factors such as actuator dynamics and actuator command processing cost also unavoidably cause delays. In high-speed scenarios, these delays are critical to the safety of a vehicle. Recent works consider these delays individually, but none unifies them all in the context of autonomous driving. Moreover, recent works inappropriately consider computation time as a constant or a large upper bound, which makes the control either less responsive or over-conservative. To deal with all these delays, we present a unified framework by 1) modeling actuation dynamics, 2) using robust tube model predictive control, 3) using a novel adaptive Kalman filter without assuming a known process model and noise covariance, which makes the controller safe while minimizing conservativeness. On one hand, our approach can serve as a standalone controller; on the other hand, it provides a safety guard for a high-level controller that assumes no delay. This can be used to compensate for the sim-to-real gap when deploying a black-box learning-enabled controller trained in a simplistic environment without considering delays for practical vehicle systems.
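The flavor of innovation-based noise adaptation can be shown with a scalar filter that estimates an unknown measurement-noise variance online. This is a deliberately simplified stand-in for the paper's multivariate filter inside the tube MPC framework, with invented parameters:

```python
import numpy as np

rng = np.random.default_rng(2)

# Scalar random-walk state observed with unknown measurement noise.
true_R, Q = 0.5, 0.01
x_true, z = 0.0, []
for _ in range(500):
    x_true += rng.normal(scale=np.sqrt(Q))
    z.append(x_true + rng.normal(scale=np.sqrt(true_R)))

x_hat, P, R_hat = 0.0, 1.0, 1.0
innovations = []
for zk in z:
    P += Q                          # predict (state transition is identity)
    nu = zk - x_hat                 # innovation
    innovations.append(nu)
    # Adapt R from a window of recent innovations, using E[nu^2] = P + R.
    R_hat = max(1e-4, np.mean(np.square(innovations[-50:])) - P)
    K = P / (P + R_hat)             # Kalman gain
    x_hat += K * nu                 # update
    P *= 1 - K
```

Because `R_hat` tracks the innovation statistics rather than a fixed assumed covariance, the filter avoids the over-conservative tuning the abstract criticizes in constant-upper-bound approaches.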

Defining multivariate generalizations of the classical univariate ranks has been a long-standing open problem in statistics. Optimal transport has been shown to offer a solution by transporting data points to a grid approximating a reference measure (Chernozhukov et al., 2017; Hallin, 2017; Hallin et al., 2021a). We take up this new perspective to develop and study multivariate analogues of popular correlation measures including the sign covariance, Kendall's tau and Spearman's rho. Our tests are genuinely distribution-free, hence valid irrespective of the actual (absolutely continuous) distributions of the observations. We present asymptotic distribution theory for these new statistics, providing asymptotic approximations to critical values to be used for testing independence as well as an analysis of power of the resulting tests. Interestingly, we are able to establish a multivariate elliptical Chernoff-Savage property, which guarantees that, under ellipticity, our nonparametric tests of independence when compared to Gaussian procedures enjoy an asymptotic relative efficiency of one or larger. Hence, the nonparametric tests constitute a safe replacement for procedures based on multivariate Gaussianity.
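The transport step can be sketched directly: the empirical OT map that pairs sample points with grid points is a minimum-cost assignment, and the assigned grid points serve as the multivariate ranks. A minimal sketch with a small uniform grid on the unit square (not the authors' code):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(3)
n = 9
X = rng.normal(size=(n, 2))     # sample from an absolutely continuous law

# Regular 3x3 grid approximating the uniform reference measure on [0,1]^2.
g = (np.arange(1, 4) - 0.5) / 3
grid = np.array([[a, b] for a in g for b in g])

# Empirical OT map: the assignment of points to grid cells that minimizes
# the total squared Euclidean distance.
cost = ((X[:, None, :] - grid[None, :, :]) ** 2).sum(axis=2)
rows, cols = linear_sum_assignment(cost)
ranks = np.empty_like(grid)
ranks[rows] = grid[cols]        # ranks[i] is the multivariate rank of X[i]
```

The resulting ranks are a permutation of the fixed grid regardless of the sampling distribution, which is precisely what makes statistics built on them distribution-free.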

Large linear systems of saddle-point type have arisen in a wide variety of applications throughout computational science and engineering. The discretizations of distributed control problems have a saddle-point structure. The numerical solution of saddle-point problems has attracted considerable interest in recent years. In this work, we propose a novel Braess-Sarazin multigrid relaxation scheme for finite element discretizations of the distributed control problems, where we use the stiffness matrix obtained from the five-point finite difference method for the Laplacian to approximate the inverse of the mass matrix arising in the saddle-point system. We apply local Fourier analysis to examine the smoothing properties of the Braess-Sarazin multigrid relaxation. From our analysis, the optimal smoothing factor for Braess-Sarazin relaxation is derived. Numerical experiments validate our theoretical results. The relaxation scheme considered here shows its high efficiency and robustness with respect to the regularization parameter and grid size.
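Local Fourier analysis computes a smoothing factor by maximizing the relaxation's error-propagation symbol over the high frequencies. As a simpler stand-in for the Braess-Sarazin analysis in the paper, the sketch below runs the same computation for damped Jacobi on the five-point Laplacian, where the answer is known in closed form:

```python
import numpy as np

omega = 0.8                                  # classical optimal damping
thetas = np.linspace(-np.pi, np.pi, 257)
T1, T2 = np.meshgrid(thetas, thetas)

L_hat = 4 - 2 * np.cos(T1) - 2 * np.cos(T2)  # symbol of the 5-point stencil
S_hat = 1 - omega * L_hat / 4                # Jacobi error-propagation symbol

# Smoothing factor: worst-case amplification over the high frequencies,
# i.e. those not representable on the coarse grid.
high = (np.abs(T1) >= np.pi / 2) | (np.abs(T2) >= np.pi / 2)
mu = np.abs(S_hat)[high].max()               # known value: 3/5 at omega = 4/5
```

The paper's analysis follows the same pattern but with the block symbol of the saddle-point system and the Braess-Sarazin relaxation operator in place of the scalar symbols here.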

Data from both a randomized trial and an observational study are sometimes simultaneously available for evaluating the effect of an intervention. The randomized data typically allows for reliable estimation of average treatment effects but may be limited in sample size and patient heterogeneity for estimating conditional average treatment effects for a broad range of patients. Estimates from the observational study can potentially compensate for these limitations, but there may be concerns about whether confounding and treatment effect heterogeneity have been adequately addressed. We propose an approach for combining conditional treatment effect estimators from each source such that it aggressively weights toward the randomized estimator when bias in the observational estimator is detected. This allows the combination to be consistent for a conditional causal effect, regardless of whether assumptions required for consistent estimation in the observational study are satisfied. When the bias is negligible, the estimators from each source are combined for optimal efficiency. We show the problem can be formulated as a penalized least squares problem and consider its asymptotic properties. Simulations demonstrate the robustness and efficiency of the method in finite samples, in scenarios with bias or no bias in the observational estimator. We illustrate the method by estimating the effects of hormone replacement therapy on the risk of developing coronary heart disease in data from the Women's Health Initiative.
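The bias-dependent weighting can be illustrated with a simple shrinkage combination of two point estimates. All numbers are invented for illustration, not from the Women's Health Initiative analysis, and the weight below is a simplified stand-in for the paper's penalized least squares solution:

```python
# Illustrative estimates of a conditional treatment effect from each source.
tau_rct, se_rct = 0.30, 0.15     # randomized: unbiased but noisy
tau_obs, se_obs = 0.10, 0.05     # observational: precise, possibly confounded

# Estimated bias of the observational estimator relative to the RCT anchor.
bias_hat = tau_obs - tau_rct

# Shrinkage weight: lean on the observational estimate only when the detected
# bias is small relative to the sampling noise; a large detected bias drives
# the weight aggressively toward the randomized estimator.
lam = se_rct**2 / (se_rct**2 + se_obs**2 + bias_hat**2)
tau_combined = lam * tau_obs + (1 - lam) * tau_rct
```

When `bias_hat` shrinks toward zero, `lam` grows and the precise observational estimate dominates; when bias is detected, the combination stays anchored to the consistent randomized estimate, mirroring the robustness property described above.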

Robust estimation is much more challenging in high dimensions than it is in one dimension: Most techniques either lead to intractable optimization problems or estimators that can tolerate only a tiny fraction of errors. Recent work in theoretical computer science has shown that, in appropriate distributional models, it is possible to robustly estimate the mean and covariance with polynomial time algorithms that can tolerate a constant fraction of corruptions, independent of the dimension. However, the sample and time complexity of these algorithms is prohibitively large for high-dimensional applications. In this work, we address both of these issues by establishing sample complexity bounds that are optimal, up to logarithmic factors, as well as giving various refinements that allow the algorithms to tolerate a much larger fraction of corruptions. Finally, we show on both synthetic and real data that our algorithms have state-of-the-art performance and make high-dimensional robust estimation a realistic possibility.
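The spirit of these filtering algorithms is to detect corruptions spectrally: coordinated outliers inflate variance along some direction, so repeatedly trimming points with extreme projections onto the top principal direction removes them. A minimal sketch on synthetic data (not the paper's algorithm, which uses more careful spectral scoring and stopping rules):

```python
import numpy as np

rng = np.random.default_rng(4)
d, n_good, n_bad = 10, 450, 50
mu = np.ones(d)
# Inliers from N(mu, I) plus a 10% fraction of coordinated outliers.
X = np.vstack([rng.normal(mu, 1.0, size=(n_good, d)),
               np.full((n_bad, d), 8.0)])

def filtered_mean(data, rounds=10, trim=0.05):
    """Repeatedly trim points with extreme projections on the top
    principal direction of the centered data, then average the rest."""
    data = data.copy()
    for _ in range(rounds):
        centered = data - data.mean(axis=0)
        _, vecs = np.linalg.eigh(np.cov(data.T))
        scores = np.abs(centered @ vecs[:, -1])      # top-direction scores
        keep = np.argsort(scores)[: int(len(data) * (1 - trim))]
        data = data[keep]
    return data.mean(axis=0)

mu_hat = filtered_mean(X)
```

On this data the naive sample mean is dragged far toward the outlier cluster, while the filtered mean stays close to the true center, illustrating why dimension-independent error guarantees are possible for such schemes.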

In this paper we introduce a covariance framework for the analysis of EEG and MEG data that takes into account observed temporal stationarity on small time scales and trial-to-trial variations. We formulate a model for the covariance matrix, which is a Kronecker product of three components that correspond to space, time and epochs/trials, and consider maximum likelihood estimation of the unknown parameter values. An iterative algorithm that finds approximations of the maximum likelihood estimates is proposed. We perform a simulation study to assess the performance of the estimator and investigate the influence of different assumptions about the covariance factors on the estimated covariance matrix and on its components. Apart from that, we illustrate our method on real EEG and MEG data sets. The proposed covariance model is applicable in a variety of cases where spontaneous EEG or MEG acts as source of noise and realistic noise covariance estimates are needed for accurate dipole localization, such as in evoked activity studies, or where the properties of spontaneous EEG or MEG are themselves the topic of interest, such as in combined EEG/fMRI experiments in which the correlation between EEG and fMRI signals is investigated.
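The iterative estimation alternates closed-form updates of the Kronecker factors, a scheme often called flip-flop. The sketch below does this for a two-factor (space, time) model on simulated trials; the paper's model has a third factor for epochs/trials, and all sizes and covariances here are invented:

```python
import numpy as np

rng = np.random.default_rng(5)
p, q, n = 4, 5, 200        # channels, time points, trials (illustrative)

# Simulate trials with covariance Cov(X_ij, X_kl) = A_ik * B_jl,
# i.e. a Kronecker product of a spatial factor A and a temporal factor B.
A = np.diag([1.0, 2.0, 0.5, 1.5])
B = 0.5 * np.eye(q) + 0.5
La, Lb = np.linalg.cholesky(A), np.linalg.cholesky(B)
X = np.array([La @ rng.normal(size=(p, q)) @ Lb.T for _ in range(n)])

# Flip-flop: alternate the closed-form update of each factor given the other.
A_hat, B_hat = np.eye(p), np.eye(q)
for _ in range(20):
    Bi = np.linalg.inv(B_hat)
    A_hat = sum(x @ Bi @ x.T for x in X) / (n * q)   # spatial update
    Ai = np.linalg.inv(A_hat)
    B_hat = sum(x.T @ Ai @ x for x in X) / (n * p)   # temporal update
```

The individual factors are identified only up to a reciprocal scaling, but their Kronecker product converges to the true covariance, which is what dipole localization and related applications consume.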
