
In clinical trials, response-adaptive randomization (RAR) has the appealing ability to assign more subjects to better-performing treatments based on interim results. The traditional RAR strategy alters the randomization ratio on a patient-by-patient basis; this has been heavily criticized for bias due to time trends. An alternative approach is blocked RAR, which groups patients together in blocks and recomputes the randomization ratio in a block-wise fashion; the final analysis is then stratified by block. However, the typical blocked RAR design divides patients into equal-sized blocks, which is not generally optimal. This paper presents TrialMDP, an algorithm that designs two-armed blocked RAR clinical trials. Our method differs from past approaches in that it optimizes the size and number of blocks as well as their treatment allocations. That is, the algorithm yields a policy that adaptively chooses the size and composition of the next block, based on results seen up to that point in the trial. TrialMDP is related to past works that compute optimal trial designs via dynamic programming. The algorithm maximizes a utility function balancing (i) statistical power, (ii) patient outcomes, and (iii) the number of blocks. We show that it attains significant improvements in utility over a suite of baseline designs, and gives useful control over the tradeoff between statistical power and patient outcomes. It is well suited for small trials that assign high cost to failures. We provide TrialMDP as an R package on GitHub: https://github.com/dpmerrell/TrialMDP
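As a rough, self-contained illustration of the block-wise dynamic-programming idea (not the TrialMDP package itself), the sketch below enumerates a tiny two-armed trial as an MDP: the state is the success/failure count on each arm, an action picks the next block's size and allocation, transitions follow a Beta-Binomial posterior predictive with uniform priors, and the terminal utility trades off an effective-sample-size power proxy, the overall success rate, and a per-block penalty. All numerical choices (trial size, candidate block sizes, utility weights) are assumptions for illustration.

```python
from functools import lru_cache
from math import comb, exp, lgamma

N = 12                    # total patients (tiny, so exact enumeration is fast)
BLOCK_SIZES = (4, 6)      # candidate sizes for the next block (assumption)
W_POWER, W_OUT, W_BLOCK = 1.0, 1.0, 0.05   # utility weights (assumption)

def beta_binom_pmf(k, n, a, b):
    """P(k successes in n draws) under a Beta(a, b) posterior predictive."""
    logp = (lgamma(a + b) - lgamma(a) - lgamma(b)
            + lgamma(a + k) + lgamma(b + n - k) - lgamma(a + b + n))
    return comb(n, k) * exp(logp)

@lru_cache(maxsize=None)
def value(sA, fA, sB, fB, blocks):
    """Optimal expected terminal utility from the current interim state."""
    used = sA + fA + sB + fB
    if used == N:                          # trial over: score it
        nA, nB = sA + fA, sB + fB
        ess = nA * nB / (nA + nB) if nA and nB else 0.0   # power proxy
        return W_POWER * ess / N + W_OUT * (sA + sB) / N - W_BLOCK * blocks
    best = float("-inf")
    for size in BLOCK_SIZES:
        size = min(size, N - used)
        for kA in range(size + 1):         # patients allocated to arm A
            kB = size - kA
            ev = 0.0
            for xA in range(kA + 1):       # enumerate the block's outcomes
                pA = beta_binom_pmf(xA, kA, 1 + sA, 1 + fA)
                for xB in range(kB + 1):
                    pB = beta_binom_pmf(xB, kB, 1 + sB, 1 + fB)
                    ev += pA * pB * value(sA + xA, fA + kA - xA,
                                          sB + xB, fB + kB - xB, blocks + 1)
            best = max(best, ev)
    return best

print("optimal expected utility of the adaptive design:", value(0, 0, 0, 0, 0))
```

Even at this toy scale, the recursion makes the abstract's point concrete: the optimal next block's size and allocation depend on the interim counts rather than following a fixed equal-block schedule.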

Related Content

One of the main concerns about fairness in machine learning (ML) is that, in order to achieve it, one may have to trade off some accuracy. To overcome this issue, Hardt et al. proposed the notion of equality of opportunity (EO), which is compatible with maximal accuracy when the target label is deterministic with respect to the input features. In the probabilistic case, however, the issue is more complicated: It has been shown that under differential privacy constraints, there are data sources for which EO can only be achieved at the total detriment of accuracy, in the sense that a classifier that satisfies EO cannot be more accurate than a trivial (i.e., constant) classifier. In our paper we strengthen this result by removing the privacy constraint. Namely, we show that for certain data sources, the most accurate classifier that satisfies EO is a trivial classifier. Furthermore, we study the trade-off between accuracy and EO loss (opportunity difference), and provide a sufficient condition on the data source under which EO and non-trivial accuracy are compatible.
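For concreteness, here is a minimal sketch of the two quantities the trade-off is stated in terms of: the equal-opportunity loss (the gap in true-positive rates between two groups) and accuracy relative to the best trivial (constant) classifier. The toy arrays are placeholders, not data from the paper.

```python
import numpy as np

y_true = np.array([1, 1, 0, 1, 0, 1, 0, 0])   # labels
y_pred = np.array([1, 0, 0, 1, 1, 1, 0, 0])   # classifier output
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])   # protected attribute

def tpr(y_true, y_pred, mask):
    """True-positive rate restricted to the units selected by mask."""
    pos = mask & (y_true == 1)
    return y_pred[pos].mean() if pos.any() else float("nan")

eo_loss = abs(tpr(y_true, y_pred, group == 0) - tpr(y_true, y_pred, group == 1))
accuracy = (y_true == y_pred).mean()
trivial = max(y_true.mean(), 1 - y_true.mean())  # best constant classifier

print(f"EO loss (opportunity difference): {eo_loss:.3f}")
print(f"accuracy: {accuracy:.3f} vs trivial baseline: {trivial:.3f}")
```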

Randomized controlled trials are considered the gold standard for evaluating the treatment effect (estimand) for efficacy and safety. According to the recent International Council on Harmonisation (ICH)-E9 addendum (R1), intercurrent events (ICEs) need to be considered when defining an estimand, and the principal stratum is one of the five strategies for handling ICEs. Qu et al. (2020, Statistics in Biopharmaceutical Research 12:1-18) proposed estimators for the adherer average causal effect (AdACE) for estimating the treatment difference for those who adhere to one or both treatments based on the causal-inference framework, and demonstrated the consistency of those estimators; however, their method requires complex custom programming involving high-dimensional numeric integration. In this article, we implemented the AdACE estimators using multiple imputation (MI) and constructed confidence intervals (CIs) through bootstrapping. A simulation study showed that the MI-based estimators were consistent and that the CIs for the treatment difference attained nominal coverage probabilities for the adherent populations of interest. As an illustrative example, the new method was applied to data from a real clinical trial comparing two types of basal insulin for patients with type 1 diabetes.
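The following is a schematic sketch of the MI-plus-bootstrap recipe described above, on simulated data: impute missing outcomes M times from a simple per-arm normal model, average the treatment difference over imputations, and bootstrap the entire pipeline for a percentile CI. The imputation model and the simulated data are illustrative assumptions; the actual AdACE estimators condition on adherence and richer covariates.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
arm = rng.integers(0, 2, n)                       # 0 = control, 1 = treated
y = rng.normal(0.5 * arm, 1.0)                    # outcome, true effect 0.5
y[rng.random(n) < 0.2] = np.nan                   # ~20% missing outcomes

def mi_estimate(y, arm, M=20, rng=rng):
    """Average treatment difference over M imputed completions of y."""
    diffs = []
    for _ in range(M):
        yi = y.copy()
        for a in (0, 1):
            obs = (arm == a) & ~np.isnan(y)
            mis = (arm == a) & np.isnan(y)
            mu, sd = y[obs].mean(), y[obs].std(ddof=1)
            yi[mis] = rng.normal(mu, sd, mis.sum())   # draw imputations
        diffs.append(yi[arm == 1].mean() - yi[arm == 0].mean())
    return np.mean(diffs)

# bootstrap the whole MI pipeline for a percentile confidence interval
boot = [mi_estimate(y[idx], arm[idx])
        for idx in (rng.integers(0, n, n) for _ in range(500))]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"MI estimate: {mi_estimate(y, arm):.3f}, "
      f"95% bootstrap CI: ({lo:.3f}, {hi:.3f})")
```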

We consider a Bayesian persuasion or information design problem where the sender tries to persuade the receiver to take a particular action via a sequence of signals. We model this through multi-phase trials, with different experiments conducted based on the outcomes of prior experiments. In contrast to most of the literature, we consider the problem with constraints on the signals imposed on the sender. We achieve this by fixing some of the experiments in an exogenous manner; these are called determined experiments. This modeling helps us understand real-world situations where this occurs: e.g., multi-phase drug trials where the FDA determines some of the experiments; funding of a startup by a venture capital firm; start-up acquisition by big firms where late-stage assessments are determined by the potential acquirer; and multi-round job interviews where the candidates signal initially by presenting their qualifications but the rest of the screening procedures are determined by the interviewer. The non-determined experiments (signals) in the multi-phase trial are to be chosen by the sender in order to best persuade the receiver. With a binary state of the world, we start by deriving the optimal signaling policy in the only non-trivial configuration of a two-phase trial with binary-outcome experiments. We then generalize to multi-phase trials with binary-outcome experiments where the determined experiments can be placed at any chosen node in the trial tree. Here we present a dynamic programming algorithm to derive the optimal signaling policy, one that uses the structural insights from the two-phase trial solution. We also contrast the optimal signaling policy structure with classical Bayesian persuasion strategies to highlight the impact of the signaling constraints on the sender.
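As a point of reference for the classical, unconstrained benchmark mentioned at the end of the abstract: with a binary state, the sender's optimal value in standard Bayesian persuasion is the concave envelope of the value function v(mu) evaluated at the prior belief. The sketch below computes that envelope numerically for an assumed step-shaped v (the receiver acts only when the posterior reaches 0.5).

```python
import numpy as np

def sender_value(mu):
    """Sender payoff when the receiver best-responds to posterior mu."""
    return np.where(mu >= 0.5, 1.0, 0.0)   # assumed receiver threshold

grid = np.linspace(0.0, 1.0, 1001)
vals = sender_value(grid)

def concave_envelope(x, y):
    """Upper concave envelope via the upper convex hull (monotone chain)."""
    hull = []
    for xi, yi in zip(x, y):
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            # pop the middle point if the turn is not strictly clockwise
            if (x2 - x1) * (yi - y1) >= (y2 - y1) * (xi - x1):
                hull.pop()
            else:
                break
        hull.append((xi, yi))
    hx, hy = zip(*hull)
    return np.interp(x, hx, hy)

envelope = concave_envelope(grid, vals)
prior = 0.3
print("optimal sender value at prior 0.3:", np.interp(prior, grid, envelope))
```

At a prior of 0.3 this returns 0.6 = 0.3 / 0.5, the familiar persuasion value for a threshold receiver. The determined experiments studied in the abstract constrain which posteriors the sender can induce, which is precisely what this unconstrained benchmark ignores.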

Examinations of any experiment involving living organisms require justification of the need for, and moral defensibility of, the study. Statistical planning, design, and sample size calculation of the experiment are no less important review criteria than general medical and ethical points to consider. Errors made in the statistical planning and data evaluation phase can have severe consequences for both results and conclusions. They might proliferate and thus impact future trials, an unintended outcome of fundamental research with profound ethical consequences. Therefore, any trial must be efficient in both a medical and a statistical sense in answering the questions of interest to be considered approvable. Unified statistical standards are currently missing for animal review boards in Germany. To address this, we developed a biometric form to be filled in and handed in with the proposal at the local authority on animal welfare. It addresses relevant points to consider in the biostatistical planning of animal experiments and can help both the applicants and the reviewers in overseeing the entire experiment(s) planned. Furthermore, the form might also aid in meeting the current standards set by the 3+3Rs principle of animal experimentation: Replacement, Reduction, and Refinement, as well as Robustness, Registration, and Reporting. The form is already in use by the local authority on animal welfare in Berlin, Germany. In addition, we provide a reference to our user guide, which gives more detailed explanations and examples for each section of the biometric form. Unifying the set of biostatistical aspects will hold both the applicants and the reviewers to equal standards and increase the quality of preclinical research projects, including translational, multicenter, and international studies.
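As a small example of the kind of a-priori planning the form asks applicants to document, the snippet below runs a standard two-group sample size calculation. The effect size, significance level, and power are placeholders that an applicant would justify in the proposal.

```python
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(
    effect_size=1.2,   # standardized (Cohen's d) effect the study targets
    alpha=0.05,        # two-sided significance level
    power=0.80,        # desired power
    alternative="two-sided",
)
print(f"required animals per group: {n_per_group:.1f} (round up)")
```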

Robust Policy Search is the problem of learning policies that do not degrade in performance when subjected to unseen environment model parameters. It is particularly relevant for transferring policies learned in a simulation environment to the real world. Several existing approaches involve sampling large batches of trajectories that reflect the differences among various possible environments, and then selecting some subset of these, such as the trajectories that result in the worst performance, to learn robust policies. We propose an active learning based framework, EffAcTS, to selectively choose model parameters for this purpose, so as to collect only as much data as is necessary to select such a subset. We apply this framework using Linear Bandits, and experimentally validate the gains in sample efficiency and the performance of our approach on standard continuous control tasks. We also present a Multi-Task Learning perspective on the problem of Robust Policy Search, and draw connections from our proposed framework to existing work on Multi-Task Learning.
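A schematic sketch of the selection idea (not the EffAcTS algorithm itself): model the current policy's return as a linear function of environment parameters and use a lower-confidence-bound rule, in the LinUCB family, to actively pick the parameters most likely to expose worst-case behavior. evaluate_policy() is a hypothetical stand-in for rolling out the current policy in a simulator.

```python
import numpy as np

rng = np.random.default_rng(0)

def evaluate_policy(theta):
    """Hypothetical noisy return of the current policy under parameters theta."""
    return 1.0 - 0.8 * theta[0] + 0.3 * theta[1] + rng.normal(0, 0.05)

candidates = rng.random((200, 2))                  # candidate env parameters
X = np.hstack([candidates, np.ones((200, 1))])     # features with intercept
d, beta = X.shape[1], 1.0
A, b = np.eye(d), np.zeros(d)                      # ridge-regularized stats

for _ in range(25):                                # active collection rounds
    w = np.linalg.solve(A, b)                      # current return model
    A_inv = np.linalg.inv(A)
    # lower confidence bound: seek the params with the worst plausible return
    lcb = X @ w - beta * np.sqrt(np.einsum("ij,jk,ik->i", X, A_inv, X))
    i = np.argmin(lcb)
    r = evaluate_policy(candidates[i])             # one simulator rollout
    A += np.outer(X[i], X[i])                      # bandit model update
    b += r * X[i]

w = np.linalg.solve(A, b)
print("worst-case parameters found:", candidates[np.argmin(X @ w)])
```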

In this paper, we present a novel and practical joint hybrid beamforming (HYBF) and combining scheme to maximize the weighted sum-rate (WSR) in a single-cell massive multiple-input-multiple-output (MIMO) millimeter-wave (mmWave) full-duplex (FD) system. All the multi-antenna users and the base station (BS) are assumed to suffer from limited dynamic range (LDR) noise due to non-ideal hardware, and we adopt an impairment-aware HYBF approach. To model the non-ideal hardware of a hybrid FD transceiver, we extend the traditional LDR noise model to mmWave. We also present a novel interference- and self-interference (SI)-aware optimal power allocation scheme for the uplink (UL) users and the BS. The analog processing stage is assumed to be quantized, and we consider both the unit-modulus and unconstrained cases. Compared to traditional designs, our design also considers the joint sum-power and practical per-antenna power constraints. It relies on alternating optimization based on the minorization-maximization method. We investigate the maximum achievable gain of a hybrid multi-user FD system with different levels of LDR noise variance and different numbers of radio-frequency (RF) chains. We also show that amplitude manipulation at the analog stage is beneficial for a hybrid FD BS when the number of RF chains is small. Simulation results show that the proposed HYBF design significantly outperforms a fully digital half-duplex (HD) system with only a few RF chains at any LDR noise level.
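As background for readers unfamiliar with the LDR model the abstract builds on, the snippet below simulates its standard form: hardware distortion approximated as additive Gaussian noise whose per-antenna variance scales with the per-antenna signal power. The impairment level kappa is an assumed value, and only the transmitter-side noise term is shown.

```python
import numpy as np

rng = np.random.default_rng(0)
n_tx, n_sym, kappa = 4, 1000, 1e-2        # antennas, symbols, LDR level

# intended transmit signal across antennas (unit-power Gaussian symbols)
x = (rng.normal(size=(n_tx, n_sym))
     + 1j * rng.normal(size=(n_tx, n_sym))) / np.sqrt(2)

# LDR transmit noise: variance proportional to each antenna's signal power
per_antenna_power = np.mean(np.abs(x) ** 2, axis=1, keepdims=True)
c = np.sqrt(kappa * per_antenna_power / 2) * (
    rng.normal(size=x.shape) + 1j * rng.normal(size=x.shape))

x_actual = x + c                           # signal actually radiated
sdr = np.mean(np.abs(x) ** 2) / np.mean(np.abs(c) ** 2)
print(f"signal-to-distortion ratio: {10 * np.log10(sdr):.1f} dB "
      f"(~{-10 * np.log10(kappa):.0f} dB expected)")
```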

To maximize clinical benefit, clinicians routinely tailor treatment to the individual characteristics of each patient, so individualized treatment rules are needed and are of significant research interest to statisticians. In covariate-adjusted randomization clinical trials with many covariates, we model the treatment effect with an unspecified function of a single index of the covariates and leave the baseline response completely arbitrary. We devise a class of estimators that consistently estimate the treatment effect function and its associated index while bypassing the estimation of the baseline response, which is subject to the curse of dimensionality. We further develop inference tools to identify predictive covariates and isolate the effective treatment region. The usefulness of the methods is demonstrated in both simulations and a clinical data example.
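A toy sketch of how randomization lets one target the treatment-effect function while never modeling the baseline, under the stated single-index structure: with a known assignment probability p, the transformed response (A - p)Y / (p(1 - p)) has conditional mean g(x'beta), because the (A - p) factor zeroes out the baseline term. The data-generating model and the linear recovery step (which suffices when g happens to be linear) are illustrative assumptions, not the paper's estimator.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 5000, 0.5
X = rng.normal(size=(n, 3))
beta = np.array([1.0, -1.0, 0.5])
baseline = np.sin(X[:, 0]) + X[:, 1] ** 2          # arbitrary baseline b(X)
g = lambda u: 2.0 * u                              # treatment-effect function
A = rng.binomial(1, p, n)                          # randomized assignment
Y = baseline + A * g(X @ beta) + rng.normal(0, 0.5, n)

W = (A - p) / (p * (1 - p)) * Y                    # modified outcome
# with a linear g, least squares on W recovers the index direction even
# though the baseline was never modeled
beta_hat = np.linalg.lstsq(X, W, rcond=None)[0]
print("true direction:", beta / np.linalg.norm(beta))
print("estimated     :", beta_hat / np.linalg.norm(beta_hat))
```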

Covariate shift is a common problem in predictive modeling on real-world data. This paper proposes addressing the covariate shift problem by minimizing Maximum Mean Discrepancy (MMD) statistics between the training and test sets in the feature input space, the feature representation space, or both. We design three techniques, which we call MMD Representation, MMD Mask, and MMD Hybrid, to deal with the scenarios where only a distribution shift exists, only a missingness shift exists, or both types of shift exist, respectively. We find that integrating an MMD loss component helps models use the best features for generalization and avoid dangerous extrapolation as much as possible for each test sample. Models treated with this MMD approach show better performance, calibration, and extrapolation on the test set.
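For reference, here is a minimal sketch of the building block all three techniques share: a biased (V-statistic) estimate of squared MMD between two batches under a Gaussian kernel, usable as an auxiliary loss on inputs or representations. The median-heuristic bandwidth is a common convention, not necessarily the paper's choice.

```python
import numpy as np

def mmd2(X, Y, bandwidth=None):
    """Biased estimate of squared MMD between samples X and Y."""
    Z = np.vstack([X, Y])
    d2 = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1)   # pairwise sq. dists
    if bandwidth is None:                                 # median heuristic
        bandwidth = np.sqrt(np.median(d2[d2 > 0]) / 2)
    K = np.exp(-d2 / (2 * bandwidth ** 2))
    n = len(X)
    return K[:n, :n].mean() + K[n:, n:].mean() - 2 * K[:n, n:].mean()

rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, (256, 8))
test = rng.normal(0.5, 1.0, (256, 8))      # shifted test distribution
print(f"MMD^2 (shifted):  {mmd2(train, test):.4f}")
print(f"MMD^2 (no shift): {mmd2(train, rng.normal(0, 1, (256, 8))):.4f}")
```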

In randomized experiments, the actual treatments received by some experimental units may differ from their treatment assignments. This non-compliance issue often occurs in clinical trials, social experiments, and the applications of randomized experiments in many other fields. Under certain assumptions, the average treatment effect for the compliers is identifiable and equal to the ratio of the intention-to-treat effects of the potential outcomes to that of the potential treatment received. To improve the estimation efficiency, we propose three model-assisted estimators for the complier average treatment effect in randomized experiments with a binary outcome. We study their asymptotic properties, compare their efficiencies with that of the Wald estimator, and propose the Neyman-type conservative variance estimators to facilitate valid inferences. Moreover, we extend our methods and theory to estimate the multiplicative complier average treatment effect. Our analysis is randomization-based, allowing the working models to be misspecified. Finally, we conduct simulation studies to illustrate the advantages of the model-assisted methods and apply these analysis methods in a randomized experiment to evaluate the effect of academic services or incentives on academic performance.
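For concreteness, here is a small simulation of the Wald estimator that the proposed model-assisted estimators are benchmarked against: the complier average treatment effect is the ratio of the intention-to-treat effect on the outcome to the intention-to-treat effect on treatment received. The one-sided noncompliance setup and all numbers are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10000
Z = rng.binomial(1, 0.5, n)                 # randomized assignment
complier = rng.random(n) < 0.7              # 70% compliers (assumption)
D = np.where(complier, Z, 0)                # never-takers ignore assignment
Y = rng.binomial(1, 0.3 + 0.2 * D)          # binary outcome, true effect 0.2

itt_y = Y[Z == 1].mean() - Y[Z == 0].mean() # ITT effect on the outcome
itt_d = D[Z == 1].mean() - D[Z == 0].mean() # ITT effect on treatment received
print(f"Wald estimate of complier ATE: {itt_y / itt_d:.3f} (truth: 0.2)")
```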

We propose an Active Learning approach to image segmentation that exploits geometric priors to streamline the annotation process. We demonstrate this for both background-foreground and multi-class segmentation tasks in 2D images and 3D image volumes. Our approach combines geometric smoothness priors in the image space with more traditional uncertainty measures to estimate which pixels or voxels are most in need of annotation. For multi-class settings, we additionally introduce two novel criteria for uncertainty. In the 3D case, we use the resulting uncertainty measure to show the annotator voxels lying on the same planar patch, which makes batch annotation much easier than if they were randomly distributed in the volume. The planar patch is found using a branch-and-bound algorithm that finds a patch with the most informative instances. We evaluate our approach on Electron Microscopy and Magnetic Resonance image volumes, as well as on regular images of horses and faces. We demonstrate a substantial performance increase over state-of-the-art approaches.
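An illustrative sketch of the selection principle (a crude stand-in for the paper's criteria, not its implementation): score each pixel by predictive entropy, add a simple geometric term measuring disagreement with the 4-neighborhood, and request annotation for the top-scoring pixels. The probability map and the weight are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
prob = rng.random((64, 64))                    # predicted P(foreground) map

eps = 1e-9
entropy = -(prob * np.log(prob + eps) + (1 - prob) * np.log(1 - prob + eps))

# geometric term: how much each pixel differs from its 4-neighborhood mean
pad = np.pad(prob, 1, mode="edge")
neigh = (pad[:-2, 1:-1] + pad[2:, 1:-1] + pad[1:-1, :-2] + pad[1:-1, 2:]) / 4
geometric = np.abs(prob - neigh)

score = entropy + 2.0 * geometric              # weight is an assumption
rows, cols = np.unravel_index(np.argsort(score, axis=None)[-10:], score.shape)
print("pixels to annotate next:", list(zip(rows.tolist(), cols.tolist())))
```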
