
In comparative research on time-to-event data for two groups, when two survival curves cross each other, it may be difficult to use the log-rank test and hazard ratio (HR) to properly assess the treatment benefit. Our aim was to identify a method for evaluating the treatment benefits for two groups in the above situation. We quantified treatment benefits based on an intuitive measure called the area between two survival curves (ABS), which is a robust measure of treatment benefits in clinical trials regardless of whether the proportional hazards assumption is violated or two survival curves cross each other. Additionally, we propose a permutation test based on the ABS, and we evaluate the effectiveness and reliability of this test with simulated data. The ABS permutation test is a robust statistical inference method with an acceptable type I error rate and superior power to detect differences in treatment effects, especially when the proportional hazards assumption is violated. The ABS can be used to intuitively quantify treatment differences over time and provide reliable conclusions in complicated situations, such as crossing survival curves. The R package "ComparisonSurv" implements the proposed methods and is available from //CRAN.R-project.org/package=ComparisonSurv.
Keywords: Survival analysis; Area between two survival curves; Crossing survival curves; Treatment benefit
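To make the ABS idea concrete, below is a minimal Python sketch, not the ComparisonSurv implementation: it estimates Kaplan-Meier curves for two groups, integrates the absolute difference between them up to a follow-up limit tau, and runs a label-permutation test. The exact definition of the ABS and its truncation rule in the paper may differ.

```python
import numpy as np

def km_curve(time, event):
    """Kaplan-Meier survival estimates at the distinct event times."""
    order = np.argsort(time)
    t, d = time[order], event[order]
    event_times = np.unique(t[d == 1])
    surv, s = [], 1.0
    for tk in event_times:
        at_risk = np.sum(t >= tk)
        deaths = np.sum((t == tk) & (d == 1))
        s *= 1.0 - deaths / at_risk
        surv.append(s)
    return event_times, np.array(surv)

def step_eval(times, surv, grid):
    """Evaluate the right-continuous KM step function on a time grid."""
    idx = np.searchsorted(times, grid, side="right") - 1
    return np.where(idx < 0, 1.0, surv[np.clip(idx, 0, len(surv) - 1)])

def abs_statistic(time, event, group, tau):
    """Area between the two KM curves up to the follow-up limit tau."""
    grid = np.linspace(0.0, tau, 1000)
    curves = []
    for g in np.unique(group):
        t, s = km_curve(time[group == g], event[group == g])
        curves.append(step_eval(t, s, grid))
    return np.trapz(np.abs(curves[0] - curves[1]), grid)

def abs_permutation_test(time, event, group, tau, n_perm=2000, seed=0):
    """One-sided permutation p-value for the ABS statistic."""
    rng = np.random.default_rng(seed)
    observed = abs_statistic(time, event, group, tau)
    perm = np.array([abs_statistic(time, event, rng.permutation(group), tau)
                     for _ in range(n_perm)])
    p_value = (1 + np.sum(perm >= observed)) / (n_perm + 1)
    return observed, p_value
```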

Related content

The design of SQL is based on a three-valued logic (3VL), rather than the familiar Boolean logic. 3VL adds a third truth value, unknown, to true and false in order to handle nulls. While viewed as indispensable for SQL's expressiveness, it is at the same time much criticized for the unintuitive behavior of queries and for being a source of programmer mistakes. We show that, contrary to the widely held view, SQL could have been designed based on the standard Boolean logic, without any loss of expressiveness and without giving up nulls. The approach itself follows SQL's evaluation, which only retains tuples for which the conditions in WHERE evaluate to true. We show that conflating unknown with false leads to an equally expressive version of SQL that does not use the third truth value. Queries written under the two-valued semantics can be efficiently translated into standard SQL and thus executed on any existing RDBMS. These results cover the core of the SQL 1999 Standard: SELECT-FROM-WHERE-GROUP BY-HAVING queries extended with subqueries and IN/EXISTS/ANY/ALL conditions, as well as recursive queries. We also investigate new optimization rules enabled by the two-valued SQL and show that for many queries, including most of those found in benchmarks such as TPC-H and TPC-DS, there is no difference between the three- and two-valued versions.
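As a toy illustration of the claim, in Python rather than SQL, the snippet below evaluates a negation-free WHERE condition under Kleene's three-valued logic, keeping only rows where it is exactly true, and compares this with a two-valued reading that conflates unknown with false at the level of atomic comparisons; for this class of conditions the two filters keep the same rows. The general translation in the paper also covers negation, subqueries and aggregation, which this sketch does not.

```python
# Kleene three-valued logic, with None standing in for SQL's unknown.
def and3(a, b):
    if a is False or b is False:
        return False
    return None if (a is None or b is None) else True

def eq3(x, y):
    # SQL comparisons involving NULL evaluate to unknown.
    return None if (x is None or y is None) else x == y

rows = [{"a": 1, "b": 2}, {"a": None, "b": 2}, {"a": 3, "b": None}]

# Three-valued WHERE: keep rows where the predicate is exactly true.
kept_3vl = [r for r in rows if and3(eq3(r["a"], 1), eq3(r["b"], 2)) is True]

# Two-valued reading: conflate unknown with false before combining.
to2 = bool  # None -> False
kept_2vl = [r for r in rows if to2(eq3(r["a"], 1)) and to2(eq3(r["b"], 2))]

assert kept_3vl == kept_2vl  # the same rows survive the filter
```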

While the stable marriage problem and its variants model a vast range of matching markets, they fail to capture complex agent relationships, such as the affiliation of applicants and employers in an interview marketplace. To model this problem, the existing literature on matching with externalities permits agents to provide complete and total rankings over matchings based on both their own and their affiliates' matches. This complete-ordering restriction is unrealistic, and, further, the model may have an empty core. To address this, we introduce the Dichotomous Affiliate Stable Matching (DASM) Problem, in which an agent's preferences indicate dichotomous acceptance or rejection of other agents in the marketplace, both for itself and for its affiliates. We also assume that an agent's preferences over entire matchings are determined by a general weighted valuation function of its own and its affiliates' matches. Our results are threefold: (1) we use a human study to show that real-world matching rankings follow our assumed valuation function; (2) we prove that a stable solution always exists by providing an efficient, easily implementable algorithm that finds such a solution; and (3) we experimentally validate the efficiency of our algorithm against a linear-programming-based approach.
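One hypothetical reading of such a weighted dichotomous valuation, for illustration only (the data structures and weighting scheme below are assumptions, not the paper's definitions): an agent scores a matching by a weighted count of the members of its "family" (itself plus its affiliates) whose assigned partner it approves of.

```python
def affiliate_valuation(agent, matching, approves, affiliates, weights):
    """
    Hypothetical weighted dichotomous valuation (illustrative only).
    matching   : dict member -> matched partner (or None if unmatched)
    approves   : dict (agent, member) -> set of partners `agent` accepts for `member`
    affiliates : dict agent -> list of its affiliates
    weights    : dict (agent, member) -> nonnegative weight
    """
    members = [agent] + affiliates.get(agent, [])
    return sum(weights[(agent, m)]
               for m in members
               if matching.get(m) in approves[(agent, m)])
```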

Sparse Group LASSO (SGL) is a regularized model for high-dimensional linear regression problems with grouped covariates. SGL applies $\ell_1$ and $\ell_2$ penalties to the individual predictors and the group predictors, respectively, to guarantee sparse effects at both the inter-group and within-group levels. In this paper, we apply the approximate message passing (AMP) algorithm to efficiently solve the SGL problem under Gaussian random designs. We further use the recently developed state evolution analysis of AMP to derive an asymptotically exact characterization of the SGL solution. This allows us to conduct multiple fine-grained statistical analyses of SGL, through which we investigate the effects of the group information and of $\gamma$ (the proportion of the $\ell_1$ penalty). Through the lens of various performance measures, we show that SGL with small $\gamma$ benefits significantly from the group information and can outperform other SGL variants (including the LASSO) and regularized models that do not exploit the group information, in terms of the signal recovery rate, the false discovery rate and the mean squared error.
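For reference, one common parameterisation of the SGL objective, with $\gamma$ as the proportion of the $\ell_1$ penalty, is (the exact scaling of the group terms used in the paper may differ):

$$\hat{\beta} = \arg\min_{\beta} \; \tfrac{1}{2}\,\lVert y - X\beta \rVert_2^2 \; + \; \lambda \Big( \gamma\, \lVert \beta \rVert_1 + (1-\gamma) \sum_{g=1}^{G} \sqrt{p_g}\, \lVert \beta_g \rVert_2 \Big),$$

where $\beta_g$ collects the coefficients in group $g$ and $p_g$ is the group size; $\gamma = 1$ recovers the LASSO and $\gamma = 0$ the group LASSO.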

Bayesian nonparametric methods are a popular choice for analysing survival data due to their ability to flexibly model the distribution of survival times. These methods typically employ a nonparametric prior on the survival function that is conjugate with respect to right-censored data. Eliciting these priors, particularly in the presence of covariates, can be challenging and inference typically relies on computationally intensive Markov chain Monte Carlo schemes. In this paper, we build on recent work that recasts Bayesian inference as assigning a predictive distribution on the unseen values of a population conditional on the observed samples, thus avoiding the need to specify a complex prior. We describe a copula-based predictive update which admits a scalable sequential importance sampling algorithm to perform inference that properly accounts for right-censoring. We provide theoretical justification through an extension of Doob's consistency theorem and illustrate the method on a number of simulated and real data sets, including an example with covariates. Our approach enables analysts to perform Bayesian nonparametric inference through only the specification of a predictive distribution.
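To give a flavour of the predictive view of Bayesian inference (not the copula-based update proposed in the paper), here is a toy Python sketch that uses the Dirichlet-process (Blackwell-MacQueen) predictive rule to impute unseen values from fully observed ones; it ignores right-censoring and covariates, which are exactly what the proposed copula update and sequential importance sampler are designed to handle.

```python
import numpy as np

def predictive_resample(obs, alpha=1.0, n_future=500, seed=0):
    """
    Toy predictive resampling with the Dirichlet-process predictive rule:
    each unseen value is drawn from the accumulated sample with probability
    n/(alpha+n), and from a base distribution G0 with probability alpha/(alpha+n).
    """
    rng = np.random.default_rng(seed)
    samples = list(obs)
    for _ in range(n_future):
        n = len(samples)
        if rng.random() < alpha / (alpha + n):
            # G0 chosen here as an exponential matched to the observed mean.
            samples.append(rng.exponential(np.mean(obs)))
        else:
            samples.append(samples[rng.integers(n)])
    # The empirical distribution of the imputed population plays the role of
    # an (approximate) posterior draw of the unknown distribution.
    return np.array(samples[len(obs):])
```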

This paper investigates the efficiency of different cross-validation (CV) procedures under algorithmic stability, with a specific focus on the K-fold. We derive a generic upper bound on the risk estimation error applicable to a wide class of CV schemes. This upper bound ensures the consistency of the leave-one-out and the leave-p-out CV but fails to control the error of the K-fold. We confirm this negative result with a lower bound on the K-fold error which does not converge to zero with the sample size. We therefore propose a debiased version of the K-fold which is consistent for any uniformly stable learner. We apply our results to the problem of model selection and demonstrate empirically the usefulness of the proposed approach on real-world datasets.
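For concreteness, a plain K-fold risk estimator is sketched below in Python (the paper's debiased correction is not reproduced here); the example plugs in ridge regression, a standard instance of a uniformly stable learner.

```python
import numpy as np

def kfold_risk(X, y, fit, loss, K=5, seed=0):
    """Plain K-fold estimate of the prediction risk of the learner `fit`."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    folds = np.array_split(idx, K)
    fold_risks = []
    for k in range(K):
        test = folds[k]
        train = np.concatenate([folds[j] for j in range(K) if j != k])
        model = fit(X[train], y[train])
        fold_risks.append(np.mean(loss(y[test], model(X[test]))))
    return float(np.mean(fold_risks))

def ridge_fit(X, y, lam=1.0):
    """Ridge regression: a uniformly stable learner for the example."""
    beta = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)
    return lambda X_new: X_new @ beta

def squared_loss(y, y_hat):
    return (y - y_hat) ** 2

# Usage: risk_hat = kfold_risk(X, y, ridge_fit, squared_loss, K=10)
```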

The sequential treatment decisions made by physicians to treat chronic diseases are formalized in the statistical literature as dynamic treatment regimes. To date, methods for dynamic treatment regimes have been developed under the assumption that observation times, i.e., treatment and outcome monitoring times, are determined by study investigators. That assumption is often not satisfied in electronic health records data, in which the outcome, the observation times, and the treatment mechanism are all associated with patients' characteristics. If not adequately accounted for in the analysis, the treatment and observation processes can lead to spurious associations between the treatment of interest and the outcome to be optimized under the dynamic treatment regime. We address these associations by incorporating two inverse weights, both functions of a patient's covariates, into dynamic weighted ordinary least squares to develop optimal single-stage dynamic treatment regimes, known as individualized treatment rules. We show empirically that our methodology yields consistent, multiply robust estimators. In a cohort of new users of antidepressant drugs from the United Kingdom's Clinical Practice Research Datalink, the proposed method is used to develop an optimal treatment rule that chooses between two antidepressants to optimize a utility function related to the change in body mass index.
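A stylised single-stage sketch of the weighting idea in Python: it illustrates the product of a treatment (propensity) inverse weight and an observation-intensity inverse weight inside a weighted least-squares fit, not the authors' exact dynamic WOLS estimator or its multiply robust construction.

```python
import numpy as np

def weighted_itr_fit(X, A, Y, p_treat, p_obs):
    """
    Illustrative single-stage weighted regression for an individualized
    treatment rule (assumptions: binary treatment, linear blip).
    X       : (n, p) tailoring covariates
    A       : (n,)  binary treatment indicator
    Y       : (n,)  outcome, larger is better
    p_treat : (n,)  estimated probability of the received treatment
    p_obs   : (n,)  estimated observation (visit) probability
    """
    w = 1.0 / (p_treat * p_obs)                   # product of the two inverse weights
    D = np.column_stack([np.ones_like(A), X,      # treatment-free part
                         A, A[:, None] * X])      # blip part: A and A*X
    sw = np.sqrt(w)
    beta, *_ = np.linalg.lstsq(D * sw[:, None], Y * sw, rcond=None)
    psi = beta[-(X.shape[1] + 1):]                # blip coefficients
    # Recommend treatment when the estimated blip is positive.
    rule = lambda x: (psi[0] + x @ psi[1:]) > 0
    return psi, rule
```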

Observational studies can play a useful role in assessing the comparative effectiveness of competing treatments. In a clinical trial, the randomization of participants to treatment and control groups generally results in groups that are well balanced with respect to possible confounders, which makes the analysis straightforward. However, when analysing observational data, the potential for unmeasured confounding makes comparing treatment effects much more challenging. Causal inference methods such as the Instrumental Variable and Prior Event Rate Ratio approaches make it possible to circumvent the need to adjust for confounding factors that have not been measured in the data or have been measured with error. Direct confounder adjustment via multivariable regression and propensity score matching also have considerable utility. Each method relies on a different set of assumptions and leverages different aspects of the data. In this paper, we describe the assumptions of each method and assess the impact of violating these assumptions in a simulation study. We propose the prior outcome augmented Instrumental Variable method, which leverages data from before and after treatment initiation and is robust to the violation of key assumptions. Finally, we propose the use of a heterogeneity statistic to decide whether two or more estimates are statistically similar, taking into account their correlation. We illustrate our causal framework by assessing the risk of genital infection in patients prescribed Sodium-glucose Co-transporter-2 inhibitors versus Dipeptidyl Peptidase-4 inhibitors as second-line treatment for type 2 diabetes, using observational data from the Clinical Practice Research Datalink.
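As an example of such a statistic for two correlated estimates $\hat\theta_1$ and $\hat\theta_2$ (the paper's construction for several estimates may be more general), one can compare

$$Q = \frac{(\hat\theta_1 - \hat\theta_2)^2}{\widehat{\operatorname{Var}}(\hat\theta_1) + \widehat{\operatorname{Var}}(\hat\theta_2) - 2\,\widehat{\operatorname{Cov}}(\hat\theta_1, \hat\theta_2)}$$

with a $\chi^2_1$ reference distribution; large values of $Q$ indicate that the two estimates are not statistically compatible, while the covariance term prevents correlated estimates from appearing spuriously heterogeneous.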

The statistical finite element method (StatFEM) is an emerging probabilistic method that allows observations of a physical system to be synthesised, in a coherent statistical framework, with the numerical solution of a PDE intended to describe it, in order to compensate for model error. This work presents a new theoretical analysis of the statistical finite element method, demonstrating that it has similar convergence properties to the finite element method on which it is based. Our results constitute a bound on the Wasserstein-2 distance between the ideal prior and posterior and their StatFEM approximations, and show that this distance converges at the same mesh-dependent rate as finite element solutions converge to the true solution. Several numerical examples are presented to demonstrate our theory, including an example which tests the robustness of StatFEM when extended to nonlinear quantities of interest.
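For orientation, in the linear Gaussian setting the measures being compared are Gaussian, and the Wasserstein-2 distance between Gaussian measures that such bounds control has the closed form

$$W_2^2\big(\mathcal{N}(m_1, C_1),\, \mathcal{N}(m_2, C_2)\big) = \lVert m_1 - m_2 \rVert^2 + \operatorname{tr}\!\Big(C_1 + C_2 - 2\big(C_1^{1/2} C_2\, C_1^{1/2}\big)^{1/2}\Big),$$

so a Wasserstein-2 bound simultaneously controls the discrepancy in the means and in the covariance operators of the ideal and approximate distributions.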

Neural waveform models such as WaveNet are used in many recent text-to-speech systems, but the original WaveNet is quite slow in waveform generation because of its autoregressive (AR) structure. Although faster non-AR models have recently been reported, they may be prohibitively complicated due to the use of a distilling training method and the blend of other disparate training criteria. This study proposes a non-AR neural source-filter waveform model that can be trained directly using spectrum-based training criteria and the stochastic gradient descent method. Given the input acoustic features, the proposed model first uses a source module to generate a sine-based excitation signal and then uses a filter module to transform the excitation signal into the output speech waveform. Our experiments demonstrated that the proposed model generated waveforms at least 100 times faster than the AR WaveNet and that the quality of its synthetic speech was close to that of speech generated by the AR WaveNet. Ablation test results showed that both the sine-wave excitation signal and the spectrum-based training criteria were essential to the performance of the proposed model.
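A simplified Python sketch of a sine-based excitation in the spirit of the source module (the parameter values are illustrative and the actual source module's formulation, e.g. its handling of harmonics and voiced/unvoiced mixing, is more elaborate): voiced frames get a sine whose phase integrates the upsampled F0 contour, unvoiced frames get noise only.

```python
import numpy as np

def sine_excitation(f0, fs=16000, hop=80, amp=0.1, noise_std=0.003, seed=0):
    """
    f0  : frame-level fundamental frequency in Hz (0 marks unvoiced frames)
    fs  : sampling rate; hop : samples per frame
    Returns a sample-level excitation signal.
    """
    rng = np.random.default_rng(seed)
    f0_up = np.repeat(f0, hop)                    # frame rate -> sample rate
    phase = 2.0 * np.pi * np.cumsum(f0_up / fs)   # instantaneous phase
    voiced = f0_up > 0
    sine = np.where(voiced, amp * np.sin(phase), 0.0)
    return sine + noise_std * rng.standard_normal(len(sine))
```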

Discrete random structures are important tools in Bayesian nonparametrics, and the resulting models have proven effective in density estimation, clustering, topic modeling and prediction, among other tasks. In this paper, we consider nested processes and study the dependence structures they induce. Dependence ranges between homogeneity, corresponding to full exchangeability, and maximum heterogeneity, corresponding to (unconditional) independence across samples. The popular nested Dirichlet process is shown to degenerate to the fully exchangeable case when there are ties across samples at the observed or latent level. To overcome this drawback, which is inherent to nesting general discrete random measures, we introduce a novel class of latent nested processes. These are obtained by adding common and group-specific completely random measures and then normalising to yield dependent random probability measures. We provide results on the partition distributions induced by latent nested processes and develop a Markov chain Monte Carlo sampler for Bayesian inference. A test for distributional homogeneity across groups is obtained as a by-product. The results and their inferential implications are showcased on synthetic and real data.
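The additive construction can be illustrated with a crude finite-dimensional simulation in Python, using gamma completely random measures truncated to finitely many atoms; this illustrates the idea of adding common and group-specific random measures and normalising, and is not the paper's sampler.

```python
import numpy as np

def latent_nested_sketch(n_groups=2, n_atoms=200, a_common=1.0, a_group=1.0, seed=0):
    """
    Finite-dimensional illustration: each group's random probability measure
    normalises the sum of a shared gamma CRM (common atoms, inducing dependence
    across groups) and its own group-specific gamma CRM (distinct atoms).
    """
    rng = np.random.default_rng(seed)
    shared_atoms = rng.standard_normal(n_atoms)                   # common atom locations
    shared_w = rng.gamma(a_common / n_atoms, 1.0, n_atoms)        # common CRM jumps
    measures = []
    for _ in range(n_groups):
        own_atoms = rng.standard_normal(n_atoms)                  # group-specific atoms
        own_w = rng.gamma(a_group / n_atoms, 1.0, n_atoms)        # group-specific jumps
        atoms = np.concatenate([shared_atoms, own_atoms])
        weights = np.concatenate([shared_w, own_w])
        measures.append((atoms, weights / weights.sum()))         # normalised measure
    return measures
```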
