
Some years ago, Snapinn and Jiang [1] considered the interpretation and pitfalls of absolute versus relative treatment-effect measures in analyses of time-to-event outcomes. Through specific examples and analytical considerations based solely on the exponential and Weibull distributions, they reach two conclusions: 1) that two commonly used criteria for clinical effectiveness, the absolute risk reduction (ARR) and the difference in median survival time (MD), directly contradict each other, and 2) that cost-effectiveness depends only on the hazard ratio (HR) and the shape parameter (in the Weibull case) but not on the overall baseline risk of the population. Though provocative, the first conclusion does not hold in either of the two special cases considered, let alone more generally, while the second conclusion is strictly correct only in the exponential case. Therefore, the implication the authors draw, i.e. that all measures of absolute treatment effect are of little value compared with the relative measure of the hazard ratio, is not of general validity, and both absolute and relative measures should continue to be used when appraising clinical evidence.
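The contrast between relative and absolute measures is easy to make concrete in the exponential case, where S(t) = exp(-λt) and the median survival time is ln2/λ. The sketch below (function name and parameter values are illustrative, not from the paper) shows that for a fixed hazard ratio both the ARR and the median difference change with the baseline hazard:

```python
import math

def exp_measures(lam_c, hr, horizon):
    """Treatment-effect measures under exponential survival.

    lam_c: baseline (control) hazard; hr: hazard ratio (treated/control);
    horizon: time point at which the ARR is evaluated."""
    lam_t = hr * lam_c
    s_c = math.exp(-lam_c * horizon)                 # control survival at horizon
    s_t = math.exp(-lam_t * horizon)                 # treated survival at horizon
    arr = s_t - s_c                                  # absolute risk reduction
    md = math.log(2) / lam_t - math.log(2) / lam_c   # median survival difference
    return arr, md

# Same HR = 0.5, two different baseline hazards: both ARR and MD change,
# illustrating that absolute measures carry information the HR alone omits.
print(exp_measures(0.10, 0.5, 5.0))
print(exp_measures(0.02, 0.5, 5.0))
```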

Related content

Understanding when and why interpolating methods generalize well has recently been a topic of interest in statistical learning theory. However, systematically connecting interpolating methods to achievable notions of optimality has only received partial attention. In this paper, we investigate the question of the optimal way to interpolate in linear regression using functions that are linear in the response variable (as is the case for the Bayes-optimal estimator in ridge regression) and depend on the data, the population covariance of the data, the signal-to-noise ratio and the covariance of the prior for the signal, but do not depend on the value of the signal itself nor the noise vector in the training data. We provide a closed-form expression for the interpolator that achieves this notion of optimality and show that it can be derived as the limit of preconditioned gradient descent with a specific initialization. We identify a regime where the minimum-norm interpolator provably generalizes arbitrarily worse than the optimal response-linear achievable interpolator that we introduce, and validate with numerical experiments that the notion of optimality we consider can be achieved by interpolating methods that only use the training data as input in the case of an isotropic prior. Finally, we extend the notion of optimal response-linear interpolation to random features regression under a linear data-generating model that has been previously studied in the literature.
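For concreteness, the minimum-norm interpolator that the paper compares against can be computed directly with the pseudoinverse; the dimensions and data below are an illustrative toy setup, not the paper's experiments:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 20, 50                       # overparameterized: more features than samples
X = rng.standard_normal((n, d))
beta_star = rng.standard_normal(d)
y = X @ beta_star                   # noiseless linear responses

# Minimum-norm interpolator: beta = X^T (X X^T)^{-1} y = pinv(X) y.
beta_mn = np.linalg.pinv(X) @ y

assert np.allclose(X @ beta_mn, y)  # it interpolates the training data exactly
# Among all interpolators (beta_star is one), it has the smallest norm.
assert np.linalg.norm(beta_mn) <= np.linalg.norm(beta_star) + 1e-9
```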

We formally introduce a time series statistical learning method, called Adaptive Learning, capable of handling model selection, out-of-sample forecasting and interpretation in a noisy environment. Through simulation studies we demonstrate that the method can outperform traditional model selection criteria such as AIC and BIC in the presence of regime switching, and that it facilitates window-size determination when the data generating process is time-varying. Empirically, we use the method to forecast S&P 500 returns across multiple forecast horizons, employing information from the VIX Curve and the Yield Curve. We find that Adaptive Learning models are generally on par with, if not better than, the best of the parametric models a posteriori, evaluated in terms of MSE, while also outperforming them under cross-validation. We present a financial application of the learning results and an interpretation of the learning regime during the 2020 market crash. These studies can be extended both in a statistical direction and in terms of financial applications.
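A minimal sketch of the flavor of such a procedure, assuming rolling means as the (deliberately simple) candidate models and recent squared error as the selection criterion; neither choice is taken from the paper:

```python
import numpy as np

def adaptive_window_forecast(y, windows=(5, 20, 60), eval_span=30):
    """One-step-ahead forecasts where, at each time, the rolling-mean
    window with the lowest recent squared error is selected.
    Illustrative only: the paper's Adaptive Learning method handles
    general model classes, not just rolling means."""
    y = np.asarray(y, float)
    preds = np.full(len(y), np.nan)
    errs = {w: [] for w in windows}          # per-window squared-error history
    for t in range(max(windows), len(y)):
        cand = {w: y[t - w:t].mean() for w in windows}
        scores = {w: np.mean(errs[w][-eval_span:]) if errs[w] else 0.0
                  for w in windows}
        best = min(windows, key=lambda w: scores[w])
        preds[t] = cand[best]
        for w in windows:                    # update histories after forecasting
            errs[w].append((cand[w] - y[t]) ** 2)
    return preds
```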

Performing causal inference in observational studies requires assuming that confounding variables are correctly adjusted for. G-computation methods are often used in these scenarios, with several recent proposals using Bayesian versions of g-computation. In settings with few confounders, standard models can be employed; however, as the number of confounders increases, these models become less feasible, as fewer observations are available for each unique combination of confounding variables. In this paper we propose a new model for estimating treatment effects in observational studies that incorporates both parametric and nonparametric outcome models. By conceptually splitting the data, we can combine these models while maintaining a conjugate framework, allowing us to avoid MCMC methods. Approximations using the central limit theorem and random sampling allow our method to scale to high-dimensional confounders while maintaining computational efficiency. We illustrate the model using carefully constructed simulation studies, and compare its computational costs to those of other benchmark models.
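For reference, plain (frequentist) parametric g-computation with a single linear outcome model looks as follows; the paper's contribution replaces this single model with combined parametric/nonparametric Bayesian models, which this sketch does not attempt:

```python
import numpy as np

def g_computation_ate(x, a, y):
    """Parametric g-computation: fit an outcome model E[Y | A, X] by least
    squares, then average predictions with treatment set to 1 and to 0
    for everyone. x: confounder(s), a: binary treatment, y: outcome."""
    n = len(y)
    Z = np.column_stack([np.ones(n), a, x])
    coef, *_ = np.linalg.lstsq(Z, y, rcond=None)
    Z1 = np.column_stack([np.ones(n), np.ones(n), x])    # everyone treated
    Z0 = np.column_stack([np.ones(n), np.zeros(n), x])   # everyone untreated
    return np.mean(Z1 @ coef - Z0 @ coef)
```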

In this paper I describe some substantial extensions to the survsim command for simulating survival data. survsim can now simulate survival data from a standard parametric distribution, from a custom/user-defined distribution, from a fitted merlin model, from a specified cause-specific-hazards competing-risks model, or from a specified general multi-state model (with multiple timescales). Left truncation (delayed entry) is now also available in all settings. I illustrate the command with some examples, demonstrating the huge flexibility it offers for evaluating statistical methods.
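The inversion principle behind this kind of simulation can be illustrated outside Stata; a hedged Python analogue for the standard-parametric Weibull case (survsim itself covers far more, and the function name here is made up) is:

```python
import numpy as np

def simulate_weibull_survival(n, lam, gamma, max_follow_up, seed=0):
    """Simulate event times from a Weibull hazard h(t) = lam*gamma*t**(gamma-1)
    by inverting the survival function S(t) = exp(-lam * t**gamma),
    with administrative censoring at max_follow_up."""
    rng = np.random.default_rng(seed)
    u = rng.uniform(size=n)
    t = (-np.log(u) / lam) ** (1.0 / gamma)   # inverse-transform sampling
    event = t <= max_follow_up                # True if the event is observed
    time = np.minimum(t, max_follow_up)       # censored at end of follow-up
    return time, event
```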

Network meta-analysis (NMA) allows the combination of direct and indirect evidence from a set of randomized clinical trials. Performing NMA using individual patient data (IPD) is considered a "gold standard" approach, as it provides several advantages over NMA based on aggregate data; for example, it allows advanced modelling of covariates or covariate-by-treatment interactions. An important issue in IPD NMA is the selection of influential parameters among terms that account for inconsistency, covariates, covariate-by-treatment interactions or non-proportionality of treatment effects for time-to-event data. This issue has not yet been studied in depth in the literature, in particular not for time-to-event data. A major difficulty is to jointly account for between-trial heterogeneity, which could have a major influence on the selection process. The use of penalized generalized mixed-effect models is one solution, but existing implementations have several shortcomings and a computational cost that precludes their use for complex IPD NMA. In this article, we propose a penalized Poisson regression model to perform IPD NMA of time-to-event data. It is based only on fixed-effect parameters, which improves its computational cost relative to the use of random effects, and it can easily be implemented using existing penalized-regression packages. Computer code is shared for implementation. The methods were applied to simulated data to illustrate the importance of taking between-trial heterogeneity into account during the selection procedure. Finally, they were applied to an IPD NMA of overall survival under chemotherapy and radiotherapy in nasopharyngeal carcinoma.
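As a rough illustration of the fixed-effect penalized approach, here is a minimal lasso-penalized Poisson fit via proximal gradient descent; a real IPD NMA design would add trial, treatment and interaction terms, which are omitted in this sketch:

```python
import numpy as np

def lasso_poisson(X, y, lam=0.05, lr=0.1, iters=2000):
    """L1-penalized Poisson regression fitted by proximal gradient descent
    (ISTA): gradient step on the Poisson negative log-likelihood, then
    soft-thresholding, which zeroes out non-influential parameters."""
    n, d = X.shape
    b = np.zeros(d)
    for _ in range(iters):
        mu = np.exp(X @ b)                    # Poisson means under current fit
        g = X.T @ (mu - y) / n                # gradient of the mean NLL
        b = b - lr * g
        b = np.sign(b) * np.maximum(np.abs(b) - lr * lam, 0.0)  # soft threshold
    return b
```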

From the distributional characterizations that lie at the heart of Stein's method we derive explicit formulae for the mass functions of discrete probability laws that identify those distributions. These identities are applied to develop tools for the solution of statistical problems. Our characterizations, and hence the applications built on them, do not require any knowledge about normalization constants of the probability laws. To demonstrate that our statistical methods are sound, we provide comparative simulation studies for the testing of fit to the Poisson distribution and for parameter estimation of the negative binomial family when both parameters are unknown. We also consider the problem of parameter estimation for discrete exponential-polynomial models which generally are non-normalized.
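As a toy illustration of how such a characterization is used without normalization constants: for the Poisson law, E[λ f(X+1) - X f(X)] = 0 for all bounded f, and the empirical version of this quantity separates a Poisson sample from a non-Poisson sample with the same mean. The test functions and sample sizes below are illustrative choices, not the paper's procedures:

```python
import numpy as np

def stein_poisson_discrepancy(x, fs):
    """Empirical Stein discrepancies for the Poisson characterization
    E[lambda * f(X+1) - X * f(X)] = 0, with lambda estimated by the
    sample mean. No normalization constant of the law is needed."""
    lam = x.mean()
    return [np.mean(lam * f(x + 1) - x * f(x)) for f in fs]

rng = np.random.default_rng(1)
fs = [lambda t: t, lambda t: np.minimum(t, 3.0)]
pois = rng.poisson(2.0, 20000)
geom = rng.geometric(1.0 / 3.0, 20000) - 1   # same mean 2, different law
print(stein_poisson_discrepancy(pois, fs))   # near zero for Poisson data
print(stein_poisson_discrepancy(geom, fs))   # clearly nonzero for geometric
```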

In the Online Machine Covering problem, jobs, defined by their sizes, arrive one by one and have to be assigned to $m$ parallel and identical machines, with the goal of maximizing the load of the least-loaded machine. In this work, we study the Machine Covering problem in the recently popular random-order model. Here no extra resources are present; instead, the adversary is weakened in that it can only decide upon the input set, while the jobs are revealed in uniformly random order. This model is particularly relevant to Machine Covering, where lower bounds are usually associated with highly structured input sequences. We first analyze Graham's Greedy strategy in this context and establish that its competitive ratio decreases slightly to $\Theta\left(\frac{m}{\log(m)}\right)$, which is asymptotically tight. Then, as our main result, we present an improved $\tilde{O}(\sqrt[4]{m})$-competitive algorithm for the problem. This result is achieved by exploiting the extra information coming from the random order of the jobs, using sampling techniques to devise an improved mechanism for distinguishing jobs that are relatively large from small ones. We complement this result with a first lower bound, showing that no algorithm can have a competitive ratio of $O\left(\frac{\log(m)}{\log\log(m)}\right)$ in the random-order model. This lower bound is achieved by studying a novel variant of the Secretary problem, which could be of independent interest.
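Graham's Greedy strategy itself is simple to state: each arriving job goes to the currently least-loaded machine. A minimal sketch (the job sets and machine counts are illustrative):

```python
import heapq

def greedy_cover(jobs, m):
    """Graham's Greedy for Machine Covering: assign each arriving job to
    the currently least-loaded of m machines and return the final minimum
    load (the quantity Machine Covering maximizes)."""
    loads = [0.0] * m
    heapq.heapify(loads)                 # min-heap gives the least-loaded machine
    for j in jobs:
        least = heapq.heappop(loads)
        heapq.heappush(loads, least + j)
    return min(loads)

# Order matters for Greedy: an adversarial order of large-then-small jobs
# can leave one machine nearly empty, while a random order typically
# spreads the load more evenly -- the effect the random-order model captures.
```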

Causal inference has been increasingly reliant on observational studies with rich covariate information. To build tractable causal procedures, such as the doubly robust estimators, it is imperative to first extract important features from high or even ultra-high dimensional data. In this paper, we propose the causal ball screening for confounder selection from modern ultra-high dimensional data sets. Unlike the familiar task of variable selection for prediction modeling, our confounder selection procedure aims to control for confounding while improving efficiency in the resulting causal effect estimate. Previous empirical and theoretical studies imply that one should exclude causes of the treatment that are not confounders. Motivated by these results, our goal is to keep all the predictors of the outcome in both the propensity score and outcome regression models. A distinctive feature of our proposal is that we use an outcome model-free procedure for propensity score model selection, thereby maintaining double robustness in the resulting causal effect estimator. Our theoretical analyses show that the proposed procedure enjoys a number of properties, including model selection consistency, normality and efficiency. Synthetic and real data analyses show that our proposal compares favorably with existing methods in a range of realistic settings.
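The doubly robust estimator that such a selection procedure feeds into can be sketched as the standard AIPW estimator below; the selection step itself, which is the paper's contribution, is not shown, and the simulated models are illustrative:

```python
import numpy as np

def aipw_ate(y, a, mu1, mu0, ps):
    """Augmented IPW (doubly robust) ATE estimate: consistent if either
    the outcome-model predictions (mu1, mu0) or the propensity scores
    (ps) are correctly specified."""
    return np.mean(mu1 - mu0
                   + a * (y - mu1) / ps
                   - (1 - a) * (y - mu0) / (1 - ps))

# Toy check with one confounder x and true treatment effect 2.
rng = np.random.default_rng(0)
n = 5000
x = rng.standard_normal(n)
ps = 1.0 / (1.0 + np.exp(-0.5 * x))       # true propensity scores
a = rng.binomial(1, ps)
y = 2.0 * a + x + rng.standard_normal(n)
ate = aipw_ate(y, a, mu1=2.0 + x, mu0=x, ps=ps)   # both models correct here
```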

Competing risks data are common in medical studies, and the sub-distribution hazard (SDH) ratio is considered an appropriate measure. However, because hazards themselves are not easy to interpret clinically and because the SDH ratio is valid only under the proportional-SDH assumption, this article introduces an alternative index under competing risks, named the restricted mean time lost (RMTL). Several test procedures are also constructed based on the RMTL. First, we introduce the definition and estimation of the RMTL based on Aalen-Johansen cumulative incidence functions. Then, we consider several combined tests based on the SDH and the RMTL difference (RMTLd). The statistical properties of the methods are evaluated using simulations, and the methods are applied to two examples. The type I errors of the combined tests are close to the nominal level, and all combined tests show acceptable power in all situations. In conclusion, the RMTL can meaningfully summarize treatment effects for clinical decision making, and the combined tests have robust power under various conditions, which can be considered for statistical inference in real data analysis.
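The RMTL up to a horizon tau is the area under the cause-specific Aalen-Johansen cumulative incidence function. A minimal sketch for right-censored competing-risks data without covariates (variable names and the encoding of `cause` are illustrative):

```python
import numpy as np

def rmtl(time, cause, tau):
    """Restricted mean time lost to cause 1 up to tau: the area under the
    Aalen-Johansen cumulative incidence function F1.
    cause: 0 = censored, 1 = event of interest, 2 = competing event."""
    order = np.argsort(time)
    time, cause = np.asarray(time)[order], np.asarray(cause)[order]
    at_risk = len(time)
    surv = 1.0          # overall event-free survival just before t
    cif, area, last_t = 0.0, 0.0, 0.0
    for t in np.unique(time[time <= tau]):
        area += cif * (t - last_t)              # integrate the step function
        mask = time == t
        d1 = np.sum(mask & (cause == 1))        # events of interest at t
        d_any = np.sum(mask & (cause != 0))     # any event at t
        cif += surv * d1 / at_risk              # Aalen-Johansen increment
        surv *= 1.0 - d_any / at_risk           # overall Kaplan-Meier update
        at_risk -= mask.sum()
        last_t = t
    area += cif * (tau - last_t)
    return area
```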

Accurately classifying the malignancy of lesions detected in a screening scan plays a critical role in reducing false positives. Through extracting and analyzing a large number of quantitative image features, radiomics holds great potential for differentiating malignant tumors from benign ones. Since not all radiomic features contribute to an effective classification model, selecting an optimal feature subset is critical. This work proposes a new multi-objective feature selection (MO-FS) algorithm that considers sensitivity and specificity simultaneously as the objective functions during feature selection. In MO-FS, we developed a modified entropy-based termination criterion (METC) that stops the algorithm automatically rather than relying on a preset number of generations. We also designed a solution selection methodology for multi-objective learning using the evidential reasoning approach (SMOLER) to automatically select the optimal solution from the Pareto-optimal set. Furthermore, an adaptive mutation operation was developed to generate the mutation probability in MO-FS automatically. MO-FS was evaluated on classifying lung nodule malignancy in low-dose CT and breast lesion malignancy in digital breast tomosynthesis. Compared with other commonly used feature selection methods, the experimental results for both tasks demonstrated that the feature set selected by MO-FS achieved better classification performance.
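The Pareto-optimal set that SMOLER selects from can be illustrated with a minimal non-domination filter over (sensitivity, specificity) pairs; the numbers below are made up for illustration:

```python
def pareto_front(points):
    """Keep the non-dominated (sensitivity, specificity) pairs: a point is
    dominated if another point is at least as good on both objectives and
    strictly better on one. A multi-objective feature selector retains
    exactly this Pareto-optimal set of candidate feature subsets."""
    front = []
    for i, (s1, p1) in enumerate(points):
        dominated = any(
            s2 >= s1 and p2 >= p1 and (s2 > s1 or p2 > p1)
            for j, (s2, p2) in enumerate(points) if j != i
        )
        if not dominated:
            front.append((s1, p1))
    return front

# (0.6, 0.6) is dominated by (0.8, 0.8); the other three trade off
# sensitivity against specificity and all survive.
print(pareto_front([(0.9, 0.5), (0.8, 0.8), (0.7, 0.9), (0.6, 0.6)]))
```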
