
Risk-adjusted quality measures are used to evaluate healthcare providers while controlling for factors beyond their control. Existing healthcare provider profiling approaches typically assume that the risk adjustment is perfect and that the between-provider variation in quality measures is entirely due to the quality of care. In practice, however, even with very good models for risk adjustment, some between-provider variation will be due to incomplete risk adjustment, which should be recognized in assessing and monitoring providers. Otherwise, conventional methods disproportionately identify larger providers as outliers, even though their provider effects need not be "extreme." Motivated by efforts to evaluate the quality of care provided by transplant centers, we develop a composite evaluation score based on a novel individualized empirical null method, which robustly accounts for overdispersion due to unobserved risk factors, models the marginal variance of standardized scores as a function of the effective center size, and requires only publicly available center-level statistics. The evaluations of United States kidney transplant centers based on the proposed composite score differ substantially from those based on conventional methods. Simulations show that the proposed empirical null approach classifies centers more accurately in terms of quality of care than existing methods.
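To make the individualized-null idea concrete, the following minimal Python sketch fits a size-dependent variance model to center-level standardized scores and flags each center against its own null. The linear variance form, the least-squares fit, and all variable names are illustrative assumptions; the paper's robust estimator and composite score are not reproduced here.

```python
import numpy as np
from scipy.stats import norm

def individualized_empirical_null(z, size, alpha=0.05):
    """Flag centers whose standardized score is extreme relative to a null
    whose variance grows with effective center size.

    Illustrative assumptions (not from the paper): the null mean is the
    median score, and Var(z_i) is modeled as a + b * size_i, fitted by
    ordinary least squares on the squared centered scores.
    """
    z, size = np.asarray(z, float), np.asarray(size, float)
    zc = z - np.median(z)                          # crude null-mean estimate
    X = np.column_stack([np.ones_like(size), size])
    a, b = np.linalg.lstsq(X, zc**2, rcond=None)[0]
    var_null = np.clip(a + b * size, 1e-8, None)   # individualized null variance
    pvals = 2 * norm.sf(np.abs(zc) / np.sqrt(var_null))
    return pvals < alpha, pvals

# Toy usage with simulated center-level summaries: overdispersion grows with size.
rng = np.random.default_rng(0)
size = rng.integers(20, 500, 60).astype(float)
z = rng.normal(0.0, np.sqrt(1 + 0.004 * size))
z[:3] += 4.0                                       # three genuinely poor centers
flags, pvals = individualized_empirical_null(z, size)
print(np.flatnonzero(flags))
```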

Related content

Generalized linear mixed models (GLMMs) are a popular tool for analyzing clustered data, but when the number of clusters is small to moderate, standard statistical tests may produce elevated type I error rates. Small-sample corrections have been proposed to address this issue for continuous or binary outcomes without covariate adjustment. However, which tests are appropriate for count outcomes or under covariate-adjusted models remains unknown. An important setting in which this issue arises is cluster-randomized trials (CRTs). Because many CRTs have just a few clusters (e.g., clinics or health systems), covariate adjustment is particularly critical to address potential chance imbalance and/or low power (e.g., adjustment following stratified randomization or for the baseline value of the outcome). We conducted simulations to evaluate GLMM-based tests of the treatment effect that account for a small (10) or moderate (20) number of clusters in a parallel-group CRT setting, across scenarios of covariate adjustment (including adjustment for one or more person-level or cluster-level covariates) for both binary and count outcomes. We find that when the intraclass correlation is non-negligible ($\geq 0.01$) and the number of covariates is small ($\leq 2$), likelihood ratio tests with a between-within denominator degree of freedom have type I error rates close to the nominal level. When the number of covariates is moderate ($\geq 5$), the relative performance of the tests varied considerably across our simulation scenarios and no method performed uniformly well. We therefore recommend adjusting for no more than a few covariates and using likelihood ratio tests with a between-within denominator degree of freedom.
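As a rough illustration of the recommended test, the sketch below refers a likelihood-ratio statistic (obtained from GLMM fits produced elsewhere, e.g. in standard mixed-model software) to an F reference with a between-within denominator degrees of freedom. The specific ddf convention (clusters minus cluster-level fixed-effect parameters) and the F-referencing of the LR statistic describe one common implementation and are stated as assumptions, not necessarily the authors' exact procedure.

```python
import numpy as np
from scipy.stats import f as f_dist

def lrt_between_within(loglik_full, loglik_reduced, n_clusters,
                       n_cluster_level_params, num_df=1):
    """Illustrative sketch: refer a likelihood-ratio statistic to an F
    reference with a between-within denominator degrees of freedom.

    Assumptions (not verified against the paper): the LR statistic divided by
    its numerator df is compared to F(num_df, ddf), where ddf equals the
    number of clusters minus the number of cluster-level fixed-effect
    parameters (counting the intercept and the treatment indicator).
    """
    lr_stat = 2.0 * (loglik_full - loglik_reduced)
    ddf = n_clusters - n_cluster_level_params
    p_value = f_dist.sf(lr_stat / num_df, num_df, ddf)
    return lr_stat, ddf, p_value

# Toy usage: hypothetical log-likelihoods from GLMM fits with/without treatment.
lr, ddf, p = lrt_between_within(loglik_full=-512.3, loglik_reduced=-514.9,
                                n_clusters=10, n_cluster_level_params=2)
print(f"LR = {lr:.2f}, ddf = {ddf}, p = {p:.3f}")
```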

Ephemeral Diffie-Hellman Over COSE (EDHOC) aims to be a very compact and lightweight authenticated Diffie-Hellman key exchange with ephemeral keys. It is expected to provide mutual authentication, forward secrecy, and identity protection, with a 128-bit security level. A formal analysis of a former version was already proposed at SECRYPT '21, leading to some improvements in the ongoing IETF evaluation process. Unfortunately, while formal analysis can detect some misconceptions in the protocol, it cannot evaluate the actual security level. In this paper, we study the latest version. Although we do not find complete breaks, we exhibit attacks requiring $2^{64}$ operations, which contradict the expected 128-bit security level. We then propose improvements, some of them at no additional cost, to achieve 128-bit security for all the security properties (i.e., key privacy, mutual authentication, and identity protection).
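For orientation only (this is generic arithmetic, not a description of the specific attacks in the paper): a 128-bit security level is usually read as requiring on the order of $2^{128}$ operations, whereas a generic birthday-bound collision search over 128-bit values already succeeds after roughly the square root of the space size.

```latex
% Illustrative arithmetic only: a collision among uniformly random 128-bit
% values is expected after roughly sqrt(2^128) trials,
\[
  \sqrt{2^{128}} \;=\; 2^{64} \;\ll\; 2^{128},
\]
% which is far below the work factor implied by a 128-bit security claim.
```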

Non-Euclidean data is currently prevalent in many fields, necessitating the development of novel concepts such as distribution functions, quantiles, rankings, and signs for such data in order to conduct nonparametric statistical inference. This study provides new perspectives on quantiles, both local and global, in metric spaces. This is realized by expanding upon the metric distribution function proposed by Wang et al. (2021). Ranks and signs are defined at both the local and global levels as a natural consequence of the center-outward ordering of metric spaces brought about by the local and global quantiles. Theoretical properties are established, such as the root-$n$ consistency and uniform consistency of the local and global empirical quantiles and the distribution-freeness of ranks and signs. The empirical metric median, defined here as the 0th empirical global metric quantile, is proven to be resistant to contamination by means of both theoretical and numerical approaches. The value of the proposed quantiles is demonstrated through extensive simulations in a number of metric spaces. Moreover, we introduce a family of fast rank-based independence tests for a generic metric space; Monte Carlo experiments show good finite-sample performance of the tests. Finally, the quantiles are demonstrated in a real-world setting by analysing hippocampal data.
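The construction builds on the metric distribution function; a minimal sketch of its empirical version, in the commonly cited form $F_n(u,v) = n^{-1}\sum_i \mathbb{1}\{d(X_i,u) \le d(u,v)\}$, is given below for a toy non-Euclidean space (the unit circle with arc-length distance). The paper's local and global quantiles, ranks, and signs are built on top of this object and are not reproduced here; the example data and distance are invented for illustration.

```python
import numpy as np

def empirical_metric_distribution(X, dist, u, v):
    """Empirical metric distribution function at the pair (u, v): the fraction
    of sample points in the closed ball centred at u with radius d(u, v).
    This follows the commonly cited form of the metric distribution function
    of Wang et al. (2021); the paper's quantiles build on it and are omitted.
    """
    radius = dist(u, v)
    return np.mean([dist(x, u) <= radius for x in X])

# Toy usage on the unit circle with geodesic (arc-length) distance.
def arc_dist(a, b):
    d = np.abs(a - b) % (2 * np.pi)
    return min(d, 2 * np.pi - d)

rng = np.random.default_rng(1)
angles = rng.uniform(0, 2 * np.pi, 500)
print(empirical_metric_distribution(angles, arc_dist, u=0.0, v=1.0))
```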

In this paper, we derive copula-based and empirical dependency models (DMs) for simulating non-independent variables, and then propose a new approach for determining the distribution of the model outputs conditional on every subset of inputs. Our approach relies on equivalent representations of the model outputs using DMs, and an algorithm is provided for selecting the representations that are necessary and sufficient for deriving the distribution of the model outputs given each subset of inputs. In sensitivity analysis, the selected representations allow for assessing the main, total, and interaction effects of each subset of inputs. We then introduce the first-order and total indices of every subset of inputs, with the former index no greater than the latter. Consistent estimators of these indices are provided, together with their asymptotic distributions. Analytical and numerical results are presented for single-response and multivariate-response models.
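For readers unfamiliar with the two indices mentioned above, the sketch below gives generic Monte Carlo (pick-freeze) estimators of first-order and total Sobol' indices for independent inputs. This is background illustration only; the paper's DM-based construction for dependent inputs, and its indices for subsets of inputs, are different. The test model and sampler are invented for the example.

```python
import numpy as np

def sobol_first_total(model, sampler, n=2**12, seed=0):
    """Pick-freeze Monte Carlo estimates of first-order and total Sobol'
    indices for independent inputs (Saltelli/Jansen-type estimators).
    Generic illustration only; not the paper's DM-based estimators.
    """
    rng = np.random.default_rng(seed)
    A, B = sampler(rng, n), sampler(rng, n)      # two independent input blocks
    yA, yB = model(A), model(B)
    var = np.var(np.concatenate([yA, yB]))
    d = A.shape[1]
    first, total = np.empty(d), np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                      # replace column i by B's column
        yABi = model(ABi)
        first[i] = np.mean(yB * (yABi - yA)) / var
        total[i] = 0.5 * np.mean((yA - yABi) ** 2) / var
    return first, total

# Toy usage: Ishigami-type model with uniform inputs on [-pi, pi].
sampler = lambda rng, n: rng.uniform(-np.pi, np.pi, size=(n, 3))
model = lambda X: (np.sin(X[:, 0]) + 7 * np.sin(X[:, 1]) ** 2
                   + 0.1 * X[:, 2] ** 4 * np.sin(X[:, 0]))
S1, ST = sobol_first_total(model, sampler)
print(np.round(S1, 2), np.round(ST, 2))
```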

In explainable machine learning, local post-hoc explanation algorithms and inherently interpretable models are often seen as competing approaches. In this work, we offer a novel perspective on Shapley Values, a prominent post-hoc explanation technique, and show that they are strongly connected with Glassbox-GAMs, a popular class of interpretable models. We introduce $n$-Shapley Values, a natural extension of Shapley Values that explains individual predictions with interaction terms up to order $n$. As $n$ increases, the $n$-Shapley Values converge towards the Shapley-GAM, a uniquely determined decomposition of the original function. From the Shapley-GAM, we can compute Shapley Values of arbitrary order, which gives precise insights into the limitations of these explanations. We then show that Shapley Values recover generalized additive models of order $n$, assuming that we allow for interaction terms up to order $n$ in the explanations. This implies that the original Shapley Values recover Glassbox-GAMs. On the technical side, we show that there is a one-to-one correspondence between different ways to choose the value function and different functional decompositions of the original function. This provides a novel perspective on the question of how to choose the value function. We also present an empirical analysis of the degree of variable interaction present in various standard classifiers, and discuss the implications of our results for algorithmic explanations. A Python package to compute $n$-Shapley Values and replicate the results in this paper is available at \url{https://github.com/tml-tuebingen/nshap}.
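To ground the discussion of value functions, the following sketch computes exact Shapley values by enumerating subsets for a user-supplied value function defined on feature subsets. It is a textbook enumeration for illustration, not the API of the nshap package, and the toy value function is invented for the example.

```python
import numpy as np
from itertools import combinations
from math import factorial

def exact_shapley(value, d):
    """Exact Shapley values for a value function on feature subsets.
    `value(S)` maps a tuple of feature indices to a real number (e.g. the
    expected model output given the features in S). Textbook enumeration,
    exponential in d; not the nshap package API.
    """
    phi = np.zeros(d)
    for i in range(d):
        others = [j for j in range(d) if j != i]
        for k in range(len(others) + 1):
            for S in combinations(others, k):
                w = factorial(len(S)) * factorial(d - len(S) - 1) / factorial(d)
                phi[i] += w * (value(tuple(sorted(S + (i,)))) - value(S))
    return phi

# Toy value function with a pure pairwise interaction between features 0 and 1.
def value(S):
    v = 0.0
    if 0 in S: v += 1.0
    if 0 in S and 1 in S: v += 2.0   # interaction, split equally by Shapley
    return v

print(exact_shapley(value, d=3))     # -> [2., 1., 0.]
```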

Subsampling, or subdata selection, is a useful approach in large-scale statistical learning. Most existing studies focus on model-based subsampling methods, which depend heavily on the model assumption. In this paper, we consider a model-free subsampling strategy for generating subdata from the original full data. To measure how well a subdata set represents the original data, we propose a criterion, the generalized empirical F-discrepancy (GEFD), and study its theoretical properties in connection with the classical generalized L2-discrepancy in the theory of uniform designs. These properties allow us to develop a low-GEFD, data-driven subsampling method based on existing uniform designs. Through simulation examples and a real case study, we show that the proposed subsampling method is superior to random sampling. Moreover, our method remains robust under diverse model specifications, whereas other popular subsampling methods underperform. In practice, such a model-free property is more appealing than model-based subsampling, which may perform poorly when the model is misspecified, as demonstrated in our simulation studies.
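As a loose illustration of scoring subdata by a model-free representativeness criterion, the sketch below compares candidate subsamples to the full data via a simple marginal Kolmogorov-Smirnov distance and keeps the best candidate. This stand-in criterion is not the GEFD, and random candidate generation is not the paper's uniform-design-based construction; both are assumptions made only for the example.

```python
import numpy as np

def marginal_ecdf_discrepancy(full, sub):
    """Crude model-free representativeness score: the largest Kolmogorov-
    Smirnov distance between subdata and full-data marginals. Illustration
    only; this is not the GEFD criterion of the paper.
    """
    worst = 0.0
    for j in range(full.shape[1]):
        grid = np.sort(full[:, j])
        F_full = np.searchsorted(grid, grid, side="right") / len(full)
        F_sub = np.searchsorted(np.sort(sub[:, j]), grid, side="right") / len(sub)
        worst = max(worst, np.max(np.abs(F_full - F_sub)))
    return worst

# Keep the best of several random candidate subsamples under the criterion.
rng = np.random.default_rng(2)
full = rng.normal(size=(100_000, 4))
candidates = [full[rng.choice(len(full), 500, replace=False)] for _ in range(20)]
best = min(candidates, key=lambda s: marginal_ecdf_discrepancy(full, s))
print(marginal_ecdf_discrepancy(full, best))
```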

This paper proposes a generalization of Gaussian mixture models in which the mixture weight is allowed to behave as an unknown function of time. This model successfully captures the features of the data, as demonstrated with simulated and real datasets, and can be useful in studies such as clustering, change-point detection, and process control. To estimate the mixture weight function, we propose two new Bayesian nonlinear dynamic approaches for polynomial models, which can be extended to other problems involving polynomial nonlinear dynamic models. One of the methods, called here component-wise Metropolis-Hastings, applies the Metropolis-Hastings algorithm to each local level component of the state equation; it is more general and can be used in any situation where the observation and state equations are nonlinearly connected. The other method tends to be faster but is applied specifically to binary data (using the probit link function). The performance of these estimation methods, in the context of the proposed dynamic Gaussian mixture model, is evaluated through simulated datasets. An application to an array Comparative Genomic Hybridization (aCGH) dataset from glioblastoma cancer illustrates our proposal, highlighting the ability of the method to detect chromosome aberrations.
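A minimal sketch of the component-wise Metropolis-Hastings idea is given below for a single local-level state sequence with a probit observation link: each latent state is updated one at a time with a symmetric random-walk proposal. The model, the prior at the initial state, and the tuning constants are illustrative assumptions; the authors' full sampler (which also updates the mixture-weight function and hyperparameters) is not reproduced.

```python
import numpy as np
from math import erf, log, log1p, pi, sqrt

def norm_logpdf(x, mu, sd):
    return -0.5 * ((x - mu) / sd) ** 2 - log(sd * sqrt(2.0 * pi))

def probit(x):  # standard normal CDF
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def componentwise_mh_states(y, n_iter=1000, tau=0.3, step=0.4, seed=3):
    """Component-wise Metropolis-Hastings for the latent states of a
    local-level model with a probit observation link:
        theta_t | theta_{t-1} ~ N(theta_{t-1}, tau^2),  y_t ~ Bernoulli(probit(theta_t)).
    Each state is updated one at a time with a random-walk proposal.
    Illustrative sketch only; not the paper's full algorithm.
    """
    rng = np.random.default_rng(seed)
    T = len(y)
    theta = np.zeros(T)
    draws = np.empty((n_iter, T))

    def logpost(x, t):
        lp = norm_logpdf(x, theta[t - 1] if t > 0 else 0.0, tau)  # N(0, tau) at t=0 (assumed)
        if t + 1 < T:
            lp += norm_logpdf(theta[t + 1], x, tau)
        p = min(max(probit(x), 1e-12), 1 - 1e-12)
        return lp + (log(p) if y[t] == 1 else log1p(-p))

    for it in range(n_iter):
        for t in range(T):
            prop = theta[t] + step * rng.standard_normal()
            if log(rng.uniform()) < logpost(prop, t) - logpost(theta[t], t):
                theta[t] = prop
        draws[it] = theta
    return draws

# Toy usage: binary series whose success probability drifts upward over time.
rng = np.random.default_rng(4)
y = (np.linspace(-1.5, 1.5, 50) + rng.normal(0, 1, 50) > 0).astype(int)
draws = componentwise_mh_states(y)
print([round(probit(m), 2) for m in draws[500:].mean(axis=0)[:5]])
```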

Although there is an extensive literature on the eigenvalues of high-dimensional sample covariance matrices, much of it is specialized to Mar\v{c}enko-Pastur (MP) models -- in which observations are represented as linear transformations of random vectors with independent entries. By contrast, less is known in the context of elliptical models, which violate the independence structure of MP models and exhibit quite different statistical phenomena. In particular, very little is known about the scope of bootstrap methods for doing inference with spectral statistics in high-dimensional elliptical models. To fill this gap, we show how a bootstrap approach developed previously for MP models can be extended to handle the different properties of elliptical models. Within this setting, our main theoretical result guarantees that the proposed method consistently approximates the distributions of linear spectral statistics, which play a fundamental role in multivariate analysis. Lastly, we provide empirical results showing that the proposed method also performs well for a variety of nonlinear spectral statistics.
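For concreteness, the sketch below bootstraps a classical linear spectral statistic (the log-determinant of the sample covariance) by resampling rows of data generated from a scale-mixture (elliptical) model. This is a generic illustration of the kind of quantity being approximated; it is not the paper's bootstrap procedure, which is constructed specifically to handle the dependence induced by the elliptical structure.

```python
import numpy as np

def bootstrap_lss(X, stat, n_boot=500, seed=5):
    """Row-resampling bootstrap for a linear spectral statistic
    stat(eigenvalues of the sample covariance). Generic illustration only;
    NOT the paper's bootstrap for elliptical models.
    """
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    def lss(Z):
        evals = np.linalg.eigvalsh(np.cov(Z, rowvar=False))
        return stat(evals)
    boot = np.array([lss(X[rng.integers(0, n, n)]) for _ in range(n_boot)])
    return lss(X), boot

# Toy usage: data from an elliptical (scale-mixture of Gaussian) model;
# the statistic is the log-determinant, a classical linear spectral statistic.
rng = np.random.default_rng(6)
n, p = 400, 50
scales = np.sqrt(rng.chisquare(4, n) / 4)[:, None]   # common per-row scale variable
X = scales * rng.normal(size=(n, p))
obs, boot = bootstrap_lss(X, stat=lambda ev: np.sum(np.log(ev)))
print(obs, np.quantile(boot, [0.025, 0.975]))
```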

Differential Privacy (DP) is an important privacy-enhancing technology for private machine learning systems. It allows one to measure and bound the risk associated with an individual's participation in a computation. However, it was recently observed that DP learning systems may exacerbate bias and unfairness for different groups of individuals. This paper builds on these important observations and sheds light on the causes of the disparate impacts arising in differentially private empirical risk minimization. It focuses on the accuracy disparity arising among groups of individuals in two well-studied DP learning methods: output perturbation and differentially private stochastic gradient descent. The paper analyzes which data and model properties are responsible for the disproportionate impacts, examines why these aspects affect different groups disproportionately, and proposes guidelines to mitigate these effects. The proposed approach is evaluated on several datasets and settings.
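A minimal sketch of one of the two mechanisms studied, output perturbation, is given below: train an L2-regularized logistic regression non-privately, then add Gaussian noise calibrated to a classical sensitivity bound. The unit-norm feature assumption, the sensitivity constant $2/(n\lambda)$ (Chaudhuri et al.), and the Gaussian-mechanism calibration are standard but stated here as assumptions; this is not the paper's code.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def output_perturbation(X, y, lam=0.1, eps=1.0, delta=1e-5, seed=7):
    """Sketch of DP output perturbation for L2-regularized logistic regression.
    Assumes rows of X have norm <= 1 and a 1-Lipschitz loss, so the L2
    sensitivity of the minimizer is bounded by 2/(n*lam); Gaussian noise is
    then calibrated for (eps, delta)-DP. Illustration only.
    """
    rng = np.random.default_rng(seed)
    n = len(y)
    # sklearn's C corresponds to 1/(lam*n) under the (1/n) average-loss scaling.
    clf = LogisticRegression(C=1.0 / (lam * n), fit_intercept=False).fit(X, y)
    w = clf.coef_.ravel()
    sens = 2.0 / (n * lam)                                   # L2 sensitivity bound
    sigma = sens * np.sqrt(2 * np.log(1.25 / delta)) / eps   # Gaussian mechanism
    return w + rng.normal(0, sigma, size=w.shape)

# Toy usage with unit-norm features.
rng = np.random.default_rng(8)
X = rng.normal(size=(2000, 5)); X /= np.linalg.norm(X, axis=1, keepdims=True)
y = ((X @ np.array([1., -1., 0.5, 0., 0.]) + 0.3 * rng.normal(size=2000)) > 0).astype(int)
print(output_perturbation(X, y))
```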

Imaging demonstrates that preclinical and human tumors are heterogeneous, i.e., a single tumor can exhibit multiple regions that behave differently both during normal development and in response to treatment. The large variations observed in control-group tumors can obscure detection of significant therapeutic effects due to the ambiguity in attributing causes of change. This can hinder the development of effective therapies because of limitations in experimental design, rather than therapeutic failure. An improved method to model biological variation and heterogeneity in imaging signals is described. Specifically, Linear Poisson modelling (LPM) evaluates changes in apparent diffusion coefficient (ADC) before and 72 hours after radiotherapy in two xenograft models of colorectal cancer. The statistical significance of the measured changes is compared to that attainable using a conventional t-test analysis of basic ADC distribution parameters. When LPMs were applied to treated tumors, they detected highly significant changes. The analyses were significant for all tumors, equating to a 4-fold gain in power (i.e., equivalent to having a sample size 16 times larger) compared with the conventional approach. In contrast, t-tests only detected highly significant changes at the cohort level, restricting their potential use within personalised medicine and increasing the number of animals required during testing. Furthermore, LPM enabled the relative volumes of responding and non-responding tissue to be estimated for each xenograft model. Leave-one-out analysis of the treated xenografts provided quality control and identified potential outliers, raising confidence in LPM data at clinically relevant sample sizes.
