
In this paper we revisit some common recommendations regarding the analysis of matched-pair and stratified experimental designs in the presence of attrition. Our main objective is to clarify a number of well-known claims about the practice of dropping pairs with an attrited unit when analyzing matched-pair designs. The literature offers contradictory advice about whether dropping pairs is beneficial or harmful, and stratifying into larger groups has been recommended as a resolution to the issue. To address these claims, we derive the estimands obtained from the difference-in-means estimator in a matched-pair design both when the observations from pairs with an attrited unit are retained and when they are dropped. We find limited evidence to support the claim that dropping pairs helps recover the average treatment effect, but we find that it may help in recovering a convex weighted average of conditional average treatment effects. We report similar findings for stratified designs when studying the estimands obtained from a regression of outcomes on treatment with and without strata fixed effects.
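
To make the two estimands concrete, here is a minimal simulation sketch (not the paper's derivation) contrasting the difference-in-means estimator when pairs with an attrited unit are retained versus dropped; the attrition mechanism and the form of effect heterogeneity below are illustrative assumptions only.

```python
# Minimal simulation sketch: difference-in-means in a matched-pair design,
# keeping observed units vs. dropping every pair with an attrited unit.
import numpy as np

rng = np.random.default_rng(0)
n_pairs = 5000

x = rng.normal(size=n_pairs)                    # pair-level covariate
tau = 1.0 + x                                   # heterogeneous treatment effect
y0 = x + rng.normal(size=(n_pairs, 2))          # potential outcomes under control
y1 = y0 + tau[:, None]                          # potential outcomes under treatment

treated_idx = rng.integers(0, 2, size=n_pairs)  # which unit in each pair is treated
y_t = y1[np.arange(n_pairs), treated_idx]
y_c = y0[np.arange(n_pairs), 1 - treated_idx]

# Attrition: units go missing with covariate-dependent probability (assumed mechanism).
miss_t = rng.random(n_pairs) < 0.3 / (1 + np.exp(-x))
miss_c = rng.random(n_pairs) < 0.3 / (1 + np.exp(x))

# Estimator 1: keep all observed units.
dm_keep = y_t[~miss_t].mean() - y_c[~miss_c].mean()

# Estimator 2: drop every pair with an attrited unit.
complete = ~miss_t & ~miss_c
dm_drop = y_t[complete].mean() - y_c[complete].mean()

print(f"ATE = {tau.mean():.3f}, keep = {dm_keep:.3f}, drop = {dm_drop:.3f}")
```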

Related Content

Heterogeneous treatment effects are driven by treatment effect modifiers, pre-treatment covariates that modify the effect of a treatment on an outcome. Current approaches for uncovering these variables are limited to low-dimensional data, data with weakly correlated covariates, or data generated according to parametric processes. We resolve these issues by developing a framework for defining model-agnostic treatment effect modifier variable importance parameters applicable to high-dimensional data with arbitrary correlation structure, deriving one-step, estimating-equation, and targeted maximum likelihood estimators of these parameters, and establishing these estimators' asymptotic properties. This framework is showcased by defining variable importance parameters for data-generating processes with continuous, binary, and time-to-event outcomes and binary treatments, and deriving accompanying multiply robust and asymptotically linear estimators. Simulation experiments demonstrate that these estimators' asymptotic guarantees are approximately achieved in realistic sample sizes for observational and randomized studies alike. This framework is applied to gene expression data collected for a clinical trial assessing the effect of a monoclonal antibody therapy on disease-free survival in breast cancer patients. Genes predicted to have the greatest potential for treatment effect modification have previously been linked to breast cancer. An open-source R package implementing this methodology, unihtee, is made available on GitHub at //github.com/insightsengineering/unihtee.
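
As a rough illustration only, the sketch below computes a plug-in proxy for treatment effect modifier importance, taken here as each standardized covariate's covariance with an estimated CATE; this is an assumed simplification and does not reproduce the paper's one-step, estimating-equation, or TMLE estimators, nor the unihtee package API.

```python
# Rough plug-in sketch of treatment effect modifier importance (illustration only).
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
n, p = 2000, 10
X = rng.normal(size=(n, p))
A = rng.integers(0, 2, size=n)
# Only X[:, 0] modifies the treatment effect in this synthetic example.
y = X[:, 1] + A * (1.0 + 2.0 * X[:, 0]) + rng.normal(size=n)

# T-learner CATE estimate: separate outcome regressions for treated / control.
mu1 = RandomForestRegressor(random_state=0).fit(X[A == 1], y[A == 1])
mu0 = RandomForestRegressor(random_state=0).fit(X[A == 0], y[A == 0])
cate = mu1.predict(X) - mu0.predict(X)

Z = (X - X.mean(0)) / X.std(0)
importance = Z.T @ (cate - cate.mean()) / n   # covariance with standardized covariates
print(np.round(importance, 2))                # coordinate 0 should dominate
```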

Algorithmic stablecoins (AS) are a special type of stablecoin that is not backed by any asset (i.e., without collateral). They stand to revolutionize the way a sovereign fiat currency operates. As implemented, these coins are poorly stabilized in most cases, easily deviating from the price target or even falling into a catastrophic collapse (a.k.a. a death spiral), and are as a result often dismissed as Ponzi schemes. However, is this the whole picture? In this paper, we try to clarify this deceptive concept. We find that a Ponzi scheme is, at its core, a financial arrangement that pays existing investors with funds collected from new ones. Running a Ponzi scheme, however, does not necessarily imply that any participant loses out, as long as the game can be perpetually rolled over; economists call such an arrangement a \textit{rational Ponzi game}. We therefore propose a rational Ponzi game model in the context of AS and derive the conditions under which it holds. We apply the model to examine whether an algorithmic stablecoin is a rational Ponzi game. Accordingly, we discuss two types of algorithmic stablecoins (\text{Rebase} \& \text{Seigniorage shares}) and analyze the historical market performance of two influential projects (\text{Ampleforth} \& \text{TerraUSD}, respectively) to demonstrate the effectiveness of our model.
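
As a toy illustration of the rollover intuition only (not the paper's formal model or its holding conditions), the sketch below tracks a ledger that pays return r to existing investors out of new inflows growing at rate g; the 10% per-period redemption rate is an arbitrary assumption.

```python
# Toy ledger sketch of the rollover idea behind a "rational Ponzi game".
def ponzi_ledger(r, g, inflow0=100.0, periods=50):
    liabilities, inflow = 0.0, inflow0
    for t in range(periods):
        liabilities = liabilities * (1 + r) + inflow  # owe past investors r, take new money
        redemptions = 0.1 * liabilities               # assume 10% cash out each period
        if redemptions > inflow:                      # new money no longer covers payouts
            return f"collapses at period {t}"
        liabilities -= redemptions
        inflow *= (1 + g)                             # next period's inflow
    return "still rolling over"

print(ponzi_ledger(r=0.05, g=0.10))   # inflows outgrow obligations
print(ponzi_ledger(r=0.10, g=0.00))   # obligations outgrow inflows
```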

We investigate semantic guarantees of private learning algorithms in terms of their resilience to training data reconstruction attacks (DRAs) by informed adversaries. To this end, we derive non-asymptotic minimax lower bounds on the adversary's reconstruction error against learners that satisfy differential privacy (DP) and metric differential privacy (mDP). Furthermore, we demonstrate that our lower bound analysis for the latter also covers the high-dimensional regime, wherein the input data dimensionality may be larger than the adversary's query budget. Motivated by the theoretical improvements conferred by metric DP, we extend the privacy analysis of popular deep learning algorithms such as DP-SGD and Projected Noisy SGD to cover the broader notion of metric differential privacy.
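
For context, the sketch below shows one step of the standard DP-SGD recipe (per-example gradient clipping plus Gaussian noise) on a toy squared-error objective; it illustrates the algorithm whose privacy analysis is being extended, not the paper's metric-DP analysis, and the clip norm and noise multiplier are arbitrary choices.

```python
# Minimal NumPy sketch of one DP-SGD step: clip per-example gradients, add noise.
import numpy as np

def dp_sgd_step(w, X, y, lr=0.1, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    rng = rng or np.random.default_rng()
    grads = (X @ w - y)[:, None] * X                         # per-example gradients (squared loss)
    norms = np.linalg.norm(grads, axis=1, keepdims=True)
    grads = grads * np.minimum(1.0, clip_norm / (norms + 1e-12))   # clip each example
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=w.shape)
    g = (grads.sum(axis=0) + noise) / len(X)                 # noisy average gradient
    return w - lr * g

rng = np.random.default_rng(2)
X = rng.normal(size=(256, 5))
w_true = np.ones(5)
y = X @ w_true + 0.1 * rng.normal(size=256)
w = np.zeros(5)
for _ in range(200):
    w = dp_sgd_step(w, X, y, rng=rng)
print(np.round(w, 2))   # approximately recovers w_true despite the added noise
```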

This paper addresses the deconvolution problem of estimating a square-integrable probability density from observations contaminated with additive measurement errors having a known density. The estimator begins with a density estimate of the contaminated observations and minimizes a reconstruction error penalized by an integrated squared $m$-th derivative. Theory for deconvolution has mainly focused on kernel- or wavelet-based techniques, but other methods, including spline-based techniques and this smoothness-penalized estimator, have been found to outperform kernel methods in simulation studies. This paper fills in some of these gaps by establishing asymptotic guarantees for the smoothness-penalized approach. Consistency is established in mean integrated squared error, and rates of convergence are derived for Gaussian, Cauchy, and Laplace error densities, attaining lower bounds already established in the literature. The assumptions are weak for most results; the estimator can be used with a broader class of error densities than the deconvoluting kernel estimator. Our application example estimates the density of the mean cytotoxicity of certain bacterial isolates under random sampling; this mean cytotoxicity can only be measured experimentally with additive error, leading to the deconvolution problem. We also describe a method for approximating the solution by a cubic spline, which reduces to a quadratic program.
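
As a simplified analogue (not the paper's exact estimator or its spline approximation), the Fourier-domain sketch below penalizes the integrated squared $m$-th derivative, which yields a closed-form solution frequency by frequency when the error characteristic function, here Laplace, is known: $\hat f(\omega) = \overline{\varphi_\varepsilon(\omega)}\,\hat h(\omega) / (|\varphi_\varepsilon(\omega)|^2 + \lambda \omega^{2m})$.

```python
# Simplified Fourier-domain sketch of smoothness-penalized deconvolution.
import numpy as np

rng = np.random.default_rng(3)
n = 5000
x = rng.normal(0, 1, n)                         # latent variable, density to recover
eps = rng.laplace(0, 0.5, n)                    # Laplace measurement error (known)
y = x + eps                                     # contaminated observations

grid = np.linspace(-8, 8, 1024)
dx = grid[1] - grid[0]
# Kernel density estimate of the contaminated density h.
bw = 1.06 * y.std() * n ** (-1 / 5)
h = np.exp(-0.5 * ((grid[:, None] - y[None, :]) / bw) ** 2).sum(1) / (n * bw * np.sqrt(2 * np.pi))

w = 2 * np.pi * np.fft.fftfreq(grid.size, d=dx)
phi_eps = 1.0 / (1.0 + (0.5 * w) ** 2)          # Laplace(0, 0.5) characteristic function
h_hat = np.fft.fft(h) * dx
lam, m = 1e-4, 2
f_hat = np.conj(phi_eps) * h_hat / (np.abs(phi_eps) ** 2 + lam * w ** (2 * m))
f = np.real(np.fft.ifft(f_hat)) / dx            # deconvolved density on the grid
print("integrates to about", round(float(np.clip(f, 0, None).sum() * dx), 2))
```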

A predictive model makes outcome predictions based on some given features, i.e., it estimates the conditional probability of the outcome given a feature vector. In general, a predictive model cannot estimate the causal effect of a feature on the outcome, i.e., how the outcome will change if the feature is changed while keeping the values of other features unchanged. This is because causal effect estimation requires interventional probabilities. However, many real-world problems such as personalised decision making, recommendation, and fairness computing need to know the causal effect of any feature on the outcome for a given instance. This is different from the traditional causal effect estimation problem with a fixed treatment variable. This paper first tackles the challenge of estimating the causal effect of any feature (as the treatment) on the outcome w.r.t. a given instance. The theoretical results naturally link a predictive model to causal effect estimations and imply that a predictive model is causally interpretable when the conditions identified in the paper are satisfied. The paper also reveals a robustness property of causally interpretable models. We use experiments to demonstrate that various types of predictive models, when satisfying the conditions identified in this paper, can estimate the causal effects of features as accurately as state-of-the-art causal effect estimation methods. We also show the potential of such causally interpretable predictive models for robust predictions and personalised decision making.
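
A minimal sketch of the idea under assumed identification conditions (for instance, that the remaining features block all back-door paths, standing in for the conditions identified in the paper): the per-instance effect of a binary feature is read off a fitted predictive model by toggling that feature. The data and model below are synthetic.

```python
# Hedged sketch: per-instance feature effect read off a predictive model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
n = 5000
z = rng.normal(size=n)                          # other covariates
t = rng.binomial(1, 1 / (1 + np.exp(-z)))       # binary feature of interest, depends on z
p_y = 1 / (1 + np.exp(-(0.5 * z + 1.5 * t)))
y = rng.binomial(1, p_y)

model = LogisticRegression().fit(np.column_stack([z, t]), y)

def feature_effect(model, z_i):
    """Predicted-outcome contrast for toggling the feature at covariate value z_i."""
    x1 = np.array([[z_i, 1.0]])
    x0 = np.array([[z_i, 0.0]])
    return model.predict_proba(x1)[0, 1] - model.predict_proba(x0)[0, 1]

print(round(feature_effect(model, z_i=0.0), 3))  # individual-level effect at z = 0
```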

In the geosciences, a recurring problem is one of estimating spatial means of a physical field using weighted averages of point observations. An important variant is when individual observations are counted with some probability less than one. This can occur in different contexts: from missing data to estimating the statistics across subsamples. In such situations, the spatial mean is a ratio of random variables, whose statistics involve approximate estimators derived through series expansion. The present paper considers truncated estimators of variance of the spatial mean and their general structure in the presence of missing data. To all orders, the variance estimator depends only on the first and second moments of the underlying field, and convergence requires these moments to be finite. Furthermore, convergence occurs if either the probability of counting individual observations is larger than 1/2 or the number of point observations is large. In case the point observations are weighted uniformly, the estimators are easily found using combinatorics and involve Stirling numbers of the second kind.
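
The sketch below illustrates the ratio structure by Monte Carlo: each point observation is counted with probability p, and the weighted spatial mean becomes a ratio of two random sums. It does not implement the paper's series-expansion variance estimators; the weights and counting probability are illustrative.

```python
# Monte Carlo sketch of the ratio form of a weighted spatial mean with missing data.
import numpy as np

rng = np.random.default_rng(5)
n = 200
field = rng.normal(10.0, 2.0, n)        # point observations of the field
w = rng.uniform(0.5, 1.5, n)            # area (or other) weights
p = 0.7                                 # probability each observation is counted

def ratio_mean(counted):
    """Weighted mean over the counted subset: a ratio of two random sums."""
    return np.sum(w * counted * field) / np.sum(w * counted)

reps = np.array([ratio_mean(rng.random(n) < p) for _ in range(20000)])
print("full-sample mean :", round(np.average(field, weights=w), 3))
print("mean of ratio est:", round(reps.mean(), 3))
print("variance of ratio:", round(reps.var(), 5))
```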

De Finetti's theorem, also called the de Finetti-Hewitt-Savage theorem, is a foundational result in probability and statistics. Roughly, it says that an infinite sequence of exchangeable random variables can always be written as a mixture of independent and identically distributed (i.i.d.) sequences of random variables. In this paper, we consider a weighted generalization of exchangeability that allows for weight functions to modify the individual distributions of the random variables along the sequence, provided that -- modulo these weight functions -- there is still some common exchangeable base measure. We study conditions under which a de Finetti-type representation exists for weighted exchangeable sequences, as a mixture of distributions which satisfy a weighted form of the i.i.d. property. Our approach establishes a nested family of conditions that lead to weighted extensions of other well-known related results as well, in particular, extensions of the zero-one law and the law of large numbers.
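
For reference, the classical unweighted representation being generalized (recovered when all weight functions are constant) states that for an exchangeable sequence $X_1, X_2, \dots$ on a suitable space there exists a mixing measure $\pi$ over distributions $\mu$ such that, for all $n$,
\[
\Pr\left(X_1 \in A_1, \dots, X_n \in A_n\right) = \int \prod_{i=1}^{n} \mu(A_i)\, \pi(\mathrm{d}\mu).
\]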

Transfer learning aims to improve the performance of a target model by leveraging data from related source populations, which is known to be especially helpful in cases with insufficient target data. In this paper, we study the problem of how to train a high-dimensional ridge regression model using limited target data and existing regression models trained in heterogeneous source populations. We consider a practical setting where only the parameter estimates of the fitted source models are accessible, instead of the individual-level source data. Under the setting with only one source model, we propose a novel flexible angle-based transfer learning (angleTL) method, which leverages the concordance between the source and the target model parameters. We show that angleTL unifies several benchmark methods by construction, including the target-only model trained using target data alone, the source model fitted on source data, and a distance-based transfer learning method that incorporates the source parameter estimates and the target data under a distance-based similarity constraint. We also provide algorithms to effectively incorporate multiple source models, accounting for the fact that some source models may be more helpful than others. Our high-dimensional asymptotic analysis provides interpretations and insights regarding when a source model can be helpful to the target model, and demonstrates the superiority of angleTL over other benchmark methods. We perform extensive simulation studies to validate our theoretical conclusions and show the feasibility of applying angleTL to transfer existing genetic risk prediction models across multiple biobanks.
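
As one concrete benchmark, the sketch below implements the distance-based transfer method mentioned above, ridge regression shrunk toward the source parameter estimate rather than toward zero; angleTL itself is not reproduced, and the particular penalty form is an assumption for illustration. The closed form is $\hat\beta = (X^\top X + \lambda I)^{-1}(X^\top y + \lambda \beta_{\text{src}})$.

```python
# Sketch of a distance-based transfer benchmark: ridge shrinkage toward the source estimate.
import numpy as np

def distance_transfer_ridge(X, y, beta_src, lam):
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y + lam * beta_src)

rng = np.random.default_rng(6)
p, n_target = 50, 40                                 # high-dimensional: p > n_target
beta_target = rng.normal(size=p)
beta_src = beta_target + 0.1 * rng.normal(size=p)    # similar but not identical source
X = rng.normal(size=(n_target, p))
y = X @ beta_target + rng.normal(size=n_target)

for lam in (0.1, 1.0, 10.0):
    est = distance_transfer_ridge(X, y, beta_src, lam)
    print(f"lam={lam:5.1f}  error={np.linalg.norm(est - beta_target):.2f}")
```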

Making causal inferences from observational studies can be challenging when confounders are missing not at random. In such cases, identifying causal effects is often not guaranteed. Motivated by a real example, we consider a treatment-independent missingness assumption under which we establish the identification of causal effects when confounders are missing not at random. We propose a weighted estimating equation (WEE) approach for estimating model parameters and introduce three estimators for the average causal effect, based on regression, propensity score weighting, and doubly robust estimation. We evaluate the performance of these estimators through simulations, and provide a real data analysis to illustrate our proposed method.
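
For orientation, the sketch below shows a standard doubly robust (AIPW) estimator of the average causal effect in the complete-data case; the paper's weighted-estimating-equation adjustment for confounders missing not at random is its contribution and is not reproduced here.

```python
# Sketch of a doubly robust (AIPW) estimator of the average causal effect (complete data).
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(7)
n = 5000
x = rng.normal(size=(n, 3))                              # confounders
a = rng.binomial(1, 1 / (1 + np.exp(-x[:, 0])))          # treatment
y = x @ np.array([1.0, 0.5, -0.5]) + 2.0 * a + rng.normal(size=n)

ps = LogisticRegression().fit(x, a).predict_proba(x)[:, 1]      # propensity model
mu1 = LinearRegression().fit(x[a == 1], y[a == 1]).predict(x)   # outcome model, treated
mu0 = LinearRegression().fit(x[a == 0], y[a == 0]).predict(x)   # outcome model, control

aipw = np.mean(mu1 - mu0
               + a * (y - mu1) / ps
               - (1 - a) * (y - mu0) / (1 - ps))
print(round(aipw, 2))   # close to the true average causal effect of 2.0
```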

Angle of arrival (AOA) is widely used to locate a wireless signal emitter in unmanned aerial vehicle (UAV) localization. Compared with received signal strength (RSS) and time of arrival (TOA), it has higher accuracy and is not sensitive to time synchronization of the distributed sensors. However, few works have focused on the three-dimensional (3-D) scenario. Furthermore, although the maximum likelihood estimator (MLE) performs well, its computational complexity is very high, making it difficult to employ in practical applications. This paper proposes two multiplane geometric-center-based methods for 3-D AOA localization in UAV positioning. The first method, called CIS, estimates the source position and the angle measurement noise simultaneously by seeking the center of an inscribed sphere. Each sensor measures two angles, azimuth and elevation, from which two planes are constructed; the source position and angle noise are then estimated as the center and radius of the corresponding inscribed sphere. Dropping the radius estimation yields the second algorithm, called MSD-LS, which cannot estimate the angle noise but has lower computational complexity. Theoretical analysis and simulation results show that the proposed methods approach the Cramer-Rao lower bound (CRLB) with lower complexity than the MLE.
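
As a point of comparison, the sketch below implements a standard least-squares bearing-line intersection for 3-D AOA localization, not the proposed CIS or MSD-LS methods; the sensor geometry and noise levels are illustrative. Each sensor's azimuth/elevation pair defines a bearing line, and the source estimate is the point minimizing its squared distance to all bearing lines.

```python
# Baseline least-squares sketch for 3-D AOA localization from azimuth/elevation angles.
import numpy as np

rng = np.random.default_rng(8)
source = np.array([120.0, -60.0, 80.0])
sensors = rng.uniform(-200, 200, size=(8, 3))
sensors[:, 2] = 0.0                               # sensors on the ground plane

d = source - sensors
azimuth = np.arctan2(d[:, 1], d[:, 0]) + rng.normal(0, 0.01, len(sensors))
elevation = np.arctan2(d[:, 2], np.hypot(d[:, 0], d[:, 1])) + rng.normal(0, 0.01, len(sensors))

# Unit bearing vectors from the noisy angle measurements.
u = np.column_stack([np.cos(elevation) * np.cos(azimuth),
                     np.cos(elevation) * np.sin(azimuth),
                     np.sin(elevation)])

# Solve  sum_i (I - u_i u_i^T) (x - s_i) = 0  for the source position x.
A = np.zeros((3, 3))
b = np.zeros(3)
for s_i, u_i in zip(sensors, u):
    P = np.eye(3) - np.outer(u_i, u_i)            # projector orthogonal to the bearing
    A += P
    b += P @ s_i
print(np.round(np.linalg.solve(A, b), 1))         # close to the true source position
```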
