
Tie-breaker experimental designs are hybrids of Randomized Controlled Trials (RCTs) and Regression Discontinuity Designs (RDDs) in which subjects with moderate scores are placed in an RCT while subjects with extreme scores are deterministically assigned to the treatment or control group. The tie-breaker design (TBD) has practical advantages over the RCT in settings where it is unfair or uneconomical to deny the treatment to the most deserving recipients. Meanwhile, the TBD retains statistical benefits from randomization that the RDD lacks. In this paper we discuss and quantify the statistical benefits of the TBD compared to the RDD. If the goal is estimation of the average treatment effect or of the treatment effect at more than one score value, the statistical benefits of using a TBD over an RDD are apparent. If the goal is estimation of the average treatment effect at merely one score value, which is typically done by fitting local linear regressions, about 2.8 times more subjects are needed for an RDD in order to achieve the same asymptotic mean squared error. We further demonstrate, using both theoretical results and simulations based on the Angrist and Lavy (1999) classroom size dataset, that larger experimental radius choices for the TBD lead to greater statistical efficiency.
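As a rough illustration of the assignment mechanism (not code from the paper), the sketch below assigns treatment under a tie-breaker design with a hypothetical experimental radius `delta` around a centered cutoff; `delta = 0` recovers a sharp RDD and a sufficiently large `delta` recovers a fully randomized RCT.

```python
import numpy as np

def tie_breaker_assign(scores, delta, rng=None):
    """Assign treatment under a tie-breaker design.

    Subjects with score > delta always receive the treatment, subjects with
    score < -delta never do, and subjects with |score| <= delta are randomized
    with probability 1/2. Scores are assumed centered at the cutoff; delta is
    the (hypothetical) experimental radius.
    """
    rng = np.random.default_rng() if rng is None else rng
    scores = np.asarray(scores, dtype=float)
    z = (scores > delta).astype(float)                   # deterministic assignment
    in_experiment = np.abs(scores) <= delta
    z[in_experiment] = rng.integers(0, 2, in_experiment.sum())  # fair coin flips
    return z
```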

Related content


Federated Learning (FL) enables a large number of edge computing devices (e.g., mobile phones) to jointly learn a global model without data sharing. In FL, data are generated in a decentralized manner with high heterogeneity. This paper studies how to perform statistical estimation and inference in the federated setting. We analyze the so-called Local SGD, a multi-round estimation procedure that uses intermittent communication to improve communication efficiency. We first establish a functional central limit theorem showing that the averaged iterates of Local SGD weakly converge to a rescaled Brownian motion. We then provide two iterative inference methods: the plug-in and the random scaling. Random scaling constructs an asymptotically pivotal statistic for inference by using the information along the whole Local SGD path. Both methods are communication-efficient and applicable to online data. Our theoretical and empirical results show that Local SGD simultaneously achieves both statistical efficiency and communication efficiency.
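A minimal sketch of the Local SGD iteration analyzed above, written for a hypothetical least-squares objective; the names `local_steps` and `rounds` are illustrative, and the paper's plug-in and random-scaling inference steps are not reproduced here.

```python
import numpy as np

def local_sgd(data, w0, lr, local_steps, rounds, rng):
    """Minimal Local SGD sketch for a least-squares objective.

    `data` is a list of (X_k, y_k) arrays, one per client. Each round, every
    client runs `local_steps` SGD steps from the current global iterate, and
    the server then averages the local iterates (the only communication).
    """
    w = w0.copy()
    for _ in range(rounds):
        local = []
        for X, y in data:
            wk = w.copy()
            for _ in range(local_steps):
                i = rng.integers(len(y))
                grad = (X[i] @ wk - y[i]) * X[i]   # single-sample gradient
                wk -= lr * grad
            local.append(wk)
        w = np.mean(local, axis=0)                 # communication round: average
    return w
```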

Empirical likelihood enables a nonparametric, likelihood-driven style of inference without the restrictive assumptions routinely made in parametric models. We develop a framework for applying empirical likelihood to the analysis of experimental designs, addressing issues that arise from blocking and multiple hypothesis testing. In addition to popular designs such as balanced incomplete block designs, our approach allows for highly unbalanced, incomplete block designs. For all these designs, we derive an asymptotic multivariate chi-square distribution for a set of empirical likelihood test statistics. Further, we propose two single-step multiple testing procedures: asymptotic Monte Carlo and nonparametric bootstrap. Both procedures asymptotically control the generalized family-wise error rate and efficiently construct simultaneous confidence intervals for comparisons of interest without explicitly considering the underlying covariance structure. A simulation study demonstrates that the performance of the procedures is robust to violations of the standard assumptions of linear mixed models. Notably, despite the asymptotic nature of empirical likelihood, the nonparametric bootstrap procedure performs well even for small sample sizes. We also present an application to experiments on a pesticide. Supplementary materials for this article are available online.
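For intuition about the underlying machinery, here is a minimal empirical likelihood computation for the mean of a single univariate sample (a standard textbook case, not the paper's block-design methodology); `brentq` solves the usual Lagrange multiplier equation.

```python
import numpy as np
from scipy.optimize import brentq

def el_log_ratio(x, mu):
    """-2 log empirical likelihood ratio for the mean of a univariate sample.

    Solves for the Lagrange multiplier lam in
        sum_i (x_i - mu) / (1 + lam * (x_i - mu)) = 0,
    which yields weights w_i = 1 / (n * (1 + lam * (x_i - mu))).
    Requires mu to lie strictly between min(x) and max(x).
    """
    d = np.asarray(x, dtype=float) - mu
    eps = 1e-10
    lo = -1.0 / d.max() + eps          # keep 1 + lam * d_i > 0 for the largest d_i
    hi = -1.0 / d.min() - eps          # ... and for the smallest (negative) d_i
    g = lambda lam: np.sum(d / (1.0 + lam * d))
    lam = brentq(g, lo, hi)            # g is monotone decreasing on (lo, hi)
    return 2.0 * np.sum(np.log1p(lam * d))   # asymptotically chi-square(1)
```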

We prove various approximation theorems with polynomials whose coefficients with respect to the Bernstein basis of a given order are all integers. In the extreme, we draw the coefficients from the set $\{ \pm 1\}$ only. We show that for any Lipschitz function $f:[0,1] \to [-1,1]$ and for any positive integer $n$, there are signs $\sigma_0,\dots,\sigma_n \in \{\pm 1\}$ such that $$\left |f(x) - \sum_{k=0}^n \sigma_k \, \binom{n}{k} x^k (1-x)^{n-k} \right | \leq \frac{C (1+|f|_{\mathrm{Lip}})}{1+\sqrt{nx(1-x)}} ~\mbox{ for all } x \in [0,1].$$ These polynomial approximations are not constrained by saturation of Bernstein polynomials, and we show that higher accuracy is indeed achievable for smooth functions: If $f$ has a Lipschitz $(s{-}1)$st derivative, then accuracy of order $O(n^{-s/2})$ is achievable with $\pm 1$ coefficients provided $\|f \|_\infty < 1$, and accuracy of order $O(n^{-s})$ is achievable with unrestricted integer coefficients. Our approximations are constructive in nature.
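The sketch below merely illustrates the form of the sign-constrained approximants by brute-force search over all $2^{n+1}$ sign patterns on a grid for small $n$; it is not the paper's constructive scheme.

```python
import numpy as np
from math import comb
from itertools import product

def best_sign_polynomial(f, n, grid=None):
    """Brute-force search for signs sigma_k in {-1, +1} minimizing the sup-norm
    error of  sum_k sigma_k * C(n,k) * x^k * (1-x)^(n-k)  against f on a grid.

    Only feasible for small n (2^(n+1) sign patterns); this illustrates the
    shape of the approximants rather than the paper's construction.
    """
    x = np.linspace(0.0, 1.0, 201) if grid is None else grid
    basis = np.array([comb(n, k) * x**k * (1 - x)**(n - k) for k in range(n + 1)])
    target = f(x)
    best_err, best_signs = np.inf, None
    for signs in product((-1.0, 1.0), repeat=n + 1):
        err = np.max(np.abs(target - np.asarray(signs) @ basis))
        if err < best_err:
            best_err, best_signs = err, signs
    return best_signs, best_err

# Example: approximate f(x) = x with n = 10 (2048 sign patterns).
signs, err = best_sign_polynomial(lambda x: x, n=10)
```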

We establish estimates on the error made by the Deep Ritz Method for elliptic problems on the space $H^1(\Omega)$ with different boundary conditions. For Dirichlet boundary conditions, we estimate the error when the boundary values are approximately enforced through the boundary penalty method. Our results apply to arbitrary, in general nonlinear, classes $V\subseteq H^1(\Omega)$ of ansatz functions and estimate the error in terms of the optimization accuracy, the approximation capabilities of the ansatz class and -- in the case of Dirichlet boundary values -- the penalisation strength $\lambda$. For non-essential boundary conditions the error of the Ritz method decays with the same rate as the approximation rate of the ansatz classes. For essential boundary conditions, given an approximation rate of $r$ in $H^1(\Omega)$ and an approximation rate of $s$ in $L^2(\partial\Omega)$ of the ansatz classes, the optimal decay rate of the estimated error is $\min(s/2, r)$ and is achieved by choosing $\lambda_n\sim n^{s}$. We discuss the implications for ansatz classes which are given through ReLU networks and the relation to existing estimates for finite element functions.
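A minimal Deep Ritz sketch with a boundary penalty for the model problem $-\Delta u = f$ on the unit square with zero Dirichlet data; the network size, right-hand side `f`, penalty strength `lam`, and sample sizes are hypothetical choices for illustration only.

```python
import torch

torch.manual_seed(0)
net = torch.nn.Sequential(                      # ansatz class: a small tanh network
    torch.nn.Linear(2, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)
f = lambda x: torch.ones(x.shape[0], 1)         # hypothetical right-hand side
lam = 100.0                                     # boundary penalisation strength
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

def boundary_points(m):
    t = torch.rand(m, 1)
    sides = [torch.cat([t, torch.zeros_like(t)], 1), torch.cat([t, torch.ones_like(t)], 1),
             torch.cat([torch.zeros_like(t), t], 1), torch.cat([torch.ones_like(t), t], 1)]
    return torch.cat(sides, 0)

for step in range(2000):
    x = torch.rand(1024, 2, requires_grad=True)             # interior Monte Carlo sample
    u = net(x)
    grad_u = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    energy = (0.5 * (grad_u ** 2).sum(dim=1, keepdim=True) - f(x) * u).mean()
    xb = boundary_points(256)
    penalty = lam * (net(xb) ** 2).mean()       # penalises nonzero boundary values
    loss = energy + penalty                     # measure constants absorbed into lam
    opt.zero_grad(); loss.backward(); opt.step()
```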

This paper considers the finite element solution of the boundary value problem of Poisson's equation and proposes a guaranteed a posteriori local error estimation based on the hypercircle method. Compared to the existing literature on qualitative error estimation, the proposed error estimation provides an explicit and sharp bound for the approximation error in the subdomain of interest, and its efficiency can be enhanced by further utilizing a non-uniform mesh. Such a result is applicable to problems without $H^2$-regularity, since it only utilizes the first-order derivative of the solution. The efficiency of the proposed method is demonstrated by numerical experiments for both convex and non-convex 2D domains with uniform or non-uniform meshes.
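For reference, the classical Prager--Synge identity that underlies hypercircle-type estimates, stated for the model problem $-\Delta u = f$ in $\Omega$ with $u = 0$ on $\partial\Omega$ (the paper's contribution is a guaranteed local refinement of bounds of this type):

```latex
% For any v \in H^1_0(\Omega) and any flux \sigma \in H(\mathrm{div},\Omega)
% satisfying \nabla\cdot\sigma + f = 0, the Prager--Synge identity reads
\[
  \|\nabla(u - v)\|_{L^2(\Omega)}^2 + \|\nabla u - \sigma\|_{L^2(\Omega)}^2
  = \|\nabla v - \sigma\|_{L^2(\Omega)}^2 ,
\]
% which yields the guaranteed, fully computable bound
\[
  \|\nabla(u - v)\|_{L^2(\Omega)} \le \|\nabla v - \sigma\|_{L^2(\Omega)} .
\]
```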

Time-to-event endpoints are increasingly popular in phase II cancer trials. The standard statistical tool for such one-armed survival trials is the one-sample log-rank test. Its distributional properties are commonly derived in the large-sample limit. It is, however, known from the literature that these asymptotic approximations suffer when the sample size is small. There have already been several attempts to address this problem. While some approaches do not allow easy power and sample size calculations, others lack a clear theoretical motivation and require further considerations. The problem itself can partly be attributed to the dependence between the compensated counting process and its variance estimator. To address this, we suggest a variance estimator which is uncorrelated with the compensated counting process. Moreover, this and other existing approaches to variance estimation are covered as special cases by our general framework. For practical application, we provide sample size and power calculations for any approach fitting into this framework. Finally, we use simulations and real-world data to study the empirical type I error and power of our methodology compared to standard approaches.
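For orientation, a sketch of the textbook one-sample log-rank statistic with the standard variance estimate $E$; the paper's framework replaces this variance estimator, and that replacement is not reproduced here.

```python
import numpy as np

def one_sample_logrank(time, event, cum_hazard_null):
    """Standard one-sample log-rank statistic.

    `time`            : follow-up time for each subject,
    `event`           : 1 if the event was observed, 0 if censored,
    `cum_hazard_null` : callable giving the null cumulative hazard Lambda_0(t).

    O is the observed number of events and E = sum_i Lambda_0(t_i) the expected
    number under the null; (O - E) / sqrt(E) is compared with a standard normal.
    """
    time, event = np.asarray(time, dtype=float), np.asarray(event, dtype=int)
    O = event.sum()
    E = cum_hazard_null(time).sum()
    return (O - E) / np.sqrt(E)

# Example with a hypothetical exponential null (hazard rate 0.1):
# z = one_sample_logrank([12, 5, 8], [1, 0, 1], lambda t: 0.1 * np.asarray(t))
```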

We provide guarantees for approximate Gaussian Process (GP) regression resulting from two common low-rank kernel approximations: one based on random Fourier features and one based on truncating the kernel's Mercer expansion. In particular, we bound the Kullback-Leibler divergence between an exact GP and one resulting from either of these low-rank approximations to its kernel, as well as between their corresponding predictive densities, and we also bound the error between predictive mean vectors and between predictive covariance matrices computed using the exact versus the approximate GP. We provide experiments on both simulated data and standard benchmarks to evaluate the effectiveness of our theoretical bounds.
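A small numpy sketch of the random Fourier feature approximation for the squared-exponential kernel and the resulting approximate GP posterior mean (weight-space view); the feature count, lengthscale, and noise variance are illustrative defaults, not values from the paper.

```python
import numpy as np

def make_rff_map(dim, n_features, lengthscale, rng):
    """Random Fourier feature map z(.) approximating the squared-exponential
    kernel k(x, x') = exp(-||x - x'||^2 / (2 lengthscale^2)) by z(x)^T z(x')."""
    W = rng.normal(scale=1.0 / lengthscale, size=(dim, n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    return lambda X: np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

def approx_gp_posterior_mean(Xtr, ytr, Xte, n_features=500, lengthscale=1.0,
                             noise_var=0.1, seed=0):
    """Posterior mean of the low-rank (RFF) GP, computed as Bayesian linear
    regression in the random feature space (weight-space view of GP regression)."""
    rng = np.random.default_rng(seed)
    phi = make_rff_map(Xtr.shape[1], n_features, lengthscale, rng)
    Ztr, Zte = phi(Xtr), phi(Xte)
    A = Ztr.T @ Ztr + noise_var * np.eye(n_features)   # regularised Gram matrix
    return Zte @ np.linalg.solve(A, Ztr.T @ ytr)
```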

Heatmap-based methods dominate the field of human pose estimation by modelling the output distribution through likelihood heatmaps. In contrast, regression-based methods are more efficient but suffer from inferior performance. In this work, we explore maximum likelihood estimation (MLE) to develop an efficient and effective regression-based method. From the MLE perspective, adopting a particular regression loss amounts to making a particular assumption about the output density function. A density function closer to the true distribution leads to better regression performance. In light of this, we propose a novel regression paradigm with Residual Log-likelihood Estimation (RLE) to capture the underlying output distribution. Concretely, RLE learns the change of the distribution instead of the unreferenced underlying distribution to facilitate the training process. With the proposed reparameterization design, our method is compatible with off-the-shelf flow models. The proposed method is effective, efficient and flexible. We show its potential in various human pose estimation tasks with comprehensive experiments. Compared to the conventional regression paradigm, regression with RLE brings a 12.4 mAP improvement on MSCOCO without any test-time overhead. Moreover, on multi-person pose estimation in particular, our regression method is superior to heatmap-based methods for the first time. Our code is available at //github.com/Jeff-sjtu/res-loglikelihood-regression
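To illustrate the MLE viewpoint described above, the following sketch shows how Gaussian and Laplace output densities with a learnable scale reduce (up to constants) to scale-aware L2 and L1 regression losses; the paper's RLE goes further and learns the residual density itself with a normalizing flow, which is not reproduced here.

```python
import torch

def gaussian_nll(pred, sigma, target):
    """Negative log-likelihood under y ~ N(pred, sigma^2): up to constants this
    is a scale-aware squared-error loss (a fixed sigma recovers plain MSE)."""
    return (torch.log(sigma) + 0.5 * ((target - pred) / sigma) ** 2).mean()

def laplace_nll(pred, b, target):
    """Negative log-likelihood under a Laplace density with scale b: up to
    constants this is a scale-aware L1 loss."""
    return (torch.log(b) + (target - pred).abs() / b).mean()

# A regression head can predict both the location `pred` and the scale
# (`sigma` or `b`) per keypoint and minimise the corresponding NLL.
```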

Many representative graph neural networks, $e.g.$, GPR-GNN and ChebyNet, approximate graph convolutions with graph spectral filters. However, existing work either applies predefined filter weights or learns them without necessary constraints, which may lead to oversimplified or ill-posed filters. To overcome these issues, we propose $\textit{BernNet}$, a novel graph neural network with theoretical support that provides a simple but effective scheme for designing and learning arbitrary graph spectral filters. In particular, for any filter over the normalized Laplacian spectrum of a graph, our BernNet estimates it by an order-$K$ Bernstein polynomial approximation and designs its spectral property by setting the coefficients of the Bernstein basis. Moreover, we can learn the coefficients (and the corresponding filter weights) based on observed graphs and their associated signals and thus achieve the BernNet specialized for the data. Our experiments demonstrate that BernNet can learn arbitrary spectral filters, including complicated band-rejection and comb filters, and it achieves superior performance in real-world graph modeling tasks.
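A numpy sketch of Bernstein-basis spectral filtering on the symmetric normalized Laplacian (eigenvalues in $[0,2]$), following the parameterization described in the abstract; the coefficient vector `theta` stands in for BernNet's learnable filter weights.

```python
import numpy as np
from math import comb

def bernstein_filter(L, x, theta):
    """Apply a Bernstein-basis spectral filter to a graph signal.

    L     : symmetric normalized graph Laplacian (eigenvalues in [0, 2]),
    x     : node signal of shape (n,) or (n, d),
    theta : K+1 non-negative Bernstein coefficients (learnable filter weights).

    Computes sum_k theta_k * C(K,k)/2^K * (2I - L)^(K-k) L^k x, i.e. the filter
    h(lam) = sum_k theta_k * C(K,k) * (1 - lam/2)^(K-k) * (lam/2)^k evaluated
    on the Laplacian spectrum.
    """
    K = len(theta) - 1
    I = np.eye(L.shape[0])
    out = np.zeros_like(x, dtype=float)
    for k in range(K + 1):
        term = np.asarray(x, dtype=float).copy()
        for _ in range(k):
            term = L @ term
        for _ in range(K - k):
            term = (2 * I - L) @ term
        out += theta[k] * comb(K, k) / 2**K * term
    return out
```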

UMAP (Uniform Manifold Approximation and Projection) is a novel manifold learning technique for dimension reduction. UMAP is constructed from a theoretical framework based in Riemannian geometry and algebraic topology. The result is a practical scalable algorithm that applies to real world data. The UMAP algorithm is competitive with t-SNE for visualization quality, and arguably preserves more of the global structure with superior run time performance. Furthermore, UMAP has no computational restrictions on embedding dimension, making it viable as a general purpose dimension reduction technique for machine learning.
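A minimal usage sketch, assuming the reference implementation in the umap-learn package (scikit-learn is used only to supply example data).

```python
import umap
from sklearn.datasets import load_digits

X = load_digits().data                       # 1797 x 64 handwritten-digit features
reducer = umap.UMAP(n_neighbors=15,          # size of local neighbourhoods
                    min_dist=0.1,            # how tightly points are packed
                    n_components=2)          # embedding dimension (unrestricted)
embedding = reducer.fit_transform(X)         # 1797 x 2 low-dimensional embedding
```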
