
In settings where interference between units is possible, we define the prevalence of peer effects to be the number of units whose outcomes are affected by the treatment of others. This quantity does not fully identify a peer effect, but it may be used to show whether peer effects are widely prevalent. Given a randomized experiment with binary-valued outcomes, methods are presented for conservative point estimation and one-sided interval estimation. To show asymptotic coverage of our intervals in settings not previously covered, we provide a central limit theorem that combines local dependence and sampling without replacement. Consistency and minimax properties of the point estimator are shown as well. The approach is demonstrated on an experiment in which students were treated for a highly transmissible parasitic infection, for which we find that a significant fraction of students were affected by the treatment of schools other than their own.
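As a rough illustration of the idea (not the paper's estimator or interval construction), the sketch below uses the fact that, with binary outcomes and randomized peer exposure, the absolute difference in mean outcomes between exposed and unexposed units is a conservative lower bound on the fraction of units whose outcome responds to the treatment of others. The exposure indicator, response rates, and sample size are all hypothetical.

```python
import numpy as np

# Toy illustration (not the paper's estimator): with binary outcomes, the
# absolute difference in mean outcomes between units exposed to peers' treatment
# and unexposed units conservatively lower-bounds the fraction of units whose
# outcome is affected by the treatment of others.
rng = np.random.default_rng(0)

n = 5000
peer_exposed = rng.integers(0, 2, size=n)        # hypothetical randomized peer-exposure indicator
baseline = rng.random(n) < 0.30                  # outcome if unaffected by peers
affected = rng.random(n) < 0.15                  # 15% of units truly respond to peer treatment
outcome = np.where((peer_exposed == 1) & affected, 1, baseline).astype(int)

diff = abs(outcome[peer_exposed == 1].mean() - outcome[peer_exposed == 0].mean())
lower_bound = diff * n                           # conservative estimate of the prevalence
print(f"lower bound on affected units: {lower_bound:.0f}  (true number ~ {affected.sum()})")
```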

Related content

The prevailing statistical approach to analyzing persistence diagrams is concerned with filtering out topological noise. In this paper, we adopt a different viewpoint and aim at estimating the actual distribution of a random persistence diagram, which captures both topological signal and noise. To that end, Chazal and Divol (2019) proved that, under general conditions, the expected value of a random persistence diagram is a measure admitting a Lebesgue density, called the persistence intensity function. Here, we are concerned with estimating the persistence intensity function and a novel, normalized version of it -- called the persistence density function. We present a class of kernel-based estimators based on an i.i.d. sample of persistence diagrams and derive estimation rates in the supremum norm. As a direct corollary, we obtain uniform consistency rates for estimating linear representations of persistence diagrams, including Betti numbers and persistence surfaces. Interestingly, the persistence density function delivers stronger statistical guarantees.
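A minimal sketch of a kernel-based estimator of the persistence intensity function is given below, assuming each diagram is stored as an array of (birth, death) pairs. The isotropic Gaussian kernel, the bandwidth, and the absence of boundary corrections are illustrative choices, not necessarily those analyzed in the paper.

```python
import numpy as np

def persistence_intensity(diagrams, grid, h=0.1):
    """Kernel estimate of the persistence intensity at the points in `grid`.

    diagrams : list of (k_i, 2) arrays of (birth, death) pairs, one per sampled diagram
    grid     : (m, 2) array of evaluation points in the upper half-plane
    h        : bandwidth of an isotropic Gaussian kernel (illustrative choice)
    """
    est = np.zeros(len(grid))
    norm = 1.0 / (2 * np.pi * h**2)                       # 2-D Gaussian normalization
    for dgm in diagrams:
        # add the kernel contribution of every point of this diagram
        sq_dist = ((grid[:, None, :] - dgm[None, :, :]) ** 2).sum(axis=2)
        est += norm * np.exp(-sq_dist / (2 * h**2)).sum(axis=1)
    return est / len(diagrams)                            # average over the i.i.d. sample

# toy usage: three "diagrams" of random points above the diagonal
rng = np.random.default_rng(1)
diagrams = [np.sort(rng.random((20, 2)), axis=1) for _ in range(3)]
grid = np.column_stack([np.linspace(0.0, 0.5, 50), np.linspace(0.5, 1.0, 50)])
print(persistence_intensity(diagrams, grid, h=0.05)[:5])
```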

Classical inequality curves and inequality measures are defined for distributions with finite mean value. Moreover, their empirical counterparts are not resistant to outliers. For these reasons, quantile versions of known inequality curves such as the Lorenz, Bonferroni, Zenga and $D$ curves, and quantile versions of inequality measures such as the Gini, Bonferroni, Zenga and $D$ indices have been proposed in the literature. We propose various nonparametric estimators of quantile versions of inequality curves and inequality measures, prove their consistency, and compare their accuracy in a simulation study. We also give examples of the use of quantile versions of inequality measures in real data analysis.
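The estimators proposed in the paper target quantile versions of the Lorenz, Bonferroni, Zenga and $D$ curves; those exact definitions are not reproduced here. The sketch below only illustrates the plug-in principle with one quantile-ratio-style curve built entirely from empirical quantiles, so that no finite mean is needed and extreme outliers have limited influence.

```python
import numpy as np

def quantile_ratio_curve(x, ps):
    """Illustrative quantile-based inequality curve: 1 - Q(p/2) / Q(1 - p/2).

    A quantile-ratio-style curve in the spirit of quantile inequality measures;
    it is NOT necessarily one of the exact curves studied in the paper. Only
    empirical quantiles are used, so no finite mean is required.
    """
    q_low = np.quantile(x, ps / 2)
    q_high = np.quantile(x, 1 - ps / 2)
    return 1 - q_low / q_high

# toy usage on heavy-tailed Pareto "incomes" with infinite mean, where classical
# mean-based curves and indices are not even defined
rng = np.random.default_rng(2)
incomes = (rng.pareto(a=0.8, size=10_000) + 1) * 1000
ps = np.linspace(0.01, 0.99, 99)
curve = quantile_ratio_curve(incomes, ps)
print(f"illustrative quantile-based inequality index (mean curve height): {curve.mean():.3f}")
```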

Spinal cord segmentation is clinically relevant and is notably used to compute spinal cord cross-sectional area (CSA) for the diagnosis and monitoring of cord compression or neurodegenerative diseases such as multiple sclerosis. While several semi-automatic and automatic methods exist, one key limitation remains: the segmentation depends on the MRI contrast, resulting in different CSA across contrasts. This is partly due to the varying appearance of the boundary between the spinal cord and the cerebrospinal fluid, which depends on the sequence and acquisition parameters. This contrast-sensitive CSA adds variability in multi-center studies where protocols can vary, reducing the sensitivity to detect subtle atrophies. Moreover, existing methods increase the CSA variability by training one model per contrast, while also producing binary masks that do not account for partial volume effects. In this work, we present a deep learning-based method that produces soft segmentations of the spinal cord. Using the Spine Generic Public Database of healthy participants ($\text{n}=267$; $\text{contrasts}=6$), we first generated participant-wise soft ground truth (GT) by averaging the binary segmentations across all 6 contrasts. These soft GT, along with a regression-based loss function, were then used to train a UNet model for spinal cord segmentation. We evaluated our model against state-of-the-art methods and performed ablation studies involving different GT mask types, loss functions, and contrast-specific models. Our results show that using the soft average segmentations along with a regression loss function reduces CSA variability ($p < 0.05$, Wilcoxon signed-rank test). The proposed spinal cord segmentation model generalizes better than the state-of-the-art contrast-specific methods across unseen datasets, vendors, contrasts, and pathologies (compression, lesions), while accounting for partial volume effects.
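A minimal sketch of the soft ground-truth construction follows: co-registered binary segmentations of the same participant are averaged across contrasts, and the network output is scored against these soft labels with a regression-style loss. The array shapes, the MSE loss, and the comparison soft-Dice term are stand-ins; the actual architecture, preprocessing, and loss used in the paper may differ.

```python
import numpy as np

# Sketch of the soft ground-truth idea: average co-registered binary masks of
# the same participant across contrasts, then score a predicted soft
# segmentation with a regression loss rather than an overlap loss on binary masks.
rng = np.random.default_rng(3)

n_contrasts, shape = 6, (64, 64)
binary_masks = rng.random((n_contrasts, *shape)) > 0.5     # stand-in binary segmentations
soft_gt = binary_masks.mean(axis=0)                        # soft GT with values in {0, 1/6, ..., 1}

prediction = np.clip(soft_gt + 0.05 * rng.standard_normal(shape), 0.0, 1.0)

mse = np.mean((prediction - soft_gt) ** 2)                 # regression-style loss on soft labels
soft_dice = 1 - 2 * (prediction * soft_gt).sum() / (prediction.sum() + soft_gt.sum())
print(f"MSE (regression) loss: {mse:.4f}   soft-Dice-style loss: {soft_dice:.4f}")
```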

The linear varying coefficient model posits a linear relationship between an outcome and covariates in which the covariate effects are modeled as functions of additional effect modifiers. Despite a long history of study and use in statistics and econometrics, state-of-the-art varying coefficient modeling methods cannot accommodate multivariate effect modifiers without imposing restrictive functional form assumptions or involving computationally intensive hyperparameter tuning. In response, we introduce VCBART, which flexibly estimates the covariate effects in a varying coefficient model using Bayesian Additive Regression Trees. With simple default settings, VCBART outperforms existing varying coefficient methods in terms of covariate effect estimation, uncertainty quantification, and outcome prediction. We illustrate the utility of VCBART with two case studies: one examining how the association between later-life cognition and measures of socioeconomic position varies with respect to age and socio-demographics, and another estimating how temporal trends in urban crime vary at the neighborhood level. An R package implementing VCBART is available at //github.com/skdeshpande91/VCBART.
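To make the model concrete, the sketch below simulates data from a linear varying coefficient model in which each covariate effect $\beta_j$ is a function of a two-dimensional effect modifier $z$. Recovering the $\beta_j(\cdot)$ surfaces with Bayesian Additive Regression Trees is what VCBART does; the fit itself is provided by the referenced R package and is not reproduced here.

```python
import numpy as np

# Simulate from y = beta_0(z) + beta_1(z) * x_1 + beta_2(z) * x_2 + noise,
# where the covariate effects beta_j vary with multivariate effect modifiers z.
rng = np.random.default_rng(4)

n = 2000
z = rng.uniform(-1, 1, size=(n, 2))              # multivariate effect modifiers
x = rng.standard_normal((n, 2))                  # covariates whose effects vary with z

beta0 = np.sin(np.pi * z[:, 0])                  # varying intercept
beta1 = 1.0 + z[:, 1] ** 2                       # smooth effect of x_1, driven by z_2
beta2 = np.where(z[:, 0] > 0, 2.0, -1.0)         # piecewise-constant effect of x_2

y = beta0 + beta1 * x[:, 0] + beta2 * x[:, 1] + 0.5 * rng.standard_normal(n)
print(y[:5])
```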

Most of the literature on causality considers the structural framework of Pearl and the potential-outcome framework of Neyman and Rubin to be formally equivalent, and therefore interchangeably uses the do-notation and the potential-outcome subscript notation to write counterfactual outcomes. In this paper, we superimpose the two causal frameworks to prove that structural counterfactual outcomes and potential outcomes do not coincide in general -- not even in law. More precisely, we express the law of the potential outcomes in terms of the latent structural causal model under the fundamental assumptions of causal inference. This enables us to identify precisely when counterfactual inference is or is not equivalent between approaches, and to clarify the meaning of each kind of counterfactual.

High-dimensional central limit theorems have been intensively studied with most focus being on the case where the data is sub-Gaussian or sub-exponential. However, heavier tails are omnipresent in practice. In this article, we study the critical growth rates of dimension $d$ below which Gaussian approximations are asymptotically valid but beyond which they are not. We are particularly interested in how these thresholds depend on the number of moments $m$ that the observations possess. For every $m\in(2,\infty)$, we construct i.i.d. random vectors $\textbf{X}_1,...,\textbf{X}_n$ in $\mathbb{R}^d$, the entries of which are independent and have a common distribution (independent of $n$ and $d$) with finite $m$th absolute moment, and such that the following holds: if there exists an $\varepsilon\in(0,\infty)$ such that $d/n^{m/2-1+\varepsilon}\not\to 0$, then the Gaussian approximation error (GAE) satisfies $$ \limsup_{n\to\infty}\sup_{t\in\mathbb{R}}\left[\mathbb{P}\left(\max_{1\leq j\leq d}\frac{1}{\sqrt{n}}\sum_{i=1}^n\textbf{X}_{ij}\leq t\right)-\mathbb{P}\left(\max_{1\leq j\leq d}\textbf{Z}_j\leq t\right)\right]=1,$$ where $\textbf{Z} \sim \mathsf{N}_d(\textbf{0}_d,\mathbf{I}_d)$. On the other hand, a result in Chernozhukov et al. (2023a) implies that the left-hand side above is zero if just $d/n^{m/2-1-\varepsilon}\to 0$ for some $\varepsilon\in(0,\infty)$. In this sense, there is a moment-dependent phase transition at the threshold $d=n^{m/2-1}$ above which the limiting GAE jumps from zero to one.
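The display above can be probed numerically: the sketch below draws heavy-tailed entries (Student-$t$, a stand-in with only a few finite moments), compares the empirical distribution of $\max_{1\leq j\leq d} n^{-1/2}\sum_{i=1}^n \textbf{X}_{ij}$ with that of a Gaussian maximum, and reports the largest gap over a grid of $t$ values. The distributions constructed in the paper to achieve the exact phase transition are more delicate than this stand-in.

```python
import numpy as np

# Monte Carlo look at the Gaussian approximation error (GAE) for the maximum of
# coordinate-wise sums when entries have heavy tails (only m finite moments).
rng = np.random.default_rng(5)

n, d, m, reps = 100, 1000, 3, 300
df = m + 0.5                                      # t-distribution with finite m-th moment
scale = np.sqrt(df / (df - 2))                    # standard deviation of a t_df variable

t_max, z_max = np.empty(reps), np.empty(reps)
for r in range(reps):
    x = rng.standard_t(df, size=(n, d)) / scale   # unit-variance heavy-tailed entries
    t_max[r] = (x.sum(axis=0) / np.sqrt(n)).max()
    z_max[r] = rng.standard_normal(d).max()       # Gaussian benchmark maximum

ts = np.linspace(min(t_max.min(), z_max.min()), max(t_max.max(), z_max.max()), 200)
gaps = [np.mean(t_max <= t) - np.mean(z_max <= t) for t in ts]
print(f"largest empirical GAE over the grid: {max(gaps):.3f}")
```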

An important strategy for identifying principal causal effects, which are often used in settings with noncompliance, is to invoke the principal ignorability (PI) assumption. As PI is untestable, it is important to gauge how sensitive effect estimates are to its violation. We focus on this task for the common one-sided noncompliance setting where there are two principal strata, compliers and noncompliers. Under PI, compliers and noncompliers share the same outcome-mean-given-covariates function under the control condition. For sensitivity analysis, we allow this function to differ between compliers and noncompliers in several ways, indexed by an odds ratio, a generalized odds ratio, a mean ratio, or a standardized mean difference sensitivity parameter. We tailor sensitivity analysis techniques (with any sensitivity parameter choice) to several types of PI-based main analysis methods, including outcome regression, influence function (IF) based and weighting methods. We illustrate the proposed sensitivity analyses using several outcome types from the JOBS II study. This application estimates nuisance functions parametrically -- for simplicity and accessibility. In addition, we establish rate conditions on nonparametric nuisance estimation for IF-based estimators to be asymptotically normal -- with a view to inform nonparametric inference.
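As a small worked example of how a mean-ratio sensitivity parameter can enter under one-sided noncompliance: the control arm is a mixture of compliers and noncompliers, so $E[Y \mid X, Z=0] = e(X)\,m_c(X) + (1-e(X))\,m_n(X)$ with principal score $e(X)$; setting $m_n(X) = \lambda\, m_c(X)$ gives $m_c(X) = E[Y \mid X, Z=0] / (e(X) + \lambda(1 - e(X)))$, and $\lambda = 1$ recovers principal ignorability. The function names and fitted values below are hypothetical, and the paper's outcome-regression, IF-based, and weighting estimators are more general than this back-of-the-envelope step.

```python
import numpy as np

def complier_control_mean(mu0, e, lam):
    """Back out the complier control-arm outcome mean under a mean-ratio parameter.

    mu0 : fitted E[Y | X, Z=0] (control-arm outcome regression)
    e   : fitted P(complier | X) (principal score; identifiable from the treated arm
          under one-sided noncompliance, where compliance is observed)
    lam : sensitivity parameter, lam = m_noncomplier / m_complier; lam = 1 is PI
    """
    return mu0 / (e + lam * (1 - e))

# hypothetical fitted values at three covariate profiles
mu0 = np.array([0.40, 0.55, 0.30])
e = np.array([0.70, 0.50, 0.80])
for lam in (0.8, 1.0, 1.25):
    print(lam, np.round(complier_control_mean(mu0, e, lam), 3))
```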

If the Stokes equations are properly discretized, it is known that the Schur complement matrix is spectrally equivalent to the identity matrix. Moreover, in the case of simple geometries, it is often observed that most of its eigenvalues are equal to one. These facts form the basis for the famous Uzawa algorithm. Despite recent progress in developing efficient iterative methods for solving the Stokes problem, the Uzawa algorithm remains popular in science and engineering, especially when accelerated by Krylov subspace methods. However, in complex geometries, the Schur complement matrix can become severely ill-conditioned, with a significant portion of non-unit eigenvalues, which makes the established Uzawa preconditioner inefficient. To explain this behaviour, we examine the Pressure Schur Complement formulation for the staggered finite-difference discretization of the Stokes equations. Firstly, we conjecture that the no-slip boundary conditions are the reason for the non-unit eigenvalues of the Schur complement matrix. Secondly, we demonstrate that its condition number increases with the surface-to-volume ratio of the flow domain. As an alternative to the Uzawa preconditioner, we propose using the diffusive SIMPLE preconditioner for geometries with a large surface-to-volume ratio. We show that the latter is much faster and more robust for such geometries. Furthermore, we show that using the SIMPLE preconditioner leads to more accurate practical computation of the permeability of tight porous media.
Keywords: Stokes problem, tight geometries, computing permeability, preconditioned Krylov subspace methods
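For reference, the classical Uzawa iteration mentioned above can be written in a few lines: each sweep solves the velocity block exactly and relaxes the pressure with the residual of the divergence constraint, so convergence is governed by the spectrum of the Schur complement $S = B A^{-1} B^{T}$. The sketch below runs on a small dense toy saddle-point system, not on a staggered finite-difference Stokes discretization, and the SIMPLE preconditioner and Krylov acceleration are not reproduced.

```python
import numpy as np

def uzawa(A, B, f, g, omega, tol=1e-10, maxit=5000):
    """Basic Uzawa iteration for the saddle-point system [[A, B^T], [B, 0]] [u; p] = [f; g]."""
    p = np.zeros(B.shape[0])
    for k in range(maxit):
        u = np.linalg.solve(A, f - B.T @ p)       # velocity solve
        r = B @ u - g                             # divergence residual
        if np.linalg.norm(r) < tol:
            return u, p, k
        p = p + omega * r                         # pressure relaxation
    return u, p, maxit

# toy dense saddle-point problem (not a Stokes discretization)
rng = np.random.default_rng(6)
A = np.eye(8) + 0.1 * (np.diag(np.ones(7), 1) + np.diag(np.ones(7), -1))
B = rng.standard_normal((3, 8))
u_true, p_true = rng.standard_normal(8), rng.standard_normal(3)
f, g = A @ u_true + B.T @ p_true, B @ u_true

S = B @ np.linalg.solve(A, B.T)                   # Schur complement (small, so formed explicitly)
omega = 1.0 / np.linalg.eigvalsh(S).max()         # safe relaxation parameter
u, p, iters = uzawa(A, B, f, g, omega)
print(iters, np.linalg.norm(u - u_true), np.linalg.norm(p - p_true))
```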

Nonparametric maximum likelihood estimators (MLEs) in inverse problems often have non-normal limit distributions, like Chernoff's distribution. However, if one considers smooth functionals of the model, with corresponding functionals of the MLE, one gets normal limit distributions and faster rates of convergence. We demonstrate this for a model for the incubation time of a disease. The usual approach to such models is to fit parametric distributions, like the Weibull and gamma distributions, which leads to inconsistent estimators. Smoothed bootstrap methods are discussed for constructing confidence intervals. The classical bootstrap, based on the nonparametric MLE itself, has been proved to be inconsistent in this situation.

This paper deals with unit root issues in time series analysis. It has long been known that unit root tests may be flawed when a series, although stationary, has a root close to unity. That motivated recent papers dedicated to autoregressive processes in which the bridge between stability and instability is expressed by means of time-varying coefficients. In this vein, the process we consider has a companion matrix $A_{n}$ with spectral radius $\rho(A_{n}) < 1$ satisfying $\rho(A_{n}) \rightarrow 1$, a situation we describe as `nearly unstable'. The question we investigate is the following: given an observed path supposed to come from a nearly unstable process, is it possible to test for the `extent of instability', i.e. to test how close we are to the unit root? In this regard, assuming that $\rho(A_{n})$ lies in an inner $O(n^{-\alpha})$-neighborhood of unity for some $0 < \alpha < 1$, we develop a strategy to estimate $\alpha$ and to test $\mathcal{H}_0 : \alpha = \alpha_0$ against $\mathcal{H}_1 : \alpha > \alpha_0$. Empirical evidence is given (on simulations and real time series) of the advantages of the flexibility afforded by such a procedure compared to the usual unit root tests and their binary responses. As a by-product, we also build a symmetric procedure for the usually overlooked situation where the dominant root lies around $-1$.
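To illustrate the regime the test targets, the sketch below simulates an AR(1) path with coefficient $\rho_n = 1 - c\,n^{-\alpha}$, which is stationary for every fixed $n$ but whose root approaches unity as $n$ grows; the least-squares estimate of the coefficient tracks this drift. The actual estimation of $\alpha$ and the test statistic are the paper's contribution and are not reproduced here.

```python
import numpy as np

def simulate_nearly_unstable_ar1(n, alpha, c=1.0, seed=0):
    """Simulate an AR(1) path with coefficient rho_n = 1 - c * n**(-alpha)."""
    rng = np.random.default_rng(seed)
    rho = 1.0 - c * n ** (-alpha)
    x = np.zeros(n)
    eps = rng.standard_normal(n)
    for t in range(1, n):
        x[t] = rho * x[t - 1] + eps[t]
    return x, rho

for n in (200, 2000, 20000):
    x, rho = simulate_nearly_unstable_ar1(n, alpha=0.5)
    rho_hat = np.sum(x[1:] * x[:-1]) / np.sum(x[:-1] ** 2)   # least-squares estimate
    print(f"n={n:6d}  rho_n={rho:.4f}  rho_hat={rho_hat:.4f}")
```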
