
This work considers Gaussian process interpolation with a periodized version of the Mat{\'e}rn covariance function (Stein, 1999, Section 6.7) with Fourier coefficients $\phi(\alpha^2 + j^2)^{-\nu - 1/2}$. Convergence rates are studied for the joint maximum likelihood estimation of $\nu$ and $\phi$ when the data are sampled according to the model. The mean integrated squared error is also analyzed with fixed and estimated parameters, showing that maximum likelihood estimation yields asymptotically the same error as if the ground truth were known. Finally, the case where the observed function is a ``deterministic'' element of a continuous Sobolev space is also considered, suggesting that bounding assumptions on some parameters can lead to different estimates.
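As a concrete illustration of the model, the sketch below builds the periodized Mat{\'e}rn covariance by truncating its Fourier series and uses it for noise-free Gaussian process interpolation. The parameter values, truncation level, and test function are illustrative assumptions, not quantities from the paper.

```python
import numpy as np

def periodic_matern_kernel(x, y, phi=1.0, alpha=1.0, nu=1.5, n_terms=200):
    """Periodized Matern covariance on [0, 1) built from its Fourier series.

    Truncates k(d) = phi * sum_j (alpha^2 + j^2)^(-nu - 1/2) cos(2*pi*j*d)
    at |j| <= n_terms.  phi, alpha, nu are illustrative values, not estimates.
    """
    d = np.subtract.outer(x, y)                        # pairwise differences
    j = np.arange(1, n_terms + 1)
    coef = phi * (alpha**2 + j**2) ** (-nu - 0.5)      # Fourier coefficients, j >= 1
    k = phi * alpha ** (-2 * nu - 1) * np.ones_like(d) # j = 0 term
    k += 2.0 * np.tensordot(np.cos(2 * np.pi * d[..., None] * j), coef,
                            axes=([-1], [0]))          # +/- j terms combined
    return k

# GP interpolation (noise-free conditioning) at new inputs
rng = np.random.default_rng(0)
x_train = rng.uniform(size=15)
y_train = np.sin(2 * np.pi * x_train)
x_test = np.linspace(0, 1, 200)

K = periodic_matern_kernel(x_train, x_train) + 1e-10 * np.eye(len(x_train))
k_star = periodic_matern_kernel(x_test, x_train)
posterior_mean = k_star @ np.linalg.solve(K, y_train)
```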

Related content

The problem of simultaneous estimation of location/scale parameters $\theta_1$ and $\theta_2$ of a general bivariate location/scale model, when the ordering between the parameters is known a priori (say, $\theta_1\leq \theta_2$), has been considered. We consider isotonic regression estimators based on the best location/scale equivariant estimators (BLEEs/BSEEs) of $\theta_1$ and $\theta_2$ with general weight functions. Let $\mathcal{D}$ denote the corresponding class of isotonic regression estimators of $(\theta_1,\theta_2)$. Under the sum of the weighted squared error loss function, we characterize admissible estimators within the class $\mathcal{D}$ and identify estimators that dominate the BLEE/BSEE of $(\theta_1,\theta_2)$. Our study unifies several studies reported in the literature for specific probability distributions having independent marginals. We also report a generalized version of the Katz (1963) result on the inadmissibility of certain estimators under a loss function that is a weighted sum of general loss functions for the component problems. A simulation study is also carried out to validate the findings of the paper.
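For intuition, here is a minimal sketch of the two-parameter isotonic regression step: when the unrestricted estimates violate the known ordering, both are pulled to their weighted average. The normal-location example and the sample-size weights are illustrative assumptions; the paper treats general location/scale models and general weight functions.

```python
import numpy as np

def isotonized_pair(d1, d2, w1=1.0, w2=1.0):
    """Weighted isotonic regression of two estimates under the order d1 <= d2.

    If the unrestricted estimators already satisfy the ordering they are kept;
    otherwise both are replaced by their weighted average, the two-point
    pool-adjacent-violators solution.  Weights w1, w2 are illustrative.
    """
    if d1 <= d2:
        return d1, d2
    pooled = (w1 * d1 + w2 * d2) / (w1 + w2)
    return pooled, pooled

# Example: normal location estimation with known ordering theta1 <= theta2,
# using sample means as the usual equivariant estimators.
rng = np.random.default_rng(1)
x1 = rng.normal(loc=0.0, size=20)      # theta1 = 0
x2 = rng.normal(loc=0.1, size=20)      # theta2 = 0.1
print(isotonized_pair(x1.mean(), x2.mean(), w1=len(x1), w2=len(x2)))
```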

The matrix factor model has drawn growing attention for its advantage in achieving two-directional dimension reduction simultaneously for matrix-structured observations. In this paper, we propose a simple iterative least squares algorithm for matrix factor models, in contrast to the Principal Component Analysis (PCA)-based methods in the literature. In detail, we first propose to estimate the latent factor matrices by projecting the observations with two deterministic weight matrices, chosen to diversify away the idiosyncratic components. We show that the inferences on factors are still asymptotically valid even if we overestimate both the numbers of row and column factors. We then estimate the row/column loading matrices by minimizing the squared loss function under certain identifiability conditions. The resultant estimators of the loading matrices are treated as the new weight/projection matrices, and thus the above update procedure can be iteratively performed until convergence. Theoretically, given the true dimensions of the factor matrices, we derive the convergence rates of the estimators for the loading matrices and common components at the $s$-th step of the iteration. Thorough numerical simulations are conducted to investigate the finite-sample performance of the proposed methods, and two real datasets associated with financial portfolios and multinational macroeconomic indices are used to illustrate their practical usefulness.
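A minimal alternating least squares loop of this flavor is sketched below, assuming observations $X_t = R F_t C' + E_t$ stacked in an array of shape (T, p, q). The random orthonormal initial weights stand in for the paper's deterministic diversifying projections, and the fixed iteration count is an illustrative choice.

```python
import numpy as np

def iterative_ls_matrix_factor(X, k, r, n_iter=50):
    """Alternating least squares sketch for the matrix factor model
    X_t = R F_t C' + E_t, with X of shape (T, p, q).

    Each pass estimates the factor matrices by projection with the current
    weights, then updates the row and column loadings by least squares.
    """
    T, p, q = X.shape
    rng = np.random.default_rng(0)
    R = np.linalg.qr(rng.normal(size=(p, k)))[0]   # initial row weights
    C = np.linalg.qr(rng.normal(size=(q, r)))[0]   # initial column weights
    for _ in range(n_iter):
        # factor estimates F_t = R' X_t C by projecting with current weights
        F = np.einsum('pk,tpq,qr->tkr', R, X, C)
        # least-squares update of the row loadings
        A = np.einsum('tpq,qr,tkr->pk', X, C, F)        # sum_t X_t C F_t'
        B = np.einsum('tkr,rq,qs,tls->kl', F, C.T, C, F)  # sum_t F_t C'C F_t'
        R = np.linalg.qr(A @ np.linalg.inv(B))[0]
        # least-squares update of the column loadings
        A2 = np.einsum('tpq,pk,tkr->qr', X, R, F)       # sum_t X_t' R F_t
        B2 = np.einsum('tkr,kp,pl,tls->rs', F, R.T, R, F)  # sum_t F_t' R'R F_t
        C = np.linalg.qr(A2 @ np.linalg.inv(B2))[0]
    F = np.einsum('pk,tpq,qr->tkr', R, X, C)
    return R, C, F
```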

We consider the problem of constructing minimax rate-optimal estimators for a doubly robust nonparametric functional that has witnessed applications across the causal inference and conditional independence testing literature. Minimax rate-optimal estimators for such functionals are typically constructed through higher-order bias corrections of plug-in and one-step type estimators and, in turn, depend on estimators of nuisance functions. In this paper, we consider a parallel question of interest regarding the optimality and/or sub-optimality of plug-in and one-step bias-corrected estimators for this specific doubly robust functional. Specifically, we verify that by using undersmoothing and sample splitting techniques when constructing nuisance function estimators, one can achieve minimax rates of convergence in all H\"older smoothness classes of the nuisance functions (i.e., the propensity score and outcome regression), provided that the marginal density of the covariates is sufficiently regular. Additionally, by establishing suitable lower bounds for these classes of estimators, we demonstrate the necessity of undersmoothing the nuisance function estimators to obtain minimax optimal rates of convergence.
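For concreteness, the sketch below implements a cross-fit one-step (bias-corrected plug-in) estimator of the representative doubly robust functional $E[Y(1)]$, with a propensity score and outcome regression as nuisances. Random forests are stand-in nuisance learners; the sketch does not implement the undersmoothing that the paper's optimality results require.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn.model_selection import KFold

def crossfit_aipw(X, A, Y, n_folds=2, seed=0):
    """Cross-fit one-step (AIPW) estimate of E[Y(1)].

    Nuisances are fit on the training fold and evaluated on the held-out
    fold, so each observation's influence-function value uses out-of-fold
    nuisance estimates.
    """
    psi = np.zeros(len(Y))
    for train, test in KFold(n_folds, shuffle=True, random_state=seed).split(X):
        # propensity score fit on the training fold
        pi_hat = RandomForestClassifier(random_state=seed).fit(X[train], A[train])
        # outcome regression fit on the treated units of the training fold
        treated = train[A[train] == 1]
        mu_hat = RandomForestRegressor(random_state=seed).fit(X[treated], Y[treated])
        pi = np.clip(pi_hat.predict_proba(X[test])[:, 1], 0.01, 0.99)
        mu = mu_hat.predict(X[test])
        psi[test] = mu + A[test] * (Y[test] - mu) / pi   # one-step correction
    return psi.mean(), psi.std() / np.sqrt(len(Y))
```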

General log-linear models specified by non-negative integer design matrices have a potentially wide range of applications, although the use of models without a genuine overall effect, that is, models which cannot be reparameterized to include a normalizing constant, is still rare. Log-linear models without the overall effect arise naturally in practice and can be handled in a similar manner to models with the overall effect. A novel iterative scaling procedure for the MLE computation under such models is proposed, and its convergence is proved. The results are illustrated using data from a recent clinical study.
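The paper's new procedure is not reproduced here, but the following classical Darroch-Ratcliff-style iterative scaling sketch for a Poisson log-linear model with a non-negative integer design matrix conveys the flavor of such algorithms. The worked 2x2 example is illustrative and, unlike the models emphasized in the paper, includes an overall effect.

```python
import numpy as np

def iterative_scaling_poisson(A, y, n_iter=500, tol=1e-10):
    """MM-type iterative scaling sketch for the Poisson log-linear MLE
    mu = exp(A @ theta) with a non-negative integer design matrix A.

    Update: theta_j += (1/C) * log((A'y)_j / (A'mu)_j), with C the maximum
    row sum of A; a generic classical step, not the paper's new procedure.
    """
    A = np.asarray(A, dtype=float)
    y = np.asarray(y, dtype=float)
    C = A.sum(axis=1).max()
    theta = np.zeros(A.shape[1])
    target = A.T @ y                       # observed sufficient statistics
    for _ in range(n_iter):
        mu = np.exp(A @ theta)
        step = np.log(target / (A.T @ mu)) / C
        theta += step
        if np.max(np.abs(step)) < tol:     # stop when scaling step vanishes
            break
    return theta

# 2x2 contingency table, independence model (overall effect included here)
A = np.array([[1, 1, 1], [1, 1, 0], [1, 0, 1], [1, 0, 0]])  # mean, row, column
y = np.array([10.0, 5.0, 7.0, 3.0])
print(iterative_scaling_poisson(A, y))
```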

In many investigations, the primary outcome of interest is difficult or expensive to collect. Examples include long-term health effects of medical interventions, measurements requiring expensive testing or follow-up, and outcomes only measurable on small panels as in marketing. This reduces effective sample sizes for estimating the average treatment effect (ATE). However, there is often an abundance of observations on surrogate outcomes not of primary interest, such as short-term health effects or online-ad click-through. We study the role of such surrogate observations in the efficient estimation of treatment effects. To quantify their value, we derive the semiparametric efficiency bounds on ATE estimation with and without the presence of surrogates and several intermediary settings. The difference between these characterizes the efficiency gains from optimally leveraging surrogates. We study two regimes: when the number of surrogate observations is comparable to primary-outcome observations and when the former dominates the latter. We take an agnostic missing-data approach circumventing strong surrogate conditions previously assumed. To leverage surrogates' efficiency gains, we develop efficient ATE estimation and inference based on flexible machine-learning estimates of nuisance functions appearing in the influence functions we derive. We empirically demonstrate the gains by studying the long-term earnings effect of job training.
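As a toy illustration of how surrogate-rich samples can be exploited, the sketch below uses a simple surrogate-index imputation: learn $E[Y \mid S, X]$ on the small labeled sample and impute outcomes on the large surrogate-only sample. This is not the paper's semiparametrically efficient estimator, which is built from derived influence functions; the random forest learner is an arbitrary stand-in.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def surrogate_index_ate(X_lab, S_lab, Y_lab, X_big, S_big, A_big, seed=0):
    """Surrogate-index style sketch: fit E[Y | S, X] on the small labeled
    sample, impute the primary outcome on the large surrogate-only sample,
    and contrast the imputed means by treatment arm.

    A simple illustration of leveraging surrogates, not the efficient
    influence-function-based estimator developed in the paper.
    """
    index = RandomForestRegressor(random_state=seed).fit(
        np.column_stack([S_lab, X_lab]), Y_lab)
    y_imputed = index.predict(np.column_stack([S_big, X_big]))
    return y_imputed[A_big == 1].mean() - y_imputed[A_big == 0].mean()
```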

We consider the problem of estimating a multivariate function $f_0$ of bounded variation (BV), from noisy observations $y_i = f_0(x_i) + z_i$ made at random design points $x_i \in \mathbb{R}^d$, $i=1,\ldots,n$. We study an estimator that forms the Voronoi diagram of the design points, and then solves an optimization problem that regularizes according to a certain discrete notion of total variation (TV): the sum of weighted absolute differences of parameters $\theta_i,\theta_j$ (which estimate the function values $f_0(x_i),f_0(x_j)$) at all neighboring cells $i,j$ in the Voronoi diagram. This is seen to be equivalent to a variational optimization problem that regularizes according to the usual continuum (measure-theoretic) notion of TV, once we restrict the domain to functions that are piecewise constant over the Voronoi diagram. The regression estimator under consideration hence performs (shrunken) local averaging over adaptively formed unions of Voronoi cells, and we refer to it as the Voronoigram, following the ideas in Koenker (2005), and drawing inspiration from Tukey's regressogram (Tukey, 1961). Our contributions in this paper span both the conceptual and theoretical frontiers: we discuss some of the unique properties of the Voronoigram in comparison to TV-regularized estimators that use other graph-based discretizations; we derive the asymptotic limit of the Voronoi TV functional; and we prove that the Voronoigram is minimax rate optimal (up to log factors) for estimating BV functions that are essentially bounded.
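A minimal sketch of the estimator's form follows: Delaunay edges identify neighboring Voronoi cells, and a TV-penalized least squares problem is solved over that graph. For simplicity the edge weights are set to 1, whereas the Voronoigram weights each edge by the measure of the shared cell boundary; cvxpy is used as a generic convex solver.

```python
import numpy as np
import cvxpy as cp
from scipy.spatial import Delaunay

def voronoigram_fit(x, y, lam=1.0):
    """TV-regularized least squares over the Voronoi adjacency graph,
    with unit edge weights as a simplification of the Voronoigram."""
    tri = Delaunay(x)
    edges = set()
    for simplex in tri.simplices:            # Delaunay edges = Voronoi neighbors
        for i in range(len(simplex)):
            for j in range(i + 1, len(simplex)):
                edges.add((min(simplex[i], simplex[j]),
                           max(simplex[i], simplex[j])))
    D = np.zeros((len(edges), len(y)))       # graph difference operator
    for k, (i, j) in enumerate(sorted(edges)):
        D[k, i], D[k, j] = 1.0, -1.0
    theta = cp.Variable(len(y))
    obj = 0.5 * cp.sum_squares(y - theta) + lam * cp.norm1(D @ theta)
    cp.Problem(cp.Minimize(obj)).solve()
    return theta.value

rng = np.random.default_rng(0)
x = rng.uniform(size=(50, 2))
y = (x[:, 0] > 0.5).astype(float) + 0.1 * rng.normal(size=50)
print(voronoigram_fit(x, y, lam=0.5)[:5])
```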

Augmenting the control arm of a randomized controlled trial (RCT) with external data may increase power, at the risk of introducing bias. Existing data fusion estimators generally rely on stringent assumptions or may have decreased coverage or power in the presence of bias. We frame the problem as one of data-adaptive experiment selection, where potential experiments include the RCT alone or the RCT combined with different candidate real-world datasets. To select and analyze the experiment with the optimal bias-variance tradeoff, we develop a novel experiment-selector cross-validated targeted maximum likelihood estimator (ES-CVTMLE). The ES-CVTMLE uses two bias estimates: 1) a function of the difference in conditional mean outcome under control between the RCT and combined experiments, and 2) an estimate of the average treatment effect on a negative control outcome (NCO). We define the asymptotic distribution of the ES-CVTMLE under varying magnitudes of bias and construct confidence intervals by Monte Carlo simulation. In simulations involving violations of identification assumptions, the ES-CVTMLE had better coverage than test-then-pool approaches and an NCO-based bias adjustment approach, and higher power than one implementation of a Bayesian dynamic borrowing approach. We further demonstrate the ability of the ES-CVTMLE to distinguish biased from unbiased external controls through a re-analysis of the effect of liraglutide on glycemic control from the LEADER trial. The ES-CVTMLE has the potential to improve power while providing relatively robust inference for future hybrid RCT-RWD studies.
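The toy sketch below caricatures only the experiment-selection idea: estimate the bias of each candidate combined experiment from the difference in mean control outcomes and pick the experiment minimizing estimated bias squared plus variance. It omits the targeting, cross-validation, NCO-based bias estimate, and Monte Carlo inference that define the actual ES-CVTMLE.

```python
import numpy as np

def select_experiment(rct, candidates):
    """Toy data-adaptive experiment selection: choose the experiment (RCT
    alone, or RCT plus one external control dataset) with the smallest
    estimated bias^2 + variance for the mean control outcome."""
    y0_rct = rct['Y'][rct['A'] == 0]
    experiments = {'rct_only': y0_rct}
    for name, ext in candidates.items():
        experiments[name] = np.concatenate([y0_rct, ext['Y']])
    best, best_score = None, np.inf
    for name, y0 in experiments.items():
        bias = y0.mean() - y0_rct.mean()      # crude bias estimate
        score = bias**2 + y0.var(ddof=1) / len(y0)
        if score < best_score:
            best, best_score = name, score
    return best

rng = np.random.default_rng(0)
rct = {'A': rng.integers(0, 2, 100), 'Y': rng.normal(size=100)}
ext_ok = {'Y': rng.normal(size=200)}             # unbiased external controls
ext_bad = {'Y': rng.normal(loc=1.0, size=200)}   # shifted, i.e. biased
print(select_experiment(rct, {'ok': ext_ok, 'bad': ext_bad}))
```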

In the last few years, various communication compression techniques have emerged as an indispensable tool helping to alleviate the communication bottleneck in distributed learning. However, despite the fact that {\em biased} compressors often show superior performance in practice when compared to the much more studied and understood {\em unbiased} compressors, very little is known about them. In this work we study three classes of biased compression operators, two of which are new, and their performance when applied to (stochastic) gradient descent and distributed (stochastic) gradient descent. We show for the first time that biased compressors can lead to linear convergence rates both in the single-node and distributed settings. We prove that the distributed compressed SGD method, employed with an error feedback mechanism, enjoys the ergodic rate $\mathcal{O}\left( \delta L \exp\left[-\frac{\mu K}{\delta L}\right] + \frac{(C + \delta D)}{K\mu}\right)$, where $\delta\ge 1$ is a compression parameter which grows when more compression is applied, $L$ and $\mu$ are the smoothness and strong convexity constants, $C$ captures stochastic gradient noise ($C=0$ if full gradients are computed on each node), and $D$ captures the variance of the gradients at the optimum ($D=0$ for over-parameterized models). Further, via a theoretical study of several synthetic and empirical distributions of communicated gradients, we shed light on why and by how much biased compressors outperform their unbiased variants. Finally, we propose several new biased compressors with promising theoretical guarantees and practical performance.
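To make the objects concrete, here is a minimal single-node sketch of a biased Top-$k$ compressor combined with the standard error feedback loop, run on an illustrative least squares problem. The step size and compression level are arbitrary assumptions, and the distributed, stochastic analysis of the paper is not reproduced.

```python
import numpy as np

def top_k(v, k):
    """Biased Top-k compressor: keep the k largest-magnitude coordinates."""
    out = np.zeros_like(v)
    idx = np.argpartition(np.abs(v), -k)[-k:]
    out[idx] = v[idx]
    return out

def ef_gd(grad, x0, lr=0.01, k=2, n_steps=2000):
    """Gradient descent with a biased compressor and error feedback:
    compress the error-corrected step and keep the residual as memory."""
    x, e = x0.copy(), np.zeros_like(x0)
    for _ in range(n_steps):
        g = grad(x)
        c = top_k(lr * g + e, k)   # compress the error-corrected step
        e = lr * g + e - c         # remember what the compressor dropped
        x = x - c
    return x

# quadratic test problem: f(x) = 0.5 * ||A x - b||^2
rng = np.random.default_rng(0)
A = rng.normal(size=(20, 10))
b = rng.normal(size=20)
x_star = np.linalg.lstsq(A, b, rcond=None)[0]
x_hat = ef_gd(lambda x: A.T @ (A @ x - b), np.zeros(10))
print(np.linalg.norm(x_hat - x_star))   # distance to the unconstrained optimum
```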

This article proposes a graphical model that handles mixed-type, multi-group data. The motivation for such a model originates from real-world observational data, which often contain groups of samples obtained under heterogeneous conditions in space and time, potentially resulting in differences in network structure among groups. Therefore, the i.i.d. assumption is unrealistic, and fitting a single graphical model on all data results in a network that does not accurately represent the between-group differences. In addition, real-world observational data are typically of mixed discrete-and-continuous type, violating the Gaussian assumption that is typical of graphical models, which leads to the model being unable to adequately recover the underlying graph structure. The proposed model takes these properties of the data into account by treating observed data as transformed latent Gaussian data, by means of the Gaussian copula, thereby allowing for the attractive properties of the Gaussian distribution, such as inferring the conditional independence structure from the inverse covariance matrix. The multi-group setting is addressed by jointly fitting a graphical model for each group and applying the fused group penalty to fuse similar graphs together. In an extensive simulation study, the proposed model is evaluated against alternative models and is better able to recover the true underlying graph structure for the different groups. Finally, the proposed model is applied to real production-ecological data pertaining to on-farm maize yield, in order to showcase the added value of the proposed method in generating new hypotheses for production ecologists.
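A simplified two-step sketch follows: a rank-based normal-scores transform approximates the Gaussian copula step, after which a graphical lasso is fit per group. This is only a caricature; the proposed model instead fits all groups jointly with a fused penalty that shrinks similar graphs together, which the sketch does not implement.

```python
import numpy as np
from scipy.stats import norm, rankdata
from sklearn.covariance import GraphicalLasso

def normal_scores(X):
    """Rank-based Gaussian-copula transform (normal scores), column by
    column; ties from discrete variables are handled by average ranks."""
    n = X.shape[0]
    U = np.apply_along_axis(rankdata, 0, X) / (n + 1)
    return norm.ppf(U)

def groupwise_graphs(groups, alpha=0.1):
    """Per-group Gaussian-copula graphical models: transform each group's
    data to normal scores, then estimate its precision matrix by graphical
    lasso.  No fusing across groups is performed in this sketch."""
    return [GraphicalLasso(alpha=alpha).fit(normal_scores(X)).precision_
            for X in groups]
```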

Sparse principal component analysis (SPCA) has been widely used for dimensionality reduction and feature extraction in high-dimensional data analysis. Despite many methodological and theoretical developments over the past two decades, the theoretical guarantees of the popular SPCA algorithm proposed by Zou, Hastie & Tibshirani (2006) based on the elastic net are still unknown. We aim to close this important theoretical gap in this paper. We first revisit the SPCA algorithm of Zou et al. (2006) and present our implementation. We also study a computationally more efficient variant of the SPCA algorithm in Zou et al. (2006) that can be considered its limiting case. We provide guarantees of convergence to a stationary point for both algorithms. We prove that, under a sparse spiked covariance model, both algorithms can recover the principal subspace consistently under mild regularity conditions. We show that their estimation error bounds match the best available bounds of existing works or the minimax rates up to some logarithmic factors. Moreover, we demonstrate the numerical performance of both algorithms in simulation studies.
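The alternating structure of the Zou et al. (2006) algorithm can be sketched as follows: with $A$ fixed, each column of $B$ solves an elastic-net regression; with $B$ fixed, $A$ is updated by a Procrustes (SVD) step. The mapping of penalties onto scikit-learn's ElasticNet parameterization, and all tuning values, are rough illustrative choices.

```python
import numpy as np
from sklearn.linear_model import ElasticNet

def spca_zht(X, k, l1=0.1, l2=1e-4, n_iter=50):
    """Alternating sketch in the spirit of Zou-Hastie-Tibshirani (2006) SPCA:
    elastic-net regressions for the sparse loadings B, then a Procrustes
    update of A from the SVD of X'X B."""
    n, p = X.shape
    A = np.linalg.svd(X, full_matrices=False)[2][:k].T   # init: PCA loadings
    B = np.zeros((p, k))
    for _ in range(n_iter):
        for j in range(k):
            # sparse regression of X @ A[:, j] on X; penalty mapping is rough
            enet = ElasticNet(alpha=l1 + l2, l1_ratio=l1 / (l1 + l2),
                              fit_intercept=False, max_iter=5000)
            B[:, j] = enet.fit(X, X @ A[:, j]).coef_
        U, _, Vt = np.linalg.svd(X.T @ (X @ B), full_matrices=False)
        A = U @ Vt                                       # Procrustes step
    norms = np.linalg.norm(B, axis=0)
    return B / np.where(norms > 0, norms, 1.0)           # normalized loadings

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))
print(spca_zht(X, k=2)[:, 0])
```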
