
In this article we study the asymptotic behaviour of the least squares estimator in a linear regression model based on random observation instances. We provide mild assumptions on the moments and dependence structure of the randomly spaced observations and the residuals under which the estimator is strongly consistent. In particular, we consider observation instances that are mutually negatively superadditive dependent, while for the residuals we merely assume that they are generated by some continuous function. In addition, we prove that the rate of convergence is proportional to the sampling rate $N$, and we complement our findings with a simulation study providing insights into finite-sample properties.
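
A minimal simulation sketch of the kind of experiment described above, assuming a simple linear model with i.i.d. uniform observation instances (the paper's specific dependence structures are not reproduced); it illustrates numerically how the estimation error shrinks as the sampling rate $N$ grows.

```python
import numpy as np

# Hedged sketch (not the paper's exact setup): y_i = a + b * t_i + e_i observed
# at randomly spaced instances t_i; ordinary least squares is fit for growing N.
rng = np.random.default_rng(0)
a_true, b_true = 1.0, 2.0

for N in (100, 1_000, 10_000):
    t = np.sort(rng.uniform(0.0, 1.0, size=N))   # random observation instances
    e = rng.normal(0.0, 0.5, size=N)             # residuals
    y = a_true + b_true * t + e

    X = np.column_stack([np.ones(N), t])          # design matrix
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    print(N, np.abs(coef - [a_true, b_true]))     # estimation error per coefficient
```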

Related content

A completely randomized experiment allows us to estimate the causal effect by the difference in the averages of the outcome under treatment and control. However, difference-in-means-type estimators behave poorly if the potential outcomes are heavy-tailed or contain a few extreme observations or outliers. We study an alternative estimator by Rosenbaum that estimates the causal effect by inverting a randomization test using ranks. We study the asymptotic properties of this estimator and develop a framework to compare the efficiencies of different estimators of the treatment effect in the setting of randomized experiments. In particular, we show that the variance of the Rosenbaum estimator is asymptotically, in the worst case, at most about 1.16 times the variance of the difference-in-means estimator, and can be much smaller when the potential outcomes are not light-tailed. We further derive a consistent estimator of the asymptotic standard error of the Rosenbaum estimator, which immediately yields a readily computable confidence interval for the treatment effect and thereby alleviates the expensive numerical calculations needed to implement Rosenbaum's original proposal. Furthermore, we propose a regression-adjusted version of the Rosenbaum estimator to incorporate additional covariate information in randomization inference, and we prove a gain in efficiency from this regression adjustment under a linear regression model. Finally, we illustrate through simulations that, unlike the difference-in-means-based estimators, whether unadjusted or regression-adjusted, these rank-based estimators are efficient and robust against heavy-tailed distributions, contamination, and various model misspecifications.
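
A hedged sketch of a Rosenbaum-style rank estimator: it finds the constant effect $\tau$ for which a Wilcoxon rank-sum statistic, computed after subtracting $\tau$ from the treated outcomes, equals its null expectation. The grid search and the simulated heavy-tailed data are illustrative choices, not the paper's implementation.

```python
import numpy as np
from scipy.stats import rankdata

def rosenbaum_point_estimate(y, z, grid):
    """Invert a rank-sum randomization test over a grid of candidate effects."""
    n = len(y)
    n_treated = int(z.sum())
    expected = n_treated * (n + 1) / 2.0          # null mean of the rank sum
    best_tau, best_gap = None, np.inf
    for tau in grid:
        adjusted = y - tau * z                    # remove the hypothesised effect
        rank_sum = rankdata(adjusted)[z == 1].sum()
        gap = abs(rank_sum - expected)
        if gap < best_gap:
            best_tau, best_gap = tau, gap
    return best_tau

rng = np.random.default_rng(1)
n = 200
z = rng.permutation(np.r_[np.ones(n // 2), np.zeros(n // 2)]).astype(int)
y = 0.5 * z + rng.standard_t(df=2, size=n)        # heavy-tailed outcomes, true effect 0.5
print(rosenbaum_point_estimate(y, z, np.linspace(-2, 3, 501)))
```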

Defeaturing consists in simplifying geometrical models by removing geometrical features that are considered irrelevant for a given simulation. Feature removal and simplification of computer-aided design models enable faster simulations for engineering analysis problems and simplify the meshing problem, which is otherwise often infeasible. The effects of defeaturing on the analysis are then typically neglected and, to date, there are very few strategies to quantitatively evaluate this impact. A good understanding of the effects of this process is an important step towards the automatic integration of design and analysis. We formalize the process of defeaturing by studying its effect on the solution of a Poisson equation defined on the geometrical model of interest containing a single feature, with Neumann boundary conditions on the feature itself. We derive an a posteriori estimator of the energy error between the solutions on the exact and the defeatured geometries in $\mathbb{R}^n$, $n\in\{2,3\}$, which is simple, reliable and efficient up to oscillations. The dependence of the estimator on the size of the features is explicit.
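
A hedged LaTeX sketch of the model problem described above; the symbols $\Omega$, $\Omega_0$, $\gamma$, $g$, $u$ and $u_0$ are notational assumptions, not taken from the paper.

```latex
% A Poisson problem with Neumann data on the feature boundary \gamma, solved on
% the exact geometry \Omega and on the defeatured geometry \Omega_0; the
% defeaturing error is measured in the energy norm.
\begin{aligned}
  -\Delta u   &= f \ \text{ in } \Omega,   \qquad & \partial_n u &= g \ \text{ on } \gamma \ \text{(feature boundary)},\\
  -\Delta u_0 &= f \ \text{ in } \Omega_0, \qquad & \text{error of interest:}\ & \ |u - u_0|_{H^1} \ \text{(energy norm)}.
\end{aligned}
```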

Data from both a randomized trial and an observational study are sometimes simultaneously available for evaluating the effect of an intervention. The randomized data typically allows for reliable estimation of average treatment effects but may be limited in sample size and patient heterogeneity for estimating conditional average treatment effects for a broad range of patients. Estimates from the observational study can potentially compensate for these limitations, but there may be concerns about whether confounding and treatment effect heterogeneity have been adequately addressed. We propose an approach for combining conditional treatment effect estimators from each source such that it aggressively weights toward the randomized estimator when bias in the observational estimator is detected. This allows the combination to be consistent for a conditional causal effect, regardless of whether assumptions required for consistent estimation in the observational study are satisfied. When the bias is negligible, the estimators from each source are combined for optimal efficiency. We show the problem can be formulated as a penalized least squares problem and consider its asymptotic properties. Simulations demonstrate the robustness and efficiency of the method in finite samples, in scenarios with bias or no bias in the observational estimator. We illustrate the method by estimating the effects of hormone replacement therapy on the risk of developing coronary heart disease in data from the Women's Health Initiative.
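
A hedged, illustrative sketch of the combination idea (not the authors' exact estimator): the discrepancy between the two conditional effect estimates is soft-thresholded, which is the closed form of an $\ell_1$-penalized least squares problem with a diagonal design. Small discrepancies are shrunk to zero, so the efficient observational estimate is retained; a large detected bias pushes the combination back toward the trial estimate.

```python
import numpy as np

def combine(tau_rct, tau_obs, lam):
    """Soft-threshold the apparent bias and debias the observational estimate."""
    bias_hat = tau_obs - tau_rct
    bias_kept = np.sign(bias_hat) * np.maximum(np.abs(bias_hat) - lam, 0.0)
    return tau_obs - bias_kept

tau_rct = np.array([0.8, 1.1, 0.9])   # hypothetical trial-based CATE estimates
tau_obs = np.array([0.9, 1.0, 2.5])   # observational estimates; the last one is biased
print(combine(tau_rct, tau_obs, lam=0.3))
# close to tau_obs where the sources agree, pulled toward tau_rct where they do not
```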

We present an elementary mathematical method to find the minimax estimator of the Bernoulli proportion $\theta$ under squared error loss when $\theta$ belongs to the restricted parameter space of the form $\Omega = [0, \eta]$ for some pre-specified constant $0 \leq \eta \leq 1$. This problem is inspired by the problem of estimating the rate of positive COVID-19 tests. The presented results and applications provide useful material for both instructors and students when teaching point estimation in statistical or machine learning courses.
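
A hedged numerical illustration of the worst-case risk notion behind the minimax criterion (not the paper's solution): it compares, over the restricted space $[0,\eta]$, the exact maximum squared-error risk of two simple candidate estimators, the truncated MLE and the classical unrestricted minimax estimator $(X + \sqrt{n}/2)/(n + \sqrt{n})$ truncated at $\eta$.

```python
import numpy as np
from scipy.stats import binom

def worst_case_risk(estimator, n, eta, grid_size=201):
    """Exact squared-error risk, maximised over a grid of theta in [0, eta]."""
    thetas = np.linspace(0.0, eta, grid_size)
    xs = np.arange(n + 1)
    estimates = estimator(xs, n)
    risks = [np.sum(binom.pmf(xs, n, th) * (estimates - th) ** 2) for th in thetas]
    return max(risks)

eta, n = 0.2, 30
mle       = lambda x, n: np.minimum(x / n, eta)
classical = lambda x, n: np.minimum((x + np.sqrt(n) / 2) / (n + np.sqrt(n)), eta)
print(worst_case_risk(mle, n, eta), worst_case_risk(classical, n, eta))
```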

Consider the problem of learning a large number of response functions simultaneously based on the same input variables. The training data consist of a single independent random sample of the input variables drawn from a common distribution together with the associated responses. The input variables are mapped into a high-dimensional linear space, called the feature space, and the response functions are modelled as linear functionals of the mapped features, with coefficients calibrated via ordinary least squares. We provide convergence guarantees on the worst-case excess prediction risk by controlling the convergence rate of the excess risk uniformly in the response function. The dimension of the feature map is allowed to tend to infinity with the sample size. The collection of response functions, although potentially infinite, is supposed to have a finite Vapnik-Chervonenkis dimension. The bound derived can be applied when building multiple surrogate models in a reasonable computing time.
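
A minimal sketch of the modelling setup described above; the feature map and the bank of response functions are illustrative choices, not the paper's. Many responses are calibrated simultaneously by ordinary least squares on one shared feature matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, n_responses = 500, 3, 50

X = rng.uniform(-1, 1, size=(n, d))
# a simple polynomial-style feature map phi: R^d -> R^p (an assumed choice)
Phi = np.hstack([np.ones((n, 1)), X, X ** 2, X[:, :1] * X[:, 1:2]])

# a bank of response functions observed with noise on the same inputs
Y = np.column_stack([
    np.sin((k + 1) * X[:, 0]) + 0.1 * rng.standard_normal(n)
    for k in range(n_responses)
])

# one least-squares solve calibrates the coefficients of every response at once
coefs, *_ = np.linalg.lstsq(Phi, Y, rcond=None)
print(coefs.shape)   # (p, n_responses): one coefficient vector per response
```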

In this work we consider the well-known Secretary Problem -- a number $n$ of elements, each having an adversarial value, arrive one-by-one according to some random order, and the goal is to choose the highest value element. The decisions are made online and are irrevocable -- if the algorithm decides to choose or not to choose the currently seen element, based on the previously observed values, it cannot change this decision later. The measure of success is the probability of selecting the highest value element, minimized over all adversarial assignments of values. We show existential and constructive upper bounds on the approximation of the success probability in this problem, depending on the entropy of the randomly chosen arrival order, including the lowest possible entropy $O(\log\log n)$ for which the success probability can be constant. We show that below the entropy level $\mathcal{H}<0.5\log\log n$, all algorithms succeed with probability $0$ if the random order is selected uniformly at random from some subset of permutations, while we are able to construct in polynomial time a non-uniform distribution with entropy $\mathcal{H}$ resulting in a success probability of at least $\Omega\left(\frac{1}{(\log\log n +3\log\log\log n -\mathcal{H})^{2+\epsilon}}\right)$, for any constant $\epsilon>0$. We also prove that no algorithm using entropy $\mathcal{H}=O((\log\log n)^a)$ can improve our result by more than polynomially, for any constant $0<a<1$. For entropy $\log\log n$ and larger, our analysis precisely quantifies both the multiplicative and the additive approximation of the success probability. In particular, we improve more than doubly exponentially on the best previously known additive approximation guarantee for the secretary problem.
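
For context, a sketch of the classical secretary baseline under a fully random (maximum-entropy) arrival order, not the low-entropy constructions studied above: observe the first $\approx n/e$ candidates, then accept the first one exceeding everything seen so far; the success probability tends to $1/e \approx 0.368$.

```python
import random

def secretary_success(n, trials=20_000):
    """Monte Carlo estimate of the classical 1/e-rule success probability."""
    threshold = round(n / 2.718281828)
    wins = 0
    for _ in range(trials):
        order = random.sample(range(n), n)        # values 0..n-1, best is n-1
        best_seen = max(order[:threshold], default=-1)
        # accept the first later candidate beating the observation phase,
        # otherwise we are forced to take the last element
        chosen = next((v for v in order[threshold:] if v > best_seen), order[-1])
        wins += (chosen == n - 1)
    return wins / trials

print(secretary_success(100))   # roughly 0.37
```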

We study the problem of density estimation for a random vector ${\boldsymbol X}$ in $\mathbb R^d$ with probability density $f(\boldsymbol x)$. For a spanning tree $T$ defined on the vertex set $\{1,\dots ,d\}$, the tree density $f_{T}$ is a product of bivariate conditional densities. The optimal spanning tree $T^*$ is the spanning tree $T$, for which the Kullback-Leibler divergence of $f$ and $f_{T}$ is the smallest. From i.i.d. data we identify the optimal tree $T^*$ and computationally efficiently construct a tree density estimate $f_n$ such that, without any regularity conditions on the density $f$, one has that $\lim_{n\to \infty} \int |f_n(\boldsymbol x)-f_{T^*}(\boldsymbol x)|d\boldsymbol x=0$ a.s. For Lipschitz continuous $f$ with bounded support, $\mathbb E\{ \int |f_n(\boldsymbol x)-f_{T^*}(\boldsymbol x)|d\boldsymbol x\}=O(n^{-1/4})$.
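
A hedged sketch of the Chow–Liu-style step behind the result above: pairwise mutual informations are estimated from data (here with simple 2-D histograms, an illustrative choice), and the optimal tree is the maximum-weight spanning tree with respect to these weights, which minimises the Kullback–Leibler divergence of the tree density from $f$.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

def histogram_mi(x, y, bins=10):
    """Plug-in mutual information estimate from a 2-D histogram."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    mask = pxy > 0
    return np.sum(pxy[mask] * np.log(pxy[mask] / np.outer(px, py)[mask]))

rng = np.random.default_rng(0)
n, d = 2000, 4
Z = rng.standard_normal((n, d)).cumsum(axis=1)   # a Markov chain: the true tree is a path

MI = np.zeros((d, d))
for i in range(d):
    for j in range(i + 1, d):
        MI[i, j] = histogram_mi(Z[:, i], Z[:, j])

# maximum-weight spanning tree = minimum spanning tree on negated weights
tree = minimum_spanning_tree(-MI)
print(np.transpose(np.nonzero(tree.toarray())))  # edges of the estimated tree
```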

The estimation of information measures of continuous distributions based on samples is a fundamental problem in statistics and machine learning. In this paper, we analyze estimates of differential entropy in $K$-dimensional Euclidean space, computed from a finite number of samples, when the probability density function belongs to a predetermined convex family $\mathcal{P}$. First, estimating differential entropy to any accuracy is shown to be infeasible if the differential entropy of densities in $\mathcal{P}$ is unbounded, clearly showing the necessity of additional assumptions. Subsequently, we investigate sufficient conditions that enable confidence bounds for the estimation of differential entropy. In particular, we provide confidence bounds for simple histogram based estimation of differential entropy from a fixed number of samples, assuming that the probability density function is Lipschitz continuous with known Lipschitz constant and known, bounded support. Our focus is on differential entropy, but we provide examples that show that similar results hold for mutual information and relative entropy as well.
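
A minimal sketch of the simple histogram plug-in estimator of differential entropy discussed above; the bin count and sample size are illustrative, and the paper's confidence bounds additionally require the Lipschitz constant and the known bounded support.

```python
import numpy as np

def histogram_differential_entropy(samples, support, bins):
    """Plug-in differential entropy: -sum_j p_j log(p_j / cell_volume)."""
    counts, edges = np.histogramdd(samples, bins=bins, range=support)
    cell_volume = np.prod([e[1] - e[0] for e in edges])   # uniform bin widths
    p = counts / counts.sum()                              # probability mass per cell
    mask = p > 0
    return float(-np.sum(p[mask] * np.log(p[mask] / cell_volume)))

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, size=(10_000, 2))   # Uniform([0,1]^2): true differential entropy is 0
print(histogram_differential_entropy(x, support=[(0.0, 1.0), (0.0, 1.0)], bins=20))
```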

This paper studies distributed binary testing of statistical independence under communication (information bits) constraints. While testing independence is relevant in various applications, distributed independence testing is particularly useful for event detection in sensor networks, where data correlation often occurs among the observations of devices in the presence of a signal of interest. Focusing on the case of two devices for tractability, we begin by investigating conditions on the Type I error probability restrictions under which the minimum Type II error admits an exponential behavior with the sample size. Then, we study the finite sample-size regime of this problem. We derive new upper and lower bounds on the gap between the minimum Type II error and its exponential approximation under different setups, including restrictions imposed on the vanishing Type I error probability. Our theoretical results shed light on the sample-size regimes at which approximations of the Type II error probability via error exponents become informative enough, in the sense of predicting the actual error probability well. We finally discuss an application of our results in which the gap is evaluated numerically, and we show that exponential approximations are not only tractable but also a valuable proxy for the Type II error probability in the finite-length regime.

From only positive (P) and unlabeled (U) data, a binary classifier can be trained with PU learning, in which the state of the art is unbiased PU learning. However, if the model is very flexible, the empirical risk on training data will go negative, and we will suffer from serious overfitting. In this paper, we propose a non-negative risk estimator for PU learning: when it is minimized, it is more robust against overfitting, and thus we are able to use very flexible models (such as deep neural networks) given limited P data. Moreover, we analyze the bias, consistency, and mean-squared-error reduction of the proposed risk estimator, and bound the estimation error of the resulting empirical risk minimizer. Experiments demonstrate that our risk estimator fixes the overfitting problem of its unbiased counterparts.
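
A hedged sketch of a non-negative PU risk of the kind described above, in the standard nnPU form $R = \pi\, R_p^+ + \max\bigl(0,\; R_u^- - \pi\, R_p^-\bigr)$; the loss choice, class-prior handling and clamping here are common conventions from the nnPU literature, not taken verbatim from the paper. The $\max(0,\cdot)$ is what keeps the empirical risk from going negative and thereby curbs the overfitting.

```python
import numpy as np

def softplus(z):
    # logistic surrogate loss: softplus(z) = log(1 + exp(z)), computed stably
    return np.logaddexp(0.0, z)

def nn_pu_risk(scores_p, scores_u, prior):
    """Non-negative PU risk from classifier scores on positive and unlabeled data."""
    risk_p_pos = softplus(-scores_p).mean()   # positives labelled +1
    risk_p_neg = softplus(scores_p).mean()    # positives labelled -1
    risk_u_neg = softplus(scores_u).mean()    # unlabeled data labelled -1
    return prior * risk_p_pos + max(0.0, risk_u_neg - prior * risk_p_neg)

rng = np.random.default_rng(0)
scores_p = rng.standard_normal(128)   # classifier outputs on positive data
scores_u = rng.standard_normal(512)   # classifier outputs on unlabeled data
print(nn_pu_risk(scores_p, scores_u, prior=0.3))
```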
