
In surveys, the interest lies in estimating finite population parameters such as population totals and means. In most surveys, some auxiliary information is available at the estimation stage. This information may be incorporated in the estimation procedures to increase their precision. In this article, we use random forests to estimate the functional relationship between the survey variable and the auxiliary variables. In recent years, random forests have become attractive as National Statistical Offices now have access to a variety of data sources, potentially exhibiting a large number of observations on a large number of variables. We establish the theoretical properties of model-assisted procedures based on random forests and derive corresponding variance estimators. A model-calibration procedure for handling multiple survey variables is also discussed. The results of a simulation study suggest that the proposed point and variance estimation procedures perform well in terms of bias, efficiency, and coverage of normal-based confidence intervals, in a wide variety of settings. Finally, we apply the proposed methods to data on radio audiences collected by Médiamétrie, a French audience measurement company.
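
As a concrete illustration of the model-assisted idea, the sketch below fits a random forest on a simple random sample and plugs it into the generalized difference estimator of a population total. The synthetic population, the sklearn forest, and the equal inclusion probabilities are illustrative assumptions, not the paper's implementation.

```python
# Hedged sketch of a model-assisted (generalized difference) estimator of a
# population total, with a random forest as the working model.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
N, n = 10_000, 500                        # population and sample sizes (assumed)
X = rng.uniform(0, 1, size=(N, 3))        # auxiliary variables, known for all units
y = 2 + np.sin(2 * np.pi * X[:, 0]) + X[:, 1] ** 2 + rng.normal(0, 0.3, N)

s = rng.choice(N, size=n, replace=False)  # simple random sample -> pi_i = n/N
pi = n / N

m = RandomForestRegressor(n_estimators=200, random_state=0).fit(X[s], y[s])
m_hat = m.predict(X)                      # predictions for every population unit

# Model-assisted estimator: sum of predictions plus design-weighted residuals.
t_ma = m_hat.sum() + np.sum((y[s] - m_hat[s]) / pi)
t_ht = np.sum(y[s] / pi)                  # Horvitz-Thompson benchmark

print(f"true total {y.sum():.1f}  HT {t_ht:.1f}  model-assisted {t_ma:.1f}")
```

The design-weighted residual term protects the estimator against misspecification of the working model, which is the defining feature of model-assisted estimation.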

Related content

Time-to-event endpoints are increasingly popular in phase II cancer trials. The standard statistical tool for such endpoints in one-armed trials is the one-sample log-rank test. It is widely known that the asymptotics ensuring the validity of this test do not fully take effect for small sample sizes. There have already been several attempts to solve this problem. Some do not allow easy power and sample size calculations, while others lack a clear theoretical motivation and require further considerations. The problem itself can partly be attributed to the dependence between the compensated counting process and its variance estimator. We provide a framework in which the variance estimator can be flexibly adapted to the situation at hand while maintaining its asymptotic properties. As an example, we suggest a variance estimator that is uncorrelated with the compensated counting process. Furthermore, we provide sample size and power calculations for any approach fitting into our framework. Finally, we compare several methods via simulation studies and the hypothetical setup of a phase II trial based on real-world data.
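
To fix ideas, here is a minimal sketch of the classical one-sample log-rank statistic, whose variance estimator (the denominator below) is exactly the quantity the proposed framework would replace. The exponential reference hazard and the simulated data are assumptions for illustration only.

```python
# Classical one-sample log-rank test: compare observed events O with the
# expected number E under a reference cumulative hazard.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
n = 40
event_time = rng.exponential(scale=1 / 0.5, size=n)   # true hazard 0.5 (assumed)
censor_time = rng.uniform(1, 3, size=n)
t = np.minimum(event_time, censor_time)               # observed follow-up times
d = (event_time <= censor_time).astype(int)           # event indicators

lam0 = 0.7                                            # reference hazard under H0 (assumed)
O = d.sum()                                           # observed events
E = (lam0 * t).sum()                                  # expected events = sum of Lambda0(t_i)

z = (O - E) / np.sqrt(E)        # compensated process over the standard variance estimator
print(f"O={O}, E={E:.1f}, z={z:.2f}, one-sided p={norm.cdf(z):.4f}")
```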

We consider Gaussian processes that can be decomposed into a smooth mean function and a stationary autocorrelated noise process, and develop a fully automatic nonparametric method for the simultaneous estimation of the mean and auto-covariance functions of such processes. Our empirical Bayes approach is data-driven, numerically efficient, and allows for the construction of confidence sets for the mean function. Performance is demonstrated in simulations and real data analyses. The method is implemented in the R package eBsc that accompanies the paper.
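
The paper's empirical Bayes procedure lives in the R package eBsc; purely as a generic illustration of the model class, the sketch below simulates a smooth mean plus stationary AR(1) noise and stands in a smoothing spline for the mean estimate, with a hand-picked smoothing level rather than a data-driven one.

```python
# Generic illustration of "smooth mean + stationary autocorrelated noise";
# the spline smoother is a stand-in, not the paper's empirical Bayes method.
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(7)
n = 400
t = np.linspace(0, 1, n)
noise = np.zeros(n)
for i in range(1, n):                          # stationary AR(1) noise
    noise[i] = 0.6 * noise[i - 1] + rng.normal(0, 0.2)
y = np.sin(4 * np.pi * t) + noise              # smooth mean + correlated noise

m_hat = UnivariateSpline(t, y, s=n * 0.05)(t)  # mean estimate (smoothing level assumed)
res = y - m_hat
acov = [np.mean(res[:n - k] * res[k:]) for k in range(4)]  # empirical autocovariances
print("estimated autocovariances:", np.round(acov, 3))
```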

This paper studies the asymptotic properties of, and improved inference methods for, kernel density estimation (KDE) for dyadic data. We first establish novel uniform convergence rates for dyadic KDE under general assumptions. As the existing analytic variance estimator is known to behave unreliably in finite samples, we propose a modified jackknife empirical likelihood procedure for inference. The proposed test statistic is self-normalised, so no variance estimator is required. In addition, it is asymptotically pivotal regardless of the presence of dyadic clustering. The results are extended to cover the practically relevant case of incomplete dyadic network data. Simulations show that this jackknife empirical likelihood-based inference procedure delivers precise coverage probabilities even for modest sample sizes and with incomplete dyadic data. Finally, we illustrate the method by studying airport congestion.
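
A minimal sketch of the dyadic KDE itself (not of the jackknife empirical likelihood inference) may help fix notation: there is one observation per unordered pair of nodes, and node-level effects induce the dyadic clustering. The data-generating process and bandwidth rule are illustrative assumptions.

```python
# Kernel density estimator for dyadic outcomes Y_ij, one per unordered pair.
import numpy as np

rng = np.random.default_rng(2)
n = 60
a = rng.normal(0, 1, n)                            # node-level effects
i, j = np.triu_indices(n, k=1)
y = a[i] + a[j] + rng.normal(0, 1, i.size)         # dyadic outcomes, clustered via a

h = 1.06 * y.std() * y.size ** (-1 / 5)            # rule-of-thumb bandwidth (assumed)
grid = np.linspace(y.min(), y.max(), 5)

def dyadic_kde(v):
    """Gaussian-kernel density estimate at point v, averaging over all dyads."""
    return np.mean(np.exp(-0.5 * ((v - y) / h) ** 2)) / (h * np.sqrt(2 * np.pi))

for v in grid:
    print(f"f_hat({v:+.2f}) = {dyadic_kde(v):.4f}")
```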

Suppose that particles are randomly distributed in $\mathbb{R}^d$ and are subject to identical stochastic motion independently of each other. The Smoluchowski process describes fluctuations of the number of particles in an observation region over time. This paper studies properties of Smoluchowski processes and considers related statistical problems. In the first part of the paper we revisit probabilistic properties of the Smoluchowski process in a unified and principled way: explicit formulas for generating functionals and moments are derived, conditions for stationarity and Gaussian approximation are discussed, and relations to other stochastic models are highlighted. The second part deals with statistics of Smoluchowski processes. We consider two different models of the particle displacement process: undeviated uniform motion (a particle moves with a random constant velocity along a straight line) and Brownian motion displacement. In the setting of undeviated uniform motion we study the problems of estimating the mean speed and the speed distribution, while for the Brownian displacement model the problem of estimating the diffusion coefficient is considered. In all these settings we develop estimators with provable accuracy guarantees.
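
A toy simulation of a Smoluchowski process under undeviated uniform motion in $d=1$ illustrates the object of study: Poisson-many particles move with random constant velocities, and we record the count inside a fixed observation window over time. All constants below are illustrative.

```python
# Counts of independently moving particles in a fixed window over time.
import numpy as np

rng = np.random.default_rng(3)
intensity, L = 2.0, 50.0                       # particle intensity and arena half-width
n_part = rng.poisson(intensity * 2 * L)        # Poisson number of particles
x0 = rng.uniform(-L, L, n_part)                # initial positions
v = rng.normal(0, 1, n_part)                   # random constant velocities

times = np.linspace(0, 10, 11)
counts = [(np.abs(x0 + v * t) <= 0.5).sum() for t in times]  # window [-1/2, 1/2]
print(dict(zip(times.round(1), counts)))       # fluctuating particle counts
```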

The paper describes a new class of capture-recapture models for closed populations when individual covariates are available. The novelty consists in combining a latent class model for the capture probabilities, where the class weights and the conditional distributions given the latent class may depend on covariates, with a model for the marginal distribution of the available covariates, as in Liu et al. (2017, Biometrika). In addition, the conditional distributions given the latent class and covariates are allowed to take into account any general form of serial dependence. A Fisher scoring algorithm for maximum likelihood estimation is presented, and a powerful result based on the implicit function theorem is used to show that the marginal distribution of the observed covariates is uniquely determined once an estimate of the probabilities of being never captured is available. Asymptotic results are outlined, and a procedure for constructing likelihood-based confidence intervals for the population total is presented. Two examples with real data are used to illustrate the proposed approach.
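
The paper's latent-class model is not reproduced here. As a bare-bones contrast, the following sketch treats the simplest closed-population setting (homogeneous capture probability, no covariates, no latent classes): it maximizes the likelihood conditional on being captured at least once and forms a Horvitz-Thompson-type estimate of the population size.

```python
# Simplest closed-population capture-recapture: conditional MLE of the capture
# probability, then N_hat = (number seen) / P(seen at least once).
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(8)
N_true, T, p_true = 500, 5, 0.3                  # assumed truth for the simulation
captures = rng.random((N_true, T)) < p_true
hist = captures[captures.any(axis=1)]            # only ever-captured individuals observed
n_obs, x = len(hist), hist.sum()                 # individuals seen, total captures

def neg_cond_loglik(p):
    # Log-likelihood of capture histories conditional on at least one capture.
    return -(x * np.log(p) + (n_obs * T - x) * np.log(1 - p)
             - n_obs * np.log(1 - (1 - p) ** T))

p_hat = minimize_scalar(neg_cond_loglik, bounds=(1e-4, 1 - 1e-4), method="bounded").x
N_hat = n_obs / (1 - (1 - p_hat) ** T)           # Horvitz-Thompson-type estimate
print(f"p_hat={p_hat:.3f}, N_hat={N_hat:.0f} (true N={N_true})")
```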

In Bayesian density estimation, a question of interest is how the number of components in a finite mixture model grows with the number of observations. We provide a novel perspective on this question by using results from stochastic geometry to show that the growth rate of the expected number of components of a finite mixture model whose components belong to the unit simplex $\Delta^{J-1}$ of the Euclidean space $\mathbb{R}^J$ is $(\log n)^{J-1}$. We also provide a central limit theorem for the number of components. In addition, we relate our model to a classical non-parametric density estimator based on a Pólya tree. Combining the latter with techniques from Choquet theory, we are able to retrieve the mixture weights. We also give the rate of convergence of the Pólya tree posterior to the Dirac measure on the weights. We further present an algorithm to correctly specify the number of components in a latent Dirichlet allocation (LDA) analysis.
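
A quick numeric illustration of how slowly $(\log n)^{J-1}$ grows may help calibrate intuition for this rate; the values of $n$ and $J$ below are arbitrary.

```python
# Tabulate the (log n)^(J-1) growth rate for a few sample sizes and dimensions.
import numpy as np

for J in (2, 3, 5):
    rates = [np.log(n) ** (J - 1) for n in (1e3, 1e6, 1e9)]
    print(f"J={J}:", "  ".join(f"{r:10.1f}" for r in rates))
```

Even at a billion observations, the expected number of components stays small for moderate $J$, which is the practical content of the logarithmic rate.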

We consider a stationary linear AR($p$) model with unknown mean. The autoregression parameters, as well as the distribution function (d.f.) $G$ of the innovations, are unknown. The observations contain gross errors (outliers). The distribution of the outliers is unknown and arbitrary, and their intensity is $\gamma n^{-1/2}$ with an unknown $\gamma$, where $n$ is the sample size. The essential problem in this situation is to test the normality of the innovations, since normality, as is known, ensures the optimality properties of the widely used least squares procedures. To construct and study a Pearson chi-square type test for normality, we estimate the unknown mean and the autoregression parameters, and then use the estimates to compute the residuals of the autoregression. Based on the residuals, we construct a residual empirical distribution function (r.e.d.f.), which is a counterpart of the (inaccessible) e.d.f. of the autoregression innovations. Our Pearson statistic is a functional of the r.e.d.f., and its asymptotic distributions under the hypothesis and under local alternatives are determined by the asymptotic behavior of the r.e.d.f. In the present work, we find and substantiate in detail the stochastic expansions of the r.e.d.f. in two situations. In the first one, the d.f. $G(x)$ of the innovations does not depend on $n$; we need this result to investigate the test statistic under the hypothesis. In the second situation, $G(x)$ depends on $n$ and has the form of the mixture $G(x) = A_n(x) = (1 - n^{-1/2})G_0(x) + n^{-1/2}H(x)$; we need this result to study the power of the test under local alternatives.
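
The following sketch walks through the workflow the abstract describes, under simplifying assumptions (OLS estimation of the AR parameters, equiprobable cells, a textbook degrees-of-freedom correction): simulate an AR($p$) series, estimate the parameters, form the residuals, and apply a Pearson chi-square test for normality of the innovations.

```python
# AR(p) fit -> residuals -> Pearson chi-square test for normal innovations.
import numpy as np
from scipy.stats import norm, chi2

rng = np.random.default_rng(4)
n, p, phi, mu = 1000, 2, np.array([0.5, -0.3]), 1.0   # assumed truth
x = np.zeros(n + p) + mu
for t in range(p, n + p):                             # simulate AR(2), normal innovations
    x[t] = mu + phi @ (x[t - p:t][::-1] - mu) + rng.normal()
x = x[p:]

# OLS fit of the AR(p) parameters (the intercept absorbs the unknown mean).
Z = np.column_stack([np.ones(n - p)] + [x[p - k - 1:n - k - 1] for k in range(p)])
beta, *_ = np.linalg.lstsq(Z, x[p:], rcond=None)
res = x[p:] - Z @ beta                                # residuals ~ innovations

k = 10                                                # equiprobable cells under fitted normal
cuts = norm.ppf(np.linspace(0, 1, k + 1)[1:-1], loc=res.mean(), scale=res.std())
observed = np.bincount(np.searchsorted(cuts, res), minlength=k)
expected = len(res) / k
stat = ((observed - expected) ** 2 / expected).sum()
print(f"chi2 = {stat:.2f}, p = {chi2.sf(stat, df=k - 1 - 2):.3f}")  # 2 fitted params
```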

Stock trend forecasting, which aims to predict future stock trends, is crucial for investors seeking to maximize profits from the stock market. In recent years, many event-driven methods have utilized events extracted from news, social media, and discussion boards to forecast stock trends. However, existing event-driven methods have two main shortcomings: 1) overlooking the influence of event information differentiated by stock-dependent properties; 2) neglecting the effect of event information from other related stocks. In this paper, we propose a relational event-driven stock trend forecasting (REST) framework, which addresses the shortcomings of existing methods. To remedy the first shortcoming, we model the stock context and learn the effect of event information on stocks under different contexts. To address the second shortcoming, we construct a stock graph and design a new propagation layer to propagate the effect of event information from related stocks, as sketched below. Experimental studies on real-world data demonstrate the effectiveness of our REST framework. The results of an investment simulation show that our framework can achieve a higher return on investment than the baselines.
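
This is not the REST implementation; the sketch below only illustrates the generic idea of a propagation layer that mixes a stock's own event effect with those of its graph neighbours. Dimensions, the adjacency matrix, and the single linear map are illustrative assumptions.

```python
# Generic graph-propagation step over a stock relation graph.
import numpy as np

rng = np.random.default_rng(5)
n_stocks, d = 5, 8
H = rng.normal(size=(n_stocks, d))          # per-stock event-effect embeddings
A = (rng.random((n_stocks, n_stocks)) < 0.4).astype(float)
np.fill_diagonal(A, 0)                      # stock relation graph (e.g. same sector)

A_hat = A / np.clip(A.sum(1, keepdims=True), 1, None)    # row-normalise neighbours
W = rng.normal(size=(2 * d, d)) / np.sqrt(2 * d)         # learnable map (random here)

# One step: concatenate own effect with aggregated neighbour effects, then ReLU.
H_prop = np.maximum(np.concatenate([H, A_hat @ H], axis=1) @ W, 0.0)
print(H_prop.shape)                         # (n_stocks, d): input to a prediction head
```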

Human pose estimation aims to locate the human body parts and build a human body representation (e.g., a body skeleton) from input data such as images and videos. It has drawn increasing attention during the past decade and has been utilized in a wide range of applications including human-computer interaction, motion analysis, augmented reality, and virtual reality. Although recently developed deep learning-based solutions achieve high performance in human pose estimation, challenges remain due to insufficient training data, depth ambiguities, and occlusions. The goal of this survey paper is to provide a comprehensive review of recent deep learning-based solutions for both 2D and 3D pose estimation via a systematic analysis and comparison of these solutions based on their input data and inference procedures. More than 240 research papers since 2014 are covered in this survey. Furthermore, 2D and 3D human pose estimation datasets and evaluation metrics are included. Quantitative performance comparisons of the reviewed methods on popular datasets are summarized and discussed. Finally, we conclude by discussing the remaining challenges, applications, and future research directions. We also provide a regularly updated project page at \url{//github.com/zczcwh/DL-HPE}.

We propose a new method of estimation in topic models that is not a variation on the existing simplex-finding algorithms and that estimates the number of topics $K$ from the observed data. We derive new finite sample minimax lower bounds for the estimation of the topic matrix $A$, as well as new upper bounds for our proposed estimator. We describe the scenarios in which our estimator is minimax adaptive. Our finite sample analysis is valid for any number of documents $n$, individual document length $N_i$, dictionary size $p$, and number of topics $K$, and both $p$ and $K$ are allowed to increase with $n$, a situation not handled well by previous analyses. We complement our theoretical results with a detailed simulation study. We illustrate that the new algorithm is faster and more accurate than the current ones, even though it starts with the computational and theoretical disadvantage of not knowing the correct number of topics $K$, while the competing methods are provided with its correct value in our simulations.
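
The estimator proposed in the paper is not reproduced here. As a plainly-labeled generic alternative, the sketch below selects the number of topics by held-out perplexity with sklearn's LDA on a synthetic corpus, simply to make the objects ($n$, $N_i$, $p$, $K$, $A$) concrete.

```python
# Choosing K by held-out perplexity (a generic baseline, not the paper's method).
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

rng = np.random.default_rng(6)
n_docs, p, K_true = 200, 50, 3
topics = rng.dirichlet(np.full(p, 0.1), size=K_true)        # rows of the topic matrix A
weights = rng.dirichlet(np.full(K_true, 0.5), size=n_docs)  # document mixture weights
counts = np.vstack([rng.multinomial(100, w @ topics) for w in weights])  # N_i = 100

train, test = counts[:150], counts[150:]
for K in (2, 3, 4, 5):
    lda = LatentDirichletAllocation(n_components=K, random_state=0).fit(train)
    print(f"K={K}: held-out perplexity {lda.perplexity(test):.1f}")
```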
