
We provide a comprehensive study of a nonparametric likelihood ratio test of whether a random sample follows a distribution in a prespecified class of shape-constrained densities. While the conventional likelihood ratio is not well-defined for general nonparametric problems, we consider a working sub-class of alternative densities that leads to test statistics with desirable properties. Under the null, a scaled and centered version of the test statistic is asymptotically normal and distribution-free; this follows from the fact that the asymptotically dominant term under the null depends only on a function of spacings of transformed outcomes that are uniformly distributed. The nonparametric maximum likelihood estimator (NPMLE) under the hypothesis class appears only in an average log-density ratio, which often converges to zero at a faster rate than the asymptotically normal term under the null but diverges under general alternatives, so that the test is consistent. The main technical challenge is to establish these results for the log-density ratio, which requires a case-by-case analysis, including new results for k-monotone densities with unbounded support and completely monotone densities that are of independent interest. A bootstrap statistic obtained by simulating from the NPMLE is shown to have the same limiting distribution as the test statistic.
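The abstract does not spell out the statistic itself, but the distribution-free behaviour it describes can be illustrated with a classical Moran-type spacings statistic: after transforming the sorted sample by the null CDF, the values are uniform under the null, so any functional of their spacings has a null distribution that does not depend on the hypothesized density. A minimal Python sketch of this idea, not the paper's exact statistic:

```python
import numpy as np
from scipy import stats

def moran_spacings_statistic(x, null_cdf):
    """Moran-type spacings statistic: an illustrative distribution-free
    functional of spacings, not the statistic studied in the paper."""
    n = len(x)
    u = np.sort(null_cdf(x))                         # uniform(0,1) under the null
    d = np.diff(np.concatenate(([0.0], u, [1.0])))   # the n + 1 spacings
    return -np.sum(np.log((n + 1) * d))              # null law is free of F0

rng = np.random.default_rng(0)
x = rng.exponential(size=200)
print(moran_spacings_statistic(x, stats.expon.cdf))
```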

Related Content

Likelihood-free inference methods typically make use of a distance between simulated and real data. A common example is the maximum mean discrepancy (MMD), which has previously been used for approximate Bayesian computation, minimum distance estimation, generalised Bayesian inference, and within the nonparametric learning framework. The MMD is commonly estimated at a root-$m$ rate, where $m$ is the number of simulated samples. This can lead to significant computational challenges, since a large $m$ is required to obtain an accurate estimate, and accuracy is crucial for parameter estimation. In this paper, we propose a novel estimator for the MMD with significantly improved sample complexity. The estimator is particularly well suited for computationally expensive smooth simulators with low- to mid-dimensional inputs. This claim is supported through both theoretical results and an extensive simulation study on benchmark simulators.
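For reference, the standard unbiased U-statistic estimator of the squared MMD with a Gaussian kernel, which attains the root-$m$ rate discussed above, can be written in a few lines of numpy. This is only the baseline being accelerated; the paper's improved estimator differs:

```python
import numpy as np

def mmd2_unbiased(x, y, bandwidth=1.0):
    """Unbiased U-statistic estimate of the squared MMD, Gaussian kernel."""
    def gram(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * bandwidth ** 2))
    kxx, kyy, kxy = gram(x, x), gram(y, y), gram(x, y)
    m, n = len(x), len(y)
    np.fill_diagonal(kxx, 0.0)   # drop diagonal terms for unbiasedness
    np.fill_diagonal(kyy, 0.0)
    return kxx.sum() / (m * (m - 1)) + kyy.sum() / (n * (n - 1)) - 2 * kxy.mean()

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=(500, 2))   # "real" data
y = rng.normal(0.5, 1.0, size=(400, 2))   # simulator output
print(mmd2_unbiased(x, y))
```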

This paper studies multivariate nonparametric change point localization and inference problems. The data consist of a multivariate time series with potentially short-range dependence, whose distribution is assumed to be piecewise constant with densities in a H\"{o}lder class. The change points, or times at which the distribution changes, are unknown. We derive the limiting distributions of the change point estimators when the minimal jump size vanishes or remains constant, a first in the change point literature. We introduce two new ingredients: a consistent change point estimator for data with short-range dependence, and a consistent block-type long-run variance estimator. Numerical evidence is provided to support our theoretical results.
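As an illustration of nonparametric change point localization (not the paper's estimator, which handles multivariate data with short-range dependence and comes with inference guarantees), a simple localizer maximizes a scaled distance between the empirical CDFs of the left and right segments:

```python
import numpy as np

def ks_cusum_changepoint(x, grid_size=50, trim=20):
    """Locate one change point by maximizing a scaled Kolmogorov-Smirnov
    distance between left and right empirical CDFs (illustrative only)."""
    n = len(x)
    grid = np.quantile(x, np.linspace(0.05, 0.95, grid_size))
    indicators = (x[:, None] <= grid[None, :]).astype(float)  # n x grid
    csum = np.cumsum(indicators, axis=0)
    best_t, best_stat = None, -np.inf
    for t in range(trim, n - trim):
        left = csum[t - 1] / t                   # empirical CDF of x[:t]
        right = (csum[-1] - csum[t - 1]) / (n - t)
        stat = np.sqrt(t * (n - t) / n) * np.max(np.abs(left - right))
        if stat > best_stat:
            best_t, best_stat = t, stat
    return best_t, best_stat

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0, 1, 300), rng.normal(1, 1, 300)])
print(ks_cusum_changepoint(x))   # estimated location near 300
```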

We propose a distributional outcome regression (DOR) with scalar and distributional predictors. Distributional observations are represented via quantile functions, and the dependence on predictors is modelled via functional regression coefficients. DOR expands the existing literature with three key contributions: handling both scalar and distributional predictors, ensuring a jointly monotone regression structure without enforcing monotonicity on individual functional regression coefficients, and providing statistical inference for the estimated functional coefficients. Bernstein polynomial bases are employed to construct a jointly monotone regression structure without over-restricting individual functional regression coefficients to be monotone. Asymptotic projection-based joint confidence bands and a statistical test of global significance are developed to quantify uncertainty for the estimated functional regression coefficients. Simulation studies illustrate the good performance of the DOR model in accurately estimating the distributional effects. The method is applied to continuously monitored heart rate and physical activity data of 890 participants of the Baltimore Longitudinal Study of Aging. Daily heart rate reserve, quantified via a subject-specific distribution of minute-level heart rate, is modelled additively as a function of age, gender, and BMI, with an adjustment for the daily distribution of minute-level physical activity counts. The findings provide novel scientific insights into the epidemiology of heart rate reserve.
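The key device is that a Bernstein polynomial expansion with nondecreasing coefficients is itself a nondecreasing function, which yields a valid (monotone) quantile function without forcing each functional coefficient to be monotone on its own. A small sketch of this property, with purely illustrative coefficient values:

```python
import numpy as np
from scipy.special import comb

def bernstein_basis(p, K):
    """Bernstein polynomial basis of degree K evaluated at p in [0, 1]."""
    k = np.arange(K + 1)
    return comb(K, k) * p[:, None] ** k * (1 - p[:, None]) ** (K - k)

# A Bernstein expansion with nondecreasing coefficients is nondecreasing in p,
# hence a valid quantile function -- the joint-monotonicity device the
# abstract refers to (coefficient values here are illustrative).
p = np.linspace(0, 1, 101)
beta = np.array([0.0, 0.5, 0.6, 1.2, 2.0])      # nondecreasing coefficients
q = bernstein_basis(p, K=len(beta) - 1) @ beta  # monotone quantile function
assert np.all(np.diff(q) >= -1e-12)
```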

Over the past two decades, point clouds with irregular shapes have been collected in exponentially growing volumes across various areas. Motivated by the importance of solid modeling for point clouds, we develop a novel and efficient smoothing tool based on multivariate splines over tetrahedral partitions to extract the underlying signal and build a 3D solid model from the point cloud. The proposed smoothing method can denoise or deblur the point cloud effectively and provides a multi-resolution reconstruction of the actual signal. In addition, it can handle sparse and irregularly distributed point clouds and recover the underlying trajectory. The proposed smoothing and interpolation method also provides a natural way of performing numerosity reduction on the data. Furthermore, we establish theoretical guarantees for the proposed method. Specifically, we derive the convergence rate and asymptotic normality of the proposed estimator and show that the convergence rate achieves the optimal nonparametric rate. Through extensive simulation studies and a real data example, we demonstrate the superiority of the proposed method over traditional smoothing methods in terms of estimation accuracy and efficiency of data reduction.
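As a simplified stand-in for the proposed spline machinery, piecewise-linear (degree-1) interpolation over a Delaunay tetrahedral partition already illustrates the basic reconstruction step on scattered 3D data; the signal and noise level below are hypothetical:

```python
import numpy as np
from scipy.interpolate import LinearNDInterpolator

# Degree-1 "spline" (piecewise-linear) reconstruction over a Delaunay
# tetrahedral partition of scattered 3D points -- a simplified stand-in for
# the penalized multivariate splines developed in the paper.
rng = np.random.default_rng(0)
pts = rng.uniform(-1, 1, size=(2000, 3))              # point cloud locations
signal = np.sin(np.pi * pts[:, 0]) * np.cos(np.pi * pts[:, 1]) + pts[:, 2] ** 2
values = signal + rng.normal(0, 0.1, size=2000)       # observed noisy signal

interp = LinearNDInterpolator(pts, values)            # builds the tetrahedral partition
query = rng.uniform(-0.9, 0.9, size=(5, 3))
print(interp(query))                                  # reconstruction at new sites
```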

Covariate measurement error in nonparametric regression is a common problem in nutritional epidemiology, geostatistics, and other fields. Over the last two decades, this problem has received substantial attention in the frequentist literature. Bayesian approaches for handling measurement error have only been explored recently and are surprisingly successful, despite lacking a proper theoretical justification for the asymptotic performance of the estimators. By specifying a Gaussian process prior on the regression function and a Dirichlet process Gaussian mixture prior on the unknown distribution of the unobserved covariates, we show that the posterior distributions of the regression function and the unknown covariate density attain optimal rates of contraction adaptively over a range of H\"{o}lder classes, up to logarithmic terms. This improves upon existing classical frequentist results, which require knowledge of the smoothness of the underlying function to deliver optimal risk bounds. We also develop a novel surrogate prior for approximating the Gaussian process prior that leads to efficient computation and preserves the covariance structure, thereby facilitating easy prior elicitation. We demonstrate the empirical performance of our approach and compare it with competitors in a wide range of simulation experiments and a real data example.
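For orientation, the Gaussian process component alone reduces to the standard GP posterior mean formula. The sketch below deliberately ignores the measurement error and the Dirichlet process mixture prior, which are the paper's actual contributions, and simply shows the naive fit on error-prone covariates:

```python
import numpy as np

def gp_posterior_mean(x_train, y_train, x_test, length=0.3, sigma=0.1):
    """Posterior mean of GP regression with a squared-exponential kernel.
    Only the GP component of the model; measurement error correction and the
    Dirichlet process mixture prior on the covariate density are omitted."""
    def k(a, b):
        return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length ** 2)
    K = k(x_train, x_train) + sigma ** 2 * np.eye(len(x_train))
    alpha = np.linalg.solve(K, y_train)
    return k(x_test, x_train) @ alpha

rng = np.random.default_rng(0)
w = rng.uniform(0, 1, 100)                 # true (unobserved) covariates
x = w + rng.normal(0, 0.05, 100)           # covariates observed with error
y = np.sin(2 * np.pi * w) + rng.normal(0, 0.1, 100)
grid = np.linspace(0, 1, 5)
print(gp_posterior_mean(x, y, grid))       # naive fit ignoring the error
```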

Maximum inner product search, or top-k retrieval, on sparse vectors is well understood in information retrieval, with a number of mature algorithms that solve it exactly. However, all existing algorithms are tailored to text and frequency-based similarity measures. To achieve optimal memory footprint and query latency, they rely on the near-stationarity of documents and on laws governing natural languages. We consider, instead, a setup in which collections are streaming -- necessitating dynamic indexing -- and where indexing and retrieval must work with arbitrarily distributed real-valued vectors. As we show, existing algorithms are no longer competitive in this setup, even against naive solutions. We investigate this gap and present a novel approximate solution, called Sinnamon, that can efficiently retrieve the top-k results for sparse real-valued vectors drawn from arbitrary distributions. Notably, Sinnamon offers levers to trade off memory consumption, latency, and accuracy, making the algorithm suitable for constrained applications and systems. We give theoretical results on the error introduced by the approximate nature of the algorithm, and present an empirical evaluation of its performance on two hardware platforms with synthetic and real datasets. We conclude by laying out concrete directions for future research on this general top-k retrieval problem over sparse vectors.
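The naive exact baseline referred to above is easy to state: a coordinate-wise inverted index that accumulates inner products over the coordinates shared with the query. A minimal dynamic sketch (this is the baseline, not Sinnamon):

```python
import heapq
from collections import defaultdict

class SparseTopK:
    """Exact top-k maximum inner product search over sparse real-valued
    vectors via a coordinate-wise inverted index (naive exact baseline)."""
    def __init__(self):
        self.index = defaultdict(list)        # coordinate -> [(doc_id, value)]

    def add(self, doc_id, vec):
        """Dynamic indexing: vec is a dict mapping coordinate -> value."""
        for coord, value in vec.items():
            self.index[coord].append((doc_id, value))

    def search(self, query, k):
        scores = defaultdict(float)
        for coord, q_val in query.items():    # touch only shared coordinates
            for doc_id, value in self.index[coord]:
                scores[doc_id] += q_val * value
        return heapq.nlargest(k, scores.items(), key=lambda kv: kv[1])

idx = SparseTopK()
idx.add(0, {3: 0.5, 17: 1.2})
idx.add(1, {3: -0.4, 42: 2.0})
print(idx.search({3: 1.0, 42: 1.0}, k=2))
```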

Reverse Unrestricted MIxed DAta Sampling (RU-MIDAS) regressions are used to model high-frequency responses by means of low-frequency variables. However, due to the periodic structure of RU-MIDAS regressions, the dimensionality grows quickly if the frequency mismatch between the high- and low-frequency variables is large, while the number of high-frequency observations available for estimation decreases. We propose to counteract this reduction in sample size by pooling the high-frequency coefficients, and to further reduce the dimensionality through a sparsity-inducing convex regularizer that accounts for the temporal ordering among the different lags. To this end, the regularizer prioritizes the inclusion of lagged coefficients according to the recency of the information they contain. We demonstrate the proposed method in an empirical application to daily realized volatility forecasting, where we explore whether modeling high-frequency volatility data in terms of low-frequency macroeconomic data pays off.
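One simple surrogate for such a recency-prioritizing regularizer is a weighted lasso in which the penalty weight grows with the lag, implemented by rescaling columns. The paper's structured convex regularizer is more refined, so the following is only illustrative:

```python
import numpy as np
from sklearn.linear_model import Lasso

def recency_weighted_lasso(X, y, n_lags, alpha=0.1, decay=1.5):
    """Weighted-lasso surrogate for a lag-ordered regularizer: older lags
    receive larger penalty weights, so recent information enters first.
    Columns are assumed grouped per predictor, ordered lag 1..n_lags."""
    lags = np.tile(np.arange(1, n_lags + 1), X.shape[1] // n_lags)
    w = lags.astype(float) ** decay       # penalty weight grows with the lag
    fit = Lasso(alpha=alpha).fit(X / w, y)  # column rescaling = per-feature penalty
    return fit.coef_ / w                  # map back to the original scale

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 12))            # e.g. 2 predictors x 6 lags each
beta = np.r_[1.0, 0.5, np.zeros(4), -0.8, np.zeros(5)]
y = X @ beta + rng.normal(0, 0.5, 300)
print(np.round(recency_weighted_lasso(X, y, n_lags=6), 2))
```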

The symmetric Nonnegative Matrix Factorization (NMF), a special but important class of the general NMF, has found numerous applications in data analysis, such as various clustering tasks. Unfortunately, designing fast algorithms for symmetric NMF is not as easy as for its nonsymmetric counterpart, since the latter admits the splitting property that enables state-of-the-art alternating-type algorithms. To overcome this issue, we first split the decision variable and transform the symmetric NMF into a penalized nonsymmetric one, paving the way for designing efficient alternating-type algorithms. We then show that solving the penalized nonsymmetric reformulation returns a solution to the original symmetric NMF. Moreover, we design a family of alternating-type algorithms and show that they all admit a strong convergence guarantee: the generated sequence of iterates is convergent and converges at least sublinearly to a critical point of the original symmetric NMF. Finally, we conduct experiments on both synthetic data and real image clustering to support our theoretical results and demonstrate the performance of the alternating-type algorithms.
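A minimal sketch of the splitting idea, assuming the penalized objective $\min_{U,V \ge 0} \|A - UV^\top\|_F^2 + \lambda \|U - V\|_F^2$ with naive alternating projected-gradient updates and constant factors absorbed into the step size (the paper's algorithms and step-size rules are more sophisticated):

```python
import numpy as np

def symmetric_nmf(A, r, lam=1.0, eta=1e-3, iters=5000, seed=0):
    """Alternating projected gradient on the penalized nonsymmetric split
    of symmetric NMF; at a solution U ~ V, so A ~ U U^T."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    U = rng.uniform(0, 1, (n, r))
    V = U.copy()
    for _ in range(iters):
        R = U @ V.T - A
        U = np.maximum(U - eta * (R @ V + lam * (U - V)), 0.0)    # V fixed
        R = U @ V.T - A
        V = np.maximum(V - eta * (R.T @ U + lam * (V - U)), 0.0)  # U fixed
    return U, V

rng = np.random.default_rng(1)
W = rng.uniform(0, 1, (30, 4))
A = W @ W.T                               # a symmetric nonnegative matrix
U, V = symmetric_nmf(A, r=4)
print(np.linalg.norm(A - U @ U.T) / np.linalg.norm(A),  # relative fit error
      np.linalg.norm(U - V))                            # split gap, should be small
```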

In this paper, we apply the median-of-means principle to derive robust versions of local averaging rules in non-parametric regression. For various estimates, including nearest neighbors and kernel procedures, we obtain non-asymptotic exponential inequalities, with only a second moment assumption on the noise. We then show that these bounds cannot be significantly improved by establishing a corresponding lower bound on tail probabilities.
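The median-of-means construction is easy to state for, e.g., k-nearest-neighbor regression: split the sample into blocks, compute the local average within each block, and return the median of the block estimates. A minimal univariate sketch:

```python
import numpy as np

def mom_knn_predict(x_train, y_train, x0, k=5, n_blocks=10, seed=0):
    """Median-of-means k-NN regression at a point x0: k-NN within each
    random block, then the median across blocks (robust to heavy-tailed
    noise under only a second moment assumption)."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(x_train))
    block_means = []
    for block in np.array_split(idx, n_blocks):
        d = np.abs(x_train[block] - x0)
        nearest = block[np.argsort(d)[:k]]   # k nearest neighbors in the block
        block_means.append(y_train[nearest].mean())
    return np.median(block_means)

rng = np.random.default_rng(2)
x = rng.uniform(0, 1, 1000)
y = np.sin(2 * np.pi * x) + rng.standard_t(df=2.5, size=1000)  # heavy tails
print(mom_knn_predict(x, y, x0=0.25))   # close to sin(pi/2) = 1
```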

The estimation of the potential impact fraction (including the population attributable fraction) with continuous exposure data frequently relies on strong distributional assumptions. However, these assumptions are often violated if the underlying exposure distribution is unknown or if the same distribution is assumed across time or space. Nonparametric methods to estimate the potential impact fraction are available for cohort data, but no alternatives exist for cross-sectional data. In this article, we discuss the impact of distributional assumptions on the estimation of the potential impact fraction, showing that, across an infinite set of possible scenarios, distributional violations lead to biased estimates. We propose nonparametric methods to estimate the potential impact fraction from aggregated data (mean and standard deviation) or individual data (e.g. observations from a cross-sectional population survey), and develop simulation scenarios to compare their performance against standard parametric procedures. We illustrate our methodology with an application to the effect of sugar-sweetened beverage consumption on the incidence of type 2 diabetes. We also present an R package, pifpaf, implementing these methods.
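For individual-level data, the empirical potential impact fraction compares the mean relative risk under the observed and counterfactual exposures, $\mathrm{PIF} = 1 - \mathrm{E}[RR(g(X))]/\mathrm{E}[RR(X)]$, with the population attributable fraction as the special case where $g$ maps to the theoretical-minimum-risk exposure. A minimal numpy sketch with a hypothetical log-linear risk curve (see the authors' R package pifpaf for the real implementation):

```python
import numpy as np

def potential_impact_fraction(exposure, rr, counterfactual):
    """Empirical PIF = 1 - mean(RR(g(X))) / mean(RR(X)), where g is the
    counterfactual exposure transformation."""
    return 1.0 - rr(counterfactual(exposure)).mean() / rr(exposure).mean()

rng = np.random.default_rng(0)
x = rng.gamma(shape=2.0, scale=150.0, size=5000)   # e.g. daily SSB intake (mL)
rr = lambda e: np.exp(1.8e-3 * e)                   # hypothetical log-linear RR
halve = lambda e: 0.5 * e                           # counterfactual: halve intake
print(potential_impact_fraction(x, rr, halve))
```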
