
In this paper we study nonparametric estimators of copulas and copula densities. We first focus on a copula density estimator based on an orthogonal polynomial projection of the joint density. A new copula estimator is then deduced. Its asymptotic properties are studied: we provide a large functional class for which this construction is optimal in the minimax and maxiset sense, and we propose a selection method for the smoothing parameter. An intensive simulation study shows the very good performance of both the copula and copula density estimators, which we compare to a large panel of competitors. A real dataset in actuarial science illustrates this approach.
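The projection idea above can be sketched in a few lines. This is a minimal illustration, not the paper's estimator: it projects the copula density of rank-transformed data onto a low-degree tensor basis of orthonormal shifted Legendre polynomials on [0, 1] (the function names and the degree-2 truncation are illustrative choices, and ties in the data are assumed away).

```python
import math

# orthonormal shifted Legendre polynomials on [0, 1], degrees 0..2
def legendre(j, u):
    if j == 0:
        return 1.0
    if j == 1:
        return math.sqrt(3.0) * (2.0 * u - 1.0)
    return math.sqrt(5.0) * (6.0 * u * u - 6.0 * u + 1.0)

def pseudo_observations(xs):
    # rank-transform one margin to (0, 1); assumes no ties
    n = len(xs)
    ranks = {v: r + 1 for r, v in enumerate(sorted(xs))}
    return [ranks[v] / (n + 1.0) for v in xs]

def copula_density_estimator(x, y, degree=2):
    u, v = pseudo_observations(x), pseudo_observations(y)
    n = len(u)
    # projection coefficients c_jk = (1/n) sum_i p_j(U_i) p_k(V_i)
    c = [[sum(legendre(j, u[i]) * legendre(k, v[i]) for i in range(n)) / n
          for k in range(degree + 1)] for j in range(degree + 1)]
    def chat(a, b):
        return sum(c[j][k] * legendre(j, a) * legendre(k, b)
                   for j in range(degree + 1) for k in range(degree + 1))
    return chat
```

Because the degree-0 coefficient is always 1 and the basis is orthonormal, the fitted density integrates to 1 automatically; choosing the truncation degree plays the role of the smoothing parameter discussed above.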

Related Content

Real world data often exhibit low-dimensional geometric structures, and can be viewed as samples near a low-dimensional manifold. This paper studies nonparametric regression of H\"{o}lder functions on low-dimensional manifolds using deep ReLU networks. Suppose $n$ training data are sampled from a H\"{o}lder function in $\mathcal{H}^{s,\alpha}$ supported on a $d$-dimensional Riemannian manifold isometrically embedded in $\mathbb{R}^D$, with sub-gaussian noise. A deep ReLU network architecture is designed to estimate the underlying function from the training data. The mean squared error of the empirical estimator is proved to converge in the order of $n^{-\frac{2(s+\alpha)}{2(s+\alpha) + d}}\log^3 n$. This result shows that deep ReLU networks give rise to a fast convergence rate depending on the data intrinsic dimension $d$, which is usually much smaller than the ambient dimension $D$. It therefore demonstrates the adaptivity of deep ReLU networks to low-dimensional geometric structures of data, and partially explains the power of deep ReLU networks in tackling high-dimensional data with low-dimensional geometric structures.
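The practical import of the rate $n^{-\frac{2(s+\alpha)}{2(s+\alpha)+d}}$ is easy to see numerically: the exponent involves the intrinsic dimension $d$, not the ambient dimension $D$. A small sketch (ignoring the $\log^3 n$ factor):

```python
def mse_rate(n, smoothness, d):
    # convergence rate n^{-2(s+alpha)/(2(s+alpha)+d)}; log^3 n factor omitted
    return n ** (-2.0 * smoothness / (2.0 * smoothness + d))

# Lipschitz case (s + alpha = 1): intrinsic dimension d = 2
# versus a hypothetical ambient dimension D = 100
intrinsic = mse_rate(10**6, 1.0, 2)   # n^{-1/2}
ambient = mse_rate(10**6, 1.0, 100)   # n^{-1/51}, far slower
```

With a million samples the intrinsic-dimension rate is orders of magnitude smaller than what a rate in the ambient dimension would give, which is exactly the adaptivity claim above.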

In this work we propose a semiparametric bivariate copula whose density is defined by a piecewise constant function on disjoint squares. We obtain the maximum likelihood estimators, which reduce to the sample copula under specific conditions. We carry out a full Bayesian analysis of the model and propose a spatially dependent prior distribution for the model parameters. This prior allows the parameters to borrow strength across neighbouring regions to produce smooth posterior estimates. We implement a simulation study and illustrate the performance of our model with a real dataset.
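A minimal sketch of the piecewise-constant construction, assuming pseudo-observations already in (0, 1): the unit square is cut into an m-by-m grid and each square gets the constant density implied by its empirical count (this is the unsmoothed sample-copula case the abstract mentions, not the Bayesian model).

```python
def sample_copula_density(u, v, m):
    # count pseudo-observations falling in each of the m x m squares
    n = len(u)
    counts = [[0] * m for _ in range(m)]
    for a, b in zip(u, v):
        i = min(int(a * m), m - 1)
        j = min(int(b * m), m - 1)
        counts[i][j] += 1
    # each square has area 1/m^2, so the density on it is (count/n) * m^2
    return [[counts[i][j] * m * m / n for j in range(m)] for i in range(m)]
```

The spatially dependent prior described above can then be read as shrinking each square's height toward those of its grid neighbours.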

The subset sum problem is known to be an NP-hard problem in the field of computer science with the fastest known approach having a run-time complexity of $O(2^{0.3113n})$. A modified version of this problem is known as the perfect sum problem and extends the subset sum idea further. This extension results in additional complexity, making it difficult to compute for a large input. In this paper, I propose a probabilistic approach which approximates the solution to the perfect sum problem by approximating the distribution of potential sums. Since this problem is an extension of the subset sum problem, our approximation also grants some probabilistic insight into the solution for the subset sum problem. We harness distributional approximations to model the number of subsets which sum to a certain size. These distributional approximations are formulated in two ways: using bounds to justify normal approximation, and approximating the empirical distribution via density estimation. These approximations can be computed in $O(n)$ complexity, and can increase in accuracy with the size of the input data, making it useful for large-scale combinatorial problems. Code is available at //github.com/KristofPusztai/PerfectSum.
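The normal-approximation idea can be sketched as follows (a simplified illustration, not the paper's method): if each element is included independently with probability 1/2, the subset sum has mean $\sum a_i/2$ and variance $\sum a_i^2/4$, so the number of subsets summing to $t$ is roughly $2^n$ times the normal density at $t$. An exact dynamic-programming count is included for comparison on small inputs.

```python
import math

def exact_counts(nums):
    # exponential-space DP: counts[s] = number of subsets summing to s
    counts = {0: 1}
    for x in nums:
        new = dict(counts)
        for s, c in counts.items():
            new[s + x] = new.get(s + x, 0) + c
        counts = new
    return counts

def normal_approx_count(nums, t):
    # include each element independently with probability 1/2
    mu = sum(nums) / 2.0
    var = sum(x * x for x in nums) / 4.0
    sigma = math.sqrt(var)
    # approximate count: 2^n * N(mu, sigma^2) density at t
    density = math.exp(-(t - mu) ** 2 / (2.0 * var)) / (sigma * math.sqrt(2.0 * math.pi))
    return 2 ** len(nums) * density
```

The approximation costs only the two moment sums, i.e. $O(n)$, whereas the exact DP blows up with the range of attainable sums.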

Bayesian nonparametric methods are a popular choice for analysing survival data due to their ability to flexibly model the distribution of survival times. These methods typically employ a nonparametric prior on the survival function that is conjugate with respect to right-censored data. Eliciting these priors, particularly in the presence of covariates, can be challenging and inference typically relies on computationally intensive Markov chain Monte Carlo schemes. In this paper, we build on recent work that recasts Bayesian inference as assigning a predictive distribution on the unseen values of a population conditional on the observed samples, thus avoiding the need to specify a complex prior. We describe a copula-based predictive update which admits a scalable sequential importance sampling algorithm to perform inference that properly accounts for right-censoring. We provide theoretical justification through an extension of Doob's consistency theorem and illustrate the method on a number of simulated and real data sets, including an example with covariates. Our approach enables analysts to perform Bayesian nonparametric inference through only the specification of a predictive distribution.

Discrete kernel smoothing is now gaining importance in nonparametric statistics. In this paper, we investigate some asymptotic properties of the normalized discrete associated-kernel estimator of a probability mass function. We show, under some regularity and non-restrictive assumptions on the associated kernel, that the normalizing random variable converges in mean square to 1. We then derive the consistency and the asymptotic normality of the proposed estimator. Various families of discrete kernels already exhibited in the literature satisfy these conditions, including the refined CoM-Poisson kernel, which is underdispersed and of second order. Finally, the first-order binomial kernel is discussed and, surprisingly, simulations show that its normalized estimator has suitable asymptotic behaviour.
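A minimal sketch of a normalized discrete associated-kernel estimator, using one common form of the binomial kernel (for target point x and bandwidth h in (0, 1], a Binomial(x + 1, (x + h)/(x + 1)) pmf evaluated at each observation); the function names and the finite support passed in are illustrative assumptions.

```python
import math

def binom_pmf(k, n, p):
    if k < 0 or k > n:
        return 0.0
    return math.comb(n, k) * p ** k * (1 - p) ** (n - k)

def binomial_kernel_estimator(data, h, support):
    # raw estimate: f_hat(x) = (1/n) sum_i Bin(x+1, (x+h)/(x+1)) pmf at X_i
    n = len(data)
    f = {x: sum(binom_pmf(y, x + 1, (x + h) / (x + 1)) for y in data) / n
         for x in support}
    total = sum(f.values())               # generally != 1 for a finite sample
    return {x: f[x] / total for x in f}   # the normalized estimator
```

The division by `total` is exactly the normalizing random variable whose mean-square convergence to 1 is studied above.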

This paper proposes a simple unified approach to testing transformations on cumulative distribution functions (CDFs) with nuisance parameters. We consider testing general parametric transformations on two CDFs, and then generalize the test for multiple CDFs. We construct the test using a numerical bootstrap method which can easily be implemented. The proposed test is shown to be asymptotically size controlled and consistent. Monte Carlo simulations and an empirical application show that the test performs well on finite samples.
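For orientation, the simplest special case (identity transformation, i.e. testing equality of two CDFs) can be bootstrapped as below. This is a generic pooled-resampling Kolmogorov-Smirnov-type test, not the paper's numerical bootstrap; names and the choice of statistic are illustrative.

```python
import random

def ecdf(sample, t):
    return sum(1 for s in sample if s <= t) / len(sample)

def ks_stat(x, y):
    # sup-norm distance between the two empirical CDFs
    pts = sorted(x + y)
    return max(abs(ecdf(x, t) - ecdf(y, t)) for t in pts)

def bootstrap_pvalue(x, y, B=500, seed=0):
    # resample both groups from the pooled sample under H0: F1 = F2
    rng = random.Random(seed)
    obs = ks_stat(x, y)
    pooled = x + y
    count = 0
    for _ in range(B):
        bx = [rng.choice(pooled) for _ in x]
        by = [rng.choice(pooled) for _ in y]
        if ks_stat(bx, by) >= obs:
            count += 1
    return count / B
```

General parametric transformations with nuisance parameters would replace the identity map here with the estimated transformation before comparing the CDFs.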

This paper is devoted to the numerical analysis of a piecewise constant discontinuous Galerkin method for time-fractional subdiffusion problems. The regularity of the weak solution is first established using a variational approach and the Mittag-Leffler function. Several optimal error estimates are then derived under low-regularity data. Finally, numerical experiments are conducted to verify the theoretical results.

Ensemble methods based on subsampling, such as random forests, are popular in applications due to their high predictive accuracy. Existing literature views a random forest prediction as an infinite-order incomplete U-statistic to quantify its uncertainty. However, these methods focus on a small subsampling size of each tree, which is theoretically valid but practically limited. This paper develops an unbiased variance estimator based on incomplete U-statistics, which allows the tree size to be comparable with the overall sample size, making statistical inference possible in a broader range of real applications. Simulation results demonstrate that our estimators enjoy lower bias and more accurate confidence interval coverage without additional computational costs. We also propose a local smoothing procedure to reduce the variation of our estimator, which shows improved numerical performance when the number of trees is relatively small. Further, we investigate the ratio consistency of our proposed variance estimator under specific scenarios. In particular, we develop a new "double U-statistic" formulation to analyze the Hoeffding decomposition of the estimator's variance.

We consider Ising models on the hypercube with a general interaction matrix $J$, and give a polynomial time sampling algorithm when all but $O(1)$ eigenvalues of $J$ lie in an interval of length one, a situation which occurs in many models of interest. This was previously known for the Glauber dynamics when *all* eigenvalues fit in an interval of length one; however, a single outlier can force the Glauber dynamics to mix torpidly. Our general result implies the first polynomial time sampling algorithms for low-rank Ising models such as Hopfield networks with a fixed number of patterns and Bayesian clustering models with low-dimensional contexts, and greatly improves the polynomial time sampling regime for the antiferromagnetic/ferromagnetic Ising model with inconsistent field on expander graphs. It also improves on previous approximation algorithm results based on the naive mean-field approximation in variational methods and statistical physics. Our approach is based on a new fusion of ideas from the MCMC and variational inference worlds. As part of our algorithm, we define a new nonconvex variational problem which allows us to sample from an exponential reweighting of a distribution by a negative definite quadratic form, and show how to make this procedure provably efficient using stochastic gradient descent. On top of this, we construct a new simulated tempering chain (on an extended state space arising from the Hubbard-Stratonovich transform) which overcomes the obstacle posed by large positive eigenvalues, and combine it with the SGD-based sampler to solve the full problem.
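For context, the Glauber dynamics referenced above is just sitewise resampling from the conditional distribution; a minimal pure-Python sketch (assuming a symmetric interaction matrix with zero diagonal, inverse temperature 1, and illustrative function names) is:

```python
import math
import random

def glauber_step(s, J, h, rng):
    # resample one uniformly chosen spin from its conditional distribution
    n = len(s)
    i = rng.randrange(n)
    field = h[i] + sum(J[i][j] * s[j] for j in range(n) if j != i)
    p_plus = 1.0 / (1.0 + math.exp(-2.0 * field))
    s[i] = 1 if rng.random() < p_plus else -1

def glauber_sample(J, h, steps=1000, seed=0):
    rng = random.Random(seed)
    s = [rng.choice([-1, 1]) for _ in h]
    for _ in range(steps):
        glauber_step(s, J, h, rng)
    return s
```

The abstract's point is precisely that this chain can mix torpidly once even one eigenvalue of $J$ falls outside an interval of length one, which is what the variational and tempering machinery above is designed to overcome.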

Discrete random structures are important tools in Bayesian nonparametrics, and the resulting models have proven effective in density estimation, clustering, topic modeling and prediction, among others. In this paper, we consider nested processes and study the dependence structures they induce. Dependence ranges between homogeneity, corresponding to full exchangeability, and maximum heterogeneity, corresponding to (unconditional) independence across samples. The popular nested Dirichlet process is shown to degenerate to the fully exchangeable case when there are ties across samples at the observed or latent level. To overcome this drawback, inherent to nesting general discrete random measures, we introduce a novel class of latent nested processes. These are obtained by adding common and group-specific completely random measures and then normalising to yield dependent random probability measures. We provide results on the partition distributions induced by latent nested processes, and develop a Markov chain Monte Carlo sampler for Bayesian inference. A test for distributional homogeneity across groups is obtained as a by-product. The results and their inferential implications are showcased on synthetic and real data.
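The nesting mechanism can be sketched via truncated stick-breaking (an illustrative simulation, not the paper's sampler): the top-level process places weights on atoms that are themselves random distributions, and two groups that select the same atom receive literally identical distributions, which is the degeneracy to full exchangeability noted above.

```python
import random

def stick_breaking(alpha, k, rng):
    # truncated stick-breaking weights of a DP(alpha, .) draw
    weights, rest = [], 1.0
    for _ in range(k - 1):
        b = rng.betavariate(1.0, alpha)
        weights.append(rest * b)
        rest *= (1.0 - b)
    weights.append(rest)  # remaining mass on the last atom
    return weights

def nested_dp_groups(alpha, beta, n_groups, k=20, seed=0):
    rng = random.Random(seed)
    # top-level atoms are themselves random distributions (inner DP draws)
    top_w = stick_breaking(alpha, k, rng)
    atoms = [stick_breaking(beta, k, rng) for _ in range(k)]
    # each group picks one atom; ties across groups => identical distributions
    choices = rng.choices(range(k), weights=top_w, k=n_groups)
    return [atoms[c] for c in choices]
```

Latent nested processes avoid this all-or-nothing behaviour by mixing a shared component into each group-specific measure before normalising.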
