
In various scientific fields, researchers are interested in exploring the relationship between some response variable Y and a vector of covariates X. In order to make use of a specific model for the dependence structure, it first has to be checked whether the conditional density function of Y given X fits into a given parametric family. We propose a consistent bootstrap-based goodness-of-fit test for this purpose. The test statistic traces the difference between a nonparametric and a semi-parametric estimate of the marginal distribution function of Y. As its asymptotic null distribution is not distribution-free, a parametric bootstrap method is used to determine the critical value. A simulation study shows that, in some cases, the new method is more sensitive to deviations from the parametric model than other tests found in the literature. We also apply our method to real-world datasets.
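To make the construction concrete, the following is a minimal Python sketch of a test of this type, assuming a Gaussian linear working family for the conditional density; the fitting routine `ols_gaussian_fit`, the Kolmogorov-Smirnov-type distance, and all names are illustrative choices for this sketch, not the authors' exact procedure.

```python
import numpy as np
from scipy.stats import norm

def ols_gaussian_fit(Y, X):
    """Fit the hypothesized family; here a Gaussian linear model Y | X ~ N(X beta, sigma^2)."""
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    resid = Y - X @ beta
    return np.append(beta, resid.std(ddof=X.shape[1]))

def semiparam_cdf(y_grid, X, theta):
    """Semi-parametric estimate of the marginal cdf of Y: average the fitted
    parametric conditional cdf F_theta(y | X_i) over the observed covariates."""
    beta, sigma = theta[:-1], theta[-1]
    mu = X @ beta
    return norm.cdf((y_grid[:, None] - mu[None, :]) / sigma).mean(axis=1)

def empirical_cdf(y_grid, Y):
    """Nonparametric (empirical) estimate of the marginal cdf of Y."""
    return (Y[None, :] <= y_grid[:, None]).mean(axis=1)

def test_statistic(Y, X, theta, grid_size=200):
    """Kolmogorov-Smirnov-type distance between the two marginal cdf estimates."""
    y_grid = np.linspace(Y.min(), Y.max(), grid_size)
    return np.sqrt(len(Y)) * np.abs(empirical_cdf(y_grid, Y) - semiparam_cdf(y_grid, X, theta)).max()

def parametric_bootstrap_pvalue(Y, X, n_boot=500, seed=0):
    """Parametric bootstrap: resample Y from the fitted conditional model (covariates fixed),
    refit on each bootstrap sample, and compare the recomputed statistics with the observed one."""
    rng = np.random.default_rng(seed)
    theta_hat = ols_gaussian_fit(Y, X)
    t_obs = test_statistic(Y, X, theta_hat)
    t_boot = np.empty(n_boot)
    for b in range(n_boot):
        Y_star = X @ theta_hat[:-1] + theta_hat[-1] * rng.standard_normal(len(Y))
        t_boot[b] = test_statistic(Y_star, X, ols_gaussian_fit(Y_star, X))
    return t_obs, (t_boot >= t_obs).mean()   # statistic and bootstrap p-value
```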

Related content

Sequential neural posterior estimation (SNPE) techniques have recently been proposed for dealing with simulation-based models with intractable likelihoods. Unlike approximate Bayesian computation, SNPE techniques learn the posterior from sequential simulation using neural network-based conditional density estimators by minimizing a specific loss function. The SNPE method proposed by Lueckmann et al. (2017) used a calibration kernel to boost the sample weights around the observed data, resulting in a concentrated loss function. However, the use of calibration kernels may increase the variances of both the empirical loss and its gradient, making the training inefficient. To improve the stability of SNPE, this paper proposes to use an adaptive calibration kernel together with several variance reduction techniques. The proposed method greatly speeds up training and provides a better approximation of the posterior than the original SNPE method and some existing competitors, as confirmed by numerical experiments. We also demonstrate the superiority of the proposed method on a high-dimensional model with a real-world dataset.
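As a rough illustration of the idea (not the authors' exact scheme), the sketch below forms Gaussian calibration weights around the observation, picks the bandwidth adaptively by targeting a fixed effective sample size, and self-normalizes the weights as one simple variance-reduction device; `x_sim`, `x_obs`, `ess_frac` and the bisection rule are all assumptions of this sketch.

```python
import numpy as np

def calibration_weights(x_sim, x_obs, ess_frac=0.5, tol=1e-3, max_iter=100):
    """Gaussian calibration kernel centred at the observation x_obs.
    The bandwidth is chosen adaptively (by bisection on its log) so that the
    effective sample size of the self-normalized weights is about ess_frac * n."""
    d2 = np.sum((x_sim - x_obs) ** 2, axis=1)          # squared distances to x_obs
    n = len(d2)
    scale = np.sqrt(d2.mean())
    lo, hi = np.log(scale * 1e-3), np.log(scale * 1e3)
    w = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        tau = np.exp(0.5 * (lo + hi))
        w = np.exp(-0.5 * d2 / tau ** 2)
        w = w / w.sum()                                 # self-normalization
        ess = 1.0 / np.sum(w ** 2)
        if abs(ess - ess_frac * n) < tol * n:
            break
        if ess < ess_frac * n:
            lo = 0.5 * (lo + hi)                        # weights too concentrated: widen kernel
        else:
            hi = 0.5 * (lo + hi)                        # weights too flat: narrow kernel
    return w

def calibrated_loss(log_q_theta_given_x, w):
    """Calibrated SNPE-style loss: weighted negative log density of the
    simulated parameters under the conditional density estimator q(theta | x)."""
    return -np.sum(w * log_q_theta_given_x)
```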

We consider the problem of sampling a high-dimensional multimodal target probability measure. We assume that a good proposal kernel to move only a subset of the degrees of freedom (also known as collective variables) is known a priori. This proposal kernel can, for example, be built using normalizing flows. We show how to extend the move from the collective variable space to the full space and how to implement an accept-reject step in order to get a reversible chain with respect to a target probability measure. The accept-reject step does not require knowledge of the marginal of the original measure in the collective variables (namely, of the free energy). The obtained algorithm admits several variants, some of which are very close to methods previously proposed in the literature. We show how the obtained acceptance ratio can be expressed in terms of the work which appears in the Jarzynski-Crooks equality, at least for some variants. Numerical illustrations demonstrate the efficiency of the approach on various simple test cases, and allow us to compare the variants of the algorithm.
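A simplified sketch of such an accept-reject step is given below, assuming the collective variables are simply a subset of the coordinates and that the remaining degrees of freedom are held fixed (the paper extends the move to the full space); `propose_cv`, `cv_idx` and the interface are illustrative assumptions of this sketch.

```python
import numpy as np

def mh_collective_variable_step(x, log_target, propose_cv, cv_idx, rng):
    """One Metropolis-Hastings step that moves only the collective-variable
    coordinates x[cv_idx]; the other degrees of freedom are kept fixed.
    propose_cv(z, rng) returns (z_new, log_q_forward, log_q_reverse).
    Only the (unnormalized) full-space target is needed, not the marginal of
    the target in the collective variables (i.e. the free energy)."""
    z = x[cv_idx]
    z_new, log_q_fwd, log_q_rev = propose_cv(z, rng)
    x_new = x.copy()
    x_new[cv_idx] = z_new
    log_alpha = log_target(x_new) - log_target(x) + log_q_rev - log_q_fwd
    accept = np.log(rng.uniform()) < log_alpha
    return (x_new, True) if accept else (x, False)
```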

Many natural computational problems in computer science, mathematics, physics, and other sciences amount to deciding if two objects are equivalent. Often this equivalence is defined in terms of group actions. A natural question is to ask when two objects can be distinguished by polynomial functions that are invariant under the group action. For finite groups, this is the usual notion of equivalence, but for continuous groups like the general linear groups it gives rise to a new notion, called orbit closure intersection. It captures, among others, the graph isomorphism problem, noncommutative PIT, null cone problems in invariant theory, equivalence problems for tensor networks, and the classification of multiparty quantum states. Despite recent algorithmic progress in celebrated special cases, the computational complexity of general orbit closure intersection problems is currently quite unclear. In particular, tensors seem to give rise to the most difficult problems. In this work we start a systematic study of orbit closure intersection from the complexity-theoretic viewpoint. To this end, we (i) define a complexity class TOCI that captures the power of orbit closure intersection problems for general tensor actions, (ii) give an appropriate notion of algebraic reductions that imply polynomial-time reductions in the usual sense but are amenable to invariant-theoretic techniques, (iii) identify natural tensor problems that are complete for TOCI, including the equivalence of 2D tensor networks with constant physical dimension, and (iv) show that the graph isomorphism problem can be reduced to these complete problems, hence GI$\subseteq$TOCI. As such, our work establishes the first lower bound on the computational complexity of orbit closure intersection problems, and it explains the difficulty of finding unconditional polynomial-time algorithms beyond special cases, as has been observed in the literature.
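For reference, orbit closure intersection can be phrased as follows for a reductive group $G$ (such as a product of general linear groups) acting algebraically on a complex vector space $V$: two points are identified when the Zariski closures of their orbits meet, which by geometric invariant theory is the same as all invariant polynomials agreeing on them,
\[
  v \sim w
  \;\Longleftrightarrow\;
  \overline{G \cdot v} \,\cap\, \overline{G \cdot w} \neq \emptyset
  \;\Longleftrightarrow\;
  p(v) = p(w) \ \text{ for all } p \in \mathbb{C}[V]^{G}.
\]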

In decision-making, maxitive functions are used for worst-case and best-case evaluations. Maxitivity gives rise to a rich structure that is well-studied in the context of the pointwise order. In this article, we investigate maxitivity with respect to general preorders and provide a representation theorem for such functionals. The results are illustrated for different stochastic orders in the literature, including the usual stochastic order, the increasing convex/concave order, and the dispersive order.
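In the classical pointwise-order setting, maxitivity of a functional $\Lambda$ means
\[
  \Lambda(f \vee g) \;=\; \max\bigl\{\Lambda(f),\, \Lambda(g)\bigr\}
  \qquad \text{for all } f, g,
\]
where $f \vee g$ denotes the pointwise maximum; the article replaces the pointwise order in this definition by a general preorder, such as the stochastic orders mentioned above.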

In this work we consider an elliptic problem, referred to as the Ventcel problem, involving a second order term on the domain boundary (the Laplace-Beltrami operator). A variational formulation of the Ventcel problem is studied, leading to a finite element discretization. The focus is on the construction of high order curved meshes for the discretization of the physical domain and on the definition of the lift operator, which is aimed to transform a function defined on the mesh domain into a function defined on the physical one. This lift is defined in such a way as to satisfy adapted properties on the boundary, relative to the trace operator. The Ventcel problem approximation is investigated both in terms of geometrical error and of finite element approximation error. Error estimates are obtained in terms of both the mesh order r $\ge$ 1 and the finite element degree k $\ge$ 1, whereas such estimates have usually been considered in the isoparametric case so far, involving a single parameter k = r. The numerical experiments we carried out, in dimensions 2 and 3, validate the a priori error estimates obtained and proved in terms of the two parameters k and r. A numerical comparison is made between the errors using the former lift definition and the lift defined in this work, establishing an improvement in the convergence rate of the error in the latter case.
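For orientation, a representative Ventcel-type model problem and its natural variational formulation read as follows (the precise coefficients and lower-order terms may differ from those treated in the paper):
\[
  -\Delta u + u = f \ \text{ in } \Omega,
  \qquad
  -\Delta_\Gamma u + \partial_n u + u = g \ \text{ on } \Gamma = \partial\Omega,
\]
with $\Delta_\Gamma$ the Laplace-Beltrami operator on the boundary, leading to: find $u$ with $u \in H^1(\Omega)$ and $u|_\Gamma \in H^1(\Gamma)$ such that
\[
  \int_\Omega \nabla u \cdot \nabla v + u\,v \,\mathrm{d}x
  + \int_\Gamma \nabla_\Gamma u \cdot \nabla_\Gamma v + u\,v \,\mathrm{d}\sigma
  = \int_\Omega f\,v \,\mathrm{d}x + \int_\Gamma g\,v \,\mathrm{d}\sigma
\]
for all admissible test functions $v$.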

We consider linear models with scalar responses and covariates from a separable Hilbert space. The aim is to detect change points in the error distribution, based on sequential residual empirical distribution functions. Expansions for these estimated functions are more challenging in models with infinite-dimensional covariates than in regression models with scalar or vector-valued covariates, due to a slower rate of convergence of the parameter estimators. Yet the suggested change point test is asymptotically distribution-free and consistent against one-change-point alternatives. In the latter case we also show consistency of a change point estimator.
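A typical statistic of this kind (the paper's exact statistic may differ) is the Kolmogorov-Smirnov-type functional of the sequential residual empirical process,
\[
  \hat T_n
  = \sup_{s \in [0,1]} \ \sup_{t \in \mathbb{R}}
    \frac{1}{\sqrt{n}}
    \Bigl| \sum_{i=1}^{\lfloor n s \rfloor}
      \bigl( \mathbf{1}\{\hat\varepsilon_i \le t\} - \hat F_n(t) \bigr) \Bigr|,
\]
where $\hat\varepsilon_i$ are the estimated residuals and $\hat F_n$ their pooled empirical distribution function; the statistic is large when the residual distributions before and after some fraction $s$ of the sample disagree.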

We propose a novel, highly efficient, second-order accurate, long-time unconditionally stable numerical scheme for a class of finite-dimensional nonlinear models that are of importance in geophysical fluid dynamics. The scheme is highly efficient in the sense that only a (fixed) symmetric positive definite linear problem (with varying right-hand sides) is involved at each time-step. The solutions to the scheme are uniformly bounded for all time. We show that the scheme is able to capture the long-time dynamics of the underlying geophysical model, with the global attractors as well as the invariant measures of the scheme converging to those of the original model as the step size approaches zero. In our numerical experiments, we take an indirect approach, using long-term statistics to approximate the invariant measures. Our results suggest that the convergence rate of the long-term statistics, as a function of terminal time, is approximately first order using the Jensen-Shannon metric and half-order using the L1 metric. This implies that very long simulations are needed in order to capture a few significant digits of the long-time statistics (climate) correctly. Nevertheless, the second-order scheme's performance remains superior to that of the first-order one, requiring significantly less time to reach a small neighborhood of statistical equilibrium for a given step size.
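The comparison of long-term statistics can be sketched in a few lines of Python: approximate each invariant measure by a histogram of a long trajectory and compare the histograms in the Jensen-Shannon and L1 senses. The trajectory arrays in the commented usage example are hypothetical placeholders.

```python
import numpy as np

def empirical_density(samples, bins):
    """Histogram approximation of an invariant measure from a long trajectory."""
    hist, _ = np.histogram(samples, bins=bins, density=False)
    return hist / hist.sum()

def jensen_shannon(p, q, eps=1e-12):
    """Jensen-Shannon divergence between two discrete densities; its square
    root is the Jensen-Shannon metric."""
    p, q = p + eps, q + eps
    p, q = p / p.sum(), q / q.sum()
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log(a / b))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def l1_distance(p, q):
    return np.abs(p - q).sum()

# Example usage (trajectories are hypothetical placeholders):
# bins = np.linspace(-3, 3, 61)
# p = empirical_density(reference_trajectory, bins)
# q = empirical_density(scheme_trajectory, bins)
# print(np.sqrt(jensen_shannon(p, q)), l1_distance(p, q))
```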

For the distributions of finitely many binary random variables, we study the interaction of restrictions of the supports with conditional independence constraints. We prove a generalization of the Hammersley-Clifford theorem for distributions whose support is a natural distributive lattice: that is, any distribution which has natural lattice support and satisfies the pairwise Markov statements of a graph must factor according to the graph. We also show a connection to the Hibi ideals of lattices.
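For comparison, the classical Hammersley-Clifford theorem assumes strict positivity: a strictly positive distribution that satisfies the pairwise Markov statements of a graph $G$, i.e. $X_i \perp\!\!\!\perp X_j \mid X_{V \setminus \{i,j\}}$ for every non-edge $\{i,j\}$, factors over the cliques of $G$,
\[
  p(x) \;=\; \prod_{C \in \mathcal{C}(G)} \psi_C(x_C);
\]
the result above replaces strict positivity by the weaker assumption that the support is a natural distributive lattice.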

We consider random matrix ensembles on the set of Hermitian matrices that are heavy-tailed, meaning in particular that not all moments exist, and that are invariant under the conjugate action of the unitary group. The latter property entails that the eigenvectors are Haar distributed and, therefore, factorise from the eigenvalue statistics. We prove a classification of stable matrix ensembles of this kind, represented in terms of the matrices themselves, their eigenvalues, and their diagonal entries, with the help of the classification of multivariate stable distributions and of harmonic analysis on symmetric matrix spaces. Moreover, we identify necessary and sufficient conditions for their domains of attraction. To illustrate our findings we discuss, for instance, elliptical invariant random matrix ensembles and P\'olya ensembles, the latter playing a particular role in matrix convolutions. As a byproduct we generalise the derivative principle on the Hermitian matrices to general tempered distributions. This principle relates the joint probability density of the eigenvalues and the diagonal entries of the random matrix.
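The factorisation of Haar eigenvectors and heavy-tailed eigenvalues can be illustrated with a short Python sketch that draws one sample from a unitarily invariant Hermitian ensemble with Cauchy (1-stable) eigenvalues; this only illustrates the invariance structure and is not one of the stable ensembles classified in the paper.

```python
import numpy as np
from scipy.stats import unitary_group, cauchy

def heavy_tailed_invariant_matrix(n, rng=None):
    """Sample from a unitarily invariant Hermitian ensemble by combining
    Haar-distributed eigenvectors with independent heavy-tailed (Cauchy,
    i.e. 1-stable) eigenvalues; an illustration only, not one of the stable
    ensembles classified in the paper."""
    rng = np.random.default_rng(rng)
    eigvals = cauchy.rvs(size=n, random_state=rng)    # heavy-tailed spectrum
    u = unitary_group.rvs(n, random_state=rng)        # Haar eigenvectors
    return u @ np.diag(eigvals) @ u.conj().T
```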

This paper deals with Elliptical Wishart distributions, which generalize the Wishart distribution, in the context of signal processing and machine learning. Two algorithms to compute the maximum likelihood estimator (MLE) are proposed: a fixed point algorithm and a Riemannian optimization method based on the derived information geometry of Elliptical Wishart distributions. The existence and uniqueness of the MLE are characterized, as well as the convergence of both estimation algorithms. Statistical properties of the MLE, such as consistency, asymptotic normality, and an intrinsic version of Fisher efficiency, are also investigated. On the statistical learning side, novel classification and clustering methods are designed. For the $t$-Wishart distribution, the performance of the MLE and of the statistical learning algorithms is evaluated on both simulated and real EEG and hyperspectral data, showcasing the merits of our proposed methods.
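As an illustration of the fixed-point idea in a simpler, vector-valued setting (not the authors' Elliptical Wishart algorithm), the sketch below iterates the classical scatter-matrix MLE equations of a multivariate $t$ distribution with known degrees of freedom; the function name and interface are assumptions of this sketch.

```python
import numpy as np

def t_scatter_fixed_point(X, dof, n_iter=100, tol=1e-8):
    """Fixed-point iteration for the scatter-matrix MLE of a multivariate t
    distribution with known degrees of freedom `dof`.  This is the classical
    vector-valued analogue, shown only to illustrate the fixed-point idea;
    the paper's algorithm operates on Elliptical (e.g. t-) Wishart samples.
    X: (n, p) array of centred observations."""
    n, p = X.shape
    sigma = np.cov(X, rowvar=False)                   # initial guess
    for _ in range(n_iter):
        inv = np.linalg.inv(sigma)
        d = np.einsum('ij,jk,ik->i', X, inv, X)       # squared Mahalanobis distances
        w = (dof + p) / (dof + d)                     # t-model weights
        sigma_new = (w[:, None, None] * np.einsum('ij,ik->ijk', X, X)).mean(axis=0)
        if np.linalg.norm(sigma_new - sigma) < tol * np.linalg.norm(sigma):
            return sigma_new
        sigma = sigma_new
    return sigma
```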
