
In Euclidean spaces, the empirical mean vector, as an estimator of the population mean, is known to have only polynomial concentration unless a strong tail assumption is imposed on the underlying probability measure. The median-of-means tournament has been considered as a way of overcoming this sub-optimality of the empirical mean vector. In this paper, to address the sub-optimal performance of the empirical mean in a more general setting, we consider general Polish spaces with a general metric, which are allowed to be non-compact and infinite-dimensional. We discuss the estimation of the associated population Fr\'echet mean, and to this end we extend the existing notion of median-of-means to this general setting. We devise several new notions and inequalities associated with the geometry of the underlying metric, and using them we study the concentration properties of the extended notions of median-of-means as estimators of the population Fr\'echet mean. We show that the new estimators achieve exponential concentration under only a second moment condition on the underlying distribution, whereas the empirical Fr\'echet mean has only polynomial concentration. We focus our study on spaces of non-positive Alexandrov curvature, since they afford slower rates of convergence than spaces of positive curvature. We note that this is the first work that derives non-asymptotic concentration inequalities for extended notions of the median-of-means in non-vector spaces with a general metric.
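
The paper's metric-space construction is not spelled out in the abstract, but the classical Euclidean median-of-means it generalizes is easy to sketch: split the sample into blocks, average within blocks, and take a median of the block means. In the sketch below the block count and the use of the geometric median (computed by Weiszfeld iteration) are illustrative choices, not the paper's estimator.

```python
# A minimal sketch of the classical (Euclidean) median-of-means idea that the
# paper generalizes to metric spaces; the block count and the geometric median
# of block means are illustrative choices, not the paper's construction.
import numpy as np

def median_of_means(X, n_blocks=8, n_iter=100, tol=1e-8):
    """Median-of-means estimate of the mean of the rows of X."""
    rng = np.random.default_rng(0)
    idx = rng.permutation(len(X))
    blocks = np.array_split(idx, n_blocks)
    means = np.stack([X[b].mean(axis=0) for b in blocks])  # one mean per block
    # Geometric median of the block means via Weiszfeld iteration.
    z = means.mean(axis=0)
    for _ in range(n_iter):
        d = np.maximum(np.linalg.norm(means - z, axis=1), tol)  # avoid /0
        w = 1.0 / d
        z_new = (w[:, None] * means).sum(axis=0) / w.sum()
        if np.linalg.norm(z_new - z) < tol:
            break
        z = z_new
    return z

# Heavy-tailed sample with a finite second moment (true mean is zero): the
# empirical mean suffers from occasional huge draws, median-of-means less so.
rng = np.random.default_rng(1)
X = rng.standard_t(df=2.5, size=(2000, 3))
print("empirical mean :", X.mean(axis=0))
print("median-of-means:", median_of_means(X))
```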

Related content

In this paper, we study the identifiability and the estimation of the parameters of a copula-based multivariate model when the margins are unknown and arbitrary, meaning that they can be continuous, discrete, or mixtures of continuous and discrete. When at least one margin is not continuous, the range of values determining the copula is not the entire unit square, and this situation can lead to identifiability issues that are discussed here. Next, we propose estimation methods for unknown and arbitrary margins, using a pseudo log-likelihood adapted to the case of discontinuities. In view of applications to large data sets, we also propose a pairwise composite pseudo log-likelihood. These methodologies can easily be modified to cover the case of parametric margins. One of the main theoretical results is an extension to arbitrary distributions of known convergence results for rank-based statistics when the margins are continuous. As a by-product, under smoothness assumptions, we obtain that the asymptotic distributions of the estimation errors of our estimators are Gaussian. Finally, numerical experiments are presented to assess the finite sample performance of the estimators, and the usefulness of the proposed methodologies is illustrated with a copula-based regression model for hydrological data.
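
To make the rank-based pseudo log-likelihood concrete in the classical continuous-margin case (the setting the paper extends to arbitrary margins), here is a minimal sketch for a bivariate Gaussian copula; the copula family and the data-generating margins are illustrative.

```python
# A minimal sketch of rank-based pseudo log-likelihood estimation for a
# bivariate Gaussian copula with continuous margins. The paper's methods
# handle arbitrary (possibly discrete) margins, which this sketch does not.
import numpy as np
from scipy import stats, optimize

def pseudo_obs(x):
    """Rank-based pseudo-observations in (0, 1), valid for continuous margins."""
    return stats.rankdata(x) / (len(x) + 1)

def neg_pseudo_loglik(rho, u, v):
    """Negative Gaussian-copula log-density summed over pseudo-observations."""
    x, y = stats.norm.ppf(u), stats.norm.ppf(v)
    r2 = rho * rho
    return -np.sum(-0.5 * np.log(1 - r2)
                   - (r2 * (x**2 + y**2) - 2 * rho * x * y) / (2 * (1 - r2)))

# Simulate from a Gaussian copula with rho = 0.6 and two arbitrary margins.
rng = np.random.default_rng(0)
z = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], size=5000)
x1 = stats.expon.ppf(stats.norm.cdf(z[:, 0]))          # exponential margin
x2 = stats.lognorm.ppf(stats.norm.cdf(z[:, 1]), s=1)   # lognormal margin

u, v = pseudo_obs(x1), pseudo_obs(x2)
res = optimize.minimize_scalar(neg_pseudo_loglik, bounds=(-0.99, 0.99),
                               args=(u, v), method="bounded")
print("estimated rho:", res.x)   # should be close to 0.6
```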

We propose a framework for decision-making in the presence of strategic agents with panel data, a standard setting in econometrics and statistics where one gets noisy, repeated measurements of multiple units. We consider a setup where there is a pre-intervention period, when the principal observes the outcomes of each unit, after which the principal uses these observations to assign a treatment to each unit. Our model can be thought of as a generalization of the synthetic controls and synthetic interventions frameworks, where units (or agents) may strategically manipulate pre-intervention outcomes to receive a more desirable intervention. We identify necessary and sufficient conditions under which a strategyproof mechanism that assigns interventions in the post-intervention period exists. Under a latent factor model assumption, we show that whenever a strategyproof mechanism exists, there is one with a simple closed form. In the setting where there is a single treatment and control (i.e., no other interventions), we establish that there is always a strategyproof mechanism, and provide an algorithm for learning such a mechanism. For the setting of multiple interventions, we provide an algorithm for learning a strategyproof mechanism, if there exists a sufficiently large gap in rewards between the different interventions. Finally, we empirically evaluate our model using real-world panel data collected from product sales over 18 months. We find that our methods compare favorably to baselines which do not take strategic interactions into consideration -- even in the presence of model misspecification. Along the way, we prove impossibility results for multi-class strategic classification, which may be of independent interest.

Parametric mathematical models such as partial differential equations with random coefficients have received considerable attention in the field of uncertainty quantification. The model uncertainties are often represented via a series expansion in terms of the parametric variables. In practice, this series expansion needs to be truncated to a finite number of terms, introducing a dimension truncation error into the numerical simulation of a parametric mathematical model. In recent years there have been several studies of the dimension truncation error corresponding to different models of the input random field, but many of these analyses have been carried out in the context of numerical integration. In this paper, we study the $L^2$ dimension truncation error of the parametric model problem. Estimates of this kind arise in the assessment of the dimension truncation error for function approximation in high dimensions. In addition, we show that the dimension truncation error rate is invariant with respect to certain transformations of the parametric variables. Numerical results are presented which showcase the sharpness of the theoretical results.
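
As a toy illustration of dimension truncation, the sketch below truncates a parametric response with algebraically decaying coefficients and estimates the resulting $L^2$ error in the parametric variables by Monte Carlo; the response function and the decay rate are illustrative stand-ins, not the paper's model problem.

```python
# A small numerical illustration of dimension truncation: a toy parametric
# quantity of interest with algebraically decaying coefficients is truncated
# to s terms, and the L^2 error in the parameters is estimated by Monte Carlo.
# The response f below is an illustrative stand-in for a PDE solution functional.
import numpy as np

theta = 2.0                        # coefficient decay rate, illustrative
s_ref = 1024                       # "full" dimension used as the reference
coeffs = np.arange(1, s_ref + 1, dtype=float) ** (-theta)

def f(y, s):
    """Toy parametric response using only the first s parameters."""
    return 1.0 / (2.0 + y[:, :s] @ coeffs[:s])

rng = np.random.default_rng(0)
y = rng.uniform(-0.5, 0.5, size=(5000, s_ref))    # uniform parameters
ref = f(y, s_ref)
for s in [4, 16, 64, 256]:
    err = np.sqrt(np.mean((f(y, s) - ref) ** 2))  # Monte Carlo L^2 error
    print(f"s = {s:4d}   L2 truncation error = {err:.3e}")
```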

Neural networks have recently shown promise for likelihood-free inference, providing orders-of-magnitude speed-ups over classical methods. However, current implementations are suboptimal when estimating parameters from independent replicates. In this paper, we use a decision-theoretic framework to argue that permutation-invariant neural networks are ideally placed for constructing Bayes estimators for arbitrary models, provided that simulation from these models is straightforward. We show that the resulting neural Bayes estimators can quickly and optimally estimate parameters in weakly identified and highly parameterised models with relative ease, and that they are highly competitive with, and much faster than, traditional likelihood-based estimators. We apply our estimator to a spatial analysis of sea-surface temperature in the Red Sea where, after training, we obtain parameter estimates, together with bootstrap-based uncertainty quantification of those estimates, from hundreds of spatial fields in a fraction of a second.
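
A minimal sketch of the permutation-invariant (DeepSets-style) construction the abstract describes: replicates pass through a shared network, are pooled by a symmetric function, and are mapped to a parameter estimate; training under squared-error loss makes the network approximate a Bayes estimator (the posterior mean). The toy model, architecture sizes, and prior below are all illustrative.

```python
# A sketch of a permutation-invariant neural Bayes estimator: sample parameters
# from a prior, simulate replicates, train under squared-error loss. The toy
# task (estimating the scale of a zero-mean Gaussian) is illustrative.
import torch
import torch.nn as nn

class DeepSetsEstimator(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.psi = nn.Sequential(nn.Linear(1, hidden), nn.ReLU(),
                                 nn.Linear(hidden, hidden), nn.ReLU())
        self.phi = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1), nn.Softplus())

    def forward(self, x):                 # x: (batch, m replicates, 1)
        pooled = self.psi(x).mean(dim=1)  # mean pooling => permutation invariance
        return self.phi(pooled).squeeze(-1)

net = DeepSetsEstimator()
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
m = 30                                     # replicates per simulated data set
for step in range(2000):
    sigma = torch.rand(256) * 2 + 0.1      # prior: sigma ~ U(0.1, 2.1)
    x = torch.randn(256, m, 1) * sigma.view(-1, 1, 1)  # simulate replicates
    loss = ((net(x) - sigma) ** 2).mean()  # Monte Carlo Bayes risk
    opt.zero_grad(); loss.backward(); opt.step()

# After training, estimation is a single forward pass, hence very fast.
x_test = torch.randn(1, m, 1) * 1.5
print("true sigma 1.5, estimate:", net(x_test).item())
```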

In statistics, independent, identically distributed random samples do not carry a natural ordering, and their statistics are typically invariant with respect to permutations of their order. Thus, an $n$-sample in a space $M$ can be considered as an element of the quotient space of $M^n$ modulo the permutation group. The present paper takes this definition of sample space and the related concept of orbit types as a starting point for developing a geometric perspective on statistics. We aim at deriving a general mathematical setting for studying the behavior of empirical and population means in spaces ranging from smooth Riemannian manifolds to general stratified spaces. We fully describe the orbifold and path-metric structure of the sample space when $M$ is a manifold or path-metric space, respectively. These results are non-trivial even when $M$ is Euclidean. We show that the infinite sample space exists in a Gromov-Hausdorff type sense and coincides with the Wasserstein space of probability distributions on $M$. We exhibit Fr\'echet means and $k$-means as metric projections onto 1-skeleta or $k$-skeleta in Wasserstein space, and we define a new and more general notion of polymeans. This geometric characterization via metric projections applies equally to sample and population means, and we use it to establish asymptotic properties of polymeans such as consistency and asymptotic normality.
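
For concreteness, a Fr\'echet mean is the minimizer of the sum of squared geodesic distances, and it depends on the sample only through the underlying unordered set, matching the quotient-space view above. The sketch below computes an empirical Fr\'echet mean on the unit circle by brute-force grid search; both the space and the minimization method are illustrative.

```python
# A minimal sketch of an empirical Frechet mean: the minimizer of the sum of
# squared geodesic distances, here on the unit circle with arc-length metric.
import numpy as np

def geodesic_dist(a, b):
    """Arc-length distance between angles a and b on the unit circle."""
    d = np.abs(a - b) % (2 * np.pi)
    return np.minimum(d, 2 * np.pi - d)

def frechet_mean_circle(angles, grid_size=10000):
    """Minimize the Frechet functional F(c) = sum_i d(c, x_i)^2 over a grid."""
    grid = np.linspace(0, 2 * np.pi, grid_size, endpoint=False)
    F = (geodesic_dist(grid[:, None], angles[None, :]) ** 2).sum(axis=1)
    return grid[np.argmin(F)]

# A sample is an unordered set: permuting it leaves the mean unchanged,
# consistent with the quotient of M^n by the permutation group.
rng = np.random.default_rng(0)
sample = rng.normal(loc=1.0, scale=0.3, size=200) % (2 * np.pi)
print("Frechet mean          :", frechet_mean_circle(sample))
print("same after permutation:", frechet_mean_circle(rng.permutation(sample)))
```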

We introduce and analyze a symmetric low-regularity scheme for the nonlinear Schr\"odinger (NLS) equation that goes beyond classical Fourier-based techniques. We show fractional convergence of the scheme in the $L^2$-norm, from first up to second order, both on the torus $\mathbb{T}^d$ and on a smooth bounded domain $\Omega \subset \mathbb{R}^d$, $d\le 3$, equipped with homogeneous Dirichlet boundary conditions. The new scheme allows for a symmetric approximation to the NLS equation in a more general setting than classical splitting, exponential integrators, and low-regularity schemes (i.e., under lower regularity assumptions, on more general domains, and with fractional rates). We motivate and illustrate our findings through numerical experiments, where we observe better structure-preserving properties and an improved error constant in low-regularity regimes.
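
The new scheme itself is not specified in the abstract, but one of the classical methods it is compared against is easy to sketch: Strang splitting for the cubic NLS equation $i\partial_t u = -\partial_x^2 u + |u|^2 u$ on the one-dimensional torus, alternating the exact flows of the linear and nonlinear parts. Discretization sizes and the initial datum below are illustrative.

```python
# Classical Strang splitting for cubic NLS on the 1D torus, a standard baseline
# that the paper's low-regularity integrator is compared against (not the new
# scheme itself). Both subflows are solved exactly.
import numpy as np

N, T, steps = 256, 1.0, 1000
dt = T / steps
x = 2 * np.pi * np.arange(N) / N
k = np.fft.fftfreq(N, d=1.0 / N)            # integer Fourier modes on the torus

u0 = (1.0 / (2.0 + np.cos(x))).astype(complex)   # smooth initial datum
u = u0.copy()
lin = np.exp(-1j * dt * k**2)               # exact flow of i u_t = -u_xx
for _ in range(steps):
    u *= np.exp(-1j * (dt / 2) * np.abs(u) ** 2)  # half step, nonlinear flow
    u = np.fft.ifft(lin * np.fft.fft(u))          # full step, linear flow
    u *= np.exp(-1j * (dt / 2) * np.abs(u) ** 2)  # half step, nonlinear flow

# Both subflows conserve mass, so the splitting conserves it up to round-off.
mass0, mass1 = np.sum(np.abs(u0) ** 2), np.sum(np.abs(u) ** 2)
print("relative mass drift:", abs(mass1 - mass0) / mass0)
```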

Gaussian boson sampling, a computational model that is widely believed to admit quantum supremacy, has already been experimentally demonstrated to surpass the classical simulation capabilities of even the most powerful supercomputers today. However, whether the current approach, limited by photon loss and noise in such experiments, prescribes a scalable path to quantum advantage is an open question. For example, random circuit sampling with constant noise per gate was recently shown not to be a scalable approach to achieving quantum supremacy, although simulating intermediate-scale systems is still difficult. To understand the effect of photon loss on the scalability of Gaussian boson sampling, we use a tensor network algorithm with $U(1)$ symmetry to examine the asymptotic scaling of the operator entanglement entropy, which relates to the simulation complexity. We develop a custom-built algorithm that significantly reduces the computational time with state-of-the-art hardware accelerators, enabling simulations of much larger systems. With this capability, we observe, for Gaussian boson sampling, the crucial $N_\text{out}\propto\sqrt{N}$ scaling of the number of surviving photons with the number of input photons that marks the boundary between efficient and inefficient classical simulation. We further show theoretically that this scaling should hold for other input states.

We consider the $\mathcal{H}^2$-formatted compression and computational estimation of covariance functions on a compact set in $\mathbb{R}^d$. The classical sample covariance or Monte Carlo estimator is prohibitively expensive for many practically relevant problems, which often require approximation spaces with many degrees of freedom and many samples for the estimator. In this article, we propose and analyze a data sparse multilevel sample covariance estimator, i.e., a multilevel Monte Carlo estimator. For this purpose, we generalize the notion of asymptotically smooth kernel functions to a Gevrey type class of kernels, for which we derive new variable-order $\mathcal{H}^2$-approximation rates. These variable-order $\mathcal{H}^2$-approximations can be considered as a variant of $hp$-approximations. Our multilevel sample covariance estimator then uses an approximate multilevel hierarchy of variable-order $\mathcal{H}^2$-approximations to compress the sample covariances on each level. The non-nestedness of the different levels makes the reduction to the final estimator nontrivial, and we present a suitable algorithm which can handle this task in linear complexity. This allows for a data sparse multilevel estimator of Gevrey covariance kernel functions in the best possible complexity for Monte Carlo type multilevel estimators, which is quadratic. Numerical examples are presented in which covariance matrices with tens of billions of entries are estimated.
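
The telescoping structure of a multilevel Monte Carlo estimator can be sketched on a toy problem: estimate an expectation as a sum of level-wise corrections, with many cheap samples on coarse levels and few expensive samples on fine ones. The sketch below applies this to a scalar quantity (the terminal value of a geometric Brownian motion under Euler discretization); the paper's estimator applies the same telescope to sample covariances and adds variable-order $\mathcal{H}^2$-compression on each level, which is omitted here.

```python
# A minimal sketch of the multilevel Monte Carlo telescope: E[Q_L] is estimated
# as sum_l E[Q_l - Q_{l-1}], with coupled fine/coarse samples on each level.
# The toy model and sample allocation are illustrative, not the paper's scheme.
import numpy as np

rng = np.random.default_rng(0)

def level_sampler(level, n_samples):
    """Coupled samples of Q_l (fine) and Q_{l-1} (coarse) for a toy GBM."""
    n_steps = 2 ** level
    dt = 1.0 / n_steps
    dW = rng.normal(0.0, np.sqrt(dt), size=(n_samples, n_steps))
    fine = np.ones(n_samples)
    for i in range(n_steps):
        fine = fine * (1.0 + 0.05 * dt + 0.2 * dW[:, i])   # Euler at level l
    if level == 0:
        return fine, np.zeros(n_samples)                   # base of telescope
    coarse = np.ones(n_samples)
    dWc = dW[:, 0::2] + dW[:, 1::2]                        # same Brownian path
    for i in range(n_steps // 2):
        coarse = coarse * (1.0 + 0.05 * 2 * dt + 0.2 * dWc[:, i])
    return fine, coarse

L = 6
estimate = 0.0
for level in range(L + 1):
    n = 4 ** (L - level) * 100              # fewer samples on expensive levels
    fine, coarse = level_sampler(level, n)
    estimate += np.mean(fine - coarse)      # telescoping sum of corrections
# Exact continuous-time value is E[S_1] = exp(0.05); Euler adds an O(dt) bias.
print("MLMC estimate:", estimate, " reference:", np.exp(0.05))
```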

The geometric optimisation of crystal structures is a procedure widely used in chemistry that changes the geometrical placement of the particles inside a structure. It is called structural relaxation and constitutes a local minimization problem with a non-convex objective function whose domain complexity increases with the number of particles involved. In this work we study the performance of the two most popular first-order optimisation methods, Gradient Descent and Conjugate Gradient, in structural relaxation. The respective pseudocodes can be found in Section 6. Although frequently employed, these methods have received little study in this context from an algorithmic point of view. In order to accurately define the problem, we provide a thorough derivation of all necessary formulae related to the crystal structure energy function and its differentiation. We run each algorithm with a constant step size, which provides a benchmark for the methods' analysis and direct comparison. We also design dynamic step size rules and study how these improve the two algorithms' performance. Our results show that there is a trade-off between the convergence rate and the probability that an experiment succeeds, hence we construct a function that assigns a utility to each method based on our respective preference. The function is built according to a recently introduced model of preference indication concerning algorithms with a deadline and their run time. Finally, building on all our insights from the experimental results, we provide algorithmic recipes that best correspond to each of the presented preferences, and we select one recipe as optimal for equally weighted preferences.
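
A minimal sketch of the two methods under comparison, applied here to a small Lennard-Jones cluster rather than a crystal structure: constant-step Gradient Descent and Fletcher-Reeves Conjugate Gradient. The pair potential, step size, and iteration count are illustrative; as the abstract notes, a constant step trades convergence speed against the chance that a run succeeds at all.

```python
# Constant-step gradient descent vs. Fletcher-Reeves conjugate gradient on a
# toy relaxation problem (a 7-atom Lennard-Jones cluster). Illustrative only;
# the paper treats crystal structure energies and dynamic step-size rules.
import numpy as np

def lj_energy_grad(pos):
    """Total Lennard-Jones energy and gradient for positions pos of shape (n, 3)."""
    diff = pos[:, None, :] - pos[None, :, :]
    r2 = (diff ** 2).sum(-1)
    np.fill_diagonal(r2, np.inf)                 # exclude self-interaction
    inv6 = r2 ** -3
    energy = 2.0 * np.sum(inv6 * (inv6 - 1.0))   # = 4*sum_{i<j}(r^-12 - r^-6)
    coef = (-48.0 * inv6 * inv6 + 24.0 * inv6) / r2   # U'(r)/r per pair
    grad = (coef[:, :, None] * diff).sum(axis=1)
    return energy, grad

def relax(pos, step=1e-3, iters=2000, cg=False):
    """Constant-step gradient descent, or Fletcher-Reeves CG when cg=True."""
    x = pos.copy()
    _, g = lj_energy_grad(x)
    d = -g
    for _ in range(iters):
        x = x + step * d
        _, g_new = lj_energy_grad(x)
        if cg:
            beta = (g_new * g_new).sum() / (g * g).sum()  # Fletcher-Reeves
            d = -g_new + beta * d
        else:
            d = -g_new
        g = g_new
    return x, lj_energy_grad(x)[0]

rng = np.random.default_rng(0)
base = np.array([[i, j, k] for i in range(2)
                 for j in range(2) for k in range(2)], float)[:7]
x0 = base * 1.2 + rng.normal(0, 0.01, size=(7, 3))   # perturbed lattice start
for flag, name in [(False, "gradient descent"), (True, "conjugate gradient")]:
    _, E = relax(x0, cg=flag)
    print(f"{name:18s} final energy: {E:.4f}")
```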

In 1954, Alston S. Householder published Principles of Numerical Analysis, one of the first modern treatments of matrix decomposition, which favored the (block) LU decomposition -- the factorization of a matrix into the product of lower and upper triangular matrices. Matrix decomposition has since become a core technology in machine learning, largely due to the development of the backpropagation algorithm for fitting neural networks. The sole aim of this survey is to give a self-contained introduction to concepts and mathematical tools in numerical linear algebra and matrix analysis in order to seamlessly introduce matrix decomposition techniques and their applications in subsequent sections. However, we clearly realize our inability to cover all the useful and interesting results concerning matrix decomposition and, given the paucity of scope, we omit, for example, the separate analysis of Euclidean spaces, Hermitian spaces, Hilbert spaces, and matters specific to the complex domain. We refer the reader to the literature on linear algebra for a more detailed introduction to these related fields.
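
As a concrete instance of the decomposition highlighted in the opening sentence, here is a minimal Doolittle-style LU factorization sketch. It does no pivoting, so it assumes all leading principal minors are nonzero; production code should use a pivoted routine such as scipy.linalg.lu.

```python
# A minimal Doolittle LU factorization: A = L @ U with L unit lower triangular
# and U upper triangular. No pivoting, so zero pivots are not handled.
import numpy as np

def lu_decompose(A):
    """Return L (unit lower triangular) and U (upper triangular) with A = L @ U."""
    n = A.shape[0]
    L, U = np.eye(n), np.zeros_like(A, dtype=float)
    for k in range(n):
        U[k, k:] = A[k, k:] - L[k, :k] @ U[:k, k:]                    # row k of U
        L[k+1:, k] = (A[k+1:, k] - L[k+1:, :k] @ U[:k, k]) / U[k, k]  # col k of L
    return L, U

A = np.array([[4.0, 3.0, 2.0],
              [6.0, 3.0, 1.0],
              [8.0, 5.0, 9.0]])
L, U = lu_decompose(A)
print("max reconstruction error:", np.abs(L @ U - A).max())
```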
