
Manifold data analysis is challenging due to the lack of parametric distributions on manifolds. To address this, we introduce a series of Riemannian radial distributions on Riemannian symmetric spaces. By utilizing the symmetry, we show that for many Riemannian radial distributions, the Riemannian $L^p$ center of mass is uniquely given by the location parameter, and the maximum likelihood estimator (MLE) of this parameter is given by an M-estimator. Therefore, these parametric distributions provide a promising tool for statistical modeling and algorithmic design. In addition, our paper develops a novel theory for parameter estimation and minimax optimality by integrating statistics, Riemannian geometry, and Lie theory. We demonstrate that the MLE achieves a convergence rate of root-$n$ up to logarithmic terms, where the rate is quantified by both the Hellinger distance between distributions and the geodesic distance between parameters. Then we derive a root-$n$ minimax lower bound for the parameter estimation rate, demonstrating the optimality of the MLE. Our minimax analysis is limited to the case of simply connected Riemannian symmetric spaces for technical reasons, but is still applicable to numerous applications. Finally, we extend our studies to Riemannian radial distributions with an unknown temperature parameter, and establish the convergence rate of the MLE. We also derive the model complexity of von Mises-Fisher distributions on spheres and discuss the effects of geometry in statistical estimation.
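The Riemannian $L^2$ center of mass (Fréchet mean) that plays the role of the location parameter above can be illustrated on the unit sphere, a basic Riemannian symmetric space. The sketch below is a generic gradient-descent computation of the Fréchet mean using the sphere's exponential and logarithm maps; the function names, step size, and iteration count are illustrative choices, not taken from the paper.

```python
import numpy as np

def sphere_log(p, q):
    """Log map on the unit sphere: tangent vector at p pointing toward q."""
    w = q - np.dot(p, q) * p                     # project q onto tangent space at p
    nw = np.linalg.norm(w)
    theta = np.arccos(np.clip(np.dot(p, q), -1.0, 1.0))
    return np.zeros_like(p) if nw < 1e-12 else theta * w / nw

def sphere_exp(p, v):
    """Exp map on the unit sphere: follow geodesic from p in direction v."""
    nv = np.linalg.norm(v)
    if nv < 1e-12:
        return p
    return np.cos(nv) * p + np.sin(nv) * v / nv

def frechet_mean(points, iters=100, step=1.0):
    """Riemannian L^2 center of mass via gradient descent on the sphere."""
    mu = points[0]
    for _ in range(iters):
        grad = np.mean([sphere_log(mu, q) for q in points], axis=0)
        mu = sphere_exp(mu, step * grad)
    return mu
```

For a cloud of points clustered around a pole, the iteration converges to a point near that pole, consistent with the location parameter being the unique center of mass for radial distributions.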

Related content

In the study of extremes, the presence of asymptotic independence signifies that extreme events across multiple variables are unlikely to occur together. Although well-understood in a bivariate context, the concept remains relatively unexplored when addressing the nuances of joint occurrence of extremes in higher dimensions. In this paper, we propose a notion of mutual asymptotic independence to capture the behavior of joint extremes in dimensions larger than two and contrast it with the classical notion of (pairwise) asymptotic independence. Furthermore, we define $k$-wise asymptotic independence, which lies in between pairwise and mutual asymptotic independence. The concepts are compared using examples of Archimedean, Gaussian, and Marshall-Olkin copulas, among others. Notably, for the popular Gaussian copula, we provide explicit conditions on the correlation matrix for mutual asymptotic independence to hold; moreover, we are able to compute exact tail orders for various tail events.
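In the bivariate case mentioned above, asymptotic independence for a Gaussian copula with correlation $\rho < 1$ means $P(U > u \mid V > u) \to 0$ as $u \to 1$. A quick Monte Carlo sketch can make the decay visible; the sample size, thresholds, and correlation value below are arbitrary illustrative choices.

```python
import numpy as np
from scipy.stats import norm

def cond_exceedance(rho, u, n=400_000, seed=0):
    """Estimate P(U > u | V > u) for a bivariate Gaussian copula with correlation rho."""
    rng = np.random.default_rng(seed)
    z1 = rng.standard_normal(n)
    z2 = rho * z1 + np.sqrt(1 - rho**2) * rng.standard_normal(n)
    t = norm.ppf(u)                      # uniform threshold u maps to normal quantile t
    both = np.mean((z1 > t) & (z2 > t))  # joint exceedance
    one = np.mean(z2 > t)                # marginal exceedance
    return both / one
```

Raising the threshold `u` toward 1 shrinks the conditional exceedance probability toward zero, which is exactly the pairwise asymptotic independence that the Gaussian copula exhibits for any $\rho < 1$.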

"Accuracy-on-the-line" is a widely observed phenomenon in machine learning, where a model's accuracy on in-distribution (ID) and out-of-distribution (OOD) data is positively correlated across different hyperparameters and data configurations. But when does this useful relationship break down? In this work, we explore its robustness. The key observation is that noisy data and the presence of nuisance features can be sufficient to shatter the Accuracy-on-the-line phenomenon. In these cases, ID and OOD accuracy can become negatively correlated, leading to "Accuracy-on-the-wrong-line". This phenomenon can also occur in the presence of spurious (shortcut) features, which tend to overshadow the more complex signal (core, non-spurious) features, resulting in a large nuisance feature space. Moreover, scaling to larger datasets does not mitigate this undesirable behavior and may even exacerbate it. We formally prove a lower bound on the OOD error of a linear classification model, characterizing the conditions on the noise and nuisance features under which the OOD error is large. We finally demonstrate this phenomenon across both synthetic and real datasets with noisy data and nuisance features.
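The inversion described above can be reproduced in a deliberately tiny synthetic setup: a model that leans on a spurious feature ranks highest on ID accuracy but lowest on OOD accuracy, reversing the usual positive correlation. Everything below (feature names, agreement rates, the two hand-built classifiers) is an illustrative toy, not the paper's construction or proof.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50_000
y = rng.integers(0, 2, n) * 2 - 1                  # labels in {-1, +1}

# Core feature: weakly predictive everywhere (agrees with y 70% of the time).
core = y * np.where(rng.random(n) < 0.7, 1, -1)
# Spurious feature: strongly predictive in-distribution (90% agreement),
# but its correlation with the label flips out of distribution.
spur_id = y * np.where(rng.random(n) < 0.9, 1, -1)
spur_ood = -spur_id

def acc(pred, labels):
    return np.mean(pred == labels)

core_model = lambda c, s: np.sign(c)               # uses only the core feature
spur_model = lambda c, s: np.sign(s)               # uses only the spurious feature

id_accs = [acc(core_model(core, spur_id), y), acc(spur_model(core, spur_id), y)]
ood_accs = [acc(core_model(core, spur_ood), y), acc(spur_model(core, spur_ood), y)]
# The ID ranking favors the spurious model; the OOD ranking reverses it.
```

Across the two models, higher ID accuracy coincides with lower OOD accuracy: a minimal instance of "Accuracy-on-the-wrong-line".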

A non-linear complex system governed by multi-spatial and multi-temporal physics scales cannot be fully understood with a single diagnostic, as each provides only a partial view and much information is lost during data extraction. Combining multiple diagnostics also results in imperfect projections of the system's physics. By identifying hidden inter-correlations between diagnostics, we can leverage mutual support to fill in these gaps, but uncovering these inter-correlations analytically is too complex. We introduce a groundbreaking machine learning methodology to address this issue. Our multimodal approach generates super resolution data encompassing multiple physics phenomena, capturing detailed structural evolution and responses to perturbations previously unobservable. This methodology addresses a critical problem in fusion plasmas: the Edge Localized Mode (ELM), a plasma instability that can severely damage reactor walls. One method to stabilize ELM is using resonant magnetic perturbation to trigger magnetic islands. However, low spatial and temporal resolution of measurements limits the analysis of these magnetic islands due to their small size, rapid dynamics, and complex interactions within the plasma. With super-resolution diagnostics, we can experimentally verify theoretical models of magnetic islands for the first time, providing unprecedented insights into their role in ELM stabilization. This advancement aids in developing effective ELM suppression strategies for future fusion reactors like ITER and has broader applications, potentially revolutionizing diagnostics in fields such as astronomy, astrophysics, and medical imaging.

Factor models are widely used for dimension reduction in the analysis of multivariate data. This is achieved through the decomposition of a $p \times p$ covariance matrix into the sum of two components arising from a latent factor representation: a diagonal matrix of idiosyncratic variances and a shared variation matrix, that is, the product of a $p \times k$ factor loadings matrix and its transpose. If $k \ll p$, this defines a parsimonious factorisation of the covariance matrix. Historically, little attention has been paid to incorporating prior information in Bayesian analyses using factor models where, at best, the prior for the factor loadings is order invariant. In this work, a class of structured priors is developed that can encode ideas of dependence structure about the shared variation matrix. The construction allows data-informed shrinkage towards sensible parametric structures while also facilitating inference over the number of factors. Using an unconstrained reparameterisation of stationary vector autoregressions, the methodology is extended to stationary dynamic factor models. For computational inference, parameter-expanded Markov chain Monte Carlo samplers are proposed, including an efficient adaptive Gibbs sampler. Two substantive applications showcase the scope of the methodology and its inferential benefits.
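The covariance decomposition at the heart of the factor model, $\Sigma = \Lambda\Lambda^\top + \Psi$ with $\Lambda$ a $p \times k$ loadings matrix and $\Psi$ diagonal, can be sketched directly; the dimensions and random draws below are arbitrary, and the parameter count comparison shows the parsimony gained when $k \ll p$.

```python
import numpy as np

rng = np.random.default_rng(0)
p, k = 50, 3                               # k << p gives a parsimonious parameterisation

Lambda = rng.standard_normal((p, k))        # factor loadings matrix
psi = rng.uniform(0.5, 1.5, p)              # idiosyncratic variances
Sigma = Lambda @ Lambda.T + np.diag(psi)    # p x p covariance matrix

# Free parameters: unrestricted covariance vs factor structure
full_params = p * (p + 1) // 2              # 1275 for p = 50
factor_params = p * k + p                   # 200 for p = 50, k = 3
```

Since $\Lambda\Lambda^\top$ is positive semi-definite and $\Psi$ has strictly positive diagonal, $\Sigma$ is automatically a valid (positive-definite) covariance matrix, which is one reason the factorisation is convenient for prior specification.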

We investigate the non-Gaussian effects of the Saha equation in Rindler space via Tsallis statistics. By considering a system with cylindrical geometry, we deduce the non-Gaussian Saha ionization equation for a partially ionized hydrogen plasma that expands with uniform acceleration. We demonstrate conditions for the validity of the equivalence principle within the realms of both Boltzmann-Gibbs and Tsallis statistics. In the non-Gaussian framework, our findings reveal that the effective binding energy exhibits a quadratic dependence on the frame acceleration, in contrast to the linear dependence predicted by Boltzmann-Gibbs statistics. We show that an accelerated observer shall notice a more pronounced effect on the effective binding energy for $a>0$ and a more attenuated one when $a<0$. We also ascertain that an accelerated observer will measure values of $q$ smaller than those measured in the rest frame. Besides, assuming the equivalence principle, we examine the effects of the gravitational field on the photoionization of hydrogen atoms and pair production. We show that both photoionization and pair production are more intensely suppressed in regions with a strong gravitational field in a non-Gaussian context than in the Boltzmann-Gibbs framework. Lastly, constraints on the gravitational field and the electron and positron chemical potentials are derived.
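For reference, the classical (Boltzmann-Gibbs, non-accelerated) Saha equation that the work above generalizes can be evaluated numerically for hydrogen: writing $x$ for the ionization fraction and $n$ for the total number density, it reduces to $x^2/(1-x) = S(T)/n$ with $S(T) = (2\pi m_e k_B T/h^2)^{3/2} e^{-\chi/k_B T}$. The snippet below is a standard textbook sketch, not the paper's Tsallis/Rindler version; constants are SI values.

```python
import numpy as np

# Physical constants (SI units)
k_B = 1.380649e-23                 # Boltzmann constant
m_e = 9.1093837015e-31             # electron mass
h = 6.62607015e-34                 # Planck constant
chi = 13.6 * 1.602176634e-19       # hydrogen ionization energy (J)

def ionization_fraction(T, n):
    """Solve the classical Saha equation x^2/(1-x) = S(T)/n for hydrogen."""
    S = (2 * np.pi * m_e * k_B * T / h**2) ** 1.5 * np.exp(-chi / (k_B * T))
    r = S / n
    # Quadratic x^2 + r x - r = 0, taking the physical root in [0, 1]
    return (-r + np.sqrt(r * r + 4 * r)) / 2
```

At fixed density the ionization fraction rises steeply with temperature, from essentially neutral gas at a few thousand kelvin to nearly full ionization above roughly twenty thousand kelvin; the paper's non-Gaussian corrections modify the effective binding energy entering the exponential.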

In this short paper, we present a simple variant of the recursive path ordering, specified for Logically Constrained Simply Typed Rewriting Systems (LCSTRSs). This is a method for curried systems, without lambda but with partially applied function symbols, which can deal with logical constraints. As it is designed for use in the dependency pair framework, it is defined as a reduction pair, allowing weak monotonicity.

In this paper, we propose nonparametric estimators for the varextropy function of an absolutely continuous random variable. Consistency of the estimators is established under suitable regularity conditions. Moreover, a simulation study is performed to compare the performance of the proposed estimators in terms of mean squared error (MSE) and bias. Furthermore, using the proposed estimators, some tests are constructed for uniformity. It is shown that the varextropy-based test proposed here compares well, in terms of power, with other uniformity hypothesis tests.
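The varextropy of $X$ with density $f$ is $VJ(X) = \mathrm{Var}(-\tfrac{1}{2}f(X))$, which vanishes exactly when $f$ is constant on its support, i.e., for the uniform distribution; this is what makes it usable as a uniformity statistic. One naive plug-in estimator (just one possibility; the paper's estimators may differ) replaces $f$ by a kernel density estimate:

```python
import numpy as np
from scipy.stats import gaussian_kde

def varextropy_plugin(x):
    """Naive plug-in estimate of VJ(X) = Var(-f(X)/2) via a Gaussian KDE."""
    f_hat = gaussian_kde(x)(x)     # estimated density evaluated at the sample
    return np.var(f_hat) / 4.0

rng = np.random.default_rng(0)
u = rng.uniform(size=2000)                  # uniform: f constant, so VJ = 0
e = rng.exponential(scale=0.5, size=2000)   # non-uniform alternative
```

A small value of the statistic is consistent with uniformity, while non-uniform samples yield a visibly larger value, mirroring the test logic described above.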

Riemannian optimization is concerned with problems where the independent variable lies on a smooth manifold. A number of problems from numerical linear algebra fall into this category, where the manifold is usually specified by special matrix structures, such as orthogonality or definiteness. Following this line of research, we investigate tools for Riemannian optimization on the symplectic Stiefel manifold. We complement the existing set of numerical optimization algorithms with a Riemannian trust region method tailored to the symplectic Stiefel manifold. To this end, we derive a matrix formula for the Riemannian Hessian under a right-invariant metric. Moreover, we propose a novel retraction for approximating the Riemannian geodesics. Finally, we conduct a comparative study in which we juxtapose the performance of the Riemannian variants of the steepest descent, conjugate gradients, and trust region methods on selected matrix optimization problems that feature symplectic constraints.
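The symplectic constraint in question requires $X^\top J X = J$, with $J$ the Poisson matrix $\begin{psmallmatrix}0 & I\\ -I & 0\end{psmallmatrix}$. The snippet below constructs a symplectic matrix as the exponential of a Hamiltonian matrix and verifies the constraint numerically; this is a standard construction for generating feasible points, not the paper's retraction or trust region method.

```python
import numpy as np
from scipy.linalg import expm

def poisson_matrix(n):
    """J = [[0, I], [-I, 0]] of size 2n x 2n."""
    I = np.eye(n)
    Z = np.zeros((n, n))
    return np.block([[Z, I], [-I, Z]])

n = 4
J = poisson_matrix(n)

rng = np.random.default_rng(0)
A = rng.standard_normal((2 * n, 2 * n))
S = 0.1 * (A + A.T)                # symmetric matrix (scaled for conditioning)
X = expm(J @ S)                    # exp of a Hamiltonian matrix J*S is symplectic

residual = np.linalg.norm(X.T @ J @ X - J)   # constraint violation, ~ machine precision
```

Since $(JS)^\top J + J(JS) = 0$ for symmetric $S$, the matrix $JS$ is Hamiltonian and its exponential satisfies $X^\top J X = J$ exactly in real arithmetic; optimization methods on this manifold must preserve the same constraint after every retraction step.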

Practical parameter identifiability in ODE-based epidemiological models is a known issue, yet one that merits further study. It is essentially ubiquitous due to noise and errors in real data. In this study, to avoid uncertainty stemming from data of unknown quality, simulated data with added noise are used to investigate practical identifiability in two distinct epidemiological models. Particular emphasis is placed on the role of initial conditions, which are assumed unknown, except those that are directly measured. Instead of focusing on a single method of estimation, we use and compare results from various broadly used methods, including maximum likelihood and Markov Chain Monte Carlo (MCMC) estimation. Among other findings, our analysis revealed that the MCMC estimator is overall more robust than the point estimators considered. Its estimates and predictions are improved when the initial conditions of certain compartments are fixed so that the model becomes globally identifiable. For the point estimators, whether fixing or fitting the initial conditions that are not directly measured improves parameter estimates is model-dependent. Specifically, in the standard SEIR model, fixing the initial condition for the susceptible population S(0) improved parameter estimates, while this was not true when fixing the initial condition of the asymptomatic population in a more involved model. Our study also corroborates that the quality of parameter estimates depends on whether pre-peak or post-peak time series are used. Finally, our examples suggest that in the presence of significantly noisy data, the value of structural identifiability is moot.
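The standard SEIR model referenced above can be simulated with a simple forward-Euler scheme, and multiplicative noise added to the prevalence curve yields synthetic data of the kind such identifiability studies fit. The parameter values, noise level, and time step below are illustrative choices, not those of the study.

```python
import numpy as np

def seir(beta, sigma, gamma, S0, E0, I0, R0=0.0, days=160, dt=0.1):
    """Forward-Euler integration of the standard SEIR model; returns daily I(t)."""
    N = S0 + E0 + I0 + R0
    S, E, I, R = S0, E0, I0, R0
    steps_per_day = int(round(1 / dt))
    traj = []
    for step in range(days * steps_per_day):
        dS = -beta * S * I / N
        dE = beta * S * I / N - sigma * E
        dI = sigma * E - gamma * I
        dR = gamma * I
        S += dt * dS; E += dt * dE; I += dt * dI; R += dt * dR
        if step % steps_per_day == 0:
            traj.append(I)
    return np.array(traj)

# Illustrative parameters: R0 = beta/gamma = 5, 5-day mean latency
I_true = seir(beta=0.5, sigma=0.2, gamma=0.1, S0=1e6, E0=10, I0=1)
rng = np.random.default_rng(0)
I_noisy = I_true * rng.lognormal(0.0, 0.1, I_true.size)   # multiplicative noise
```

Restricting a fit to the pre-peak or post-peak portion of `I_noisy` changes which parameter combinations are constrained by the data, which is one concrete mechanism behind the sensitivity to the observation window noted above.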

We consider the problem of function approximation by two-layer neural nets with random weights that are "nearly Gaussian" in the sense of Kullback-Leibler divergence. Our setting is the mean-field limit, where the finite population of neurons in the hidden layer is replaced by a continuous ensemble. We show that the problem can be phrased as global minimization of a free energy functional on the space of (finite-length) paths over probability measures on the weights. This functional trades off the $L^2$ approximation risk of the terminal measure against the KL divergence of the path with respect to an isotropic Brownian motion prior. We characterize the unique global minimizer and examine the dynamics in the space of probability measures over weights that can achieve it. In particular, we show that the optimal path-space measure corresponds to the F\"ollmer drift, the solution to a McKean-Vlasov optimal control problem closely related to the classic Schr\"odinger bridge problem. While the F\"ollmer drift cannot in general be obtained in closed form, thus limiting its potential algorithmic utility, we illustrate the viability of the mean-field Langevin diffusion as a finite-time approximation under various conditions on entropic regularization. Specifically, we show that it closely tracks the F\"ollmer drift when the regularization is such that the minimizing density is log-concave.
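The mean-field Langevin diffusion invoked at the end can be mimicked with a finite particle system: noisy gradient descent on the weights of a two-layer net, where the noise temperature plays the role of the entropic regularization. The toy below fits a 1-D target with tanh neurons; the target function, particle count, step size, and temperature are generic illustrative choices, not the paper's exact functional or schedule.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-2, 2, 64)
y = np.sin(2 * x)                                   # toy regression target

m, lr, tau, steps = 256, 5.0, 1e-6, 2000            # particles, step, temperature
a = rng.standard_normal(m) * 0.1                    # output weights
w = rng.standard_normal(m)                          # input weights
b = rng.standard_normal(m)                          # biases

def risk():
    pred = (a[:, None] * np.tanh(w[:, None] * x[None, :] + b[:, None])).mean(0)
    return np.mean((pred - y) ** 2)

r0 = risk()
for _ in range(steps):
    h = np.tanh(w[:, None] * x[None, :] + b[:, None])   # m x n activations
    pred = (a[:, None] * h).mean(0)
    err = 2 * (pred - y) / x.size                       # dL/dpred
    sech2 = 1 - h ** 2
    ga = (h * err[None, :]).sum(1) / m
    gw = (a[:, None] * sech2 * err[None, :] * x[None, :]).sum(1) / m
    gb = (a[:, None] * sech2 * err[None, :]).sum(1) / m
    noise = np.sqrt(2 * lr * tau)                       # Langevin noise scale
    a -= lr * ga - noise * rng.standard_normal(m)
    w -= lr * gw - noise * rng.standard_normal(m)
    b -= lr * gb - noise * rng.standard_normal(m)
r1 = risk()
```

Driving the temperature `tau` to zero recovers plain mean-field gradient descent, while larger values keep the particle ensemble diffuse; the trade-off between the $L^2$ risk term and the injected entropy is the finite-particle shadow of the free energy functional described above.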
