
Gaussian processes (GPs) are popular nonparametric statistical models for learning unknown functions and quantifying the spatiotemporal uncertainty in data. Recent works have extended GPs to model scalar and vector quantities distributed over non-Euclidean domains, including smooth manifolds appearing in numerous fields such as computer vision, dynamical systems, and neuroscience. However, these approaches assume that the manifold underlying the data is known, limiting their practical utility. We introduce RVGP, a generalisation of GPs for learning vector signals over latent Riemannian manifolds. Our method uses positional encoding with eigenfunctions of the connection Laplacian, associated with the tangent bundle, readily derived from common graph-based approximations of data. We demonstrate that RVGP possesses global regularity over the manifold, which allows it to super-resolve and inpaint vector fields while preserving singularities. Furthermore, we use RVGP to reconstruct high-density neural dynamics derived from low-density EEG recordings in healthy individuals and Alzheimer's patients. We show that vector field singularities are important disease markers and that their reconstruction yields classification accuracy of disease states comparable to that of high-density recordings. Thus, our method overcomes a significant practical limitation in experimental and clinical applications.
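The positional-encoding idea above can be sketched in its simplest scalar-field analogue: build a nearest-neighbour graph from a point cloud sampled on a manifold, take low-frequency eigenvectors of the graph Laplacian as coordinates, and evaluate a GP kernel on those coordinates. This is an illustrative assumption-laden sketch, not the paper's method, which uses the connection Laplacian of the tangent bundle to handle vector-valued signals; the lengthscale and neighbourhood size below are arbitrary choices.

```python
import numpy as np

# Illustrative scalar-field analogue (NOT the paper's connection-Laplacian
# construction): Laplacian eigenvectors as positional encodings for a GP kernel.

rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 80)
pts = np.c_[np.cos(theta), np.sin(theta)]          # point cloud on a circle (the manifold)

# k-nearest-neighbour graph and its unnormalised Laplacian L = D - A.
d2 = ((pts[:, None] - pts[None]) ** 2).sum(-1)
k = 5
A = np.zeros_like(d2)
for i, row in enumerate(d2):
    A[i, np.argsort(row)[1:k + 1]] = 1.0           # k nearest neighbours, excluding self
A = np.maximum(A, A.T)                              # symmetrise
L = np.diag(A.sum(1)) - A

# Positional encoding: the first few non-trivial Laplacian eigenvectors.
evals, evecs = np.linalg.eigh(L)                    # eigenvalues in ascending order
phi = evecs[:, 1:6]                                 # skip the constant (zero-eigenvalue) mode

# Squared-exponential GP kernel evaluated on the encodings (lengthscale assumed).
diff2 = ((phi[:, None] - phi[None]) ** 2).sum(-1)
K = np.exp(-diff2 / (2 * 0.2 ** 2))
```

Fitting a GP then proceeds as usual with `K` as the covariance matrix, so the manifold geometry enters only through the encoding.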

Related content

In sampling-based Bayesian models of brain function, neural activities are assumed to be samples from probability distributions that the brain uses for probabilistic computation. However, a comprehensive understanding of how mechanistic models of neural dynamics can sample from arbitrary distributions is still lacking. We use tools from functional analysis and stochastic differential equations to explore the minimum architectural requirements for $\textit{recurrent}$ neural circuits to sample from complex distributions. We first consider the traditional sampling model consisting of a network of neurons whose outputs directly represent the samples (sampler-only network). We argue that synaptic current and firing-rate dynamics in the traditional model have limited capacity to sample from a complex probability distribution. We show that the firing rate dynamics of a recurrent neural circuit with a separate set of output units can sample from an arbitrary probability distribution. We call such circuits reservoir-sampler networks (RSNs). We propose an efficient training procedure based on denoising score matching that finds recurrent and output weights such that the RSN implements Langevin sampling. We empirically demonstrate our model's ability to sample from several complex data distributions using the proposed neural dynamics and discuss its applicability to developing the next generation of sampling-based brain models.
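The Langevin sampling that the trained RSN is meant to implement can be illustrated in a few lines, assuming the score function of the target density is known analytically. Here the target is a standard 2-D Gaussian, whose score is simply $-x$; in the paper's setting this score would instead be learned with denoising score matching and realised by the recurrent dynamics, which this toy sketch does not reproduce.

```python
import numpy as np

# Minimal sketch of unadjusted Langevin dynamics with a known score function.
# Target: standard 2-D Gaussian, score(x) = -x.

def langevin_sample(score, x0, step=0.05, n_steps=5000, rng=None):
    """x_{t+1} = x_t + step * score(x_t) + sqrt(2 * step) * noise."""
    rng = np.random.default_rng(rng)
    x = np.array(x0, dtype=float)
    samples = []
    for _ in range(n_steps):
        x = x + step * score(x) + np.sqrt(2 * step) * rng.standard_normal(x.shape)
        samples.append(x.copy())
    return np.array(samples)

samples = langevin_sample(lambda x: -x, x0=[3.0, -3.0], rng=0)
burned = samples[500:]            # discard burn-in; remainder is approximately N(0, I)
print(burned.mean(axis=0), burned.std(axis=0))
```

After burn-in the empirical mean and standard deviation should be close to 0 and 1 in each coordinate, up to Monte Carlo error and the small discretisation bias of the unadjusted scheme.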

We present a formulation for high-order generalized periodicity conditions in the context of a high-order electromechanical theory including flexoelectricity, strain gradient elasticity and gradient dielectricity, with the goal of studying periodic architected metamaterials. Such a theory results in fourth-order governing partial differential equations, and the periodicity conditions involve continuity across the periodic boundary of the primal fields (displacement and electric potential) and their normal derivatives, as well as continuity of the corresponding dual generalized forces (tractions, double tractions, surface charge density and double surface charge density). Rather than imposing these conditions numerically as explicit constraints, we develop an approximation space which fulfils generalized periodicity by construction. Our method naturally allows us to impose general macroscopic fields (strains/stresses and electric fields/electric displacements) along arbitrary directions, enabling the characterization of the material anisotropy. We apply the proposed method to study periodic architected metamaterials with apparent piezoelectricity. We first verify the method by directly comparing the results with a large periodic structure, then apply it to evaluate the anisotropic apparent piezoelectricity of a geometrically polarized 2D lattice, and finally demonstrate the application of the method in a 3D architected metamaterial.

We introduce and analyse a family of hash and predicate functions that are more likely to produce collisions for small reducible configurations of vectors. These may offer practical improvements to lattice sieving for short vectors. In particular, in one asymptotic regime the family exhibits significantly different convergent behaviour than existing hash functions and predicates.

A common approach to evaluating the significance of a collection of $p$-values combines them with a pooling function, in particular when the original data are not available. These pooled $p$-values convert a sample of $p$-values into a single number which behaves like a univariate $p$-value. To clarify discussion of these functions, a telescoping series of alternative hypotheses is introduced that communicates the strength and prevalence of non-null evidence in the $p$-values before general pooling formulae are discussed. A pattern noticed in the UMP pooled $p$-value for a particular alternative motivates the definition and discussion of central and marginal rejection levels at $\alpha$. It is proven that central rejection is always greater than or equal to marginal rejection, motivating a quotient to measure the balance between the two for pooled $p$-values. A combining function based on the $\chi^2_{\kappa}$ quantile transformation is proposed to control this quotient and shown to be robust to mis-specified parameters relative to the UMP. Different powers for different parameter settings motivate a map of plausible alternatives based on where this pooled $p$-value is minimized.
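A $\chi^2$-quantile combining function of the kind described above is easy to sketch: map each $p$-value through the upper-tail quantile of a $\chi^2_{\kappa}$ distribution, sum, and refer the total to $\chi^2_{K\kappa}$ under the global null. With $\kappa = 2$ this recovers Fisher's classical method, $-2\sum_i \log p_i \sim \chi^2_{2K}$. The function name and default below are illustrative, not the paper's notation.

```python
import numpy as np
from scipy import stats

# Hedged sketch of a chi-squared quantile pooling function.
# kappa = 2 recovers Fisher's method (-2 * sum(log p_i) ~ chi^2_{2K}).

def pooled_p_chi2(pvals, kappa=2.0):
    pvals = np.asarray(pvals, dtype=float)
    stat = stats.chi2.isf(pvals, df=kappa).sum()     # upper-tail quantile transform
    return stats.chi2.sf(stat, df=kappa * len(pvals))

print(pooled_p_chi2([0.5, 0.6, 0.4]))     # large: no pooled evidence against the null
print(pooled_p_chi2([0.001, 0.02, 0.3]))  # small: strong pooled evidence
```

Varying `kappa` shifts the trade-off between sensitivity to a few very small $p$-values and to many moderately small ones, which is the balance the paper's central/marginal rejection quotient quantifies.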

This paper presents a novel centralized, variational data assimilation approach for calibrating transient dynamic models in electrical power systems, focusing on load model parameters. With the increasing importance of inverter-based resources, assessing power systems' dynamic performance under disturbances has become challenging, necessitating robust model calibration methods. The proposed approach expands on previous Bayesian frameworks by establishing a posterior distribution of parameters using an approximation around the maximum a posteriori value. We illustrate the efficacy of our method by generating events of varying intensity, highlighting its ability to capture the systems' evolution accurately and with associated uncertainty estimates. This research improves the precision of dynamic performance assessments in modern power systems, with potential applications in managing uncertainties and optimizing system operations.
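The posterior approximation around the maximum a posteriori value can be illustrated with a Laplace (Gaussian) approximation on a toy model where the exact posterior is known. This is a generic sketch of the ingredient the abstract describes, not the paper's power-system implementation; the optimisation and differentiation schemes below are deliberately crude stand-ins.

```python
import numpy as np

# Laplace approximation: Gaussian centred at the MAP with covariance equal to
# the inverse Hessian of the negative log posterior at the MAP.

def laplace_approx(neg_log_post, theta0, h=1e-4):
    """Return (theta_map, sigma) via a crude grid search + finite-difference Hessian."""
    grid = np.linspace(theta0 - 5, theta0 + 5, 20001)
    theta_map = grid[np.argmin([neg_log_post(t) for t in grid])]
    hess = (neg_log_post(theta_map + h) - 2 * neg_log_post(theta_map)
            + neg_log_post(theta_map - h)) / h**2
    return theta_map, np.sqrt(1.0 / hess)

# Toy conjugate model: theta ~ N(0, 1) prior, y_i ~ N(theta, 1).
y = np.array([1.2, 0.8, 1.0])
nlp = lambda t: 0.5 * t**2 + 0.5 * np.sum((y - t)**2)
theta_map, sigma = laplace_approx(nlp, theta0=0.0)
# Exact posterior here is N(sum(y)/(n+1), 1/(n+1)) = N(0.75, 0.25), i.e. sd = 0.5,
# so the Laplace approximation is exact for this Gaussian toy case.
print(theta_map, sigma)
```

For non-Gaussian posteriors the same recipe gives an approximation whose quality degrades with posterior skewness, which is why it is presented as an approximation around the MAP rather than an exact posterior.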

Inference for functional linear models in the presence of heteroscedastic errors has received insufficient attention given its practical importance; in fact, even a central limit theorem has not been studied in this case. At issue, conditional mean (projection of the slope function) estimates have complicated sampling distributions due to the infinite dimensional regressors, which create truncation bias and scaling problems that are compounded by non-constant variance under heteroscedasticity. As a foundation for distributional inference, we establish a central limit theorem for the estimated projection under general dependent errors, and subsequently we develop a paired bootstrap method to approximate sampling distributions. The proposed paired bootstrap does not follow the standard bootstrap algorithm for finite dimensional regressors, as this version fails outside of a narrow window for implementation with functional regressors. The reason owes to a bias with functional regressors in a naive bootstrap construction. Our bootstrap proposal incorporates debiasing and thereby attains much broader validity and flexibility with truncation parameters for inference under heteroscedasticity; even when the naive approach may be valid, the proposed bootstrap method performs better numerically. The bootstrap is applied to construct confidence intervals for projections and for conducting hypothesis tests for the slope function. Our theoretical results on bootstrap consistency are demonstrated through simulation studies and also illustrated with real data examples.
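The core mechanics of a paired ("cases") bootstrap can be sketched with a scalar-regression stand-in: resampling $(X_i, Y_i)$ pairs jointly keeps each error attached to its regressor, which is what makes the scheme valid under heteroscedasticity, unlike residual resampling. This toy sketch deliberately omits the functional regressors and the debiasing step that the paper shows are essential in the infinite-dimensional setting.

```python
import numpy as np

# Paired bootstrap confidence interval for a regression slope under
# heteroscedastic errors (scalar stand-in for the functional setting).

rng = np.random.default_rng(0)
n = 200
x = rng.uniform(0, 2, n)
y = 1.0 + 2.0 * x + rng.normal(scale=0.2 + 0.5 * x)   # error variance grows with x

def ols_slope(x, y):
    return np.cov(x, y, bias=True)[0, 1] / np.var(x)

boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)          # resample (x_i, y_i) pairs with replacement
    boot.append(ols_slope(x[idx], y[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"95% paired-bootstrap CI for the slope: ({lo:.2f}, {hi:.2f})")
```

The resulting interval should be centred near the true slope of 2; in the functional case a naive analogue of this resampling is biased, which motivates the paper's debiased construction.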

Statistical learning methods are widely used to tackle complex problems owing to their flexibility, good predictive performance and ability to capture complex relationships among variables. Additionally, recently developed automatic workflows have provided a standardized approach to implementing statistical learning methods across various applications. However, these tools highlight one of the main drawbacks of statistical learning: the lack of interpretability of its results. In the past few years, a substantial amount of research has focused on methods for interpreting black-box models. Interpretable statistical learning methods are relevant for gaining a deeper understanding of the model. In problems where spatial information is relevant, combining interpretable methods with spatial data can lead to a better understanding of the problem and improved interpretation of the results. This paper focuses on the individual conditional expectation plot (ICE-plot), a model-agnostic method for interpreting statistical learning models, and combines it with spatial information. An extension of the ICE-plot is proposed in which spatial information is used as a restriction to define Spatial ICE (SpICE) curves. SpICE curves are estimated using real data in the context of an economic problem concerning property valuation in Montevideo, Uruguay. Understanding the key factors that influence property valuation is essential for decision-making, and spatial data play a relevant role in this regard.
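The plain ICE curves that SpICE extends are simple to compute for any fitted model: for each observation, sweep one feature over a grid while holding the others fixed and record the prediction. The sketch below shows this generic construction on a toy model with an interaction (so the curves are not all parallel); the paper's spatial restriction is not reproduced here.

```python
import numpy as np

# Individual conditional expectation (ICE) curves for one feature of any
# black-box predictor. Averaging the curves gives the partial dependence plot.

def ice_curves(predict, X, feature, grid):
    """Return an (n_obs, n_grid) array of ICE curves for one feature."""
    curves = np.empty((X.shape[0], len(grid)))
    for j, v in enumerate(grid):
        Xv = X.copy()
        Xv[:, feature] = v               # pin the feature to the grid value
        curves[:, j] = predict(Xv)
    return curves

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(100, 2))
model = lambda X: X[:, 0] * (1 + X[:, 1])    # stand-in for a fitted black-box model
grid = np.linspace(-1, 1, 21)
curves = ice_curves(model, X, feature=0, grid=grid)
pdp = curves.mean(axis=0)                    # partial dependence of feature 0
```

Because of the interaction term, each observation's curve has a different slope, which is exactly the heterogeneity ICE plots are designed to reveal and a PDP alone would hide.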

This study compares the performance of (1) fine-tuned models and (2) extremely large language models on the task of check-worthy claim detection. For the purpose of the comparison, we composed a multilingual and multi-topical dataset comprising texts of various sources and styles. Building on this, we performed a benchmark analysis to determine the most general multilingual and multi-topical claim detector. We chose three state-of-the-art models for the check-worthy claim detection task and fine-tuned them. Furthermore, we selected three state-of-the-art extremely large language models without any fine-tuning. We modified the models to adapt them for multilingual settings, and through extensive experimentation and evaluation we assessed the performance of all the models in terms of accuracy, recall, and F1-score in in-domain and cross-domain scenarios. Our results demonstrate that, despite the technological progress in the area of natural language processing, the models fine-tuned for the task of check-worthy claim detection still outperform the zero-shot approaches in cross-domain settings.

The consistency of the maximum likelihood estimator for mixtures of elliptically-symmetric distributions for estimating its population version is shown, where the underlying distribution $P$ is nonparametric and does not necessarily belong to the class of mixtures on which the estimator is based. In a situation where $P$ is a mixture of well enough separated but nonparametric distributions it is shown that the components of the population version of the estimator correspond to the well separated components of $P$. This provides some theoretical justification for the use of such estimators for cluster analysis in case that $P$ has well separated subpopulations even if these subpopulations differ from what the mixture model assumes.

The development of cubical type theory inspired the idea of "extension types", which has been found to have applications in other type theories unrelated to homotopy type theory or cubical type theory. This article describes these applications, including applications to records, metaprogramming, controlling unfolding, and some more exotic ones.
