
We consider the equivalence between the two main categorical models for the type-theoretical operation of context comprehension, namely P. Dybjer's categories with families and B. Jacobs' comprehension categories, and generalise it to the non-discrete case. The classical equivalence can be summarised in the slogan: "terms as sections". By recognising "terms as coalgebras", we show how to use the structure-semantics adjunction to prove that a 2-category of comprehension categories is biequivalent to a 2-category of (non-discrete) categories with families. The biequivalence restricts to the classical one proved by Hofmann in the discrete case. It also provides a framework in which to compare the different morphisms of these structures that have appeared in the literature, which vary in the degree to which they preserve the relevant structure. We consider in particular morphisms defined by Clairambault-Dybjer, Jacobs, Larrea, and Uemura.
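For concreteness, the slogan "terms as sections" can be spelled out in standard category-with-families notation (our notation, not necessarily the paper's): given a context $\Gamma$ and a type $A \in \mathrm{Ty}(\Gamma)$, comprehension supplies an extended context $\Gamma.A$ with a projection $\mathrm{p}_A : \Gamma.A \to \Gamma$, and terms of type $A$ correspond bijectively to the sections of that projection:
\[
\mathrm{Tm}(\Gamma, A) \;\cong\; \{\, s : \Gamma \to \Gamma.A \;\mid\; \mathrm{p}_A \circ s = \mathrm{id}_\Gamma \,\}.
\]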

Related Content

We consider the problem of approximating an unknown function in a nonlinear model class from point evaluations. When obtaining these point evaluations is costly, minimising the required sample size becomes crucial. Recently, there has been increasing focus on employing adaptive sampling strategies to achieve this. These strategies are based on linear spaces related to the nonlinear model class, for which the optimal sampling measures are known. However, the resulting optimal sampling measures depend on an orthonormal basis of the linear space, which is rarely known. Consequently, sampling from these measures is challenging in practice. This manuscript presents a sampling strategy that iteratively refines an estimate of the optimal sampling measure by updating it based on previously drawn samples. This strategy can be performed offline and does not require evaluations of the sought function. We establish convergence and illustrate the practical performance through numerical experiments. Comparing the presented approach with standard Monte Carlo sampling demonstrates a significant reduction in the number of samples required to achieve a good estimate of an orthonormal basis.
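A minimal numerical sketch of the iterative idea, under assumptions of our own (a uniform reference measure on $[-1,1]$, a monomial feature map, and rejection sampling; the paper's algorithm and guarantees may differ): for an $N$-dimensional linear space with orthonormal basis $\{\varphi_i\}$ with respect to the reference measure $\rho$, the optimal sampling density relative to $\rho$ is $\frac{1}{N}\sum_i \varphi_i^2$, and the estimate of this basis is refined from the samples drawn so far.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 6                                        # dimension of the linear space
feats = lambda x: np.vander(x, N, increasing=True)   # monomials on [-1, 1]

def density(C, x):
    # density of the current optimal-measure estimate w.r.t. the uniform
    # reference rho: (1/N) * ||C^T feats(x)||^2
    return np.sum((feats(x) @ C) ** 2, axis=1) / N

C = np.eye(N)                                # crude initial orthonormaliser
for _ in range(15):
    # draw points from the current density estimate by rejection sampling
    grid = np.linspace(-1, 1, 2001)
    M = 1.1 * density(C, grid).max()         # envelope constant
    xs = []
    while len(xs) < 400:
        cand = rng.uniform(-1, 1, 1000)
        keep = rng.uniform(0, M, 1000) < density(C, cand)
        xs.extend(cand[keep])
    x = np.asarray(xs[:400])
    w = 1.0 / density(C, x)                  # importance weights d(rho)/d(mu)
    B = feats(x)
    G = (B * w[:, None]).T @ B / len(x)      # empirical Gram matrix w.r.t. rho
    C = np.linalg.inv(np.linalg.cholesky(G)).T   # refined orthonormaliser

# density(C, .) now approximates the optimal density (1/N) * sum_i phi_i(x)^2
```

Note that the loop never evaluates the sought function: only the feature map is queried, which is what makes the refinement an offline procedure.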

Predicting quantum operator matrices such as the Hamiltonian, overlap, and density matrices in the density functional theory (DFT) framework is crucial for understanding material properties. Current methods often focus on individual operators and struggle with efficiency and scalability for large systems. Here we introduce a novel deep learning model, SLEM (strictly localized equivariant message-passing), for predicting multiple quantum operators, which achieves state-of-the-art accuracy while dramatically improving computational efficiency. SLEM's key innovation is its strict locality-based design, which constructs local, equivariant representations for quantum tensors while preserving physical symmetries. This captures complex many-body dependencies without expanding the effective receptive field, leading to superior data efficiency and transferability. Using an innovative SO(2) convolution technique, SLEM reduces the computational complexity of high-order tensor products and is therefore capable of handling systems that require $f$ and $g$ orbitals in their basis sets. We demonstrate SLEM's capabilities across diverse 2D and 3D materials, achieving high accuracy even with limited training data. SLEM's design facilitates efficient parallelization, potentially extending DFT simulations to device-scale systems and opening new possibilities for large-scale quantum simulations and high-throughput materials discovery.
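The strict-locality idea can be illustrated with a deliberately stripped-down toy (scalar features only; the actual SLEM uses equivariant tensor features and SO(2) convolutions, none of which appear here): every layer aggregates over a fixed neighbour list within a cutoff $r_c$, so stacking layers deepens the many-body dependence without growing the receptive field.

```python
import numpy as np

rng = np.random.default_rng(1)
pos = rng.uniform(0, 10, size=(20, 3))        # toy atomic positions
r_c = 3.0
d = np.linalg.norm(pos[:, None] - pos[None, :], axis=-1)
nbr = (d < r_c) & (d > 0)                     # neighbour mask, fixed once

F = 8
h = rng.normal(size=(20, F))                  # initial per-atom features
W_self, W_msg = rng.normal(size=(F, F)), rng.normal(size=(F, F))

def layer(h):
    # message from j to i depends only on (h_j, d_ij) inside the cutoff;
    # exp(-d) stands in for a learned radial envelope
    msg = np.where(nbr[..., None],
                   np.tanh(h[None, :, :] @ W_msg) * np.exp(-d)[..., None],
                   0.0).sum(axis=1)
    return np.tanh(h @ W_self + msg)

for _ in range(4):                            # depth raises correlation order,
    h = layer(h)                              # not the receptive field
```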

We propose a topological mapping and localization system able to operate on real human colonoscopies, despite significant shape and illumination changes. The map is a graph where each node codes a colon location by a set of real images, while edges represent traversability between nodes. For close-in-time images, where scene changes are minor, place recognition can be successfully managed with recent transformer-based local feature matching algorithms. However, under long-term changes -- such as different colonoscopies of the same patient -- feature-based matching fails. To address this, we train a deep global descriptor on real colonoscopies that achieves high recall under significant scene changes. The addition of a Bayesian filter boosts the accuracy of long-term place recognition, enabling relocalization in a previously built map. Our experiments show that ColonMapper is able to autonomously build a map and localize against it in two important use cases: localization within the same colonoscopy or within different colonoscopies of the same patient. Code: https://github.com/jmorlana/ColonMapper.
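A minimal discrete Bayes filter over map nodes shows the mechanism (a sketch under our own assumptions, not the paper's exact formulation; the similarity-to-likelihood map and the temperature are placeholders): the belief over "current node" is propagated through the graph's traversability matrix, then reweighted by the global-descriptor similarity of the query image.

```python
import numpy as np

def bayes_filter_step(belief, T, sims, temp=0.1):
    """belief: (K,) prior over K map nodes; T: (K, K) row-stochastic
    traversability matrix; sims: (K,) similarities between the query
    image's global descriptor and each node's image descriptors."""
    pred = T.T @ belief                     # propagation along graph edges
    like = np.exp(sims / temp)              # similarity -> likelihood (assumed)
    post = like * pred
    return post / post.sum()

K = 5
T = np.zeros((K, K))
for i in range(K):                          # chain-shaped colon map
    for j in (i - 1, i, i + 1):
        if 0 <= j < K:
            T[i, j] = 1.0
T /= T.sum(axis=1, keepdims=True)

belief = np.full(K, 1.0 / K)
sims = np.array([0.2, 0.3, 0.9, 0.4, 0.1])  # query matches node 2 best
belief = bayes_filter_step(belief, T, sims)
print(belief.argmax())                      # -> 2
```

The filter's benefit is temporal: a single noisy descriptor match can be outvoted by the propagated prior, which is what stabilizes long-term place recognition.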

Anomaly detection is a branch of data analysis and machine learning which aims at identifying observations that exhibit abnormal behaviour. Be it measurement errors, disease development, severe weather, production quality defects, failed equipment, financial fraud or crisis events, their on-time identification, isolation and explanation constitute an important task in almost any branch of science and industry. By providing a robust ordering, data depth, a statistical function that measures the belongingness of any point of the space to a data set, becomes a particularly useful tool for the detection of anomalies. Already known for its theoretical properties, data depth has undergone substantial computational development in the last decade, and particularly in recent years, which has made it applicable to contemporary-sized problems of data analysis and machine learning. In this article, data depth is studied as an efficient anomaly detection tool that assigns abnormality labels to observations with lower depth values, in a multivariate setting. Practical questions concerning the necessity and reasonability of invariances, the shape of the depth function, its robustness and computational complexity, and the choice of the threshold are discussed. Illustrations include use cases that underline the advantageous behaviour of data depth in various settings.
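A short sketch of the depth-based labelling scheme using Mahalanobis depth, $D(x) = (1 + (x-\mu)^\top \Sigma^{-1} (x-\mu))^{-1}$, one of the simplest depth notions (others, such as halfspace or projection depth, trade off robustness, invariance and computational cost; the threshold below is an assumption for illustration):

```python
import numpy as np

def mahalanobis_depth(X, data):
    # depth D(x) = 1 / (1 + (x - mu)^T Sigma^{-1} (x - mu))
    mu = data.mean(axis=0)
    Sinv = np.linalg.inv(np.cov(data, rowvar=False))
    diff = X - mu
    md2 = np.einsum('ij,jk,ik->i', diff, Sinv, diff)
    return 1.0 / (1.0 + md2)

rng = np.random.default_rng(0)
data = rng.normal(size=(500, 2))                # reference sample
queries = np.array([[0.1, -0.2], [6.0, 6.0]])   # inlier vs. obvious outlier
depths = mahalanobis_depth(queries, data)
labels = depths < 0.05                          # abnormality threshold (assumed)
print(depths, labels)                           # only the second point is flagged
```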

We consider a non-linear Bayesian data assimilation model for the periodic two-dimensional Navier-Stokes equations with initial condition modelled by a Gaussian process prior. We show that if the system is updated with sufficiently many discrete noisy measurements of the velocity field, then the posterior distribution eventually concentrates near the ground truth solution of the time evolution equation, and in particular that the initial condition is recovered consistently by the posterior mean vector field. We further show that the convergence rate cannot in general be faster than inverse logarithmic in the sample size, but we describe specific conditions on the initial condition under which faster rates are possible. In the proofs we provide an explicit quantitative estimate for the backward uniqueness of solutions of the two-dimensional Navier-Stokes equations.
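In the notation we adopt here (the paper's precise setup may differ in detail), the data are discrete noisy point measurements of the velocity field $u_\theta$ solving the Navier-Stokes equations with initial condition $\theta$ drawn from a Gaussian process prior $\Pi$:
\[
D_i = u_\theta(t_i, x_i) + \varepsilon_i, \qquad \varepsilon_i \overset{iid}{\sim} N(0, \sigma^2 I_2), \qquad i = 1, \dots, N,
\]
and consistent recovery means that the posterior mean $E[\theta \mid D_1, \dots, D_N]$ converges to the ground truth initial condition as $N \to \infty$.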

Discrete choice models with non-monotonic response functions are important in many areas of application, especially political science and marketing. This paper describes a novel unfolding model for binary data that allows for heavy-tailed shocks to the underlying utilities. One of our key contributions is a Markov chain Monte Carlo algorithm that requires little or no parameter tuning, fully explores the support of the posterior distribution, and can be used to fit various extensions of our core model that involve (Bayesian) hypothesis testing on the latent construct. Our empirical evaluations of the model and the associated algorithm suggest that they provide a better complexity-adjusted fit to voting data from the United States House of Representatives.
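To make the setup concrete, one common way to write a binary unfolding model (an illustrative parameterisation we assume, not necessarily the paper's) places respondent $i$ at an ideal point $x_i$ and item $j$ at a position $\alpha_j$; the response probability is non-monotonic in $x_i$ because the latent utility peaks when $x_i$ is close to $\alpha_j$, and heavy tails enter through the shock distribution:
\[
y_{ij} = \mathbf{1}\{ u_{ij} > 0 \}, \qquad u_{ij} = \beta_j - (x_i - \alpha_j)^2 + \varepsilon_{ij}, \qquad \varepsilon_{ij} \sim t_\nu,
\]
where replacing the Student-$t_\nu$ shock by a Gaussian would recover a light-tailed unfolding model.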

We propose an extremely versatile approach to address a large family of matrix nearness problems, possibly with additional linear constraints. Our method is based on splitting a matrix nearness problem into two nested optimization problems, of which the inner one can be solved either exactly or cheaply, while the outer one can be recast as an unconstrained optimization task over a smooth real Riemannian manifold. We observe that this paradigm applies to many matrix nearness problems of practical interest appearing in the literature, thus revealing that they are equivalent in this sense to a Riemannian optimization problem. We also show that the objective function to be minimized on the Riemannian manifold can be discontinuous, thus requiring regularization techniques, and we give conditions for this to happen. Finally, we demonstrate the practical applicability of our method by implementing it for a number of matrix nearness problems that are relevant for applications and are currently considered very demanding in practice. Extensive numerical experiments demonstrate that our method often greatly outperforms its predecessors, including algorithms specifically designed for those particular problems.
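As a minimal illustration of the nested paradigm (our own toy instance, not one of the paper's harder applications), consider the distance of $A$ to the set of singular matrices. For a fixed unit vector $v$, the inner problem $\min \|\Delta\|_F$ subject to $(A+\Delta)v = 0$ has the closed-form solution $\Delta = -(Av)v^\top$ with norm $\|Av\|_2$, so the outer problem minimises $f(v) = \|Av\|_2$ over the unit sphere, a smooth Riemannian manifold. In the sketch below, a general-purpose quasi-Newton step with renormalisation stands in for a proper Riemannian optimiser.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
A = rng.normal(size=(6, 6))

# outer objective: f(v) = ||A v||_2 on the unit sphere (renormalised here)
f = lambda v: np.linalg.norm(A @ (v / np.linalg.norm(v)))
res = minimize(f, rng.normal(size=6), method='BFGS')
v = res.x / np.linalg.norm(res.x)

Delta = -np.outer(A @ v, v)        # closed-form inner solution at the optimum

print(np.linalg.norm(Delta, 'fro'),
      np.linalg.svd(A)[1].min())   # both approx sigma_min(A)
print(np.linalg.svd(A + Delta)[1].min())   # approx 0: A + Delta is singular
```

Here the recovered distance matches the known answer $\sigma_{\min}(A)$, which is what makes this instance a useful sanity check for the general splitting.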

Reduced basis methods for approximating the solutions of parameter-dependent partial differential equations (PDEs) are based on learning the structure of the set of solutions - seen as a manifold ${\mathcal S}$ in some functional space - when the parameters vary. This involves investigating the manifold and, in particular, understanding whether it is close to a low-dimensional affine space. This leads to the notion of the Kolmogorov $N$-width, which evaluates to what extent the best choice of a vectorial space of dimension $N$ approximates ${\mathcal S}$ well enough. If a good approximation of elements in ${\mathcal S}$ can be achieved with some well-chosen vectorial space of dimension $N$ -- provided $N$ is not too large -- then a ``reduced'' basis can be proposed that leads to a Galerkin-type method for the approximation of any element in ${\mathcal S}$. In many cases, however, the Kolmogorov $N$-width is not so small, even if the parameter set lies in a space of small dimension, yielding a manifold of small dimension. In terms of complexity reduction, this gap between the small dimension of the manifold and the large Kolmogorov $N$-width can be explained by the fact that the Kolmogorov $N$-width is linear while, in contrast, the dependency on the parameter is, most often, non-linear. There have been many contributions aiming at reconciling these two statements, based on either deterministic or AI approaches. Here we further investigate a new paradigm that, in some sense, merges these two aspects: the nonlinear compressive reduced basis approximation. We focus on a simple multiparameter problem and illustrate rigorously that the complexity associated with the approximation of the solution to the parameter-dependent PDE is directly related to the number of parameters rather than the Kolmogorov $N$-width.
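For reference, the Kolmogorov $N$-width of ${\mathcal S}$ in a normed space $V$ is the standard quantity
\[
d_N({\mathcal S})_V \;=\; \inf_{\substack{V_N \subset V \\ \dim V_N = N}} \; \sup_{u \in {\mathcal S}} \; \inf_{v \in V_N} \| u - v \|_V,
\]
the outer infimum being taken over all linear subspaces of dimension $N$; a slow decay of $d_N$ in $N$ is exactly what limits linear reduced bases and motivates the nonlinear compressive approach.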

This work explores multi-modal inference in a high-dimensional simplified model, analytically quantifying the performance gain of multi-modal inference over analyzing each modality in isolation. We present the Bayes-optimal performance and weak recovery thresholds in a model where the objective is to recover the latent structures from two noisy data matrices with correlated spikes. The paper derives the approximate message passing (AMP) algorithm for this model and characterizes its performance in the high-dimensional limit via the associated state evolution. The analysis holds for a broad range of priors and noise channels, which can differ across modalities. The linearization of AMP is compared numerically to the widely used partial least squares (PLS) and canonical correlation analysis (CCA) methods, both of which are observed to suffer from a sub-optimal recovery threshold.
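One natural way to write such a two-modality observation model (a form we assume for illustration; the paper's channels may be more general) is a pair of rank-one spiked matrices whose spikes are drawn from a correlated joint prior:
\[
Y^{(1)} = \sqrt{\tfrac{\lambda_1}{n}}\, u^{(1)} \big(v^{(1)}\big)^{\top} + W^{(1)}, \qquad
Y^{(2)} = \sqrt{\tfrac{\lambda_2}{n}}\, u^{(2)} \big(v^{(2)}\big)^{\top} + W^{(2)},
\]
with i.i.d. noise matrices $W^{(a)}$ and spike pairs $(v^{(1)}_i, v^{(2)}_i)$ drawn i.i.d. from a joint prior whose correlation couples the two modalities; multi-modal AMP exploits this coupling, whereas running AMP separately on $Y^{(1)}$ and $Y^{(2)}$ does not.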

The manipulation of deformable linear objects (DLOs) via model-based control requires an accurate and computationally efficient dynamics model. Yet, data-driven DLO dynamics models require large training data sets while their predictions often do not generalize, whereas physics-based models rely on good approximations of physical phenomena and often lack accuracy. To address these challenges, we propose a physics-informed neural ODE capable of predicting agile movements with significantly less data and hyper-parameter tuning. In particular, we model DLOs as serial chains of rigid bodies interconnected by passive elastic joints, in which interaction forces are predicted by neural networks. The proposed model accurately predicts the motion of a robotically actuated aluminium rod and an elastic foam cylinder after being trained on only thirty seconds of data. The project code and data are available at: \url{https://tinyurl.com/neuralprba}
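A toy sketch of the physics-informed neural-ODE structure, under assumptions of our own (identity mass matrix, scalar joint coordinates, random placeholder network weights; the actual model uses full rigid-body dynamics): the right-hand side combines passive elastic joint torques with a small network predicting the residual interaction torques, and is integrated with a fixed-step RK4 scheme.

```python
import numpy as np

n = 4                                  # number of joints in the serial chain
k, c = 20.0, 0.5                       # assumed joint stiffness and damping

rng = np.random.default_rng(0)
W1, b1 = 0.1 * rng.normal(size=(16, 2 * n)), np.zeros(16)
W2, b2 = 0.1 * rng.normal(size=(n, 16)), np.zeros(n)

def nn_torque(q, qd):                  # tiny MLP standing in for the learned term
    h = np.tanh(W1 @ np.concatenate([q, qd]) + b1)
    return W2 @ h + b2

def f(state):                          # ODE right-hand side:
    q, qd = state[:n], state[n:]       # qdd = passive elastic + learned residual
    qdd = -k * q - c * qd + nn_torque(q, qd)
    return np.concatenate([qd, qdd])

def rk4_step(state, dt):
    k1 = f(state); k2 = f(state + dt / 2 * k1)
    k3 = f(state + dt / 2 * k2); k4 = f(state + dt * k3)
    return state + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

state = np.concatenate([0.3 * np.ones(n), np.zeros(n)])
for _ in range(1000):                  # simulate 1 s at dt = 1 ms
    state = rk4_step(state, 1e-3)
```

In training, the network weights would be fitted by backpropagating the trajectory mismatch through the integrator; here they are random placeholders so the sketch runs standalone.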
