
Conservation laws are key theoretical and practical tools for understanding, characterizing, and modeling nonlinear dynamical systems. However, for many complex systems, the corresponding conserved quantities are difficult to identify, making it hard to analyze their dynamics and build stable predictive models. Current approaches for discovering conservation laws often depend on detailed dynamical information or rely on black-box parametric deep learning methods. We instead reformulate this task as a manifold learning problem and propose a non-parametric approach for discovering conserved quantities. We test this new approach on a variety of physical systems and demonstrate that our method is able both to identify the number of conserved quantities and to extract their values. Using tools from optimal transport theory and manifold learning, our proposed method provides a direct geometric approach to identifying conservation laws that is robust and interpretable, without requiring an explicit model of the system or accurate time information.
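As a rough, hedged illustration of the geometric idea (not the authors' implementation), one can compare sampled trajectories with a distributional metric and embed the resulting distance matrix; the number of conserved quantities then shows up as the intrinsic dimension of the embedding. The choices below, energy distance as a cheap stand-in for an optimal-transport metric and classical MDS as the manifold-learning step, are illustrative assumptions.

```python
import numpy as np

def pairwise_energy_distance(trajs):
    """Energy distance between trajectories viewed as point clouds
    (a cheap stand-in for a Wasserstein/optimal-transport metric)."""
    def ed(x, y):
        d_xy = np.linalg.norm(x[:, None, :] - y[None, :, :], axis=-1).mean()
        d_xx = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1).mean()
        d_yy = np.linalg.norm(y[:, None, :] - y[None, :, :], axis=-1).mean()
        return 2 * d_xy - d_xx - d_yy
    n = len(trajs)
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            D[i, j] = D[j, i] = ed(trajs[i], trajs[j])
    return D

def classical_mds(D, k=5):
    """Embed a distance matrix; the eigenvalue decay hints at the number
    of conserved quantities (intrinsic dimension across level sets)."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    w, V = np.linalg.eigh(-0.5 * J @ (D ** 2) @ J)
    idx = np.argsort(w)[::-1][:k]
    return V[:, idx] * np.sqrt(np.abs(w[idx])), w[idx]

# Toy system: harmonic-oscillator orbits, one conserved quantity (energy E).
rng = np.random.default_rng(0)
trajs = []
for E in rng.uniform(0.5, 2.0, size=20):
    t = rng.uniform(0, 2 * np.pi, size=200)
    trajs.append(np.c_[np.sqrt(2 * E) * np.cos(t), np.sqrt(2 * E) * np.sin(t)])
coords, evals = classical_mds(pairwise_energy_distance(trajs))
print(evals)  # one dominant eigenvalue -> one conserved quantity
```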

Related Content

Manifold learning (Manifold Learning) has been a research hotspot in information science ever since it was first proposed in the renowned journal Science in 2000, and it is of significant interest both in theory and in applications. Assuming that the data are uniformly sampled from a low-dimensional manifold embedded in a high-dimensional Euclidean space, manifold learning aims to recover the low-dimensional manifold structure from the high-dimensional samples, that is, to find the low-dimensional manifold in the high-dimensional space and the corresponding embedding map, so as to achieve dimensionality reduction or data visualization. In other words, it seeks the essence behind observed phenomena and the intrinsic laws that generate the data.
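A minimal, self-contained illustration of this idea (assuming scikit-learn is available): recover the two-dimensional structure of a "Swiss roll" sampled in three dimensions, using Isomap as one representative manifold-learning method.

```python
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import Isomap

X, t = make_swiss_roll(n_samples=1500, random_state=0)  # X: (1500, 3) samples in 3-D
# Isomap estimates geodesic distances on the manifold and embeds them in 2-D.
embedding = Isomap(n_neighbors=10, n_components=2).fit_transform(X)
print(embedding.shape)  # (1500, 2): the recovered low-dimensional coordinates
```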

Graph diffusion equations are intimately related to graph neural networks (GNNs) and have recently attracted attention as a principled framework for analyzing GNN dynamics, formalizing their expressive power, and justifying architectural choices. One key open question in graph learning concerns the generalization capabilities of GNNs. A major limitation of current approaches is their reliance on the assumption that the graph topologies in the training and test sets come from the same distribution. In this paper, we take steps towards understanding the generalization of GNNs by exploring how graph diffusion equations extrapolate and generalize in the presence of varying graph topologies. We first show deficiencies in the generalization capability of existing models built upon local diffusion on graphs, stemming from their exponential sensitivity to topology variation. Our subsequent analysis reveals the promise of non-local diffusion, which advocates feature propagation over fully-connected latent graphs, under the assumption of a specific data-generating condition. Building on these findings, we propose a novel graph encoder backbone, the Advective Diffusion Transformer (ADiT), inspired by advective graph diffusion equations that have a closed-form solution, backed by theoretical guarantees of the desired generalization under topological distribution shifts. The new model, functioning as a versatile graph Transformer, demonstrates superior performance across a wide range of graph learning tasks.
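For intuition only: the simplest linear graph diffusion dX/dt = -LX has the closed-form solution X(t) = exp(-tL) X(0); the advective case adds a transport term. The sketch below illustrates that closed form on a toy graph and is not the ADiT architecture or the paper's API.

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)  # adjacency of a 3-node path graph
L = np.diag(A.sum(1)) - A               # combinatorial graph Laplacian
X0 = np.array([[1.0], [0.0], [0.0]])    # initial node features

X_t = expm(-0.5 * L) @ X0               # closed-form diffused features at t = 0.5
print(X_t.ravel())                      # feature mass spreads from node 0
```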

Neural collapse provides an elegant mathematical characterization of the learned last-layer representations (a.k.a. features) and classifier weights in deep classification models. Such results not only provide insights but also motivate new techniques for improving practical deep models. However, most existing empirical and theoretical studies of neural collapse focus on the case where the number of classes is small relative to the dimension of the feature space. This paper extends neural collapse to the case where the number of classes is much larger than the dimension of the feature space, which occurs broadly in language models, retrieval systems, and face recognition applications. We show that the features and classifier exhibit a generalized neural collapse phenomenon in which the minimum one-vs-rest margin is maximized. We provide an empirical study to verify the occurrence of generalized neural collapse in practical deep neural networks. Moreover, we provide a theoretical study showing that generalized neural collapse provably occurs under the unconstrained feature model with a spherical constraint, under certain technical conditions on the feature dimension and number of classes.
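A hedged sketch of the quantity in question (standard definition, not code from the paper): the minimum one-vs-rest margin over all samples, min_i ( w_{y_i}^T h_i - max_{k != y_i} w_k^T h_i ), computed here for random features and weights.

```python
import numpy as np

def min_one_vs_rest_margin(H, W, y):
    """H: (n, d) features, W: (K, d) classifier weights, y: (n,) labels."""
    scores = H @ W.T                           # (n, K) logits
    true = scores[np.arange(len(y)), y]        # score of the correct class
    scores[np.arange(len(y)), y] = -np.inf     # mask out the correct class
    return np.min(true - scores.max(axis=1))   # worst one-vs-rest margin

rng = np.random.default_rng(0)
H, W, y = rng.normal(size=(8, 4)), rng.normal(size=(6, 4)), rng.integers(0, 6, 8)
print(min_one_vs_rest_margin(H, W, y))
```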

Training neural networks sequentially in time to approximate the solution fields of time-dependent partial differential equations can be beneficial for preserving causality and other physical properties; however, sequential-in-time training is numerically challenging because training errors quickly accumulate and amplify over time. This work introduces Neural Galerkin schemes that update randomized sparse subsets of network parameters at each time step. The randomization avoids overfitting locally in time and so helps prevent errors from accumulating quickly over the sequential-in-time training, an approach motivated by dropout, which addresses the similar problem of overfitting due to neuron co-adaptation. The sparsity of the update reduces the computational cost of training without losing expressiveness, because many of the network parameters are redundant locally at each time step. In numerical experiments with a wide range of evolution equations, the proposed scheme with randomized sparse updates is up to two orders of magnitude more accurate at a fixed computational budget, and up to two orders of magnitude faster at a fixed accuracy, than schemes with dense updates.
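A minimal sketch of the randomized-sparse-update idea (illustrative only, with a toy scalar residual in place of a PDE residual; not the paper's scheme): at each time step, only a random subset of parameters is updated against the time-local residual, and the rest are frozen.

```python
import numpy as np

rng = np.random.default_rng(0)

def sparse_step(theta, residual_grad, sparsity=0.1, lr=0.1):
    mask = rng.random(theta.shape) < sparsity      # random sparse parameter subset
    return theta - lr * mask * residual_grad(theta)  # update only chosen entries

theta = np.zeros(1000)
for t in range(100):                               # sequential-in-time training loop
    theta = sparse_step(theta, lambda th: th - 1.0)  # toy residual drives theta -> 1
print(theta.mean())                                # moves toward 1.0
```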

This paper develops an updatable inverse probability weighting (UIPW) estimator for generalized linear models with responses missing at random in streaming data sets. A two-step online updating algorithm is provided for the proposed method. In the first step, we construct an updatable estimator for the parameter of the propensity function, and hence an updatable estimator of the propensity function itself; in the second step, we propose the UIPW estimator for the parameter of interest, which weights each observation by the inverse of the updated propensity function value. The UIPW estimation is universally applicable because it relaxes the constraint on the number of data batches. We show that the proposed estimator is consistent and asymptotically normal, with the same asymptotic variance as the oracle estimator, and hence possesses the oracle property. The finite-sample performance of the proposed estimator is illustrated by simulations and a real-data analysis. All numerical studies confirm that the UIPW estimator performs as well as the batch learner.
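A toy illustration of the two-step updating idea (assumptions throughout: a logistic propensity model fit by online SGD, and IPW estimation of a simple mean rather than a full GLM; this is not the paper's estimator).

```python
import numpy as np

rng = np.random.default_rng(0)
w = np.zeros(2)          # online propensity model: logit P(R=1|x) = w . [1, x]
num = den = 0.0          # running IPW sums carried across batches

for _ in range(200):     # stream of data batches
    x = rng.normal(size=50)
    y = 2.0 + x + rng.normal(size=50)
    p_true = 1 / (1 + np.exp(-(0.5 + x)))  # missingness depends on x only (MAR)
    r = rng.random(50) < p_true            # response indicator

    # Step 1: update the propensity estimate on this batch (one SGD pass).
    X = np.c_[np.ones(50), x]
    pi = 1 / (1 + np.exp(-X @ w))
    w += 0.1 * X.T @ (r - pi) / 50

    # Step 2: fold the batch into the updatable IPW estimate of E[Y].
    pi = np.clip(1 / (1 + np.exp(-X @ w)), 0.05, 1.0)  # clip extreme weights
    num += np.sum(r * y / pi)
    den += 50

print(num / den)         # approaches E[Y] = 2.0 despite missing responses
```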

We present an experimental comparison of methods for computing the numerical radius of an $n\times n$ complex matrix, based on two well-known characterizations: the first a nonconvex optimization problem in one real variable, and the second a convex optimization problem in $n^{2}+1$ real variables. We compare the methods with respect to both accuracy and computation time, using publicly available software.
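The one-variable characterization referred to here is the well-known identity $r(A) = \max_{\theta} \lambda_{\max}\bigl((e^{i\theta}A + e^{-i\theta}A^{*})/2\bigr)$. A crude grid-search sketch of it (illustrative; the compared methods use proper optimization, not a grid):

```python
import numpy as np

def numerical_radius_grid(A, m=2000):
    """Approximate r(A) = max over theta of lambda_max(Re(e^{i theta} A))."""
    thetas = np.linspace(0, 2 * np.pi, m, endpoint=False)
    return max(np.linalg.eigvalsh((np.exp(1j * t) * A
                                   + np.exp(-1j * t) * A.conj().T) / 2).max()
               for t in thetas)

A = np.array([[0, 1], [0, 0]], dtype=complex)
print(numerical_radius_grid(A))  # ~0.5 for the 2x2 Jordan block
```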

Different notions of the consistency of obligations collapse in standard deontic logic. In justification logics, which feature explicit reasons for obligations, the situation is different. Their strength depends on a constant specification and on the available set of operations for combining different reasons. We present different consistency principles in justification logic and compare their logical strength. We propose a novel semantics for which justification logics with the explicit version of axiom D, jd, are complete for arbitrary constant specifications. We then discuss the philosophical implications with regard to some deontic paradoxes.
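For orientation, a standard derivation of the collapse in SDL, together with the usual explicit form of jd in justification logic; this is a textbook reconstruction, not quoted from the paper, and the precise form of jd used there may differ.

```latex
% In SDL, the two readings of deontic consistency,
\[
  (\mathrm{D})\;\; O A \to \neg O \neg A
  \qquad\text{and}\qquad
  (\mathrm{P})\;\; \neg O \bot ,
\]
% collapse: necessitation gives O\top, so (D) with A = \top yields \neg O\bot;
% conversely, O A \wedge O \neg A gives O\bot by the K axiom, contradicting (P).
% In justification logic, the explicit counterpart is commonly written
\[
  (\mathrm{jd})\;\; t : \bot \to \bot ,
\]
% so its strength depends on which evidence terms t the constant
% specification makes available.
```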

Neural radiance field (NeRF) is an emerging view synthesis method that samples points in three-dimensional (3D) space and estimates their existence and color probabilities. A disadvantage of NeRF is that it requires a long training time, since it samples many 3D points. Moreover, if points are sampled in occluded regions or in regions where an object is unlikely to exist, the rendering quality of NeRF can degrade. These issues can be addressed by estimating the geometry of the 3D scene. This paper proposes a near-surface sampling framework to improve the rendering quality of NeRF. To this end, the proposed method estimates the surface of a 3D object using the depth images of the training set and performs sampling only near the estimated surface. To obtain depth information for a novel view, the paper proposes a 3D point cloud generation method and a simple method for refining the depth projected from the point cloud. Experimental results show that the proposed near-surface sampling NeRF framework can significantly improve rendering quality compared to the original NeRF and a state-of-the-art depth-based NeRF method. In addition, the proposed near-surface sampling framework can significantly accelerate the training of a NeRF model.
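A hedged sketch of the sampling step (generic, not the paper's exact procedure): given an estimated surface depth along a ray, draw the NeRF sample points only inside a narrow band around that depth instead of uniformly along the whole ray.

```python
import numpy as np

def near_surface_samples(ray_o, ray_d, depth, n_samples=32, band=0.05, rng=None):
    """ray_o, ray_d: (3,) ray origin and unit direction; depth: estimated
    surface depth along the ray; band: half-width of the sampling interval."""
    rng = rng or np.random.default_rng()
    t = depth + band * (2 * rng.random(n_samples) - 1)   # uniform in [d-band, d+band]
    t = np.sort(np.clip(t, 0.0, None))                   # keep samples in front of camera
    return ray_o[None, :] + t[:, None] * ray_d[None, :]  # (n_samples, 3) points

pts = near_surface_samples(np.zeros(3), np.array([0.0, 0.0, 1.0]), depth=2.0)
print(pts[:3])  # all samples cluster near z = 2.0, the estimated surface
```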

Information geometry is the study of statistical manifolds, that is, of spaces of probability distributions, from a geometric perspective. Its classical information-theoretic applications relate to statistical concepts such as Fisher information, sufficient statistics, and efficient estimators. Today, information geometry has emerged as an interdisciplinary field with applications in diverse areas such as radar sensing, array signal processing, quantum physics, deep learning, and optimal transport. This article presents an overview of essential information geometry intended to initiate an information theorist who may be unfamiliar with this exciting area of research. We explain the concepts of divergences on statistical manifolds, generalized notions of distances, orthogonality, and geodesics, thereby paving the way for concrete applications and novel theoretical investigations. We also highlight some recent information-geometric developments that are of interest to the broader information theory community.
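A concrete, textbook instance of these objects (not specific to the article): the Fisher information metric of the univariate Gaussian family and its infinitesimal relation to the KL divergence.

```latex
% Fisher information metric of the family N(\mu, \sigma^2):
\[
  g(\mu,\sigma) =
  \begin{pmatrix} 1/\sigma^{2} & 0 \\ 0 & 2/\sigma^{2} \end{pmatrix},
\]
% i.e., up to scaling, the hyperbolic metric on the upper half-plane; the
% KL divergence recovers this metric infinitesimally:
\[
  D_{\mathrm{KL}}\bigl(p_{\theta}\,\|\,p_{\theta+d\theta}\bigr)
  = \tfrac{1}{2}\, d\theta^{\top} g(\theta)\, d\theta + O(\|d\theta\|^{3}),
  \qquad \theta = (\mu,\sigma).
\]
```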

Concurrent estimation and control of robotic systems remains an ongoing challenge: controllers rely on data extracted from states and parameters riddled with uncertainty and noise. The suitability of a framework hinges on task complexity and computational constraints, demanding a balance between computational efficiency and mission-critical accuracy. This study leverages recent advances in neuromorphic computing, particularly spiking neural networks (SNNs), for estimation and control applications. The presented framework employs a recurrent network of leaky integrate-and-fire (LIF) neurons, mimicking a linear quadratic regulator (LQR) through a robust filtering strategy, a modified sliding innovation filter (MSIF). Benefiting from both the robustness of the MSIF and the computational efficiency of SNNs, the framework constructs the SNN weight matrices to match the desired system model without requiring training. Additionally, the network employs a biologically plausible firing rule similar to predictive coding. In the presence of uncertainties, we compare the SNN-LQR-MSIF with its non-spiking counterpart, the LQR-MSIF, and the optimal linear quadratic Gaussian (LQG) strategy. Evaluation on a benchmark linear problem and a satellite rendezvous maneuver based on the Clohessy-Wiltshire (CW) model from space robotics demonstrates that the SNN-LQR-MSIF achieves acceptable performance in terms of computational efficiency, robustness, and accuracy, positioning it as a promising solution for concurrent estimation and control of dynamic systems.
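A generic leaky integrate-and-fire (LIF) update for reference (illustrative only; the paper's recurrent network, weight construction, and MSIF filtering are more involved):

```python
import numpy as np

def lif_step(v, i_in, tau=0.02, v_th=1.0, dt=1e-3):
    """One Euler step of tau * dv/dt = -v + i_in, with spike-and-reset."""
    v = v + dt * (-v + i_in) / tau
    spikes = v >= v_th
    return np.where(spikes, 0.0, v), spikes   # reset membrane potential on spike

v = np.zeros(10)
total = 0
for _ in range(100):                          # constant drive above threshold
    v, s = lif_step(v, i_in=1.5)
    total += s.sum()
print(total, "spikes in 100 steps")           # periodic spiking across the population
```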

The existence of representative datasets is a prerequisite of many successful artificial intelligence and machine learning models. However, the subsequent application of these models often involves scenarios that are inadequately represented in the data used for training. The reasons for this are manifold and range from time and cost constraints to ethical considerations. As a consequence, the reliable use of these models, especially in safety-critical applications, is a huge challenge. Leveraging additional, already existing sources of knowledge is key to overcome the limitations of purely data-driven approaches, and eventually to increase the generalization capability of these models. Furthermore, predictions that conform with knowledge are crucial for making trustworthy and safe decisions even in underrepresented scenarios. This work provides an overview of existing techniques and methods in the literature that combine data-based models with existing knowledge. The identified approaches are structured according to the categories integration, extraction and conformity. Special attention is given to applications in the field of autonomous driving.
