
Chaos exhibits complex dynamics arising from nonlinearity and sensitivity to initial states. These characteristics suggest a depth of expressivity that underscores its potential for advanced computational applications. However, strategies to effectively exploit chaotic dynamics for information processing have largely remained elusive. In this study, we reveal that the essence of chaos can be found in various state-of-the-art deep neural networks. Drawing inspiration from this revelation, we propose a novel method that directly leverages chaotic dynamics for deep learning architectures. Our approach is systematically evaluated across distinct chaotic systems. In all instances, our framework yields results superior to those of conventional deep neural networks in terms of accuracy, convergence speed, and efficiency. Furthermore, we find that transient chaos formation plays an active role in our scheme. Collectively, this study offers a new path for the integration of chaos, which has long been overlooked in information processing, and provides insights into the prospective fusion of chaotic dynamics with machine learning and neuromorphic computation.
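
The abstract does not specify how the presence of chaos is detected or exploited. As a minimal, hypothetical illustration of the sensitivity-to-initial-states property it builds on, the sketch below estimates the largest Lyapunov exponent of the logistic map by tracking the divergence of two nearby trajectories (Benettin-style renormalization); all names and parameters are illustrative, not the paper's method.

```python
import numpy as np

def largest_lyapunov(f, x0, n_steps=10_000, delta0=1e-9):
    """Finite-difference estimate of the largest Lyapunov exponent of a 1-D map."""
    x, y = x0, x0 + delta0
    total = 0.0
    for _ in range(n_steps):
        x, y = f(x), f(y)
        delta = abs(y - x)
        total += np.log(delta / delta0)
        y = x + delta0 * np.sign(y - x)  # renormalize the perturbation
    return total / n_steps

logistic = lambda x: 4.0 * x * (1.0 - x)   # fully chaotic regime (r = 4)
print(largest_lyapunov(logistic, 0.3))     # ~ ln 2 = 0.693; positive = chaotic
```

A positive exponent certifies exponential divergence of nearby states, the defining property the abstract refers to.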

Related content

Neural Networks is the archival journal of the world's three oldest neural modeling societies: the International Neural Network Society (INNS), the European Neural Network Society (ENNS), and the Japanese Neural Network Society (JNNS). Neural Networks provides a forum for developing and nurturing an international community of scholars and practitioners interested in all aspects of neural networks and related approaches to computational intelligence. Neural Networks welcomes submissions of high-quality papers that contribute to the full range of neural network research, from behavioral and brain modeling and learning algorithms, through mathematical and computational analysis, to systems engineering and technological applications that make substantial use of neural network concepts and techniques. This unique and broad scope promotes the exchange of ideas between biological and technological research and helps foster the development of an interdisciplinary community interested in biologically inspired computational intelligence. Accordingly, the fields of expertise represented on the editorial board of Neural Networks include psychology, neurobiology, computer science, engineering, mathematics, and physics. The journal publishes articles, letters, and reviews, as well as letters to the editor, editorials, current events, software surveys, and patent information. Articles appear in one of five sections: cognitive science, neuroscience, learning systems, mathematical and computational analysis, and engineering and applications. Official website:

Machine learning is typically framed from the perspective of i.i.d. and, more importantly, isolated data. In part, federated learning lifts this assumption, as it sets out to solve the real-world challenge of collaboratively learning a shared model from data distributed across clients. However, motivated primarily by privacy and computational constraints, the fact that data may change, distributions drift, or even tasks advance individually on clients is seldom taken into account. The field of continual learning addresses this separate challenge, and first steps have recently been taken to leverage synergies in distributed supervised settings, in which several clients learn to solve changing classification tasks over time without forgetting previously seen ones. Motivated by these prior works, we posit that such federated continual learning should be grounded in unsupervised learning of representations that are shared across clients, in the loose spirit of how humans can indirectly leverage others' experience without exposure to a specific task. For this purpose, we demonstrate that masked autoencoders for distribution estimation are particularly amenable to this setup. Specifically, their masking strategy can be seamlessly integrated with task attention mechanisms to enable selective knowledge transfer between clients. We empirically corroborate the latter statement through several continual federated scenarios on both image and binary datasets.
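
The masked autoencoders for distribution estimation referred to here are in the spirit of MADE (Germain et al., 2015). A minimal sketch of how such autoregressive masks are constructed for a single hidden layer follows; the integration with task attention and federated training is not specified by the abstract and is not shown, and all parameter names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def made_masks(d_in, d_hidden):
    """Binary masks enforcing the autoregressive property of a MADE-style
    autoencoder with one hidden layer (a sketch, not the paper's model)."""
    m_in = np.arange(1, d_in + 1)                    # input degrees 1..D
    m_hid = rng.integers(1, d_in, size=d_hidden)     # hidden degrees in [1, D-1]
    mask_hidden = (m_hid[:, None] >= m_in[None, :])  # hidden k sees inputs <= m_k
    mask_output = (m_in[:, None] > m_hid[None, :])   # output d sees hidden with m_k < d
    return mask_hidden.astype(float), mask_output.astype(float)

mh, mo = made_masks(d_in=4, d_hidden=8)
# Multiplying the weight matrices elementwise by these masks makes output d
# depend only on inputs 1..d-1, yielding a valid autoregressive factorization.
```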

A permutation graph is the intersection graph of a set of segments between two parallel lines. Equivalently, such a graph is defined by a permutation $\pi$ on $n$ elements, such that $u$ and $v$ are adjacent if and only if $u<v$ but $\pi(u)>\pi(v)$. We consider the problem of computing the distances in such a graph in the setting of informative labeling schemes. The goal of such a scheme is to assign a short bitstring $\ell(u)$ to every vertex $u$, such that the distance between $u$ and $v$ can be computed using only $\ell(u)$ and $\ell(v)$, and no further knowledge about the whole graph (other than that it is a permutation graph). This elegantly captures the intuition that we would like our data structure to be distributed, and often leads to interesting combinatorial challenges while trying to obtain lower and upper bounds that match up to the lower-order terms. For distance labeling of permutation graphs on $n$ vertices, Katz, Katz, and Peleg [STACS 2000] showed how to construct labels consisting of $\mathcal{O}(\log^{2} n)$ bits. Later, Bazzaro and Gavoille [Discret. Math. 309(11)] obtained asymptotically optimal bounds by showing how to construct labels consisting of $9\log{n}+\mathcal{O}(1)$ bits and proving that $3\log{n}-\mathcal{O}(\log{\log{n}})$ bits are necessary. This, however, leaves a quite large gap between the known lower and upper bounds. We close this gap by showing how to construct labels consisting of $3\log{n}+\mathcal{O}(\log\log n)$ bits.
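
The definition above already yields a trivial adjacency labeling with roughly $2\log n$ bits per vertex: store the pair $(u, \pi(u))$. The sketch below (hypothetical code, not the paper's distance scheme, which is far more involved) only illustrates this label format and the adjacency rule.

```python
def label(u, pi):
    """Adjacency label for vertex u of the permutation graph of pi:
    the pair (position, pi-value), about 2 log n bits."""
    return (u, pi[u])

def adjacent(lu, lv):
    """u and v are adjacent iff u < v but pi(u) > pi(v) (or symmetrically)."""
    (u, pu), (v, pv) = lu, lv
    return (u < v and pu > pv) or (v < u and pv > pu)

pi = {1: 3, 2: 1, 3: 2}                       # a permutation on 3 elements
print(adjacent(label(1, pi), label(2, pi)))   # True: 1 < 2 but pi(1)=3 > pi(2)=1
```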

Camera-based 3D occupancy prediction has recently garnered increasing attention in outdoor driving scenes. However, indoor scenes remain relatively unexplored. The core differences in indoor scenes lie in the complexity of scene scale and the variance in object size. In this paper, we propose a novel method, named ISO, for predicting indoor scene occupancy using monocular images. ISO harnesses the advantages of a pretrained depth model to achieve accurate depth predictions. Furthermore, we introduce the Dual Feature Line of Sight Projection (D-FLoSP) module within ISO, which enhances the learning of 3D voxel features. To foster further research in this domain, we introduce Occ-ScanNet, a large-scale occupancy benchmark for indoor scenes. With a dataset size 40 times larger than the NYUv2 dataset, it facilitates future scalable research in indoor scene analysis. Experimental results on both NYUv2 and Occ-ScanNet demonstrate that our method achieves state-of-the-art performance. The dataset and code are made publicly available at //github.com/hongxiaoy/ISO.git.
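
The abstract does not detail D-FLoSP. As background, here is a minimal sketch of the underlying line-of-sight projection idea (sampling 2D image features at the pixels that 3D voxel centers project to), using nearest-neighbor sampling for brevity; the tensor shapes and the intrinsics matrix K are illustrative assumptions, not the paper's module.

```python
import numpy as np

def flosp(feat_2d, voxel_centers, K):
    """Line-of-sight projection sketch: sample a 2-D feature map at the pixel
    each 3-D voxel center projects to.
    feat_2d: (H, W, C) image features; voxel_centers: (N, 3) camera-frame
    points; K: (3, 3) camera intrinsics. Returns (N, C) voxel features."""
    H, W, C = feat_2d.shape
    uvw = voxel_centers @ K.T                     # project to homogeneous pixels
    uv = uvw[:, :2] / np.clip(uvw[:, 2:3], 1e-6, None)
    u = np.clip(np.round(uv[:, 0]).astype(int), 0, W - 1)
    v = np.clip(np.round(uv[:, 1]).astype(int), 0, H - 1)
    vox_feat = feat_2d[v, u]                      # gather along each line of sight
    vox_feat[voxel_centers[:, 2] <= 0] = 0.0      # ignore voxels behind the camera
    return vox_feat
```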

Smart contract transactions associated with security attacks often exhibit distinct behavioral patterns compared with historical benign transactions before the attack events. While many runtime monitoring and guarding mechanisms have been proposed to validate invariants and stop anomalous transactions on the fly, the empirical effectiveness of the invariants used remains largely unexplored. In this paper, we studied 23 prevalent invariants in 8 categories, which are either deployed in high-profile protocols or endorsed by leading auditing firms and security experts. Using these well-established invariants as templates, we developed a tool, Trace2Inv, which dynamically generates new invariants customized for a given contract based on its historical transaction data. We evaluated Trace2Inv on 42 smart contracts that fell victim to 27 distinct exploits on the Ethereum blockchain. Our findings reveal that the most effective invariant guard alone can successfully block 18 of the 27 identified exploits with minimal gas overhead. Our analysis also shows that most of the invariants remain effective even when experienced attackers attempt to bypass them. Additionally, we studied the possibility of combining multiple invariant guards, which blocks up to 23 of the 27 benchmark exploits and achieves false positive rates as low as 0.32%. Trace2Inv outperforms current state-of-the-art works on smart contract invariant mining and transaction attack detection in terms of both practicality and accuracy. Though Trace2Inv is not primarily designed for transaction attack detection, it surprisingly uncovered two previously unreported exploit transactions, earlier than any reported exploit transactions against the same victim contracts.
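
The 23 invariant templates are not enumerated in the abstract. As an illustration of the general pattern of an invariant guard mined from historical transactions, the sketch below bounds per-transaction token outflow by a margin over the historical maximum; the class, threshold, and margin are hypothetical, not Trace2Inv's actual templates.

```python
from dataclasses import dataclass

@dataclass
class OutflowGuard:
    """Hypothetical invariant guard: block transactions whose token outflow
    exceeds a slack factor over the largest historically observed outflow."""
    historical_max: float
    margin: float = 1.5  # illustrative slack factor

    def check(self, outflow: float) -> bool:
        # False signals that the guarded call should revert.
        return outflow <= self.margin * self.historical_max

guard = OutflowGuard(historical_max=10_000.0)
assert guard.check(9_500.0)          # benign-looking transaction passes
assert not guard.check(1_000_000.0)  # drain-like transaction is blocked
```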

To assess the quality of a probabilistic prediction for stochastic dynamical systems (SDSs), scoring rules assign a numerical score based on the predictive distribution and the measured state. In this paper, we propose an $\epsilon$-logarithm score that generalizes the celebrated logarithm score by considering a neighborhood with radius $\epsilon$. We characterize the probabilistic predictability of an SDS by optimizing the expected score over the space of probability measures. We show how the probabilistic predictability is quantitatively determined by the neighborhood radius, the differential entropies of the process noises, and the system dimension. Given any predictor, we provide approximations for the expected score with an error of scale $\mathcal{O}(\epsilon)$. In addition to the expected score, we also analyze the asymptotic behaviors of the score on individual trajectories. Specifically, we prove that the score on a trajectory can converge to the expected score when the process noises are independent and identically distributed. Moreover, the convergence speed with respect to the trajectory length $T$ is of scale $\mathcal{O}(T^{-\frac{1}{2}})$ in probability. Finally, numerical examples are given to illustrate the results.
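
The abstract does not state the exact form of the $\epsilon$-logarithm score. One natural reading, consistent with "a neighborhood with radius $\epsilon$", is sketched below; the integral form is an assumption, not a quotation from the paper.

```latex
% Plausible form (an assumption): score the log-mass that the predictive
% density p assigns to the epsilon-ball around the measured state x.
\[
  S_\epsilon(p, x) = \log \int_{B_\epsilon(x)} p(y)\,\mathrm{d}y,
  \qquad
  B_\epsilon(x) = \{\, y : \|y - x\| \le \epsilon \,\}.
\]
% As epsilon -> 0, the integral behaves like p(x) times the ball volume, so
% the score recovers the classical logarithm score log p(x) up to an additive
% constant depending only on epsilon and the dimension.
```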

Modern data science applications often involve complex relational data with dynamic structures. An abrupt change in such dynamic relational data is typically observed in systems that undergo regime changes due to interventions. For such a case, we consider a factorized fusion shrinkage model in which all decomposed factors are dynamically shrunk towards group-wise fusion structures, where the shrinkage is obtained by applying global-local shrinkage priors to the successive differences of the row vectors of the factorized matrices. The proposed priors enjoy many favorable properties for comparing and clustering the estimated dynamic latent factors. Comparing estimated latent factors involves both adjacent and long-term comparisons, with the time range of comparison considered as a variable. Under certain conditions, we demonstrate that the posterior distribution attains the minimax optimal rate up to logarithmic factors. In terms of computation, we present a structured mean-field variational inference framework that balances optimal posterior inference with computational scalability, exploiting dependence both among components and across time. The framework can accommodate a wide variety of models, including dynamic matrix factorization, latent space models for networks, and low-rank tensors. The effectiveness of our methodology is demonstrated through extensive simulations and real-world data analysis.
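
The abstract names global-local shrinkage priors on successive differences without fixing the family. A generic sketch, with the horseshoe as one common (assumed, not necessarily the paper's) choice of global-local prior:

```latex
% Global-local shrinkage on successive differences of a factor row u_t;
% C+(0,1) denotes the standard half-Cauchy distribution.
\[
  u_t - u_{t-1} \mid \lambda_t, \tau \;\sim\; \mathcal{N}\big(0,\, \tau^2 \lambda_t^2 I\big),
  \qquad
  \lambda_t \sim \mathrm{C}^{+}(0,1), \quad \tau \sim \mathrm{C}^{+}(0,1).
\]
% Small local scales lambda_t fuse u_t to u_{t-1}, producing the group-wise
% fusion structure; occasionally large lambda_t admit the abrupt regime
% changes described above.
```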

Data in Knowledge Graphs often represents part of the current state of the real world. Thus, to stay up-to-date, the graph data needs to be updated frequently. To utilize information from Knowledge Graphs, many state-of-the-art machine learning approaches use embedding techniques. These techniques typically compute an embedding, i.e., vector representations of the nodes, as input for the main machine learning algorithm. If a graph update occurs later on -- specifically when nodes are added or removed -- the training has to be done all over again. This is undesirable both because of the time it takes and because downstream models trained on these embeddings have to be retrained if the embeddings change significantly. In this paper, we investigate embedding updates that do not require full retraining and evaluate them in combination with various embedding models on real dynamic Knowledge Graphs covering multiple use cases. We study approaches that place newly appearing nodes optimally according to local information, but find that this does not work well. However, we find that if we continue the training of the old embedding, interleaved with epochs during which we only optimize for the added and removed parts, we obtain good results in terms of typical metrics used in link prediction. This performance is obtained much faster than with a complete retraining and hence makes it possible to maintain embeddings for dynamic Knowledge Graphs.
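
A schematic of the interleaved schedule described above follows. The Model class and train_epoch are toy stand-ins (hypothetical, not a real KG-embedding library API); only the structure of the loop reflects the text.

```python
import random

class Model:
    """Toy embedding table standing in for a KG-embedding model."""
    def __init__(self, nodes, dim=8):
        self.dim = dim
        self.emb = {n: [random.gauss(0, 0.1) for _ in range(dim)] for n in nodes}
    def init_new_nodes(self, nodes):
        for n in nodes:
            self.emb.setdefault(n, [random.gauss(0, 0.1) for _ in range(self.dim)])

def train_epoch(model, triples):
    pass  # placeholder for one SGD pass of the embedding loss over the triples

def update_embeddings(model, old_nodes, new_nodes, triples, n_rounds=5):
    changed = set(old_nodes) ^ set(new_nodes)           # added or removed nodes
    model.init_new_nodes(changed & set(new_nodes))      # warm-start the additions
    touched = [t for t in triples if changed & set(t)]  # triples near the change
    for _ in range(n_rounds):
        train_epoch(model, triples)   # continue training the old embedding ...
        train_epoch(model, touched)   # ... interleaved with change-only epochs
    return model
```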

Graph neural networks (GNNs) are widely used to learn powerful representations of graph-structured data. Recent work demonstrates that transferring knowledge from self-supervised tasks to downstream tasks can further improve graph representations. However, there is an inherent gap between self-supervised tasks and downstream tasks in terms of optimization objective and training data. Conventional pre-training methods may not be effective enough for knowledge transfer since they do not make any adaptation for downstream tasks. To solve this problem, we propose a new transfer learning paradigm on GNNs which effectively leverages self-supervised tasks as auxiliary tasks to help the target task. Our method adaptively selects and combines different auxiliary tasks with the target task in the fine-tuning stage. We design an adaptive auxiliary loss weighting model to learn the weights of auxiliary tasks by quantifying the consistency between auxiliary tasks and the target task. In addition, we learn the weighting model through meta-learning. Our method can be applied to various transfer learning approaches; it performs well not only in multi-task learning but also in pre-training and fine-tuning. Comprehensive experiments on multiple downstream tasks demonstrate that the proposed method effectively combines auxiliary tasks with the target task and significantly improves performance compared to state-of-the-art methods.
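
A simplified sketch of adaptively weighted auxiliary losses follows; the paper meta-learns a weighting model from task consistency, which is not reproduced here, and the plain softmax-over-logits weighting below is an assumption for illustration.

```python
import torch

aux_logits = torch.zeros(3, requires_grad=True)  # one logit per auxiliary task

def combined_loss(target_loss, aux_losses):
    """Target loss plus adaptively weighted auxiliary losses."""
    weights = torch.softmax(aux_logits, dim=0)   # adaptive task weights
    return target_loss + (weights * torch.stack(aux_losses)).sum()

# During fine-tuning, both the GNN parameters and aux_logits receive
# gradients, so the mix of auxiliary tasks adapts to the target task.
```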

Recent contrastive representation learning methods rely on estimating mutual information (MI) between multiple views of an underlying context. For example, we can derive multiple views of a given image by applying data augmentation, or we can split a sequence into views comprising the past and future of some step in the sequence. Contrastive lower bounds on MI are easy to optimize but have a strong underestimation bias when estimating large amounts of MI. We propose decomposing the full MI estimation problem into a sum of smaller estimation problems by splitting one of the views into progressively more informed subviews and by applying the chain rule for MI between the decomposed views. This expression contains a sum of unconditional and conditional MI terms, each measuring modest chunks of the total MI, which facilitates approximation via contrastive bounds. To maximize the sum, we formulate a contrastive lower bound on the conditional MI which can be approximated efficiently. We refer to our general approach as Decomposed Estimation of Mutual Information (DEMI). We show that DEMI can capture a larger amount of MI than standard non-decomposed contrastive bounds in a synthetic setting, and that it learns better representations in a vision domain and for dialogue generation.
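
The decomposition DEMI builds on is the standard chain rule for mutual information; for a view split into two subviews it reads:

```latex
% Chain rule for mutual information with view Y split into (Y_1, Y_2):
\[
  I(X; Y_1, Y_2) \;=\; I(X; Y_1) \;+\; I(X; Y_2 \mid Y_1).
\]
% Each term measures a modest chunk of the total MI and can be lower-bounded
% by a separate contrastive (InfoNCE-style) estimator, mitigating the
% underestimation bias the abstract describes.
```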

Embedding entities and relations into a continuous multi-dimensional vector space has become the dominant method for knowledge graph embedding in representation learning. However, most existing models fail to represent hierarchical knowledge, such as the similarities and dissimilarities of entities within one domain. We propose to learn domain representations on top of existing knowledge graph embedding models, such that entities that have similar attributes are organized into the same domain. Such hierarchical knowledge of domains can provide further evidence for link prediction. Experimental results show that domain embeddings yield a significant improvement over the most recent state-of-the-art baseline knowledge graph embedding models.
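
The abstract does not give the scoring function. A hypothetical sketch of how a domain term could augment a TransE-style score (names, shapes, and the weight alpha are all illustrative assumptions, not the paper's formulation):

```python
import numpy as np

def domain_augmented_score(h, r, t, dom_h, dom_t, alpha=0.5):
    """Hypothetical domain-augmented link score: higher is more plausible."""
    transe = -np.linalg.norm(h + r - t)      # base translation-style term
    domain = -np.linalg.norm(dom_h - dom_t)  # bonus when head/tail domains agree
    return transe + alpha * domain           # alpha: illustrative weight
```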
