
By Sharkovskii's theorem, a continuous one-dimensional map with a period-three orbit has periodic orbits of every period. This raises the following question: can we obtain periodic orbits of any period solely by learning three data points? We consider learning period three with random neural networks and report the universal property associated with it. We first show that the trained networks have a thermodynamic limit that depends on the choice of target data and network settings. Our analysis reveals that almost all learned periods are unstable and that each network has its own characteristic attractors (which can even be untrained ones). We propose the concept of characteristic bifurcation to express the embeddable attractors intrinsic to a network, in which the target data points and the scale of the network weights act as bifurcation parameters. In conclusion, learning period three generates various attractors through characteristic bifurcations driven by stability changes in the numerous unstable periodic orbits latent in the system.
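To make the setup concrete, here is a minimal sketch, not the paper's exact architecture: an extreme-learning-machine-style network whose random hidden weights stay fixed at a scale `s`, with only the linear readout fitted to a three-point cycle, which is then iterated to see which attractor emerges. The cycle (0.2, 0.6, 0.9), the width, and the scale are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Target period-3 data: three points mapped cyclically onto each other.
xs = np.array([0.2, 0.6, 0.9])       # hypothetical example cycle
ys = np.roll(xs, -1)                 # x0 -> x1 -> x2 -> x0

# Random hidden layer with fixed weights; only the linear readout is fit.
H = 200                              # hidden width
s = 5.0                              # weight scale, one of the "bifurcation parameters"
W, b = s * rng.standard_normal(H), s * rng.standard_normal(H)

def features(x):
    return np.tanh(np.outer(np.atleast_1d(x), W) + b)

# Least-squares readout interpolating the three data points (ridge-regularized).
Phi = features(xs)
beta = np.linalg.solve(Phi.T @ Phi + 1e-8 * np.eye(H), Phi.T @ ys)

def f(x):
    """The learned one-dimensional map."""
    return features(x) @ beta

# Iterate from many initial conditions and inspect where orbits settle;
# the learned period-3 orbit itself may be unstable, so the observed
# attractor can differ from the trained cycle.
x = rng.uniform(0.0, 1.0, size=50)
for _ in range(2000):
    x = f(x)
print("post-transient samples of the attractor:", np.unique(x.round(4)))
```

Varying `s` (or the target points) and re-running plays the role of sweeping a bifurcation parameter in the abstract's sense.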

Related content

Data curation is an essential component of large-scale pretraining. In this work, we demonstrate that jointly selecting batches of data is more effective for learning than selecting examples independently. Multimodal contrastive objectives expose the dependencies between data and thus naturally yield criteria for measuring the joint learnability of a batch. We derive a simple and tractable algorithm for selecting such batches, which significantly accelerates training beyond individually prioritized data points. As performance improves by selecting from larger super-batches, we also leverage recent advances in model approximation to reduce the associated computational overhead. As a result, our approach--multimodal contrastive learning with joint example selection (JEST)--surpasses state-of-the-art models with up to 13$\times$ fewer iterations and 10$\times$ less computation. Essential to the performance of JEST is the ability to steer the data selection process towards the distribution of smaller, well-curated datasets via pretrained reference models, exposing the level of data curation as a new dimension for neural scaling laws.
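The sketch below illustrates the flavor of joint selection under stated assumptions: the learnability of a sub-batch is taken as its contrastive loss under the learner minus its loss under a pretrained reference model, and a batch is grown greedily from a super-batch. The function names (`jest_select`, `contrastive_loss`) and the greedy loop are stand-ins; the paper's actual selection procedure differs.

```python
import numpy as np

rng = np.random.default_rng(0)

def contrastive_loss(sim, idx):
    """Mean softmax cross-entropy of matching pairs within the sub-batch."""
    s = sim[np.ix_(idx, idx)]
    logp = s - np.log(np.exp(s).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(logp))

def jest_select(sim_learner, sim_reference, batch_size):
    """Greedily grow a batch maximizing joint learnability
    (learner loss minus reference-model loss on the sub-batch)."""
    n = sim_learner.shape[0]
    chosen, remaining = [], list(range(n))
    for _ in range(batch_size):
        def gain(j):
            idx = chosen + [j]
            return (contrastive_loss(sim_learner, idx)
                    - contrastive_loss(sim_reference, idx))
        best = max(remaining, key=gain)
        chosen.append(best)
        remaining.remove(best)
    return chosen

# Toy super-batch: random image-text similarity matrices (row i pairs with column i).
n = 64
sim_learner = rng.standard_normal((n, n))
sim_reference = rng.standard_normal((n, n))
print(jest_select(sim_learner, sim_reference, batch_size=8))
```

The key point the sketch preserves is that a batch's score depends on the whole similarity sub-matrix, so examples are valued jointly rather than independently.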

In recent years, the number of applications for highly complex AI systems has risen significantly. Algorithmic decision-making systems (ADMs) are one such application, in which an AI system replaces the decision-making process of a human expert. As one approach to ensuring the fairness and transparency of such systems, explainable AI (XAI) has become increasingly important. One way to achieve explainability is through surrogate models, i.e., training a new, simpler machine learning model on the input-output relationship of a black-box model. The simpler model could, for example, be a decision tree, which is thought to be intuitively understandable by humans. However, there is little insight into how well the surrogate model approximates the black box. Suppose the black-box ADM discriminates against a subgroup: our main assumption is that a good surrogate model approach should bring such discriminating behavior to the attention of humans, and prior to our research we assumed that a surrogate decision tree would identify such a pattern on one of its first levels. However, in this article we show that even if the discriminated subgroup - while otherwise identical in all categories - does not receive a single positive decision from the black-box ADM, the corresponding question of group membership can be pushed down to a level as deep as the operator of the system wants. We then generalize this finding to pinpoint the exact level of the tree at which the discriminating question is asked, and show that in a more realistic scenario, where discrimination occurs only in some fraction of the disadvantaged group, hiding such discrimination is even more feasible. Our approach generalizes easily to other surrogate models.
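The basic measurement is easy to reproduce on synthetic data. A minimal sketch, purely illustrative and not the paper's construction: a black box that never grants a positive decision to a protected subgroup, a surrogate decision tree fitted to its behavior, and a check of the depth at which the tree asks the group-membership question. In this naive setup the question surfaces near the root, matching the prior expectation; the paper's point is that an operator can construct the black box so that it is pushed arbitrarily deep.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 5000
X = rng.standard_normal((n, 6))          # six "legitimate" features
g = rng.integers(0, 2, n)                # protected group membership

# Black box: a linear score on legitimate features, vetoed for group g == 1.
score = X @ np.array([1.0, 0.8, 0.6, 0.4, 0.2, 0.1])
y = (score > 0) & (g == 0)               # group 1 never gets a positive decision

# Surrogate: a shallow tree trained on (features + group, black-box output).
Xg = np.column_stack([X, g])
tree = DecisionTreeClassifier(max_depth=6).fit(Xg, y)

# Report every depth at which a node splits on the group feature (index 6).
t = tree.tree_
def depths(node=0, d=0):
    if t.children_left[node] == -1:      # leaf node
        return []
    rest = (depths(t.children_left[node], d + 1)
            + depths(t.children_right[node], d + 1))
    return ([d] if t.feature[node] == 6 else []) + rest
print("group feature used at depths:", sorted(depths()))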

We raise some questions about graph polynomials, highlighting concepts and phenomena that may merit consideration in the development of a general theory. Our questions are mainly of three types: When does a graph polynomial have reduction relations (simple linear recursions based on local operations), perhaps in a wider class of combinatorial objects? How many levels of reduction relations does a graph polynomial need in order to be expressed in terms of trivial base cases? How are properties of a graph polynomial, such as equivalence and factorisation, reflected in the structure of a graph? We illustrate our discussion with a variety of graph polynomials and other invariants. This leads us to reflect on the historical origins of graph polynomials. We also introduce some new polynomials based on partial colourings of graphs and establish some of their basic properties.
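As a concrete instance of a reduction relation, here is a short sketch computing the chromatic polynomial of a simple graph by deletion-contraction, with the edgeless graph as the trivial base case. The example graph and evaluation point are illustrative.

```python
def chromatic(vertices, edges, k):
    """Proper k-colouring count via the reduction P(G) = P(G - e) - P(G / e)."""
    if not edges:                        # trivial base case: edgeless graph
        return k ** len(vertices)
    (u, v), rest = edges[0], edges[1:]
    deleted = chromatic(vertices, rest, k)           # delete the edge e = (u, v)
    merge = lambda w: u if w == v else w             # contract e: identify v with u
    contracted_edges = list({tuple(sorted((merge(a), merge(b))))
                             for a, b in rest if merge(a) != merge(b)})
    contracted = chromatic([w for w in vertices if w != v], contracted_edges, k)
    return deleted - contracted

# The 4-cycle has (k - 1)^4 + (k - 1) proper k-colourings: 18 for k = 3.
print(chromatic([0, 1, 2, 3], [(0, 1), (1, 2), (2, 3), (3, 0)], 3))
```

Each recursive step is a local operation (remove or contract one edge), exactly the kind of linear recursion the first question asks about.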

Deep Neural Networks (DNNs) have been successfully applied to a wide range of problems. However, two main limitations are commonly pointed out: they take a long time to design, and they rely heavily on labelled data, which can be costly and hard to obtain. To address the first problem, neuroevolution has proved to be a plausible option for automating the design of DNNs. As for the second, self-supervised learning has been used to leverage unlabelled data to learn representations. Our goal is to study how neuroevolution can help self-supervised learning bridge the performance gap to supervised learning. In this work, we propose a framework that evolves deep neural networks using self-supervised learning. Our results on the CIFAR-10 dataset show that it is possible to evolve adequate neural networks while reducing the reliance on labelled data. Moreover, an analysis of the structure of the evolved networks suggests that the amount of labelled data fed to them has less effect on the structure of networks trained via self-supervised learning than on those trained via supervised learning.

The need for real-time monitoring and alerting systems for space weather hazards has grown significantly in the last two decades. One of the most important challenges for space mission operations and planning is the prediction of solar proton events (SPEs). In this context, artificial intelligence and machine learning techniques have opened a new frontier, providing a new paradigm for statistical forecasting algorithms. The great majority of these models aim to predict the occurrence of an SPE, i.e., they follow a classification approach. In this work, we present a simple and efficient machine learning regression algorithm that forecasts the energetic proton flux up to 1 hour ahead by exploiting features derived from the electron flux alone. This approach could help improve monitoring systems of the radiation risk in both deep space and near-Earth environments. The model is highly relevant for mission operations and planning, especially when flare characteristics and source location are not available in real time, as is the case at the distance of Mars.
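A minimal sketch of the regression setup under assumed details (the cadence, lag structure, and model family below are not taken from the paper): lagged electron-flux samples as features, the proton flux one hour ahead as target, fitted with an off-the-shelf gradient-boosted regressor on toy data.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Toy time series at an assumed 5-minute cadence (12 samples ~ 1 hour):
# the proton flux loosely follows the electron flux with a ~1 h delay.
n = 5000
electron = np.exp(rng.standard_normal(n).cumsum() * 0.01)
proton = np.roll(electron, 12) * rng.lognormal(0.0, 0.1, n)

# Features: the last 12 electron-flux samples; target: proton flux 1 h ahead.
lags, horizon = 12, 12
X = np.column_stack([np.log(np.roll(electron, i)) for i in range(lags)])
y = np.log(np.roll(proton, -horizon))
X, y = X[lags:-horizon], y[lags:-horizon]     # trim the wrap-around ends

split = int(0.8 * len(y))                     # chronological train/test split
model = GradientBoostingRegressor().fit(X[:split], y[:split])
print("held-out R^2:", model.score(X[split:], y[split:]))
```

Note that the split is chronological rather than random, which matters for any forecasting evaluation.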

In this manuscript, we introduce a tensor-based approach to Non-Negative Tensor Factorization (NTF). The method performs tensor dimension reduction via the Einstein product, and certain constraints are imposed to maintain the regularity and sparsity of the data. Additionally, we present an optimization algorithm in the form of a tensor multiplicative-updates method relying on the Einstein product. To reduce the number of iterations needed for convergence, we employ the Reduced Rank Extrapolation (RRE) and the Topological Extrapolation Transformation Algorithm (TEA). The efficacy of the proposed model is demonstrated through tests on Hyperspectral Images (HI) for denoising, as well as for Hyperspectral Image linear unmixing. Numerical experiments on both synthetic and real data substantiate the effectiveness of the proposed model.
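A simplified stand-in for the multiplicative-updates idea, with contractions written as `np.einsum` calls in the spirit of the Einstein product. The factorization shape and Lee-Seung-style update rule below are assumptions for illustration, not the paper's exact algorithm, and the RRE/TEA acceleration is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
eps = 1e-12

# Data tensor X (I x J x K) approximated by an Einstein-product-like
# contraction X[i,j,k] ~ sum_r A[i,j,r] * B[r,k], with A, B >= 0.
I, J, K, R = 10, 8, 12, 4
X = rng.random((I, J, K))
A, B = rng.random((I, J, R)), rng.random((R, K))

for _ in range(500):
    AB = np.einsum('ijr,rk->ijk', A, B)
    # Multiplicative updates: the ratio of the gradient's positive and
    # negative parts rescales each entry, so factors stay non-negative.
    A *= np.einsum('ijk,rk->ijr', X, B) / (np.einsum('ijk,rk->ijr', AB, B) + eps)
    AB = np.einsum('ijr,rk->ijk', A, B)
    B *= np.einsum('ijr,ijk->rk', A, X) / (np.einsum('ijr,ijk->rk', A, AB) + eps)

print("relative error:", np.linalg.norm(AB - X) / np.linalg.norm(X))
```

Extrapolation methods such as RRE would be applied to the sequence of iterates produced by this loop to reach the same accuracy in fewer updates.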

As a result of 33 intercontinental Zoom calls, we characterise the big Ramsey degrees of the generic partial order. This is an infinitary extension of the well-known fact that finite partial orders endowed with linear extensions form a Ramsey class (a result announced by Ne\v{s}et\v{r}il and R\"odl in 1984, with the first published proof by Paoli, Trotter and Walker in 1985). Towards this, we refine earlier upper bounds obtained by Hubi\v{c}ka based on a new connection between big Ramsey degrees and the Carlson-Simpson theorem, and we introduce a new technique for obtaining lower bounds via an iterated application of the upper-bound theorem.

As is well known, weak K4 and the difference logic DL do not enjoy the Craig interpolation property. Our concern here is the problem of deciding whether a given implication has an interpolant in these logics. We show that the nonexistence of an interpolant can always be witnessed by a pair of bisimilar models of polynomial size for DL and of triple-exponential size for weak K4, so the interpolant existence problems for these logics are decidable in coNP and coN3ExpTime, respectively. We also establish coNExpTime-hardness of this problem for weak K4, which is higher than the PSpace-completeness of its decision problem.

Using probabilistic methods, we obtain crossing-free grid drawings of graphs with low volume and small aspect ratio. We show that every $D$-degenerate graph on $n$ vertices can be drawn in $[m]^3$, where $m^3 = O(D^2 n\log n)$. In particular, every graph of bounded maximum degree can be drawn in a grid of volume $O(n \log n)$.
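A small illustration of the probabilistic idea, not the paper's proof: place the vertices at independent uniform grid points of $[m]^3$ and count edge pairs whose four endpoints are coplanar, a necessary condition for two disjoint straight segments to cross. The random graph model and constants below are assumptions for the demo; in the actual argument such bad events are shown to be rare enough to handle.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)

n, D = 200, 2
m = int(round((D**2 * n * np.log(n)) ** (1 / 3)))   # grid side, m^3 = O(D^2 n log n)

# A random D-degenerate graph: each vertex picks up to D earlier neighbours.
edges = [(v, int(u)) for v in range(1, n)
         for u in rng.choice(v, size=min(D, v), replace=False)]
pos = rng.integers(0, m, size=(n, 3))               # independent uniform grid points

def det3(a, b, c):
    """Exact integer 3x3 determinant (scalar triple product)."""
    return (a[0] * (b[1] * c[2] - b[2] * c[1])
            - a[1] * (b[0] * c[2] - b[2] * c[0])
            + a[2] * (b[0] * c[1] - b[1] * c[0]))

def coplanar(e, f):
    p, q, r, s = pos[e[0]], pos[e[1]], pos[f[0]], pos[f[1]]
    return det3(q - p, s - r, r - p) == 0           # necessary for a crossing

bad = sum(coplanar(e, f) for e, f in combinations(edges, 2)
          if len({*e, *f}) == 4)                    # disjoint edge pairs only
print(f"grid side {m}, {len(edges)} edges, coplanar (candidate) pairs: {bad}")
```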

In this paper, we investigate the module-checking problem for pushdown multi-agent systems (PMS) against ATL and ATL* specifications. We establish that for ATL, module checking of PMS is 2EXPTIME-complete, the same complexity as pushdown module checking for CTL. On the other hand, we show that ATL* module checking of PMS turns out to be 4EXPTIME-complete, hence exponentially harder than both CTL* pushdown module checking and ATL* model checking of PMS. Our result for ATL* provides a rare example of a natural decision problem that is elementary yet has complexity higher than triply exponential time.
