
This chapter explores computational complexity, focusing on hard combinatorial problems and their connections with statistical physics. We begin with the foundations of combinatorial problems and then introduce the class P, consisting of problems solvable in polynomial time by deterministic algorithms, contrasting it with the class NP, for which no efficient solution algorithms are known. We examine algorithmic phase transitions and the thresholds that separate tractable from intractable problems, and we discuss the implications of the P versus NP question, one of the deepest open problems in computer science and mathematics, whose resolution carries a celebrated prize. Drawing parallels between combinatorics and statistical physics, we uncover connections that shed light on the nature of hard problems: statistical physics offers close analogies with the complex landscapes encountered in combinatorial optimization. Throughout the chapter we develop this interplay between computational complexity theory and statistical physics, aiming to deepen the understanding of computation and its limits and to illuminate the mathematical and physical facets of hard problems.
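
As a minimal illustration of the asymmetry behind the P versus NP question, the short Python sketch below (an illustrative example added here, not part of the chapter) verifies a candidate solution to a SAT instance in time linear in the formula size, whereas no polynomial-time algorithm is known for finding such a solution in general.

```python
# A CNF formula as a list of clauses; each literal is (variable, polarity).
formula = [[(0, True), (1, False)],   # (x0 OR NOT x1)
           [(1, True), (2, True)],    # (x1 OR x2)
           [(0, False), (2, False)]]  # (NOT x0 OR NOT x2)

def verify(assignment, clauses):
    """Check a candidate truth assignment in time linear in the formula size
    (the easy 'certificate check' that characterizes NP)."""
    return all(any(assignment[var] == polarity for var, polarity in clause)
               for clause in clauses)

print(verify({0: True, 1: True, 2: False}, formula))   # True: a satisfying assignment
print(verify({0: True, 1: False, 2: False}, formula))  # False: the second clause fails
```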

Related Content

Traditionally, compiler researchers either conduct experiments within an existing production compiler or develop their own prototype compiler; both options come with trade-offs. On one hand, prototyping in a production compiler can be cumbersome, as production compilers are often optimized for compilation speed at the expense of software simplicity and development speed. On the other hand, the transition from a prototype compiler to production requires significant engineering work. To bridge this gap, we introduce the concept of sidekick compiler frameworks, an approach that uses multiple frameworks that interoperate with each other by leveraging textual interchange formats and declarative descriptions of abstractions. Each such compiler framework is specialized for specific use cases, such as performance or prototyping. Abstractions are by design shared across frameworks, simplifying the transition from prototyping to production. We demonstrate this idea with xDSL, a sidekick for MLIR focused on prototyping and teaching. xDSL interoperates with MLIR through a shared textual IR and the exchange of IR definitions through an IR Definition Language (IRDL). We evaluate the benefits of sidekick compiler frameworks on three use cases (teaching, DSL compilation, and rewrite system prototyping), showing how xDSL impacts their development. We also investigate the trade-offs that xDSL offers and demonstrate how we simplify the transition between frameworks using the IRDL dialect. With sidekick compilation, we envision a future in which engineers minimize the cost of development by choosing a framework built for their immediate needs and later transition to production with minimal overhead.
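
To make the idea of declarative, textually exchanged abstractions concrete, here is a hypothetical Python sketch of an IRDL-style dialect description that round-trips through a textual format. It does not use the actual xDSL or IRDL APIs; the classes and the JSON format are invented purely for illustration.

```python
# Hypothetical sketch of a declarative dialect description exchanged as text.
# The real xDSL/IRDL APIs differ; this only illustrates the idea that
# abstractions can be described as data and shared between frameworks.
from dataclasses import dataclass, field
import json

@dataclass
class OpDef:
    name: str                                   # e.g. "arith.addi"
    operands: list[str] = field(default_factory=list)
    results: list[str] = field(default_factory=list)

@dataclass
class DialectDef:
    name: str
    ops: list[OpDef] = field(default_factory=list)

    def to_text(self) -> str:
        """Serialize the dialect definition to a textual interchange format."""
        return json.dumps({"dialect": self.name,
                           "ops": [vars(op) for op in self.ops]}, indent=2)

    @staticmethod
    def from_text(text: str) -> "DialectDef":
        data = json.loads(text)
        return DialectDef(data["dialect"], [OpDef(**op) for op in data["ops"]])

arith = DialectDef("arith", [OpDef("arith.addi", ["i32", "i32"], ["i32"])])
roundtrip = DialectDef.from_text(arith.to_text())   # another framework could load this
assert roundtrip == arith
```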

We construct admissible polynomial meshes on piecewise polynomial or trigonometric curves of the complex plane, by mapping univariate Chebyshev points. Such meshes can be used for polynomial least-squares, for the extraction of Fekete-like and Leja-like interpolation sets, and also for the evaluation of their Lebesgue constants.
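
A minimal numerical sketch of the construction, assuming a single parametrization gamma of the curve on [-1, 1] (a piecewise curve would repeat the mapping on each parameter subinterval); the oversampling factor used here is a placeholder, not the admissibility constant derived in the paper.

```python
# Minimal sketch: build a mesh on a complex curve by mapping univariate
# Chebyshev points through the curve's parametrization.
import numpy as np

def chebyshev_points(n: int) -> np.ndarray:
    """Chebyshev-Lobatto points on [-1, 1]."""
    return np.cos(np.pi * np.arange(n + 1) / n)

def curve_mesh(gamma, deg: int, factor: int = 2) -> np.ndarray:
    """Map factor*deg Chebyshev points through gamma to get mesh points on the curve."""
    return gamma(chebyshev_points(factor * deg))

gamma = lambda t: t + 1j * (t**3 - t)    # a cubic curve in the complex plane
mesh = curve_mesh(gamma, deg=10)
print(mesh.shape)    # (21,): candidate points for least-squares or Fekete/Leja extraction
```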

This study compares the performance of (1) fine-tuned models and (2) extremely large language models on the task of check-worthy claim detection. For the comparison, we composed a multilingual and multi-topical dataset comprising texts of various sources and styles. Building on this, we performed a benchmark analysis to determine the most general multilingual and multi-topical claim detector. We chose three state-of-the-art models for the check-worthy claim detection task and fine-tuned them. Furthermore, we selected three state-of-the-art extremely large language models and used them without any fine-tuning. We adapted the models to multilingual settings and carried out extensive experimentation and evaluation, assessing the performance of all the models in terms of accuracy, recall, and F1-score in in-domain and cross-domain scenarios. Our results demonstrate that despite the technological progress in the area of natural language processing, the models fine-tuned for the task of check-worthy claim detection still outperform the zero-shot approaches in cross-domain settings.
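
The in-domain versus cross-domain evaluation protocol can be sketched schematically as follows; the `predict` callable and the dataset records are placeholders rather than the models and data used in the study.

```python
# Schematic of the in-domain vs. cross-domain evaluation; `predict` and the
# dataset records are placeholders, not the study's models or data.
from sklearn.metrics import accuracy_score, recall_score, f1_score

def evaluate(records, predict):
    """Score a claim detector on a list of (text, label) pairs."""
    y_true = [label for _, label in records]
    y_pred = [predict(text) for text, _ in records]
    return {"accuracy": accuracy_score(y_true, y_pred),
            "recall": recall_score(y_true, y_pred),
            "f1": f1_score(y_true, y_pred)}

# In-domain: test topics/languages seen during fine-tuning.
# Cross-domain: test topics/languages held out entirely.
# scores_in = evaluate(in_domain_test, model.predict)
# scores_cross = evaluate(cross_domain_test, model.predict)
```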

The reduction of Hamiltonian systems aims to build smaller reduced models, valid over a certain range of time and parameters, in order to reduce computing time. By maintaining the Hamiltonian structure in the reduced model, certain long-term stability properties can be preserved. In this paper, we propose a non-linear reduction method for models coming from the spatial discretization of partial differential equations: it is based on convolutional auto-encoders and Hamiltonian neural networks. Their training is coupled in order to simultaneously learn the encoder-decoder operators and the reduced dynamics. Several test cases on non-linear wave dynamics show that the method has better reduction properties than standard linear Hamiltonian reduction methods.
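
The coupled training objective can be sketched as follows in PyTorch, under simplifying assumptions (one-dimensional snapshots of length 128, a small convolutional encoder, finite-difference latent velocities); the paper's actual architecture and training details differ.

```python
# Sketch of coupled auto-encoder + Hamiltonian neural network training.
import torch
import torch.nn as nn

latent_dim = 2                                     # reduced coordinates z = (q, p)
encoder = nn.Sequential(nn.Conv1d(1, 8, kernel_size=5, stride=2, padding=2), nn.ELU(),
                        nn.Flatten(), nn.Linear(8 * 64, latent_dim))
decoder = nn.Sequential(nn.Linear(latent_dim, 64), nn.ELU(), nn.Linear(64, 128))
hamiltonian = nn.Sequential(nn.Linear(latent_dim, 64), nn.Tanh(), nn.Linear(64, 1))
J = torch.tensor([[0.0, 1.0], [-1.0, 0.0]])        # canonical symplectic matrix

def reduced_vector_field(z):
    """dz/dt = J grad_z H(z): Hamiltonian dynamics in the reduced coordinates."""
    grad, = torch.autograd.grad(hamiltonian(z).sum(), z, create_graph=True)
    return grad @ J.T

params = (list(encoder.parameters()) + list(decoder.parameters())
          + list(hamiltonian.parameters()))
opt = torch.optim.Adam(params, lr=1e-3)

def training_step(x, x_next, dt):
    """x, x_next: consecutive snapshots, shape (batch, 128)."""
    z, z_next = encoder(x.unsqueeze(1)), encoder(x_next.unsqueeze(1))
    recon = ((decoder(z) - x) ** 2).mean()                                # auto-encoder loss
    dyn = ((reduced_vector_field(z) - (z_next - z) / dt) ** 2).mean()     # reduced-dynamics loss
    loss = recon + dyn                                                    # coupled objective
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

x = torch.randn(32, 128)
print(training_step(x, x + 0.01 * torch.randn(32, 128), dt=0.01))
```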

Partiality is a natural phenomenon in computability that we cannot get around. So, the question is whether we can give the areas where partiality occurs, that is, where non-termination happens, more structure. In this paper we consider function classes which, besides the total functions, only contain finite functions whose domain of definition is an initial segment of the natural numbers. Such functions appear naturally in computation. We show that a rich computability theory can be developed for these function classes which embraces the central results of classical computability theory, in which all partial (computable) functions are considered. To do so, the concept of a Gödel number is generalised, resulting in a broader class of numberings. The central algorithmic idea in this approach is to search in enumerated lists. In this way, function computability is reduced to set listability. Besides the development of a computability theory for the function classes, the new numberings -- called quasi-Gödel numberings -- are studied from a numbering-theoretic perspective: they are complete, and each of the function classes numbered in this way is a retract of the Gödel-numbered set of all partial computable functions. Moreover, the Rogers semi-lattice of all computable numberings of the considered function classes is studied, and results analogous to those for the computable numberings of the partial computable functions are obtained. The function classes are shown to be effectively given algebraic domains in the sense of Scott-Ershov. The quasi-Gödel numberings are exactly the admissible numberings of the computable elements of the domain. Moreover, the domain can be computably mapped onto every other effectively given one so that every admissible numbering of the computable domain elements is generated by a quasi-Gödel numbering via this mapping.
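
The central idea of reducing function computability to set listability can be illustrated with a toy Python sketch: a function value is computed by searching an enumerated list of its input/output pairs. The particular function enumerated below is only an example.

```python
# Toy illustration: compute f(x) by searching an effective enumeration of f's graph.
from itertools import count

def graph_enumeration():
    """Effectively list the graph of a total function, here f(n) = n * n."""
    for n in count():
        yield (n, n * n)

def evaluate(x, enumeration):
    """Compute f(x) by searching the enumerated list; the search terminates
    as soon as the pair (x, f(x)) appears in the enumeration."""
    for arg, value in enumeration():
        if arg == x:
            return value

print(evaluate(7, graph_enumeration))   # 49
```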

We study indiscriminate poisoning for linear learners where an adversary injects a few crafted examples into the training data with the goal of forcing the induced model to incur higher test error. Inspired by the observation that linear learners on some datasets are able to resist the best known attacks even without any defenses, we further investigate whether datasets can be inherently robust to indiscriminate poisoning attacks for linear learners. For theoretical Gaussian distributions, we rigorously characterize the behavior of an optimal poisoning attack, defined as the poisoning strategy that attains the maximum risk of the induced model at a given poisoning budget. Our results prove that linear learners can indeed be robust to indiscriminate poisoning if the class-wise data distributions are well-separated with low variance and the size of the constraint set containing all permissible poisoning points is also small. These findings largely explain the drastic variation in empirical attack performance of state-of-the-art poisoning attacks on linear learners across benchmark datasets, and they take an important initial step towards understanding why some learning tasks are vulnerable to data poisoning attacks.
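
A toy simulation of the setting, sketched with scikit-learn; the crude poisoning heuristic below (label-flipped points placed at a corner of a box constraint) is only an illustration, not the optimal attack characterized in the paper.

```python
# Toy simulation of indiscriminate poisoning against a linear learner.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, budget = 1000, 0.1                                  # clean set size, poisoning budget

# Two well-separated, low-variance Gaussian classes (the provably robust regime).
X = np.vstack([rng.normal([-3.0, 0.0], 0.5, size=(n // 2, 2)),
               rng.normal([+3.0, 0.0], 0.5, size=(n // 2, 2))])
y = np.r_[np.zeros(n // 2), np.ones(n // 2)]

# Poison: label-flipped points at a corner of the constraint box |x_i| <= 5.
n_poison = int(budget * n)
X_poison = np.full((n_poison, 2), [5.0, 5.0])
y_poison = np.zeros(n_poison)

X_test = np.vstack([rng.normal([-3.0, 0.0], 0.5, size=(500, 2)),
                    rng.normal([+3.0, 0.0], 0.5, size=(500, 2))])
y_test = np.r_[np.zeros(500), np.ones(500)]

clean = LogisticRegression().fit(X, y)
poisoned = LogisticRegression().fit(np.vstack([X, X_poison]), np.r_[y, y_poison])
# With this much separation and so little variance, the test error barely moves.
print(1 - clean.score(X_test, y_test), 1 - poisoned.score(X_test, y_test))
```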

We propose and analyze a finite element method for the Oseen eigenvalue problem. This problem is an extension of the Stokes eigenvalue problem, where the presence of the convective term leads to a non-symmetric problem and hence to complex eigenvalues and eigenfunctions. With the aid of compact operator theory, we prove that convergence holds for inf-sup stable finite elements, and we derive error estimates for the eigenvalues and eigenfunctions. We also propose an a posteriori error estimator, which proves to be reliable and efficient. We report a series of numerical tests in two and three dimensions in order to assess the performance of the method and the proposed estimator.
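
For reference, a standard strong form of the Oseen eigenvalue problem reads as follows, with viscosity ν and a given divergence-free convective field β; the precise functional setting and conventions may differ from those adopted in the paper.

```latex
% Standard strong form (conventions may differ from the paper):
% find eigenpairs (\lambda, (u, p)) with u \neq 0 such that
\[
\begin{aligned}
  -\nu \Delta u + (\beta \cdot \nabla) u + \nabla p &= \lambda u && \text{in } \Omega, \\
  \operatorname{div} u &= 0 && \text{in } \Omega, \\
  u &= 0 && \text{on } \partial\Omega,
\end{aligned}
\]
% where \nu is the viscosity and \beta a given (divergence-free) convective field;
% the convective term (\beta \cdot \nabla) u makes the problem non-symmetric.
```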

Unraveling the emergence of collective learning in systems of coupled artificial neural networks points to broader implications for machine learning, neuroscience, and society. Here we introduce a minimal model that condenses several recent decentralized algorithms by considering a competition between two terms: the local learning dynamics in the parameters of each neural network unit, and a diffusive coupling among units that tends to homogenize the parameters of the ensemble. We derive an effective theory for linear networks to show that the coarse-grained behavior of our system is equivalent to a deformed Ginzburg-Landau model with quenched disorder. This framework predicts depth-dependent disorder-order-disorder phase transitions in the parameters' solutions that reveal a depth-delayed onset of a collective learning phase and a low-rank microscopic learning path. We validate the theory in coupled ensembles of realistic neural networks trained on the MNIST dataset under privacy constraints. Interestingly, experiments confirm that individual networks -- trained on private data -- can fully generalize to unseen data classes when the collective learning phase emerges. Our work establishes the physics of collective learning and contributes to the mechanistic interpretability of deep learning in decentralized settings.
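
The competition between local learning and diffusive homogenization can be sketched in a few lines; the quadratic per-unit losses below stand in for the actual network training and are only an illustration of the two competing terms, not the paper's setup.

```python
# Minimal sketch: each unit follows its local learning dynamics while a
# diffusive term pulls its parameters toward the ensemble mean.
import numpy as np

rng = np.random.default_rng(0)
N, d = 16, 10                         # number of coupled units, parameter dimension
eta, kappa = 0.05, 0.2                # learning rate, diffusive coupling strength

targets = rng.normal(size=(N, d))     # each unit's private optimum (its "local data")
theta = rng.normal(size=(N, d))       # parameters of the N units

for step in range(500):
    local = -(theta - targets)                                # local learning dynamics
    diffusion = theta.mean(axis=0, keepdims=True) - theta     # homogenizing coupling
    theta += eta * local + kappa * diffusion

# Strong coupling shrinks the spread across units toward a collective solution.
print(np.std(theta, axis=0).mean(), np.std(targets, axis=0).mean())
```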

Graph-centric artificial intelligence (graph AI) has achieved remarkable success in modeling interacting systems prevalent in nature, from dynamical systems in biology to particle physics. The increasing heterogeneity of data calls for graph neural architectures that can combine multiple inductive biases. However, combining data from various sources is challenging because the appropriate inductive bias may vary by data modality. Multimodal learning methods fuse multiple data modalities while leveraging cross-modal dependencies to address this challenge. Here, we survey 140 studies in graph-centric AI and find that diverse data types are increasingly brought together using graphs and fed into sophisticated multimodal models. These models stratify into image-, language-, and knowledge-grounded multimodal learning. Based on this categorization, we put forward an algorithmic blueprint for multimodal graph learning. The blueprint serves as a way to group state-of-the-art architectures that treat multimodal data according to how they instantiate four different components. This effort can pave the way for standardizing the design of sophisticated multimodal architectures for highly complex real-world problems.

Humans perceive the world by concurrently processing and fusing high-dimensional inputs from multiple modalities such as vision and audio. Machine perception models, in stark contrast, are typically modality-specific and optimised for unimodal benchmarks; hence, late-stage fusion of final representations or predictions from each modality ('late fusion') is still a dominant paradigm for multimodal video classification. Instead, we introduce a novel transformer-based architecture that uses 'fusion bottlenecks' for modality fusion at multiple layers. Compared to traditional pairwise self-attention, our model forces information between different modalities to pass through a small number of bottleneck latents, requiring the model to collate and condense the most relevant information in each modality and only share what is necessary. We find that such a strategy improves fusion performance while at the same time reducing computational cost. We conduct thorough ablation studies and achieve state-of-the-art results on multiple audio-visual classification benchmarks including AudioSet, Epic-Kitchens and VGGSound. All code and models will be released.
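
A simplified single-layer PyTorch sketch of the bottleneck-fusion idea for two modalities (the actual architecture, token counts, and training setup differ): each modality attends only over its own tokens plus a small set of shared bottleneck latents, and the updated latents are averaged so that cross-modal information flows only through them.

```python
# Simplified sketch of one bottleneck-fusion layer for two modalities.
import torch
import torch.nn as nn

class BottleneckFusionLayer(nn.Module):
    def __init__(self, dim=256, heads=4, n_bottlenecks=4):
        super().__init__()
        self.n = n_bottlenecks
        self.bottlenecks = nn.Parameter(0.02 * torch.randn(1, n_bottlenecks, dim))
        self.audio_layer = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        self.video_layer = nn.TransformerEncoderLayer(dim, heads, batch_first=True)

    def forward(self, audio_tokens, video_tokens):
        batch = audio_tokens.shape[0]
        z = self.bottlenecks.expand(batch, -1, -1)
        # Cross-modal information can only flow through the bottleneck latents.
        a_out = self.audio_layer(torch.cat([audio_tokens, z], dim=1))
        v_out = self.video_layer(torch.cat([video_tokens, z], dim=1))
        audio_tokens, z_audio = a_out[:, :-self.n], a_out[:, -self.n:]
        video_tokens, z_video = v_out[:, :-self.n], v_out[:, -self.n:]
        return audio_tokens, video_tokens, 0.5 * (z_audio + z_video)

layer = BottleneckFusionLayer()
audio = torch.randn(2, 100, 256)      # (batch, audio tokens, dim)
video = torch.randn(2, 196, 256)      # (batch, video patches, dim)
a, v, z = layer(audio, video)
print(a.shape, v.shape, z.shape)      # bottleneck output: torch.Size([2, 4, 256])
```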
