This paper introduces the design and implementation of PyOptInterface, a modeling language for mathematical optimization embedded in the Python programming language. PyOptInterface uses a lightweight, compact data structure to efficiently map high-level entities in optimization models, such as variables and constraints, to the internal indices of the underlying optimizers. It supports a variety of optimization solvers and a range of common problem classes. We provide benchmarks demonstrating the competitive performance of PyOptInterface compared with other state-of-the-art modeling languages.
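As an illustration of the handle-to-index bookkeeping the abstract describes, the following minimal sketch shows one plausible realization in plain Python. The names (`VariableHandle`, `IndexMap`) and the index-shifting strategy are hypothetical, chosen only to convey the idea; they are not PyOptInterface's actual API.

```python
# Illustrative sketch of mapping stable user-facing handles to a solver's
# internal column indices. Hypothetical names; not PyOptInterface's real API.
from dataclasses import dataclass

@dataclass(frozen=True)
class VariableHandle:
    index: int  # stable handle exposed to the modeling layer

class IndexMap:
    """Bridges model-level variable handles to internal solver indices."""
    def __init__(self) -> None:
        self._solver_index: list[int | None] = []  # handle.index -> internal index

    def add_variable(self) -> VariableHandle:
        handle = VariableHandle(len(self._solver_index))
        self._solver_index.append(len(self._solver_index))
        return handle

    def delete_variable(self, handle: VariableHandle) -> None:
        removed = self._solver_index[handle.index]
        self._solver_index[handle.index] = None
        # Solvers typically renumber columns after a deletion: shift the rest down.
        for i, idx in enumerate(self._solver_index):
            if idx is not None and idx > removed:
                self._solver_index[i] = idx - 1

    def internal_index(self, handle: VariableHandle) -> int:
        idx = self._solver_index[handle.index]
        if idx is None:
            raise KeyError("variable was deleted")
        return idx
```

Because handles stay valid across deletions while internal indices are renumbered lazily, the modeling layer never has to rebuild user-facing objects when the solver's column layout changes.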
The present work concerns the derivation of a numerical scheme to approximate weak solutions of the Euler equations with a gravitational source term. The scheme is proved to be fully well-balanced: it exactly preserves all moving equilibrium solutions, as well as the corresponding steady solutions at rest obtained when the velocity vanishes. Moreover, the proposed scheme is entropy-preserving, since it satisfies all fully discrete entropy inequalities. In addition, to guarantee the required admissibility of the approximate solutions, the positivity of both the approximate density and pressure is established. Several numerical experiments attest to the relevance of the developed numerical method.
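For reference, a standard one-dimensional form of the model and its equilibria is recalled below; the notation is ours, since the abstract does not fix a formulation.

```latex
% 1D Euler equations with gravitational potential \phi(x):
\partial_t \rho + \partial_x(\rho u) = 0, \qquad
\partial_t(\rho u) + \partial_x(\rho u^2 + p) = -\rho\,\partial_x\phi, \qquad
\partial_t E + \partial_x\big(u(E + p)\big) = -\rho u\,\partial_x\phi.
% Steady solutions at rest (u = 0) satisfy the hydrostatic balance
%   \partial_x p = -\rho\,\partial_x\phi,
% while smooth moving equilibria keep \rho u, the entropy s, and the Bernoulli
% invariant u^2/2 + h + \phi constant, with h = e + p/\rho the specific enthalpy.
```

"Fully well-balanced" refers to preserving the moving equilibria above exactly, not only the hydrostatic rest states targeted by most well-balanced schemes.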
The ability to manipulate logical-mathematical symbols (LMS), encompassing tasks such as calculation, reasoning, and programming, is a cognitive skill arguably unique to humans. Considering the relatively recent emergence of this ability in human evolutionary history, it has been suggested that LMS processing may build upon more fundamental cognitive systems, possibly through neuronal recycling. Previous studies have pinpointed two primary candidates: natural language processing and spatial cognition. Existing comparisons between these domains have largely relied on task-level contrasts, which may be confounded by task idiosyncrasy. The present study instead compared the neural correlates at the domain level, using both automated meta-analysis and synthesized maps based on three representative LMS tasks: reasoning, calculation, and mental programming. Our results revealed a more substantial cortical overlap between LMS processing and spatial cognition than between LMS processing and language processing. Furthermore, in regions activated by both spatial and language processing, the activation pattern for LMS processing exhibited greater multivariate similarity to spatial cognition than to language processing. A hierarchical clustering analysis further indicated that typical LMS tasks were indistinguishable from spatial cognition tasks at the neural level, suggesting an inherent connection between these two cognitive processes. Taken together, our findings support the hypothesis that spatial cognition is likely the basis of LMS processing, which may shed light on the limitations of large language models in logical reasoning, particularly those trained exclusively on textual data without explicit emphasis on spatial content.
State vector-based simulation offers a convenient approach to developing and validating quantum algorithms with noise-free results. However, lacking cache-aware implementations and polished circuit optimizations, previous simulators were severely constrained in performance, limiting progress in quantum algorithm development. In this paper, we present an innovative quantum circuit simulation toolkit comprising gate optimization and simulation modules to address these performance challenges. For a comprehensive evaluation of performance and scalability, we conduct a series of circuit benchmarks and strong-scaling tests on a DGX-A100 workstation and achieve an average 9x speedup over state-of-the-art simulators, including QuEST, IBM-Aer, and NVIDIA-cuQuantum. Moreover, the critical performance metric FLOPS increases by up to 8x, and arithmetic intensity improves by a remarkable 96x. We believe the proposed toolkit paves the way for faster quantum circuit simulations, thereby facilitating the development of novel quantum algorithms.
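The core kernel such simulators optimize is the application of a gate to a state vector; the strided memory-access pattern visible in the sketch below is exactly why cache-aware layouts matter. This is our own illustration of the standard technique, not the toolkit's implementation.

```python
# Minimal sketch of the core state-vector kernel: applying a single-qubit
# gate to qubit `target` of an n-qubit state (illustrative only).
import numpy as np

def apply_single_qubit_gate(state: np.ndarray, gate: np.ndarray, target: int) -> None:
    """Apply a 2x2 unitary in place; `state` has length 2**n."""
    stride = 1 << target  # distance between amplitude pairs differing in bit `target`
    for base in range(0, state.size, stride << 1):
        for offset in range(base, base + stride):
            a0, a1 = state[offset], state[offset + stride]
            state[offset]          = gate[0, 0] * a0 + gate[0, 1] * a1
            state[offset + stride] = gate[1, 0] * a0 + gate[1, 1] * a1

# Example: Hadamard on qubit 0 of a 3-qubit |000> state.
psi = np.zeros(8, dtype=np.complex128); psi[0] = 1.0
H = np.array([[1, 1], [1, -1]], dtype=np.complex128) / np.sqrt(2)
apply_single_qubit_gate(psi, H, target=0)
print(psi)  # amplitude 1/sqrt(2) on |000> and |001>
```

For high target qubits the stride spans many cache lines, so gate reordering and fusion (the job of a gate-optimization module) directly improve arithmetic intensity.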
We propose a third-order numerical integrator based on the Neumann series and Filon quadrature, designed mainly for highly oscillatory partial differential equations. The method can also be applied to equations with small or moderate oscillations; counter-intuitively, however, the accuracy of the scheme improves as the oscillations become larger. Within the proposed approach, the convergence order of the method can easily be raised further. An error analysis of the method is also provided. We consider linear evolution equations with first- and second-order time derivatives that feature elliptic differential operators, such as the heat equation or the wave equation. Numerical experiments, including cases where the space dimension is greater than one, confirm the theoretical study.
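The "large oscillations help" behaviour is characteristic of Filon-type quadrature, recalled below in its standard form (our notation, not the paper's): only the slowly varying factor is interpolated, while the oscillation is integrated exactly.

```latex
% Filon-type quadrature for an oscillatory integral with frequency \omega:
\int_0^h f(t)\, e^{i\omega t}\, dt \;\approx\; \int_0^h p(t)\, e^{i\omega t}\, dt
\;=\; \sum_{k=0}^{s} c_k \int_0^h t^k e^{i\omega t}\, dt,
% where p(t) = \sum_k c_k t^k interpolates the slowly varying f and the moments
% \int_0^h t^k e^{i\omega t} dt are known in closed form. The quadrature error
% typically decays as \omega grows, which is why stronger oscillations
% increase, rather than degrade, the accuracy of such schemes.
```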
Complex conjugate matrix equations (CCME) have attracted the interest of many researchers because of their applications in computation and antilinear systems. Existing research is dominated by methods for solving the time-invariant case, while theories for solving the time-variant version are lacking. Moreover, artificial neural networks have rarely been studied as solvers for CCME. In this paper, starting with the earliest CCME, zeroing neural dynamics (ZND) is applied to solve its time-variant version. Firstly, vectorization and the Kronecker product in the complex field are defined uniformly. Secondly, the Con-CZND1 and Con-CZND2 models are proposed, and their convergence and effectiveness are proved theoretically. Thirdly, three numerical experiments are designed to illustrate the effectiveness of the two models, compare their differences, highlight the significance of neural dynamics in the complex field, and refine the theory related to ZND.
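For context, the standard ZND design formula underlying such models is recalled below; the specific error functions used by Con-CZND1 and Con-CZND2 are defined in the paper, so the notation here is generic.

```latex
% Zeroing neural dynamics (ZND) design rule: given a matrix-valued error
% function E(t) built from the equation to be solved, impose
\dot{E}(t) = -\gamma\,\Phi\!\big(E(t)\big), \qquad \gamma > 0,
% where \Phi is an elementwise monotone activation function. With \Phi the
% identity, E(t) = E(0)\,e^{-\gamma t}, so the error decays exponentially and
% the neural state tracks the time-variant solution.
```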
This paper studies the convergence of a spatial semidiscretization of a three-dimensional stochastic Allen-Cahn equation with multiplicative noise. For non-smooth initial data, the regularity of the mild solution is investigated, and an error estimate is derived in the spatial L^2-norm setting. In the case of smooth initial data, two error estimates are established within the framework of general spatial L^q-norms.
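A standard formulation of the equation in question reads as follows; the notation is ours, since the abstract does not spell it out.

```latex
% Stochastic Allen--Cahn equation with multiplicative noise on a domain
% D \subset \mathbb{R}^3, driven by a Wiener process W:
\mathrm{d}u(t) = \big[\Delta u(t) + u(t) - u(t)^3\big]\,\mathrm{d}t
               + G\big(u(t)\big)\,\mathrm{d}W(t), \qquad u(0) = u_0,
% with suitable boundary conditions; the cubic nonlinearity u - u^3 is only
% locally Lipschitz, which is the main obstacle in the error analysis.
```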
The study of intelligent systems explains behaviour in terms of economic rationality. This results in an optimization principle involving a function, or utility, which states that the system will evolve until the configuration of maximum utility is achieved. Recently, this theory has incorporated constraints: the optimum is achieved when the utility is maximized while respecting some information-processing constraints. This is reminiscent of thermodynamic systems. As such, the study of intelligent systems has benefited from the tools of thermodynamics. The first aim of this thesis is to clarify the applicability of these results to the study of intelligent systems. We can think of the local transition steps in thermodynamic or intelligent systems as being driven by uncertainty. In fact, the transitions in both systems can be described in terms of majorization. Hence, real-valued uncertainty measures like Shannon entropy are simply a proxy for their more involved behaviour. More generally, real-valued functions are fundamental to the study of optimization and complexity in the order-theoretic approach to several topics, including economics, thermodynamics, and quantum mechanics. The second aim of this thesis is to improve on this classification. The basic similarity between thermodynamic and intelligent systems rests on an uncertainty notion expressed by a preorder. We can also think of the transitions in the steps of a computational process as a decision-making procedure. In fact, by adding some requirements on the considered order structures, we can build an abstract model of uncertainty reduction that allows us to incorporate computability, that is, to distinguish the objects that can be constructed by following a finite set of instructions from those that cannot. The third aim of this thesis is to clarify the requirements on the order structure that make such a framework possible.
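Since majorization is the central notion here, its standard definition is worth recalling (the thesis's exact conventions may differ):

```latex
% For x, y \in \mathbb{R}^n with equal total sums, x is majorized by y
% (written x \prec y) iff
\sum_{i=1}^{k} x^{\downarrow}_i \;\le\; \sum_{i=1}^{k} y^{\downarrow}_i
\quad \text{for all } k = 1, \dots, n,
% where x^{\downarrow} sorts the entries of x in decreasing order. Intuitively,
% x \prec y means x is "more mixed" (more uncertain) than y. Schur-concave
% functions such as Shannon entropy respect this preorder
% (x \prec y \Rightarrow H(x) \ge H(y)), which is the precise sense in which
% real-valued uncertainty measures act as proxies for it.
```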
We study the computational problem of rigorously describing the asymptotic behaviour of topological dynamical systems up to a finite but arbitrarily small pre-specified error. More precisely, we consider the limit set of a typical orbit, both as a spatial object (attractor set) and as a statistical distribution (physical measure), and prove upper bounds on the computational resources required to compute descriptions of these objects with arbitrary accuracy. We also study how these bounds are affected by different dynamical constraints and provide several examples showing that our bounds are sharp in general. In particular, we exhibit a computable interval map having a unique transitive attractor with Cantor set structure, supporting a unique physical measure such that both the attractor and the measure are non-computable.
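For readers outside dynamics, the standard definition of a physical measure is recalled below (our notation; the paper may use a slightly different convention):

```latex
% A Borel probability measure \mu is a physical measure for f : X \to X if its
% basin of attraction
B(\mu) = \Big\{ x \in X :
  \tfrac{1}{n} \sum_{k=0}^{n-1} \delta_{f^k(x)} \xrightarrow{\;weak\text{-}*\;} \mu
\Big\}
% has positive Lebesgue measure, i.e. the time averages of a "typical" orbit
% are described by \mu.
```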
We derive information-theoretic generalization bounds for supervised learning algorithms based on the information contained in predictions rather than in the output of the training algorithm. These bounds improve over the existing information-theoretic bounds, are applicable to a wider range of algorithms, and solve two key challenges: (a) they give meaningful results for deterministic algorithms and (b) they are significantly easier to estimate. We show experimentally that the proposed bounds closely follow the generalization gap in practical scenarios for deep learning.
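For context, the classical input-output mutual-information bound that such results improve on is recalled below; this baseline (Xu and Raginsky, 2017) is our addition, not stated in the abstract.

```latex
% Baseline bound for a \sigma-sub-Gaussian loss and a training set S of n
% i.i.d. samples, with W the output of the training algorithm:
\big|\, \mathbb{E}\,[\,\mathrm{gen}(W, S)\,] \,\big|
\;\le\; \sqrt{\frac{2\sigma^2\, I(W; S)}{n}}.
% For deterministic algorithms with continuous outputs, I(W; S) is typically
% infinite and the bound is vacuous; prediction-based bounds address this by
% measuring the information carried by the predictions instead of by W itself.
```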
When and why can a neural network be successfully trained? This article provides an overview of optimization algorithms and theory for training neural networks. First, we discuss the issue of gradient explosion/vanishing and the more general issue of an undesirable spectrum, and then discuss practical solutions, including careful initialization and normalization methods. Second, we review generic optimization methods used in training neural networks, such as SGD, adaptive gradient methods, and distributed methods, together with theoretical results for these algorithms. Third, we review existing research on global issues in neural network training, including results on bad local minima, mode connectivity, the lottery ticket hypothesis, and infinite-width analysis.
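To make the initialization point concrete, the short experiment below (our illustration, not taken from the article) contrasts a naive unit-variance initialization with He initialization in a deep ReLU network: the former blows up the forward signal, while the latter keeps its scale roughly stable.

```python
# Forward-signal scale in a deep ReLU network under two initializations.
import numpy as np

rng = np.random.default_rng(0)
depth, width = 50, 256
x = rng.standard_normal(width)

for scheme in ("naive", "he"):
    h = x.copy()
    for _ in range(depth):
        # Naive: unit-variance weights. He: std = sqrt(2/fan_in), which
        # compensates for ReLU zeroing out half the pre-activations.
        std = 1.0 if scheme == "naive" else np.sqrt(2.0 / width)
        W = rng.standard_normal((width, width)) * std
        h = np.maximum(W @ h, 0.0)  # ReLU layer
    print(f"{scheme:>5}: output norm after {depth} layers = {np.linalg.norm(h):.3e}")
```

Backward gradients obey the same variance recursion, so the same scaling argument explains why careful initialization mitigates gradient explosion/vanishing.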