We identify reduced order models (ROMs) of forced systems from data using invariant foliations. The forcing can be external, parametric, periodic or quasi-periodic. The process has four steps: 1. identify an approximate invariant torus and the linear dynamics about the torus; 2. identify a globally defined invariant foliation about the torus; 3. identify a local foliation about an invariant manifold that complements the global foliation; 4. extract the invariant manifold as the leaf going through the torus and interpret the result. We combine steps 2 and 3 so that we can track the location of the invariant torus and scale the invariance equations appropriately. We highlight some fundamental limitations of invariant manifolds and foliations when fitting them to data, which require further mathematics to resolve.
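As a rough illustration of the invariance equation solved in step 2, the sketch below fits an encoder $U$ and a reduced map $S$ to snapshot pairs so that $U(F(x_k)) \approx S(U(x_k))$. The linear encoder, cubic reduced map, synthetic data and the orthogonality penalty are illustrative assumptions, not the parametrization or scaling used in the paper.

```python
# Illustrative sketch only: fit a linear encoder U(x) = W x and a cubic reduced
# map S so that U(y_k) ~ S(U(x_k)) for snapshot pairs (x_k, y_k = F(x_k)).
# A soft orthogonality penalty on W avoids the trivial solution W = 0.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
n, m, N = 10, 2, 500                          # full dim, reduced dim, number of pairs
X = rng.normal(size=(N, n))                   # snapshots x_k
Y = X @ rng.normal(scale=0.1, size=(n, n))    # stand-in for y_k = F(x_k)

def monomials(z):                             # constant-free cubic features, z in R^m
    z1, z2 = z[:, 0], z[:, 1]
    return np.stack([z1, z2, z1**2, z1*z2, z2**2, z1**3, z2**3], axis=1)

def residual(p):
    W = p[:m*n].reshape(m, n)                 # encoder; leaves of the foliation are level sets of U
    C = p[m*n:].reshape(7, m)                 # coefficients of the reduced map S
    inv = (Y @ W.T) - monomials(X @ W.T) @ C  # invariance error U(y_k) - S(U(x_k))
    orth = (W @ W.T - np.eye(m)).ravel()      # keep the encoder non-degenerate
    return np.concatenate([inv.ravel(), orth])

sol = least_squares(residual, rng.normal(scale=0.1, size=m*n + 7*m))
print("invariance residual:", np.linalg.norm(sol.fun))
```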
Agent-based models (ABMs) are simulation models used in economics to overcome some of the limitations of traditional frameworks based on general equilibrium assumptions. However, agents within an ABM follow predetermined, not fully rational, behavioural rules which can be cumbersome to design and difficult to justify. Here we leverage multi-agent reinforcement learning (RL) to expand the capabilities of ABMs with the introduction of fully rational agents that learn their policy by interacting with the environment and maximising a reward function. Specifically, we propose a 'Rational macro ABM' (R-MABM) framework by extending a paradigmatic macro ABM from the economic literature. We show that gradually substituting ABM firms in the model with RL agents, trained to maximise profits, allows for a thorough study of the impact of rationality on the economy. We find that RL agents spontaneously learn three distinct strategies for maximising profits, with the optimal strategy depending on the level of market competition and rationality. We also find that RL agents with independent policies, and without the ability to communicate with each other, spontaneously learn to segregate into different strategic groups, thus increasing market power and overall profits. Finally, we find that a higher degree of rationality in the economy always improves the macroeconomic environment as measured by total output, although, depending on the specific rational policy, this can come at the cost of higher instability. Our R-MABM framework is general, allows for stable multi-agent learning, and represents a principled and robust direction for extending existing economic simulators.
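To make the setup concrete, here is a minimal gym-style sketch of an RL firm: it observes market aggregates, chooses a price and a production quantity, and receives realised profit as its reward. The toy demand curve, the class name `FirmEnv` and all numbers are hypothetical; the R-MABM builds on an existing macro ABM, not this toy market.

```python
# Hypothetical single-firm environment with a gym-like interface: observation =
# (average market price, last sales), action = (price, quantity), reward = profit.
import numpy as np

class FirmEnv:
    def __init__(self, seed=0):
        self.rng = np.random.default_rng(seed)

    def reset(self):
        self.avg_price, self.last_sales = 1.0, 1.0
        return np.array([self.avg_price, self.last_sales])

    def step(self, action):
        price, quantity = action                       # the firm's decision
        demand = max(0.0, 2.0 - price + 0.1 * self.rng.normal())
        sales = min(quantity, demand)
        profit = price * sales - 0.5 * quantity        # revenue minus unit cost
        self.avg_price = 0.9 * self.avg_price + 0.1 * price
        self.last_sales = sales
        return np.array([self.avg_price, self.last_sales]), profit, False, {}

env = FirmEnv()
obs = env.reset()
for _ in range(5):                                     # a trained policy would replace this random one
    obs, reward, done, _ = env.step(env.rng.uniform(0.5, 1.5, size=2))
```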
Log-linear models are widely used to express the association in multivariate frequency data on contingency tables. The paper focuses on the power analysis for testing the goodness-of-fit hypothesis for this model type. Conventionally, for power-related sample size calculations, a deviation from the null hypothesis (effect size) is specified by means of the chi-square goodness-of-fit index. It is argued that the odds ratio is a more natural measure of effect size, with the advantage of having a data-relevant interpretation. Therefore, a class of log-affine models that are specified by odds ratios whose values deviate from those of the null by a small amount can be chosen as an alternative. Being expressed as sets of constraints on odds ratios, both hypotheses are represented by smooth surfaces in the probability simplex, and thus the power analysis can be given a geometric interpretation as well. A concept of geometric power is introduced and a Monte Carlo algorithm for its estimation is proposed. The framework is applied to the power analysis of goodness-of-fit in the context of multinomial sampling. An iterative scaling procedure for generating distributions from a log-affine model is described and its convergence is proved. To illustrate, the geometric power analysis is carried out for data from a clinical study.
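The iterative scaling idea can be illustrated with a generic iterative proportional fitting (IPF) sketch for a 2x2 table: start from a seed table whose odds ratio equals the target value and rescale rows and columns to prescribed margins; IPF preserves the seed's odds ratio. This is a standard IPF routine, not necessarily the exact procedure whose convergence is proved in the paper.

```python
# Generic IPF sketch: the seed fixes the association (odds ratio), the row and
# column rescalings match the prescribed margins, and the odds ratio survives.
import numpy as np

def ipf(seed, row_margins, col_margins, tol=1e-10, max_iter=1000):
    p = seed.astype(float)
    for _ in range(max_iter):
        p *= (row_margins / p.sum(axis=1))[:, None]   # match row totals
        p *= (col_margins / p.sum(axis=0))[None, :]   # match column totals
        if np.allclose(p.sum(axis=1), row_margins, atol=tol):
            break
    return p

theta = 1.5                                           # target odds ratio
seed = np.array([[theta, 1.0], [1.0, 1.0]])           # odds ratio of the seed is theta
p = ipf(seed, row_margins=np.array([0.6, 0.4]), col_margins=np.array([0.5, 0.5]))
odds_ratio = (p[0, 0] * p[1, 1]) / (p[0, 1] * p[1, 0])
print(p, odds_ratio)                                  # odds ratio stays ~1.5
```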
Out-of-distribution (OOD) detection, crucial for reliable pattern classification, discerns whether a sample originates outside the training distribution. This paper concentrates on the high-dimensional features output by the final convolutional layer, which contain rich image features. Our key idea is to project these high-dimensional features into two specific feature subspaces, leveraging the dimensionality reduction capacity of the network's linear layers, trained with Predefined Evenly-Distributed Class Centroids (PEDCC)-Loss. This involves calculating the cosines of three projection angles and the norm values of features, thereby identifying distinctive information for in-distribution (ID) and OOD data, which assists in OOD detection. Building upon this, we have modified the batch normalization (BN) and ReLU layer preceding the fully connected layer, diminishing their impact on the output feature distributions and thereby widening the distribution gap between ID and OOD data features. Our method requires only the training of the classification network model, eschewing any need for input pre-processing or specific OOD data pre-tuning. Extensive experiments on several benchmark datasets demonstrate that our approach delivers state-of-the-art performance. Our code is available at //github.com/Hewell0/ProjOOD.
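A simplified sketch of the kind of score this projection view suggests: combine the cosine of the angle between a feature vector and its nearest predefined class centroid with the feature norm. The actual method uses two feature subspaces and three projection angles; the random centroid matrix and the score below are illustrative only.

```python
# Heuristic projection-based OOD score: max cosine to a class centroid times the
# feature norm (large for ID-like features, small for OOD-like features).
import numpy as np

def ood_score(feature, centroids):
    cos = (centroids @ feature) / (
        np.linalg.norm(centroids, axis=1) * np.linalg.norm(feature) + 1e-12
    )
    return cos.max() * np.linalg.norm(feature)

rng = np.random.default_rng(0)
centroids = rng.normal(size=(10, 512))           # stand-in for PEDCC class centroids
id_feat = 3.0 * centroids[3] + 0.1 * rng.normal(size=512)
ood_feat = rng.normal(size=512)
print(ood_score(id_feat, centroids), ood_score(ood_feat, centroids))
```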
We consider the problem of finite-time identification of linear dynamical systems from $T$ samples of a single trajectory. Recent results have predominantly focused on the setup where no structural assumption is made on the system matrix $A^* \in \mathbb{R}^{n \times n}$, and have consequently analyzed the ordinary least squares (OLS) estimator in detail. We assume prior structural information on $A^*$ is available, which can be captured in the form of a convex set $\mathcal{K}$ containing $A^*$. For the solution of the ensuing constrained least squares estimator, we derive non-asymptotic error bounds in the Frobenius norm that depend on the local size of $\mathcal{K}$ at $A^*$. To illustrate the usefulness of these results, we instantiate them for four examples, namely when (i) $A^*$ is sparse and $\mathcal{K}$ is a suitably scaled $\ell_1$ ball; (ii) $\mathcal{K}$ is a subspace; (iii) $\mathcal{K}$ consists of matrices each of which is formed by sampling a bivariate convex function on a uniform $n \times n$ grid (convex regression); (iv) $\mathcal{K}$ consists of matrices each row of which is formed by uniform sampling (with step size $1/T$) of a univariate Lipschitz function. In all these situations, we show that $A^*$ can be reliably estimated for values of $T$ much smaller than what is needed for the unconstrained setting.
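For the sparse case (i), a minimal sketch of the constrained least squares estimator using cvxpy, with the $\ell_1$-ball radius set from the ground truth purely for illustration; the simulated trajectory and noise level are arbitrary toy choices.

```python
# Constrained LS sketch: minimise sum_t ||x_{t+1} - A x_t||_2^2 subject to A
# lying in an entrywise l1 ball (the convex set K of example (i)).
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
n, T = 20, 100
A_star = np.zeros((n, n))
A_star[rng.integers(0, n, 30), rng.integers(0, n, 30)] = rng.normal(size=30)
A_star *= 0.9 / max(1e-9, np.max(np.abs(np.linalg.eigvals(A_star))))  # make the system stable

X = np.zeros((n, T + 1)); X[:, 0] = rng.normal(size=n)
for t in range(T):                                   # single trajectory with process noise
    X[:, t + 1] = A_star @ X[:, t] + 0.1 * rng.normal(size=n)

A = cp.Variable((n, n))
radius = np.abs(A_star).sum()                        # oracle l1 radius, for illustration only
objective = cp.Minimize(cp.sum_squares(X[:, 1:] - A @ X[:, :-1]))
problem = cp.Problem(objective, [cp.sum(cp.abs(A)) <= radius])
problem.solve()
print("Frobenius error:", np.linalg.norm(A.value - A_star, "fro"))
```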
Exploring the semantic context in scene images is essential for indoor scene recognition. However, due to the diverse intra-class spatial layouts and the coexisting inter-class objects, modeling contextual relationships to adapt various image characteristics is a great challenge. Existing contextual modeling methods for scene recognition exhibit two limitations: 1) They typically model only one kind of spatial relationship among objects within scenes in an artificially predefined manner, with limited exploration of diverse spatial layouts. 2) They often overlook the differences in coexisting objects across different scenes, suppressing scene recognition performance. To overcome these limitations, we propose SpaCoNet, which simultaneously models Spatial relation and Co-occurrence of objects guided by semantic segmentation. Firstly, the Semantic Spatial Relation Module (SSRM) is constructed to model scene spatial features. With the help of semantic segmentation, this module decouples the spatial information from the scene image and thoroughly explores all spatial relationships among objects in an end-to-end manner. Secondly, both spatial features from the SSRM and deep features from the Image Feature Extraction Module are allocated to each object, so as to distinguish the coexisting objects across different scenes. Finally, utilizing the discriminative features above, we design a Global-Local Dependency Module to explore the long-range co-occurrence among objects, and further generate a semantic-guided feature representation for indoor scene recognition. Experimental results on three widely used scene datasets demonstrate the effectiveness and generality of the proposed method.
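A minimal sketch of the object-wise feature allocation that semantic segmentation enables (masked average pooling of deep features per semantic class); the shapes and the pooling choice are illustrative assumptions, and the SSRM and Global-Local Dependency Module themselves are more involved.

```python
# Allocate a deep feature descriptor to each semantic class present in the image
# by averaging the feature map over that class's segmentation mask.
import numpy as np

def per_object_features(feature_map, seg_map, num_classes):
    # feature_map: (C, H, W) deep features; seg_map: (H, W) semantic labels
    C, H, W = feature_map.shape
    feats = np.zeros((num_classes, C))
    for k in range(num_classes):
        mask = (seg_map == k)
        if mask.any():
            feats[k] = feature_map[:, mask].mean(axis=1)   # average feature of object k
    return feats                                            # one descriptor per object class

rng = np.random.default_rng(0)
fmap = rng.normal(size=(256, 32, 32))
seg = rng.integers(0, 150, size=(32, 32))                  # e.g. ADE20K-style label map
obj_feats = per_object_features(fmap, seg, num_classes=150)
```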
The structural properties of mechanical metamaterials are typically studied with two-scale methods based on computational homogenization. Because such materials have a complex microstructure, enriched schemes such as second-order computational homogenization are required to fully capture their non-linear behavior, which arises from non-local interactions due to the buckling or patterning of the microstructure. In the two-scale formulation, the effective behavior of the microstructure is captured with a representative volume element (RVE), and a homogenized effective continuum is considered on the macroscale. Although an effective continuum formulation is introduced, solving such two-scale models concurrently is still computationally demanding due to the many repeated solutions for each RVE at the microscale level. In this work, we propose a reduced-order model for the microscopic problem arising in second-order computational homogenization, using proper orthogonal decomposition and a novel hyperreduction method that is specifically tailored for this problem and inspired by the empirical cubature method. Two numerical examples are considered, in which the performance of the reduced-order model is carefully assessed by comparing its solutions with direct numerical simulations (entirely resolving the underlying microstructure) and the full second-order computational homogenization model. The reduced-order model is able to approximate the result of the full computational homogenization well, provided that the training data is representative for the problem at hand. Any remaining errors, when compared with the direct numerical simulation, can be attributed to the inherent approximation errors in the computational homogenization scheme. In terms of single-threaded run time, speed-ups on the order of 100 are achieved with the reduced-order model as compared to direct numerical simulations.
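A minimal POD sketch of the reduced-basis construction (snapshot SVD with energy-based truncation); the hyperreduction step, which selects integration points in the spirit of the empirical cubature method, is omitted, and the random snapshot matrix is only a stand-in for actual RVE solutions.

```python
# POD basis construction: SVD of the snapshot matrix, truncated to the modes
# that capture most of the snapshot energy.
import numpy as np

rng = np.random.default_rng(0)
n_dof, n_snap = 5000, 200
snapshots = rng.normal(size=(n_dof, n_snap))        # stand-in for RVE displacement solutions

U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.9999)) + 1        # truncate at 99.99% of the energy
basis = U[:, :r]                                    # reduced basis Phi

u_reduced = basis.T @ snapshots[:, 0]               # reduced coordinates of one snapshot
u_approx = basis @ u_reduced                        # reconstruction in the full space
```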
The state transition algorithm (STA) is a metaheuristic method for global optimization. Recently, a modified STA named the parameter optimal state transition algorithm (POSTA) was proposed. In POSTA, the performance of the expansion, rotation and axesion operators is optimized through a parameter selection mechanism. However, due to insufficient utilization of historical information, POSTA still suffers from slow convergence and low solution accuracy on specific problems. To make better use of the historical information, Nelder-Mead (NM) simplex search and quadratic interpolation (QI) are integrated into POSTA. The enhanced POSTA is tested on 14 benchmark functions in 20-D, 30-D and 50-D search spaces. An experimental comparison with several competitive metaheuristic methods demonstrates the effectiveness of the proposed method.
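The two history-based ingredients can be sketched in isolation: a coordinate-wise quadratic interpolation through the three best solutions found so far, and a Nelder-Mead refinement of the resulting candidate (here via scipy on a sphere test function). How these steps are interleaved with the expansion, rotation and axesion operators follows the paper; the sketch below only shows the building blocks.

```python
# Building blocks only: QI candidate from the three best solutions, followed by
# a local Nelder-Mead refinement.
import numpy as np
from scipy.optimize import minimize

def quadratic_interpolation(x1, x2, x3, f1, f2, f3):
    # coordinate-wise minimiser of the parabola through (x_i, f_i)
    num = (x2**2 - x3**2) * f1 + (x3**2 - x1**2) * f2 + (x1**2 - x2**2) * f3
    den = 2.0 * ((x2 - x3) * f1 + (x3 - x1) * f2 + (x1 - x2) * f3)
    return np.where(np.abs(den) > 1e-12, num / den, x1)

def sphere(x):
    return float(np.sum(x**2))

rng = np.random.default_rng(0)
xs = [rng.uniform(-5, 5, size=20) for _ in range(3)]          # three best solutions (stand-ins)
fs = [sphere(x) for x in xs]
x_qi = quadratic_interpolation(*xs, *fs)                      # candidate built from history
x_nm = minimize(sphere, x_qi, method="Nelder-Mead").x         # local simplex refinement
print(sphere(x_qi), sphere(x_nm))
```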
Most state-of-the-art machine learning techniques revolve around the optimisation of loss functions. Defining appropriate loss functions is therefore critical to successfully solving problems in this field. We present a survey of the most commonly used loss functions for a wide range of different applications, divided into classification, regression, ranking, sample generation and energy-based modelling. Overall, we introduce 33 different loss functions and organise them into an intuitive taxonomy. Each loss function is given a theoretical backing and we describe where it is best used. This survey aims to provide a reference of the most essential loss functions for both beginner and advanced machine learning practitioners.
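For concreteness, three of the most common surveyed losses written out for a single example: squared error (regression), binary cross-entropy (classification) and hinge loss (margin-based classification).

```python
# Elementwise definitions of three standard losses.
import numpy as np

def squared_error(y_true, y_pred):
    return (y_true - y_pred) ** 2

def binary_cross_entropy(y_true, p_pred, eps=1e-12):
    p = np.clip(p_pred, eps, 1 - eps)
    return -(y_true * np.log(p) + (1 - y_true) * np.log(1 - p))

def hinge(y_true_pm1, score):              # labels in {-1, +1}, raw model score
    return np.maximum(0.0, 1.0 - y_true_pm1 * score)

print(squared_error(2.0, 1.5), binary_cross_entropy(1.0, 0.9), hinge(-1.0, 0.3))
```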
We derive information-theoretic generalization bounds for supervised learning algorithms based on the information contained in predictions rather than in the output of the training algorithm. These bounds improve over the existing information-theoretic bounds, are applicable to a wider range of algorithms, and solve two key challenges: (a) they give meaningful results for deterministic algorithms and (b) they are significantly easier to estimate. We show experimentally that the proposed bounds closely follow the generalization gap in practical scenarios for deep learning.
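For reference, a minimal statement of the classical weight-based bound that prediction-based bounds of this kind are compared against; the subgaussian assumption and the notation $L_\mu$ (population risk) and $L_S$ (empirical risk) follow the standard setup rather than anything stated in this abstract.

```latex
% Classical weight-based information-theoretic bound (Xu & Raginsky, 2017):
% for a \sigma-subgaussian loss and a training set S of n i.i.d. samples,
% the algorithm output W satisfies
\[
  \bigl|\,\mathbb{E}\!\left[L_\mu(W) - L_S(W)\right]\bigr|
  \;\le\; \sqrt{\frac{2\sigma^{2}\, I(W;S)}{n}} .
\]
% Prediction-based bounds replace the weight-sample information I(W;S) with
% information carried by the predictions, which stays finite for deterministic
% algorithms and is easier to estimate.
```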
The remarkable practical success of deep learning has revealed some major surprises from a theoretical perspective. In particular, simple gradient methods easily find near-optimal solutions to non-convex optimization problems, and despite giving a near-perfect fit to training data without any explicit effort to control model complexity, these methods exhibit excellent predictive accuracy. We conjecture that specific principles underlie these phenomena: that overparametrization allows gradient methods to find interpolating solutions, that these methods implicitly impose regularization, and that overparametrization leads to benign overfitting. We survey recent theoretical progress that provides examples illustrating these principles in simpler settings. We first review classical uniform convergence results and why they fall short of explaining aspects of the behavior of deep learning methods. We give examples of implicit regularization in simple settings, where gradient methods lead to minimal norm functions that perfectly fit the training data. Then we review prediction methods that exhibit benign overfitting, focusing on regression problems with quadratic loss. For these methods, we can decompose the prediction rule into a simple component that is useful for prediction and a spiky component that is useful for overfitting but, in a favorable setting, does not harm prediction accuracy. We focus specifically on the linear regime for neural networks, where the network can be approximated by a linear model. In this regime, we demonstrate the success of gradient flow, and we consider benign overfitting with two-layer networks, giving an exact asymptotic analysis that precisely demonstrates the impact of overparametrization. We conclude by highlighting the key challenges that arise in extending these insights to realistic deep learning settings.
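The implicit-regularization claim in the linear regime can be checked in a few lines: for overparametrized least squares, gradient descent initialised at zero converges to the minimum $\ell_2$-norm interpolant, which is available in closed form through the pseudoinverse. The dimensions and signal below are arbitrary toy choices.

```python
# Toy check: GD from zero on overparametrized least squares converges to the
# minimum-norm interpolating solution given by the pseudoinverse.
import numpy as np

rng = np.random.default_rng(0)
n, d = 50, 500                                   # many more parameters than samples
X = rng.normal(size=(n, d))
theta_star = np.zeros(d); theta_star[:5] = 1.0   # simple "signal" component
y = X @ theta_star + 0.1 * rng.normal(size=n)

theta_mn = np.linalg.pinv(X) @ y                 # minimum-norm interpolating solution

theta_gd = np.zeros(d)                           # gradient descent from zero
for _ in range(20000):
    theta_gd -= 1e-3 * X.T @ (X @ theta_gd - y) / n

print(np.linalg.norm(X @ theta_mn - y))          # ~0: interpolates the training data
print(np.linalg.norm(theta_gd - theta_mn))       # GD converges to the min-norm solution
```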