Many important high-dimensional dynamical systems exhibit complex chaotic behaviour. Their complexity means that their dynamics are necessarily comprehended under strong reducing assumptions. It is therefore important to have a clear picture of these reducing assumptions' range of validity. The highly influential chaotic hypothesis of Gallavotti and Cohen states that the large-scale dynamics of high-dimensional systems are effectively hyperbolic, which implies many felicitous statistical properties. We demonstrate, contrary to the chaotic hypothesis, the existence of non-hyperbolic large-scale dynamics in a mean-field coupled system. To do this we reduce the system to its thermodynamic limit, which we approximate numerically with a Chebyshev Galerkin transfer operator discretisation. This enables us to obtain a high precision estimate of a homoclinic tangency, implying a failure of hyperbolicity. This non-hyperbolic behaviour is expected to be robust under perturbation. As a result, the chaotic hypothesis should not be assumed to hold in all systems, and a better understanding of the domain of its validity is required.
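As a toy illustration of a transfer-operator discretisation of this kind (our example, not the paper's mean-field system), the sketch below discretises the transfer operator of the doubling map by collocation at Chebyshev points, a simplified stand-in for a Chebyshev Galerkin scheme; the leading eigenvalue recovers the invariant-density eigenvalue 1.

```python
import numpy as np

# Toy illustration (ours, not the paper's mean-field system): discretise
# the transfer operator of the doubling map T(x) = 2x mod 1 by collocation
# at Chebyshev points -- a simplified stand-in for a Chebyshev Galerkin
# scheme. For this map, (L f)(x) = (f(x/2) + f((x+1)/2)) / 2.
n = 16
theta = (2 * np.arange(n) + 1) * np.pi / (2 * n)
nodes = 0.5 * (np.cos(theta) + 1.0)      # Chebyshev points mapped to [0, 1]

def cheb_interp(values, x):
    """Evaluate the degree-(n-1) Chebyshev interpolant of `values` at x."""
    coeffs = np.polynomial.chebyshev.chebfit(2 * nodes - 1, values, n - 1)
    return np.polynomial.chebyshev.chebval(2 * x - 1, coeffs)

# Build the operator matrix by applying L to each cardinal basis function.
L = np.zeros((n, n))
for j in range(n):
    e = np.zeros(n)
    e[j] = 1.0
    L[:, j] = 0.5 * (cheb_interp(e, nodes / 2) + cheb_interp(e, (nodes + 1) / 2))

eigvals = np.linalg.eigvals(L)
lead = np.abs(eigvals).max()   # should be ~1: the invariant-density eigenvalue
```

For the doubling map the invariant density is Lebesgue, so the discretised operator fixes the constant vector and the leading eigenvalue is 1 to machine precision; in the paper's setting the same machinery is applied to a far more complex self-consistent operator.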
Analytical understanding of how low-dimensional latent features reveal themselves in large-dimensional data is still lacking. We study this by defining a linear latent feature model with additive noise constructed from probabilistic matrices, and analytically and numerically computing the statistical distributions of pairwise correlations and eigenvalues of the correlation matrix. This allows us to resolve the latent feature structure across a wide range of data regimes set by the number of recorded variables, observations, latent features and the signal-to-noise ratio. We find a characteristic imprint of latent features in the distribution of correlations and eigenvalues and provide an analytic estimate for the boundary between signal and noise even in the absence of a clear spectral gap.
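A minimal numerical version of such a model is easy to set up. The sketch below (our construction, with hypothetical parameter choices) generates rank-$K$ latent signal plus additive noise and counts correlation-matrix eigenvalues above the Marchenko–Pastur bulk edge, a standard stand-in for a signal/noise boundary.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical instance of a linear latent feature model (names are ours):
# N recorded variables, T observations, K latent features, additive noise.
N, T, K, snr = 200, 1000, 3, 2.0
W = rng.normal(size=(N, K))              # loading matrix
F = rng.normal(size=(K, T))              # latent feature time courses
X = snr * (W @ F) / np.sqrt(K) + rng.normal(size=(N, T))

C = np.corrcoef(X)                       # N x N pairwise correlation matrix
eigs = np.sort(np.linalg.eigvalsh(C))[::-1]

# Marchenko-Pastur bulk edge for pure noise at aspect ratio q = N/T,
# used here as a rough boundary between signal and noise eigenvalues.
q = N / T
mp_edge = (1 + np.sqrt(q)) ** 2
n_signal = int(np.sum(eigs > mp_edge))   # eigenvalues above the noise edge
```

With this strong signal-to-noise ratio the count matches the number of planted features; the regime the abstract targets is precisely where such a clean separation degrades.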
We introduce a new test for conditional independence which is based on what we call the weighted generalised covariance measure (WGCM). It is an extension of the recently introduced generalised covariance measure (GCM). To test the null hypothesis of X and Y being conditionally independent given Z, our test statistic is a weighted form of the sample covariance between the residuals of nonlinearly regressing X and Y on Z. We propose different variants of the test for both univariate and multivariate X and Y. We give conditions under which the tests yield the correct type I error rate. Finally, we compare our novel tests to the original GCM in simulations and on real data sets. Typically, our tests have power against a wider class of alternatives than the GCM. This comes at the cost of having less power against alternatives for which the GCM already works well.
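A schematic version of the statistic, under our simplifying choices (polynomial regression as the nonlinear regressor, a single weight function, univariate variables), can be written as follows; the sign-flipping conditional covariance in the example is exactly the kind of alternative that an unweighted covariance averages away.

```python
import math
import numpy as np

def wgcm_stat(X, Y, Z, weight, regress):
    """Sketch of a weighted-GCM-style statistic (our simplification):
    the studentised, weighted sample covariance of the residuals from
    regressing X on Z and Y on Z, compared to a standard normal."""
    rx = X - regress(Z, X)                 # residuals of X on Z
    ry = Y - regress(Z, Y)                 # residuals of Y on Z
    R = weight(Z) * rx * ry                # weighted residual products
    T = math.sqrt(len(R)) * R.mean() / R.std(ddof=1)
    p = math.erfc(abs(T) / math.sqrt(2))   # two-sided normal p-value
    return T, p

def poly_regress(Z, V, deg=3):
    """Cubic polynomial fit as a stand-in nonlinear regression."""
    return np.polyval(np.polyfit(Z, V, deg), Z)

# Alternative whose conditional covariance flips sign with Z: the
# unweighted statistic averages it to zero, while w(z) = sign(z) does not.
rng = np.random.default_rng(1)
Z = rng.normal(size=2000)
e = rng.normal(size=2000)
X = Z + e
Y = Z + np.sign(Z) * e + 0.5 * rng.normal(size=2000)

T_gcm, p_gcm = wgcm_stat(X, Y, Z, lambda z: np.ones_like(z), poly_regress)
T_w, p_w = wgcm_stat(X, Y, Z, np.sign, poly_regress)
```

Here the weighted statistic is far larger in magnitude than the unweighted one, illustrating the wider class of detectable alternatives at the cost described above.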
Multilevel methods are among the most efficient numerical methods for solving large-scale linear systems that arise from discretized partial differential equations. The fundamental module of such methods is a two-level procedure, which consists of compatible relaxation and coarse-level correction. Regarding two-level convergence theory, most previous works focus on the case of an exact (Galerkin) coarse solver. In practice, however, it is often too costly to solve the Galerkin coarse-level system exactly when its size is relatively large. Compared with the exact case, the convergence theory of inexact two-level methods is of more practical significance, yet it remains underdeveloped in the literature, especially when nonlinear coarse solvers are used. In this paper, we establish a general framework for analyzing the convergence of inexact two-level methods, in which the coarse-level system is solved approximately by an inner iterative procedure. The framework allows us to use various (linear, nonlinear, deterministic, randomized, or hybrid) solvers in the inner iterations, as long as the corresponding accuracy estimates are available.
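A minimal sketch of such an inexact two-level method, with choices that are ours for illustration: the 1D Poisson equation, weighted Jacobi smoothing, a Galerkin coarse operator, and loosely converged conjugate gradients as the inner coarse solver.

```python
import numpy as np

def poisson1d(n):
    """1D Poisson matrix (Dirichlet) on n interior points, h = 1/(n+1)."""
    h = 1.0 / (n + 1)
    return (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
            - np.diag(np.ones(n - 1), -1)) / h**2

def prolongation(nc):
    """Linear interpolation from nc coarse points to 2*nc+1 fine points."""
    P = np.zeros((2 * nc + 1, nc))
    for j in range(nc):
        P[2 * j, j] += 0.5
        P[2 * j + 1, j] = 1.0
        P[2 * j + 2, j] += 0.5
    return P

def cg(A, b, tol):
    """Plain CG run to a loose relative tolerance -- the 'inner iterative
    procedure' standing in for an exact coarse solve."""
    x = np.zeros_like(b); r = b.copy(); p = r.copy()
    rs = r @ r; b0 = np.sqrt(rs)
    while np.sqrt(rs) > tol * b0:
        Ap = A @ p
        a = rs / (p @ Ap)
        x += a * p; r -= a * Ap
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

nc = 31
A = poisson1d(2 * nc + 1)
P = prolongation(nc)
Ac = P.T @ A @ P                      # Galerkin coarse-level operator
D = np.diag(A)

def two_level_cycle(x, b, nu=2, omega=2 / 3, coarse_tol=1e-2):
    for _ in range(nu):               # pre-smoothing: weighted Jacobi
        x = x + omega * (b - A @ x) / D
    x = x + P @ cg(Ac, P.T @ (b - A @ x), coarse_tol)  # inexact coarse solve
    for _ in range(nu):               # post-smoothing
        x = x + omega * (b - A @ x) / D
    return x

rng = np.random.default_rng(2)
b = rng.normal(size=2 * nc + 1)
x = np.zeros_like(b)
for _ in range(30):
    x = two_level_cycle(x, b)
rel_res = np.linalg.norm(b - A @ x) / np.linalg.norm(b)
```

Even with the coarse system solved only to a relative tolerance of $10^{-2}$, the cycle contracts the residual at each step; the framework in the abstract is about quantifying exactly how much inner inexactness such a method tolerates.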
Single-site dynamics are a canonical class of Markov chains for sampling from high-dimensional probability distributions, e.g. those represented by graphical models. We give a simple and generic parallel algorithm that can faithfully simulate single-site dynamics. When the chain asymptotically satisfies the $\ell_p$-Dobrushin condition, specifically, when the Dobrushin influence matrix has constantly bounded $\ell_p$-induced operator norm for an arbitrary $p\in[1,\infty]$, the parallel simulation of $N$ steps of single-site updates succeeds within $O\left({N}/{n}+\log n\right)$ depth of parallel computing using $\tilde{O}(m)$ processors, where $n$ is the number of sites and $m$ is the size of the graphical model. Since the Dobrushin condition is almost always satisfied asymptotically by mixing chains, this parallel simulation algorithm essentially transforms single-site dynamics with optimal $O(n\log n)$ mixing time into RNC algorithms for sampling. In particular, we obtain RNC samplers for the Ising models on general graphs in the uniqueness regime, and for satisfying solutions of CNF formulas in a local lemma regime. With non-adaptive simulated annealing, these RNC samplers can be routinely transformed into RNC algorithms for approximate counting. A key step in our parallel simulation algorithm is a so-called "universal coupling" procedure, which tries to simultaneously couple all distributions over the same sample space. We construct such a universal coupling in which, for every pair of distributions, the coupled probability is at least their Jaccard similarity, and we prove that this is optimal in the worst case. The universal coupling and its applications are of independent interest.
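To illustrate the idea of coupling all distributions through shared randomness, here is a simple consistent-sampling construction in the same spirit (exponential races; ours, not necessarily the paper's coupling): shared Exp(1) clocks are mapped to a sample from any distribution $p$ via $\arg\min_i E_i/p_i$, whose marginal is exactly $p$. For the pair below, the empirical agreement rate exceeds their Jaccard similarity.

```python
import numpy as np

# Shared randomness: one Exp(1) clock per outcome. Each distribution p
# maps the SAME clocks to a sample via argmin_i E_i / p_i; since
# E_i / p_i ~ Exp(p_i), the marginal of the argmin is exactly p.
rng = np.random.default_rng(3)
p = np.array([0.5, 0.3, 0.2])
q = np.array([0.4, 0.4, 0.2])

n = 200_000
E = rng.exponential(size=(n, 3))      # shared clocks for both samplers
i = (E / p).argmin(axis=1)            # samples from p
j = (E / q).argmin(axis=1)            # samples from q, coupled to i
agree = (i == j).mean()               # how often the coupling succeeds
marginal = np.bincount(i, minlength=3) / n

jaccard = np.minimum(p, q).sum() / np.maximum(p, q).sum()
```

For this pair the agreement rate sits between the Jaccard similarity and the maximal-coupling bound $\sum_i \min(p_i, q_i)$; the paper's contribution is a universal coupling guaranteeing at least the Jaccard similarity for every pair, together with matching worst-case optimality.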
The Federated Learning workflow of training a centralized model with distributed data is growing in popularity. However, until recently, this was the realm of contributing clients with similar computing capabilities. The fast-expanding IoT space, with data being generated and processed at the edge, is encouraging more effort to extend federated learning to heterogeneous systems. Previous approaches distribute smaller models to clients for distilling the characteristics of local data, but the problem of training with vast amounts of local data on the client side still remains. We propose to reduce the amount of local data that is needed to train a global model. We do this by splitting the model into a lower part for generic feature extraction and an upper part that is more sensitive to the characteristics of the local data. We reduce the amount of data needed to train the upper part by clustering the local data and selecting only the most representative samples for training. Our experiments show that, with our split network approach, less than 1% of the local data can transfer the characteristics of the client data to the global model. These preliminary results encourage continued work towards federated learning with reduced amounts of data on devices that have limited computing resources but hold critical information to contribute to the global model.
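A hypothetical sketch of the sample-selection step (all function names and parameters are ours, not the paper's): cluster the local data with plain k-means and keep only the single sample nearest each centroid as the client's training set.

```python
import numpy as np

def farthest_point_init(X, k):
    """Deterministic seeding: greedily pick k mutually far-apart samples."""
    idx = [0]
    for _ in range(k - 1):
        d = np.linalg.norm(X[:, None] - X[idx][None], axis=2).min(axis=1)
        idx.append(int(d.argmax()))
    return X[idx].copy()

def select_representatives(X, k, iters=20):
    """Hypothetical sketch of the data-reduction step: run plain k-means
    over the local data, then keep only the sample nearest each centroid."""
    centroids = farthest_point_init(X, k)
    for _ in range(iters):            # Lloyd iterations
        labels = np.linalg.norm(X[:, None] - centroids[None],
                                axis=2).argmin(axis=1)
        for c in range(k):
            if np.any(labels == c):
                centroids[c] = X[labels == c].mean(axis=0)
    d = np.linalg.norm(X[:, None] - centroids[None], axis=2)
    return sorted(set(d.argmin(axis=0)))   # one representative per cluster

rng = np.random.default_rng(4)
# three well-separated synthetic "local data" clusters of 100 samples each
X = np.concatenate([rng.normal(m, 0.1, size=(100, 2)) for m in (0.0, 3.0, 6.0)])
reps = select_representatives(X, k=3)      # 3 samples stand in for 300
```

On this synthetic client, three representatives (1% of the data) summarise the local distribution; the paper's experiments make the analogous claim for real client data under the split-network training scheme.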
We investigate identifying the boundary of a domain from sample points in the domain. We introduce new estimators for the normal vector to the boundary, the distance of a point to the boundary, and a test for whether a point lies within a boundary strip. The estimators can be efficiently computed and are more accurate than those in the literature. We provide rigorous error estimates for the estimators. Furthermore, we use the detected boundary points to solve boundary-value problems for PDEs on point clouds. We prove error estimates for the Laplace and eikonal equations on point clouds. Finally, we provide a range of numerical experiments illustrating the performance of our boundary estimators, applications to PDEs on point clouds, and tests on image data sets.
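A simple estimator of this flavour (a common construction, not necessarily the authors' exact one) uses the offset between a point and the mean of its $r$-neighbourhood: the offset is near zero for interior points, whose neighbourhoods are symmetric, and points outward near the boundary, where the neighbourhood is truncated.

```python
import numpy as np

# Point cloud: a regular grid restricted to the unit disk.
g = np.arange(-1, 1.0001, 0.02)
P = np.array([(x, y) for x in g for y in g if x * x + y * y <= 1.0])

def boundary_offset(x, P, r=0.1):
    """Offset between x and the mean of its r-neighbours: ~0 in the
    interior, large and outward-pointing near the boundary."""
    nbrs = P[np.linalg.norm(P - x, axis=1) <= r]
    return x - nbrs.mean(axis=0)

x_b = np.array([0.96, 0.0])            # a point near the boundary
v = boundary_offset(x_b, P)
n_hat = v / np.linalg.norm(v)          # estimated outward normal at x_b
v_int = boundary_offset(np.array([0.0, 0.0]), P)   # interior: offset ~ 0
```

At $(0.96, 0)$ the true outward normal of the disk is close to $(1, 0)$, and the estimate aligns with it; thresholding the offset norm gives a crude boundary-strip test of the kind the abstract refines with rigorous error estimates.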
The use of pessimism when reasoning about datasets lacking exhaustive exploration has recently gained prominence in offline reinforcement learning. Despite the robustness it adds to the algorithm, overly pessimistic reasoning can be equally damaging by precluding the discovery of good policies, which is an issue for the popular bonus-based pessimism. In this paper, we introduce the notion of Bellman-consistent pessimism for general function approximation: instead of calculating a point-wise lower bound for the value function, we implement pessimism at the initial state over the set of functions consistent with the Bellman equations. Our theoretical guarantees only require Bellman closedness, as standard in the exploratory setting, in which case bonus-based pessimism fails to provide guarantees. Even in the special case of linear function approximation where stronger expressivity assumptions hold, our result improves upon a recent bonus-based approach by $\mathcal{O}(d)$ in its sample complexity when the action space is finite. Remarkably, our algorithms automatically adapt to the best bias-variance tradeoff in hindsight, whereas most prior approaches require tuning extra hyperparameters a priori.
The existence of simple, uncoupled no-regret dynamics that converge to correlated equilibria in normal-form games is a celebrated result in the theory of multi-agent systems. Specifically, it has been known for more than 20 years that when all players seek to minimize their internal regret in a repeated normal-form game, the empirical frequency of play converges to a normal-form correlated equilibrium. Extensive-form (that is, tree-form) games generalize normal-form games by modeling both sequential and simultaneous moves, as well as private information. Because of the sequential nature and the presence of partial information in the game, extensive-form correlation has significantly different properties from its normal-form counterpart, many of which are still open research directions. Extensive-form correlated equilibrium (EFCE) has been proposed as the natural extensive-form counterpart to normal-form correlated equilibrium. However, it was previously unknown whether EFCE emerges as the result of uncoupled agent dynamics. In this paper, we give the first uncoupled no-regret dynamics that converge to the set of EFCEs in $n$-player general-sum extensive-form games with perfect recall. First, we introduce a notion of trigger regret in extensive-form games, which extends that of internal regret in normal-form games. When each player has low trigger regret, the empirical frequency of play is close to an EFCE. Then, we give an efficient no-trigger-regret algorithm. Our algorithm decomposes trigger regret into local subproblems at each decision point for the player, and constructs a global strategy for the player from the local solutions at each decision point.
Fashion is a complex social phenomenon. People follow fashion styles from demonstrations by experts or fashion icons. However, for a machine agent, learning to imitate fashion experts from demonstrations can be challenging, especially for complex styles in environments with high-dimensional, multimodal observations. Most existing research on fashion outfit composition utilizes supervised learning methods to mimic the behaviors of style icons. These methods suffer from distribution shift: because the agent greedily imitates some given outfit demonstrations, it can drift from one style to another given subtle differences. In this work, we propose an adversarial inverse reinforcement learning formulation with a hierarchical multimodal representation (HM-AIRL) to recover reward functions during the imitation process. The hierarchical joint representation can more comprehensively model the expert-composed outfit demonstrations to recover the reward function. We demonstrate that the proposed HM-AIRL model is able to recover reward functions that are robust to changes in multimodal observations, enabling us to learn policies under significant variation between different styles.
Recent years have witnessed significant progress in deep Reinforcement Learning (RL). Empowered by large-scale neural networks, carefully designed architectures, novel training algorithms and massively parallel computing devices, researchers are able to attack many challenging RL problems. However, in machine learning, more training power comes with a potential risk of more overfitting. As deep RL techniques are being applied to critical problems such as healthcare and finance, it is important to understand the generalization behaviors of the trained agents. In this paper, we conduct a systematic study of standard RL agents and find that they can overfit in various ways. Moreover, overfitting can happen "robustly": commonly used techniques in RL that add stochasticity do not necessarily prevent or detect overfitting. In particular, the same agents and learning algorithms can have drastically different test performance, even when all of them achieve optimal rewards during training. These observations call for more principled and careful evaluation protocols in RL. We conclude with a general discussion of overfitting in RL and a study of generalization behaviors from the perspective of inductive bias.