This paper investigates the convergence time of log-linear learning to an $\epsilon$-efficient Nash equilibrium (NE) in potential games. In such games, an efficient NE is defined as the maximizer of the potential function. Existing results are limited to potential games with stringent structural assumptions and entail exponential convergence times in $1/\epsilon$. Addressing this gap, we tackle general potential games and prove the first finite-time convergence to an $\epsilon$-efficient NE. In particular, by using a problem-dependent analysis, our bound depends polynomially on $1/\epsilon$. Furthermore, we provide two extensions of our convergence result: first, we show that a variant of log-linear learning that requires a factor of $A$ less feedback on utilities per round enjoys a similar convergence time; second, we demonstrate that our convergence guarantee is robust to small perturbations of log-linear learning, such as alterations in the learning rule or noise-corrupted utilities.
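To fix ideas, the following is a minimal Python sketch of the log-linear learning rule itself in a small common-interest potential game; the $2 \times 2$ potential, the inverse temperature, and the round count are illustrative assumptions and are not taken from the paper.

\begin{verbatim}
# Log-linear learning in a 2-player common-interest potential game.
import numpy as np

rng = np.random.default_rng(0)

# Both players receive phi[a0, a1]; the game is an exact potential game
# with potential phi, maximized at the efficient NE (1, 1).
phi = np.array([[1.0, 0.0],
                [0.0, 2.0]])

beta = 5.0        # inverse temperature (the learning parameter)
a = [0, 0]        # joint action, initialized arbitrarily

for _ in range(10_000):
    i = rng.integers(2)                            # pick a player at random
    u = phi[:, a[1]] if i == 0 else phi[a[0], :]   # utilities of i's actions
    p = np.exp(beta * (u - u.max()))               # Gibbs distribution (stabilized)
    p /= p.sum()
    a[i] = rng.choice(2, p=p)                      # revise action stochastically

print("final joint action:", a)   # concentrates near the potential maximizer
\end{verbatim}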
We discuss a connection between a generative model, called the diffusion model, and nonequilibrium thermodynamics for the Fokker-Planck equation, called stochastic thermodynamics. Based on techniques from stochastic thermodynamics, we derive the speed-accuracy trade-off for diffusion models, a trade-off relationship between the speed and accuracy of data generation. Our result implies that the entropy production rate in the forward process affects the errors in data generation. From a stochastic thermodynamic perspective, our results provide quantitative insight into how best to generate data in diffusion models. The optimal learning protocol is introduced by the conservative force in stochastic thermodynamics and the geodesic of the space of probability distributions under the 2-Wasserstein distance in optimal transport theory. We numerically illustrate the validity of the speed-accuracy trade-off for diffusion models with different noise schedules, such as the cosine schedule, conditional optimal transport, and optimal transport.
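As a concrete anchor for the forward process whose Fokker-Planck equation is analyzed thermodynamically, here is a minimal Euler-Maruyama sketch of the variance-preserving forward SDE $\mathrm{d}X_t = -\tfrac{1}{2}\beta(t) X_t \,\mathrm{d}t + \sqrt{\beta(t)}\,\mathrm{d}W_t$; the linear $\beta(t)$ schedule (rather than, e.g., the cosine schedule mentioned above) and all constants are illustrative assumptions.

\begin{verbatim}
# Euler-Maruyama simulation of the forward (noising) process of a
# diffusion model; schedule and constants are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def beta(t, b_min=0.1, b_max=20.0):
    """Linear noise schedule on t in [0, 1]."""
    return b_min + (b_max - b_min) * t

T, n_steps, n_samples = 1.0, 1000, 5000
dt = T / n_steps
x = rng.normal(loc=3.0, scale=0.2, size=n_samples)  # toy "data" distribution

for k in range(n_steps):
    t = k * dt
    x = (x - 0.5 * beta(t) * x * dt
         + np.sqrt(beta(t) * dt) * rng.normal(size=n_samples))

# The forward process carries the data distribution to (approximately)
# the standard normal prior.
print("mean %.3f, std %.3f" % (x.mean(), x.std()))
\end{verbatim}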
Linear causal disentanglement is a recent method in causal representation learning to describe a collection of observed variables via latent variables with causal dependencies between them. It can be viewed as a generalization of both independent component analysis and linear structural equation models. We study the identifiability of linear causal disentanglement, assuming access to data under multiple contexts, each given by an intervention on a latent variable. We show that one perfect intervention on each latent variable is sufficient and, in the worst case, necessary to recover parameters under perfect interventions, generalizing previous work to allow more latent than observed variables. We give a constructive proof that computes parameters via a coupled tensor decomposition. For soft interventions, we find the equivalence class of latent graphs and parameters that are consistent with the observed data, via the study of a system of polynomial equations. Our results hold assuming the existence of non-zero higher-order cumulants, which implies non-Gaussianity of the variables.
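For concreteness, the following sketch generates data from the model class in question: a linear SEM among latent variables $Z$ observed through a linear map, with a perfect intervention on one latent node and non-Gaussian noise (so higher-order cumulants are non-zero). Dimensions, edge weights, and noise laws are illustrative assumptions; this is the data model, not the paper's recovery algorithm.

\begin{verbatim}
# Data generation for linear causal disentanglement (illustrative).
import numpy as np

rng = np.random.default_rng(0)

A = np.array([[0.0, 0.0, 0.0],       # latent DAG: Z = A Z + E,
              [0.8, 0.0, 0.0],       # so Z = (I - A)^{-1} E
              [0.0, -0.5, 0.0]])
H = rng.normal(size=(2, 3))          # 2 observed, 3 latent variables

def sample(n, intervene=None):
    # Centered exponential noise: non-Gaussian, non-zero higher cumulants.
    E = rng.exponential(1.0, size=(3, n)) - 1.0
    A_int = A.copy()
    if intervene is not None:
        A_int[intervene, :] = 0.0    # perfect intervention: cut all parents
        E[intervene, :] = rng.exponential(2.0, size=n) - 2.0  # new mechanism
    Z = np.linalg.solve(np.eye(3) - A_int, E)
    return H @ Z                     # observed data X = H Z

X_obs = sample(10_000)               # observational context
X_int = sample(10_000, intervene=1)  # context with intervention on Z_2
print(X_obs.shape, X_int.shape)
\end{verbatim}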
Meta-learning, or "learning to learn," is a subfield of machine learning whose goal is to develop models and algorithms that can learn from various tasks and improve their learning process over time. Unlike traditional machine learning methods, which focus on learning a single specific task, meta-learning aims to leverage experience from previous tasks to enhance future learning. This approach is particularly beneficial in scenarios where the available data for a new task is limited but abundant data exists from related tasks. By extracting and utilizing the underlying structure and patterns shared across these tasks, meta-learning algorithms can achieve faster convergence and better performance with less data. The following notes are mainly inspired by \cite{vanschoren2018meta}, \cite{baxter2019learning}, and \cite{maurer2005algorithmic}.
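As a toy illustration of leveraging experience across tasks, here is a minimal Reptile-style sketch (Reptile is one concrete meta-learning algorithm, used here purely for illustration; it is not discussed in the references above) on a family of 1-D linear regression tasks whose slopes share a common center.

\begin{verbatim}
# Reptile-style meta-learning on toy 1-D regression tasks (illustrative).
import numpy as np

rng = np.random.default_rng(0)

def sample_task():
    """A task is a slope drawn near the shared value 2.0."""
    return 2.0 + 0.3 * rng.normal()

def inner_sgd(w, w_task, steps=10, lr=0.05, n=20):
    """A few SGD steps on one task's squared loss, starting from w."""
    for _ in range(steps):
        x = rng.normal(size=n)
        grad = np.mean(2 * (w * x - w_task * x) * x)
        w -= lr * grad
    return w

w_meta, meta_lr = 0.0, 0.1
for _ in range(500):
    w_task = sample_task()
    w_adapted = inner_sgd(w_meta, w_task)
    w_meta += meta_lr * (w_adapted - w_meta)   # Reptile meta-update

print("meta-initialization: %.2f (shared slope is 2.0)" % w_meta)
\end{verbatim}

The meta-initialization converges toward the shared task structure (the common slope), so a few inner SGD steps adapt quickly to any new task from the family.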
We propose an extremely versatile approach to address a large family of matrix nearness problems, possibly with additional linear constraints. Our method is based on splitting a matrix nearness problem into two nested optimization problems, of which the inner one can be solved either exactly or cheaply, while the outer one can be recast as an unconstrained optimization task over a smooth real Riemannian manifold. We observe that this paradigm applies to many matrix nearness problems of practical interest appearing in the literature, thus revealing that they are equivalent in this sense to a Riemannian optimization problem. We also show that the objective function to be minimized on the Riemannian manifold can be discontinuous, thus requiring regularization techniques, and we give conditions for this to happen. Finally, we demonstrate the practical applicability of our method by implementing it for a number of matrix nearness problems that are relevant for applications and are currently considered very demanding in practice. Extensive numerical experiments demonstrate that our method often greatly outperforms its predecessors, including algorithms specifically designed for those particular problems.
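To illustrate the two-level paradigm on a classical instance (not the paper's own solver), consider the nearest singular matrix to $A$ in the Frobenius norm: parametrizing by a unit vector $v$ on the sphere, the inner problem "nearest $B$ with $Bv = 0$" has the closed form $B = A - (Av)v^{\top}$ at distance $\|Av\|$, so the outer problem is to minimize $\|Av\|^2$ over the sphere, a smooth Riemannian manifold. The bare-bones Riemannian gradient descent below is an illustrative sketch.

\begin{verbatim}
# Nearest singular matrix via nested optimization (illustrative sketch).
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(5, 5))

# Outer problem: minimize f(v) = ||A v||^2 over the unit sphere via
# projected (Riemannian) gradient descent with retraction by normalization.
v = rng.normal(size=5)
v /= np.linalg.norm(v)
step = 0.4 / np.linalg.norm(A, 2) ** 2      # safe step for grad = 2 A^T A v
for _ in range(5000):
    g = 2 * A.T @ (A @ v)                   # Euclidean gradient
    g = g - (g @ v) * v                     # project onto tangent space at v
    v = v - step * g
    v /= np.linalg.norm(v)                  # retract back onto the sphere

# Inner problem (closed form): nearest B with B v = 0.
B = A - np.outer(A @ v, v)
print("distance ||A - B||_F:", np.linalg.norm(A @ v))
print("check sigma_min(A):  ", np.linalg.svd(A, compute_uv=False)[-1])
\end{verbatim}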
Anderson acceleration (AA) is widely used for accelerating the convergence of an underlying fixed-point iteration $\bm{x}_{k+1} = \bm{q}( \bm{x}_{k} )$, $k = 0, 1, \ldots$, with $\bm{x}_k \in \mathbb{R}^n$, $\bm{q} \colon \mathbb{R}^n \to \mathbb{R}^n$. Despite AA's widespread use, relatively little is understood theoretically about the extent to which it may accelerate the underlying fixed-point iteration. To this end, we analyze a restarted variant of AA with a restart size of one, a method closely related to GMRES(1). We consider the case of $\bm{q}( \bm{x} ) = M \bm{x} + \bm{b}$ with matrix $M \in \mathbb{R}^{n \times n}$ either symmetric or skew-symmetric. For both classes of $M$ we compute the worst-case root-average asymptotic convergence factor of the AA method, partially relying on a conjecture in the symmetric setting, proving that it is strictly smaller than that of the underlying fixed-point iteration. For symmetric $M$, we show that the AA residual iteration corresponds to a fixed-point iteration for solving an eigenvector-dependent nonlinear eigenvalue problem (NEPv), and we show how this can result in the convergence factor strongly depending on the initial iterate, which we quantify exactly in certain special cases. Conversely, for skew-symmetric $M$ we show that the AA residual iteration is closely related to a power iteration for $M$, and how this results in the convergence factor being independent of the initial iterate. Supporting numerical results are given, which also indicate that the theory is applicable to the more general setting of nonlinear $\bm{q}$ with a Jacobian at the fixed point that is symmetric or skew-symmetric.
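The following is a minimal sketch of restarted AA with restart size one on an affine $\bm{q}(\bm{x}) = M\bm{x} + \bm{b}$ with symmetric $M$; within each cycle, one plain fixed-point step is followed by one AA mixing step whose coefficient minimizes the norm of the mixed residual. The matrix construction and iteration counts are illustrative assumptions.

\begin{verbatim}
# Restarted Anderson acceleration rAA(1) on an affine map (illustrative).
import numpy as np

rng = np.random.default_rng(0)
n = 20
Q = np.linalg.qr(rng.normal(size=(n, n)))[0]
M = Q @ np.diag(rng.uniform(-0.9, 0.9, size=n)) @ Q.T  # symmetric, rho < 1
b = rng.normal(size=n)
x_star = np.linalg.solve(np.eye(n) - M, b)             # exact fixed point

def q(x):
    return M @ x + b

x = rng.normal(size=n)
for _ in range(30):                 # each pass is one restart cycle
    r0 = q(x) - x                   # residual at the cycle's start
    x1 = q(x)                       # plain fixed-point step
    r1 = q(x1) - x1
    d = r1 - r0
    if d @ d < 1e-30:               # already converged
        break
    alpha = (r1 @ d) / (d @ d)      # minimizes ||r1 - alpha * (r1 - r0)||
    x = q(x1) - alpha * r1          # AA step: q(x1) - alpha*(q(x1) - q(x))

print("error after rAA(1):", np.linalg.norm(x - x_star))
\end{verbatim}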
A Variable Parameter (VP) analysis aims to give precise time complexity expressions of algorithms in which exponents appear solely in terms of variable parameters. A variable parameter is the number of objects with specific properties. Here we describe two VP-algorithms, an implicit enumeration scheme and a polynomial-time approximation scheme, for the strongly $NP$-hard problem of scheduling $n$ independent jobs with release and due times on one machine to minimize the maximum job completion time $C_{\max}$. Our variable parameters are the numbers of certain specially defined types of jobs. A partial solution without these jobs is constructed in low-degree polynomial time, and an exponential-time procedure (in the number of variable parameters) is carried out to augment it to a complete optimal solution. In the alternative time complexity expressions, the exponential dependence is solely on certain job parameters. By applying fixed-parameter analysis to these estimations, a polynomial-time dependence is achieved. Both intuitive probabilistic estimations and our extensive experimental study support our conjecture that the total number of variable parameters is far less than $n$ and that its ratio to $n$ converges to 0 asymptotically.
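To fix notation for the underlying one-machine problem, here is a sketch of the classical Schrage heuristic (extended Jackson's rule) for jobs with release times and delivery tails, with the objective the maximum full completion time; this standard heuristic is shown only for orientation and is not the paper's VP-algorithm. The job data are illustrative assumptions.

\begin{verbatim}
# Schrage heuristic for one machine, release times r and delivery tails q.
import heapq

def schrage(r, p, q):
    n = len(r)
    jobs = sorted(range(n), key=lambda j: r[j])   # by release time
    ready, i, t, cmax = [], 0, 0, 0
    while i < n or ready:
        while i < n and r[jobs[i]] <= t:          # release available jobs
            j = jobs[i]
            heapq.heappush(ready, (-q[j], j))     # largest tail first
            i += 1
        if not ready:                             # idle until next release
            t = r[jobs[i]]
            continue
        _, j = heapq.heappop(ready)
        t += p[j]                                 # run job j to completion
        cmax = max(cmax, t + q[j])                # completion plus tail
    return cmax

r = [0, 1, 3, 5]
p = [4, 2, 3, 1]
q = [5, 6, 2, 4]
print("Schrage C_max:", schrage(r, p, q))         # prints 12
\end{verbatim}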
We consider the problem of enumerating all minimal transversals (also called minimal hitting sets) of a hypergraph $\mathcal{H}$. An equivalent formulation of this problem, known as the \emph{transversal hypergraph} problem (or \emph{hypergraph dualization} problem), is to decide, given two hypergraphs, whether one corresponds to the set of minimal transversals of the other. The existence of a polynomial-time algorithm to solve this problem is a long-standing open question. In \cite{fredman_complexity_1996}, the authors present the first sub-exponential algorithm for the transversal hypergraph problem, which runs in quasi-polynomial time, making it unlikely that the problem is (co)NP-complete. In this paper, we show that when one of the two hypergraphs has bounded VC-dimension, the transversal hypergraph problem can be solved in polynomial time, or equivalently, that if $\mathcal{H}$ is a hypergraph of bounded VC-dimension, then there exists an incremental polynomial-time algorithm to enumerate its minimal transversals. This result generalizes most of the previously known polynomial cases in the literature, since almost all of them consider classes of hypergraphs of bounded VC-dimension. As a consequence, the hypergraph transversal problem is solvable in polynomial time for any class of hypergraphs closed under partial subhypergraphs. We also show that the proposed algorithm runs in quasi-polynomial time on general hypergraphs and in polynomial time if the conformality of the hypergraph is bounded, which yields one of the few known polynomial cases where the VC-dimension is unbounded.
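To pin down what is being enumerated, here is a brute-force sketch of minimal transversals on a tiny hypergraph: a transversal meets every hyperedge, and it is minimal if no proper subset is still a transversal. The exponential subset enumeration is only definitional; the result above gives an incremental polynomial-time algorithm for bounded VC-dimension.

\begin{verbatim}
# Brute-force enumeration of minimal transversals (definitional sketch).
from itertools import combinations

H = [{1, 2}, {2, 3}, {1, 3, 4}]                 # an illustrative hypergraph
V = set().union(*H)

def is_transversal(S):
    return all(S & e for e in H)                # S hits every hyperedge

def is_minimal(S):
    return is_transversal(S) and all(
        not is_transversal(S - {v}) for v in S  # no element is redundant
    )

minimal_transversals = [
    set(S) for k in range(len(V) + 1)
    for S in combinations(sorted(V), k)
    if is_minimal(set(S))
]
print(minimal_transversals)   # [{1, 2}, {1, 3}, {2, 3}, {2, 4}]
\end{verbatim}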
Prediction models are used to predict an outcome based on input variables. Missing data in input variables often occurs at model development and at prediction time. The missForestPredict R package proposes an adaptation of the missForest imputation algorithm that is fast, user-friendly and tailored for prediction settings. The algorithm iteratively imputes variables using random forests until a convergence criterion (unified for continuous and categorical variables and based on the out-of-bag error) is met. The imputation models are saved for each variable and iteration and can be applied later to new observations at prediction time. The missForestPredict package offers extended error monitoring, control over variables used in the imputation and custom initialization. This allows users to tailor the imputation to their specific needs. The missForestPredict algorithm is compared to mean/mode imputation, linear regression imputation, mice, k-nearest neighbours, bagging, miceRanger and IterativeImputer on eight simulated datasets with simulated missingness (48 scenarios) and eight large public datasets using different prediction models. missForestPredict provides competitive results in prediction settings within short computation times.
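Since missForestPredict is an R package, the following Python sketch only mirrors the missForest-style loop it builds on: initialize missing entries with column means, then repeatedly re-impute each incomplete column with a random forest trained on the remaining columns. The convergence criterion, categorical handling, and the saved per-iteration imputation models of the package are omitted; all data are simulated.

\begin{verbatim}
# missForest-style iterative imputation, continuous data only (sketch).
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def missforest_sketch(X, n_iter=5, seed=0):
    X = X.copy()
    miss = np.isnan(X)
    col_means = np.nanmean(X, axis=0)
    for j in range(X.shape[1]):                  # mean initialization
        X[miss[:, j], j] = col_means[j]
    for _ in range(n_iter):                      # re-impute until iter cap
        for j in range(X.shape[1]):
            if not miss[:, j].any():
                continue
            rf = RandomForestRegressor(n_estimators=50, random_state=seed)
            obs = ~miss[:, j]
            rf.fit(np.delete(X[obs], j, axis=1), X[obs, j])
            X[miss[:, j], j] = rf.predict(
                np.delete(X[miss[:, j]], j, axis=1))
    return X

rng = np.random.default_rng(0)
Z = rng.normal(size=(200, 3))
Z[:, 2] = Z[:, 0] + 0.1 * rng.normal(size=200)   # correlated column
Z[rng.random(Z.shape) < 0.2] = np.nan            # 20% missing at random
print(np.isnan(missforest_sketch(Z)).sum())      # 0: all entries imputed
\end{verbatim}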
Deep learning is usually described as an experiment-driven field and is under continual criticism for lacking theoretical foundations. This problem has been partially addressed by a large volume of literature, which has so far not been well organized. This paper reviews and organizes recent advances in deep learning theory. The literature is categorized into six groups: (1) complexity- and capacity-based approaches for analyzing the generalizability of deep learning; (2) stochastic differential equations and their dynamic systems for modelling stochastic gradient descent and its variants, which characterize the optimization and generalization of deep learning, partially inspired by Bayesian inference; (3) the geometrical structures of the loss landscape that drive the trajectories of the dynamic systems; (4) the roles of over-parameterization of deep neural networks from both positive and negative perspectives; (5) theoretical foundations of several special structures in network architectures; and (6) the increasingly intensive concerns about ethics and security and their relationships with generalizability.