We consider the scenario of supervised learning in Deep Learning (DL) networks, and exploit the freedom, familiar from differential geometry, in the choice of the Riemannian metric with respect to which a gradient descent flow is defined. In the standard approach to DL, the gradient flow on the space of parameters (weights and biases) is defined with respect to the Euclidean metric. Here, instead, we define the gradient flow with respect to the Euclidean metric in the output layer of the DL network. This naturally induces two modified versions of the gradient descent flow in parameter space, one adapted to the overparametrized setting and the other to the underparametrized setting. In the overparametrized case, we prove that, provided a rank condition holds, all orbits of the modified gradient descent drive the ${\mathcal L}^2$ cost to its global minimum at a uniform exponential convergence rate; one thereby obtains an a priori stopping time for any prescribed proximity to the global minimum. We point out relations of the latter to sub-Riemannian geometry. Moreover, we generalize the above framework to the situation in which the rank condition does not hold; in particular, we show that local equilibria can only exist if a rank loss occurs and that, generically, they are not isolated points but elements of a critical submanifold of parameter space.
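To make the overparametrized construction concrete, the sketch below pulls the Euclidean output-space gradient back to parameter space through the Moore-Penrose pseudoinverse of the Jacobian. This is one natural reading of such a modified flow on a linear toy model, not necessarily the authors' exact construction; all names and sizes are illustrative.

```python
import numpy as np

# Toy overparametrized model y(theta) = A @ theta, with more parameters than
# output components, so the Jacobian J = A has full row rank.
rng = np.random.default_rng(0)
n_out, n_par = 3, 10
A = rng.standard_normal((n_out, n_par))
y_target = rng.standard_normal(n_out)
theta = rng.standard_normal(n_par)

eta = 0.1
for step in range(200):
    grad_y = A @ theta - y_target  # Euclidean gradient of the L2 cost in output space
    J = A                          # Jacobian dy/dtheta (constant for this toy model)
    # Pull the output-space gradient back with J^T (J J^T)^{-1}; this is well
    # defined as long as the rank condition rank(J) = n_out holds.
    theta -= eta * J.T @ np.linalg.solve(J @ J.T, grad_y)

# In output space the update reads y <- y - eta * (y - y_target), so the cost
# decays at the uniform geometric rate (1 - eta)^2 per step.
print(np.linalg.norm(A @ theta - y_target))
```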
Boundary integral equation formulations of elliptic partial differential equations lead to dense system matrices when discretized, yet these matrices are data-sparse. Using the $\mathcal{H}$-matrix format, this sparsity is exploited to achieve $\mathcal{O}(N\log N)$ complexity for storage and multiplication by a vector. This is achieved purely algebraically, based on low-rank approximations of subblocks, and hence the format is also applicable to a wider range of problems. The $\mathcal{H}^2$-matrix format improves the complexity to $\mathcal{O}(N)$ by imposing a recursive structure on subblocks across multiple levels. In practice, however, this comes with a large proportionality constant, making the $\mathcal{H}^2$-matrix format advantageous mostly for large problems. In this paper we investigate the usefulness of a matrix format that lies in between these two: uniform $\mathcal{H}$-matrices. An algebraic compression algorithm is introduced to transform a regular $\mathcal{H}$-matrix into a uniform $\mathcal{H}$-matrix while maintaining the asymptotic complexity.
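As a small illustration of the common building block, the snippet below compresses one admissible block of a smooth kernel on well-separated clusters via a truncated SVD, and notes in a comment how a uniform $\mathcal{H}$-matrix additionally shares cluster bases across blocks. The kernel and tolerance are hypothetical; this is not the paper's compression algorithm.

```python
import numpy as np

def kernel_block(x, y):
    # Asymptotically smooth kernel evaluated on separated point sets,
    # which makes the block numerically low-rank.
    return 1.0 / np.abs(x[:, None] - y[None, :])

x = np.linspace(0.0, 1.0, 50)   # row cluster
y = np.linspace(3.0, 4.0, 50)   # well-separated column cluster
B = kernel_block(x, y)

# H-matrix: each admissible block gets its own factorization B ~ U @ V^T.
U, s, Vt = np.linalg.svd(B)
k = int(np.sum(s > 1e-10 * s[0]))     # numerical rank is tiny here
B_lr = (U[:, :k] * s[:k]) @ Vt[:k]
print(k, np.linalg.norm(B - B_lr) / np.linalg.norm(B))

# Uniform H-matrix: all blocks sharing this row cluster reuse one basis U_row,
# so each block is stored only as a small coupling matrix S = U_row.T @ B @ V_col.
```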
Approximating invariant subspaces of generalized eigenvalue problems (GEPs) is a fundamental computational problem at the core of machine learning and scientific computing. It lies, for example, at the root of Principal Component Analysis (PCA) for dimensionality reduction, data visualization, and noise filtering, and of Density Functional Theory (DFT), arguably the most popular method for calculating the electronic structure of materials. For a Hermitian definite GEP $HC=SC\Lambda$, let $\Pi_k$ be the true spectral projector onto the invariant subspace associated with the $k$ smallest (or largest) eigenvalues. Given $H$, $S$, an integer $k$, and accuracy $\varepsilon\in(0,1)$, we show that we can compute a matrix $\widetilde\Pi_k$ such that $\lVert\Pi_k-\widetilde\Pi_k\rVert_2\leq \varepsilon$, in $O\left( n^{\omega+\eta}\mathrm{polylog}(n,\varepsilon^{-1},\kappa(S),\mathrm{gap}_k^{-1}) \right)$ bit operations in the floating point model, with probability $1-1/n$. Here, $\eta>0$ is arbitrarily small, $\omega\lesssim 2.372$ is the matrix multiplication exponent, $\kappa(S)=\lVert S\rVert_2\lVert S^{-1}\rVert_2$, and $\mathrm{gap}_k$ is the gap between eigenvalues $k$ and $k+1$. To the best of our knowledge, this is the first end-to-end analysis achieving such "forward-error" approximation guarantees with nearly $O(n^{\omega+\eta})$ bit complexity, improving on classical $\widetilde O(n^3)$ eigensolvers, even for the regular case $(S=I)$. Our methods rely on a new $O(n^{\omega+\eta})$ stability analysis for the Cholesky factorization and a new smoothed analysis for computing spectral gaps, both of which may be of independent interest. Ultimately, we obtain new matrix-multiplication-type bit complexity upper bounds for PCA problems, including classical PCA and (randomized) low-rank approximation.
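For orientation, the textbook Cholesky-based reduction of a Hermitian definite GEP to a standard eigenproblem, the step whose finite-precision behavior the new stability analysis concerns, can be sketched as follows. This is a dense $O(n^3)$ reference computation, not the paper's fast algorithm, and the projector shown is that of the Cholesky-transformed standard problem.

```python
import numpy as np

# Reduce the Hermitian definite GEP H C = S C Lambda to a standard
# eigenproblem via Cholesky, then form the spectral projector.
rng = np.random.default_rng(1)
n, k = 8, 3
H = rng.standard_normal((n, n))
H = (H + H.T) / 2                          # Hermitian (real symmetric) H
B = rng.standard_normal((n, n))
S = B @ B.T + n * np.eye(n)                # definite S

L = np.linalg.cholesky(S)                  # S = L L^T
M = np.linalg.solve(L, np.linalg.solve(L, H).T).T   # M = L^{-1} H L^{-T}
M = (M + M.T) / 2                          # symmetrize against round-off
w, Q = np.linalg.eigh(M)                   # eigenvalues in ascending order
Pi_k = Q[:, :k] @ Q[:, :k].T               # projector for the k smallest eigenvalues
# (Eigenvectors of the original pencil are recovered as C = L^{-T} Q.)
print(np.linalg.norm(Pi_k @ Pi_k - Pi_k))  # ~0: Pi_k is a projector
```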
Learning to Optimize (L2O) stands at the intersection of traditional optimization and machine learning, using the capabilities of machine learning to enhance conventional optimization techniques. Since real-world optimization problems frequently share common structures, L2O provides a tool to exploit these structures for better or faster solutions. This tutorial dives deep into L2O techniques, showing how to accelerate optimization algorithms, rapidly estimate solutions, or even reshape the optimization problem itself to make it more amenable to real-world applications. By considering the prerequisites for successful application of L2O and the structure of the optimization problems at hand, this tutorial provides a comprehensive guide for practitioners and researchers alike.
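A minimal toy example of the L2O idea, with hypothetical names and problem choices: tune a single parameter of an update rule (here, the step size of gradient descent) over a family of structurally similar problems, then deploy it on unseen instances.

```python
import numpy as np

rng = np.random.default_rng(2)

def sample_problem():
    # Random strongly convex quadratic: min_x 0.5 x^T Q x - b^T x.
    A = rng.standard_normal((5, 5))
    Q = A.T @ A + np.eye(5)
    b = rng.standard_normal(5)
    return Q, b

def loss_after_k_steps(eta, Q, b, k=20):
    x = np.zeros(5)
    for _ in range(k):
        x -= eta * (Q @ x - b)          # plain gradient descent with step eta
    return 0.5 * x @ Q @ x - b @ x

# "Train" the optimizer parameter on a distribution of problems...
train = [sample_problem() for _ in range(50)]
etas = np.linspace(0.01, 0.3, 30)
avg = [np.mean([loss_after_k_steps(e, Q, b) for Q, b in train]) for e in etas]
eta_star = etas[int(np.argmin(avg))]    # learned step size
# ...then deploy it on a fresh instance from the same family.
print(eta_star, loss_after_k_steps(eta_star, *sample_problem()))
```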
In this work we present a consistent reduction of the relaxed micromorphic model to its corresponding two-dimensional planar model, such that its capacity to capture discontinuous dilatation fields is preserved. As a direct consequence of our approach, new conforming finite elements for $H^\mathrm{dev}(\mathrm{Curl},A)$ become necessary. We present two novel $H^\mathrm{dev}(\mathrm{Curl},A)$-conforming finite element spaces, of which one is a macro element based on Clough--Tocher splits, as well as primal and mixed variational formulations of the planar relaxed micromorphic model. Finally, we demonstrate the effectiveness of our approach with two numerical examples.
We further develop the theory of monoidal bicategories by introducing and studying bicategorical counterparts of the notions of a linear exponential comonad, as considered in the study of linear logic, and of a codereliction transformation, introduced to study differential linear logic via differential categories. As an application, we extend the differential calculus of Joyal's analytic functors to analytic functors between presheaf categories, just as ordinary calculus extends from a single variable to many variables.
In this paper, we propose Wasserstein proximals of $\alpha$-divergences as suitable objective functionals for learning heavy-tailed distributions in a stable manner. First, we provide sufficient, and in some cases necessary, relations among the data dimension, $\alpha$, and the decay rate of the data distribution for the Wasserstein-proximal-regularized divergence to be finite. Finite-sample convergence rates for estimation with the Wasserstein-1 proximal divergences are then provided under certain tail conditions. Numerical experiments demonstrate stable learning of heavy-tailed distributions -- even those without a finite first or second moment -- without any explicit knowledge of the tail behavior, using suitable generative models such as GANs and flow-based models related to our proposed Wasserstein-proximal-regularized $\alpha$-divergences. Heuristically, the $\alpha$-divergences handle the heavy tails, while the Wasserstein proximals allow non-absolute continuity between distributions and control the velocities of flow-based algorithms as they learn the target distribution deep into the tails.
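For orientation, a Wasserstein proximal of an $\alpha$-divergence can be written schematically as an infimal convolution; the paper's precise definition and normalization may differ.

```latex
% Schematic infimal-convolution form of a Wasserstein-1 proximal of an
% alpha-divergence (a plausible reading, not necessarily the paper's exact
% definition or normalization):
\[
  D_\alpha^{\lambda W_1}(P \,\|\, Q)
    \;=\; \inf_{\eta} \Big\{\, D_\alpha(\eta \,\|\, Q) \;+\; \lambda\, W_1(\eta, P) \,\Big\},
\]
% where the infimum runs over probability measures $\eta$, $W_1$ is the
% Wasserstein-1 distance, and $\lambda > 0$ sets the strength of the
% proximal regularization.
```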
Objective: Magnetic particle imaging (MPI) is an emerging medical imaging modality which has gained increasing interest in recent years. Among the benefits of MPI are its high temporal resolution and the fact that the technique does not expose the specimen to any kind of ionizing radiation. It is based on the non-linear response of magnetic nanoparticles to an applied magnetic field. From the electric signal measured in receive coils, the particle concentration has to be reconstructed. Due to the ill-posedness of the reconstruction problem, various regularization methods have been proposed, ranging from early stopping, via classical Tikhonov regularization and iterative methods, to modern machine learning approaches. In this work, we contribute to the latter class: we propose a plug-and-play approach based on a generic zero-shot denoiser with an $\ell^1$-prior. Approach: We validate the reconstruction parameters of the method on a hybrid dataset and compare it with the Tikhonov baseline, DIP, and the earlier PP-MPI, a plug-and-play method whose denoiser is trained on MPI-friendly data. Main results: We offer a quantitative and qualitative evaluation of the zero-shot plug-and-play approach on the 3D Open MPI dataset. Moreover, we show the quality of the approach under different levels of data preprocessing. Significance: The proposed method employs a zero-shot denoiser that has not been trained for the MPI task and therefore saves the cost of training. Moreover, it offers a method that can potentially be applied in future MPI contexts.
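The skeleton below shows the generic plug-and-play pattern for a linear inverse problem $Ac = y$: alternate a gradient step on the data-fit term with an arbitrary denoiser in place of a proximal operator. Soft-thresholding stands in for the denoiser, mimicking an $\ell^1$-prior; this is a hedged sketch, not the paper's zero-shot denoiser or the MPI forward model.

```python
import numpy as np

def soft_threshold(x, tau):
    # Proximal operator of the l1 norm, used here as a stand-in denoiser.
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def pnp_reconstruct(A, y, denoise, iters=300):
    eta = 0.9 / np.linalg.norm(A, 2) ** 2   # step size from the Lipschitz bound
    c = np.zeros(A.shape[1])
    for _ in range(iters):
        c = c - eta * A.T @ (A @ c - y)     # gradient step on the data-fit term
        c = denoise(c)                      # "plug in" any off-the-shelf denoiser
    return c

rng = np.random.default_rng(3)
A = rng.standard_normal((80, 120))          # toy forward operator
c_true = np.zeros(120)
c_true[rng.choice(120, 8, replace=False)] = 1.0   # sparse concentration
y = A @ c_true + 0.01 * rng.standard_normal(80)   # noisy measurements
c_hat = pnp_reconstruct(A, y, lambda x: soft_threshold(x, 1e-3))
print(np.linalg.norm(c_hat - c_true))
```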
Most research on fair machine learning has prioritized optimizing criteria such as Demographic Parity and Equalized Odds. Despite these efforts, there remains a limited understanding of how different bias mitigation strategies affect individual predictions and whether they introduce arbitrariness into the debiasing process. This paper addresses these gaps by exploring whether models that achieve comparable fairness and accuracy metrics affect the same individuals and mitigate bias in a consistent manner. We introduce the FRAME (FaiRness Arbitrariness and Multiplicity Evaluation) framework, which evaluates bias mitigation along five dimensions: Impact Size (how many people are affected), Change Direction (positive versus negative changes), Decision Rates (impact on models' acceptance rates), Affected Subpopulations (who is affected), and Neglected Subpopulations (where unfairness persists). The framework is intended to help practitioners understand the impact of debiasing processes and make better-informed decisions regarding model selection. Applying FRAME to various bias mitigation approaches across key datasets reveals significant differences in the behavior of debiasing methods. These findings highlight the limitations of current fairness criteria and the inherent arbitrariness of the debiasing process.
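A toy illustration of FRAME-style diagnostics, with hypothetical names and synthetic decisions: compare two debiased models against a baseline on the same individuals and tabulate quantities loosely corresponding to the dimensions listed above.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 1000
group = rng.integers(0, 2, n)               # protected attribute
y_base = rng.integers(0, 2, n)              # baseline model decisions
# Two debiased models that each flip 10% of decisions, independently.
y_a = np.where(rng.random(n) < 0.1, 1 - y_base, y_base)
y_b = np.where(rng.random(n) < 0.1, 1 - y_base, y_base)

changed_a = y_a != y_base
impact_size = changed_a.mean()                        # how many people are affected
change_direction = int((y_a > y_base).sum() - (y_a < y_base).sum())
decision_rate_shift = y_a.mean() - y_base.mean()      # change in acceptance rate
overlap = (changed_a & (y_b != y_base)).mean()        # do A and B affect the same people?
per_group_impact = [changed_a[group == g].mean() for g in (0, 1)]
print(impact_size, change_direction, decision_rate_shift, overlap, per_group_impact)
```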
We derive information-theoretic generalization bounds for supervised learning algorithms based on the information contained in predictions rather than in the output of the training algorithm. These bounds improve over the existing information-theoretic bounds, are applicable to a wider range of algorithms, and solve two key challenges: (a) they give meaningful results for deterministic algorithms and (b) they are significantly easier to estimate. We show experimentally that the proposed bounds closely follow the generalization gap in practical scenarios for deep learning.
When and why can a neural network be successfully trained? This article provides an overview of optimization algorithms and theory for training neural networks. First, we discuss the issue of gradient explosion/vanishing and the more general issue of an undesirable spectrum, and then discuss practical solutions, including careful initialization and normalization methods. Second, we review generic optimization methods used in training neural networks, such as SGD, adaptive gradient methods, and distributed methods, together with theoretical results for these algorithms. Third, we review existing research on the global issues of neural network training, including results on bad local minima, mode connectivity, the lottery ticket hypothesis, and infinite-width analysis.
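As a quick numerical illustration of the first issue, the gradient norm of a deep linear chain scales roughly like the product of the layer spectral norms, so the initialization scale decides between vanishing and explosion (toy demo; names and sizes are arbitrary).

```python
import numpy as np

# Backpropagate a unit gradient through 50 random linear layers at three
# initialization scales; the norm vanishes, stays stable, or explodes.
rng = np.random.default_rng(5)
depth, width = 50, 100
for scale in (0.5, 1.0, 2.0):
    g = np.ones(width)
    for _ in range(depth):
        W = scale * rng.standard_normal((width, width)) / np.sqrt(width)
        g = W.T @ g                      # backprop through one linear layer
    print(scale, np.linalg.norm(g))      # ~1e-15, ~O(1), ~1e+15 respectively
```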