Fano varieties are basic building blocks in geometry - they are `atomic pieces' of mathematical shapes. Recent progress in the classification of Fano varieties involves analysing an invariant called the quantum period. This is a sequence of integers which gives a numerical fingerprint for a Fano variety. It is conjectured that a Fano variety is uniquely determined by its quantum period. If this is true, one should be able to recover geometric properties of a Fano variety directly from its quantum period. We apply machine learning to the question: does the quantum period of a Fano variety X know the dimension of X? Note that there is as yet no theoretical understanding of this question. We show that a simple feed-forward neural network can determine the dimension of X with 98% accuracy. Building on this, we establish rigorous asymptotics for the quantum periods of a class of Fano varieties. These asymptotics determine the dimension of X from its quantum period. Our results demonstrate that machine learning can pick out structure from complex mathematical data in situations where we lack theoretical understanding. They also give positive evidence for the conjecture that the quantum period of a Fano variety determines that variety.
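As an illustration of the kind of classifier involved, here is a minimal sketch (not the authors' code) of a feed-forward network predicting a dimension label from a fixed-length feature vector derived from the quantum period; the feature choice and the placeholder data are assumptions for illustration only.

```python
# Sketch: feed-forward classifier for "dimension from quantum period".
# Placeholder data below stands in for real (features, dimension) pairs.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 30))        # e.g. log-scaled period coefficients (assumed)
y = rng.integers(1, 11, size=1000)     # known dimensions (placeholder labels)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(512, 512), max_iter=500, random_state=0)
clf.fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```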
Discovery of mathematical descriptors of physical phenomena from observational and simulated data, as opposed to from first principles, is a rapidly evolving research area. Two factors, time-dependence of the inputs and hidden translation invariance, are known to complicate this task. To ameliorate these challenges, we combine Lagrangian dynamic mode decomposition with a locally time-invariant approximation of the Koopman operator. The former component of our method yields the best linear estimator of the system's dynamics, while the latter deals with the system's nonlinearity and non-autonomous behavior. We provide theoretical estimates (bounds) of prediction accuracy and perturbation error to guide the selection of both the rank truncation and the temporal discretization. We demonstrate the performance of our approach on several non-autonomous problems, including the two-dimensional Navier-Stokes equations.
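For concreteness, here is a minimal sketch of rank-truncated (exact) DMD, the linear-estimator building block referred to above; the Lagrangian formulation and the locally time-invariant Koopman approximation are the paper's contributions and are not reproduced here.

```python
# Sketch of exact DMD: fit x_{k+1} ≈ A x_k from snapshot pairs, rank r.
import numpy as np

def dmd(X, Y, r):
    """X, Y: (n, m) snapshot matrices with Y the one-step advance of X."""
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    Ur, sr, Vr = U[:, :r], s[:r], Vh[:r].conj().T
    A_tilde = Ur.conj().T @ Y @ Vr / sr      # reduced operator U_r^* A U_r
    eigvals, W = np.linalg.eig(A_tilde)      # DMD eigenvalues
    modes = Y @ Vr / sr @ W                  # exact DMD modes
    return eigvals, modes
```

The rank `r` plays the role of the truncation parameter whose selection the paper's bounds are meant to guide.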
We study the stability of randomized Taylor schemes for ODEs. We consider three notions of probabilistic stability: asymptotic stability, mean-square stability, and stability in probability. We prove fundamental properties of the probabilistic stability regions and benchmark them against the absolute stability regions for deterministic Taylor schemes.
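For reference, the deterministic benchmark is straightforward to compute: the order-$p$ Taylor scheme applied to the test equation $y' = \lambda y$ has stability function $R(z) = \sum_{k=0}^{p} z^k/k!$ with $z = h\lambda$, and its absolute stability region is $\{z : |R(z)| \le 1\}$. A minimal sketch (the probabilistic regions studied in the paper are not shown):

```python
# Sketch: absolute stability region of the order-p deterministic Taylor scheme.
import math
import numpy as np

def stability_mask(p, re, im):
    Z = re[None, :] + 1j * im[:, None]
    R = sum(Z**k / math.factorial(k) for k in range(p + 1))
    return np.abs(R) <= 1.0

re = np.linspace(-5, 2, 400)
im = np.linspace(-4, 4, 400)
mask = stability_mask(4, re, im)   # p = 4 reproduces the classical RK4 function
print("fraction of the window inside the region:", mask.mean())
```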
This paper presents a new algorithm for generating random inverse-Wishart matrices that directly generates the Cholesky factor of the matrix without computing the factorization. When the model is parameterized in terms of a precision matrix $\Omega=\Sigma^{-1}$, or its Cholesky factor, rather than a covariance matrix $\Sigma$, the new algorithm is more efficient than the current standard algorithm.
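For context, here is a minimal sketch of the standard Bartlett construction, which produces the Cholesky factor of a Wishart draw; this is the familiar baseline, not the paper's new inverse-Wishart algorithm (which avoids the extra inversion/factorization step that the Wishart route requires).

```python
# Sketch: Bartlett decomposition, yielding the Cholesky factor of a Wishart draw.
import numpy as np

def wishart_cholesky(df, L, rng):
    """Cholesky factor C of W ~ Wishart(df, Sigma), where Sigma = L @ L.T."""
    p = L.shape[0]
    A = np.zeros((p, p))
    A[np.tril_indices(p, -1)] = rng.standard_normal(p * (p - 1) // 2)
    A[np.diag_indices(p)] = np.sqrt(rng.chisquare(df - np.arange(p)))
    return L @ A  # lower-triangular; W = C @ C.T

rng = np.random.default_rng(0)
Sigma = np.array([[2.0, 0.5], [0.5, 1.0]])
C = wishart_cholesky(5, np.linalg.cholesky(Sigma), rng)
W = C @ C.T  # one Wishart(5, Sigma) sample
```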
Quantum computing promises transformational gains for solving some problems, but little to none for others. For anyone hoping to use quantum computers now or in the future, it is important to know which problems will benefit. In this paper, we introduce a framework for answering this question both intuitively and quantitatively. The underlying structure of the framework is a race between quantum and classical computers, where their relative strengths determine when each wins. While classical computers operate faster, quantum computers can sometimes run more efficient algorithms. Whether the speed advantage or the algorithmic advantage dominates determines whether a problem benefits from quantum computing. Our analysis reveals that many problems, particularly those of small to moderate size that can be important for typical businesses, will not benefit from quantum computing. Conversely, larger problems, or those with particularly large algorithmic gains, will benefit from near-term quantum computing. Since very large algorithmic gains are rare in practice and theorized to be rare even in principle, our analysis suggests that the benefits from quantum computing will flow either to users of these rare cases or to practitioners processing very large amounts of data.
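As a toy instance of the race (all numbers below are hypothetical, chosen only to illustrate the crossover arithmetic): suppose a classical machine runs an $O(n)$ algorithm at effective speed $s_c$ operations per second, while a quantum machine runs an $O(\sqrt{n})$ algorithm (a Grover-type speedup) at the much slower effective speed $s_q$.

```python
# Toy crossover computation: quantum wins once n / s_c > sqrt(n) / s_q.
s_c = 1e9   # classical effective ops/sec (hypothetical)
s_q = 1e4   # quantum effective ops/sec  (hypothetical)

# Setting n / s_c = sqrt(n) / s_q gives sqrt(n) = s_c / s_q.
n_star = (s_c / s_q) ** 2
print(f"quantum advantage only for problem sizes n > {n_star:.0e}")  # 1e10 here
```

With these illustrative numbers the quadratic algorithmic gain only overcomes the speed gap for very large instances, which is the mechanism behind the small-problem conclusion above.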
Suppose a finite, unweighted, combinatorial graph $G = (V,E)$ is the union of several (degree-)regular graphs which are then connected by a few additional edges. $G$ will then have only a small number of vertices $v \in V$ with the property that some neighbor $(v,w) \in E$ has a higher degree $\mbox{deg}(w) > \mbox{deg}(v)$. We prove the converse statement: if a graph has few vertices having a neighbor with higher degree and satisfies a mild regularity condition, then, by adding and removing a few edges, the graph can be turned into a disjoint union of (distance-)regular graphs. The number of edge operations depends on the maximum degree and on the number of vertices with a higher-degree neighbor, but is independent of $|V|$.
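A minimal sketch of the key quantity (using networkx for illustration): the number of vertices with at least one strictly higher-degree neighbor, which stays bounded as the regular pieces grow.

```python
# Count vertices having a neighbor of strictly higher degree.
import networkx as nx

def higher_degree_neighbor_count(G):
    deg = dict(G.degree())
    return sum(1 for v in G if any(deg[w] > deg[v] for w in G.neighbors(v)))

# Two disjoint regular graphs joined by one bridge edge: the count is 7,
# no matter how long the cycle is -- independent of |V|, as in the theorem.
G = nx.disjoint_union(nx.cycle_graph(100), nx.complete_graph(5))
G.add_edge(0, 100)  # bridge between the two regular pieces
print(higher_degree_neighbor_count(G))  # 7
```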
If the Stokes equations are properly discretized, it is known that the Schur complement matrix is spectrally equivalent to the identity matrix. Moreover, in the case of simple geometries, it is often observed that most of its eigenvalues are equal to one. These facts form the basis for the famous Uzawa algorithm. Despite recent progress in developing efficient iterative methods for solving the Stokes problem, the Uzawa algorithm remains popular in science and engineering, especially when accelerated by Krylov subspace methods. However, in complex geometries, the Schur complement matrix can become severely ill-conditioned, with a significant portion of non-unit eigenvalues. This makes the established Uzawa preconditioner inefficient. To explain this behaviour, we examine the Pressure Schur Complement formulation for the staggered finite-difference discretization of the Stokes equations. Firstly, we conjecture that the no-slip boundary conditions are the reason for the non-unit eigenvalues of the Schur complement matrix. Secondly, we demonstrate that its condition number grows as the surface-to-volume ratio of the flow domain increases. As an alternative to the Uzawa preconditioner, we propose the diffusive SIMPLE preconditioner for geometries with a large surface-to-volume ratio. We show that the latter is much faster and more robust for such geometries. Furthermore, we show that using the SIMPLE preconditioner leads to more accurate practical computation of the permeability of tight porous media.
Keywords: Stokes problem, tight geometries, computing permeability, preconditioned Krylov subspace methods
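For concreteness, a minimal sketch of the Uzawa iteration viewed as a Richardson method on the pressure Schur complement $S = B A^{-1} B^T$ of the saddle-point system $\begin{pmatrix} A & B^T \\ B & 0 \end{pmatrix} \begin{pmatrix} u \\ p \end{pmatrix} = \begin{pmatrix} f \\ g \end{pmatrix}$; the staggered discretization and the SIMPLE preconditioner are not shown.

```python
# Sketch: Uzawa iteration = Richardson iteration on S = B A^{-1} B^T.
import numpy as np

def uzawa(A, B, f, g, omega=1.0, iters=200):
    p = np.zeros(B.shape[0])
    for _ in range(iters):
        u = np.linalg.solve(A, f - B.T @ p)  # inner velocity solve
        p = p + omega * (B @ u - g)          # pressure update; needs
    return u, p                              # omega < 2 / lambda_max(S)
```

When $S \approx I$ (simple geometries), $\omega = 1$ converges rapidly; the ill-conditioning of $S$ in tight geometries is exactly what degrades this iteration and motivates the SIMPLE alternative.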
Symmetry is a cornerstone of much of mathematics, and many probability distributions possess symmetries characterized by their invariance to a collection of group actions. Thus, many mathematical and statistical methods rely on such symmetry holding and ostensibly fail if symmetry is broken. This work considers under what conditions a sequence of probability measures asymptotically gains such symmetry or invariance to a collection of group actions. Considering the many symmetries of the Gaussian distribution, this work effectively proposes a non-parametric type of central limit theorem. That is, a Lipschitz function of a high dimensional random vector will be asymptotically invariant to the actions of certain compact topological groups. Applications of this include a partial law of the iterated logarithm for uniformly random points in an $\ell_p^n$-ball and an asymptotic equivalence between classical parametric statistical tests and their randomization counterparts even when invariance assumptions are violated.
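As a small numerical illustration of the last application, one can compare a parametric test with its randomization counterpart; the data below are placeholders for illustration, not from the paper.

```python
# Sketch: two-sample t-test vs. its permutation (randomization) counterpart.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.standard_exponential(200)         # deliberately non-Gaussian samples
y = rng.standard_exponential(200) + 0.3

t_p = stats.ttest_ind(x, y).pvalue
perm_p = stats.permutation_test(
    (x, y), lambda a, b: np.mean(a) - np.mean(b),
    n_resamples=5000, random_state=0,
).pvalue
print(f"t-test p = {t_p:.4f}, permutation p = {perm_p:.4f}")
```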
We propose a neural network-based meta-learning method to efficiently solve partial differential equation (PDE) problems. The proposed method meta-learns how to solve a wide variety of PDE problems and uses that knowledge to solve newly given ones. We encode a PDE problem into a problem representation using neural networks, where governing equations are represented by coefficients of a polynomial function of partial derivatives, and boundary conditions are represented by a set of point-condition pairs. We use the problem representation as input to a neural network for predicting solutions, which enables us to efficiently predict problem-specific solutions in a single forward pass without updating model parameters. To train our model, we minimize the expected error after adaptation to a PDE problem, building on the physics-informed neural network framework, which lets us evaluate the error even when solutions are unknown. We demonstrate that our proposed method outperforms existing methods in predicting solutions of PDE problems.
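For concreteness, a minimal sketch of a physics-informed residual loss of the kind this training objective builds on, here for the 1-D heat equation $u_t = u_{xx}$; the meta-learned problem encoder is the paper's contribution and is not shown.

```python
# Sketch: physics-informed residual loss for u_t = u_xx via autograd.
import torch

net = torch.nn.Sequential(
    torch.nn.Linear(2, 64), torch.nn.Tanh(), torch.nn.Linear(64, 1)
)

def pde_residual_loss(xt):
    xt = xt.clone().requires_grad_(True)       # columns: (x, t)
    u = net(xt)
    grads = torch.autograd.grad(u.sum(), xt, create_graph=True)[0]
    u_x, u_t = grads[:, 0], grads[:, 1]
    u_xx = torch.autograd.grad(u_x.sum(), xt, create_graph=True)[0][:, 0]
    return ((u_t - u_xx) ** 2).mean()          # squared PDE residual

loss = pde_residual_loss(torch.rand(128, 2))   # random collocation points
loss.backward()
```

No reference solution appears in the loss, which is what makes the error evaluable even when solutions are unknown.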
Differential geometric approaches are ubiquitous in several fields of mathematics, physics and engineering, and their discretizations enable the development of network-based mathematical and computational frameworks, which are essential for large-scale data science. The Forman-Ricci curvature (FRC) - a statistical measure based on Riemannian geometry and designed for networks - is known for its high capacity for extracting geometric information from complex networks. However, extracting information from dense networks remains challenging due to the combinatorial explosion of high-order network structures. Motivated by this challenge, we develop a set-theoretic representation theory for high-order network cells and FRC, together with their associated concepts and properties, which provides an alternative and efficient formulation for computing high-order FRC in complex networks. We provide pseudo-code, a software implementation coined FastForman, and a benchmark comparison with alternative implementations. Crucially, our representation theory reveals previous computational bottlenecks and accelerates the computation of FRC. As a consequence, our findings open new research possibilities in complex systems where higher-order geometric computations are required.
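As a concrete example at the lowest order, the combinatorial FRC of an edge $(u,v)$ in an unweighted graph is $4 - \deg(u) - \deg(v)$, and the triangle-augmented variant adds $3$ for each triangle containing the edge. A minimal sketch of this edge-level computation follows; the paper's higher-order cells and the FastForman implementation itself are not shown.

```python
# Sketch: edge-level Forman-Ricci curvature on an unweighted graph.
import networkx as nx

def forman_curvature(G, augmented=True):
    deg = dict(G.degree())
    frc = {}
    for u, v in G.edges():
        f = 4 - deg[u] - deg[v]
        if augmented:  # add 3 per triangle supported on the edge
            f += 3 * len(set(G.neighbors(u)) & set(G.neighbors(v)))
        frc[(u, v)] = f
    return frc

frc = forman_curvature(nx.karate_club_graph())
print(min(frc.values()), max(frc.values()))
```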
The remarkable practical success of deep learning has revealed some major surprises from a theoretical perspective. In particular, simple gradient methods easily find near-optimal solutions to non-convex optimization problems, and despite giving a near-perfect fit to training data without any explicit effort to control model complexity, these methods exhibit excellent predictive accuracy. We conjecture that specific principles underlie these phenomena: that overparametrization allows gradient methods to find interpolating solutions, that these methods implicitly impose regularization, and that overparametrization leads to benign overfitting. We survey recent theoretical progress that provides examples illustrating these principles in simpler settings. We first review classical uniform convergence results and why they fall short of explaining aspects of the behavior of deep learning methods. We give examples of implicit regularization in simple settings, where gradient methods lead to minimal norm functions that perfectly fit the training data. Then we review prediction methods that exhibit benign overfitting, focusing on regression problems with quadratic loss. For these methods, we can decompose the prediction rule into a simple component that is useful for prediction and a spiky component that is useful for overfitting but, in a favorable setting, does not harm prediction accuracy. We focus specifically on the linear regime for neural networks, where the network can be approximated by a linear model. In this regime, we demonstrate the success of gradient flow, and we consider benign overfitting with two-layer networks, giving an exact asymptotic analysis that precisely demonstrates the impact of overparametrization. We conclude by highlighting the key challenges that arise in extending these insights to realistic deep learning settings.
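The implicit-regularization phenomenon described above is easy to verify numerically in the simplest setting: for overparametrized least squares, gradient descent started at zero converges to the minimum-norm interpolating solution. A minimal sketch:

```python
# Sketch: gradient descent on overparametrized least squares converges to the
# minimum-norm interpolant (the pseudoinverse solution).
import numpy as np

rng = np.random.default_rng(0)
n, d = 20, 100                        # fewer samples than parameters
X = rng.standard_normal((n, d))
y = rng.standard_normal(n)

w = np.zeros(d)                       # start at zero: stays in the row space
lr = 1.0 / np.linalg.norm(X, 2) ** 2  # step size below 2/L for ||Xw - y||^2/2
for _ in range(5000):
    w -= lr * X.T @ (X @ w - y)

w_minnorm = np.linalg.pinv(X) @ y
print(np.linalg.norm(X @ w - y))      # ~0: perfectly fits the training data
print(np.linalg.norm(w - w_minnorm))  # ~0: coincides with the min-norm solution
```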