Condensed mathematics, developed by Clausen and Scholze over the last few years, is a new way of studying the interplay between algebra and geometry. It replaces the concept of a topological space by a more sophisticated but better-behaved idea, namely that of a condensed set. Central to the theory are solid abelian groups and liquid vector spaces, analogues of complete topological groups. N\"obeling's theorem, a surprising result from the 1960s about the structure of the abelian group of continuous maps from a profinite space to the integers, is a crucial ingredient in the theory of solid abelian groups; without it one cannot give any nonzero examples of solid abelian groups. We discuss a recently completed formalisation of this result in the Lean theorem prover, and give a more detailed proof than those previously available in the literature. The proof is somewhat unusual in that it requires induction over ordinals -- a technique which has not previously been used to a great extent in formalised mathematics.
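As a schematic illustration, the statement at the heart of the formalisation can be written in Lean roughly as follows, assuming Mathlib's `Profinite` and `LocallyConstant` API; the actual Mathlib declaration may differ in name and exact form:

```lean
import Mathlib

-- Nöbeling's theorem, schematically: for a profinite space S, the
-- abelian group of continuous maps S → ℤ (equivalently, of locally
-- constant maps, since ℤ is discrete) is a free ℤ-module.
-- The formalised proof proceeds by induction over ordinals and is
-- elided here.
example (S : Profinite) : Module.Free ℤ (LocallyConstant S ℤ) := by
  sorry
```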
We present a simplified exposition of some pieces of [Gily\'en, Su, Low, and Wiebe, STOC'19, arXiv:1806.01838], which introduced a quantum singular value transformation (QSVT) framework for applying polynomial functions to block-encoded matrices. The QSVT framework has garnered substantial recent interest from the quantum algorithms community, as it was demonstrated by [GSLW19] to encapsulate many existing algorithms naturally phrased as an application of a matrix function. First, we posit that the lifting of quantum signal processing (QSP) to QSVT is better viewed not through Jordan's lemma (as was suggested by [GSLW19]) but as an application of the cosine-sine decomposition, which can be thought of as a more explicit and stronger version of Jordan's lemma. Second, we demonstrate that the constructions of bounded polynomial approximations given in [GSLW19], which use a variety of ad hoc approaches drawing from Fourier analysis, Chebyshev series, and Taylor series, can be unified under the framework of truncation of Chebyshev series, and indeed, can in large part be matched via a bounded variant of a standard meta-theorem from [Trefethen, 2013]. We hope the community finds this work useful as a companion guide for understanding and applying the powerful framework of [GSLW19].
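As a minimal illustration of the Chebyshev-truncation viewpoint (not code from [GSLW19]), the sketch below approximates a smooth function on [-1, 1] by a truncated Chebyshev interpolant and checks that the error decays rapidly with the degree; the meta-theorem in question says, roughly, that the smoothness of the target function dictates the decay of its Chebyshev coefficients and hence the truncation error.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Target function to approximate on [-1, 1]; it is analytic, so its
# Chebyshev coefficients decay geometrically (Trefethen 2013, Ch. 8).
f = lambda x: np.exp(x)

x = np.cos(np.pi * np.arange(201) / 200)   # dense Chebyshev test grid
for deg in (2, 4, 8, 16):
    # Interpolate f at deg+1 Chebyshev points; the interpolant is a
    # near-best truncation of the Chebyshev series.
    coeffs = C.chebinterpolate(f, deg)
    err = np.max(np.abs(C.chebval(x, coeffs) - f(x)))
    print(f"degree {deg:2d}: max error ~ {err:.2e}")
```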
Scientists continue to develop increasingly complex mechanistic models to reflect their knowledge more realistically. Statistical inference using these models can be challenging since the corresponding likelihood function is often intractable and model simulation may be computationally burdensome. Fortunately, in many of these situations, it is possible to adopt a surrogate model or approximate likelihood function. It may be convenient to conduct Bayesian inference directly with the surrogate, but this can result in bias and poor uncertainty quantification. In this paper we propose a new method for adjusting approximate posterior samples to reduce bias and produce more accurate uncertainty quantification. We do this by optimizing a transform of the approximate posterior that maximizes a scoring rule. Our approach requires only a (fixed) small number of complex model simulations and is numerically stable. We demonstrate good performance of the new method on several examples of increasing complexity.
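A deliberately simplified sketch of the core mechanic follows; the names and the setup are illustrative, not the paper's. In particular, the "approximate posterior" here is held fixed rather than recomputed for each simulated dataset, and the adjustment is restricted to an affine transform, chosen by optimizing an average energy score (a proper scoring rule; in the orientation used here, lower is better) over calibration parameters.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Biased, overconfident "approximate posterior" sample sets (one row
# per calibration parameter), plus the calibration parameters drawn
# from the prior, here N(1.0, 0.5).
approx_samples = rng.normal(1.3, 0.2, size=(20, 100))
theta_cal = rng.normal(1.0, 0.5, size=20)

def mean_energy_score(params):
    """Average energy score of affine-adjusted samples against the
    calibration parameters: E|T - theta| - 0.5 E|T - T'|."""
    a, log_b = params
    total = 0.0
    for s, theta in zip(approx_samples, theta_cal):
        t = a + np.exp(log_b) * (s - s.mean())   # affine adjustment
        total += (np.mean(np.abs(t - theta))
                  - 0.5 * np.mean(np.abs(t[:, None] - t[None, :])))
    return total / len(theta_cal)

res = minimize(mean_energy_score, x0=[1.3, 0.0], method="Nelder-Mead")
s = approx_samples[0]
t = res.x[0] + np.exp(res.x[1]) * (s - s.mean())
# The adjustment moves (mean, sd) from about (1.30, 0.20) toward the
# calibration distribution (1.0, 0.5).
print(f"adjusted mean ~ {t.mean():.2f}, adjusted sd ~ {t.std():.2f}")
```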
In this paper we consider the finite element approximation of Maxwell's problem and analyse the prescription of essential boundary conditions in a weak sense using Nitsche's method. To avoid indefiniteness of the problem, the original equations are augmented with the gradient of a scalar field that allows one to impose the zero divergence of the magnetic induction, even if the exact solution for this scalar field is zero. Two finite element approximations are considered, namely, one in which the approximation spaces are assumed to satisfy the appropriate inf-sup condition that renders the standard Galerkin method stable, and an augmented and stabilised one that permits the use of finite element interpolations of arbitrary order. Stability and convergence results are provided for the two finite element formulations considered.
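Schematically, and up to sign conventions, Nitsche's weak imposition of the essential condition $u \times n = 0$ on $\partial\Omega$ for a curl-curl model problem replaces the strong constraint by consistency, adjoint-consistency, and penalty boundary terms; the notation below is illustrative and omits the augmented scalar field:

```latex
a_h(u_h, v_h) = (\nu\,\nabla\times u_h,\ \nabla\times v_h)_{\Omega}
  - \langle \nu\,\nabla\times u_h,\ v_h \times n \rangle_{\partial\Omega}
  - \langle u_h \times n,\ \nu\,\nabla\times v_h \rangle_{\partial\Omega}
  + \frac{\beta\,\nu}{h}\,\langle u_h \times n,\ v_h \times n \rangle_{\partial\Omega},
```

where $h$ is the mesh size and $\beta > 0$ is a sufficiently large penalty parameter; the first boundary term restores consistency, the second symmetry, and the last enforces the boundary condition weakly.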
Multiple testing is an important research direction that has gained major attention in recent years. Currently, most multiple testing procedures are designed with p-values or local false discovery rate (Lfdr) statistics. However, p-values obtained by applying the probability integral transform to some well-known test statistics often do not incorporate information from the alternatives, resulting in suboptimal procedures. On the other hand, Lfdr-based procedures can be asymptotically optimal, but their guarantee on false discovery rate (FDR) control relies on consistent estimation of the Lfdr, which is often difficult in practice, especially when the incorporation of side information is desirable. In this article, we propose a novel and flexibly constructed class of statistics, called rho-values, which combines the merits of both p-values and Lfdr statistics while avoiding the drawbacks of methods based on either type alone. Specifically, it unifies these two frameworks and operates in two steps, ranking and thresholding. The ranking produced by rho-values mimics that produced by Lfdr statistics, and the strategy for choosing the threshold is similar to that of p-value based procedures. Therefore, the proposed framework guarantees FDR control under weak assumptions; it maintains the integrity of the structural information encoded by the summary statistics and the auxiliary covariates and hence can be asymptotically optimal. We demonstrate the efficacy of the new framework through extensive simulations and two data applications.
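A generic ranking-and-thresholding skeleton of the kind described is sketched below; the ranking statistic and the FDR estimate are illustrative placeholders, not the paper's rho-value construction.

```python
import numpy as np

def rank_and_threshold(stats, alpha, fdr_estimate):
    """Generic two-step multiple testing: rank hypotheses by `stats`
    (smaller = stronger evidence), then reject the largest prefix of
    the ranking whose estimated FDR stays below `alpha`.

    `fdr_estimate(sorted_stats, k)` estimates the FDR incurred by
    rejecting the k top-ranked hypotheses; its form is what
    distinguishes p-value, Lfdr, and rho-value based procedures.
    """
    order = np.argsort(stats)
    s = stats[order]
    k_best = 0
    for k in range(1, len(s) + 1):
        if fdr_estimate(s, k) <= alpha:
            k_best = k
    reject = np.zeros(len(s), dtype=bool)
    reject[order[:k_best]] = True
    return reject

# With p-values, the Benjamini-Hochberg step-up corresponds to the
# estimate m * p_(k) / k for the top-k rejection set.
pvals = np.random.default_rng(1).uniform(size=1000) ** 3  # some signal
bh = lambda s, k: len(s) * s[k - 1] / k
print(rank_and_threshold(pvals, alpha=0.1, fdr_estimate=bh).sum())
```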
The fundamental computational issues in Bayesian inverse problems (BIPs) governed by partial differential equations (PDEs) stem from the requirement of repeated forward model evaluations. A popular strategy to reduce such cost is to replace expensive model simulations with computationally efficient approximations using operator learning, motivated by recent progress in deep learning. However, using the approximated model directly may introduce a modeling error that exacerbates the ill-posedness already inherent in inverse problems. Thus, balancing accuracy against efficiency is essential for the effective implementation of such approaches. To this end, we develop an adaptive operator learning framework that reduces the modeling error gradually by forcing the surrogate to be accurate in local areas. This is accomplished by fine-tuning the pre-trained approximate model during the inversion process with adaptive points selected by a greedy algorithm, which requires only a few forward model evaluations. To validate our approach, we adopt DeepONet to construct the surrogate and unscented Kalman inversion (UKI) to approximate the solution of the BIPs. Furthermore, we present a rigorous convergence guarantee in the linear case using the framework of UKI. We test the approach on several benchmarks, including the Darcy flow, heat source inversion, and reaction-diffusion problems. Numerical results demonstrate that our method can significantly reduce computational costs while maintaining inversion accuracy.
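For reference, here is a minimal single-step sketch of an unscented Kalman inversion update for a generic forward map G; this is a basic variant with equal sigma-point weights, and the paper's exact scheme, regularization choices, and the coupling to surrogate fine-tuning are omitted.

```python
import numpy as np

def uki_step(m, C, G, y, Sigma_eta):
    """One unscented Kalman inversion update for y = G(theta) + eta,
    given the current mean m and covariance C of theta."""
    n = len(m)
    L = np.linalg.cholesky(C)
    # 2n symmetric sigma points around the current mean.
    pts = np.concatenate([m + np.sqrt(n) * L.T, m - np.sqrt(n) * L.T])
    gs = np.array([G(p) for p in pts])
    g_mean = gs.mean(axis=0)
    # Cross- and output-covariances estimated from the sigma points.
    C_tg = (pts - m).T @ (gs - g_mean) / (2 * n)
    C_gg = (gs - g_mean).T @ (gs - g_mean) / (2 * n) + Sigma_eta
    K = C_tg @ np.linalg.inv(C_gg)          # Kalman gain
    return m + K @ (y - g_mean), C - K @ C_tg.T

# Usage on a toy linear inverse problem y = A @ theta:
A = np.array([[1.0, 2.0], [3.0, 4.0]])
m, C = np.zeros(2), np.eye(2)
y = A @ np.array([1.0, -1.0])
for _ in range(5):
    m, C = uki_step(m, C, lambda th: A @ th, y, 1e-4 * np.eye(2))
print(np.round(m, 3))   # approaches [1, -1]
```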
We construct a bipartite generalization of Alon and Szegedy's nearly orthogonal vectors, thereby obtaining strong bounds for several extremal problems involving the Lov\'asz theta function, vector chromatic number, minimum semidefinite rank, nonnegative rank, and extension complexity of polytopes. In particular, we derive some general lower bounds for the vector chromatic number which may be of independent interest.
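For readers unfamiliar with the quantities involved, the Lov\'asz theta function of a graph G is the value of a semidefinite program; one standard formulation is $\vartheta(G) = \max\{\sum_{i,j} B_{ij} : B \succeq 0,\ \mathrm{tr}\,B = 1,\ B_{ij} = 0\ \forall ij \in E(G)\}$. A small sketch of this SDP, assuming cvxpy with an SDP-capable solver (e.g. SCS) is available:

```python
import cvxpy as cp
import numpy as np

def lovasz_theta(n, edges):
    """Lovász theta via the standard SDP:
    maximize sum(B) s.t. B PSD, trace(B) = 1, B_ij = 0 on edges."""
    B = cp.Variable((n, n), symmetric=True)
    constraints = [B >> 0, cp.trace(B) == 1]
    constraints += [B[i, j] == 0 for i, j in edges]
    return cp.Problem(cp.Maximize(cp.sum(B)), constraints).solve()

# Sanity check: for the 5-cycle, theta = sqrt(5) ~ 2.236.
c5 = [(i, (i + 1) % 5) for i in range(5)]
print(lovasz_theta(5, c5), np.sqrt(5))
```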
Gaussian approximations are routinely employed in Bayesian statistics to ease inference when the target posterior is intractable. Although these approximations are asymptotically justified by Bernstein-von Mises type results, in practice the expected Gaussian behavior may poorly represent the shape of the posterior, thus affecting approximation accuracy. Motivated by these considerations, we derive an improved class of closed-form approximations of posterior distributions which arise from a new treatment of a third-order version of the Laplace method yielding approximations in a tractable family of skew-symmetric distributions. Under general assumptions which account for misspecified models and non-i.i.d. settings, this family of approximations is shown to have a total variation distance from the target posterior whose rate of convergence improves, by at least one order of magnitude, on the rate established by the classical Bernstein-von Mises theorem. Specializing this result to the case of regular parametric models shows that the same improvement in approximation accuracy can also be derived for polynomially bounded posterior functionals. Unlike other higher-order approximations, our results prove that it is possible to derive closed-form and valid densities which are expected to provide, in practice, a more accurate, yet similarly tractable, alternative to Gaussian approximations of the target posterior, while inheriting their limiting frequentist properties. We strengthen these arguments by developing a practical skew-modal approximation for both joint and marginal posteriors that achieves the same theoretical guarantees as its exact counterpart by replacing the unknown model parameters with the corresponding MAP estimate. Empirical studies confirm that our theoretical results closely match the remarkable performance observed in practice, even in finite, possibly small, sample regimes.
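In one dimension the flavor of such a skew-symmetric correction can be sketched as follows; this is only a schematic illustration of the family $2\,\phi(\theta; m, \sigma^2)\,\Phi(w(\theta))$ with an odd cubic $w$ built from the third derivative at the mode, and the precise perturbation used in the paper may differ.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm

# An illustrative skewed log-posterior (logistic-regression flavour).
ell = lambda t: 2 * t - 3 * np.log1p(np.exp(t))

# MAP and finite-difference second/third derivatives at the mode.
m = minimize_scalar(lambda t: -ell(t)).x
h = 1e-3
d2 = (ell(m + h) - 2 * ell(m) + ell(m - h)) / h**2
d3 = (ell(m + 2*h) - 2*ell(m + h) + 2*ell(m - h) - ell(m - 2*h)) / (2 * h**3)
sigma = np.sqrt(-1.0 / d2)

# Skew-symmetric density 2 * phi(t; m, sigma^2) * Phi(c * (t - m)^3);
# c matches the cubic term of the log-posterior expansion, using the
# small-x approximation 2 * Phi(x) ~ 1 + sqrt(2 / pi) * x.
c = np.sqrt(np.pi / 2) * d3 / 6
gauss = lambda t: norm.pdf(t, m, sigma)
skew = lambda t: 2 * norm.pdf(t, m, sigma) * norm.cdf(c * (t - m) ** 3)

# Total variation distance of each approximation from the posterior.
grid = np.linspace(m - 10, m + 10, 4001)
dx = grid[1] - grid[0]
post = np.exp(ell(grid) - ell(m))
post /= post.sum() * dx
tv = lambda q: 0.5 * np.sum(np.abs(q(grid) - post)) * dx
print(f"TV(Gaussian) = {tv(gauss):.4f}, TV(skew sketch) = {tv(skew):.4f}")
```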
The goal of explainable Artificial Intelligence (XAI) is to generate human-interpretable explanations, but there are no computationally precise theories of how humans interpret AI-generated explanations. The lack of theory means that validation of XAI must be done empirically, on a case-by-case basis, which prevents systematic theory-building in XAI. We propose a psychological theory of how humans draw conclusions from saliency maps, the most common form of XAI explanation, which for the first time allows for precise prediction of explainee inference conditioned on the explanation. Our theory posits that, absent an explanation, humans expect the AI to make decisions similar to their own, and that they interpret an explanation by comparing it to the explanations they themselves would give. Comparison is formalized via Shepard's universal law of generalization in a similarity space, a classic theory from cognitive science. A pre-registered user study on AI image classifications with saliency map explanations demonstrates that our theory quantitatively matches participants' predictions of the AI.
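Schematically, the quantitative core looks as follows; all function names and the feature space are illustrative. Shepard's law says that generalization between two stimuli decays exponentially with their distance in a psychological similarity space; here the "stimuli" are the AI's saliency map and the maps the explainee would produce for each candidate label.

```python
import numpy as np

def shepard_similarity(map_ai, map_self, scale=1.0):
    """Shepard's universal law of generalization: similarity decays
    exponentially with distance in the similarity space. Here the
    stimuli are flattened saliency maps (an illustrative choice)."""
    d = np.linalg.norm(map_ai.ravel() - map_self.ravel())
    return np.exp(-d / scale)

def predicted_endorsement(map_ai, maps_self_by_label, scale=1.0):
    """Predicted probability the explainee attributes each candidate
    label to the AI: compare the AI's explanation to the explanation
    the participant would give per label, then normalize."""
    sims = np.array([shepard_similarity(map_ai, m, scale)
                     for m in maps_self_by_label])
    return sims / sims.sum()

rng = np.random.default_rng(0)
map_ai = rng.random((8, 8))
candidates = [map_ai + 0.1 * rng.random((8, 8)), rng.random((8, 8))]
print(predicted_endorsement(map_ai, candidates))  # first label favored
```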
We derive information-theoretic generalization bounds for supervised learning algorithms based on the information contained in predictions rather than in the output of the training algorithm. These bounds improve over the existing information-theoretic bounds, are applicable to a wider range of algorithms, and solve two key challenges: (a) they give meaningful results for deterministic algorithms and (b) they are significantly easier to estimate. We show experimentally that the proposed bounds closely follow the generalization gap in practical scenarios for deep learning.
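Loosely, and up to constants and conditions made precise in the paper, bounds of this type control the expected generalization gap by the conditional mutual information between the model's predictions on held-in/held-out example pairs and the random selection of which element of each pair was trained on; the shape is roughly

```latex
\bigl|\mathbb{E}\,[\mathrm{gen}(W)]\bigr|
  \;\lesssim\; \frac{1}{n}\sum_{i=1}^{n}
  \mathbb{E}\,\sqrt{\,2\, I\bigl(\hat{y}_i;\, U_i \mid \tilde{z}\bigr)\,},
```

where $\tilde{z}$ is a supersample of $2n$ points, $U_i \in \{0,1\}$ selects which element of the $i$-th pair is used for training, and $\hat{y}_i$ denotes the trained model's predictions on that pair. Because the information measure involves predictions rather than weights, the bound remains finite for deterministic training algorithms and can be estimated from repeated training runs.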
When and why can a neural network be successfully trained? This article provides an overview of optimization algorithms and theory for training neural networks. First, we discuss the issue of gradient explosion/vanishing and the more general issue of an undesirable spectrum, and then discuss practical solutions, including careful initialization and normalization methods. Second, we review generic optimization methods used in training neural networks, such as SGD, adaptive gradient methods, and distributed methods, as well as theoretical results for these algorithms. Third, we review existing research on the global issues of neural network training, including results on bad local minima, mode connectivity, the lottery ticket hypothesis, and infinite-width analysis.
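To make the initialization point concrete, here is a small standard numerical sketch (not tied to any specific paper surveyed): with naive unit-variance Gaussian weights the activation scale of a deep ReLU network blows up with depth, while He-style 2/fan_in scaling keeps it stable.

```python
import numpy as np

rng = np.random.default_rng(0)
relu = lambda x: np.maximum(x, 0.0)

def forward_scale(width=512, depth=20, he=False):
    """Propagate a random input through a deep ReLU MLP and report
    the standard deviation of the final activations."""
    x = rng.standard_normal(width)
    for _ in range(depth):
        std = np.sqrt(2.0 / width) if he else 1.0   # He vs naive init
        W = rng.standard_normal((width, width)) * std
        x = relu(W @ x)
    return x.std()

print(f"naive init: {forward_scale(he=False):.3e}")  # explodes
print(f"He init:    {forward_scale(he=True):.3e}")   # stays O(1)
```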