This manuscript summarizes the outcome of the focus groups at "The f(A)bulous workshop on matrix functions and exponential integrators", held at the Max Planck Institute for Dynamics of Complex Technical Systems in Magdeburg, Germany, on 25-27 September 2023. There were three focus groups in total, each with a different theme: knowledge transfer, high-performance and energy-aware computing, and benchmarking. We collect insights, open issues, and perspectives from each focus group, as well as from general discussions throughout the workshop. Our primary aim is to highlight ripe research directions and continue to build on the momentum from a lively meeting.
For factor analysis, many estimators have been developed, starting with the maximum likelihood estimator, and the statistical properties of most of them are well understood. In the early 2000s, a new estimator based on matrix factorization, called Matrix Decomposition Factor Analysis (MDFA), was proposed. Although the MDFA estimator is obtained by minimizing a principal-component-analysis-like loss function, it empirically behaves like other consistent estimators of factor analysis rather than like principal component analysis. Since the MDFA estimator cannot be formulated as a classical M-estimator, its statistical properties have not yet been established. To explain this unexpected behavior theoretically, we establish the consistency of the MDFA estimator as an estimator for factor analysis; that is, we show that the MDFA estimator has the same limit as other consistent estimators of factor analysis.
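As a point of reference (notation ours, and the exact constraint set is an assumption about the usual matrix-decomposition formulation, not a quotation from the paper), MDFA estimates loadings and uniquenesses by minimizing a least-squares matrix-approximation criterion of the form

$$ \min_{F,\,E,\,\Lambda,\,\Psi} \; \bigl\| X - F\Lambda^{\top} - E\Psi \bigr\|_F^2 \quad \text{s.t.} \quad \tfrac{1}{n}F^{\top}F = I_m, \;\; \tfrac{1}{n}E^{\top}E = I_p, \;\; F^{\top}E = 0, \;\; \Psi \text{ diagonal}, $$

where $X$ is the $n \times p$ (standardized) data matrix, $F$ and $E$ collect common and unique factor scores, $\Lambda$ is the loading matrix, and $\Psi$ holds the unique standard deviations. The loss is PCA-like, yet the constraints separate common and unique parts in the factor-analysis sense, which is why the estimator's behavior is the interesting question.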
In recent years, the fervent demand for computational power across various domains has prompted hardware manufacturers to introduce specialized hardware aimed at enhancing computational capabilities. In particular, tensor hardware supporting low precision has gained increasing prominence in scientific research. However, using low-precision tensor hardware for acceleration often introduces errors, posing the fundamental challenge of achieving effective acceleration while maintaining computational accuracy. This paper improves on existing methodology by incorporating low-precision quantization, employing a residual matrix for error correction, and combining these with a vector-wise quantization method. The key innovation lies in using sparse rather than dense matrices when compensating for errors with the residual matrix. By retaining only the values that may significantly affect the relative error, selected according to a specified threshold, this approach controls quantization errors while reducing computational complexity. Experimental results demonstrate that the method effectively controls the quantization error while maintaining a high acceleration effect: on the CPU, the improved algorithm achieves up to a 15\% accuracy improvement together with a 1.46 times speedup.
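A minimal NumPy sketch of the general idea, not the paper's implementation: per-row (vector-wise) int8 quantization, plus a sparse residual matrix that keeps only errors above a hypothetical threshold `tau` to correct the low-precision product. On tensor hardware the quantized product itself would run in int8; here we dequantize for clarity.

```python
import numpy as np
from scipy.sparse import csr_matrix

def quantize_rows_int8(A):
    """Vector-wise (per-row) symmetric int8 quantization."""
    scale = np.abs(A).max(axis=1, keepdims=True) / 127.0
    scale[scale == 0] = 1.0
    Q = np.clip(np.round(A / scale), -127, 127).astype(np.int8)
    return Q, scale

def matmul_with_sparse_residual(A, B, tau=1e-2):
    Q, scale = quantize_rows_int8(A)
    A_hat = Q.astype(np.float32) * scale                       # dequantized low-precision approximation
    R = A - A_hat                                              # residual (quantization error)
    R_sparse = csr_matrix(np.where(np.abs(R) > tau, R, 0.0))   # keep only the large errors
    return A_hat @ B + R_sparse @ B                            # low-precision product + sparse correction

rng = np.random.default_rng(0)
A, B = rng.standard_normal((256, 256)), rng.standard_normal((256, 256))
err = np.linalg.norm(matmul_with_sparse_residual(A, B) - A @ B) / np.linalg.norm(A @ B)
print(f"relative error: {err:.2e}")
```

The threshold trades accuracy against the sparsity (and hence cost) of the correction term, which is the trade-off the abstract describes.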
First-order methods are often analyzed via their continuous-time models, where their worst-case convergence properties are usually approached via Lyapunov functions. In this work, we provide a systematic and principled approach to finding and verifying Lyapunov functions for classes of ordinary and stochastic differential equations. More precisely, we extend the performance estimation framework, originally proposed by Drori and Teboulle [10], to continuous-time models. We recover convergence results comparable to those of discrete methods using fewer assumptions and convexity inequalities, and provide new results for stochastic accelerated gradient flows.
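As a textbook illustration of the kind of certificate such a framework searches for (a standard example, not taken from the paper): for the gradient flow $\dot{x}(t) = -\nabla f(x(t))$ on a convex, differentiable $f$ with minimizer $x^\star$, the Lyapunov function

$$ \mathcal{V}(t) = t\,\bigl(f(x(t)) - f(x^\star)\bigr) + \tfrac{1}{2}\,\|x(t) - x^\star\|^2 $$

satisfies $\dot{\mathcal{V}}(t) \le -t\,\|\nabla f(x(t))\|^2 \le 0$ by convexity, and hence $f(x(t)) - f(x^\star) \le \|x_0 - x^\star\|^2/(2t)$. The performance estimation approach automates the search for, and the verification of, such functions.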
In decision-making, maxitive functions are used for worst-case and best-case evaluations. Maxitivity gives rise to a rich structure that is well-studied in the context of the pointwise order. In this article, we investigate maxitivity with respect to general preorders and provide a representation theorem for such functionals. The results are illustrated for different stochastic orders in the literature, including the usual stochastic order, the increasing convex/concave order, and the dispersive order.
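For orientation, here is the pointwise special case only (the paper's contribution is the generalization to arbitrary preorders): a functional $\Phi$ on a lattice of functions is maxitive with respect to the pointwise order when

$$ \Phi(f \vee g) = \Phi(f) \vee \Phi(g) \quad \text{for all } f, g, $$

where $f \vee g$ denotes the pointwise maximum; worst-case evaluations such as the essential supremum are of this type.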
The family of bent functions is a well-known class of Boolean functions of great importance in cryptography. The Cayley graph defined on $\mathbb{Z}_{2}^{n}$ by the support of a bent function is a strongly regular graph $srg(v,k,\lambda,\mu)$ with $\lambda=\mu$. In this note we list the parameters of such Cayley graphs. Moreover, we give a condition on $(n,m)$-bent functions $F=(f_1,\ldots,f_m)$ involving the supports of their components $f_i$ and their $n$-ary symmetric differences.
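For concreteness, the parameters in question are a standard fact in this literature: for a bent function $f$ on $\mathbb{Z}_2^n$ ($n$ even, $f(0)=0$) whose support has size $2^{n-1} \pm 2^{n/2-1}$, the associated Cayley graph is strongly regular with

$$ v = 2^n, \qquad k = 2^{n-1} \pm 2^{n/2-1}, \qquad \lambda = \mu = 2^{n-2} \pm 2^{n/2-1}, $$

with the sign determined by the weight of $f$ (for example, $f = x_1x_2 \oplus x_3x_4$ gives an $srg(16,6,2,2)$).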
The presented methodology for testing the goodness of fit of an Autoregressive Hilbertian model (ARH(1) model) provides an infinite-dimensional formulation of the approach proposed in Koul and Stute (1999), based on an empirical process marked by the residuals. Applying central and functional central limit results for Hilbert-valued martingale difference sequences, the asymptotic behavior of the formulated H-valued empirical process, also indexed by H, is obtained under the null hypothesis. The limiting process is an H-valued generalized (i.e., indexed by H) Wiener process, leading to an asymptotically distribution-free test. Consistency of the test is also proved. The case of a misspecified autocorrelation operator of the ARH(1) process is addressed, and the asymptotic equivalence in probability, uniformly in the norm of H, of the empirical processes formulated under known and unknown autocorrelation operators is obtained. Beyond the Euclidean setting, this approach makes it possible to implement goodness-of-fit testing in the context of manifold and spherical functional autoregressive processes.
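Recall the model being tested (standard ARH(1) notation in the sense of Bosq; the phrasing of the null is our paraphrase): a zero-mean process $X$ on a separable Hilbert space $H$ is ARH(1) if

$$ X_n = \rho(X_{n-1}) + \varepsilon_n, \qquad n \in \mathbb{Z}, $$

with $\rho$ a bounded linear operator on $H$ and $(\varepsilon_n)$ an $H$-valued strong white noise; the test assesses this autoregressive structure through an empirical process marked by the residuals $\hat{\varepsilon}_n = X_n - \hat{\rho}(X_{n-1})$, or $X_n - \rho(X_{n-1})$ when $\rho$ is known.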
Functional Differential Equations (FDEs) play a fundamental role in many areas of mathematical physics, including fluid dynamics (Hopf characteristic functional equation), quantum field theory (Schwinger-Dyson equation), and statistical physics. Despite their significance, computing solutions to FDEs remains a longstanding challenge in mathematical physics. In this paper we address this challenge by introducing new approximation theory and high-performance computational algorithms designed for solving FDEs on tensor manifolds. Our approach involves approximating FDEs using high-dimensional partial differential equations (PDEs), and then solving such high-dimensional PDEs on a low-rank tensor manifold leveraging high-performance parallel tensor algorithms. The effectiveness of the proposed approach is demonstrated through its application to the Burgers-Hopf FDE, which governs the characteristic functional of the stochastic solution to the Burgers equation evolving from a random initial state.
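As an indication of the type of equation involved (written schematically; sign and normalization conventions for the functional derivatives vary across references), the Burgers-Hopf FDE for the characteristic functional $\Phi([\theta],t) = \mathbb{E}\!\left[\exp\!\left(i\int u(x,t)\theta(x)\,dx\right)\right]$ of the stochastic Burgers solution $u$ reads

$$ \frac{\partial \Phi([\theta],t)}{\partial t} = \int \theta(x)\left[ \frac{i}{2}\,\frac{\partial}{\partial x}\,\frac{\delta^2 \Phi([\theta],t)}{\delta \theta(x)^2} + \nu\,\frac{\partial^2}{\partial x^2}\,\frac{\delta \Phi([\theta],t)}{\delta \theta(x)} \right] dx, $$

an equation in infinitely many variables, which is what motivates the passage to high-dimensional PDE approximations and low-rank tensor formats.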
Charts, figures, and text derived from data play an important role in decision making, from data-driven policy development to day-to-day choices informed by online articles. Making sense of, or fact-checking, outputs means understanding how they relate to the underlying data. Even for domain experts with access to the source code and data sets, this poses a significant challenge. In this paper we introduce a new program analysis framework which supports interactive exploration of fine-grained I/O relationships directly through computed outputs, making use of dynamic dependence graphs. Our main contribution is a novel notion in data provenance which we call related inputs, a relation of mutual relevance or "cognacy" which arises between inputs when they contribute to common features of the output. Queries of this form allow readers to ask questions like "What outputs use this data element, and what other data elements are used along with it?". We show how Jonsson and Tarski's concept of conjugate operators on Boolean algebras appropriately characterises the notion of cognacy in a dependence graph, and give a procedure for computing related inputs over such a graph.
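A toy sketch of the "related inputs" query over a dependence graph (the graph and names are hand-written and purely illustrative; the paper derives this operation from conjugate operators acting on the dynamic dependence graph of an actual program run):

```python
# Toy dependence graph: each output feature maps to the set of data inputs it depends on.
# In the paper this graph would be computed dynamically from an execution trace.
deps = {
    "bar_2021": {"gdp_2021", "pop_2021"},
    "bar_2022": {"gdp_2022", "pop_2022"},
    "caption":  {"gdp_2021", "gdp_2022"},
}

def related_inputs(selected_input, deps):
    """Inputs that contribute to at least one output feature together with `selected_input`."""
    related = set()
    for output, inputs in deps.items():
        if selected_input in inputs:
            related |= inputs
    related.discard(selected_input)
    return related

print(related_inputs("gdp_2021", deps))   # {'pop_2021', 'gdp_2022'}
```

This answers the question quoted above: which outputs use a given data element, and which other data elements are used along with it.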
In this work we introduce a memory-efficient method for computing the action of a Hermitian matrix function on a vector. Our method consists of a rational Lanczos algorithm combined with a basis compression procedure based on rational Krylov subspaces that only involve small matrices. The cost of the compression procedure is negligible with respect to the cost of the Lanczos algorithm. This enables us to avoid storing the whole Krylov basis, leading to substantial reductions in memory requirements. This method is particularly effective when the rational Lanczos algorithm needs a significant number of iterations to converge and each iteration involves a low computational effort. This scenario often occurs with polynomial Lanczos, as well as with extended and shift-and-invert Lanczos. Theoretical results prove that, for a wide variety of functions, the proposed algorithm differs from rational Lanczos by an error term that is usually negligible. The algorithm is compared with other low-memory Krylov methods from the literature on a variety of test problems, showing competitive performance.
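For context, here is a minimal NumPy sketch of the standard (polynomial) Lanczos approximation of $f(A)b$, i.e., the memory-hungry baseline that basis compression is designed to avoid: the full Krylov basis $V_m$ must be kept in order to assemble the final approximation $\|b\|\,V_m f(T_m)e_1$. This is not the paper's rational or compressed algorithm, and no reorthogonalization is performed.

```python
import numpy as np

def lanczos_fAb(A, b, f, m=50):
    """Polynomial Lanczos approximation of f(A) b for Hermitian A (full basis stored)."""
    n = b.size
    V = np.zeros((n, m))
    alpha, beta = np.zeros(m), np.zeros(m - 1)
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(m):
        w = A @ V[:, j]
        if j > 0:
            w -= beta[j - 1] * V[:, j - 1]
        alpha[j] = V[:, j] @ w
        w -= alpha[j] * V[:, j]
        if j < m - 1:
            beta[j] = np.linalg.norm(w)
            if beta[j] == 0:                      # breakdown: Krylov space is exhausted
                m = j + 1
                V, alpha, beta = V[:, :m], alpha[:m], beta[:m - 1]
                break
            V[:, j + 1] = w / beta[j]
    # f(T_m) e_1 via the eigendecomposition of the small symmetric tridiagonal matrix
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    evals, U = np.linalg.eigh(T)
    fT_e1 = U @ (f(evals) * U[0, :])
    return np.linalg.norm(b) * (V @ fT_e1)

# Example: apply the inverse square root of a small SPD matrix to a vector
rng = np.random.default_rng(1)
M = rng.standard_normal((200, 200)); A = M @ M.T + 200 * np.eye(200)
b = rng.standard_normal(200)
x = lanczos_fAb(A, b, lambda t: 1.0 / np.sqrt(t), m=40)
```

The storage of all of $V$ is exactly what grows with the iteration count; the compression strategy described in the abstract replaces it with quantities involving only small matrices.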
One of the main challenges in interpreting black-box models is to uniquely decompose square-integrable functions of non-independent random inputs into a sum of functions of every possible subset of variables. However, dealing with dependencies among inputs can be complicated. We propose a novel framework to study this problem, linking three domains of mathematics: probability theory, functional analysis, and combinatorics. We show that, under two reasonable assumptions on the inputs (non-perfect functional dependence and non-degenerate stochastic dependence), it is always possible to decompose such a function uniquely. This generalizes the well-known Hoeffding decomposition. The elements of this decomposition can be expressed using oblique projections and allow for novel interpretability indices for evaluation and variance decomposition purposes. The properties of these novel indices are studied and discussed. This generalization offers a path towards more precise uncertainty quantification, which can benefit sensitivity analysis and interpretability studies whenever the inputs are dependent. The decomposition is illustrated analytically, and the challenges of adopting these results in practice are discussed.
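For reference, the classical decomposition being generalized (stated for mutually independent inputs $X = (X_1,\dots,X_d)$ and square-integrable $f$): Hoeffding's decomposition writes

$$ f(X) = \sum_{A \subseteq \{1,\dots,d\}} f_A(X_A), \qquad f_A(X_A) = \sum_{B \subseteq A} (-1)^{|A|-|B|}\, \mathbb{E}\bigl[f(X) \mid X_B\bigr], $$

with the summands mutually orthogonal in $L^2$, so that $\operatorname{Var} f(X) = \sum_{A \neq \emptyset} \operatorname{Var} f_A(X_A)$ and the Sobol' index of a subset $A$ is $S_A = \operatorname{Var} f_A(X_A) / \operatorname{Var} f(X)$. The contribution described above is to obtain a unique decomposition of this type, via oblique rather than orthogonal projections, when the inputs are dependent.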