The attention score matrix ${\rm SoftMax}(XY^T)$ encodes relational similarity patterns between objects and is widely used in machine learning. However, the cost of computing it grows quadratically with the problem size, making it computationally expensive. In this article, we propose a linear-time approximation of the attention score normalization constants for embedding vectors with bounded norms. On several pre-trained embeddings, we show that the accuracy of our estimation formula surpasses competing kernel methods, sometimes by orders of magnitude. Building on this result, we design a linear-time, task-agnostic embedding algorithm based on the optimization of attention scores. The proposed algorithm is highly interpretable and easily adapted to arbitrary embedding problems. On several use cases, we observe similar or better performance and lower computational time compared with comparable embedding algorithms.
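As a rough illustration of the quantity involved (not the estimator proposed in the paper), the sketch below computes the normalization constants $Z_i=\sum_j \exp(x_i\cdot y_j)$ exactly in quadratic time and approximates them in linear time with positive random features, the kind of kernel baseline the paper compares against; all sizes and data are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, m = 1000, 64, 256                    # objects, embedding dim, random features
X = rng.normal(size=(n, d)) / np.sqrt(d)   # queries (bounded norms)
Y = rng.normal(size=(n, d)) / np.sqrt(d)   # keys

# Exact normalization constants Z_i = sum_j exp(x_i . y_j): O(n^2 d)
Z_exact = np.exp(X @ Y.T).sum(axis=1)

# Positive random features (Performer-style kernel estimate): O(n m d)
W = rng.normal(size=(m, d))
phi = lambda V: np.exp(V @ W.T - 0.5 * (V**2).sum(1, keepdims=True)) / np.sqrt(m)
Z_approx = phi(X) @ phi(Y).sum(axis=0)     # phi(x_i) . sum_j phi(y_j) ~= Z_i

print(np.median(np.abs(Z_approx - Z_exact) / Z_exact))  # median relative error
```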
An extremely schematic model of the forces acting on a sailing yacht equipped with a system of foils is presented and discussed. The role of the foils is to raise the hull from the water in order to reduce the total resistance and thereby increase the speed. CFD simulations provide the total resistance of the bare hull at several values of speed and displacement, as well as the characteristics (drag and lift coefficients) of the 2D foil sections used for the appendages. A parametric study has been performed to characterize a foil of finite dimensions. Equilibrium of the vertical forces and longitudinal moments, together with a reduced displacement, is obtained by controlling the pitch angle of the foils. The total resistance of the yacht with foils is then compared with the case without foils, identifying the speed regime, if any, in which an advantage is obtained.
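To make the force balance concrete, the following minimal sketch solves the vertical equilibrium for the foil pitch angle when the foils are asked to carry a fixed fraction of the weight; all coefficients and dimensions are made-up placeholders, not values from the CFD simulations.

```python
import numpy as np
from scipy.optimize import brentq

# Illustrative vertical force balance (all numbers are placeholders):
# weight = residual buoyancy + foil lift, lift controlled by the pitch angle.
rho, g = 1025.0, 9.81          # sea water density [kg/m^3], gravity [m/s^2]
mass = 2500.0                  # yacht displacement mass [kg]
S_foil = 0.8                   # total foil planform area [m^2]
CL_alpha = 5.5                 # lift-curve slope of the 2D section [1/rad]
V = 8.0                        # boat speed [m/s]

def vertical_balance(alpha, lift_fraction=0.7):
    """Residual of the vertical equilibrium when the foils carry
    `lift_fraction` of the weight (the rest is buoyancy)."""
    lift = 0.5 * rho * V**2 * S_foil * CL_alpha * alpha
    return lift - lift_fraction * mass * g

alpha_eq = brentq(vertical_balance, 0.0, np.radians(15.0))
print(f"equilibrium pitch angle: {np.degrees(alpha_eq):.2f} deg")
```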
Finite element spaces of Whitney $k$-forms on cubical meshes in $\mathbb{R}^n$ are presented. Based on these spaces, compatible discretizations of $H\Lambda^k$ problems are provided, and discrete de Rham complexes and commuting diagrams are constructed.
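For orientation, the kind of commuting diagram in question relates the continuous de Rham complex to its discrete counterpart through cochain projections $\Pi^k$ (generic notation, not taken from the paper):
\[
\begin{array}{ccccccc}
H\Lambda^0 & \xrightarrow{d} & H\Lambda^1 & \xrightarrow{d} & \cdots & \xrightarrow{d} & H\Lambda^n \\
\downarrow \Pi^0 & & \downarrow \Pi^1 & & & & \downarrow \Pi^n \\
V^0_h & \xrightarrow{d} & V^1_h & \xrightarrow{d} & \cdots & \xrightarrow{d} & V^n_h
\end{array}
\]
where each $V^k_h$ stands for a discrete $k$-form space (here, built from Whitney $k$-forms on cubes) and commutativity means $d\,\Pi^k = \Pi^{k+1} d$.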
We systematically investigate the preservation of differential privacy in functional data analysis, beginning with functional mean estimation and extending to varying coefficient model estimation. Our work introduces a distributed learning framework involving multiple servers, each responsible for collecting several sparsely observed functions. This hierarchical setup introduces a mixed notion of privacy. Within each function, user-level differential privacy is applied to $m$ discrete observations. At the server level, central differential privacy is deployed to account for the centralised nature of data collection. Across servers, only privatized information is exchanged, adhering to federated differential privacy constraints. To address this complex hierarchy, we employ minimax theory to reveal several fundamental phenomena: the transition from sparse to dense functional data analysis, the costs of user-level, central, and federated differential privacy, and the intricate interplay between the different regimes of functional data analysis and privacy preservation. To the best of our knowledge, this is the first study to rigorously examine functional data estimation under multiple privacy constraints. Our theoretical findings are complemented by efficient private algorithms and extensive numerical evidence, providing a comprehensive exploration of this challenging problem.
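As a simplified illustration of one layer of this hierarchy (central differential privacy at a single server, not the estimators analyzed in the paper), the sketch below releases a functional mean on a common grid through the Gaussian mechanism; the bound $B$, the grid, and the privacy parameters are placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)

def private_functional_mean(curves, B=1.0, epsilon=1.0, delta=1e-5):
    """Release the pointwise mean of n curves on a common grid under central
    (epsilon, delta)-DP via the Gaussian mechanism.  Curves are assumed bounded
    by B in sup norm, so replacing one curve changes each grid value by at most
    2B/n and the vector of means by at most 2B*sqrt(K)/n in l2 norm.
    (A simplified illustration, not the estimators analyzed in the paper.)"""
    curves = np.asarray(curves)
    n, K = curves.shape
    mean = curves.mean(axis=0)
    l2_sensitivity = 2.0 * B * np.sqrt(K) / n
    sigma = l2_sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    return mean + rng.normal(scale=sigma, size=K)

grid = np.linspace(0.0, 1.0, 50)
curves = np.clip(np.sin(2 * np.pi * grid) + 0.3 * rng.normal(size=(200, 50)), -1.0, 1.0)
print(private_functional_mean(curves, B=1.0)[:5])
```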
Stable distributions are a celebrated class of probability laws used in various fields. The $\alpha$-stable process, and its exponentially tempered counterpart, the Classical Tempered Stable (CTS) process, are also prominent examples of L\'evy processes. Simulating these processes is critical for many applications, yet it remains computationally challenging due to their infinite jump activity. This survey provides an overview of the key properties of these objects, offering a roadmap for practitioners. The first part reviews the stability property; sampling algorithms are provided along with numerical illustrations. CTS processes are then presented, together with the Baeumer-Meerschaert algorithm for increment simulation, and a computational analysis is provided with numerical illustrations across different time scales.
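For the stable part, the classical Chambers-Mallows-Stuck construction gives a compact illustration of how a symmetric $\alpha$-stable variable can be simulated exactly from one uniform and one exponential draw (a minimal sketch; tempering and the Baeumer-Meerschaert algorithm are not shown here).

```python
import numpy as np

def symmetric_stable(alpha, size, rng=np.random.default_rng()):
    """Chambers-Mallows-Stuck sampler for a standard symmetric alpha-stable
    variable (skewness beta = 0, stability index 0 < alpha <= 2)."""
    U = rng.uniform(-np.pi / 2, np.pi / 2, size)   # uniform angle
    W = rng.exponential(1.0, size)                 # unit exponential
    return (np.sin(alpha * U) / np.cos(U) ** (1 / alpha)
            * (np.cos((1 - alpha) * U) / W) ** ((1 - alpha) / alpha))

samples = symmetric_stable(alpha=1.5, size=10_000)
print(np.percentile(samples, [5, 50, 95]))   # heavy tails, median near 0
```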
We develop and analyze stochastic inexact Gauss-Newton methods for nonlinear least-squares problems and for nonlinear systems of equations. Random models are formed using suitable sampling strategies for the matrices involved in the deterministic models. The analysis of the expected number of iterations needed in the worst case to achieve a desired level of accuracy in the first-order optimality condition provides guidelines for applying sampling and for enforcing, with a fixed probability, a suitable accuracy in the random approximations. Results of the numerical validation of the algorithms are presented.
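A minimal sketch of one subsampled Gauss-Newton iteration on a toy nonlinear least-squares problem is given below; the specific sampling strategies, inexact linear solves, and probabilistic accuracy requirements analyzed in the paper are not reproduced, and the problem data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

def residual(x, A, b):
    """Toy nonlinear least-squares residual r(x) = exp(A x) - b."""
    return np.exp(A @ x) - b

def jacobian(x, A):
    return np.exp(A @ x)[:, None] * A    # d r_i / d x_j

# Synthetic problem data
m, n = 5000, 10
A = rng.normal(size=(m, n)) / np.sqrt(n)
x_true = rng.normal(size=n)
b = np.exp(A @ x_true)

x = np.zeros(n)
batch = 500                                        # rows sampled per iteration
for it in range(20):
    S = rng.choice(m, size=batch, replace=False)   # row subsample
    rS, JS = residual(x, A[S], b[S]), jacobian(x, A[S])
    p, *_ = np.linalg.lstsq(JS, -rS, rcond=None)   # sampled Gauss-Newton step
    x = x + 0.5 * p                                # damped update
print(np.linalg.norm(x - x_true))
```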
We consider the problem of estimating the error when solving a system of differential algebraic equations. Richardson extrapolation is a classical technique that can be used to judge when computational errors are irrelevant and to estimate the discretization error. We have simulated constrained molecular dynamics using the GROMACS library and found that the output is not always amenable to Richardson extrapolation. We derive and illustrate Richardson extrapolation using a variety of numerical experiments, and we identify two necessary conditions that are not always satisfied by the GROMACS library.
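In its simplest form, if a computed quantity satisfies $A(h)=A+Ch^p+o(h^p)$, two step sizes yield both an improved value and an error estimate. The sketch below illustrates this on a forward Euler toy problem, not the GROMACS setting.

```python
import numpy as np

def euler_final(h, T=1.0):
    """Forward Euler for y' = -y, y(0) = 1; returns y(T) (exact: exp(-T))."""
    n = int(round(T / h))
    y = 1.0
    for _ in range(n):
        y -= h * y
    return y

h, p = 0.01, 1                             # step size, known order of the method
A1, A2 = euler_final(h), euler_final(h / 2)
A_extrap = (2**p * A2 - A1) / (2**p - 1)   # Richardson-extrapolated value
err_est = (A1 - A2) / (2**p - 1)           # estimate of the error in A2

exact = np.exp(-1.0)
print(abs(A1 - exact), abs(A2 - exact), abs(A_extrap - exact), err_est)
```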
We study a cost-aware programming language for higher-order recursion dubbed $\textbf{PCF}_\mathsf{cost}$ in the setting of synthetic domain theory (SDT). Our main contribution relates the denotational cost semantics of $\textbf{PCF}_\mathsf{cost}$ to its computational cost semantics, a new kind of dynamic semantics for program execution that serves as a mathematically natural alternative to operational semantics in SDT. In particular, we prove an internal, cost-sensitive version of Plotkin's computational adequacy theorem, giving a precise correspondence between the denotational and computational semantics for complete programs at base type. The constructions and proofs of this paper take place in the internal dependent type theory of an SDT topos extended by a phase distinction in the sense of Sterling and Harper. By controlling the interpretation of cost structure via the phase distinction in the denotational semantics, we show that $\textbf{PCF}_\mathsf{cost}$ programs also evince a noninterference property of cost and behavior. We verify the axioms of the type theory by means of a model construction based on relative sheaf models of SDT.
Quantum signal processing (QSP) represents a real scalar polynomial of degree $d$ using a product of unitary matrices of size $2\times 2$, parameterized by $(d+1)$ real numbers called the phase factors. This innovative representation of polynomials has a wide range of applications in quantum computation. When the polynomial of interest is obtained by truncating an infinite polynomial series, a natural question is whether the phase factors have a well-defined limit as the degree $d\to \infty$. While the phase factors are generally not unique, we find that there exists a consistent choice of parameterization so that the limit is well defined in the $\ell^1$ space. This generalization of QSP, called infinite quantum signal processing, can be used to represent a large class of non-polynomial functions. Our analysis reveals a surprising connection between the regularity of the target function and the decay properties of the phase factors. Our analysis also inspires a very simple and efficient algorithm to approximately compute the phase factors in the $\ell^1$ space. The algorithm uses only double precision arithmetic operations, and provably converges when the $\ell^1$ norm of the Chebyshev coefficients of the target function is upper bounded by a constant that is independent of $d$. This is also the first numerically stable algorithm for finding phase factors with provable performance guarantees in the limit $d\to \infty$.
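For concreteness, the sketch below evaluates the QSP polynomial $\mathrm{Re}\,\langle 0|U_\Phi(x)|0\rangle$ in one common ("$W_x$") convention and checks that all-zero phase factors reproduce the Chebyshev polynomial $T_d$; the convention and normalization may differ from those used in the paper.

```python
import numpy as np

def qsp_poly(phis, x):
    """Evaluate Re(<0| U_Phi(x) |0>) in the 'W_x' convention:
    U_Phi(x) = e^{i phi_0 Z} * prod_j [ W(x) e^{i phi_j Z} ],
    W(x) = [[x, i*sqrt(1-x^2)], [i*sqrt(1-x^2), x]]."""
    s = np.sqrt(1.0 - x**2)
    W = np.array([[x, 1j * s], [1j * s, x]])
    rz = lambda phi: np.diag([np.exp(1j * phi), np.exp(-1j * phi)])
    U = rz(phis[0])
    for phi in phis[1:]:
        U = U @ W @ rz(phi)
    return U[0, 0].real

# Sanity check: with all phase factors zero, the product reduces to W(x)^d,
# whose (0,0) entry is the Chebyshev polynomial T_d(x).
d = 4
phis = np.zeros(d + 1)
xs = np.linspace(-1, 1, 5)
print([qsp_poly(phis, x) for x in xs])
print([np.cos(d * np.arccos(x)) for x in xs])   # T_d(x) for comparison
```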
We develop some graph-based tests for spherical symmetry of a multivariate distribution using a method based on data augmentation. These tests are constructed using a new notion of signs and ranks that are computed along a path obtained by optimizing an objective function based on pairwise dissimilarities among the observations in the augmented data set. The resulting tests are exactly distribution-free: irrespective of the dimension of the data, the null distributions of the test statistics remain the same. These tests can be conveniently used for high-dimensional data, even when the dimension is much larger than the sample size. Under appropriate regularity conditions, we prove the consistency of these tests in the high-dimensional asymptotic regime, where the dimension grows to infinity while the sample size may or may not grow with the dimension. We also propose a generalization of our methods to handle situations where the center of symmetry is not specified by the null hypothesis. Several simulated data sets and a real data set are analyzed to demonstrate the utility of the proposed tests.
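The basic ingredients (data augmentation with a spherically symmetric reference sample, a path optimized over pairwise dissimilarities, and labels read along that path) can be illustrated generically as below; the greedy-path run count shown here is only a stand-in, not the signs-and-ranks statistic constructed in the paper.

```python
import numpy as np
from scipy.spatial.distance import cdist

rng = np.random.default_rng(0)

def greedy_path(D):
    """Approximate shortest Hamiltonian path through all points,
    built greedily from the pairwise dissimilarity matrix D."""
    n = D.shape[0]
    path, visited = [0], {0}
    while len(path) < n:
        last = path[-1]
        nxt = min((j for j in range(n) if j not in visited), key=lambda j: D[last, j])
        path.append(nxt); visited.add(nxt)
    return path

def runs_along_path(X, n_ref=None):
    """Augment the observed directions with uniform reference directions,
    order the pooled sample along a dissimilarity-optimized path, and count
    label runs along it; few runs suggest departure from spherical symmetry.
    (A generic graph-based illustration, not the authors' exact statistic.)"""
    n, d = X.shape
    n_ref = n_ref or n
    dirs = X / np.linalg.norm(X, axis=1, keepdims=True)     # uniform on sphere under H0
    ref = rng.standard_normal((n_ref, d))
    ref /= np.linalg.norm(ref, axis=1, keepdims=True)       # uniform reference directions
    pooled = np.vstack([dirs, ref])
    labels = np.r_[np.zeros(n), np.ones(n_ref)]
    lab = labels[greedy_path(cdist(pooled, pooled))]
    return 1 + int(np.sum(lab[1:] != lab[:-1]))             # number of runs

X = rng.standard_normal((100, 50))                          # spherical: many runs expected
print(runs_along_path(X))
```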
We consider an unknown multivariate function representing a system, such as a complex numerical simulator, that takes both deterministic and uncertain inputs. Our objective is to estimate the set of deterministic inputs leading to outputs whose probability (with respect to the distribution of the uncertain inputs) of belonging to a given set is less than a given threshold. This problem, which we call Quantile Set Inversion (QSI), occurs for instance in the context of robust (reliability-based) optimization problems, when looking for the set of solutions that satisfy the constraints with sufficiently large probability. To solve the QSI problem, we propose a Bayesian strategy, based on Gaussian process modeling and the Stepwise Uncertainty Reduction (SUR) principle, to sequentially choose the points at which the function should be evaluated to efficiently approximate the set of interest. We illustrate the performance and interest of the proposed SUR strategy through several numerical experiments.
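A simplified sketch of the sequential idea is given below: a Gaussian process surrogate over the (deterministic, uncertain) input pair, Monte Carlo over the uncertain input to estimate exceedance probabilities, and a naive acquisition that evaluates where the exceedance classification is most ambiguous. This acquisition only stands in for the SUR criterion of the paper, and the simulator, threshold, and distributions are hypothetical.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

f = lambda x, u: np.sin(3 * x) + 0.5 * u             # hypothetical simulator
T, p_max = 0.8, 0.2                                   # output threshold, probability level

# Initial design and candidate grid over (deterministic x, uncertain u)
X = rng.uniform(0, 1, (15, 2))                        # columns: x, u
y = f(X[:, 0], X[:, 1])
xs, us = np.linspace(0, 1, 50), rng.uniform(0, 1, 200)    # u ~ U(0, 1)
cand = np.array([(x, u) for x in xs for u in us])

gp = GaussianProcessRegressor(kernel=RBF(0.2), alpha=1e-6)
for it in range(20):
    gp.fit(X, y)
    mu, sd = gp.predict(cand, return_std=True)
    pi = norm.cdf((mu - T) / np.maximum(sd, 1e-9))    # P[f(x, u) > T | data]
    # Evaluate next where the exceedance classification is most ambiguous
    # (a simplified acquisition, standing in for the SUR criterion)
    j = np.argmin(np.abs(pi - 0.5))
    x_new = cand[j]
    X = np.vstack([X, x_new]); y = np.append(y, f(*x_new))

# Plug-in estimate of the QSI set: x with Monte Carlo exceedance probability <= p_max
gp.fit(X, y)
mu = gp.predict(cand).reshape(len(xs), len(us))
p_hat = (mu > T).mean(axis=1)
print(xs[p_hat <= p_max])
```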