We present an algorithm that uses Fujiwara's inequality to bound algebraic functions over ellipses of a certain type, allowing us to concretely implement a rigorous Gauss-Legendre integration method for algebraic functions over a line segment. We consider path-splitting strategies to improve the convergence of the method and show that they yield significant practical and asymptotic benefits. We have implemented these methods to compute period matrices of algebraic Riemann surfaces; the implementation is available in SageMath.
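As a minimal illustration of the quadrature at the core of the method, here is plain Gauss-Legendre integration on a segment in Python. This sketch omits the paper's key ingredient, the rigorous Fujiwara-based error bounds; the function and interval below are our own toy example.

```python
import numpy as np

def gauss_legendre_segment(f, a, b, n):
    """Approximate the integral of f over [a, b] with an n-point
    Gauss-Legendre rule, mapping the nodes from [-1, 1] to [a, b]."""
    nodes, weights = np.polynomial.legendre.leggauss(n)
    x = 0.5 * (b - a) * nodes + 0.5 * (a + b)
    return 0.5 * (b - a) * np.sum(weights * f(x))

# Toy example: a branch of the algebraic function y^2 = 1 - x^2,
# integrated over a segment well away from the branch points at +-1.
approx = gauss_legendre_segment(lambda x: np.sqrt(1.0 - x**2), -0.5, 0.5, 20)
```

Because the integrand is analytic on a neighborhood of the segment, the rule converges geometrically in the number of nodes; the branch points at $\pm 1$ determine the Bernstein ellipse that governs the rate, which is exactly the kind of information the rigorous bounds exploit.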
We provide a polynomial-time classical algorithm for noisy quantum circuits. The algorithm computes the expectation value of any observable for any circuit, with a small average error over input states drawn from an ensemble (e.g. the computational basis). Our approach is based upon the intuition that noise exponentially damps non-local correlations relative to local correlations. This enables one to classically simulate a noisy quantum circuit by only keeping track of the dynamics of local quantum information. Our algorithm also enables sampling from the output distribution of a circuit in quasi-polynomial time, so long as the distribution anti-concentrates. A number of practical implications are discussed, including a fundamental limit on the efficacy of noise mitigation strategies: any quantum circuit for which error mitigation is efficient must be classically simulable.
We present a novel and mathematically transparent approach to function approximation and to the training of large, high-dimensional neural networks, based on the approximate least-squares solution of associated Fredholm integral equations of the first kind by Ritz-Galerkin discretization, Tikhonov regularization, and tensor-train methods. Practical applications to supervised learning problems of regression and classification type confirm that the resulting algorithms are competitive with state-of-the-art neural-network-based methods.
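The regularized least-squares step at the heart of such a Ritz-Galerkin discretization can be sketched as an ordinary Tikhonov problem. This omits the tensor-train compression that makes the full method scale; the monomial basis and toy data below are our own illustration, not the paper's setup.

```python
import numpy as np

def tikhonov_least_squares(A, b, lam):
    """Solve min_x ||A x - b||^2 + lam ||x||^2 via the regularized
    normal equations (A^T A + lam I) x = A^T b. In the full method,
    A collects Galerkin basis evaluations and x is stored in
    tensor-train format; here both are plain dense arrays."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

# Toy regression: fit noisy samples of sin(pi x) in a monomial basis.
rng = np.random.default_rng(0)
xs = np.linspace(-1.0, 1.0, 200)
A = np.vander(xs, 8, increasing=True)      # "basis evaluations" at the samples
b = np.sin(np.pi * xs) + 0.01 * rng.standard_normal(xs.size)
coef = tikhonov_least_squares(A, b, lam=1e-6)
```

The regularization parameter trades data fit against the norm of the coefficient vector; in the high-dimensional setting it also stabilizes the ill-posedness inherited from the first-kind Fredholm equation.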
The "classical" (weak) greedy algorithm is widely used within model order reduction to compute a reduced basis in the offline training phase: an a posteriori error estimator is maximized, and the snapshot corresponding to the maximizer is added to the basis. Since these snapshots are determined by a sufficiently detailed discretization, the offline phase is often computationally extremely costly. We suggest replacing the serial determination of one snapshot after the other by a parallel approach: we introduce a batch size $b$ and add $b$ snapshots to the current basis in every greedy iteration; these snapshots are computed in parallel. We prove convergence rates for this new batch greedy algorithm and compare them to those of the classical (weak) greedy algorithm in the Hilbert and Banach space cases. We then present numerical results in which we apply a (parallel) implementation of the proposed algorithm to the linear elliptic thermal block problem. We analyze the convergence rate as well as the offline and online wall-clock times for different batch sizes. We show that the proposed variant can significantly speed up the offline phase while only moderately increasing the size of the reduced problem. The benefit of the parallel batch greedy algorithm increases for more complicated problems.
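The batch selection loop can be sketched as follows. This is a toy serial stand-in, assuming precomputed snapshots and an exact projection-error "estimator"; in the real method the $b$ selected snapshots are computed in parallel by the detailed solver, and all names below are ours, not the paper's API.

```python
import numpy as np

def batch_greedy(snapshots, estimator, b, tol, max_iter=100):
    """Toy batch (weak) greedy loop: each iteration evaluates the error
    estimator over the training set and adds the b snapshots with the
    largest estimated errors to the basis. Computing those b snapshots
    is the step that runs in parallel in the full method."""
    basis = np.zeros((0, snapshots.shape[1]))
    for _ in range(max_iter):
        errs = estimator(basis)
        if errs.max() <= tol:
            break
        for v in snapshots[np.argsort(errs)[-b:]]:   # the b maximizers
            v = v - basis.T @ (basis @ v)            # Gram-Schmidt step
            norm = np.linalg.norm(v)
            if norm > 1e-12:                         # skip dependent snapshots
                basis = np.vstack([basis, v / norm])
    return basis

# Toy usage: random "snapshots", estimator = exact projection error.
rng = np.random.default_rng(0)
S = rng.standard_normal((50, 10))
proj_err = lambda B: np.linalg.norm(S - S @ B.T @ B, axis=1)
B = batch_greedy(S, proj_err, b=2, tol=1e-8)
```

The trade-off visible even in this toy: a batch may contain snapshots whose errors were all large before the iteration but which overlap once added, which is why the reduced basis can grow somewhat larger than in the serial greedy for the same tolerance.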
A global approximation method of Nystr\"om type is explored for the numerical solution of a class of nonlinear integral equations of the second kind. The cases of smooth and weakly singular kernels are both considered. In the former case the method uses a Gauss-Legendre rule, whereas in the latter it resorts to a product rule based on Legendre nodes. Stability and convergence are proved in function spaces equipped with the uniform norm, and several numerical tests show the good performance of the proposed method. An application to the interior Neumann problem for the Laplace equation with nonlinear boundary conditions is also considered.
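For the smooth-kernel case, the Nyström discretization can be sketched as follows: collocate the equation at the Gauss-Legendre nodes and solve the resulting finite nonlinear system, here by a plain fixed-point iteration. The toy equation and all names are ours; the paper's stability and convergence analysis in uniform-norm spaces is not reproduced by this sketch.

```python
import numpy as np

def nystrom_nonlinear(kernel, g, f, n, iters=200):
    """Nystrom-type discretization of the second-kind nonlinear equation
        u(x) = f(x) + int_{-1}^{1} k(x,t) g(t, u(t)) dt
    with an n-point Gauss-Legendre rule, solved by fixed-point iteration
    on the nodal values u_i ~ u(t_i)."""
    t, w = np.polynomial.legendre.leggauss(n)
    K = kernel(t[:, None], t[None, :])        # matrix of k(x_i, t_j)
    u = f(t)
    for _ in range(iters):
        u = f(t) + K @ (w * g(t, u))
    return t, u

# Toy problem with known solution u(x) = x:
#   k(x,t) = x/4, g(t,u) = u^2, f(x) = 5x/6, so the integral term is x/6.
t, u = nystrom_nonlinear(lambda x, tt: x / 4 + 0.0 * tt,
                         lambda tt, uu: uu ** 2,
                         lambda x: 5.0 * x / 6.0,
                         n=8)
```

For a weakly singular kernel the plain rule above loses accuracy, which is where the product rule based on Legendre nodes takes over in the paper's second case.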
To solve many problems on graphs, graph traversals are used; the usual variants are depth-first search (DFS) and breadth-first search (BFS). A graph traversal successively reaches all vertices of the graph that belong to one connected component. BFS is the usual choice when constructing efficient algorithms for finding the connected components of a graph. Simple-iteration methods for solving systems of linear equations with modified graph adjacency matrices and a properly specified right-hand side can be regarded as graph traversal algorithms. These traversal algorithms are, generally speaking, equivalent neither to DFS nor to BFS. An example of such a traversal algorithm is the one associated with the Gauss-Seidel method. For an arbitrary connected graph, this algorithm visits all vertices in no more iterations than BFS requires; for many problem instances, fewer iterations suffice.
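A minimal sketch of the idea, using our own toy construction rather than the paper's exact modified matrix: iterate $x \leftarrow (b + Ax)/d$ with right-hand side $b = e_s$ and a diagonal shift $d$ large enough for convergence, and track which entries of $x$ have become nonzero.

```python
import numpy as np

def component_by_gauss_seidel(adj, source):
    """Find the connected component of `source` by tracking the nonzero
    pattern of Gauss-Seidel iterates for a diagonally dominant system
    built from the adjacency matrix (an illustrative stand-in for the
    modified matrices in the abstract). One in-place sweep can propagate
    information through many edges, whereas a Jacobi sweep advances
    exactly one BFS level."""
    n = len(adj)
    d = 1.0 + max(sum(row) for row in adj)     # ensure diagonal dominance
    x = np.zeros(n)
    b = np.zeros(n)
    b[source] = 1.0
    visited_prev, sweeps = -1, 0
    while True:
        for i in range(n):                     # in-place (Gauss-Seidel) sweep
            x[i] = (b[i] + sum(adj[i][j] * x[j] for j in range(n))) / d
        sweeps += 1
        visited = int(np.count_nonzero(x))
        if visited == visited_prev:            # nonzero pattern has stabilized
            break
        visited_prev = visited
    return {i for i in range(n) if x[i] > 0}, sweeps
```

On a path graph 0-1-2-3 with this vertex ordering, a single in-place sweep pushes the source's value along the whole path, while BFS needs three levels; reversing the numbering would slow the same sweep down to one level per pass, illustrating why the method is equivalent to neither classical traversal.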
In this note we propose a new algorithm for checking whether two counting functions on a free monoid $M_r$ of rank $r$ are equivalent modulo a bounded function. The previously known algorithm has time complexity $O(n)$ for all ranks $r>2$; in the case $r=2$, however, its complexity was estimated only as $O(n^2)$. Here we apply a new approach, based on explicit basis expansion and weighted rectangle summation, which allows us to construct a much simpler algorithm with time complexity $O(n)$ for any $r\geq 2$.
Neural ordinary differential equations (Neural ODEs) are a class of machine learning models that approximate the time derivative of hidden states using a neural network. They are powerful tools for modeling continuous-time dynamical systems, enabling the analysis and prediction of complex temporal behaviors. However, improving their stability and physical interpretability remains a challenge. This paper introduces new conservation relations in Neural ODEs using Lie symmetries in both the hidden-state dynamics and the backpropagation dynamics. These conservation laws are then incorporated into the loss function as additional regularization terms, potentially enhancing the physical interpretability and generalizability of the model. To illustrate this method, the paper derives Lie symmetries and conservation laws in a simple Neural ODE designed to monitor charged particles in a sinusoidal electric field. New loss functions are constructed from these conservation relations, demonstrating the applicability of symmetry-regularized Neural ODEs to typical modeling tasks, such as the data-driven discovery of dynamical systems.
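To make the regularization idea concrete, here is a toy conservation-based penalty. We assume, for illustration only, a static spatially sinusoidal field $E(x) = E_0 \sin(kx)$, for which total energy is conserved; the paper's terms are derived from Lie symmetries and may differ, and all parameter names here are ours.

```python
import numpy as np

def energy_regularizer(xs, vs, q=1.0, m=1.0, E0=1.0, k=1.0):
    """Penalize the variance of the conserved energy
        H = m v^2 / 2 + (q E0 / k) cos(k x)
    along a predicted trajectory (positions xs, velocities vs) of a
    charge q in the static field E(x) = E0 sin(k x). A trajectory that
    respects the conservation law gives a penalty of zero."""
    H = 0.5 * m * vs ** 2 + (q * E0 / k) * np.cos(k * xs)
    return np.var(H)

# In training, this term would be added to the data-fitting loss, e.g.
#   loss = mse(pred, target) + lam * energy_regularizer(pred_x, pred_v)
# with a hypothetical weight lam chosen by validation.
```

The regularizer is differentiable in the predicted states, so it can be backpropagated through like any other loss term.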
This work explores the representation of univariate and multivariate functions as matrix product states (MPS), also known as quantized tensor trains (QTT). It proposes an algorithm that employs iterative Chebyshev expansions and Clenshaw evaluations to represent analytic and highly differentiable functions as MPS Chebyshev interpolants. The algorithm demonstrates rapid convergence for highly differentiable functions, aligning with theoretical predictions, and generalizes efficiently to multidimensional scenarios. Its performance is compared with that of tensor cross-interpolation (TCI) and multiscale interpolative constructions in a comprehensive study. When function evaluation is inexpensive or when the function is not analytic, TCI is generally more efficient for function loading. However, the proposed method shows competitive performance, outperforming TCI in certain multivariate scenarios. Moreover, it shows advantageous scaling rates and generalizes to a wider range of tasks by providing a framework for function composition in MPS, which is useful for nonlinear problems and many-body statistical physics.
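The scalar building block of the construction, Clenshaw evaluation of a Chebyshev series, can be sketched on plain arrays; the algorithm described above performs the analogous recurrence on MPS/QTT representations rather than on numbers.

```python
import numpy as np

def clenshaw(coeffs, x):
    """Evaluate the Chebyshev series sum_k c_k T_k(x) by the Clenshaw
    recurrence b_k = c_k + 2x b_{k+1} - b_{k+2}, avoiding explicit
    computation of the Chebyshev polynomials T_k."""
    b1, b2 = 0.0, 0.0
    for c in coeffs[:0:-1]:                   # c_d, ..., c_1
        b1, b2 = 2.0 * x * b1 - b2 + c, b1
    return x * b1 - b2 + coeffs[0]

# Chebyshev interpolant of exp on [-1, 1] (degree 15), evaluated at 0.3;
# the coefficients decay super-exponentially for this analytic function.
c = np.polynomial.chebyshev.chebinterpolate(np.exp, 15)
val = clenshaw(c, 0.3)
```

In the MPS setting, $x$ is replaced by an MPS encoding of the identity grid and the multiplications by MPS products with truncation, which is where the rank growth and truncation strategy of the paper come in.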
Simplicial type theory extends homotopy type theory with a directed path type which internalizes the notion of a homomorphism within a type. This concept has significant applications both within mathematics -- where it allows for synthetic (higher) category theory -- and programming languages -- where it leads to a directed version of the structure identity principle. In this work, we construct the first types in simplicial type theory with non-trivial homomorphisms. We extend simplicial type theory with modalities and new reasoning principles to obtain triangulated type theory, in order to construct the universe of discrete types $\mathcal{S}$. We prove that homomorphisms in this type correspond to ordinary functions between types, i.e., that $\mathcal{S}$ is directed univalent. The construction of $\mathcal{S}$ is foundational for both of the aforementioned applications of simplicial type theory. We are able to define several crucial examples of categories and to recover important results from category theory. Using $\mathcal{S}$, we are also able to define various types whose usage is guaranteed to be functorial. These provide the first complete examples of the proposed directed structure identity principle.
Computing cross-partial derivatives from few model runs is relevant in many modeling tasks, such as stochastic approximation, derivative-based ANOVA, the exploration of complex models, and active subspaces. This paper introduces surrogates of all the cross-partial derivatives of functions by evaluating such functions at $N$ randomized points and using a set of $L$ constraints. The randomized points rely on independent, central, and symmetric variables. The associated estimators, based on $NL$ model runs, reach the optimal rate of convergence (i.e., $\mathcal{O}(N^{-1})$), and the biases of our approximations do not suffer from the curse of dimensionality for a wide class of functions. These results are used for i) computing the main sensitivity indices and upper bounds on sensitivity indices, and ii) deriving emulators of simulators or surrogates of functions thanks to the derivative-based ANOVA. Simulations are presented to show the accuracy of our emulators and of our estimators of sensitivity indices. The plug-in estimates of the indices based on one-sample U-statistics are numerically much more stable.
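To illustrate the flavor of such randomized estimators, here is a simplified second-order SPSA-style Monte Carlo estimator of a single cross-partial derivative from symmetric Rademacher perturbations. It is a stand-in for, not a reproduction of, the paper's $NL$-run construction with $L$ constraints, and all names are ours.

```python
import numpy as np

def cross_partial_estimate(f, x, i, j, N, h=1e-3, rng=None):
    """Estimate d^2 f / dx_i dx_j (i != j) at x from 2N + 1 model runs.
    Each draw uses a symmetric Rademacher direction delta in {-1, +1}^d:
    the second central difference along h*delta, scaled by delta_i*delta_j,
    is an unbiased estimator of the off-diagonal Hessian entry up to O(h^2)."""
    rng = rng or np.random.default_rng(0)
    fx = f(x)
    total = 0.0
    for _ in range(N):
        delta = rng.choice([-1.0, 1.0], size=x.size)  # central, symmetric draws
        d2 = f(x + h * delta) - 2.0 * fx + f(x - h * delta)
        total += d2 / (2.0 * h ** 2) * delta[i] * delta[j]
    return total / N

# For the bilinear f(x) = x0 * x1, every single draw already returns the
# true cross-partial (which is 1 everywhere), so the average is exact.
est = cross_partial_estimate(lambda v: v[0] * v[1], np.array([0.3, -0.7]), 0, 1, N=100)
```

For non-bilinear functions the estimator has $O(h^2)$ bias and Monte Carlo variance decaying like $1/N$, consistent with the optimal $\mathcal{O}(N^{-1})$ rate the abstract refers to.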