We investigate the dividing line between classical and quantum computational power in estimating properties of matrix functions. More precisely, we study the computational complexity of two primitive problems: given a function $f$ and a Hermitian matrix $A$, compute a matrix element of $f(A)$, or compute a local measurement on $f(A)|0\rangle^{\otimes n}$, with $|0\rangle^{\otimes n}$ an $n$-qubit reference state vector, in both cases up to additive approximation error. We consider four functions -- monomials, Chebyshev polynomials, the time evolution function, and the inverse function -- and probe the complexity across a broad landscape of input regimes: two types of matrix access (sparse and Pauli access), matrix properties (norm, sparsity), the approximation error, and function-specific parameters. We identify BQP-complete forms of both problems for each function, and then toggle the problem parameters toward easier regimes to see where hardness persists and where the problem becomes classically easy. As part of our results we make concrete a hierarchy of hardness across the functions: in parameter regimes where we give classically efficient algorithms for monomials, the three other functions remain robustly BQP-hard, or hard under standard computational complexity assumptions. Among the classically easy regimes we identify, we show that for any polynomial of degree $\mathrm{poly}(n)$ both problems can be efficiently classically simulated when $A$ has $O(\log n)$ non-zero coefficients in the Pauli basis. This contrasts with the sparse access model, where the problems are BQP-complete even for constant row sparsity, although the stated Pauli access efficiently yields sparse access with row sparsity $O(\log n)$. Our work provides a catalog of efficient quantum and classical algorithms for fundamental linear-algebra tasks.
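To make the two primitive problems concrete, here is a minimal brute-force sketch on an assumed toy instance (exact diagonalization of a small Hermitian $A$ given by a few Pauli coefficients); this is only an illustration of the problem statements, not the paper's efficient algorithms.

```python
# Toy instance of the two primitives: a matrix element of f(A) and a local
# measurement on f(A)|0...0>, computed by brute-force diagonalization.
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
PAULI = {"I": I2, "X": X, "Z": Z}

def pauli_string(s):
    """Kronecker product of single-qubit Paulis, e.g. 'XZI'."""
    out = np.array([[1.0 + 0j]])
    for c in s:
        out = np.kron(out, PAULI[c])
    return out

n = 3
# A with few nonzero Pauli coefficients (the O(log n) regime of the abstract)
A = 0.5 * pauli_string("XZI") + 0.25 * pauli_string("ZZX")

# f(A) for f = Chebyshev polynomial T_4, via eigendecomposition
w, V = np.linalg.eigh(A)
T4 = np.polynomial.chebyshev.Chebyshev([0, 0, 0, 0, 1])
fA = (V * T4(w)) @ V.conj().T

e0 = np.zeros(2**n); e0[0] = 1.0
print("matrix element <0|f(A)|0>:", (e0 @ fA @ e0).real)

# local measurement: expectation of Z on qubit 0 in the normalized state f(A)|0...0>
psi = fA @ e0
psi /= np.linalg.norm(psi)
print("<psi|Z_0|psi>:", (psi.conj() @ pauli_string("ZII") @ psi).real)
```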
We study the strong approximation of solutions to singular stochastic kinetic equations (also referred to as second-order SDEs) driven by $\alpha$-stable processes, using an Euler-type scheme inspired by [11]. For these equations, the stability index $\alpha$ lies in the range $(1,2)$, and the drift term exhibits anisotropic $\beta$-H\"older continuity with $\beta > 1 - \frac{\alpha}{2}$. We establish a convergence rate of $\frac{1}{2} + \left(\frac{\beta}{\alpha(1+\alpha)} \wedge \frac{1}{2}\right)$, which aligns with the results in [4] concerning first-order SDEs.
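As an illustration of the scheme's flavor only (a minimal sketch with an assumed stand-in drift, not the paper's scheme or assumptions), one can discretize a kinetic SDE with an Euler step in which the velocity component receives $\alpha$-stable increments:

```python
# Euler-type scheme for the kinetic SDE
#   dX_t = V_t dt,  dV_t = b(X_t, V_t) dt + dL_t,
# with L an alpha-stable Levy process; b below is an arbitrary
# Holder-continuous stand-in.
import numpy as np
from scipy.stats import levy_stable

rng = np.random.default_rng(0)
alpha, beta_h = 1.5, 0.4                      # stability index in (1,2), Holder exponent
b = lambda x, v: np.sign(x) * np.abs(x) ** beta_h - v   # beta-Holder drift in x

T, N = 1.0, 1000
h = T / N
x, v = 0.0, 0.0
for _ in range(N):
    # an alpha-stable increment over a step of length h scales like h**(1/alpha)
    dL = levy_stable.rvs(alpha, 0, scale=h ** (1 / alpha), random_state=rng)
    x, v = x + v * h, v + b(x, v) * h + dL
print("X_T, V_T =", x, v)
```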
We consider an unknown multivariate function representing a system (such as a complex numerical simulator) taking both deterministic and uncertain inputs. Our objective is to estimate the set of deterministic inputs leading to outputs whose probability (with respect to the distribution of the uncertain inputs) of belonging to a given set is less than a given threshold. This problem, which we call Quantile Set Inversion (QSI), occurs for instance in robust (reliability-based) optimization, when looking for the set of solutions that satisfy the constraints with sufficiently large probability. To solve the QSI problem we propose a Bayesian strategy, based on Gaussian process modeling and the Stepwise Uncertainty Reduction (SUR) principle, for sequentially choosing the points at which the function should be evaluated so as to efficiently approximate the set of interest. We illustrate the performance and practical interest of the proposed SUR strategy through several numerical experiments.
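To fix ideas, the sketch below estimates the QSI target set for an assumed toy function by brute-force Monte Carlo; the point of the GP-based SUR strategy is precisely to avoid this many evaluations when the simulator is expensive.

```python
# QSI target set Gamma = { x : P_U( f(x,U) in C ) <= alpha }, here with
# f(x,u) = x^2 + u, C = (1, inf), U ~ N(0, 0.1^2), estimated by Monte Carlo.
import numpy as np

rng = np.random.default_rng(0)
f = lambda x, u: x**2 + u                  # (deterministic input, uncertain input)
alpha = 0.05                               # probability threshold
U = rng.normal(scale=0.1, size=100_000)    # samples of the uncertain input

xs = np.linspace(-2.0, 2.0, 401)
mask = np.array([np.mean(f(x, U) > 1.0) <= alpha for x in xs])
print("estimated QSI set: [%.3f, %.3f]" % (xs[mask].min(), xs[mask].max()))
# exact answer: |x| <= sqrt(1 - 0.1 * 1.645) ~ 0.914
```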
Many articles have recently been devoted to Mahler equations, partly because of their links with other branches of mathematics such as automata theory. Hahn series (a generalization of Puiseux series allowing arbitrary exponents of the indeterminate, as long as the set supporting them is well-ordered) play a central role in the theory of Mahler equations. In this paper, we address the following fundamental question: is there an algorithm to compute the Hahn series solutions of a given linear Mahler equation? What makes this question interesting is the fact that the Hahn series appearing in this context can have complicated supports with infinitely many accumulation points. Our (positive) answer to this question involves, among other things, the construction of a computable well-ordered receptacle for the supports of the potential Hahn series solutions.
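For readers new to the subject, the sketch below checks a classical toy example of a linear Mahler equation on truncated power series: $f(x) = \prod_{k\ge 0}(1+x^{2^k})$ satisfies $f(x) = (1+x)f(x^2)$. This is far from the Hahn-series generality the paper handles and is included only to make the notion of a Mahler equation concrete.

```python
# Verify the linear Mahler equation f(x) = (1+x) f(x^2) for
# f(x) = prod_{k>=0} (1 + x^{2^k}), modulo x^N.
import sympy as sp
from math import prod

x = sp.symbols("x")
N = 32
f = prod(1 + x ** (2**k) for k in range(6))   # six factors suffice modulo x^32
lhs = sp.expand(f)
rhs = sp.expand((1 + x) * f.subs(x, x**2))
# compare coefficients of degree 0 .. N-1
diff = sp.Poly(lhs - rhs, x)
print(all(c == 0 for c in diff.all_coeffs()[-N:]))  # True
```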
This work is concerned with the computation of the first-order variation for one-dimensional hyperbolic partial differential equations. In the presence of shock waves, the main challenge is addressed by developing a numerical method to compute the evolution of the generalized tangent vectors introduced by Bressan and Marson (1995). Our basic strategy is to combine conservative numerical schemes with a novel expression of the interface conditions for the tangent vectors along the discontinuity. Based on this, we propose a simple numerical method to compute the tangent vectors for general hyperbolic systems. Numerical results are presented for Burgers' equation and a $2 \times 2$ hyperbolic system with two genuinely nonlinear fields.
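As a baseline illustration of the conservative-scheme ingredient (a minimal sketch with assumed Riemann data, not including the tangent-vector evolution or the interface conditions), the following solves Burgers' equation through a shock with the Lax-Friedrichs scheme:

```python
# Conservative Lax-Friedrichs scheme for u_t + (u^2/2)_x = 0 with
# shock-forming Riemann data.
import numpy as np

M, T = 400, 0.5
x = np.linspace(-1.0, 1.0, M)
dx = x[1] - x[0]
u = np.where(x < 0, 1.0, 0.0)          # Riemann data: shock moving at speed 1/2
flux = lambda w: 0.5 * w**2

t = 0.0
while t < T:
    dt = min(0.8 * dx / max(np.abs(u).max(), 1e-12), T - t)  # CFL condition
    up, um = np.roll(u, -1), np.roll(u, 1)                   # periodic in x
    u = 0.5 * (up + um) - dt / (2 * dx) * (flux(up) - flux(um))
    t += dt
# the smeared shock should sit near x = T/2 = 0.25
print("shock position ~", x[np.argmin(np.abs(u - 0.5))])
```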
Quantum computing has emerged as a promising avenue for achieving significant speedups over classical computing, particularly in large-scale PDE simulations. One of the main quantum approaches relies on Hamiltonian simulation, which is directly applicable only to Schr\"odinger-type equations. To address this limitation, Schr\"odingerisation techniques have been developed, employing the warped transformation to convert general linear PDEs into Schr\"odinger-type equations. However, despite the development of Schr\"odingerisation techniques, the explicit quantum circuits implementing them for general PDEs remain to be designed. In this paper, we present a detailed implementation of a quantum algorithm for general PDEs using Schr\"odingerisation techniques. We illustrate the approach on the heat equation and on the advection equation approximated by the upwind scheme, demonstrating its effectiveness. Complexity analysis is also carried out to demonstrate the quantum advantage of these algorithms in high dimensions over their classical counterparts.
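To convey the core idea classically (a minimal sketch under assumptions: a single decaying mode $u' = -\lambda u$, standing in for one Fourier mode of the heat equation), the warped transformation $v(t,p) = e^{-p}u(t)$ turns the decay into the transport equation $v_t - \lambda v_p = 0$, solved below with the upwind scheme mentioned in the abstract; $u(t)$ is read off at $p = 0$.

```python
# Warped transformation demo: v(t,p) = exp(-p) u(t) satisfies v_t = lam * v_p,
# a transport equation; solve it with an upwind scheme and recover u(T) = v(T,0).
import numpy as np

lam, u0, T = 2.0, 1.0, 0.5
P, M = 6.0, 1200                       # p-domain [0, P], grid size
p = np.linspace(0.0, P, M)
dp = p[1] - p[0]
v = np.exp(-p) * u0                    # v(0,p) = e^{-p} u(0)

dt0 = 0.9 * dp / lam                   # CFL for wave speed lam
t = 0.0
while t < T:
    dt = min(dt0, T - t)
    # information flows leftward, so the upwind stencil uses v[j+1]
    v[:-1] += lam * dt / dp * (v[1:] - v[:-1])
    t += dt
print("recovered u(T):", v[0], " exact:", u0 * np.exp(-lam * T))
```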
The elapsed time equation is an age-structured model that describes the dynamics of interconnected spiking neurons through the time elapsed since their last discharge, leading to many interesting questions on the evolution of the system from both mathematical and biological points of view. In this work we deal with two cases: instantaneous transmission after a spike, and transmission with a distributed delay that depends on the previous history of the system, which is a more realistic assumption. We then revisit well-posedness in order to carry out a numerical analysis, adapting the classical upwind scheme through a fixed-point approach. We improve previous well-posedness results by relaxing some hypotheses on the non-linearity for instantaneous transmission, including the strongly excitatory case, while for the numerical analysis we prove that the approximation given by the explicit upwind scheme converges to the solution of the non-linear problem through BV estimates. We also present numerical simulations comparing the behavior of the system under instantaneous transmission and under distributed delay for different parameters, leading to solutions with different asymptotic profiles.
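As an illustrative sketch only (the model form and the firing rate below are assumed stand-ins), an explicit upwind discretization of the instantaneous-transmission case can be combined with a fixed-point iteration for the total activity at each time step, in the spirit of the approach described above:

```python
# Upwind scheme for an elapsed-time-type equation
#   n_t + n_s + p(s, N(t)) n = 0,  n(t,0) = N(t) = int p(s, N(t)) n(t,s) ds,
# with an assumed threshold-type firing rate p.
import numpy as np

S, M, T = 5.0, 500, 10.0
s = np.linspace(0.0, S, M)
ds = s[1] - s[0]
dt = 0.5 * ds                          # CFL for unit transport speed
# assumed rate: firing once s exceeds a threshold decreasing with activity N
p = lambda s, N: (s > 0.5 + 0.2 / (1.0 + N)).astype(float)

n = np.exp(-s)
n /= n.sum() * ds                      # probability density in s
N = 0.0
for _ in range(int(T / dt)):
    for _ in range(20):                # fixed point for N(t) = int p(s,N) n ds
        N = (p(s, N) * n).sum() * ds
    n[1:] -= dt / ds * (n[1:] - n[:-1]) + dt * p(s[1:], N) * n[1:]
    n[0] = N                           # boundary condition: reset at s = 0
print("approximate steady-state activity N:", N)
```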
In this paper we develop a novel hidden Markov graphical model to investigate time-varying interconnectedness between different financial markets. To identify conditional correlation structures under varying market conditions and accommodate stylized facts embedded in financial time series, we rely upon the generalized hyperbolic family of distributions with time-dependent parameters evolving according to a latent Markov chain. We exploit its location-scale mixture representation to build a penalized EM algorithm for estimating the state-specific sparse precision matrices by means of an $L_1$ penalty. The proposed approach leads to regime-specific conditional correlation graphs that allow us to identify different degrees of network connectivity of returns over time. The methodology's effectiveness is validated through simulation exercises under different scenarios. In the empirical analysis we apply our model to daily returns of a large set of market indexes, cryptocurrencies and commodity futures over the period 2017-2023.
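A minimal sketch of the estimation core (an assumed simplification: one regime with Gaussian returns, so the hidden Markov chain, the EM algorithm, and the generalized hyperbolic emissions are omitted) is the graphical-lasso step producing a sparse state-specific precision matrix, and hence a conditional-correlation graph:

```python
# L1-penalized precision-matrix estimate for the returns of one regime.
import numpy as np
from sklearn.covariance import graphical_lasso

rng = np.random.default_rng(0)
# stand-in for the returns assigned to one latent regime by the EM algorithm
returns = rng.multivariate_normal(np.zeros(5), 0.4 + 0.6 * np.eye(5), size=500)

emp_cov = np.cov(returns, rowvar=False)
cov, prec = graphical_lasso(emp_cov, alpha=0.1)   # alpha = L1 penalty level
# zeros in the precision matrix = missing edges in the regime's
# conditional-correlation graph
print((np.abs(prec) > 1e-8).astype(int))
```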
Overlaps between words are crucial in many areas of computer science, such as code design, stringology, and bioinformatics. A self-overlapping word is characterized by its periods and borders. A period of a word $u$ is the starting position of a suffix of $u$ that is also a prefix of $u$, and such a suffix is called a border. Each word of length $n>0$ has a set of periods, but not all combinations of integers are sets of periods. Computing the period set of a word $u$ takes linear time in the length of $u$. We address the question of computing the set, denoted $\Gamma_n$, of all period sets of words of length $n$. Although period sets have been characterized, there is no formula to compute the cardinality of $\Gamma_n$ (which is exponential in $n$), and the known dynamic programming algorithm to enumerate $\Gamma_n$ suffers from its space complexity. We present an incremental approach to compute $\Gamma_n$ from $\Gamma_{n-1}$, which reduces the space complexity, and then a constructive certification algorithm useful for verification purposes. The incremental approach defines a parental relation between sets in $\Gamma_{n-1}$ and $\Gamma_n$, enabling one to investigate the dynamics of period sets and their intriguing statistical properties. Moreover, the period set of a word $u$ is the key for computing the absence probability of $u$ in random texts. Thus, knowing $\Gamma_n$ is useful to assess the significance of word statistics, such as the number of missing words in a random text.
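The linear-time primitive mentioned above (computing the period set of a single word) can be sketched with the classical border (failure) function; enumerating $\Gamma_n$ over all words is of course the hard part addressed by the paper.

```python
def period_set(u: str) -> set[int]:
    """Nontrivial periods of u via the classical border (failure) function."""
    n = len(u)
    border = [0] * n                   # border[i] = longest proper border of u[:i+1]
    k = 0
    for i in range(1, n):
        while k > 0 and u[i] != u[k]:
            k = border[k - 1]
        if u[i] == u[k]:
            k += 1
        border[i] = k
    periods, b = set(), border[-1]
    while b > 0:                       # a border of length b yields period n - b
        periods.add(n - b)
        b = border[b - 1]
    return periods

print(period_set("aabaa"))             # {3, 4}: borders 'aa' and 'a'
```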
We show that one-way functions exist if and only if there exists an efficient distribution relative to which almost-optimal compression is hard on average. The result is obtained by combining a theorem of Ilango, Ren, and Santhanam with one of Bauwens and Zimand.
We develop and analyze stochastic inexact Gauss-Newton methods for nonlinear least-squares problems and inexact Newton methods for nonlinear systems of equations. Random models are formed using suitable sampling strategies for the matrices involved in the deterministic models. The analysis of the expected number of iterations needed in the worst case to achieve a desired level of accuracy in the first-order optimality condition provides guidelines for applying sampling and enforcing, with fixed probability, a suitable accuracy in the random approximations. Results of the numerical validation of the algorithms are presented.
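For intuition, here is a minimal sketch of the sampling idea on an assumed toy problem (a plain subsampled Gauss-Newton step, without the paper's probabilistic accuracy control or globalization):

```python
# Subsampled Gauss-Newton for min ||r(x)||^2: each step builds the Jacobian
# from a random sample of the residual components.
import numpy as np

rng = np.random.default_rng(0)
m, d = 2000, 5
A = rng.normal(size=(m, d))
y = rng.normal(size=m)
r = lambda x: np.tanh(A @ x) - y                    # residuals r_i(x)
# Jacobian rows for a sampled index set only
J = lambda x, idx: A[idx] * (1.0 - np.tanh(A[idx] @ x) ** 2)[:, None]

x = np.zeros(d)
for _ in range(30):
    idx = rng.choice(m, size=200, replace=False)    # sample 10% of the residuals
    step, *_ = np.linalg.lstsq(J(x, idx), -r(x)[idx], rcond=None)
    x += step
print("final residual norm:", np.linalg.norm(r(x)))
```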