Chernoff approximations are a flexible and powerful tool of functional analysis that can be used, in particular, to find approximate numerical solutions of some differential equations with variable coefficients. For many classes of equations such approximations have been constructed since the pioneering papers of Prof. O.G. Smolyanov in 2000; however, the speed of their convergence to the exact solution has not been properly studied. We select the heat equation (because its exact solutions are already known) as a simple yet informative model example for studying the rate of convergence of Chernoff approximations. In this paper we construct examples illustrating the rate of convergence of Chernoff approximations to the solution of the Cauchy problem for the heat equation. We show numerically that for sufficiently smooth initial conditions the order of approximation equals the order of Chernoff tangency of the Chernoff function used. We also consider insufficiently smooth initial conditions and show how the H\"older class of the initial condition is related to the rate of convergence. In the future, this method of study can be applied to general second-order parabolic equations with variable coefficients by a slight modification of our Python 3 code. This arXiv version of the text is supplementary material for our journal article. Here we include the full written text of the article and, additionally, all illustrations (Appendix A) and the full text of the Python 3 code (Appendix B).
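As a minimal illustration of the setup (a sketch of the general technique, not the code from Appendix B), one can iterate the classical averaging-of-shifts Chernoff function for the heat equation u_t = u_xx and watch the error decay as the number of compositions n grows:

```python
import math

def chernoff_step(f, dt):
    # One application of the Chernoff function S(dt):
    #   (S(dt) f)(x) = (f(x + sqrt(2 dt)) + f(x - sqrt(2 dt))) / 2,
    # which is Chernoff-tangent (of order 1) to the heat semigroup exp(t d^2/dx^2).
    h = math.sqrt(2.0 * dt)
    return lambda x: 0.5 * (f(x + h) + f(x - h))

def chernoff_approx(f, t, n):
    # n-th Chernoff approximation (S(t/n))^n f of the solution at time t.
    g = f
    for _ in range(n):
        g = chernoff_step(g, t / n)
    return g

# Model problem: u_t = u_xx, u(0, x) = sin(x); exact solution e^{-t} sin(x).
t, x = 1.0, 0.7
exact = math.exp(-t) * math.sin(x)
for n in (4, 8, 16):
    err = abs(chernoff_approx(math.sin, t, n)(x) - exact)
    print(n, err)
```

For this initial condition the n-th approximation equals cos(sqrt(2t/n))^n sin(x), so the error decays like O(1/n), consistent with first-order Chernoff tangency.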
Understanding quantum channels and the strange behavior of their capacities is a key objective of quantum information theory. Here we study a remarkably simple, low-dimensional, single-parameter family of quantum channels with exotic quantum information-theoretic features. As the simplest example from this family, we focus on a qutrit-to-qutrit channel that is intuitively obtained by hybridizing a simple degradable channel with a completely useless qubit channel. This hybridization makes the channel's capacities behave in a variety of interesting ways. For instance, the private and classical capacities of this channel coincide and can be calculated explicitly, even though the channel does not belong to any class for which the underlying information quantities are known to be additive. Moreover, the quantum capacity of the channel can be computed explicitly, provided that a clear and compelling conjecture is true. This "spin alignment conjecture," which may be of independent interest, is proved in certain special cases, and additional numerical evidence for its validity is provided. Finally, we generalize the qutrit channel in two ways, and the resulting channels and their capacities display similarly rich behavior. In the companion paper [Phys. Rev. Lett. 130, 200801 (2023); arXiv:2202.08377], we further show that the qutrit channel demonstrates superadditivity when transmitting quantum information jointly with a variety of assisting channels, in a manner not previously known.
We provide new false discovery proportion (FDP) confidence envelopes in several multiple testing settings relevant to modern high-dimensional data methods. We revisit the scenarios considered in the recent work of \cite{katsevich2020simultaneous} (top-$k$, preordered -- including knockoffs -- and online), with a particular emphasis on obtaining FDP bounds that have both non-asymptotic coverage and asymptotic consistency, i.e., that converge below the desired level $\alpha$ when applied to a classical $\alpha$-level false discovery rate (FDR) controlling procedure. In this way, we derive new bounds that improve over existing ones, both theoretically and practically, and are suitable for situations where at least a moderate number of rejections is expected. These improvements are illustrated with numerical experiments and real data examples. In particular, the improvement is significant in the knockoff setting, which demonstrates the method's practical impact. As side results, we introduce a new confidence envelope for the empirical cumulative distribution function of i.i.d. uniform variables, and we provide new power results in sparse cases, both of independent interest.
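The paper's new ECDF envelope is not reproduced here; as background, the classical Dvoretzky--Kiefer--Wolfowitz (DKW) envelope for the same object (the empirical CDF of i.i.d. uniform variables) can be sketched as follows (function names and parameters are ours, for illustration only):

```python
import numpy as np

def dkw_band_halfwidth(n, alpha=0.05):
    # Half-width of the classical DKW confidence band: with probability
    # >= 1 - alpha,  sup_t |F_n(t) - F(t)| <= sqrt(log(2/alpha) / (2n)).
    return np.sqrt(np.log(2.0 / alpha) / (2.0 * n))

def sup_ecdf_deviation(u):
    # sup_t |F_n(t) - t| for a sample u of i.i.d. Uniform(0,1) variables;
    # the supremum is attained at the order statistics, from above or below.
    t = np.sort(u)
    n = len(t)
    upper = np.arange(1, n + 1) / n - t   # F_n(t_i) - t_i
    lower = t - np.arange(0, n) / n       # t_i - F_n(t_i^-)
    return max(upper.max(), lower.max())

rng = np.random.default_rng(0)
n, alpha = 1000, 0.05
dev = sup_ecdf_deviation(rng.uniform(size=n))
print(dev, dkw_band_halfwidth(n, alpha))
```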
We consider the problem of testing the fit of a discrete sample of items from many categories to the uniform distribution over the categories. As the class of alternative hypotheses, we consider the removal of an $\ell_p$ ball of radius $\epsilon$ around the uniform rate sequence, for $p \leq 2$. We deliver a sharp characterization of the asymptotic minimax risk when $\epsilon \to 0$ as the number of samples and the number of dimensions go to infinity, for tests based on the histogram of occurrences (the number of absent categories, singletons, collisions, and so on). For example, for $p=1$ and in the limit of a small expected number of samples $n$ compared to the number of categories $N$ (the "sub-linear" regime), the minimax risk $R^*_\epsilon$ asymptotes to $2 \bar{\Phi}\left(n \epsilon^2/\sqrt{8N}\right)$, where $\bar{\Phi}(x)$ is the normal survival function. Empirical studies over a range of problem parameters show that this estimate is accurate in finite samples, and that our test is significantly better than the chi-squared test or a test that only uses collisions. Our analysis is based on the asymptotic normality of histogram ordinates, the equivalence between the minimax setting and a Bayesian one, and the reduction of a multi-dimensional optimization problem to a one-dimensional one.
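The quoted asymptotic risk formula is easy to evaluate numerically; below is a small sketch (the problem sizes are hypothetical, chosen only for illustration, not taken from the paper):

```python
import math

def minimax_risk_p1(n, N, eps):
    # Asymptotic minimax risk for p = 1 in the sub-linear regime (n << N),
    # as quoted above: R* ~ 2 * Phi_bar(n * eps^2 / sqrt(8 N)),
    # where Phi_bar is the standard normal survival function.
    x = n * eps**2 / math.sqrt(8.0 * N)
    phi_bar = 0.5 * math.erfc(x / math.sqrt(2.0))
    return 2.0 * phi_bar

# Risk decreases from 2 (indistinguishable) toward 0 as the signal eps grows:
for eps in (0.2, 0.5, 1.0):
    print(eps, minimax_risk_p1(n=5_000, N=1_000_000, eps=eps))
```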
Many stochastic continuous-state dynamical systems can be modeled as probabilistic programs with nonlinear non-polynomial updates in non-nested loops. We present two methods, one approximate and one exact, to automatically compute, without sampling, moment-based invariants for such probabilistic programs as closed-form solutions parameterized by the loop iteration. The exact method applies to probabilistic programs with trigonometric and exponential updates and is embedded in the Polar tool. The approximate method applies to any nonlinear random function, as it exploits the theory of polynomial chaos expansion to approximate non-polynomial updates by sums of orthogonal polynomials. This translates the dynamical system into a non-nested loop with polynomial updates, and thus renders it amenable to the Polar tool, which computes the moments of any order of the state variables. We evaluate our methods on an extensive set of examples, ranging from the modeling of monetary policy to several physical motion systems in uncertain environments. The experimental results demonstrate the advantages of our approach over the current state of the art.
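A minimal sketch of the polynomial chaos step described above, assuming a standard Gaussian input and probabilists' Hermite polynomials (this illustrates the general technique, not the Polar tool's implementation):

```python
import math
import numpy as np
from numpy.polynomial import hermite_e as He

def pce_coefficients(f, degree, quad_points=64):
    # Coefficients of f(X), X ~ N(0,1), in probabilists' Hermite polynomials:
    #   f(x) ~ sum_k c_k He_k(x),   c_k = E[f(X) He_k(X)] / k!
    # (using E[He_k(X)^2] = k!).  Gauss-HermiteE quadrature integrates against
    # the weight exp(-x^2/2); dividing by sqrt(2*pi) gives the Gaussian measure.
    x, w = He.hermegauss(quad_points)
    w = w / math.sqrt(2.0 * math.pi)
    fx = f(x)
    coeffs = []
    for k in range(degree + 1):
        basis_k = He.hermeval(x, [0.0] * k + [1.0])   # He_k at the nodes
        coeffs.append(np.sum(w * fx * basis_k) / math.factorial(k))
    return np.array(coeffs)

# Approximate the non-polynomial update cos(x) by a degree-8 polynomial:
c = pce_coefficients(np.cos, degree=8)
print(c[0], math.exp(-0.5))                # c_0 = E[cos X] = e^{-1/2}
print(He.hermeval(1.0, c), math.cos(1.0))  # pointwise accuracy near the mean
```

Replacing the update by this polynomial turns the loop body into a polynomial update, which is the form the moment-computation machinery consumes.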
In this paper, we revisit the backward Euler method for the numerical approximation of random periodic solutions of semilinear SDEs with additive noise. The existence and uniqueness of the random periodic solution are established as the limit of the pull-back flows of the SDE, under a condition relaxed compared to the literature. The backward Euler scheme is proved to converge with order one in the mean-square sense, which improves on the existing order-half convergence. Numerical examples are presented to verify our theoretical analysis.
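A hedged one-dimensional sketch of the pull-back construction with the drift-implicit (backward) Euler scheme; the scalar SDE and all parameters below are illustrative choices, not the paper's setting:

```python
import numpy as np

def backward_euler_pullback(x0, t_start, t_end, h, sigma, rng_seed=42):
    # Drift-implicit (backward) Euler for the semilinear scalar SDE
    #   dX_t = (-X_t + sin(2*pi*t)) dt + sigma dW_t      (drift has period 1).
    # The linear drift is treated implicitly; the noise is additive:
    #   X_{k+1} = (X_k + h*sin(2*pi*t_{k+1}) + sigma*dW_k) / (1 + h).
    rng = np.random.default_rng(rng_seed)   # same seed = same noise path
    n_steps = int(round((t_end - t_start) / h))
    x, t = x0, t_start
    for _ in range(n_steps):
        t += h
        dW = np.sqrt(h) * rng.standard_normal()
        x = (x + h * np.sin(2 * np.pi * t) + sigma * dW) / (1 + h)
    return x

# Pull-back: start far in the "past" from two different initial states but
# with the SAME noise path; the states at time 0 coincide, indicating a unique
# random (here: periodically forced, attracting) solution.
a = backward_euler_pullback(x0=5.0, t_start=-20.0, t_end=0.0, h=0.01, sigma=0.5)
b = backward_euler_pullback(x0=-5.0, t_start=-20.0, t_end=0.0, h=0.01, sigma=0.5)
print(abs(a - b))
```

Because the implicit step contracts at rate 1/(1+h) per step, the dependence on the initial condition decays geometrically along the pull-back, which is the mechanism behind the limit described in the abstract.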
In this work, we aim to construct numerical schemes for the Vlasov--Fokker--Planck system coupled with the Poisson or Amp\`ere equation that are as efficient as possible in terms of cost and conservation of invariants. Splitting methods are used, where the linear terms in space are treated by spectral or semi-Lagrangian methods, and the nonlinear diffusion in velocity in the collision operator is treated using a stabilized Runge--Kutta--Chebyshev (RKC) integrator, a powerful alternative to implicit schemes. The new schemes are shown to exactly preserve mass and momentum. Conservation of total energy is obtained using a suitable approximation of the electric field. An H-theorem is proved in the semi-discrete case, while the entropy decay is illustrated numerically for the fully discretized problem. Numerical experiments, including an investigation of the Landau damping phenomenon and the bump-on-tail instability, are performed to illustrate the efficiency of the new schemes.
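A sketch of the stabilization idea behind RKC, using the undamped first-order variant on a stiff linear test equation (production RKC codes add damping and second-order corrections; this minimal version is for illustration only):

```python
def rkc1_step(f, y, t, h, s):
    # One step of the undamped first-order Runge-Kutta-Chebyshev method with
    # s stages.  Its stability polynomial is T_s(1 + z/s^2), so the real
    # stability interval is [-2*s^2, 0]: it grows quadratically in s while
    # the cost (number of f evaluations) grows only linearly.
    y_prev, y_curr = y, y + (h / s**2) * f(t, y)
    for _ in range(2, s + 1):
        y_prev, y_curr = y_curr, 2.0 * y_curr - y_prev + (2.0 * h / s**2) * f(t, y_curr)
    return y_curr

# Stiff linear test problem y' = -100 y with h = 0.04, i.e. z = h*lambda = -4:
f = lambda t, y: -100.0 * y
y_rkc, y_fe = 1.0, 1.0
for k in range(50):
    y_rkc = rkc1_step(f, y_rkc, 0.04 * k, 0.04, s=3)  # |T_3(1 - 4/9)| < 1: stable
    y_fe = y_fe + 0.04 * f(0.0, y_fe)                  # forward Euler: |1 + z| = 3, blows up
print(abs(y_rkc), abs(y_fe))
```

With s = 1 the recurrence reduces to forward Euler; three stages already stabilize a step size far outside the forward Euler stability region, which is why RKC is attractive for the diffusive collision term.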
In this paper, we study the estimation of the derivative of a regression function in a standard univariate regression model. The estimators are defined either by differentiating nonparametric least-squares estimators of the regression function or by estimating the projection of the derivative. We prove two simple risk bounds that allow our estimators to be compared. More elaborate bounds under a stability assumption are then provided. The bases and spaces on which we illustrate our assumptions and first results are of both compact and non-compact type, and we discuss the rates attained by our estimators; they turn out to be optimal in the compact case. Lastly, we propose a model selection procedure and prove the associated risk bound. Considering bases with non-compact support makes the problem difficult.
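A minimal sketch of the first strategy (differentiating a least-squares estimator of the regression function), assuming a Chebyshev polynomial model space; the model, basis, and dimension below are illustrative choices, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(1)

# Standard univariate regression model: Y_i = g(X_i) + noise, g(x) = sin(2*pi*x).
n, sigma = 5000, 0.05
x = rng.uniform(0.0, 1.0, size=n)
y = np.sin(2.0 * np.pi * x) + sigma * rng.standard_normal(n)

# Fit g by least squares on a finite-dimensional (Chebyshev) model space,
# then differentiate the fitted function to estimate g'.
fit = np.polynomial.Chebyshev.fit(x, y, deg=9, domain=[0.0, 1.0])
deriv_hat = fit.deriv()

x0 = 0.3
print(deriv_hat(x0), 2.0 * np.pi * np.cos(2.0 * np.pi * x0))
```

As the abstract suggests, differentiation amplifies the estimation error: the fitted function is typically much closer to g than its derivative is to g'.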
Quadratic programming over orthogonal matrices encompasses a broad class of hard optimization problems that do not have an efficient quantum representation. Such problems are instances of the little noncommutative Grothendieck problem (LNCG), a generalization of binary quadratic programs to continuous, noncommutative variables. In this work, we establish a natural embedding of this class of LNCG problems into a fermionic Hamiltonian, thereby enabling the study of this classical problem with the tools of quantum information. The embedding is accomplished by identifying the orthogonal group with its double cover, which can be represented as fermionic quantum states. Correspondingly, the embedded LNCG Hamiltonian is a two-body fermion model. Determining extremal states of this Hamiltonian provides an outer approximation to the original problem, a quantum analogue of classical semidefinite relaxations. In particular, when optimizing over the special orthogonal group, our quantum relaxation obeys additional, powerful constraints based on the convex hull of rotation matrices. The classical size of this convex-hull representation is exponential in the matrix dimension, whereas our quantum representation requires only a linear number of qubits. Finally, to project the relaxed solution back into the feasible space, we propose rounding procedures that return orthogonal matrices from appropriate measurements of the quantum state. Through numerical experiments, we provide evidence that this rounded quantum relaxation can produce high-quality approximations.
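The rounding step has a classical counterpart that is easy to illustrate: projecting a relaxed (unconstrained) solution onto the (special) orthogonal group via the SVD/polar decomposition. This is a standard construction offered here for intuition, not the paper's measurement-based procedure:

```python
import numpy as np

def round_to_orthogonal(M, special=False):
    # Project (round) an arbitrary square matrix onto the orthogonal group:
    # the nearest orthogonal matrix in Frobenius norm is U @ Vt, where
    # M = U diag(s) Vt is the SVD.  With special=True the result is forced
    # into SO(d) (determinant +1) by flipping the least-significant singular
    # direction, which changes the product by the smallest possible amount.
    U, s, Vt = np.linalg.svd(M)
    R = U @ Vt
    if special and np.linalg.det(R) < 0:
        U[:, -1] *= -1.0
        R = U @ Vt
    return R

rng = np.random.default_rng(7)
M = rng.standard_normal((4, 4))     # stand-in for a relaxed solution
R = round_to_orthogonal(M, special=True)
print(np.allclose(R.T @ R, np.eye(4)), np.linalg.det(R))
```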
Topological Data Analysis is a growing area of data science which aims at computing and characterizing the geometry and topology of data sets, in order to produce useful descriptors for subsequent statistical and machine learning tasks. Its main computational tool is persistent homology, which amounts to tracking the topological changes in growing families of subsets of the data set, called filtrations, and encoding them in an algebraic object called a persistence module. Even though the algorithms and theoretical properties of persistence modules are now well understood in the single-parameter case, that is, when there is only one filtration to study, much less is known in the multi-parameter case, where several filtrations are given at once. Though more complicated, the resulting persistence modules are usually richer and encode more information, making them better descriptors for data science. In this article, we present the first approximation scheme for computing and decomposing general multi-parameter persistence modules; it is based on fibered barcodes and exact matchings, two constructions that stem from the theory of single-parameter persistence. Our algorithm has controlled complexity and running time, and works in arbitrary dimension, i.e., with an arbitrary number of filtrations. Moreover, when restricting to specific classes of multi-parameter persistence modules, namely those that can be decomposed into intervals, we establish theoretical results about the approximation error between our estimate and the true module, in terms of the interleaving distance. Finally, we present empirical evidence validating the output quality and the speed-up on several data sets.
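As background for readers new to the area, single-parameter persistent homology in degree zero reduces to a union-find computation over a filtered graph; this toy sketch (not the paper's multi-parameter algorithm) shows the bookkeeping:

```python
def h0_persistence(n_points, edges):
    # Zero-dimensional persistent homology of a filtered graph via union-find:
    # every point is born at time 0; when an edge (i, j, t) merges two distinct
    # connected components, one component dies at t (here all births coincide,
    # so the elder rule is trivial).  Returns the finite bars (birth, death);
    # one component survives forever.
    parent = list(range(n_points))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i

    bars = []
    for i, j, t in sorted(edges, key=lambda e: e[2]):
        ri, rj = find(i), find(j)
        if ri != rj:                         # a genuine merge: one bar dies at t
            parent[rj] = ri
            bars.append((0.0, t))
    return bars

# Three points on a line at positions 0, 1, 3, filtered by pairwise distance:
edges = [(0, 1, 1.0), (1, 2, 2.0), (0, 2, 3.0)]
print(h0_persistence(3, edges))   # [(0.0, 1.0), (0.0, 2.0)]
```

In the multi-parameter setting there is no such complete barcode invariant, which is what motivates the approximation scheme via fibered barcodes described above.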
A well-known boundary observability inequality for the elasticity system establishes that the energy of the system can be estimated from the solution on a sufficiently large part of the boundary, over a sufficiently long time. This inequality is relevant in different contexts, such as exact boundary controllability, boundary stabilization, and some inverse source problems. Here we show that a corresponding boundary observability inequality also holds for the spectral collocation approximation of the linear elasticity system in a d-dimensional cube, uniformly with respect to the discretization parameter. This property is essential for proving that natural numerical approaches to the above problems, based on replacing the elasticity system by its collocation discretization, yield successful approximations of their continuous counterparts.