Nonlinear differential equations model diverse phenomena but are notoriously difficult to solve. While there has been extensive previous work on efficient quantum algorithms for linear differential equations, the linearity of quantum mechanics has limited analogous progress for the nonlinear case. Despite this obstacle, we develop a quantum algorithm for dissipative quadratic $n$-dimensional ordinary differential equations. Assuming $R < 1$, where $R$ is a parameter characterizing the ratio of the nonlinearity and forcing to the linear dissipation, this algorithm has complexity $T^2 q~\mathrm{poly}(\log T, \log n, \log 1/\epsilon)/\epsilon$, where $T$ is the evolution time, $\epsilon$ is the allowed error, and $q$ measures decay of the solution. This is an exponential improvement over the best previous quantum algorithms, whose complexity is exponential in $T$. While exponential decay precludes efficiency, driven equations can avoid this issue despite the presence of dissipation. Our algorithm uses the method of Carleman linearization, for which we give a novel convergence theorem. This method maps a system of nonlinear differential equations to an infinite-dimensional system of linear differential equations, which we discretize, truncate, and solve using the forward Euler method and the quantum linear system algorithm. We also provide a lower bound on the worst-case complexity of quantum algorithms for general quadratic differential equations, showing that the problem is intractable for $R \ge \sqrt{2}$. Finally, we discuss potential applications, showing that the $R < 1$ condition can be satisfied in realistic epidemiological models and giving numerical evidence that the method may describe a model of fluid dynamics even for larger values of $R$.
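As a concrete illustration of the Carleman construction described above, the following minimal classical sketch (Python with NumPy; the scalar ODE $du/dt = au + bu^2$, the truncation order, and all parameter values are illustrative choices, not the paper's) builds the truncated linear system in the variables $y_k = u^k$ and integrates it with forward Euler alongside the original nonlinear equation:

import numpy as np

a, b = -2.0, 0.5          # dissipative linear term and quadratic term (|b*u0/a| < 1)
N = 8                     # Carleman truncation order
A = np.zeros((N, N))      # d y_k/dt = k*a*y_k + k*b*y_{k+1}, with y_{N+1} set to 0
for k in range(1, N + 1):
    A[k - 1, k - 1] = k * a
    if k < N:
        A[k - 1, k] = k * b

u0, T, steps = 0.8, 2.0, 2000
h = T / steps
y = np.array([u0 ** k for k in range(1, N + 1)])   # y_k = u^k at t = 0
u = u0
for _ in range(steps):
    y = y + h * (A @ y)                 # forward Euler on the truncated linear system
    u = u + h * (a * u + b * u * u)     # forward Euler on the nonlinear ODE, for reference
print("Carleman u(T):", y[0], "  direct u(T):", u)

In the quantum algorithm, the forward Euler discretization of this (much larger, $n$-dimensional tensor-power) linear system is solved with the quantum linear system algorithm rather than integrated explicitly.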
We study the mathematical structure of the solution set (and its tangent space) to the matrix equation $G^*JG=J$ for a given square matrix $J$. In the language of pure mathematics, this is a Lie group: the isometry group of a bilinear (or sesquilinear) form. Generally these groups are described as intersections of a few special groups. The tangent space to $\{G: G^*JG=J \}$ consists of solutions to the linear matrix equation $X^*J+JX=0$. For the complex case, the solution set of this linear equation was computed by De Ter{\'a}n and Dopico. On its own, the equation $X^*J+JX=0$ is hard to solve. By bringing in the complementary linear equation $X^*J-JX=0$, we find that the complexity is reduced rather than increased: not only can we solve the original problem, but we can also approach the broader algebraic and geometric structure. One implication is that the two equations form an $\mathfrak{h}$ and $\mathfrak{m}$ pair familiar in the study of pseudo-Riemannian symmetric spaces. We explicitly demonstrate the computation of the solutions to the equation $X^*J\pm JX=0$ for real and complex matrices. Moreover, any real, complex, or quaternionic case with an arbitrary involution (e.g., transpose, conjugate transpose, and the various quaternion transposes) can be solved effectively with the same strategy. We provide numerical examples and visualizations.
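A minimal numerical sketch of the strategy for the real transpose case (the matrix $J$, its size, and the dense nullspace computation are illustrative choices): vectorizing $X^TJ \pm JX = 0$ with Kronecker products and the commutation matrix reduces each equation to a nullspace computation, and the dimensions of the two solution spaces sum to $n^2$, reflecting the $\mathfrak{h}$ and $\mathfrak{m}$ decomposition:

import numpy as np
from scipy.linalg import null_space

n = 4
J = np.diag([1.0, 1.0, -1.0, -1.0])      # an illustrative signature matrix
I = np.eye(n)
K = np.zeros((n * n, n * n))             # commutation matrix: K vec(X) = vec(X^T)
for i in range(n):
    for j in range(n):
        K[i * n + j, j * n + i] = 1.0

for sign, name in [(+1.0, "X^T J + J X = 0"), (-1.0, "X^T J - J X = 0")]:
    # vec(X^T J) = (J^T kron I) K vec(X)  and  vec(J X) = (I kron J) vec(X)
    M = np.kron(J.T, I) @ K + sign * np.kron(I, J)
    basis = null_space(M)
    X = basis[:, 0].reshape(n, n, order='F')
    print(name, "| dimension:", basis.shape[1],
          "| residual:", np.linalg.norm(X.T @ J + sign * J @ X))

For this $J$, the two dimensions are $6$ and $10$, summing to $n^2 = 16$.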
We construct deep operator networks (ONets) between infinite-dimensional spaces that emulate the coefficient-to-solution map of elliptic second-order PDEs with an exponential rate of convergence. In particular, we consider problems set in $d$-dimensional periodic domains, $d=1, 2, \dots$, with analytic right-hand sides and coefficients. Our analysis covers diffusion-reaction problems, parametric diffusion equations, and elliptic systems such as linear isotropic elastostatics in heterogeneous materials. We leverage the exponential convergence of spectral collocation methods for boundary value problems whose solutions are analytic; in the present periodic and analytic setting, this follows from classical elliptic regularity. Within the ONet branch and trunk construction of [Chen and Chen, 1993] and of [Lu et al., 2021], we show the existence of deep ONets which emulate the coefficient-to-solution map to accuracy $\varepsilon>0$ in the $H^1$ norm, uniformly over the coefficient set. We prove that the neural networks in the ONet have size $\mathcal{O}(\left|\log(\varepsilon)\right|^\kappa)$ for some $\kappa>0$ depending on the physical space dimension.
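The branch-trunk structure referenced above can be sketched in a few lines (Python with NumPy; the widths, sensor count, and untrained random weights are illustrative, and no claim is made about the networks whose existence the paper proves). The output functional is the inner product of a branch network, evaluated at sensor samples of the coefficient, and a trunk network, evaluated at the query point:

import numpy as np

rng = np.random.default_rng(0)
m, p, width = 32, 16, 64          # sensors, branch/trunk output dim, hidden width

def init(sizes):
    return [(rng.standard_normal((o, i)) / np.sqrt(i), np.zeros(o))
            for i, o in zip(sizes[:-1], sizes[1:])]

def mlp(params, x):
    # simple tanh MLP; params is a list of (weight, bias) pairs
    for Wb in params[:-1]:
        x = np.tanh(Wb[0] @ x + Wb[1])
    W, bias = params[-1]
    return W @ x + bias

branch = init([m, width, width, p])   # encodes the coefficient a at m sensor points
trunk = init([1, width, width, p])    # encodes the query point y

def onet(a_sensors, y):
    # G(a)(y) ~ sum_k b_k(a) * t_k(y): the branch-trunk inner product
    return mlp(branch, a_sensors) @ mlp(trunk, np.atleast_1d(float(y)))

xs = np.linspace(0.0, 1.0, m)
a_sensors = np.sin(2 * np.pi * xs)    # one (untrained) input coefficient sample
print(onet(a_sensors, 0.3))

Training such a network, and the exponential expression rates proved in the paper, are separate matters; the sketch only shows the architecture $G(a)(y) \approx \sum_{k=1}^{p} b_k(a)\, t_k(y)$.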
Group synchronization refers to estimating a collection of group elements from noisy pairwise measurements. This nonconvex problem has received much attention from numerous scientific fields, including computer vision, robotics, and cryo-electron microscopy. In this paper, we focus on the orthogonal group synchronization problem with general additive noise models under incomplete measurements, which is much more general than the commonly considered setting of complete measurements. We characterize the orthogonal group synchronization problem from the perspectives of optimality conditions and of fixed points of the projected gradient ascent method, also known as the generalized power method (GPM). Notably, these results hold even without generative models. In addition, we derive the local error bound property for the orthogonal group synchronization problem, which is useful for the convergence rate analysis of different algorithms and can be of independent interest. Finally, we prove linear convergence of the GPM to a global maximizer under a general additive noise model, based on the established local error bound property. Our theoretical convergence result holds under several deterministic conditions, which can cover certain cases with adversarial noise, and as an example we specialize it to the setting of the Erd\H{o}s-R\'enyi measurement graph and Gaussian noise.
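The GPM iteration itself is short; the following sketch (Python with NumPy) runs it on a synthetic instance with complete measurements and Gaussian noise, all sizes and noise levels being illustrative (the paper's setting also covers incomplete measurements over a graph):

import numpy as np

rng = np.random.default_rng(1)
n, d, sigma = 20, 3, 0.3

def polar(A):
    # nearest orthogonal matrix: U V^T from the SVD A = U S V^T
    U, _, Vt = np.linalg.svd(A)
    return U @ Vt

def project_blocks(Y):
    # project each d x d block of the (n d) x d matrix Y onto O(d)
    return np.vstack([polar(Y[i * d:(i + 1) * d]) for i in range(n)])

O_true = project_blocks(rng.standard_normal((n * d, d)))   # ground-truth O_i, stacked
C = O_true @ O_true.T + sigma * rng.standard_normal((n * d, n * d))
C = (C + C.T) / 2                                          # symmetric noisy measurements

X = project_blocks(rng.standard_normal((n * d, d)))        # random initialization
for _ in range(50):
    X = project_blocks(C @ X)                              # generalized power iteration

R = polar(O_true.T @ X)                                    # best global rotation
print("per-block alignment error:", np.linalg.norm(X - O_true @ R) / np.sqrt(n))

Each iteration multiplies by the measurement matrix and projects every $d \times d$ block back onto $O(d)$ via its polar factor, which is exactly the projected gradient ascent step.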
Our study aims to specify the asymptotic error distribution in the discretization of a stochastic Volterra equation with a fractional kernel. It is well known that for a standard stochastic differential equation, the discretization error, normalized by its rate of convergence $1/\sqrt{n}$, converges in law to the solution of a certain linear equation. Analogously, we show that a suitably normalized discretization error of the Volterra equation converges in law to the solution of a certain linear Volterra equation with the same fractional kernel.
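For intuition about the scheme being analyzed, here is a left-point Euler discretization of a toy scalar Volterra equation $X_t = x_0 + \int_0^t K(t-s)\,b(X_s)\,ds + \int_0^t K(t-s)\,\sigma(X_s)\,dW_s$ with fractional kernel $K(u) = u^{H-1/2}/\Gamma(H+1/2)$ (Python with NumPy; the drift, diffusion, and exponent are illustrative, and this need not match the paper's exact scheme):

import numpy as np
from math import gamma

rng = np.random.default_rng(2)
H, T, n = 0.3, 1.0, 1000              # kernel exponent, horizon, number of steps
dt = T / n
t = np.linspace(0.0, T, n + 1)
b = lambda x: -x                      # illustrative drift
sigma = lambda x: 0.5 + 0.0 * x       # illustrative (constant) diffusion
K = lambda u: u ** (H - 0.5) / gamma(H + 0.5)   # fractional kernel

dW = np.sqrt(dt) * rng.standard_normal(n)
X = np.empty(n + 1)
X[0] = 1.0
for k in range(1, n + 1):
    j = np.arange(k)
    w = K(t[k] - t[j])                # kernel weights; arguments are >= dt, so finite
    X[k] = X[0] + np.sum(w * (b(X[j]) * dt + sigma(X[j]) * dW[j]))
print("X(T) =", X[-1])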
We consider the upper confidence bound strategy for Gaussian multi-armed bandits with known control horizon size $N$ and build its limiting description as a system of stochastic and ordinary differential equations. The rewards of the arms are assumed to have unknown expected values and known variances. To verify the validity of the obtained description, we performed a set of Monte Carlo simulations for the case of close reward distributions, in which the mean rewards differ by a quantity of order $N^{-1/2}$, since this case yields the highest normalized regret. We also estimated the minimal control horizon size for which the normalized regret is not noticeably larger than its maximum possible value.
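A minimal simulation sketch of this setting (Python with NumPy; the UCB index formula and the normalization below are one common variant, not necessarily the exact strategy analyzed in the paper):

import numpy as np

rng = np.random.default_rng(3)
N = 10_000                                   # known control horizon
mu = np.array([0.0, 2.0 / np.sqrt(N)])       # close arms: mean gap of order N^{-1/2}
var = np.array([1.0, 1.0])                   # known variances

counts = np.ones(2)                          # pull each arm once to initialize
sums = np.array([rng.normal(mu[a], np.sqrt(var[a])) for a in range(2)])
for t in range(2, N):
    ucb = sums / counts + np.sqrt(2 * var * np.log(t) / counts)  # UCB index
    arm = int(np.argmax(ucb))
    sums[arm] += rng.normal(mu[arm], np.sqrt(var[arm]))
    counts[arm] += 1

regret = mu.max() * N - mu @ counts          # pseudo-regret over the horizon
print("normalized regret:", regret / np.sqrt(N * var.max()))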
Even the most carefully curated economic data sets have variables that are noisy, missing, discretized, or privatized. The standard workflow for empirical research involves data cleaning followed by data analysis that typically ignores the bias and variance consequences of data cleaning. We formulate a semiparametric model for causal inference with corrupted data to encompass both data cleaning and data analysis. We propose a new end-to-end procedure for data cleaning, estimation, and inference with data cleaning-adjusted confidence intervals. We prove consistency, Gaussian approximation, and semiparametric efficiency for our estimator of the causal parameter by finite sample arguments. The rate of Gaussian approximation is $n^{-1/2}$ for global parameters such as average treatment effect, and it degrades gracefully for local parameters such as heterogeneous treatment effect for a specific demographic. Our key assumption is that the true covariates are approximately low rank. In our analysis, we provide nonasymptotic theoretical contributions to matrix completion, statistical learning, and semiparametric statistics. We verify the coverage of the data cleaning-adjusted confidence intervals in simulations calibrated to resemble differential privacy as implemented in the 2020 US Census.
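A toy sketch of the two stages of such a pipeline, cleaning via a rank truncation of the corrupted covariates followed by a downstream causal estimate (Python with NumPy; the rescaling-plus-truncated-SVD cleaner and the naive outcome regression are illustrative stand-ins, not the paper's estimator or its cleaning-adjusted confidence intervals):

import numpy as np

rng = np.random.default_rng(4)
n, p, r = 500, 20, 3
X_true = rng.standard_normal((n, r)) @ rng.standard_normal((r, p))  # low-rank covariates
mask = rng.random((n, p)) > 0.1                     # ~10% of entries missing
X_obs = np.where(mask, X_true + rng.standard_normal((n, p)), 0.0)   # noisy, incomplete

# stage 1 ("cleaning"): rescale for missingness, then truncate the SVD at rank r
u, s, vt = np.linalg.svd(X_obs / mask.mean(), full_matrices=False)
X_hat = u[:, :r] @ np.diag(s[:r]) @ vt[:r]

# stage 2 ("analysis"): naive outcome regression for the ATE on cleaned covariates
D = rng.binomial(1, 0.5, n)                         # randomized treatment
Y = 2.0 * D + 0.1 * X_true @ rng.standard_normal(p) + rng.standard_normal(n)
Z = np.column_stack([np.ones(n), D, X_hat])
beta = np.linalg.lstsq(Z, Y, rcond=None)[0]
print("ATE estimate (true effect 2.0):", beta[1])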
The eigenvalue density of a matrix plays an important role in various types of scientific computing, such as electronic-structure calculations. In this paper, we propose a quantum algorithm for computing the eigenvalue density in a given interval. Our quantum algorithm is based on a method that approximates the eigenvalue counts by combining a numerical contour integral with a stochastic trace estimator applied to resolvent matrices. As components of our algorithm, the HHL solver is applied to an augmented linear system of the resolvent matrices, and the quantum Fourier transform (QFT) is adopted to represent the operation of the numerical contour integral. To reduce the size of the augmented system, we exploit a certain symmetry of the numerical integration. We also introduce a permutation formed by CNOT gates to make the augmented system solution consistent with the QFT input. The eigenvalue count in a given interval is derived as the probability of observing a bit pattern in a fraction of the qubits of the output state.
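Classically, the quantity the algorithm encodes can be computed as the contour integral $\nu = \frac{1}{2\pi i}\oint \mathrm{tr}\,(zI - A)^{-1}\,dz$ approximated by quadrature, with the trace estimated stochastically. The following sketch (Python with NumPy; the matrix, interval, and sample counts are illustrative) mirrors these two ingredients, with the resolvent solves standing in for the HHL component:

import numpy as np

rng = np.random.default_rng(5)
n = 60
A = np.diag(np.sort(rng.uniform(0.0, 10.0, n)))   # Hermitian test matrix
a, b = 2.0, 5.0                                    # count eigenvalues in [a, b]
c, rho = (a + b) / 2, (b - a) / 2                  # circular contour through a and b
q, s = 64, 40                                      # quadrature nodes, trace samples

count = 0.0
for theta in 2 * np.pi * (np.arange(q) + 0.5) / q:
    z = c + rho * np.exp(1j * theta)
    R = np.linalg.inv(z * np.eye(n) - A)           # resolvent (HHL's task quantumly)
    # Hutchinson estimator: tr(R) ~ average of v^T R v over random sign vectors
    tr = np.mean([v @ (R @ v) for v in rng.choice([-1.0, 1.0], (s, n))])
    count += (rho / q) * (np.exp(1j * theta) * tr).real
print("estimated count:", count)
print("exact count:", int(np.sum((np.diag(A) >= a) & (np.diag(A) <= b))))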
This article is concerned with the nonconforming finite element method for distributed elliptic optimal control problems with pointwise constraints on the control and on the gradient of the state variable. We reduce the minimization problem to a pure state-constrained minimization problem, whose solution can be characterized as a fourth-order elliptic variational inequality of the first kind. To discretize the control problem, we use the bubble-enriched Morley finite element method. To ensure the existence of a solution to the discrete problem, three bubble functions corresponding to the edge means are added to the discrete space. We derive an error estimate for the state variable in an $H^2$-type energy norm. Numerical results are presented to illustrate our analytical findings.
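As a rough one-dimensional analogue of such a fourth-order variational inequality (the paper's problem is posed in 2D and discretized with bubble-enriched Morley elements, which this sketch does not attempt): a discrete biharmonic energy minimized subject to a pointwise bound, solved here with an off-the-shelf bound-constrained optimizer (Python with NumPy/SciPy; the stencil, load, and obstacle are illustrative):

import numpy as np
from scipy.optimize import minimize

n = 60
h = 1.0 / (n + 1)
# discrete 1D biharmonic operator (stencil [1, -4, 6, -4, 1] / h^4)
A = (6.0 * np.eye(n) - 4.0 * np.eye(n, k=1) - 4.0 * np.eye(n, k=-1)
     + np.eye(n, k=2) + np.eye(n, k=-2)) / h ** 4
f = np.full(n, 500.0)                  # uniform load
psi = np.full(n, 0.002)                # obstacle: enforce u <= psi pointwise

energy = lambda u: 0.5 * u @ (A @ u) - f @ u
grad = lambda u: A @ u - f
res = minimize(energy, np.zeros(n), jac=grad, method="L-BFGS-B",
               bounds=[(None, pi) for pi in psi])
u = res.x
print("max displacement:", u.max())
print("nodes in contact with the obstacle:", int(np.sum(u > psi - 1e-9)))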
We introduce a new family of deep neural network models. Instead of specifying a discrete sequence of hidden layers, we parameterize the derivative of the hidden state using a neural network. The output of the network is computed using a black-box differential equation solver. These continuous-depth models have constant memory cost, adapt their evaluation strategy to each input, and can explicitly trade numerical precision for speed. We demonstrate these properties in continuous-depth residual networks and continuous-time latent variable models. We also construct continuous normalizing flows, a generative model that can be trained by maximum likelihood, without partitioning or ordering the data dimensions. For training, we show how to scalably backpropagate through any ODE solver, without access to its internal operations. This allows end-to-end training of ODEs within larger models.
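A minimal forward pass of such a model (Python with NumPy; the single tanh layer, fixed-step RK4 solver, and untrained weights are illustrative, and the adjoint-based backpropagation is not shown): the hidden state is obtained by integrating a learned vector field rather than applying a fixed stack of layers:

import numpy as np

rng = np.random.default_rng(6)
dim = 4
W = 0.5 * rng.standard_normal((dim, dim))
c = 0.1 * rng.standard_normal(dim)

def f(h, t):
    # the parameterized derivative of the hidden state (a single tanh layer here)
    return np.tanh(W @ h + c)

def odeint_rk4(f, h0, t0, t1, steps=100):
    # fixed-step RK4 standing in for the black-box (adaptive) ODE solver
    h, t = h0, t0
    dt = (t1 - t0) / steps
    for _ in range(steps):
        k1 = f(h, t)
        k2 = f(h + dt / 2 * k1, t + dt / 2)
        k3 = f(h + dt / 2 * k2, t + dt / 2)
        k4 = f(h + dt * k3, t + dt)
        h = h + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += dt
    return h

h0 = rng.standard_normal(dim)                     # input-layer state
print("h(1) =", odeint_rk4(f, h0, 0.0, 1.0))      # continuous-depth forward pass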
In this paper, we study the optimal convergence rate for distributed convex optimization problems in networks. We model the communication restrictions imposed by the network as a set of affine constraints and provide optimal complexity bounds for four different setups, namely where the function $F(\mathbf{x}) \triangleq \sum_{i=1}^{m} f_i(\mathbf{x})$ is: strongly convex and smooth, strongly convex only, smooth only, or just convex. Our results show that Nesterov's accelerated gradient descent on the dual problem can be executed in a distributed manner and obtains the same optimal rates as in the centralized version of the problem (up to constant or logarithmic factors), with an additional cost related to the spectral gap of the interaction matrix. Finally, we discuss some extensions of the proposed setup, such as proximal-friendly functions, time-varying graphs, and improvement of the condition numbers.
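A sketch of the dual approach on a toy consensus problem (Python with NumPy; the quadratic local objectives $f_i(x) = \frac{1}{2}(x - a_i)^2$, the ring graph, and plain Nesterov momentum are illustrative choices): each iteration requires one multiplication by the graph matrix $W$, i.e., one round of local communication:

import numpy as np

rng = np.random.default_rng(7)
m = 8
a = rng.standard_normal(m)                # local data: f_i(x) = (x - a_i)^2 / 2

# ring-graph Laplacian; multiplying by W is one communication round
P = np.roll(np.eye(m), 1, axis=0)
W = 2 * np.eye(m) - P - P.T
eta = 1.0 / np.linalg.eigvalsh(W)[-1] ** 2   # 1 / L for the dual gradient

lam = np.zeros(m)
lam_prev = np.zeros(m)
for k in range(1, 500):
    mu = lam + (k - 1) / (k + 2) * (lam - lam_prev)   # Nesterov extrapolation
    x = a - W @ mu                 # local primal minimizers given the dual variable
    lam_prev, lam = lam, mu + eta * (W @ x)           # dual gradient ascent step
x = a - W @ lam
print("consensus iterates:", x)
print("optimum (mean of a):", a.mean())

The iterates converge to the consensus optimum, the mean of the $a_i$, at the accelerated $O(1/k^2)$ dual rate.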