Scattered data fitting is a frequently encountered problem of reconstructing an unknown function from given scattered data. Radial basis function (RBF) methods have proven to be highly effective for this problem. We describe two quantum algorithms to efficiently fit scattered data, based on globally supported and compactly supported RBFs, respectively. For the globally supported RBF method, the core of the quantum algorithm relies on using coherent states to calculate the radial functions and on a nonsparse matrix exponentiation technique for efficiently performing a matrix inversion. A quadratic speedup over the classical algorithms is achieved in the number of data points. For the compactly supported RBF method, we mainly use the HHL algorithm as a subroutine to design an efficient quantum procedure that runs in time logarithmic in the number of data points, achieving an exponential improvement over the classical methods.
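For orientation, the classical problem being accelerated is a linear interpolation system in the chosen radial kernel. The sketch below is a purely classical reference implementation (not the quantum procedure): it fits 1-D scattered data with a globally supported Gaussian kernel and a compactly supported Wendland $C^2$ kernel; the kernel parameters and the toy target function are illustrative.

```python
import numpy as np

def gaussian_rbf(r, eps=10.0):
    """Globally supported Gaussian kernel phi(r) = exp(-(eps*r)^2)."""
    return np.exp(-(eps * r) ** 2)

def wendland_c2_rbf(r, delta=0.5):
    """Compactly supported Wendland C^2 kernel, zero for r >= delta."""
    s = np.clip(r / delta, 0.0, 1.0)
    return (1.0 - s) ** 4 * (4.0 * s + 1.0)

def rbf_fit(centers, values, kernel):
    """Solve the interpolation system A c = f with A_ij = kernel(|x_i - x_j|)."""
    A = kernel(np.abs(centers[:, None] - centers[None, :]))
    return np.linalg.solve(A, values)

def rbf_eval(x, centers, coeffs, kernel):
    """Evaluate the fitted RBF expansion at the points x."""
    return kernel(np.abs(x[:, None] - centers[None, :])) @ coeffs

# toy 1-D example: recover f(x) = sin(2*pi*x) from 30 scattered samples
rng = np.random.default_rng(0)
centers = np.sort(rng.uniform(0.0, 1.0, 30))
values = np.sin(2 * np.pi * centers)
x = np.linspace(0.0, 1.0, 200)
for kernel in (gaussian_rbf, wendland_c2_rbf):
    c = rbf_fit(centers, values, kernel)
    err = np.max(np.abs(rbf_eval(x, centers, c, kernel) - np.sin(2 * np.pi * x)))
    print(kernel.__name__, err)
```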
Let $\mathcal{G}$ be a directed graph with vertices $1,2,\ldots, 2N$. Let $\mathcal{T}=(T_{i,j})_{(i,j)\in\mathcal{G}}$ be a family of contractive similitudes. For every $1\leq i\leq N$, let $i^+:=i+N$. For $1\leq i,j\leq N$, we define $\mathcal{M}_{i,j}=\{(i,j),(i,j^+),(i^+,j),(i^+,j^+)\}\cap\mathcal{G}$. We assume that $T_{\widetilde{i},\widetilde{j}}=T_{i,j}$ for every $(\widetilde{i},\widetilde{j})\in \mathcal{M}_{i,j}$. Let $K$ denote the Mauldin-Williams fractal determined by $\mathcal{T}$. Let $\chi=(\chi_i)_{i=1}^{2N}$ be a positive probability vector and $P$ a row-stochastic matrix which serves as an incidence matrix for $\mathcal{G}$. We denote by $\nu$ the Markov-type measure associated with $\chi$ and $P$. Let $\Omega=\{1,\ldots,2N\}$ and $G_\infty=\{\sigma\in\Omega^{\mathbb{N}}:(\sigma_i,\sigma_{i+1})\in\mathcal{G}, \;i\geq 1\}$. Let $\pi$ be the natural projection from $G_\infty$ to $K$ and $\mu=\nu\circ\pi^{-1}$. We consider the following two cases: (1) $\mathcal{G}$ has two strongly connected components, each consisting of $N$ vertices; (2) $\mathcal{G}$ is strongly connected. Under some assumptions on $\mathcal{G}$ and $\mathcal{T}$, for case (1), we determine the exact value $s_r$ of the quantization dimension $D_r(\mu)$ of $\mu$ and prove that the $s_r$-dimensional lower quantization coefficient is always positive, while the upper one can be infinite; we establish a necessary and sufficient condition for the upper quantization coefficient of $\mu$ to be finite; for case (2), we determine $D_r(\mu)$ in terms of a pressure-like function and prove that the $D_r(\mu)$-dimensional upper and lower quantization coefficients are both positive and finite.
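For reference, the quantities appearing above are the standard ones from quantization theory (following Graf and Luschgy); the notation here may differ slightly from the paper's. The $n$-th quantization error of order $r$ is $$e_{n,r}(\mu)=\inf\Big\{\Big(\int \min_{a\in\alpha}|x-a|^{r}\,d\mu(x)\Big)^{1/r}:\alpha\subset\mathbb{R}^{d},\ \#\alpha\leq n\Big\},$$ the quantization dimension is $D_r(\mu)=\lim_{n\to\infty}\frac{\log n}{-\log e_{n,r}(\mu)}$ (with $\limsup$ and $\liminf$ giving the upper and lower versions), and the $s$-dimensional upper and lower quantization coefficients are $\limsup_{n\to\infty} n^{r/s}e_{n,r}^{r}(\mu)$ and $\liminf_{n\to\infty} n^{r/s}e_{n,r}^{r}(\mu)$.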
In this work, we study the performance of complex-valued data detection in massive multiple-input multiple-output (MIMO) systems. We focus on the problem of recovering an $n$-dimensional signal whose entries are drawn from an arbitrary constellation $\mathcal{K} \subset \mathbb{C}$ from $m$ noisy linear measurements, with an independent and identically distributed (i.i.d.) complex Gaussian channel. Since the optimal maximum likelihood (ML) detector is computationally prohibitive for large dimensions, many heuristic convex-relaxation methods have been proposed to solve the detection problem. In this paper, we consider a regularized version of this convex relaxation, which we call the regularized convex relaxation (RCR) detector, and derive sharp asymptotic expressions for its mean square error and symbol error probability. Monte Carlo simulations are provided to validate the derived analytical results.
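The paper specifies the exact form of the RCR detector and its analysis; the following is only a minimal sketch of the general idea for a real-valued constellation: ridge-regularized least squares over the box hull of the constellation, solved by projected gradient descent and followed by entrywise rounding to the nearest symbol. All parameter choices are illustrative.

```python
import numpy as np

def rcr_detect(y, H, symbols, lam=0.1, iters=500):
    """Sketch of a regularized convex-relaxation detector: minimize
    0.5*||y - H x||^2 + 0.5*lam*||x||^2 over the box hull of a real
    constellation via projected gradient descent, then round each entry
    to the nearest symbol.  Parameters are illustrative."""
    symbols = np.asarray(symbols, dtype=float)
    lo, hi = symbols.min(), symbols.max()
    x = np.zeros(H.shape[1])
    L = np.linalg.norm(H, 2) ** 2 + lam          # Lipschitz constant of the gradient
    for _ in range(iters):
        grad = H.T @ (H @ x - y) + lam * x
        x = np.clip(x - grad / L, lo, hi)        # gradient step + box projection
    return symbols[np.argmin(np.abs(x[:, None] - symbols[None, :]), axis=1)]

# toy example: BPSK symbols through an i.i.d. Gaussian channel
rng = np.random.default_rng(1)
n, m = 64, 128
x_true = rng.choice([-1.0, 1.0], size=n)
H = rng.normal(size=(m, n)) / np.sqrt(m)
y = H @ x_true + 0.1 * rng.normal(size=m)
x_hat = rcr_detect(y, H, symbols=[-1.0, 1.0])
print("symbol error rate:", np.mean(x_hat != x_true))
```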
In this paper, we consider two fundamental symmetric kernels in linear algebra: the Cholesky factorization and the symmetric rank-$k$ update (SYRK), with the classical three-nested-loop algorithms for these kernels. In addition, we consider a machine model with a fast memory of size $S$ and an unbounded slow memory. In this model, all computations must be performed on operands in fast memory, and the goal is to minimize the amount of communication between slow and fast memories. As the set of computations is fixed by the choice of the algorithm, only the ordering of the computations (the schedule) directly influences the volume of communication. We prove lower bounds of $\frac{1}{3\sqrt{2}}\frac{N^3}{\sqrt{S}}$ for the communication volume of the Cholesky factorization of an $N\times N$ symmetric positive definite matrix, and of $\frac{1}{\sqrt{2}}\frac{N^2M}{\sqrt{S}}$ for the SYRK computation of $\mathbf{A}\cdot\mathbf{A}^{T}$, where $\mathbf{A}$ is an $N\times M$ matrix. Both bounds improve the best known lower bounds from the literature by a factor $\sqrt{2}$. In addition, we present two out-of-core, sequential algorithms with matching communication volume: TBS for SYRK, with a volume of $\frac{1}{\sqrt{2}}\frac{N^2M}{\sqrt{S}} + O(NM\log N)$, and LBC for Cholesky, with a volume of $\frac{1}{3\sqrt{2}}\frac{N^3}{\sqrt{S}} + O(N^{5/2})$. Both algorithms improve over the best known algorithms from the literature by a factor $\sqrt{2}$ and show that the leading terms in our lower bounds cannot be improved further. This work shows that the operational intensity of symmetric kernels such as SYRK or Cholesky is intrinsically higher (by a factor $\sqrt{2}$) than that of the corresponding non-symmetric kernels (GEMM and LU factorization).
Q-learning is a widely used algorithm in the reinforcement learning community. In the lookup-table (tabular) setting, its convergence is well established. However, its behavior is known to be unstable when linear function approximation is used. This paper develops a new Q-learning algorithm that converges with linear function approximation. We prove that simply adding an appropriate regularization term ensures convergence of the algorithm, and we establish its stability using a recent analysis tool based on switching system models. Moreover, we experimentally show that it converges in environments where Q-learning with linear function approximation is known to diverge. We also provide an error bound on the solution to which the algorithm converges.
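As a rough illustration of the idea described above (not the paper's exact algorithm or regularizer), the sketch below runs semi-gradient Q-learning with a linear parameterization $Q(s,a)=w_a^\top\phi(s)$ and an added $\ell_2$ regularization term that pulls the weights toward zero; the toy chain environment and all hyperparameters are made up for the example.

```python
import numpy as np

def regularized_q_learning(env_step, reset, features, n_actions,
                           eta=1e-2, alpha=0.05, gamma=0.95,
                           epsilon=0.1, steps=20_000, seed=0):
    """Semi-gradient Q-learning with Q(s, a) = w[a] @ phi(s) and an added
    l2 regularization term (weight eta) in the update.  A minimal sketch;
    the paper's exact algorithm and regularizer may differ."""
    rng = np.random.default_rng(seed)
    d = features(reset()).shape[0]
    w = np.zeros((n_actions, d))
    s = reset()
    for _ in range(steps):
        phi = features(s)
        q = w @ phi
        a = rng.integers(n_actions) if rng.random() < epsilon else int(np.argmax(q))
        s2, r, done = env_step(s, a)
        target = r if done else r + gamma * np.max(w @ features(s2))
        # regularized semi-gradient update: the -eta*w[a] term pulls weights to zero
        w[a] += alpha * ((target - q[a]) * phi - eta * w[a])
        s = reset() if done else s2
    return w

# toy 5-state chain: actions move left/right, reward 1 for reaching the last state
def reset():
    return 0

def env_step(s, a):
    s2 = min(4, s + 1) if a == 1 else max(0, s - 1)
    return s2, (1.0 if s2 == 4 else 0.0), s2 == 4

features = lambda s: np.eye(5)[s]
w = regularized_q_learning(env_step, reset, features, n_actions=2)
print("greedy action per state:", np.argmax(w, axis=0))   # expect mostly 1 (move right)
```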
In this paper, we initiate the study of isogeometric analysis (IGA) of a quantum three-body problem that is well known to be difficult to solve. In the IGA setting, we represent the wavefunctions by linear combinations of B-spline basis functions and solve the problem as a matrix eigenvalue problem. The eigenvalue gives the eigenstate energy, while the eigenvector gives the coefficients of the B-splines that form the eigenstate. The major difficulty of isogeometric or other finite-element-method-based analyses lies in the lack of boundary conditions and the large number of degrees of freedom required for accuracy. For a typical many-body problem with attractive interaction, there are bound and scattering states, where bound states have negative eigenvalues. We focus on bound states and start with the analysis of a two-body problem. We demonstrate through various numerical experiments that IGA provides a promising technique for solving three-body problems.
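To make the discretization concrete, here is a minimal 1-D sketch of a B-spline Galerkin (IGA-style) eigenvalue computation for a two-body relative-coordinate problem with an attractive Gaussian well, assuming units with $\hbar^2/2m=1$; the potential, domain, and spline parameters are illustrative, and the paper's three-body formulation is considerably more involved.

```python
import numpy as np
from scipy.interpolate import BSpline
from scipy.linalg import eigh

# Galerkin (IGA-style) discretization of  -u'' + V(x) u = E u  on [-L, L]
# with u(+-L) = 0, clamped cubic B-splines, and an attractive Gaussian well.
# Units with hbar^2 / 2m = 1; all parameters below are illustrative.
L, p, n_elem = 15.0, 3, 60
V = lambda x: -2.0 * np.exp(-x**2)

knots = np.concatenate(([-L] * p, np.linspace(-L, L, n_elem + 1), [L] * p))
n_basis = len(knots) - p - 1

x = np.linspace(-L, L, 4001)                  # fine grid for trapezoid quadrature
dx = x[1] - x[0]
B = np.empty((n_basis, x.size))
dB = np.empty((n_basis, x.size))
for i in range(n_basis):
    c = np.zeros(n_basis)
    c[i] = 1.0
    spline = BSpline(knots, c, p)
    B[i] = spline(x)
    dB[i] = spline.derivative()(x)

w = np.full(x.size, dx)                       # trapezoid quadrature weights
w[0] = w[-1] = dx / 2
M = (B * w) @ B.T                             # mass matrix  int B_i B_j
K = (dB * w) @ dB.T + (B * (V(x) * w)) @ B.T  # stiffness + potential terms

# impose u(+-L) = 0 by dropping the first and last basis functions
E, C = eigh(K[1:-1, 1:-1], M[1:-1, 1:-1])
print("lowest eigenvalues:", E[:3])           # bound states have E < 0
```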
Time efficiency is one of the most critical concerns in computational fluid dynamics simulations of industrial applications. Extensive research has been conducted to improve the underlying numerical schemes in order to reduce processing time. Within this context, this paper presents a new time discretization method based on the Adomian decomposition technique for the Euler equations. The resulting scheme is time-order adaptive: the order is automatically adjusted at each time step and over the space domain, leading to a significant reduction in processing time. The scheme is formulated as a recursive formula, and its efficiency is demonstrated through numerical tests by comparison with exact solutions and the popular Runge-Kutta discontinuous Galerkin (RKDG) method.
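The paper applies the Adomian decomposition to the Euler system; as a toy illustration of the time-order-adaptive idea only, the sketch below advances the scalar ODE $u'=-u^2$ (exact solution $1/(1+t)$ for $u(0)=1$) by building the Adomian series of the quadratic nonlinearity within each step and truncating it adaptively once the latest correction falls below a tolerance.

```python
import numpy as np

def adomian_step(u0, h, tol=1e-12, max_order=12):
    """One step of size h for u' = -u^2 via the Adomian decomposition.
    Each component u_k is a polynomial in the local time tau (ascending
    coefficients); the series is truncated adaptively once the newest
    correction at tau = h drops below tol.  A toy scalar sketch, not the
    paper's Euler-equations scheme."""
    comps = [np.array([float(u0)])]              # u_0(tau) = u0
    powers = float(h) ** np.arange(max_order + 2)
    u_new, order = float(u0), 0
    for n in range(max_order):
        # Adomian polynomial of N(u) = -u^2:  A_n = -sum_{k=0}^{n} u_k * u_{n-k}
        A = np.zeros(n + 1)
        for k in range(n + 1):
            prod = np.convolve(comps[k], comps[n - k])
            A[:prod.size] -= prod
        # u_{n+1}(tau) = integral_0^tau A_n(s) ds
        u_next = np.concatenate(([0.0], A / np.arange(1, A.size + 1)))
        comps.append(u_next)
        correction = u_next @ powers[:u_next.size]
        u_new += correction
        order = n + 1
        if abs(correction) < tol:
            break
    return u_new, order

# integrate u' = -u^2, u(0) = 1 (exact solution 1/(1+t)) on [0, 2]
t, u, h = 0.0, 1.0, 0.2
while t < 2.0 - 1e-12:
    u, order = adomian_step(u, h)
    t += h
print("computed:", u, "exact:", 1.0 / (1.0 + t), "last order used:", order)
```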
We consider the following oblivious sketching problem: given $\epsilon \in (0,1/3)$ and $n \geq d/\epsilon^2$, design a distribution $\mathcal{D}$ over $\mathbb{R}^{k \times nd}$ and a function $f: \mathbb{R}^k \times \mathbb{R}^{k \times nd} \rightarrow \mathbb{R}$, so that for any $n \times d$ matrix $A$, $$\Pr_{S \sim \mathcal{D}} [(1-\epsilon) \|A\|_{op} \leq f(S(A),S) \leq (1+\epsilon)\|A\|_{op}] \geq 2/3,$$ where $\|A\|_{op}$ is the operator norm of $A$ and $S(A)$ denotes $S \cdot A$, interpreting $A$ as a vector in $\mathbb{R}^{nd}$. We show a tight lower bound of $k = \Omega(d^2/\epsilon^2)$ for this problem. Our result considerably strengthens the result of Nelson and Nguyen (ICALP, 2014), as it (1) applies even to the weaker task of estimating only the operator norm, which can be done given any oblivious subspace embedding (OSE), and (2) applies to distributions over general linear operators $S$ which treat $A$ as a vector and compute $S(A)$, rather than the restricted class of linear operators corresponding to matrix multiplication. Our technique also implies the first tight bounds for approximating the Schatten $p$-norm for even integers $p$ via general linear sketches, improving the previous lower bound from $k = \Omega(n^{2-6/p})$ [Regev, 2014] to $k = \Omega(n^{2-4/p})$. Importantly, for sketching the operator norm up to a factor of $\alpha$, where $\alpha - 1 = \Omega(1)$, we obtain a tight $k = \Omega(n^2/\alpha^4)$ bound, matching the upper bound of Andoni and Nguyen (SODA, 2013) and improving the previous $k = \Omega(n^2/\alpha^6)$ lower bound. Finally, we also obtain the first lower bounds for approximating Ky Fan norms.
Recently, recovering an unknown signal from quadratic measurements has gained popularity because it includes many interesting applications as special cases, such as phase retrieval, fusion frame phase retrieval, and positive operator-valued measures. In this paper, employing the least squares approach to reconstruct the signal, we establish a non-asymptotic statistical property showing that the gap between the estimator and the true signal vanishes in the noiseless case and, in the noisy case, is bounded by an error rate of $O(\sqrt{p\log(1+2n)/n})$, where $n$ and $p$ are the number of measurements and the dimension of the signal, respectively. We develop a gradient regularized Newton method (GRNM) to solve the least squares problem and prove that it converges to a unique local minimum at a superlinear rate under certain mild conditions. In addition to these deterministic results, GRNM reconstructs the true signal exactly in the noiseless case and achieves the above error rate with high probability in the noisy case. Numerical experiments demonstrate that GRNM performs well in terms of recovery accuracy, computational speed, and recovery capability.
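For concreteness, the sketch below sets up the least squares objective for quadratic measurements $y_i\approx x^\top A_i x$ and applies one plausible reading of a gradient-regularized Newton step, damping the Newton system by a multiple of the current gradient norm; the paper's exact GRNM update, step-size rule, and assumptions may differ.

```python
import numpy as np

def grnm_sketch(y, A, x0, mu=1.0, tol=1e-10, max_iter=50):
    """Least-squares recovery from quadratic measurements y_i ~ x^T A_i x
    using a gradient-regularized Newton iteration: the Newton system is
    damped by mu times the current gradient norm.  One plausible reading
    of the idea only; the paper's GRNM may use a different rule."""
    n, x = len(y), x0.copy()
    for _ in range(max_iter):
        Ax = np.einsum('kij,j->ki', A, x)            # rows A_i x
        r = np.einsum('ki,i->k', Ax, x) - y          # residuals x^T A_i x - y_i
        g = (4.0 / n) * Ax.T @ r                     # gradient of (1/n) sum r_i^2
        if np.linalg.norm(g) < tol:
            break
        H = (8.0 / n) * Ax.T @ Ax + (4.0 / n) * np.einsum('k,kij->ij', r, A)
        d = np.linalg.solve(H + mu * np.linalg.norm(g) * np.eye(x.size), -g)
        x = x + d
    return x

# toy noiseless instance with symmetric Gaussian measurement matrices
rng = np.random.default_rng(2)
p, n = 10, 200
x_true = rng.normal(size=p)
A = rng.normal(size=(n, p, p))
A = (A + A.transpose(0, 2, 1)) / 2.0
y = np.einsum('kij,i,j->k', A, x_true, x_true)
x_hat = grnm_sketch(y, A, x0=x_true + 0.3 * rng.normal(size=p))
print(min(np.linalg.norm(x_hat - x_true), np.linalg.norm(x_hat + x_true)))
```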
This paper is concerned with the numerical solution of the scattering of a time-harmonic electromagnetic wave by a bounded and impenetrable obstacle in three dimensions. The electromagnetic wave propagation is modeled by a boundary value problem of Maxwell's equations in the exterior domain of the obstacle. Based on the Dirichlet-to-Neumann (DtN) operator, which is defined by an infinite series, an exact transparent boundary condition is introduced and the scattering problem is equivalently reduced to a problem in a bounded domain. An adaptive finite element DtN method, driven by an a posteriori error estimate, is developed to solve the discrete variational problem in which the DtN operator is truncated to a sum of finitely many terms. The a posteriori error estimate accounts for both the finite element approximation error and the truncation error of the DtN operator; the latter is shown to decay exponentially with respect to the truncation parameter. Numerical experiments are presented to illustrate the effectiveness of the proposed method.
Network embedding has attracted considerable research attention recently. However, the existing methods are incapable of handling billion-scale networks, because they are computationally expensive and, at the same time, difficult to accelerate with distributed computing schemes. To address these problems, we propose RandNE, a novel and simple billion-scale network embedding method. Specifically, we propose a Gaussian random projection approach to map the network into a low-dimensional embedding space while preserving the high-order proximities between nodes. To reduce the time complexity, we design an iterative projection procedure that avoids the explicit calculation of the high-order proximities. Theoretical analysis shows that our method is extremely efficient and friendly to distributed computing schemes, with no communication cost in the calculation. We demonstrate the efficacy of RandNE over state-of-the-art methods in network reconstruction and link prediction tasks on multiple datasets with different scales, ranging from thousands to billions of nodes and edges.
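A minimal sketch of the iterative-projection idea described above: project once with a Gaussian random matrix and then repeatedly multiply by the sparse adjacency matrix, so that high-order proximity matrices are never formed explicitly. The weights, normalization, and any orthogonalization step in the published RandNE method may differ from this illustration.

```python
import numpy as np
import scipy.sparse as sp

def randne_embed(adj, dim=128, order=3, weights=(1.0, 1.0, 0.1, 0.01), seed=0):
    """Iterative Gaussian random projection in the spirit of RandNE:
    U_0 is a random projection, U_q = adj @ U_{q-1}, and the embedding is
    a weighted sum of the U_q.  High-order proximity matrices adj^q are
    never formed explicitly.  Weights and normalization here are
    illustrative and may differ from the published method."""
    rng = np.random.default_rng(seed)
    n = adj.shape[0]
    U = rng.normal(scale=1.0 / np.sqrt(dim), size=(n, dim))   # Gaussian projection
    emb = weights[0] * U
    for q in range(1, order + 1):
        U = adj @ U                        # one sparse-times-dense product per order
        emb = emb + weights[q] * U
    return emb

# toy example: a random sparse undirected graph
n = 1000
A = sp.random(n, n, density=0.01, random_state=3, format='csr')
A = ((A + A.T) > 0).astype(float)
emb = randne_embed(A, dim=32, order=3)
print(emb.shape)
```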