The weighted Euclidean norm $\|x\|_w$ of a vector $x\in \mathbb{R}^d$ with weights $w\in \mathbb{R}^d$ is the Euclidean norm where the contribution of each dimension is scaled by a given weight. Approaches to dimensionality reduction that satisfy the Johnson-Lindenstrauss (JL) lemma can be easily adapted to the weighted Euclidean distance if weights are fixed: it suffices to scale each dimension of the input vectors according to the weights, and then apply any standard approach. However, this is not the case when weights are unknown during the dimensionality reduction or might dynamically change. We address this issue by providing an approach that maps vectors into a smaller complex vector space, but still allows a JL-like property for the weighted Euclidean distance to be satisfied when weights are revealed. Specifically, let $\Delta\geq 1, \epsilon \in (0,1)$ be arbitrary values, and let $S\subset \mathbb{R}^d$ be a set of $n$ vectors. We provide a weight-oblivious linear map $g:\mathbb{R}^d \rightarrow \mathbb{C}^k$, with $k=\Theta(\epsilon^{-2}\Delta^4 \ln{n})$, to reduce vectors in $S$, and an estimator $\rho: \mathbb{C}^k \times \mathbb{R}^d \rightarrow \mathbb R$ with the following property. For any $x\in S$, the value $\rho(g(x), w)$ is an unbiased estimate of $\|x\|^2_w$, and $\rho$ is computed from the reduced vector $g(x)$ and the weights $w$. Moreover, the error of the estimate $\rho(g(x), w)$ depends on the norm distortion due to weights and the parameter $\Delta$: for any $x\in S$, the estimate has a multiplicative error $\epsilon$ if $\|x\|_2\|w\|_2/\|x\|_w\leq \Delta$, otherwise the estimate has an additive $\epsilon \|x\|^2_2\|w\|^2_2/\Delta^2$ error. Finally, we consider the estimation of weighted Euclidean norms in streaming settings: we show how to estimate the weighted norm when the weights are provided either after or concurrently with the input vector.
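The fixed-weights baseline mentioned in the abstract can be sketched as follows. This is a minimal illustration, not the paper's weight-oblivious map $g$: it assumes the weights are known in advance, uses the convention $\|x\|_w^2 = \sum_i (w_i x_i)^2$, and takes a Gaussian random matrix as the standard JL map.

```python
import numpy as np

# Fixed-weights reduction: scale each coordinate by its weight, then apply
# any standard JL map (here a Gaussian random projection).
# Assumes the convention ||x||_w^2 = sum_i (w_i * x_i)^2.
rng = np.random.default_rng(0)
d, k = 1000, 400
w = rng.uniform(0.5, 2.0, size=d)    # weights, known in advance here
x = rng.standard_normal(d)

weighted_norm_sq = float(np.sum((w * x) ** 2))

A = rng.standard_normal((k, d)) / np.sqrt(k)   # Gaussian JL projection
y = A @ (w * x)                                # scale, then project
estimate = float(y @ y)

print(abs(estimate - weighted_norm_sq) / weighted_norm_sq)  # small relative error
```

This only works because $w$ is available before the reduction; the point of the paper is precisely the setting where it is not.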
We develop a sparse spectral method for a class of fractional differential equations, posed on $\mathbb{R}$, in one dimension. These equations can include sqrt-Laplacian, Hilbert, derivative and identity terms. The numerical method utilizes a basis consisting of weighted Chebyshev polynomials of the second kind in conjunction with their Hilbert transforms. The former functions are supported on $[-1,1]$ whereas the latter have global support. The global approximation space can contain different affine transformations of the basis, mapping $[-1,1]$ to other intervals. Remarkably, not only are the induced linear systems sparse, but the operator decouples across the different affine transformations. Hence, the solve reduces to solving $K$ independent sparse linear systems of size $\mathcal{O}(n)\times \mathcal{O}(n)$, with $\mathcal{O}(n)$ nonzero entries, where $K$ is the number of different intervals and $n$ is the highest polynomial degree contained in the sum space. This results in an $\mathcal{O}(n)$ complexity solve. Applications to fractional heat and wave equations are considered.
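The pairing of weighted Chebyshev polynomials of the second kind with their Hilbert transforms rests on a classical finite-Hilbert-transform identity, stated here in one common normalization (sign and normalization conventions vary across references):

```latex
\frac{1}{\pi}\,\mathrm{p.v.}\!\int_{-1}^{1}
\frac{\sqrt{1-t^{2}}\,U_{n}(t)}{x-t}\,\mathrm{d}t
\;=\; T_{n+1}(x), \qquad x\in(-1,1),
```

where $U_n$ and $T_{n+1}$ are the Chebyshev polynomials of the second and first kind. Outside $[-1,1]$ the transform is no longer polynomial, which is consistent with the global support of the second family of basis functions.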
We give an alternative derivation of $(N,N)$-isogenies between fast Kummer surfaces which complements existing works based on the theory of theta functions. We use this framework to produce explicit formulae for the case of $N = 3$, and show that the resulting algorithms are more efficient than all prior $(3, 3)$-isogeny algorithms.
This paper proposes a novel technique for the approximation of strong solutions $u \in C(\overline{\Omega}) \cap W^{2,n}_\mathrm{loc}(\Omega)$ to uniformly elliptic linear PDE of second order in nondivergence form with continuous leading coefficient in nonsmooth domains by finite element methods. These solutions satisfy the Alexandrov-Bakelman-Pucci (ABP) maximum principle, which provides an a~posteriori error control for $C^1$ conforming approximations. By minimizing this residual, we obtain an approximation to the solution $u$ in the $L^\infty$ norm. Although discontinuous functions do not satisfy the ABP maximum principle, this approach extends to nonconforming FEM as well thanks to well-established enrichment operators. Convergence of the proposed FEM is established for uniform mesh-refinements. The built-in a~posteriori error control (even for inexact solve) can be utilized in adaptive computations for the approximation of singular solutions, where it outperforms the uniform mesh-refining algorithm in the numerical benchmarks.
Partial differential equations (PDEs) have become an essential tool for modeling complex physical systems. Such equations are typically solved numerically via mesh-based methods, such as finite element methods, yielding solutions over the spatial domain. However, obtaining these solutions is often prohibitively costly, limiting the feasibility of exploring parameters in PDEs. In this paper, we propose an efficient emulator that simultaneously predicts the solutions over the spatial domain, with theoretical justification of its uncertainty quantification. The novelty of the proposed method lies in the incorporation of the mesh node coordinates into the statistical model. In particular, the proposed method segments the mesh nodes into multiple clusters via a Dirichlet process prior and fits Gaussian process models with the same hyperparameters in each of them. Most importantly, by revealing the underlying clustering structures, the proposed method can provide valuable insights into qualitative features of the resulting dynamics that can be used to guide further investigations. Real examples demonstrate that our proposed method has smaller prediction errors than its main competitors, with competitive computation time, and identifies interesting clusters of mesh nodes that possess physical significance, such as satisfying boundary conditions. An R package for the proposed methodology is provided in an open repository.
We prove sharp bounds on certain impedance-to-impedance maps (and their compositions) for the Helmholtz equation with large wavenumber (i.e., at high-frequency) using semiclassical defect measures. The paper [GGGLS] (Gong-Gander-Graham-Lafontaine-Spence, 2022) recently showed that the behaviour of these impedance-to-impedance maps (and their compositions) dictates the convergence of the parallel overlapping Schwarz domain-decomposition method with impedance boundary conditions on the subdomain boundaries. For a model decomposition with two subdomains and sufficiently-large overlap, the results of this paper combined with those in [GGGLS] show that the parallel Schwarz method is power contractive, independent of the wavenumber. For strip-type decompositions with many subdomains, the results of this paper show that the composite impedance-to-impedance maps, in general, behave "badly" with respect to the wavenumber; nevertheless, by proving results about the composite maps applied to a restricted class of data, we give insight into the wavenumber-robustness of the parallel Schwarz method observed in the numerical experiments in [GGGLS].
The volume function $V(t)$ of a compact set $S\subset \mathbb{R}^d$ is just the Lebesgue measure of the set of points within a distance to $S$ not larger than $t$. According to some classical results in geometric measure theory, the volume function turns out to be a polynomial, at least in a finite interval, under a quite intuitive, easy to interpret, sufficient condition (called ``positive reach'') which can be seen as an extension of the notion of convexity. However, many other simple sets, not fulfilling the positive reach condition, also have a polynomial volume function. To our knowledge, there is no general, simple geometric description of such sets. Still, the polynomial character of $V(t)$ has some relevant consequences since the polynomial coefficients carry some useful geometric information. In particular, the constant term is the volume of $S$ and the first order coefficient is the boundary measure (in Minkowski's sense). This paper is focused on sets whose volume function is polynomial on some interval starting at zero, whose length (that we call ``polynomial reach'') might be unknown. Our main goal is to approximate such polynomial reach by statistical means, using only a large enough random sample of points inside $S$. The practical motivation is simple: when the value of the polynomial reach, or rather a lower bound for it, is approximately known, the polynomial coefficients can be estimated from the sample points by using standard methods in polynomial approximation. As a result, we get a quite general method to estimate the volume and boundary measure of the set, relying only on an inner sample of points and not requiring the use of any smoothing parameter. This paper explores the theoretical and practical aspects of this idea.
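The overall recipe (estimate $V(t)$ from an inner sample at several dilation radii, then fit a polynomial whose constant and linear coefficients recover volume and boundary measure) can be sketched on a toy example. This is an illustrative Monte Carlo version, not the paper's estimator: $S$ is the unit disk in $\mathbb{R}^2$, for which $V(t)=\pi(1+t)^2=\pi + 2\pi t + \pi t^2$ exactly, and the dilation of $S$ is approximated by the union of balls around the sample points.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy set S = unit disk in R^2: V(t) = pi + 2*pi*t + pi*t^2, so the constant
# term is the volume of S and the linear coefficient its boundary measure.
n = 5000
theta = rng.uniform(0.0, 2.0 * np.pi, n)
r = np.sqrt(rng.uniform(0.0, 1.0, n))
sample = np.column_stack([r * np.cos(theta), r * np.sin(theta)])  # uniform in S

def vol_dilation(t, n_mc=20000, chunk=500):
    """Monte Carlo estimate of Leb({y : dist(y, sample) <= t})."""
    L = 1.0 + t                                  # bounding-box half-width
    y = rng.uniform(-L, L, size=(n_mc, 2))
    covered = np.empty(n_mc, dtype=bool)
    for i in range(0, n_mc, chunk):              # chunk to bound memory use
        d2 = ((y[i:i+chunk, None, :] - sample[None, :, :]) ** 2).sum(axis=2)
        covered[i:i+chunk] = d2.min(axis=1) <= t * t
    return (2.0 * L) ** 2 * covered.mean()

ts = np.linspace(0.2, 1.0, 9)                    # radii inside the polynomial reach
vols = [vol_dilation(t) for t in ts]
c = np.polynomial.polynomial.polyfit(ts, vols, 2)  # c[0] + c[1]*t + c[2]*t^2
print(c)  # roughly [pi, 2*pi, pi]
```

Here the interval of radii is chosen inside the (known) polynomial reach; estimating that reach from the sample is exactly the problem the paper addresses.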
We describe an algorithm which, given two essential curves on a surface $S$, computes their distance in the curve graph of $S$, up to multiplicative and additive errors. As an application, we present an algorithm to decide the Nielsen-Thurston type (periodic, reducible, or pseudo-Anosov) of a mapping class of $S$. The novelty of our algorithms lies in the fact that their running time is polynomial in the size of the input and in the complexity of $S$ -- say, its Euler characteristic. This is in contrast with previously known algorithms, which run in polynomial time in the size of the input for any fixed surface $S$.
In this paper we develop a classical algorithm of complexity $O(K \, 2^n)$ to simulate parametrized quantum circuits (PQCs) of $n$ qubits, where $K$ is the total number of one-qubit and two-qubit control gates. The algorithm is developed by finding $2$-sparse unitary matrices of order $2^n$ explicitly corresponding to any single-qubit and two-qubit control gates in an $n$-qubit system. Finally, we determine analytical expressions of the Hamiltonians for any such gate, and consequently obtain a local Hamiltonian decomposition of any PQC. All results are validated with numerical simulations.
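The $2$-sparse structure of such gate matrices is easy to see directly. The sketch below (an illustrative construction, not taken from the paper; the bit-ordering convention with qubit 0 as the most significant bit is an assumption) builds the full $2^n \times 2^n$ matrix of a controlled single-qubit gate and checks that every column has at most two nonzero entries and that the matrix is unitary.

```python
import numpy as np

def controlled_gate_matrix(n, control, target, U):
    """Full 2^n x 2^n matrix of a controlled-U gate in an n-qubit system.
    Convention (assumed here): qubit 0 is the most significant bit."""
    N = 2 ** n
    M = np.zeros((N, N), dtype=complex)
    for b in range(N):                               # basis state |b>
        if ((b >> (n - 1 - control)) & 1) == 0:
            M[b, b] = 1.0                            # control is 0: identity
        else:
            t = (b >> (n - 1 - target)) & 1          # current target bit
            for s in (0, 1):                         # U acts on the target bit
                b2 = b if s == t else b ^ (1 << (n - 1 - target))
                M[b2, b] = U[s, t]
    return M

H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
CH = controlled_gate_matrix(3, control=0, target=2, U=H)

print(np.count_nonzero(CH, axis=0).max())        # 2: the matrix is 2-sparse
print(np.allclose(CH.conj().T @ CH, np.eye(8)))  # True: it is unitary
```

For a diagonal-in-one-bit gate like controlled-NOT the matrix is even a permutation (1-sparse); two nonzeros per column is the generic case.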
In this paper we study the Cayley graph $\mathrm{Cay}(S_n,T)$ of the symmetric group $S_n$ generated by a set of transpositions $T$. We show that for $n\geq 5$ the Cayley graph is normal. As a corollary, we show that its automorphism group is a direct product of $S_n$ and the automorphism group of the transposition graph associated to $T$. This provides an affirmative answer to a conjecture raised by Ganesan in arXiv:1703.08109, showing that $\mathrm{Cay}(S_n,T)$ is normal if and only if the transposition graph is not $C_4$ or $K_n$.
The {\em acyclic chromatic number} of a graph is the least number of colors needed to properly color its vertices so that none of its cycles has only two colors. The {\em acyclic chromatic index} is the analogous graph parameter for edge colorings. We first show that the acyclic chromatic index is at most $2\Delta-1$, where $\Delta$ is the maximum degree of the graph. We then show that for all $\epsilon >0$ and for $\Delta$ large enough (depending on $\epsilon$), the acyclic chromatic number of the graph is at most $\lceil(4^{-1/3} +\epsilon) {\Delta}^{4/3} \rceil +\Delta+ 1$. Both results improve long chains of previous successive advances. Both are algorithmic, in the sense that the colorings are generated by randomized algorithms. Previous randomized algorithms assume the availability of enough colors to guarantee properness deterministically and use additional colors in dealing with the bichromatic cycles in a randomized fashion. In contrast, our algorithm initially generates colorings that are not necessarily proper; it only aims at avoiding cycles where all pairs of edges, or vertices, that are one edge, or vertex, apart in a traversal of the cycle are homochromatic (of the same color). When this goal is reached, the algorithm checks for properness and, if necessary, repeats until properness is attained. Thus, savings in the number of colors are attained.
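The cycle condition avoided by the algorithm can be sketched as a simple predicate. This is one illustrative reading of "all pairs of edges one edge apart in a traversal are homochromatic" (i.e., positions $i$ and $i+2$ along the cycle share a color, so the edge colors alternate with period two); it is not code from the paper.

```python
def is_near_bichromatic(cycle_colors):
    """True if every pair of edges one edge apart in a traversal of the
    cycle (positions i and i+2, cyclically) have the same color, i.e. the
    cycle is colored with at most two alternating colors."""
    m = len(cycle_colors)
    return all(cycle_colors[i] == cycle_colors[(i + 2) % m] for i in range(m))

print(is_near_bichromatic([1, 2, 1, 2, 1, 2]))  # True: two alternating colors
print(is_near_bichromatic([1, 2, 3, 2, 1, 2]))  # False: a third color breaks it
```

A proper coloring with no cycle satisfying this predicate has, in particular, no bichromatic cycle, which is the acyclicity requirement.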