A new generalization of multiquadric functions, $\phi(x)=\sqrt{c^{2d}+\|x\|^{2d}}$, where $x\in\mathbb{R}^n$, $c\in\mathbb{R}$, and $d\in\mathbb{N}$, is presented to further increase the accuracy of quasi-interpolation. Restricted to Euclidean spaces of odd dimensionality, the generalization can be used to generate a quasi-Lagrange operator that reproduces all polynomials of degree $2d-1$. In contrast to the classical multiquadric, the convergence rate of the quasi-interpolation operator can be significantly improved, by a factor of $h^{2d-n-1}$, where $h>0$ is the grid spacing. Among other things, we compute the generalized Fourier transform of this new multiquadric function. Finally, an infinite regular grid is employed to analyse the properties of the generalization in detail.
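A minimal sketch in Python of the generalized multiquadric above; the parameter values and the test point are illustrative only, and $d=1$ recovers the classical multiquadric.

```python
# Sketch: the generalized multiquadric phi(x) = sqrt(c^(2d) + ||x||^(2d)).
# Parameter values below are illustrative, not taken from the paper.
import numpy as np

def generalized_multiquadric(x, c=1.0, d=2):
    """phi(x) = sqrt(c^(2d) + ||x||^(2d)) for x in R^n."""
    r = np.linalg.norm(x)
    return np.sqrt(c ** (2 * d) + r ** (2 * d))

x = np.array([1.0, 2.0, 3.0])
print(generalized_multiquadric(x, c=1.0, d=1))  # classical multiquadric sqrt(c^2 + ||x||^2)
print(generalized_multiquadric(x, c=1.0, d=2))  # the generalization with d = 2
```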
An approach for solving a variety of inverse coefficient problems for the Sturm-Liouville equation $-y''+q(x)y=\lambda y$ with a complex-valued potential $q(x)$ is presented. It is based on Neumann series of Bessel functions (NSBF) representations for solutions. With their aid, the problem is reduced to a system of linear algebraic equations for the coefficients of the representations, and the potential is recovered from an arithmetic combination of the first two coefficients. Special cases of the considered problems include the recovery of the potential from a Weyl function, inverse two-spectra Sturm-Liouville problems, and the inverse scattering problem on a finite interval. The approach leads to efficient numerical algorithms for solving inverse coefficient problems; its numerical efficiency is illustrated by several examples.
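As a rough illustration of the main ingredient, the sketch below evaluates a truncated series of spherical Bessel functions of the schematic form $\cos(\omega x)+\sum_{n=0}^{N}\beta_n(x)\,j_{2n}(\omega x)$. The coefficients passed in are placeholders, not the ones the method computes, and the exact form of the paper's representation may differ.

```python
# Hedged sketch: partial sum of a Neumann series of Bessel functions (NSBF).
# The beta values are PLACEHOLDERS at a fixed x, not the method's coefficients.
import numpy as np
from scipy.special import spherical_jn

def nsbf_partial_sum(omega, x, beta):
    """Evaluate cos(omega*x) + sum_n beta[n] * j_{2n}(omega*x)."""
    s = np.cos(omega * x)
    for n, b in enumerate(beta):
        s += b * spherical_jn(2 * n, omega * x)
    return s

# Illustrative call with two placeholder coefficients.
print(nsbf_partial_sum(omega=3.0, x=0.5, beta=[0.1, -0.02]))
```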
The Fast Fourier Transform (FFT) over a finite field $\mathbb{F}_q$ computes evaluations of a given polynomial of degree less than $n$ at a specifically chosen set of $n$ distinct evaluation points in $\mathbb{F}_q$. If $q$ or $q-1$ is a smooth number, then the divide-and-conquer approach leads to the fastest known FFT algorithms. Depending on the type of group that the set of evaluation points forms, these algorithms are classified as multiplicative (Math of Comp. 1965) and additive (FOCS 2014) FFT algorithms. In this work, we provide a unified framework for FFT algorithms that includes both multiplicative and additive FFT algorithms as special cases, and goes beyond: our framework also works when $q+1$ is smooth, while all previously known results require $q$ or $q-1$ to be smooth. For this new case, which to the best of our knowledge has not been considered before in the literature, we show that if $n$ is a $B$-smooth divisor of $q+1$ for some real $B>0$, then our FFT needs $O(Bn\log n)$ arithmetic operations in $\mathbb{F}_q$. Our unified framework is a natural consequence of introducing algebraic function fields into the study of the FFT.
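For concreteness, here is a sketch of the classical multiplicative FFT (a number-theoretic transform) for the well-known case $n \mid q-1$, not the paper's unified framework; the choices $q=17$ and $n=8$ are illustrative.

```python
# Sketch of the classical *multiplicative* FFT over F_q: q = 17 is prime,
# n = 8 divides q - 1 = 16, and w is a primitive n-th root of unity mod q.
def ntt(coeffs, w, q):
    """Evaluate the polynomial with the given coefficients at w^0, ..., w^(n-1) mod q."""
    n = len(coeffs)
    if n == 1:
        return coeffs
    even = ntt(coeffs[0::2], w * w % q, q)  # DFT of even-index coefficients
    odd = ntt(coeffs[1::2], w * w % q, q)   # DFT of odd-index coefficients
    out = [0] * n
    t = 1  # running power w^i
    for i in range(n // 2):
        out[i] = (even[i] + t * odd[i]) % q
        out[i + n // 2] = (even[i] - t * odd[i]) % q  # uses w^(n/2) = -1
        t = t * w % q
    return out

q, n = 17, 8
w = pow(3, (q - 1) // n, q)  # 3 generates F_17^*, so w has order 8
print(ntt([1, 2, 3, 4, 0, 0, 0, 0], w, q))
```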
We study several polygonal curve problems under the Fr\'{e}chet distance via algebraic geometric methods. Let $\mathbb{X}_m^d$ and $\mathbb{X}_k^d$ be the spaces of all polygonal curves of $m$ and $k$ vertices in $\mathbb{R}^d$, respectively, where we assume $k \leq m$. Let $\mathcal{R}^d_{k,m}$ be the set of ranges in $\mathbb{X}_m^d$ for all possible metric balls of polygonal curves in $\mathbb{X}_k^d$ under the Fr\'{e}chet distance. We prove a nearly optimal bound of $O(dk\log (km))$ on the VC dimension of the range space $(\mathbb{X}_m^d,\mathcal{R}_{k,m}^d)$, improving on the previous $O(d^2k^2\log(dkm))$ upper bound and approaching the current $\Omega(dk\log k)$ lower bound. Our upper bound also holds for the weak Fr\'{e}chet distance. We also obtain exact solutions, hitherto unknown, for curve simplification, range searching, nearest-neighbor search, and distance oracles.
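As background, the following sketch computes the discrete Fréchet distance between two polygonal curves via the classic Eiter-Mannila dynamic program; the paper itself concerns the (weak) continuous Fréchet distance, of which this is a simpler relative.

```python
# Sketch: discrete Frechet distance between polygonal curves P and Q,
# computed by dynamic programming over all monotone vertex couplings.
import numpy as np

def discrete_frechet(P, Q):
    m, k = len(P), len(Q)
    d = np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=2)  # pairwise distances
    dp = np.full((m, k), np.inf)
    dp[0, 0] = d[0, 0]
    for i in range(m):
        for j in range(k):
            if i == 0 and j == 0:
                continue
            prev = min(dp[i - 1, j] if i > 0 else np.inf,
                       dp[i, j - 1] if j > 0 else np.inf,
                       dp[i - 1, j - 1] if i > 0 and j > 0 else np.inf)
            dp[i, j] = max(prev, d[i, j])  # bottleneck cost of the best coupling
    return dp[-1, -1]

P = np.array([[0, 0], [1, 0], [2, 0]], dtype=float)
Q = np.array([[0, 1], [2, 1]], dtype=float)
print(discrete_frechet(P, Q))  # sqrt(2) ~ 1.414
```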
Constructing a similarity graph from a set $X$ of data points in $\mathbb{R}^d$ is the first step of many modern clustering algorithms. However, typical constructions of a similarity graph have high time complexity and a quadratic space dependency with respect to $|X|$. We address this limitation and present a new algorithmic framework that constructs a sparse approximation of the fully connected similarity graph while preserving its cluster structure. Our algorithm is based on the kernel density estimation problem and is applicable to arbitrary kernel functions. We compare our algorithm with the well-known implementations from the scikit-learn and FAISS libraries, and find that it significantly outperforms both on a variety of datasets.
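For context, the sketch below builds a standard sparse $k$-nearest-neighbour similarity graph with scikit-learn's kneighbors_graph, the kind of baseline construction being compared against; it is not the paper's KDE-based algorithm, and the data and parameters are illustrative.

```python
# Sketch: a sparse kNN similarity graph (scikit-learn baseline), avoiding the
# O(|X|^2) space of the fully connected similarity graph.
import numpy as np
from sklearn.neighbors import kneighbors_graph

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 2))  # toy data set

# Each point keeps edges to its 10 nearest neighbours only.
A = kneighbors_graph(X, n_neighbors=10, mode='distance')
A.data = np.exp(-A.data ** 2)   # turn distances into Gaussian similarity weights
print(A.shape, A.nnz)           # sparse matrix: 1000 x 1000 with ~10 edges per node
```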
Given a set $P \subset \mathbb{R}^d$ of $n$ points, with diameter $\Delta$, and a parameter $\delta \in (0,1)$, it is known that there is a partition of $P$ into sets $P_1, \ldots, P_t$, each of size $O(1/\delta^2)$, such that their convex hulls all intersect a common ball of radius $\delta \Delta$. We prove that a random partition, with a simple alteration step, yields the desired partition, resulting in a (randomized) linear time algorithm. We also provide a deterministic algorithm with running time $O(dn \log n)$. Previous proofs were either existential (i.e., at least exponential time), or required much bigger sets. In addition, the algorithm and its proof of correctness are significantly simpler than previous work, and the constants are slightly better. We also include a number of applications and extensions using the same central ideas. For example, we provide a linear time algorithm for computing a ``fuzzy'' centerpoint, and prove a no-dimensional weak $\varepsilon$-net theorem with an improved constant.
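A sketch of only the random-partition step mentioned above (shuffle the points and cut them into consecutive groups of size $O(1/\delta^2)$); the alteration step and the convex-hull/ball guarantee are omitted, so this is not the full algorithm.

```python
# Sketch of the random-partition step only: shuffle and chunk into groups of
# size about 1/delta^2. The alteration step from the paper is omitted.
import numpy as np

def random_partition(P, delta, seed=0):
    rng = np.random.default_rng(seed)
    g = int(np.ceil(1.0 / delta ** 2))        # target group size O(1/delta^2)
    idx = rng.permutation(len(P))             # random order of the points
    return [P[idx[i:i + g]] for i in range(0, len(P), g)]

P = np.random.default_rng(1).normal(size=(200, 3))
parts = random_partition(P, delta=0.25)
print(len(parts), [len(p) for p in parts[:3]])  # number of groups, first group sizes
```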
A periodic temporal graph $\mathcal{G}=(G_0, G_1, \dots, G_{p-1})^*$ is an infinite periodic sequence of graphs $G_i=(V,E_i)$, where $G=(V,\cup_i E_i)$ is called the footprint. Recently, the arena where the Cops and Robber game is played has been extended from a graph to a periodic graph; in this case, the copnumber is again the minimum number of cops sufficient for capturing the robber. We study the connections and distinctions between the copnumber $c(\mathcal{G})$ of a periodic graph $\mathcal{G}$ and the copnumber $c(G)$ of its footprint $G$, and establish several facts. For instance, we show that the smallest periodic graph with $c(\mathcal{G}) = 3$ has at most $8$ nodes; in contrast, the smallest graph $G$ with $c(G) = 3$ has $10$ nodes. We push this investigation further by generating multiple examples showing how loosely the copnumbers of a periodic graph $\mathcal{G}$, of its constituent graphs $G_i$, and of its footprint $G$ can be tied. Based on these results, we derive upper bounds on the copnumber of a periodic graph from properties of its footprint, such as its treewidth.
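A minimal sketch of the footprint construction, i.e., the union of the snapshot edge sets over one period; the small example graphs are illustrative, and networkx is assumed.

```python
# Sketch: the footprint of a periodic temporal graph is the static graph on V
# whose edge set is the union of the snapshot edge sets E_i.
import networkx as nx

def footprint(snapshots):
    G = nx.Graph()
    for Gi in snapshots:
        G.add_nodes_from(Gi.nodes)
        G.add_edges_from(Gi.edges)
    return G

G0 = nx.Graph([(0, 1), (1, 2)])   # snapshot at even time steps
G1 = nx.Graph([(2, 3), (3, 0)])   # snapshot at odd time steps
print(sorted(footprint([G0, G1]).edges))  # [(0, 1), (0, 3), (1, 2), (2, 3)]
```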
We give a strongly explicit construction of $\varepsilon$-approximate $k$-designs for the orthogonal group $\mathrm{O}(N)$ and the unitary group $\mathrm{U}(N)$, for $N=2^n$. Our designs are of cardinality $\mathrm{poly}(N^k/\varepsilon)$ (equivalently, they have seed length $O(nk + \log(1/\varepsilon))$); up to the polynomial, this matches the number of design elements used by the construction consisting of completely random matrices.
A code $C \subseteq \{0, 1, 2\}^n$ of length $n$ is called trifferent if for any three distinct elements of $C$ there exists a coordinate in which they all differ. By $T(n)$ we denote the maximum cardinality of a trifferent code of length $n$. The values $T(5)=10$ and $T(6)=13$ were recently determined. Here we determine $T(7)=16$, $T(8)=20$, and $T(9)=27$. In the latter case $n=9$, there also exist linear codes attaining the maximum possible cardinality $27$.
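A brute-force check of the trifference condition is straightforward; the small example codes below are illustrative.

```python
# Sketch: brute-force check whether a ternary code C over {0,1,2}^n is
# trifferent, i.e., every three distinct codewords all differ in some coordinate.
from itertools import combinations

def is_trifferent(C):
    """True iff every triple of codewords has a coordinate showing all of 0, 1, 2."""
    return all(
        any(len({a[i], b[i], c[i]}) == 3 for i in range(len(a)))
        for a, b, c in combinations(C, 3)
    )

print(is_trifferent(["012", "120", "201"]))  # True: coordinate 0 shows 0, 1, 2
print(is_trifferent(["000", "111", "001"]))  # False: no coordinate shows all three
```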
We derive general bounds on the probability that the empirical first-passage time $\overline{\tau}_n\equiv \sum_{i=1}^n\tau_i/n$ of a reversible ergodic Markov process inferred from a sample of $n$ independent realizations deviates from the true mean first-passage time by more than any given amount in either direction. We construct non-asymptotic confidence intervals that hold in the elusive small-sample regime and thus fill the gap between asymptotic methods and the Bayesian approach that is known to be sensitive to prior belief and tends to underestimate uncertainty in the small-sample setting. We prove sharp bounds on extreme first-passage times that control uncertainty even in cases where the mean alone does not sufficiently characterize the statistics. Our concentration-of-measure-based results allow for model-free error control and reliable error estimation in kinetic inference, and are thus important for the analysis of experimental and simulation data in the presence of limited sampling.
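As a toy illustration of the inference setting (not the paper's bounds), the sketch below draws $n$ independent first-passage times of a simple biased random walk and reports the empirical mean $\overline{\tau}_n$ together with a naive standard error.

```python
# Toy sketch of the inference setting: n independent first-passage times of a
# biased random walk on {0,...,4}, reflected at 0 and absorbed at 4. The
# paper's non-asymptotic concentration bounds are NOT reproduced here.
import numpy as np

def first_passage_time(rng, target=4, start=0, p=0.6):
    state, t = start, 0
    while state != target:
        state = max(0, state + (1 if rng.random() < p else -1))
        t += 1
    return t

rng = np.random.default_rng(0)
n = 200
taus = np.array([first_passage_time(rng) for _ in range(n)])
# Empirical mean first-passage time and a naive (asymptotic) standard error.
print(taus.mean(), taus.std(ddof=1) / np.sqrt(n))
```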
Generalising the concept of a complete permutation polynomial over a finite field, we define completeness to level $k$ for $k\ge1$ in fields of odd characteristic. We construct two families of polynomials satisfying the condition of high-level completeness for all finite fields, and two more families that are complete to the maximum level possible for a large collection of finite fields. Under the binary operation of composition of functions, one family of polynomials forms an abelian group isomorphic to the additive group of the field, while the other is isomorphic to the multiplicative group.
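A minimal sketch of the level-$1$ case, i.e., the classical notion that both $f(x)$ and $f(x)+x$ permute $\mathbb{F}_p$; the paper's level-$k$ notion generalizes this, and the polynomial below is illustrative.

```python
# Sketch: level-1 completeness over the prime field F_p, i.e., the classical
# complete permutation polynomial condition: f(x) and f(x) + x both permute F_p.
def is_permutation(f, p):
    return len({f(x) % p for x in range(p)}) == p

def is_complete(f, p):
    return is_permutation(f, p) and is_permutation(lambda x: f(x) + x, p)

p = 7
f = lambda x: 3 * x  # illustrative: the linear map x -> 3x over F_7
print(is_permutation(f, p), is_complete(f, p))  # True True, since 3x and 4x permute F_7
```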