The DTW Barycenter Averaging (DBA) algorithm is a widely used method for estimating the mean of a given set of point sequences. In this context, the mean is defined as a point sequence that minimises the sum of dynamic time warping (DTW) distances. The algorithm is similar to the $k$-means algorithm in the sense that it alternately repeats two steps: (1) computing an optimal assignment to the points of the current mean, and (2) computing an optimal mean under the current assignment. The popularity of DBA can be attributed to the fact that it works well in practice, even though no theoretical guarantees are known. In our paper, we aim to initiate a theoretical study of the number of iterations that DBA performs until convergence. We assume the algorithm is given $n$ sequences of $m$ points in $\mathbb{R}^d$ and a parameter $k$ that specifies the length of the mean sequence to be computed. We show that, in contrast to its fast running time in practice, the number of iterations can be exponential in $k$ in the worst case, even if the number of input sequences is $n=2$. We complement these findings with experiments on real-world data that suggest this worst-case behaviour is likely degenerate. To better understand the performance of the algorithm on non-degenerate input, we study DBA in the model of smoothed analysis, upper-bounding the expected number of iterations in the worst case under random perturbations of the input. Our smoothed upper bound is polynomial in $k$, $n$ and $d$, and for constant $n$, it is also polynomial in $m$. For our analysis, we adapt the techniques developed for analysing $k$-means and observe that they are not sufficient to obtain tight bounds for general $n$.
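To make the two alternating steps concrete, the following is a minimal Python sketch of one DBA iteration (an illustration using quadratic-time DTW, not the authors' implementation): step (1) aligns every input sequence to the current mean by DTW, and step (2) replaces each mean point by the centroid of the input points aligned to it.

```python
import numpy as np

def dtw_alignment(mean, seq):
    """Return the optimal DTW warping path between mean (k x d) and seq (m x d)."""
    k, m = len(mean), len(seq)
    cost = np.full((k + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, k + 1):
        for j in range(1, m + 1):
            d = np.sum((mean[i - 1] - seq[j - 1]) ** 2)
            cost[i, j] = d + min(cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1])
    # Backtrack the optimal warping path from (k, m) to (1, 1).
    path, i, j = [], k, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = int(np.argmin([cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1]]))
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path

def dba_iteration(mean, sequences):
    """One assignment/averaging step of DBA; repeat until the mean stops changing."""
    assigned = [[] for _ in range(len(mean))]
    for seq in sequences:
        for i, j in dtw_alignment(mean, seq):   # step (1): optimal assignment
            assigned[i].append(seq[j])
    return np.array([np.mean(pts, axis=0) if pts else mean[i]   # step (2): averaging
                     for i, pts in enumerate(assigned)])
```

The number of such iterations until the mean stops changing is exactly the quantity whose worst-case and smoothed behaviour the paper analyses.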
We prove explicit uniform two-sided bounds for the phase functions of Bessel functions and of their derivatives. As a consequence, we obtain new enclosures for the zeros of Bessel functions and their derivatives in terms of inverse values of some elementary functions. These bounds are valid, with a few exceptions, for all zeros and all Bessel functions with non-negative indices. We provide numerical evidence showing that our bounds either improve or closely match the best previously known ones.
We propose a high order numerical scheme for time-dependent first order Hamilton--Jacobi--Bellman equations. In particular, we propose to combine a semi-Lagrangian scheme with a Central Weighted Essentially Non-Oscillatory (CWENO) reconstruction. We prove a convergence result in the case of state- and time-independent Hamiltonians. Numerical simulations are presented in space dimensions one and two, also for more general state- and time-dependent Hamiltonians, demonstrating a gain in CPU time compared with a semi-Lagrangian scheme coupled with Weighted Essentially Non-Oscillatory (WENO) reconstructions.
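For orientation, here is a hedged, first-order illustration of a single semi-Lagrangian step for the model equation $u_t + |u_x| = 0$, with piecewise-linear interpolation standing in for the CWENO reconstruction used in the paper; the grid, time step, and control discretisation are placeholders.

```python
import numpy as np

def semi_lagrangian_step(u, x, dt, controls=np.linspace(-1.0, 1.0, 11)):
    """One step of u_t + max_{|a|<=1} (a u_x) = 0 on a uniform grid x."""
    # For each control a, trace the characteristic back to x - a*dt and interpolate
    # the previous solution there (linear interpolation stands in for CWENO).
    candidates = [np.interp(x - a * dt, x, u) for a in controls]
    # The dynamic programming principle takes the minimum over the control set.
    return np.min(candidates, axis=0)

# Example: evolution of a bump under u_t + |u_x| = 0 on [-2, 2].
x = np.linspace(-2.0, 2.0, 401)
u = np.maximum(0.0, 1.0 - np.abs(x))
for _ in range(100):
    u = semi_lagrangian_step(u, x, dt=0.005)
```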
Motivated by the problem of matching two correlated random geometric graphs, we study the problem of matching two Gaussian geometric models correlated through a latent node permutation. Specifically, given an unknown permutation $\pi^*$ on $\{1,\ldots,n\}$ and given $n$ i.i.d. pairs of correlated Gaussian vectors $\{X_{\pi^*(i)},Y_i\}$ in $\mathbb{R}^d$ with noise parameter $\sigma$, we consider two types of (correlated) weighted complete graphs with edge weights given by $A_{i,j}=\langle X_i,X_j \rangle$, $B_{i,j}=\langle Y_i,Y_j \rangle$. The goal is to recover the hidden vertex correspondence $\pi^*$ based on the observed matrices $A$ and $B$. For the low-dimensional regime where $d=O(\log n)$, Wang, Wu, Xu, and Yolou [WWXY22+] established the information thresholds for exact and almost exact recovery in matching correlated Gaussian geometric models. They also conducted numerical experiments for the classical Umeyama algorithm. In our work, we prove that this algorithm achieves exact recovery of $\pi^*$ when the noise parameter $\sigma=o(d^{-3}n^{-2/d})$, and almost exact recovery when $\sigma=o(d^{-3}n^{-1/d})$. Our results approach the information thresholds up to a $\operatorname{poly}(d)$ factor in the low-dimensional regime.
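For illustration, the sketch below applies a simplified variant of the Umeyama spectral matching approach to the model above, restricted to the $d$ leading eigenvectors since $A$ and $B$ have rank at most $d$; the parameters and the additive-noise model are illustrative assumptions, and this is not the exact procedure or parameter regime analysed in the paper.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def umeyama_match(A, B, d):
    """Match vertices of A to vertices of B via their d leading eigenvectors."""
    _, UA = np.linalg.eigh(A)                        # eigenvalues in ascending order
    _, UB = np.linalg.eigh(B)
    S = np.abs(UA[:, -d:]) @ np.abs(UB[:, -d:]).T    # vertex-to-vertex similarity
    _, cols = linear_sum_assignment(-S)              # maximize total similarity
    return cols                                      # A-vertex i matched to B-vertex cols[i]

# Generate a correlated Gaussian geometric model and check recovery.
n, d, sigma = 200, 3, 1e-3
rng = np.random.default_rng(0)
X = rng.standard_normal((n, d))
perm = rng.permutation(n)                            # the latent permutation pi*
Y = X[perm] + sigma * rng.standard_normal((n, d))    # Y_i correlated with X_{pi*(i)}
A, B = X @ X.T, Y @ Y.T
match = umeyama_match(A, B, d)
print("fraction of correctly matched vertices:", np.mean(match == np.argsort(perm)))
```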
We present convergence estimates of two types of greedy algorithms in terms of the metric entropy of underlying compact sets. In the first part, we measure the error of a standard greedy reduced basis method for parametric PDEs by the metric entropy of the solution manifold in Banach spaces. This contrasts with the classical analysis based on the Kolmogorov $n$-widths and enables us to obtain direct comparisons between the greedy algorithm error and the entropy numbers, where the multiplicative constants are explicit and simple. The entropy-based convergence estimate is sharp and improves upon the classical width-based analysis of reduced basis methods for elliptic model problems. In the second part, we derive a novel and simple convergence analysis of the classical orthogonal greedy algorithm for nonlinear dictionary approximation using the metric entropy of the symmetric convex hull of the dictionary. This also improves upon existing results by giving a direct comparison between the algorithm error and the metric entropy.
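For concreteness, the following is a minimal sketch of the classical orthogonal greedy algorithm analysed in the second part, here for a finite dictionary of vectors in a standard orthogonal-matching-pursuit formulation; the finite-dimensional setting and normalisation are simplifying assumptions, not the paper's precise setting.

```python
import numpy as np

def orthogonal_greedy(f, dictionary, steps):
    """Greedy approximation of f from the rows of `dictionary` (assumed normalized)."""
    selected, residual = [], f.copy()
    for _ in range(steps):
        # Greedy selection: the element most correlated with the current residual.
        idx = int(np.argmax(np.abs(dictionary @ residual)))
        if idx not in selected:
            selected.append(idx)
        # Orthogonal projection of f onto the span of all selected elements.
        D = dictionary[selected].T                   # shape (n, |selected|)
        coeffs, *_ = np.linalg.lstsq(D, f, rcond=None)
        residual = f - D @ coeffs
    return selected, residual
```

The convergence analysis in the paper bounds the norm of this residual after a given number of steps in terms of the metric entropy of the symmetric convex hull of the dictionary.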
Three types of the Parikh mapping are introduced, namely alphabetic, alphabetic-basis and basis. Explicit expressions for attractors of the $k$-th order in bases $n \ge 8$, including countable ones, are found. Properties of the alphabetic, alphabetic-basis and basis Parikh vectors at each step of the Parikh mapping are given. The maximum number of iterations leading to attractors of the $k$-th order in base $n$ is determined.
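As background, the classical Parikh mapping sends a word over an ordered alphabet to the vector of occurrence counts of its letters; a minimal sketch is given below. The alphabetic, alphabetic-basis and basis variants studied in the paper refine this construction and are not reproduced here.

```python
from collections import Counter

def parikh_vector(word, alphabet):
    """Occurrence count of each alphabet symbol in `word`, in alphabet order."""
    counts = Counter(word)
    return [counts[a] for a in alphabet]

# Example: the Parikh vector of "abracadabra" over the alphabet (a, b, c, d, r).
print(parikh_vector("abracadabra", "abcdr"))   # [5, 2, 1, 1, 2]
```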
The expansion of a polytope is an important parameter for the analysis of random walks on its graph. A conjecture of Mihail and Vazirani states that all $0/1$-polytopes have expansion at least 1. We show that the generalization to half-integral polytopes does not hold by constructing $d$-dimensional half-integral polytopes whose expansion decreases exponentially fast with $d$. We also prove that the expansion of half-integral zonotopes is uniformly bounded away from $0$. As an intermediate result, we show that half-integral zonotopes are always graphical.
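As a brute-force illustration of the parameter in question, the edge expansion of a graph is the minimum, over vertex sets $S$ with $|S| \le |V|/2$, of the number of edges leaving $S$ divided by $|S|$; the exhaustive computation below is only feasible for tiny graphs and is unrelated to the constructions in the paper.

```python
from itertools import combinations, product

def edge_expansion(vertices, edges):
    """Minimum of |E(S, V \\ S)| / |S| over all S with 1 <= |S| <= |V|/2."""
    best = float("inf")
    for k in range(1, len(vertices) // 2 + 1):
        for S in combinations(vertices, k):
            S = set(S)
            cut = sum(1 for u, v in edges if (u in S) != (v in S))
            best = min(best, cut / len(S))
    return best

# Example: the graph of the 0/1-cube [0,1]^3, whose expansion is exactly 1,
# matching the conjectured lower bound for 0/1-polytopes.
verts = list(product((0, 1), repeat=3))
edges = [(u, v) for u in verts for v in verts
         if u < v and sum(a != b for a, b in zip(u, v)) == 1]
print(edge_expansion(verts, edges))
```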
By studying the existing higher order derivative formulas for rational B\'{e}zier curves, we find that they fail when the order of the derivative exceeds the degree of the curve. In this paper, we present a new derivative formula for rational B\'{e}zier curves that overcomes this drawback and show that the $k$th-order derivative of an $n$th-degree rational B\'{e}zier curve can be written in terms of a $(2^k n)$th-degree rational B\'{e}zier curve. We also consider the endpoint properties and the bounds of the derivatives.
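For reference, the sketch below evaluates an $n$th-degree rational B\'{e}zier curve from its control points and weights and approximates its $k$th derivative by repeated central differences; the paper's closed-form expression of the $k$th derivative via a $(2^k n)$th-degree rational B\'{e}zier curve is not reproduced here.

```python
import numpy as np
from math import comb

def rational_bezier(t, points, weights):
    """Evaluate sum_i w_i P_i B_{i,n}(t) / sum_i w_i B_{i,n}(t) at parameter t."""
    n = len(points) - 1
    basis = np.array([comb(n, i) * t**i * (1 - t)**(n - i) for i in range(n + 1)])
    wb = weights * basis
    return (wb @ points) / wb.sum()

def numerical_derivative(t, points, weights, k=1, h=1e-4):
    """k-fold central-difference approximation of the kth derivative at t."""
    f = lambda s: rational_bezier(s, points, weights)
    for _ in range(k):
        f = lambda s, g=f: (g(s + h) - g(s - h)) / (2 * h)
    return f(t)

# Example: a quadratic rational Bezier arc and its first two derivatives at t = 0.5.
P = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 0.0]])
w = np.array([1.0, 2.0, 1.0])
print(rational_bezier(0.5, P, w), numerical_derivative(0.5, P, w, k=1),
      numerical_derivative(0.5, P, w, k=2))
```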
Genome assembly is a prominent problem in bioinformatics, which reconstructs a source string from a set of its overlapping substrings. Classically, genome assembly builds assembly graphs over this set of substrings to compute the source string efficiently, trading off scalability against information loss. The scalable de Bruijn graphs come at the price of losing crucial overlap information, whereas the complete overlap information is stored in overlap graphs using quadratic space. Hierarchical overlap graphs (HOG) [IPL20] overcome these limitations, avoiding information loss while using only linear space. After a series of suboptimal improvements, Khan and Park et al. simultaneously presented two optimal algorithms [CPM2021], of which only the former appeared practical. We empirically analyze all the practical algorithms for computing HOG, where the optimal algorithm [CPM2021] outperforms the previous algorithms as expected, though at the expense of extra memory. However, it uses a non-intuitive approach and non-trivial data structures. We present arguably the most intuitive algorithm, which uses only elementary arrays and is also optimal. Our algorithm empirically performs even better in both time and memory than all the other algorithms, highlighting its significance in both theory and practice. We further explore the applications of hierarchical overlap graphs to solve various forms of suffix-prefix queries on a set of strings. Loukides et al. [CPM2023] recently presented state-of-the-art algorithms for these queries, but they require complex black-box data structures and seem impractical. Our algorithms, although they do not match the state-of-the-art algorithms theoretically, answer different queries in 0.01-100 milliseconds on a data set of around a billion characters.
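To illustrate the primitive that overlap graphs and HOGs encode, the brute-force check below computes the longest suffix of one string that is a prefix of another; it is quadratic per pair, serves only to clarify what the suffix-prefix queries ask for, and is not the array-based optimal construction described above.

```python
def longest_suffix_prefix(s, t):
    """Length of the longest suffix of s that is also a prefix of t (brute force)."""
    for length in range(min(len(s), len(t)), 0, -1):
        if s.endswith(t[:length]):
            return length
    return 0

# Example: pairwise overlaps among a small set of reads.
reads = ["AGGTC", "GTCAA", "CAAGG"]
for s in reads:
    for t in reads:
        if s != t:
            print(s, "->", t, longest_suffix_prefix(s, t))
```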
This paper is concerned with an inverse wave-number-dependent/frequency-dependent source problem for the Helmholtz equation. In $d$ dimensions ($d = 2, 3$), the unknown source term is supposed to be compactly supported in the spatial variables but independent of $x_d$. The dependence of the source function on the wave number $k$ is assumed to be unknown. Based on the Dirichlet-Laplacian method and the Fourier-transform method, we develop two efficient non-iterative numerical algorithms to recover the wave-number-dependent source. Uniqueness and increasing stability results are proved. Numerical experiments are conducted to illustrate the effectiveness and efficiency of the proposed methods.
The goal of explainable Artificial Intelligence (XAI) is to generate human-interpretable explanations, but there are no computationally precise theories of how humans interpret AI-generated explanations. The lack of theory means that validation of XAI must be done empirically, on a case-by-case basis, which prevents systematic theory-building in XAI. We propose a psychological theory of how humans draw conclusions from saliency maps, the most common form of XAI explanation, which for the first time allows precise prediction of explainee inference conditioned on explanation. Our theory posits that, absent an explanation, humans expect the AI to make decisions similar to their own, and that they interpret an explanation by comparing it to the explanations they themselves would give. Comparison is formalized via Shepard's universal law of generalization in a similarity space, a classic theory from cognitive science. A pre-registered user study on AI image classifications with saliency map explanations demonstrates that our theory quantitatively matches participants' predictions of the AI.
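The following toy sketch illustrates the comparison step just described: the explainee's predicted probability that the AI assigns a given label decays exponentially, per Shepard's law, with the distance in a similarity space between the AI's saliency map and the explanation the explainee would give for that label. The embeddings, labels and the scale parameter c are placeholders, not the representations or fits used in the paper.

```python
import numpy as np

def shepard_similarity(e_ai, e_human, c=1.0):
    """Generalization strength between two explanation embeddings (Shepard's law)."""
    return np.exp(-c * np.linalg.norm(np.asarray(e_ai) - np.asarray(e_human)))

def predict_label_probs(e_ai, human_explanations, c=1.0):
    """Predicted probability of each candidate label, given the AI's explanation.

    human_explanations maps each candidate label to the embedding of the explanation
    the explainee would themselves give for that label.
    """
    sims = {lbl: shepard_similarity(e_ai, e, c) for lbl, e in human_explanations.items()}
    total = sum(sims.values())
    return {lbl: s / total for lbl, s in sims.items()}

# Example with made-up two-dimensional embeddings of saliency maps.
ai_explanation = [0.9, 0.2]
candidates = {"cat": [1.0, 0.1], "dog": [0.2, 0.8]}
print(predict_label_probs(ai_explanation, candidates))
```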