We derive an explicit formula, valid for all integers $r,d\ge 0$, for the dimension of the vector space $C^r_d(\Delta)$ of piecewise polynomial functions that are continuously differentiable to order $r$ and whose constituent polynomials have degree at most $d$, where $\Delta$ is a planar triangulation with a single totally interior edge. This extends previous results of Toh\v{a}neanu, Min\'{a}\v{c}, and Sorokina. Our result is a natural successor to Schumaker's 1979 dimension formula for splines on a planar vertex star. Indeed, no dimension formula at this level of generality (valid for all integers $d,r\ge 0$ and arbitrary vertex coordinates) has appeared since Schumaker's result. We derive our results using techniques from commutative algebra.
The complex Helmholtz equation $(\Delta + k^2)u=f$ (where $k\in{\mathbb R}$ and $u(\cdot),f(\cdot)\in{\mathbb C}$) is a mainstay of computational wave simulation. Despite its apparent simplicity, efficient numerical methods are challenging to design and, in some applications, are regarded as an open problem. Two sources of difficulty are the large number of degrees of freedom and the indefiniteness of the matrices arising after discretisation. Seeking to address both within the novel framework of probabilistic domain decomposition, we set out to recast the Helmholtz equation in a form amenable to the Feynman-Kac formula for elliptic boundary value problems. We consider two typical scenarios, the scattering of a plane wave and propagation inside a cavity, and recast them as a sequence of Poisson equations. By means of stochastic arguments, we find a sufficient and simulatable condition for the convergence of the iterations. Upon discretisation, a necessary condition for convergence can be derived by summing the iterates via the Neumann series for the matrix inverse; we illustrate the procedure in the case of finite differences. From a practical point of view, our results are ultimately of limited scope. Nonetheless, the unexpected, even paradoxical, direction of attack on the Helmholtz equation proposed by this work offers a fresh perspective on this classical and difficult problem. Our results show that there indeed exists a predictable range $k<k_{\max}$ in which this new ansatz works, with $k_{\max}$ lying far below the challenging high-frequency regime.
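As an illustration of the recasting step, the following minimal sketch (our own toy example, not taken from the paper) solves a one-dimensional Dirichlet model problem $u''+k^2u=f$ on $(0,1)$ by iterating Poisson solves $u_{n+1}=L^{-1}(f-k^2u_n)$ with a finite-difference Laplacian $L$; a Neumann-series argument suggests convergence roughly for $k<\pi$ in this model, consistent with the existence of a modest $k_{\max}$. The grid size, wavenumber, and source term are assumptions chosen for the demo.

```python
import numpy as np

# Toy sketch (not the paper's algorithm): recast the 1D Helmholtz problem
# u'' + k^2 u = f on (0,1) with homogeneous Dirichlet data as the fixed-point
# iteration u_{n+1} = L^{-1}(f - k^2 u_n), i.e. a sequence of Poisson solves,
# where L is the finite-difference Laplacian.  A Neumann-series argument
# suggests convergence when k^2 ||L^{-1}|| < 1, here roughly k < pi.
# All parameter choices below are assumptions for the demo.

n, k = 200, 2.0                      # grid size and wavenumber (k < pi)
h = 1.0 / (n + 1)
x = np.linspace(h, 1 - h, n)
f = np.sin(3 * np.pi * x)            # hypothetical source term

# discrete 1D Laplacian with Dirichlet boundary conditions
L = (np.diag(-2 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1)) / h**2

u = np.zeros(n)
for it in range(200):                # one Poisson solve per iteration
    u_new = np.linalg.solve(L, f - k**2 * u)
    if np.linalg.norm(u_new - u) < 1e-12 * np.linalg.norm(u_new):
        break
    u = u_new

# compare with the direct Helmholtz solve (L + k^2 I) u = f
u_direct = np.linalg.solve(L + k**2 * np.eye(n), f)
print(it, np.linalg.norm(u - u_direct) / np.linalg.norm(u_direct))
```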
We study empirical variants of the halfspace (Tukey) depth of a probability measure $\mu$, which are obtained by replacing $\mu$ with the corresponding weighted empirical measure. We prove analogues of the Marcinkiewicz--Zygmund strong law of large numbers and of the law of the iterated logarithm in terms of set inclusions and for the Hausdorff distance between the theoretical and empirical variants of depth trimmed regions. In the special case of $\mu$ being the uniform distribution on a convex body $K$, the depth trimmed regions are convex floating bodies of $K$, and we obtain strong limit theorems for their empirical estimators.
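To make the objects concrete, here is a small numerical sketch (an illustration under our own assumptions, not part of the paper): the empirical halfspace depth of a point is approximated by minimizing, over randomly sampled directions, the fraction of sample points in the corresponding closed halfspace; thresholding this depth yields empirical depth trimmed regions, which for a uniform sample on a convex body estimate its floating bodies.

```python
import numpy as np

# Illustrative sketch (not from the paper): approximate the empirical
# halfspace (Tukey) depth of a point x with respect to a sample from mu by
# minimizing, over randomly sampled directions u, the fraction of sample
# points lying in the halfspace {y : <u, y - x> >= 0}.  The sample size and
# the number of directions are arbitrary choices for the demo.

rng = np.random.default_rng(0)
sample = rng.uniform(-1, 1, size=(5000, 2))      # uniform on a square (a convex body)

def empirical_halfspace_depth(x, pts, n_dirs=2000):
    angles = rng.uniform(0, 2 * np.pi, n_dirs)
    dirs = np.column_stack([np.cos(angles), np.sin(angles)])
    # for each direction, fraction of points in the closed halfspace through x
    fractions = ((pts - x) @ dirs.T >= 0).mean(axis=0)
    return fractions.min()

print(empirical_halfspace_depth(np.array([0.0, 0.0]), sample))   # close to 1/2 at the centre
print(empirical_halfspace_depth(np.array([0.9, 0.9]), sample))   # small depth near a corner
```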
We give a necessary and sufficient condition for an infinite collection of axis-parallel boxes in $\mathbb{R}^{d}$ to be pierceable by finitely many axis-parallel $k$-flats, where $0 \leq k < d$. We also consider colorful generalizations of this result and establish their feasibility. The problem considered in this paper is an infinite variant of the Hadwiger-Debrunner $(p,q)$-problem.
In this paper, we describe an algorithm for approximating functions of the form $f(x)=\int_{a}^{b} x^{\mu} \sigma(\mu) \, d \mu$ over $[0,1] \subset \mathbb{R}$, where $0<a<b<\infty$ and $\sigma(\mu)$ is a signed Radon measure over $[a,b]$ or a distribution supported on $[a,b]$. Given the desired accuracy $\epsilon$ and the values of $a$ and $b$, our method determines a priori a collection of non-integer powers $\{t_j\}_{j=1}^N$, so that $f$ is approximated by a series of the form $f(x)\approx \sum_{j=1}^N c_j x^{t_j}$, where the expansion coefficients are found by solving a square, low-dimensional Vandermonde-like linear system at collocation points $\{x_j\}_{j=1}^N$, also determined a priori by $\epsilon$ and the values of $a$ and $b$. We prove that our method has a uniform approximation error proportional to $\epsilon$ up to small constant factors. We demonstrate the performance of our algorithm with several numerical experiments, and show that the number of singular powers and collocation points grows as $N=O(\log{\frac{1}{\epsilon}})$.
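The structure of the method can be sketched as follows. This is a hedged stand-in: the paper's a priori choice of powers and collocation points from $\epsilon$, $a$, and $b$ is its main contribution and is not reproduced here; we simply place both at Chebyshev nodes for illustration and take $\sigma$ to be Lebesgue measure so that $f$ has a closed form.

```python
import numpy as np

# Hedged sketch (not the paper's algorithm): approximate
# f(x) = int_a^b x^mu d mu = (x^b - x^a)/log x  by sum_j c_j x^{t_j},
# with stand-in exponents t_j and collocation points x_j at Chebyshev nodes,
# solving the Vandermonde-like system V c = f(x_i), V_ij = x_i^{t_j}.
# Here a = 1, b = 2; all node choices are assumptions for the demo.

a, b, N = 1.0, 2.0, 14

def f(x):                       # closed form of the integral for this sigma
    return (x**2 - x) / np.log(x)

cheb = np.cos((2 * np.arange(N) + 1) * np.pi / (2 * N))   # Chebyshev nodes in (-1, 1)
t = a + (b - a) * (cheb + 1) / 2                           # stand-in exponents in [a, b]
xs = 0.01 + (1.0 - 0.01) * (cheb + 1) / 2                  # stand-in collocation points

V = xs[:, None] ** t[None, :]                              # Vandermonde-like matrix
c, *_ = np.linalg.lstsq(V, f(xs), rcond=None)

grid = np.linspace(0.01, 0.999, 2001)
approx = (grid[:, None] ** t[None, :]) @ c
print("max error on the test grid:", np.abs(approx - f(grid)).max())
```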
In this study, a novel preconditioner based on an absolute-value block $\alpha$-circulant matrix approximation is developed, specifically designed for the nonsymmetric dense block lower triangular Toeplitz (BLTT) systems that emerge from the numerical discretization of evolutionary equations. Our preconditioner is constructed by taking the absolute value of a block $\alpha$-circulant approximation of the BLTT matrix. To apply the preconditioner, the original BLTT linear system is first converted into a symmetric form by a time-reversing permutation transformation; the symmetrized system is then solved with the preconditioned minimal residual (MINRES) method. With a properly chosen $\alpha$, the eigenvalues of the preconditioned matrix are proven to be clustered around $\pm1$ without any significant outliers, and with this clustered spectrum we show that the preconditioned MINRES solver has a convergence rate independent of the system size. To the best of our knowledge, this is the first preconditioned MINRES method with a size-independent convergence rate for dense BLTT systems. The efficacy of the proposed preconditioner is corroborated by our numerical experiments, which reveal that it attains optimal convergence.
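A simplified scalar (one-by-one block) sketch of the construction, under our own choices of Toeplitz symbol and $\alpha$ and using dense linear algebra in place of an FFT-based implementation, might look as follows.

```python
import numpy as np
from scipy.sparse.linalg import minres, LinearOperator

# Toy scalar sketch, not the paper's implementation.  T is a lower triangular
# Toeplitz matrix; C is the alpha-circulant matrix sharing its first column;
# the preconditioner is |C| = (C^T C)^{1/2}; the system is symmetrized by the
# time-reversing (anti-identity) permutation J, so that J T is symmetric
# (Hankel), and solved with preconditioned MINRES.  The Toeplitz symbol,
# alpha and the right-hand side are assumptions made for the demo.

n, alpha = 256, 0.05
tcol = np.zeros(n); tcol[0], tcol[1] = 1.0, -0.98          # toy time-stepping stencil
i, j = np.indices((n, n))
T = np.where(i >= j, tcol[i - j], 0.0)                     # lower triangular Toeplitz
C = np.where(i >= j, tcol[i - j], alpha * tcol[i - j])     # negative indices wrap to tcol[n+i-j]

# matrix absolute value |C| = (C^T C)^{1/2}, via the SVD C = U S V^T
U, s, Vt = np.linalg.svd(C)
absC = Vt.T @ np.diag(s) @ Vt

J = np.eye(n)[::-1]                                        # time-reversing permutation
A_sym = J @ T                                              # symmetric (Hankel) matrix
b = np.ones(n)

absC_inv = np.linalg.inv(absC)                             # dense inverse only for this toy demo
M = LinearOperator((n, n), matvec=lambda v: absC_inv @ v)  # preconditioner ~ |C|^{-1}

iters = []
x, info = minres(A_sym, J @ b, M=M, callback=lambda xk: iters.append(1))
print("MINRES iterations:", len(iters), " residual:", np.linalg.norm(T @ x - b))
```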
In this paper, we study the smallest non-zero eigenvalue of the sample covariance matrix $\mathcal{S}(Y)=YY^*$, where $Y=(y_{ij})$ is an $M\times N$ matrix with iid entries of mean $0$ and variance $N^{-1}$. We prove a phase transition for its distribution, induced by the heaviness of the tail of the $y_{ij}$'s. More specifically, we assume that $y_{ij}$ is symmetrically distributed with tail probability $\mathbb{P}(|\sqrt{N}y_{ij}|\geq x)\sim x^{-\alpha}$ as $x\to \infty$, for some $\alpha\in (2,4)$. We establish the following: (i) when $\alpha>\frac83$, the smallest eigenvalue follows the Tracy-Widom law on scale $N^{-\frac23}$; (ii) when $2<\alpha<\frac83$, the smallest eigenvalue follows the Gaussian law on scale $N^{-\frac{\alpha}{4}}$; (iii) when $\alpha=\frac83$, the distribution is given by an interpolation between the Tracy-Widom and Gaussian laws; (iv) when $\alpha\leq \frac{10}{3}$, a deterministic shift of order $N^{1-\frac{\alpha}{2}}$, in addition to the left edge of the Marchenko-Pastur law, must be subtracted from the smallest eigenvalue, in both the Tracy-Widom and the Gaussian regimes. Our proof strategy is inspired by \cite{ALY}, which originally treated the bulk regime of L\'{e}vy Wigner matrices. In addition to various technical complications arising from the bulk-to-edge extension, two ingredients are needed for our derivation: an intermediate left-edge local law based on a simple but effective matrix minor argument, and a mesoscopic CLT for linear spectral statistics with an asymptotic expansion of its expectation.
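The setting can be simulated directly; the following Monte Carlo sketch (an illustration only, with arbitrary matrix sizes and tail index, not part of the proof) generates matrices with the prescribed tail behaviour and compares the smallest eigenvalue of $YY^*$ with the left edge of the Marchenko-Pastur law.

```python
import numpy as np

# Monte Carlo sketch of the setting: symmetric entries with tail
# P(|sqrt(N) y_ij| >= x) ~ x^{-alpha}, normalized to variance 1/N, and the
# smallest eigenvalue of S = Y Y^T compared with the Marchenko-Pastur left
# edge (1 - sqrt(M/N))^2.  Sizes and repetitions are arbitrary demo choices.

rng = np.random.default_rng(1)
alpha, M, N, reps = 3.0, 200, 400, 20

def heavy_tailed_matrix():
    u = rng.uniform(size=(M, N))
    x = rng.choice([-1.0, 1.0], size=(M, N)) * u ** (-1.0 / alpha)  # P(|X| >= t) = t^{-alpha}, t >= 1
    return x / np.sqrt(N * alpha / (alpha - 2.0))                   # entry variance 1/N

smallest = []
for _ in range(reps):
    Y = heavy_tailed_matrix()
    smallest.append(np.linalg.eigvalsh(Y @ Y.T).min())

print("mean smallest eigenvalue:", np.mean(smallest))
print("MP left edge (1 - sqrt(M/N))^2:", (1 - np.sqrt(M / N)) ** 2)
```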
We consider the problem of approximating the solution to $A(\mu) x(\mu) = b$ for many different values of the parameter $\mu$. Here $A(\mu)$ is assumed to be large, sparse, and nonsingular, with a nonlinear dependence on $\mu$. Our method is based on a companion linearization derived from an accurate Chebyshev interpolation of $A(\mu)$ on the interval $[-a,a]$, $a \in \mathbb{R}$. The solution to the linearization is approximated in a preconditioned BiCG setting for shifted systems, where the Krylov basis matrix is formed only once. This leads to a short-term recurrence method in which one execution of the algorithm produces approximations to $x(\mu)$ for many values of the parameter $\mu \in [-a,a]$ simultaneously. In particular, we propose one algorithm that applies a shift-and-invert preconditioner exactly and one that applies the preconditioner inexactly. The competitiveness of the algorithms is illustrated with large-scale problems arising from a finite element discretization of a Helmholtz equation with a parameterized material coefficient. The software used in the simulations is publicly available online, so all our experiments are reproducible.
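The interpolation ingredient can be illustrated as follows; this hedged sketch samples a hypothetical nonlinear $A(\mu)$ at Chebyshev points and solves the interpolated systems for several $\mu$, while the companion linearization and the shifted BiCG recurrences of the paper are not reproduced.

```python
import numpy as np

# Hedged sketch of the interpolation step only.  A(mu) = K + exp(mu) * M is a
# hypothetical nonlinear parameter dependence; we sample A at Chebyshev points
# of the second kind on [-a, a], interpolate barycentrically in mu, and solve
# the interpolated system for several values of mu.  All matrices, the degree
# and the right-hand side are assumptions made for the demo.

a, n, deg = 1.0, 100, 12
rng = np.random.default_rng(2)
K = np.diag(2.0 * np.ones(n)) + np.diag(-np.ones(n - 1), 1) + np.diag(-np.ones(n - 1), -1)
M = np.diag(rng.uniform(0.5, 1.5, n))
b = np.ones(n)
A = lambda mu: K + np.exp(mu) * M

theta = np.pi * np.arange(deg + 1) / deg
nodes = a * np.cos(theta)                                     # Chebyshev points on [-a, a]
w = (-1.0) ** np.arange(deg + 1); w[0] *= 0.5; w[-1] *= 0.5   # barycentric weights
samples = np.array([A(t) for t in nodes])                     # shape (deg+1, n, n)

def A_interp(mu):
    d = mu - nodes
    if np.any(d == 0):
        return samples[np.argmin(np.abs(d))]
    coef = w / d
    coef /= coef.sum()
    return np.tensordot(coef, samples, axes=1)

for mu in np.linspace(-a, a, 5):
    x_int = np.linalg.solve(A_interp(mu), b)
    x_exact = np.linalg.solve(A(mu), b)
    err = np.linalg.norm(x_int - x_exact) / np.linalg.norm(x_exact)
    print(f"mu = {mu:+.2f}, relative error = {err:.2e}")
```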
The best approximation of a matrix by $r$ of its columns has, in the Frobenius norm, an error at most $\sqrt{r+1}$ times that of the rank-$r$ truncated singular value decomposition. Reaching this bound in practice involves either expensive random volume sampling or at least $r$ runs of the singular value decomposition. In this paper we show that the same column approximation bound can be reached with only a single SVD (which can also be replaced by an approximate SVD). As a corollary, we show how to find, in just $O(Nr^2)$ operations, a highly nondegenerate $r\times r$ submatrix of a matrix with $r$ rows and $N$ columns that largely shares the properties of the maximum-volume submatrix.
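For illustration, the following sketch implements a standard single-SVD column selection strategy of this flavour (column-pivoted QR on the leading right singular vectors); this is a related textbook approach, not necessarily the exact procedure of the paper, and it compares the resulting error with the truncated SVD and the $\sqrt{r+1}$ factor.

```python
import numpy as np
from scipy.linalg import qr

# Illustrative sketch of a related single-SVD column selection strategy (not
# necessarily the paper's procedure): take the top-r right singular vectors of
# A and run a column-pivoted QR on their transpose; the first r pivots index
# the selected columns.  Matrix sizes and the test matrix are demo assumptions.

rng = np.random.default_rng(3)
m, n, r = 300, 400, 15
A = rng.standard_normal((m, r)) @ rng.standard_normal((r, n)) + 1e-3 * rng.standard_normal((m, n))

U, s, Vt = np.linalg.svd(A, full_matrices=False)               # a single SVD
_, _, piv = qr(Vt[:r, :], pivoting=True, mode='economic')      # column-pivoted QR on V_r^T
cols = piv[:r]

C = A[:, cols]                                                 # selected columns
A_cols = C @ np.linalg.lstsq(C, A, rcond=None)[0]              # best approximation in span(C)
err_cols = np.linalg.norm(A - A_cols, 'fro')
err_svd = np.linalg.norm(A - U[:, :r] * s[:r] @ Vt[:r, :], 'fro')
print("error ratio:", err_cols / err_svd, " bound sqrt(r+1):", np.sqrt(r + 1))
```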
We show that the VC-dimension of a graph can be computed in time $n^{\log d+1} d^{O(d)}$, where $d$ is the degeneracy of the input graph. The core idea of our algorithm is a data structure that efficiently queries the number of vertices that see a specific subset of vertices inside a (small) query set. The construction of this data structure takes time $O(d2^dn)$; afterwards, queries can be answered efficiently using fast M\"obius inversion. This data structure turns out to be useful for a range of tasks, especially for finding bipartite patterns in degenerate graphs, and we outline an efficient algorithm for counting the number of times specific patterns occur in a graph. The largest factor in the running time of this algorithm is $O(n^c)$, where $c$ is a parameter of the pattern that we call its left covering number. Concrete applications include counting the number of (non-induced) bicliques in linear time, counting co-matchings in quadratic time, and computing a constant-factor approximation of the ladder index in linear time. Finally, we supplement our theoretical results with several implementations and run experiments on more than 200 real-world datasets (the largest of which has 8 million edges), where we obtain interesting insights into the VC-dimension of real-world networks.
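The subset-counting idea behind such a data structure can be sketched with a standard fast zeta/Möbius transform over subsets of a small query set; this is our own minimal rendition, not necessarily the paper's exact data structure.

```python
# Hedged sketch of the subset-counting idea (a standard fast zeta transform
# over subsets, not necessarily the paper's exact data structure): for a small
# query set Q, tabulate f[S] = number of vertices v with N(v) & Q = S, then a
# fast zeta transform yields g[S] = number of vertices with S subset of
# N(v) & Q, i.e. the vertices that "see" all of S, in O(|Q| 2^|Q|) time.

def superset_counts(adj, Q):
    """adj: dict vertex -> set of neighbours; Q: list of query vertices."""
    q = len(Q)
    index = {v: i for i, v in enumerate(Q)}
    f = [0] * (1 << q)
    for v, nbrs in adj.items():
        mask = 0
        for u in nbrs:
            if u in index:
                mask |= 1 << index[u]
        f[mask] += 1
    g = f[:]                                   # fast zeta transform (superset sums)
    for i in range(q):
        for S in range(1 << q):
            if not S & (1 << i):
                g[S] += g[S | (1 << i)]
    return g                                   # g[S] = #{v : S subset of N(v) & Q}

# tiny example: a path 0-1-2-3 and query set Q = [1, 2]
adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
print(superset_counts(adj, [1, 2]))            # [4, 2, 2, 0]: 2 vertices see {1}, 2 see {2}, none see both
```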
We give an algorithm that takes as input an $n$-vertex graph $G$ and an integer $k$, runs in time $2^{O(k^2)} n^{O(1)}$, and outputs a tree decomposition of $G$ of width at most $k$, if such a decomposition exists. This resolves the long-standing open problem of whether there is a $2^{o(k^3)} n^{O(1)}$ time algorithm for treewidth. In particular, our algorithm is the first improvement on the dependency on $k$ in algorithms for treewidth since the $2^{O(k^3)} n^{O(1)}$ time algorithm given by Bodlaender and Kloks [ICALP 1991] and Lagergren and Arnborg [ICALP 1991]. We also give an algorithm that, given an $n$-vertex graph $G$, an integer $k$, and a rational $\varepsilon \in (0,1)$, in time $k^{O(k/\varepsilon)} n^{O(1)}$ either outputs a tree decomposition of $G$ of width at most $(1+\varepsilon)k$ or determines that the treewidth of $G$ is larger than $k$. Prior to our work, no approximation algorithms for treewidth with approximation ratio less than $2$ were known, other than the exact algorithms. Both of our algorithms work in polynomial space.