We study sublinear time algorithms for estimating the size of maximum matching. After a long line of research, the problem was finally settled by Behnezhad [FOCS'22], in the regime where one is willing to pay an approximation factor of $2$. Very recently, Behnezhad et al. [SODA'23] improved the approximation factor to $(2-\frac{1}{2^{O(1/\gamma)}})$ using $n^{1+\gamma}$ time. This improvement over the factor $2$ is, however, minuscule, and they asked whether even a $1.99$-approximation is possible in $n^{2-\Omega(1)}$ time. We give a strong affirmative answer to this open problem by showing $(1.5+\epsilon)$-approximation algorithms that run in $n^{2-\Theta(\epsilon^{2})}$ time. Our approach is conceptually simple and diverges from all previous sublinear-time matching algorithms: we show a sublinear time algorithm for computing a variant of the edge-degree constrained subgraph (EDCS), a concept that has previously been exploited in dynamic [Bernstein, Stein ICALP'15, SODA'16], distributed [Assadi et al. SODA'19] and streaming [Bernstein ICALP'20] settings, but never before in the sublinear setting. Independent work: Behnezhad, Roghani and Rubinstein [BRR'23] independently showed sublinear algorithms similar to our Theorem 1.2 in both the adjacency-list and adjacency-matrix models. Furthermore, [BRR'23] shows additional results on strictly better-than-$1.5$-approximate matching algorithms, on both the upper and lower bound sides.
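The EDCS is only named above; as a hedged illustration of the object involved (not of the paper's sublinear-time construction), the following minimal Python sketch checks the two standard EDCS conditions for a candidate subgraph, assuming the usual parameters $\beta$ and $\beta^{-}$.

```python
from collections import Counter

def is_edcs(G_edges, H_edges, beta, beta_minus):
    """Check the two standard EDCS conditions for a subgraph H of G.

    G_edges, H_edges: iterables of undirected edges (u, v).
    H is a (beta, beta_minus)-EDCS of G if
      (i)  every edge (u, v) in H satisfies deg_H(u) + deg_H(v) <= beta, and
      (ii) every edge (u, v) of G outside H satisfies deg_H(u) + deg_H(v) >= beta_minus.
    """
    H = {frozenset(e) for e in H_edges}
    deg = Counter()
    for e in H:
        u, v = tuple(e)
        deg[u] += 1
        deg[v] += 1
    for u, v in G_edges:
        d = deg[u] + deg[v]
        if frozenset((u, v)) in H:
            if d > beta:
                return False
        elif d < beta_minus:
            return False
    return True

# Tiny example: a path a-b-c-d; keeping the two outer edges gives a (2, 1)-EDCS.
G = [("a", "b"), ("b", "c"), ("c", "d")]
H = [("a", "b"), ("c", "d")]
print(is_edcs(G, H, beta=2, beta_minus=1))  # True
```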
We propose a class of randomized quantum algorithms for the task of sampling from matrix functions, without the use of quantum block encodings or any other coherent oracle access to the matrix elements. As such, our use of qubits is purely algorithmic, and no additional qubits are required for quantum data structures. For $N\times N$ Hermitian matrices, the space cost is $\log(N)+1$ qubits, and, depending on the structure of the matrices, the gate complexity can be comparable to that of state-of-the-art methods that use quantum data structures of up to size $O(N^2)$, when considering equivalent end-to-end problems. Within our framework, we present a quantum linear system solver that allows one to sample properties of the solution vector, as well as an algorithm for sampling properties of ground states of Hamiltonians. As a concrete application, we combine these two subroutines to present a scheme for calculating Green's functions of quantum many-body systems.
We consider many-to-one matching problems, where one side consists of students and the other side of schools with capacity constraints. We study how to optimally increase the capacities of the schools so as to obtain a stable and perfect matching (i.e., every student is matched) or a matching that is stable and Pareto-efficient for the students. We consider two common optimality criteria, one aiming to minimize the sum of capacity increases of all schools (MinSum) and the other aiming to minimize the maximum capacity increase of any school (MinMax). We obtain a complete picture in terms of computational complexity: except for stable and perfect matchings under the MinMax criterion, which is polynomial-time solvable, the three remaining problems are NP-hard. We further investigate the parameterized complexity and approximability and find that achieving stable and Pareto-efficient matchings via minimal capacity increases is much harder than achieving stable and perfect matchings.
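To make the MinSum objective concrete, here is a small brute-force reference sketch in Python (not the paper's algorithms; the problems are NP-hard in general). It uses student-proposing deferred acceptance and relies on the Rural Hospitals theorem, by which every stable matching matches the same set of students, so checking the deferred-acceptance outcome suffices to decide whether a stable and perfect matching exists for given capacities.

```python
from itertools import product

def deferred_acceptance(student_prefs, school_prefs, caps):
    """Student-proposing deferred acceptance with school capacities."""
    rank = {s: {st: i for i, st in enumerate(p)} for s, p in school_prefs.items()}
    nxt = {st: 0 for st in student_prefs}          # next school each student proposes to
    held = {s: [] for s in school_prefs}           # tentatively accepted students
    free = list(student_prefs)
    while free:
        st = free.pop()
        if nxt[st] >= len(student_prefs[st]):
            continue                               # student exhausted their list, stays unmatched
        s = student_prefs[st][nxt[st]]
        nxt[st] += 1
        held[s].append(st)
        held[s].sort(key=lambda x: rank[s][x])
        if len(held[s]) > caps[s]:
            free.append(held[s].pop())             # reject the least preferred student
    return {st: s for s, lst in held.items() for st in lst}

def minsum_capacity_increase(student_prefs, school_prefs, caps, max_extra=2):
    """Brute-force the MinSum objective on a tiny instance: the smallest total
    number of extra seats for which a stable and perfect matching exists."""
    schools = sorted(caps)
    best = None
    for extra in product(range(max_extra + 1), repeat=len(schools)):
        new_caps = {s: caps[s] + e for s, e in zip(schools, extra)}
        m = deferred_acceptance(student_prefs, school_prefs, new_caps)
        if len(m) == len(student_prefs) and (best is None or sum(extra) < best[0]):
            best = (sum(extra), new_caps)
    return best

# Three students; s1 and s2 rank A over B, s3 only accepts A; each school has one seat.
students = {"s1": ["A", "B"], "s2": ["A", "B"], "s3": ["A"]}
schools = {"A": ["s1", "s2", "s3"], "B": ["s1", "s2", "s3"]}
print(minsum_capacity_increase(students, schools, {"A": 1, "B": 1}))
# (2, {'A': 3, 'B': 1}): two extra seats at school A are needed.
```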
In this note, we prove that the following function space with absolutely convergent Fourier series \[ F_d:=\left\{ f\in L^2([0,1)^d)\:\middle| \: \|f\|:=\sum_{\boldsymbol{k}\in \mathbb{Z}^d}|\hat{f}(\boldsymbol{k})| \max\left(1,\min_{j\in \mathrm{supp}(\boldsymbol{k})}\log |k_j|\right) <\infty \right\}\] with $\hat{f}(\boldsymbol{k})$ being the $\boldsymbol{k}$-th Fourier coefficient of $f$ and $\mathrm{supp}(\boldsymbol{k}):=\{j\in \{1,\ldots,d\}\mid k_j\neq 0\}$ is polynomially tractable for multivariate integration in the worst-case setting. Here polynomial tractability means that the minimum number of function evaluations required to make the worst-case error less than or equal to a tolerance $\varepsilon$ grows only polynomially with respect to $\varepsilon^{-1}$ and $d$. It is important to remark that the function space $F_d$ is unweighted, that is, all variables contribute equally to the norm of functions. Our tractability result is in contrast to those for most of the unweighted integration problems studied in the literature, in which polynomial tractability does not hold and the problem suffers from the curse of dimensionality. Our proof is constructive in the sense that we provide an explicit quasi-Monte Carlo rule that attains a desired worst-case error bound.
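For readers unfamiliar with quasi-Monte Carlo rules, the following generic Python sketch of a two-dimensional rank-1 (Fibonacci) lattice rule illustrates the kind of equal-weight cubature involved; it is not the specific rule constructed in the note.

```python
import numpy as np

def fibonacci_lattice_rule(f, m):
    """Estimate the integral of f over [0,1)^2 with an N-point rank-1 lattice rule,
    where N = F_m (a Fibonacci number) and the generating vector is (1, F_{m-1}).
    A generic QMC illustration, not the specific rule from the note."""
    fib = [1, 1]
    while len(fib) <= m:
        fib.append(fib[-1] + fib[-2])
    N, g = fib[m], np.array([1, fib[m - 1]])
    pts = (np.outer(np.arange(N), g) / N) % 1.0      # lattice points in [0,1)^2
    return np.mean(f(pts[:, 0], pts[:, 1]))

# A smooth periodic integrand with absolutely convergent Fourier series; exact integral is 1.
f = lambda x, y: (1 + np.cos(2 * np.pi * x)) * (1 + np.sin(2 * np.pi * y))
print(fibonacci_lattice_rule(f, m=20))  # close to 1
```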
We revisit the Heaviest Induced Ancestors (HIA) problem that was introduced by Gagie, Gawrychowski, and Nekrich [CCCG 2013] and has a number of applications in string algorithms. Let $T_1$ and $T_2$ be two rooted trees whose nodes have weights that are increasing in all root-to-leaf paths, and labels on the leaves, such that no two leaves of a tree have the same label. A pair of nodes $(u, v)\in T_1 \times T_2$ is \emph{induced} if and only if there is a label shared by leaf-descendants of $u$ and $v$. In an HIA query, given nodes $x \in T_1$ and $y \in T_2$, the goal is to find an induced pair of nodes $(u, v)$ of the maximum total weight such that $u$ is an ancestor of~$x$ and $v$ is an ancestor of $y$. Let $n$ be the upper bound on the sizes of the two trees. It is known that no data structure of size $\tilde{\mathcal{O}}(n)$ can answer HIA queries in $o(\log n / \log \log n)$ time [Charalampopoulos, Gawrychowski, Pokorski; ICALP 2020]. This (unconditional) lower bound is a $\operatorname{polyloglog} n$ factor away from the query time of the fastest $\tilde{\mathcal{O}}(n)$-size data structure known to date for the HIA problem [Abedin, Hooshmand, Ganguly, Thankachan; Algorithmica 2022]. In this work, we resolve the query-time complexity of the HIA problem for the near-linear space regime by presenting a data structure that can be built in $\tilde{\mathcal{O}}(n)$ time and answers HIA queries in $\mathcal{O}(\log n/\log\log n)$ time. As a direct corollary, we obtain an $\tilde{\mathcal{O}}(n)$-size data structure that maintains the LCS of a static string and a dynamic string, both of length at most $n$, in time optimal for this space regime. The main ingredients of our approach are fractional cascading and the utilization of an $\mathcal{O}(\log n/ \log\log n)$-depth tree decomposition.
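As a hedged illustration of the definitions (a quadratic-time baseline, not the data structure of this work), the following Python sketch answers an HIA query by brute force over all ancestor pairs, treating each node as an ancestor of itself.

```python
class Tree:
    """Rooted tree: parent[v] is v's parent (None for the root), weight[v] is v's
    weight, and label[v] is the label of leaf v (None for internal nodes)."""
    def __init__(self, parent, weight, label):
        self.parent, self.weight, self.label = parent, weight, label
        self.labels = {v: set() for v in parent}          # labels of leaf-descendants
        for v, lab in label.items():
            if lab is not None:
                u = v
                while u is not None:
                    self.labels[u].add(lab)
                    u = parent[u]

    def ancestors(self, v):                               # v and all of its ancestors
        while v is not None:
            yield v
            v = self.parent[v]

def hia_naive(T1, T2, x, y):
    """Brute-force HIA query: heaviest induced pair (u, v) with u an ancestor of x
    and v an ancestor of y. A quadratic baseline, not the paper's data structure."""
    best = None
    for u in T1.ancestors(x):
        for v in T2.ancestors(y):
            if T1.labels[u] & T2.labels[v]:               # (u, v) is induced
                w = T1.weight[u] + T2.weight[v]
                if best is None or w > best[0]:
                    best = (w, u, v)
    return best

# Tiny example: the leaves l1 and l2 share label "z"; weights increase toward the leaves.
T1 = Tree({"r1": None, "a1": "r1", "l1": "a1"},
          {"r1": 1, "a1": 2, "l1": 3},
          {"r1": None, "a1": None, "l1": "z"})
T2 = Tree({"r2": None, "l2": "r2"}, {"r2": 1, "l2": 2}, {"r2": None, "l2": "z"})
print(hia_naive(T1, T2, "l1", "l2"))  # (5, 'l1', 'l2')
```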
Multiple Tensor-Times-Matrix (Multi-TTM) is a key computation in algorithms for computing and operating with the Tucker tensor decomposition, which is frequently used in multidimensional data analysis. We establish communication lower bounds that determine how much data movement is required to perform the Multi-TTM computation in parallel. The crux of the proof relies on analytically solving a constrained, nonlinear optimization problem. We also present a parallel algorithm to perform this computation that organizes the processors into a logical grid with twice as many modes as the input tensor. We show that with correct choices of grid dimensions, the communication cost of the algorithm attains the lower bounds and is therefore communication optimal. Finally, we show that our algorithm can significantly reduce communication compared to the straightforward approach of expressing the computation as a sequence of tensor-times-matrix operations.
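The Multi-TTM computation itself is easy to state; the following NumPy sketch contrasts the straightforward sequence of single tensor-times-matrix products with the same contraction written as one Multi-TTM for a 3-way tensor. It illustrates only the computation, not the communication-optimal parallel algorithm.

```python
import numpy as np

# Multi-TTM for a 3-way tensor X: contract mode i with matrix U_i for every i,
# as when forming a Tucker core. Shapes: X is n1 x n2 x n3, U_i is n_i x r_i.
rng = np.random.default_rng(0)
n, r = (4, 5, 6), (2, 3, 2)
X = rng.standard_normal(n)
U1, U2, U3 = (rng.standard_normal((n[i], r[i])) for i in range(3))

# Straightforward approach: a sequence of single TTM products.
Y_seq = np.einsum('abc,ai->ibc', X, U1)
Y_seq = np.einsum('ibc,bj->ijc', Y_seq, U2)
Y_seq = np.einsum('ijc,ck->ijk', Y_seq, U3)

# The same Multi-TTM expressed as one contraction.
Y_multi = np.einsum('abc,ai,bj,ck->ijk', X, U1, U2, U3)

print(np.allclose(Y_seq, Y_multi))  # True
```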
The problem of String Matching to Labeled Graphs (SMLG) asks to find all the paths in a labeled graph $G = (V, E)$ whose spellings match that of an input string $S \in \Sigma^m$. SMLG can be solved in quadratic $O(m|E|)$ time [Amir et al., JALG], which was proven to be optimal by a recent lower bound conditioned on SETH [Equi et al., ICALP 2019]. The lower bound states that no strongly subquadratic time algorithm exists, even if restricted to directed acyclic graphs (DAGs). In this work we present the first parameterized algorithms for SMLG in DAGs. Our parameters capture the topological structure of $G$. All our results are derived from a generalization of the Knuth-Morris-Pratt algorithm [Park and Kim, CPM 1995] optimized to work in time proportional to the number of prefix-incomparable matches. To obtain the parameterization in the topological structure of $G$, we first study a special class of DAGs called funnels [Millani et al., JCO] and generalize them to $k$-funnels and the class $ST_k$. We present several novel characterizations and algorithmic contributions on both funnels and their generalizations.
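For context, the classic quadratic-time algorithm referred to above is a simple dynamic program; the following Python sketch is a baseline for node-labeled DAGs (not the parameterized algorithms of this work), assuming the nodes are given in topological order.

```python
def smlg_dag(nodes, edges, label, S):
    """Classic O(m*|E|) dynamic program for SMLG on a node-labeled DAG:
    match[v][i] is True iff some path ending at v spells S[:i+1].
    `nodes` must be listed in topological order. A quadratic baseline,
    not the parameterized algorithms of the paper."""
    m = len(S)
    pred = {v: [] for v in nodes}
    for u, v in edges:
        pred[v].append(u)
    match = {v: [False] * m for v in nodes}
    occurrences = []
    for v in nodes:                        # topological order
        for i in range(m):
            if label[v] != S[i]:
                continue
            if i == 0 or any(match[u][i - 1] for u in pred[v]):
                match[v][i] = True
        if match[v][m - 1]:
            occurrences.append(v)          # some path spelling S ends at v
    return occurrences

# Tiny DAG: a(C) -> b(A) -> d(T) and a(C) -> c(A) -> d(T); the pattern "CAT" ends at d.
nodes = ["a", "b", "c", "d"]
edges = [("a", "b"), ("a", "c"), ("b", "d"), ("c", "d")]
label = {"a": "C", "b": "A", "c": "A", "d": "T"}
print(smlg_dag(nodes, edges, label, "CAT"))  # ['d']
```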
The Shapley value is arguably the most popular approach for assigning a meaningful contribution value to players in a cooperative game; it has recently been used intensively in various areas of machine learning, most notably in explainable artificial intelligence. Its meaningfulness stems from axiomatic properties that only the Shapley value satisfies, which, however, comes at the expense of an exact computation that grows exponentially with the number of agents. Accordingly, a number of works are devoted to the efficient approximation of the Shapley value, all of which revolve around the notion of an agent's marginal contribution. In this paper, we propose SVARM and Stratified SVARM, two parameter-free and domain-independent approximation algorithms based on a representation of the Shapley value detached from the notion of marginal contributions. We prove unmatched theoretical guarantees regarding their approximation quality and provide satisfying empirical results.
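To make the notion of a marginal contribution concrete, the following Python sketch shows the standard permutation-sampling baseline that averages marginal contributions; SVARM and Stratified SVARM are explicitly not of this form, so this is context rather than the paper's method.

```python
import random

def shapley_permutation_sampling(value, players, num_samples=2000, seed=0):
    """Monte Carlo estimate of Shapley values via random permutations, i.e. by
    averaging marginal contributions value(S + {i}) - value(S). This is the
    classic baseline the paper departs from; SVARM itself relies on a
    marginal-contribution-free representation of the Shapley value."""
    rng = random.Random(seed)
    est = {i: 0.0 for i in players}
    for _ in range(num_samples):
        order = list(players)
        rng.shuffle(order)
        coalition, prev = set(), value(frozenset())
        for i in order:
            coalition.add(i)
            cur = value(frozenset(coalition))
            est[i] += cur - prev            # marginal contribution of i
            prev = cur
    return {i: v / num_samples for i, v in est.items()}

# Glove game: a coalition is worth 1 if it contains player 1 and at least one of players 2, 3.
value = lambda S: 1.0 if 1 in S and (2 in S or 3 in S) else 0.0
print(shapley_permutation_sampling(value, [1, 2, 3]))
# Exact Shapley values: 2/3 for player 1 and 1/6 each for players 2 and 3.
```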
Identifying Blaschke-Santal\'o diagrams is an important topic that essentially consists in determining the image $Y=F(X)$ of a map $F:X\to{\mathbb{R}}^d$, where the dimension of the source space $X$ is much larger than that of the target space. In some cases, occurring for instance in shape optimization problems, $X$ can even be a subset of an infinite-dimensional space. The usual Monte Carlo method, consisting in randomly choosing a number $N$ of points $x_1,\dots,x_N$ in $X$ and plotting them in the target space ${\mathbb{R}}^d$, produces in many cases areas in $Y$ of very high and very low concentration, leading to a rather rough numerical identification of the image set. By contrast, our goal is to choose the points $x_i$ in an appropriate way that produces a uniform distribution in the target space. In this way we may obtain a good representation of the image set $Y$ by a relatively small number $N$ of samples, which is very useful when the dimension of the source space $X$ is large (or even infinite) and the evaluation of $F(x_i)$ is costly. Our method consists in a suitable use of {\it Centroidal Voronoi Tessellations}, which provides efficient numerical results. Simulations for two- and three-dimensional examples are shown in the paper.
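As a minimal illustration of the centroidal Voronoi tessellation building block (not of the paper's method, which adapts the samples $x_i$ in the source space $X$), the following Python sketch runs Lloyd's algorithm on a finite point cloud in the plane.

```python
import numpy as np

def lloyd_cvt(points, k, iters=50, seed=0):
    """Lloyd's algorithm: compute k generators of an (approximate) centroidal
    Voronoi tessellation of a finite point cloud in R^d. A minimal illustration
    of the CVT building block only; the paper uses CVT ideas to pick samples x_i
    in the source space so that the images F(x_i) spread uniformly."""
    rng = np.random.default_rng(seed)
    gen = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(iters):
        # Assign each point to its nearest generator (its Voronoi cell).
        d = np.linalg.norm(points[:, None, :] - gen[None, :, :], axis=2)
        cell = d.argmin(axis=1)
        # Move each generator to the centroid of its cell.
        for j in range(k):
            members = points[cell == j]
            if len(members):
                gen[j] = members.mean(axis=0)
    return gen

# Uniform cloud in the unit square; the 16 generators spread out roughly uniformly.
cloud = np.random.default_rng(1).random((4000, 2))
print(lloyd_cvt(cloud, k=16).round(2))
```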
Approximate Message Passing (AMP) algorithms are a class of iterative procedures for computationally efficient estimation in high-dimensional inference tasks. Due to the presence of an 'Onsager' correction term in its iterates, for $N \times M$ design matrices $\mathbf{A}$ with i.i.d. Gaussian entries, the asymptotic distribution of the estimate at any iteration of the algorithm can be exactly characterized in the large system limit as $M/N \rightarrow \delta \in (0, \infty)$ via a scalar recursion referred to as state evolution. In this paper, we show that appropriate functionals of the iterates, in fact, concentrate around their limiting values predicted by these asymptotic distributions, with rates exponentially fast in $N$, for a large class of AMP-style algorithms, including those used when high-dimensional generalized linear regression models are assumed to be the data-generating process, like the generalized AMP algorithm, and those used when the measurement matrix is assumed to be right rotationally invariant instead of i.i.d. Gaussian, like vector AMP and generalized vector AMP. In practice, these more general AMP algorithms have many applications, for example in communications or imaging, and this work provides the first study of the finite-sample behavior of such algorithms.
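For concreteness, the following Python sketch shows a textbook AMP iteration for sparse linear regression with a soft-thresholding denoiser and the Onsager correction term, assuming an i.i.d. Gaussian design; it is only one simple member of the family of AMP-style algorithms analyzed here.

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def amp_lasso(A, y, alpha=2.0, iters=30):
    """Basic AMP for y = A x + noise with a soft-thresholding denoiser and the
    Onsager correction term; a textbook instance of the AMP-style algorithms
    discussed in the abstract, assuming an i.i.d. Gaussian design matrix A."""
    n, p = A.shape
    x = np.zeros(p)
    z = y.copy()
    for _ in range(iters):
        tau = np.linalg.norm(z) / np.sqrt(n)           # effective noise level estimate
        pseudo = x + A.T @ z                           # effective observation of x
        x_new = soft_threshold(pseudo, alpha * tau)
        onsager = z * (np.count_nonzero(x_new) / n)    # (1/n) * sum_i eta'(pseudo_i)
        z = y - A @ x_new + onsager                    # residual with Onsager correction
        x = x_new
    return x

# Synthetic sparse recovery with an i.i.d. Gaussian design.
rng = np.random.default_rng(0)
n, p, k = 250, 500, 25
A = rng.standard_normal((n, p)) / np.sqrt(n)           # columns roughly unit norm
x_true = np.zeros(p)
x_true[rng.choice(p, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true + 0.01 * rng.standard_normal(n)
x_hat = amp_lasso(A, y)
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))  # relative error, should be small
```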
Click-through rate (CTR) prediction plays a critical role in recommender systems and online advertising. The data used in these applications are multi-field categorical data, where each feature belongs to one field. Field information has proved to be important, and several works take fields into account in their models. In this paper, we propose a novel approach to model the field information effectively and efficiently. The proposed approach is a direct improvement of FwFM and is named Field-matrixed Factorization Machines (FmFM, or $FM^2$). We also propose a new explanation of FM and FwFM within the FmFM framework and compare it with FFM. Besides pruning the cross terms, our model supports field-specific variable dimensions of embedding vectors, which acts as soft pruning. We also propose an efficient way to minimize the embedding dimensions while preserving model performance. The FmFM model can be optimized further by caching the intermediate vectors, and it takes only thousands of floating-point operations (FLOPs) to make a prediction. Our experimental results show that it can outperform FFM, which is more complex. The FmFM model's performance is also comparable to that of DNN models, which require far more FLOPs at runtime.
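The following NumPy sketch shows one plausible reading of the FmFM pairwise interaction term described above: each pair of fields has a learned matrix that maps one embedding into the other field's space before a dot product, so fields may use different embedding dimensions. All names and shapes here are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def fmfm_interaction_score(features, embeddings, field_of, M):
    """Pairwise interaction term of an FmFM-style model: for active features i < j,
    score += (v_i @ M[f_i, f_j]) @ v_j, where M[f_i, f_j] is a learned matrix for
    the field pair. Because these matrices need not be square, each field can use
    its own embedding dimension (the 'soft pruning' mentioned in the abstract).
    Illustrative sketch only, not the paper's code."""
    score = 0.0
    for a in range(len(features)):
        for b in range(a + 1, len(features)):
            i, j = features[a], features[b]
            fi, fj = field_of[i], field_of[j]
            score += embeddings[i] @ M[(fi, fj)] @ embeddings[j]
    return score

# Toy example: 3 fields with embedding dimensions 4, 2, 3 and one active feature per field.
rng = np.random.default_rng(0)
dim = {"user": 4, "ad": 2, "context": 3}
field_of = {"u42": "user", "ad7": "ad", "ctx_mobile": "context"}
embeddings = {f: rng.standard_normal(dim[field_of[f]]) for f in field_of}
M = {(fi, fj): rng.standard_normal((dim[fi], dim[fj]))
     for fi in dim for fj in dim if fi != fj}
print(fmfm_interaction_score(["u42", "ad7", "ctx_mobile"], embeddings, field_of, M))
```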