We consider the pebble game on DAGs with bounded fan-in introduced in [Paterson and Hewitt '70] and the reversible version of this game in [Bennett '89], and study the question of how hard it is to decide exactly or approximately the number of pebbles needed for a given DAG in these games. We prove that the problem of deciding whether $s$~pebbles suffice to reversibly pebble a DAG $G$ is PSPACE-complete, as was previously shown for the standard pebble game in [Gilbert, Lengauer and Tarjan '80]. Via two different graph product constructions we then strengthen these results to establish that both standard and reversible pebbling space are PSPACE-hard to approximate to within any additive constant. To the best of our knowledge, these are the first hardness of approximation results for pebble games in an unrestricted setting (even for polynomial time). Also, since [Chan '13] proved that reversible pebbling is equivalent to the games in [Dymond and Tompa '85] and [Raz and McKenzie '99], our results apply to the Dymond--Tompa and Raz--McKenzie games as well, and from the same paper it follows that resolution depth is PSPACE-hard to determine up to any additive constant. We also obtain a multiplicative logarithmic separation between reversible and standard pebbling space. This improves on the additive logarithmic separation previously known and could plausibly be tight, although we are not able to prove this. We leave as an interesting open problem whether our additive hardness of approximation result could be strengthened to a multiplicative bound if the computational resources are decreased from polynomial space to the more common setting of polynomial time.
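For orientation, the rules of the two games can be sketched as follows (standard formulations in our own wording, not quoted from the cited works):
\begin{itemize}
  \item \emph{Standard pebbling:} a pebble may be placed on a vertex $v$ only if all predecessors of $v$ carry pebbles; a pebble may be removed from any vertex at any time.
  \item \emph{Reversible pebbling:} a pebble may be placed on \emph{or removed from} $v$ only if all predecessors of $v$ carry pebbles, so that every move is invertible.
\end{itemize}
In both games the space of a pebbling is the maximum number of pebbles on the DAG at any point in time, and the pebbling space of $G$ is the minimum space over all pebblings that pebble the designated sink.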
The utilization of finite field multipliers is pervasive in contemporary digital systems, with hardware implementations for bit-parallel operation often necessitating millions of logic gates. However, various digital design issues, whether natural or stemming from soft errors, can result in gate malfunction, ultimately leading to erroneous multiplier outputs. Thus, to prevent susceptibility to error, it is imperative to employ an effective finite field multiplier implementation with a robust fault detection capability. This study proposes a novel fault detection scheme for a recent bit-parallel polynomial basis multiplier over GF($2^m$), intended to achieve optimal fault detection performance for finite field multipliers while simultaneously maintaining a low-complexity implementation, a favored attribute in resource-constrained applications like smart cards. The primary concept behind the proposed approach is the implementation of a BCH decoder that utilizes a re-encoding technique and the FIBM algorithm in its first and second sub-modules, respectively. This addresses hardware complexity concerns, while the Berlekamp-Rumsey-Solomon (BRS) algorithm and the Chien search method are used in the third sub-module of the decoder to locate errors with minimal delay. The results of our synthesis indicate that our proposed error detection and correction architecture for a 45-bit multiplier with 5-bit errors achieves 37% and 49% reductions in critical path delay compared to existing designs. Furthermore, the hardware complexity associated with a 45-bit multiplicand that contains 5 errors is confined to a mere 80%, which is significantly lower than that of the best existing BCH-based fault detection methodologies, including TMR, Hamming's single error correction, and LDPC-based procedures within the realm of finite field multiplication.
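As a brief reminder of the object being protected (a generic textbook formulation, not the specific architecture proposed in this work), a bit-parallel polynomial basis multiplier over GF($2^m$) computes
\[
  c(x) \;=\; a(x)\,b(x) \bmod p(x), \qquad a(x) = \sum_{i=0}^{m-1} a_i x^i, \quad b(x) = \sum_{i=0}^{m-1} b_i x^i, \quad a_i, b_i \in \{0,1\},
\]
where $p(x)$ is a fixed irreducible polynomial of degree $m$ and coefficient arithmetic is performed modulo~2; in hardware this amounts to AND gates for the partial products and XOR trees for accumulation and modular reduction, which is where gate faults can corrupt the output.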
We study the problem of maintaining a differentially private decaying sum under continual observation. We give a unifying framework and an efficient algorithm for this problem for \emph{any sufficiently smooth} function. Our algorithm is the first differentially private algorithm that does not have a multiplicative error for polynomially-decaying weights. Our algorithm improves on all prior works on differentially private decaying sums under continual observation and recovers exactly the additive error for the special case of continual counting from Henzinger et al. (SODA 2023) as a corollary. Our algorithm is a variant of the factorization mechanism whose error depends on the $\gamma_2$ and $\gamma_F$ norms of the underlying matrix. We give a constructive proof for an almost exact upper bound on the $\gamma_2$ and $\gamma_F$ norms and an almost tight lower bound on the $\gamma_2$ norm for a large class of lower-triangular matrices. This is the first non-trivial lower bound for lower-triangular matrices whose non-zero entries are not all the same. It includes matrices for all continual decaying sums problems, resulting in an upper bound on the additive error of any differentially private decaying sums algorithm under continual observation. We also explore some implications of our result in discrepancy theory and operator algebra. Given the importance of the $\gamma_2$ norm in computer science and the extensive work in mathematics, we believe our result will have further applications.
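For context, the $\gamma_2$ norm referred to above is commonly defined, in our notation, as
\[
  \gamma_2(A) \;=\; \min_{A = LR} \;\|L\|_{2\to\infty}\,\|R\|_{1\to 2},
\]
i.e., the minimum over all factorizations $A = LR$ of the product of the largest row $\ell_2$ norm of $L$ and the largest column $\ell_2$ norm of $R$. In the factorization mechanism one fixes such a factorization and releases $L(Rx + z)$ for suitably calibrated noise $z$, so the additive error is governed, up to privacy-dependent factors, by the norms of the chosen factors; this is the standard description of the mechanism and not necessarily the exact formulation used in the paper.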
We investigate a fundamental vertex-deletion problem called (Induced) Subgraph Hitting: given a graph $G$ and a set $\mathcal{F}$ of forbidden graphs, the goal is to compute a minimum-sized set $S$ of vertices of $G$ such that $G-S$ does not contain any graph in $\mathcal{F}$ as an (induced) subgraph. This is a generic problem that encompasses many well-known problems that were extensively studied on their own, particularly (but not only) from the perspectives of both approximation and parameterization. In this paper, we study the approximability of the problem on a large variety of graph classes. Our first result is a linear-time $(1+\varepsilon)$-approximation reduction from (Induced) Subgraph Hitting on any graph class $\mathcal{G}$ of bounded expansion to the same problem on bounded degree graphs within $\mathcal{G}$. This directly yields linear-size $(1+\varepsilon)$-approximation lossy kernels for the problems on any graph class of bounded expansion. Our second result is a linear-time approximation scheme for (Induced) Subgraph Hitting on any graph class $\mathcal{G}$ of polynomial expansion, based on the local-search framework of Har-Peled and Quanrud [SICOMP 2017]. This approximation scheme can be applied to a more general family of problems that aim to hit all subgraphs satisfying a certain property $\pi$ that is efficiently testable and has bounded diameter. Both of our results have applications to Subgraph Hitting (not induced) on wide classes of geometric intersection graphs, resulting in linear-size lossy kernels and (near-)linear time approximation schemes for the problem.
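For intuition, the local-search template behind such approximation schemes can be sketched as follows (a generic formulation under the assumption that improving exchanges of size depending only on $\varepsilon$ suffice; this is not a verbatim description of the paper's algorithm):
\begin{enumerate}
  \item Start from any feasible solution $S$, e.g., $S = V(G)$.
  \item While there exist $A \subseteq S$ and $B \subseteq V(G) \setminus S$ with $|A|, |B| \le c(\varepsilon)$ such that $(S \setminus A) \cup B$ is still a valid hitting set and $|B| < |A|$, replace $S$ by $(S \setminus A) \cup B$.
  \item Return $S$ once no improving exchange exists.
\end{enumerate}
The role of the structural framework is then to show that, on the graph classes in question, every such local optimum is a $(1+\varepsilon)$-approximation.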
Selection of a group of representatives satisfying certain fairness constraints is a commonly occurring scenario. Motivated by this, we initiate a systematic algorithmic study of a \emph{fair} version of \textsc{Hitting Set}. In the classical \textsc{Hitting Set} problem, the input is a universe $\mathcal{U}$, a family $\mathcal{F}$ of subsets of $\mathcal{U}$, and a non-negative integer $k$. The goal is to determine whether there exists a subset $S \subseteq \mathcal{U}$ of size $k$ that \emph{hits} (i.e., intersects) every set in $\mathcal{F}$. Inspired by several recent works, we formulate a fair version of this problem as follows. The input additionally contains a family $\mathcal{B}$ of subsets of $\mathcal{U}$, where each subset in $\mathcal{B}$ can be thought of as the group of elements of the same \emph{type}. We want to find a set $S \subseteq \mathcal{U}$ of size $k$ that (i) hits all sets of $\mathcal{F}$, and (ii) does not contain \emph{too many} elements of each type. We call this problem \textsc{Fair Hitting Set}, and chart out its tractability boundary from both the classical and the multivariate perspectives. Our results employ a multitude of techniques from parameterized complexity, ranging from classical to advanced tools, such as the method of representative sets for matroids, FO model checking, and a generalization of the best known kernels for \textsc{Hitting Set}.
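Concretely, if each type $B_i \in \mathcal{B}$ comes with an upper bound $t_i$ on how many of its elements may be selected (the caps $t_i$ are our notation for the \emph{too many} requirement), the decision task can be written as
\[
  \exists\, S \subseteq \mathcal{U}: \quad |S| \le k, \qquad S \cap F \neq \emptyset \ \ \forall F \in \mathcal{F}, \qquad |S \cap B_i| \le t_i \ \ \forall B_i \in \mathcal{B}.
\]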
We consider the classic budgeted maximum weight independent set (BMWIS) problem. The input is a graph $G = (V,E)$, a weight function $w:V \rightarrow \mathbb{R}_{\geq 0}$, a cost function $c:V \rightarrow \mathbb{R}_{\geq 0}$, and a budget $B \in \mathbb{R}_{\geq 0}$. The goal is to find an independent set $S \subseteq V$ in $G$ such that $\sum_{v \in S} c(v) \leq B$, which maximizes the total weight $\sum_{v \in S} w(v)$. Since the problem on general graphs cannot be approximated within ratio $|V|^{1-\varepsilon}$ for any $\varepsilon>0$, BMWIS has attracted significant attention on graph families for which a maximum weight independent set can be computed in polynomial time. Two notable such graph families are bipartite and perfect graphs. BMWIS is known to be NP-hard on both of these graph families; however, the best possible approximation guarantees for these graphs are wide open. In this paper, we give a tight $2$-approximation for BMWIS on perfect graphs and bipartite graphs. In particular, we give a $(2-\varepsilon)$ lower bound for BMWIS on bipartite graphs, already for the special case where the budget is replaced by a cardinality constraint, based on the Small Set Expansion Hypothesis (SSEH). For the upper bound, we design a $2$-approximation for BMWIS on perfect graphs using a Lagrangian relaxation based technique. Finally, we obtain a tight lower bound for the capacitated maximum weight independent set (CMWIS) problem, the special case of BMWIS where $w(v) = c(v)~\forall v \in V$. We show that CMWIS on bipartite and perfect graphs is unlikely to admit an efficient polynomial-time approximation scheme (EPTAS). Thus, the existing PTAS for CMWIS is essentially the best we can expect.
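For intuition, a generic Lagrangian relaxation of BMWIS (our formulation, not necessarily the exact one used in the paper) dualizes the budget constraint:
\[
  L(\lambda) \;=\; \lambda B \;+\; \max_{\substack{S \subseteq V \\ S \text{ independent}}} \;\sum_{v \in S} \bigl(w(v) - \lambda\, c(v)\bigr), \qquad \lambda \ge 0.
\]
For every $\lambda \ge 0$ and every feasible solution $S$ (i.e., with $c(S) \le B$) one has $w(S) \le L(\lambda)$, and for fixed $\lambda$ the inner problem is an unbudgeted maximum weight independent set with modified weights, which is polynomial-time solvable on perfect graphs; one can then search over $\lambda$ and combine the resulting candidate solutions.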
This work considers the problem of noisy binary search in a sorted array. The noise is modeled by a parameter $p$ that dictates that a comparison can be incorrect with probability $p$, independently of other queries. We give two types of upper bounds on the number of queries: for the worst-case and for the expected query complexity. The bounds improve on the ones known to date, i.e., our algorithms require fewer queries. Additionally, they have simpler statements and work for the full range of parameters. All our expected query complexities are tight up to lower-order terms. For the problem where the target prior is uniform over all possible inputs, we provide an algorithm with expected complexity upper-bounded by $(\log_2 n + \log_2 \delta^{-1} + 3)/I(p)$, where $n$ is the domain size, $0\le p < 1/2$ is the noise ratio, $\delta>0$ is the failure probability, and $I(p)$ is the information gain function. As a side effect, we close some correctness issues regarding previous work. Also, en route, we obtain new and improved query complexities for the search generalized to arbitrary graphs. This paper continues and improves upon the lines of research of Burnashev--Zigangirov [Prob. Per. Informatsii, 1974], Ben-Or and Hassidim [FOCS 2008], Gu and Xu [STOC 2023], Emamjomeh-Zadeh et al. [STOC 2016], and Dereniowski et al. [SOSA@SODA 2019].
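As a rough sanity check of this bound, and under the reading (standard in this line of work, but stated here as our assumption) that $I(p) = 1 - H(p)$ with $H$ the binary entropy function, take $n = 2^{20}$, $p = 0.1$ and $\delta = 0.01$:
\[
  H(0.1) \approx 0.469, \quad I(0.1) \approx 0.531, \qquad \frac{\log_2 n + \log_2 \delta^{-1} + 3}{I(p)} \approx \frac{20 + 6.64 + 3}{0.531} \approx 56 \text{ queries in expectation.}
\]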
We consider a generalized poset sorting problem (GPS), in which we are given a query graph $G = (V, E)$ and an unknown poset $\mathcal{P}(V, \prec)$ defined on the same vertex set $V$, and the goal is to make as few queries as possible to edges in $G$ in order to fully recover $\mathcal{P}$, where each query $(u, v)$ returns the relation between $u, v$, i.e., $u \prec v$, $v \prec u$ or $u \not \sim v$. This generalizes both the poset sorting problem [Faigle et al., SICOMP 88] and the generalized sorting problem [Huang et al., FOCS 11]. We give algorithms with $\tilde{O}(n\cdot \mathrm{poly}(k))$ query complexity when $G$ is a complete bipartite graph or $G$ is stochastic under the Erd\H{o}s--R\'enyi model, where $k$ is the \emph{width} of the poset; these results generalize [Daskalakis et al., SICOMP 11], which only studies the complete graph $G$. Both results are based on a unified framework that reduces poset sorting to partitioning the vertices with respect to a given pivot element, which may be of independent interest. Our study of GPS also leads to a new $\tilde{O}(n^{1 - 1 / (2W)})$ competitive ratio for the so-called weighted generalized sorting problem, where $W$ is the number of distinct weights in the query graph. This problem was posed as an open question in [Charikar et al., JCSS 02], and our result makes important progress as it yields the first nontrivial sublinear ratio for general weighted query graphs (for any bounded $W$). We obtain this via an $\tilde{O}(nk + n^{1.5})$ query complexity algorithm for the case where every edge in $G$ is guaranteed to be comparable in the poset, which generalizes an $\tilde{O}(n^{1.5})$ bound for generalized sorting [Huang et al., FOCS 11].
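For intuition, the pivot-based reduction can be sketched as follows (our paraphrase of the high-level idea): pick a pivot $u$, resolve the relation of the remaining vertices to $u$ via queries along edges of $G$, and split
\[
  V \setminus \{u\} \;=\; \{v : v \prec u\} \;\cup\; \{v : u \prec v\} \;\cup\; \{v : v \not\sim u\},
\]
which already fixes all relations implied by transitivity through $u$; the technical work lies in implementing this partitioning with few queries when only edges of $G$ may be queried, and in recovering the relations that the partition does not determine.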
Computing the diameter of a graph, i.e., the largest shortest-path distance between any pair of vertices, is a fundamental problem that is central in fine-grained complexity. In undirected graphs, the Strong Exponential Time Hypothesis (SETH) yields a lower bound on the time vs. approximation trade-off that is quite close to the upper bounds. In \emph{directed} graphs, however, where only some of the upper bounds apply, much larger gaps remain. Since $d(u,v)$ may not be the same as $d(v,u)$, there are multiple ways to define the problem, the two most natural being the \emph{(one-way) diameter} ($\max_{(u,v)} d(u,v)$) and the \emph{roundtrip diameter} ($\max_{u,v} d(u,v)+d(v,u)$). In this paper we make progress on the outstanding open question for each of them. -- We design the first algorithm for diameter in sparse directed graphs to achieve $n^{1.5-\varepsilon}$ time with an approximation factor better than $2$. The new upper bound trade-off makes the directed case appear more similar to the undirected case. Notably, this is the first algorithm for diameter in sparse graphs that benefits from fast matrix multiplication. -- We design new hardness reductions separating roundtrip diameter from directed and undirected diameter. In particular, a $1.5$-approximation in subquadratic time would refute the All-Nodes $k$-Cycle hypothesis, and any $(2-\varepsilon)$-approximation would imply a breakthrough algorithm for approximate $\ell_{\infty}$-Closest-Pair. Notably, these are the first conditional lower bounds for diameter that are not based on SETH.
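As a tiny illustration of how the two notions differ (our example): in the directed triangle on vertices $\{a,b,c\}$ with unit-length edges $a \to b \to c \to a$, every ordered pair of distinct vertices is at distance $1$ or $2$, so
\[
  \max_{(u,v)} d(u,v) = 2, \qquad \max_{u,v}\, \bigl(d(u,v)+d(v,u)\bigr) = 3.
\]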
We present a combination technique based on mixed differences of both spatial approximations and quadrature formulae for the stochastic variables to efficiently solve a class of Optimal Control Problems (OCPs) constrained by random partial differential equations. The method requires solving the OCP for several low-fidelity spatial grids and quadrature formulae for the objective functional. All the computed solutions are then linearly combined to obtain a final approximation which, under suitable regularity assumptions, preserves the same accuracy as fine tensor product approximations, while drastically reducing the computational cost. The combination technique involves only tensor product quadrature formulae, so the discretized OCPs preserve the convexity of the continuous OCP. Hence, the combination technique avoids the inconveniences of Multilevel Monte Carlo and/or sparse grids approaches, but remains suitable for high-dimensional problems. The manuscript presents an a priori procedure to choose the most important mixed differences and an asymptotic complexity analysis, which shows that the asymptotic complexity is exclusively determined by the spatial solver. Numerical experiments validate the results.
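For reference, the mixed-difference formulation underlying such combination techniques can be written, in our notation, as follows. Let $u_{\boldsymbol{\ell}}$ denote the approximation computed with spatial discretization level $\ell_1$ and quadrature levels $\ell_2,\dots,\ell_N$, and define the first-order differences $\Delta_i u_{\boldsymbol{\ell}} = u_{\boldsymbol{\ell}} - u_{\boldsymbol{\ell} - e_i}$ (with $u_{\boldsymbol{\ell}} = 0$ whenever some component of $\boldsymbol{\ell}$ is negative). The combined approximation over a downward-closed index set $\mathcal{I}$ is then
\[
  u_{\mathcal{I}}^{\mathrm{CT}} \;=\; \sum_{\boldsymbol{\ell} \in \mathcal{I}} \bigl(\Delta_1 \circ \cdots \circ \Delta_N\bigr)\, u_{\boldsymbol{\ell}},
\]
which telescopes into a linear combination of coarse tensor product approximations; this is a standard Smolyak-type formulation, given here only to illustrate the construction rather than to reproduce the paper's exact index sets.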
One of the most studied extensions of the famous Traveling Salesperson Problem (TSP) is the {\sc Multiple TSP}: a set of $m\geq 1$ salespersons collectively traverses a set of $n$ cities by $m$ non-trivial tours, to minimize the total length of their tours. This problem can also be considered to be a variant of {\sc Uncapacitated Vehicle Routing} where the objective function is the sum of all tour lengths. When all $m$ tours start from a single common \emph{depot} $v_0$, then the metric {\sc Multiple TSP} can be approximated equally well as the standard metric TSP, as shown by Frieze (1983). The {\sc Multiple TSP} becomes significantly harder to approximate when there is a \emph{set} $D$ of $d \geq 1$ depots that form the starting and end points of the $m$ tours. For this case only a $(2-1/d)$-approximation in polynomial time is known, as well as a $3/2$-approximation for \emph{constant} $d$ which requires a prohibitive run time of $n^{\Theta(d)}$ (Xu and Rodrigues, \emph{INFORMS J. Comput.}, 2015). A recent work of Traub, Vygen and Zenklusen (STOC 2020) gives another approximation algorithm for {\sc Multiple TSP} running in time $n^{\Theta(d)}$ and reducing the problem to approximating TSP. In this paper we overcome the $n^{\Theta(d)}$ time barrier: we give the first efficient approximation algorithm for {\sc Multiple TSP} with a \emph{variable} number $d$ of depots that yields a better-than-2 approximation. Our algorithm runs in time $(1/\varepsilon)^{\mathcal O(d\log d)}\cdot n^{\mathcal O(1)}$, and produces a $(3/2+\varepsilon)$-approximation with constant probability. For the graphic case, we obtain a deterministic $3/2$-approximation in time $2^d\cdot n^{\mathcal O(1)}$.