We show that $VTC^0$, the basic theory of bounded arithmetic corresponding to the complexity class $\mathrm{TC}^0$, proves the $IMUL$ axiom expressing the totality of iterated multiplication satisfying its recursive definition, by formalizing a suitable version of the $\mathrm{TC}^0$ iterated multiplication algorithm by Hesse, Allender, and Barrington. As a consequence, $VTC^0$ can also prove the integer division axiom, and (by our previous results) the RSUV-translation of induction and minimization for sharply bounded formulas. Similar consequences hold for the related theories $\Delta^b_1$-$CR$ and $C^0_2$. As a side result, we also prove that there is a well-behaved $\Delta_0$ definition of modular powering in $I\Delta_0+WPHP(\Delta_0)$.
The central open question in Descriptive Complexity is whether there is a logic that characterizes deterministic polynomial time (PTIME) on relational structures. Towards this goal, we define a logic that is obtained from first-order logic with fixed points, FO(FP), by a series of transformations that include restricting logical connectives and adding a dynamic version of Hilbert's Choice operator Epsilon. The formalism can be viewed, simultaneously, as an algebra of binary relations and as a linear-time modal dynamic logic, where algebraic expressions describing ``proofs'' or ``programs'' appear inside the modalities. We show how counting, reachability and ``mixed'' examples (that include linear equations modulo two) are axiomatized in the logic, and how an arbitrary PTIME Turing machine can be encoded. For each fixed Choice function, the data complexity of model checking is in PTIME. However, there can be exponentially many such functions. A crucial question is under what syntactic conditions on algebraic terms checking just one Choice function is sufficient. Answering this question requires a study of symmetries among computations. This paper lays the mathematical foundations for such a study via algebraic and automata-theoretic techniques.
Majority-SAT is the problem of determining whether an input $n$-variable formula in conjunctive normal form (CNF) has at least $2^{n-1}$ satisfying assignments. Majority-SAT and related problems have been studied extensively in various AI communities interested in the complexity of probabilistic planning and inference. Although Majority-SAT has been known to be PP-complete for over 40 years, the complexity of a natural variant has remained open: Majority-$k$SAT, where the input CNF formula is restricted to have clause width at most $k$. We prove that for every $k$, Majority-$k$SAT is in P. In fact, for any positive integer $k$ and rational $\rho \in (0,1)$ with bounded denominator, we give an algorithm that can determine whether a given $k$-CNF has at least $\rho \cdot 2^n$ satisfying assignments, in deterministic linear time (whereas the previous best-known algorithm ran in exponential time). Our algorithms have interesting positive implications for counting complexity and the complexity of inference, significantly reducing the known complexities of related problems such as E-MAJ-$k$SAT and MAJ-MAJ-$k$SAT. At the heart of our approach is an efficient method for solving threshold counting problems by extracting sunflowers found in the corresponding set system of a $k$-CNF. We also show that the tractability of Majority-$k$SAT is somewhat fragile. For the closely related GtMajority-SAT problem (where we ask whether a given formula has greater than $2^{n-1}$ satisfying assignments) which is known to be PP-complete, we show that GtMajority-$k$SAT is in P for $k\le 3$, but becomes NP-complete for $k\geq 4$. These results are counterintuitive, because the ``natural'' classifications of these problems would have been PP-completeness, and because there is a stark difference in the complexity of GtMajority-$k$SAT and Majority-$k$SAT for all $k\ge 4$.
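To make the threshold question concrete, the following is a minimal brute-force checker, not the paper's linear-time algorithm: it enumerates all $2^n$ assignments of a CNF given as lists of integer literals and compares the count of satisfying assignments against $\rho \cdot 2^n$. The helper names and the example instance are illustrative assumptions.

\begin{verbatim}
from fractions import Fraction
from itertools import product

def satisfies(assignment, clauses):
    # A clause is a list of nonzero ints: +i is variable i, -i its negation.
    return all(any((lit > 0) == assignment[abs(lit) - 1] for lit in clause)
               for clause in clauses)

def threshold_sat(clauses, n, rho):
    # True iff the CNF has at least rho * 2^n satisfying assignments.
    # Exponential-time baseline; the paper decides this question for
    # k-CNFs in deterministic linear time.
    count = sum(satisfies(a, clauses) for a in product([False, True], repeat=n))
    return Fraction(count, 2 ** n) >= rho

# Majority-SAT instance: (x1 or x2) and (not x1 or x3), threshold rho = 1/2.
print(threshold_sat([[1, 2], [-1, 3]], n=3, rho=Fraction(1, 2)))  # True (4 of 8)
\end{verbatim}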
This paper investigates the problem of correcting multiple criss-cross insertions and deletions in arrays. More precisely, we study the unique recovery of $n \times n$ arrays affected by $t$-criss-cross deletions, defined as any combination of $t_r$ row and $t_c$ column deletions such that $t_r + t_c = t$ for a given $t$. We show an equivalence between correcting $t$-criss-cross deletions and $t$-criss-cross insertions, and show that a code correcting $t$-criss-cross insertions/deletions has redundancy at least $tn + t \log n - \log(t!)$. Then, we present an existential construction of a $t$-criss-cross insertion/deletion-correcting code with redundancy bounded from above by $tn + \mathcal{O}(t^2 \log^2 n)$. The main ingredients of the presented code construction are systematic binary $t$-deletion-correcting codes and Gabidulin codes. The first ingredient helps locate the indices of the inserted/deleted rows and columns, thus transforming the insertion/deletion-correction problem into a row/column erasure-correction problem, which is then solved using the second ingredient.
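As an illustration of the channel model only (not of the code construction), this short sketch applies a $t$-criss-cross deletion to an array; the index sets and the example array are hypothetical inputs chosen for the example.

\begin{verbatim}
import numpy as np

def criss_cross_delete(array, row_idx, col_idx):
    # Remove t_r rows and t_c columns (t = t_r + t_c). The decoder sees only
    # the resulting (n - t_r) x (n - t_c) array and must recover the original
    # without knowing row_idx or col_idx.
    out = np.delete(array, row_idx, axis=0)
    return np.delete(out, col_idx, axis=1)

# A 2-criss-cross deletion (one row, one column) of a 4 x 4 binary array.
A = np.arange(16).reshape(4, 4) % 2
print(criss_cross_delete(A, row_idx=[1], col_idx=[3]))
\end{verbatim}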
We study the classical expander codes, introduced by Sipser and Spielman \cite{SS96}. Given any constants $0< \alpha, \varepsilon < 1/2$, and an arbitrary bipartite graph with $N$ vertices on the left, $M < N$ vertices on the right, and left degree $D$ such that any left subset $S$ of size at most $\alpha N$ has at least $(1-\varepsilon)|S|D$ neighbors, we show that the corresponding linear code given by parity checks on the right has distance at least roughly $\frac{\alpha N}{2 \varepsilon }$. This is strictly better than the best previously known bound of $2(1-\varepsilon ) \alpha N$ \cite{Sudan2000note, Viderman13b} whenever $\varepsilon < 1/2$, and improves on it significantly when $\varepsilon $ is small. Furthermore, we show that this distance is tight in general, thus providing a complete characterization of the distance of general expander codes. Next, we provide several efficient decoding algorithms, which vastly improve previous results in terms of the fraction of errors corrected whenever $\varepsilon < \frac{1}{4}$. Finally, we also give a bound on the list-decoding radius of general expander codes, which beats the classical Johnson bound in certain situations (e.g., when the graph is almost regular and the code has a high rate). Our techniques exploit novel combinatorial properties of bipartite expander graphs. In particular, we establish a new size-expansion tradeoff, which may be of independent interest.
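For very small instances, the code defined by such a bipartite graph can be inspected directly. The sketch below only illustrates the definition (codeword bits on the left vertices, one parity check per right vertex), not the paper's bounds or decoding algorithms; the example matrix is a hypothetical toy, not an expander.

\begin{verbatim}
import numpy as np
from itertools import product

def min_distance(H):
    # Brute-force minimum distance of the binary code {x : H x = 0 over GF(2)},
    # where H is the M x N biadjacency matrix of the bipartite graph.
    # Feasible only for tiny N.
    M, N = H.shape
    weights = [int(sum(x)) for x in product([0, 1], repeat=N)
               if any(x) and not ((H @ np.array(x)) % 2).any()]
    return min(weights) if weights else None

# Toy 3 x 6 parity-check matrix (just an illustration).
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]])
print(min_distance(H))  # prints 3 for this toy code
\end{verbatim}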
Submodular function minimization (SFM) and matroid intersection are fundamental discrete optimization problems with applications in many fields. It is well known that both of these can be solved by making $\mathrm{poly}(N)$ queries to a relevant oracle (an evaluation oracle for SFM and a rank oracle for matroid intersection), where $N$ denotes the universe size. However, all known polynomial-query algorithms are highly adaptive, requiring at least $N$ rounds of querying the oracle. A natural question is whether these problems can be efficiently solved in a highly parallel manner, namely, with $\mathrm{poly}(N)$ queries using only poly-logarithmically many rounds of adaptivity. An important step towards understanding the adaptivity needed for efficient parallel SFM was taken recently in the work of Balkanski and Singer, who showed that any SFM algorithm making $\mathrm{poly}(N)$ queries necessarily requires $\Omega(\log N/\log \log N)$ rounds. This left open the possibility of efficient SFM algorithms in poly-logarithmic rounds. For matroid intersection, even the possibility of a constant-round, $\mathrm{poly}(N)$-query algorithm had not hitherto been ruled out. In this work, we prove that any, possibly randomized, algorithm for submodular function minimization or matroid intersection making $\mathrm{poly}(N)$ queries requires $\tilde{\Omega}\left(N^{1/3}\right)$ rounds of adaptivity. In fact, we show a polynomial lower bound on the number of rounds of adaptivity even for algorithms that make at most $2^{N^{1-\delta}}$ queries, for any constant $\delta> 0$. Therefore, even though SFM and matroid intersection are efficiently solvable, they are not highly parallelizable in the oracle model.
In the Non-Uniform $k$-Center problem, a generalization of the famous $k$-center clustering problem, we want to cover the given set of points in a metric space by finding a placement of balls with specified radii. In the $t$-NU$k$C problem, we assume that the number of distinct radii is equal to $t$, and we are allowed to use $k_i$ balls of radius $r_i$, for $1 \le i \le t$. This problem was introduced by Chakrabarty et al. [ACM Trans. Alg. 16(4):46:1-46:19], who showed that a constant approximation for $t$-NU$k$C is not possible if $t$ is unbounded. On the other hand, they gave a bicriteria approximation that violates the number of allowed balls as well as the given radii by a constant factor. They also conjectured that a constant approximation for $t$-NU$k$C should be possible if $t$ is a fixed constant. Since then, there has been steady progress towards resolving this conjecture -- currently, a constant approximation for $3$-NU$k$C is known via the results of Chakrabarty and Negahbani [IPCO 2021] and Jia et al. [To appear in SOSA 2022]. We push the horizon by giving an $O(1)$-approximation for the Non-Uniform $k$-Center problem with $4$ distinct types of radii. Our result is obtained via a novel combination of tools and techniques from the $k$-center literature, which also demonstrates that the different generalizations of $k$-center involving non-uniform radii and multiple coverage constraints (i.e., colorful $k$-center) are closely interlinked. We hope that our ideas will contribute towards a deeper understanding of the $t$-NU$k$C problem, eventually bringing us closer to the resolution of the CGK conjecture.
Given an $n$-vertex planar embedded digraph $G$ with non-negative edge weights and a face $f$ of $G$, Klein presented a data structure with $O(n\log n)$ space and preprocessing time which can answer any query $(u,v)$ for the shortest path distance in $G$ from $u$ to $v$ or from $v$ to $u$ in $O(\log n)$ time, provided $u$ is on $f$. This data structure is a key tool in a number of state-of-the-art algorithms and data structures for planar graphs. Klein's data structure relies on dynamic trees and the persistence technique, as well as a highly non-trivial interaction between primal shortest path trees and their duals. We present a new data structure with the same functionality. Its construction follows a completely different and, in our opinion, very simple divide-and-conquer approach that solely relies on Single-Source Shortest Path computations and contractions in the primal graph. Our space and preprocessing time bound is $O(n\log |f|)$ and our query time is $O(\log |f|)$, which is an improvement over Klein's data structure when $f$ has small size.
Over the last decades, images have become an important source of information in many domains, and their quality is crucial for extracting reliable information. One of the important issues that arises is image denoising, i.e., recovering a signal from inaccurately and/or partially measured samples. This problem is closely related to compressive sensing theory, which states that if a signal is sparse, then it can be recovered from far fewer measurements than classical results, such as Shannon's sampling theorem, would suggest. A key factor in Compressive Sensing (CS) for obtaining the sparsest solution and removing noise from a corrupted image is the choice of basis dictionary. In this paper, the Discrete Cosine Transform (DCT) and moment transforms (Tchebichef, Krawtchouk) are compared for denoising images corrupted by additive white Gaussian noise, based on compressive sensing and sparse approximation theory. The experimental results reveal that the basis dictionaries constructed from the moment transforms perform competitively with the traditional DCT. The DCT achieves a higher PSNR of 30.82 dB and the same SSIM value of 0.91 as the Tchebichef transform. Moreover, from the sparsity point of view, Krawtchouk moments provide approximately 20-30% sparser results than the DCT.
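The following is a minimal, self-contained sketch of the sparsity principle behind transform-domain denoising, using an orthonormal DCT dictionary and simple coefficient thresholding; the paper's actual pipeline (CS measurements, sparse recovery, moment dictionaries, PSNR/SSIM evaluation) is more elaborate, and all names and parameters below are illustrative assumptions.

\begin{verbatim}
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II basis as an n x n matrix (rows are basis vectors).
    k = np.arange(n).reshape(-1, 1)
    i = np.arange(n).reshape(1, -1)
    D = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    D[0, :] = np.sqrt(1.0 / n)
    return D

def denoise_block(block, keep=8):
    # Transform a square block with a separable DCT dictionary, zero out all
    # but (roughly) the `keep` largest coefficients, and transform back.
    D = dct_matrix(block.shape[0])
    coeffs = D @ block @ D.T                   # forward 2-D transform
    thresh = np.sort(np.abs(coeffs).ravel())[-keep]
    coeffs[np.abs(coeffs) < thresh] = 0.0      # crude sparse approximation
    return D.T @ coeffs @ D                    # inverse transform

rng = np.random.default_rng(0)
clean = np.outer(np.linspace(0, 1, 8), np.linspace(0, 1, 8))
noisy = clean + 0.05 * rng.standard_normal((8, 8))  # additive white Gaussian noise
# Denoising typically reduces the error relative to the noisy block.
print(np.linalg.norm(noisy - clean), np.linalg.norm(denoise_block(noisy) - clean))
\end{verbatim}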
We revisit the (block-angular) min-max resource sharing problem, which is a well-known generalization of fractional packing and the maximum concurrent flow problem. It consists of finding an $\ell_{\infty}$-minimal element in a Minkowski sum $\mathcal{X}= \sum_{C \in \mathcal{C}} X_C$ of non-empty closed convex sets $X_C \subseteq \mathbb{R}^{\mathcal{R}}_{\geq 0}$, where $\mathcal{C}$ and $\mathcal{R}$ are finite sets. We assume that an oracle for approximate linear minimization over $X_C$ is given. In this setting, the currently fastest known FPTAS is due to M\"uller, Radke, and Vygen. For $\delta \in (0,1]$, it computes a $\sigma(1+\delta)$-approximately optimal solution using $\mathcal{O}((|\mathcal{C}|+|\mathcal{R}|)\log |\mathcal{R}| (\delta^{-2} + \log \log |\mathcal{R}|))$ oracle calls, where $\sigma$ is the approximation ratio of the oracle. We describe an extension of their algorithm and improve on previous results in various ways. Our FPTAS, which, like previous approaches, is based on the multiplicative weight update method, computes close-to-optimal primal and dual solutions using $\mathcal{O}\left(\frac{|\mathcal{C}|+ |\mathcal{R}|}{\delta^2} \log |\mathcal{R}|\right)$ oracle calls. We prove that our running time is optimal under certain assumptions, implying that no warm-start analysis of the algorithm is possible. A major novelty of our analysis is the concept of local weak duality, which illustrates that the algorithm optimizes (close to) independent parts of the instance separately. Interestingly, this implies that the computed solution is not only approximately $\ell_{\infty}$-minimal, but, among such solutions, its second-highest entry is also approximately minimal. We prove that this statement cannot be extended to the third-highest entry.
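The sketch below shows the multiplicative weight update idea on a toy instance of min-max resource sharing: each block is modelled as a finite list of candidate usage vectors, so the block oracle is exact enumeration. It is only an illustration of the generic scheme under these simplifying assumptions, not the paper's FPTAS with its approximate oracles and oracle-call bounds; all parameter values are hypothetical.

\begin{verbatim}
import numpy as np

def mwu_resource_sharing(blocks, iterations=300, eta=0.1):
    # Each block is a list of candidate usage vectors in R_{>=0}^R.
    # The averaged choice lies in sum_C conv(X_C), and its maximum entry
    # approaches the fractional optimum up to an error controlled by eta
    # and the number of iterations.
    R = blocks[0][0].shape[0]
    total = np.zeros(R)                        # accumulated resource usage
    for _ in range(iterations):
        prices = np.exp(eta * total)           # multiplicative weights on resources
        for candidates in blocks:
            # Oracle call: candidate with minimal price-weighted usage.
            best = min(candidates, key=lambda x: float(prices @ x))
            total += best
    return total / iterations

# Two blocks, each choosing between two usage vectors over three resources.
blocks = [
    [np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])],
    [np.array([1.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0])],
]
x = mwu_resource_sharing(blocks)
print(x, x.max())   # max entry is close to the fractional optimum 2/3
\end{verbatim}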
This paper considers the basic problem of scheduling jobs online with preemption to maximize the number of jobs completed by their deadline on $m$ identical machines. The main result is an $O(1)$-competitive deterministic algorithm for any number of machines $m > 1$.