In this paper, we study upper bounds on the minimum length of frameproof codes introduced by Boneh and Shaw to protect copyrighted materials. A $q$-ary $(k,n)$-frameproof code of length $t$ is a $t \times n$ matrix having entries in $\{0,1,\ldots, q-1\}$ and with the property that for any column $\mathbf{c}$ and any other $k$ columns, there exists a row where the symbols of the $k$ columns are all different from the corresponding symbol (in the same row) of the column $\mathbf{c}$. We show the existence of $q$-ary $(k,n)$-frameproof codes of length $t = O(\frac{k^2}{q} \log n)$ for $q \leq k$, using the Lov\'asz Local Lemma, and of length $t = O(\frac{k}{\log(q/k)}\log(n/k))$ for $q > k$, using the expurgation method. Remarkably, for the practical case of $q \leq k$, our findings give codes whose length almost matches the lower bound $\Omega(\frac{k^2}{q\log k} \log n)$ on the length of any $q$-ary $(k,n)$-frameproof code and, more importantly, allow us to derive an algorithm of complexity $O(t n^2)$ for the construction of such codes.
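The frameproof property above is directly checkable by brute force on small instances. The following sketch implements the definition verbatim (function and variable names are ours, not from the paper; this is exponential in $k$ and only meant to illustrate the definition):

```python
from itertools import combinations

def is_frameproof(M, k):
    """Brute-force check of the (k, n)-frameproof property.

    M is a t x n matrix (list of rows) with entries in {0, ..., q-1}.
    For every column c and every set of k other columns (a "coalition"),
    some row must exist in which all k coalition columns differ from c.
    """
    t, n = len(M), len(M[0])
    cols = [[M[i][j] for i in range(t)] for j in range(n)]
    for c in range(n):
        others = [j for j in range(n) if j != c]
        for coalition in combinations(others, k):
            # look for a row where every coalition column differs from c
            if not any(all(cols[j][i] != cols[c][i] for j in coalition)
                       for i in range(t)):
                return False
    return True
```

For example, a single row with pairwise-distinct symbols is $(k,n)$-frameproof for every $k \le n-1$, while a row with a repeated symbol is not even $(1,n)$-frameproof.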
We introduce a family of graph parameters, called induced multipartite graph parameters, and study their computational complexity. First, we consider the following decision problem: an instance is an induced multipartite graph parameter $p$ and a given graph $G$, and for natural numbers $k\geq2$ and $\ell$, we must decide whether the maximum value of $p$ over all induced $k$-partite subgraphs of $G$ is at most $\ell$. We prove that this problem is W[1]-hard. Next, we consider a variant of this problem, where we must decide whether the given graph $G$ contains a sufficiently large induced $k$-partite subgraph $H$ such that $p(H)\leq\ell$. We show that for certain parameters this problem is para-NP-hard, while for others it is fixed-parameter tractable.
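The first decision problem above can be illustrated by exhaustive search on tiny graphs: enumerate all induced subgraphs, keep those that are $k$-partite (i.e. properly $k$-colorable), and maximize a parameter $p$ over them. A minimal sketch, with the edge count as an illustrative choice of $p$ (all names are ours):

```python
from itertools import combinations, product

def is_k_partite(vertices, edges, k):
    """Check whether the graph on `vertices` with `edges` is k-partite,
    i.e. admits a proper k-coloring (brute force; tiny graphs only)."""
    vs = list(vertices)
    for colouring in product(range(k), repeat=len(vs)):
        col = dict(zip(vs, colouring))
        if all(col[u] != col[v] for u, v in edges):
            return True
    return False

def max_param_over_induced_k_partite(n, edges, k, p):
    """Maximum of p(H) over all induced k-partite subgraphs H of the
    graph on vertex set {0, ..., n-1} with the given edge list."""
    best = None
    for r in range(n + 1):
        for S in combinations(range(n), r):
            sub = [(u, v) for u, v in edges if u in S and v in S]
            if is_k_partite(S, sub, k):
                val = p(S, sub)
                best = val if best is None else max(best, val)
    return best
```

On $K_4$ with $p$ the number of edges, the largest induced bipartite subgraph is a single edge, while the largest induced 3-partite subgraph is a triangle.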
In [Meurant, Pape\v{z}, Tich\'y; Numerical Algorithms 88, 2021], we presented an adaptive estimate for the energy norm of the error in the conjugate gradient (CG) method. In this paper, we extend the estimate to algorithms for solving linear approximation problems with a general, possibly rectangular matrix that are based on applying CG to a system with a positive (semi-)definite matrix built from the original matrix. We show that the resulting estimate preserves its key properties: it can be very cheaply evaluated, and it is numerically reliable in finite-precision arithmetic under some mild assumptions. We discuss algorithms based on Hestenes-Stiefel-like implementations (often called CGLS and CGNE in the literature) as well as on bidiagonalization (LSQR and CRAIG), and both unpreconditioned and preconditioned variants. The numerical experiments confirm the robustness and very satisfactory behaviour of the estimate.
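For readers unfamiliar with the CGLS structure the abstract refers to, the following is a minimal pure-Python sketch of CG applied implicitly to the normal equations $A^TAx = A^Tb$ for $\min \|Ax-b\|_2$. It contains none of the paper's error estimation or preconditioning; it only shows the algorithmic skeleton being discussed (naming and loop structure are ours):

```python
def cgls(A, b, iters=20):
    """Minimal CGLS sketch: CG on the normal equations A^T A x = A^T b,
    without ever forming A^T A.  A is a dense list of rows; no error
    estimate, no preconditioning, no stopping tolerance."""
    m, n = len(A), len(A[0])
    def Av(v):   # A @ v
        return [sum(A[i][j] * v[j] for j in range(n)) for i in range(m)]
    def ATv(v):  # A^T @ v
        return [sum(A[i][j] * v[i] for i in range(m)) for j in range(n)]
    x = [0.0] * n
    r = list(b)                      # residual b - A x
    s = ATv(r)                       # normal-equations residual A^T r
    p = list(s)
    gamma = sum(si * si for si in s)
    for _ in range(iters):
        q = Av(p)
        qq = sum(qi * qi for qi in q)
        if qq == 0.0:
            break
        alpha = gamma / qq
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * qi for ri, qi in zip(r, q)]
        s = ATv(r)
        gamma_new = sum(si * si for si in s)
        if gamma_new == 0.0:
            break
        beta = gamma_new / gamma
        p = [si + beta * pi for si, pi in zip(s, p)]
        gamma = gamma_new
    return x
```

Working with $A^Tr$ rather than $A^TA$ explicitly is exactly what keeps the per-iteration cost at two matrix-vector products.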
We study the time complexity of the discrete $k$-center problem and related (exact) geometric set cover problems when $k$ or the size of the cover is small. We obtain a plethora of new results:
- We give the first subquadratic algorithm for rectilinear discrete 3-center in 2D, running in $\widetilde{O}(n^{3/2})$ time.
- We prove a lower bound of $\Omega(n^{4/3-\delta})$ for rectilinear discrete 3-center in 4D, for any constant $\delta>0$, under a standard hypothesis about triangle detection in sparse graphs.
- Given $n$ points and $n$ weighted axis-aligned unit squares in 2D, we give the first subquadratic algorithm for finding a minimum-weight cover of the points by 3 unit squares, running in $\widetilde{O}(n^{8/5})$ time. We also prove a lower bound of $\Omega(n^{3/2-\delta})$ for the same problem in 2D, under the well-known APSP Hypothesis. For arbitrary axis-aligned rectangles in 2D, our upper bound is $\widetilde{O}(n^{7/4})$.
- We prove a lower bound of $\Omega(n^{2-\delta})$ for Euclidean discrete 2-center in 13D, under the Hyperclique Hypothesis. This lower bound nearly matches the straightforward upper bound of $\widetilde{O}(n^\omega)$, if the matrix multiplication exponent $\omega$ is equal to 2.
- We similarly prove an $\Omega(n^{k-\delta})$ lower bound for Euclidean discrete $k$-center in $O(k)$ dimensions for any constant $k\ge 3$, under the Hyperclique Hypothesis. This lower bound again nearly matches known upper bounds if $\omega=2$.
- We also prove an $\Omega(n^{2-\delta})$ lower bound for the problem of finding 2 boxes to cover the largest number of points, given $n$ points and $n$ boxes in 12D. This matches the straightforward near-quadratic upper bound.
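As a baseline for the improvements listed above, the naive algorithm for rectilinear (i.e. $L_\infty$) discrete $k$-center simply tries all $\binom{n}{k}$ choices of centers, which is $O(n^{k+1})$ time. A sketch of that baseline (not any of the paper's subquadratic algorithms; names are ours):

```python
from itertools import combinations

def rectilinear_discrete_k_center(points, k):
    """Brute-force discrete k-center under the L_inf (Chebyshev) metric:
    choose k centers among the input points minimising the maximum
    distance from any point to its nearest center.  O(n^(k+1)) baseline,
    for tiny inputs only."""
    def dist(p, q):
        return max(abs(a - b) for a, b in zip(p, q))
    best = None
    for centers in combinations(points, k):
        radius = max(min(dist(p, c) for c in centers) for p in points)
        if best is None or radius < best[0]:
            best = (radius, centers)
    return best
```

Already for $k = 3$ this baseline is cubic, which is what makes the $\widetilde{O}(n^{3/2})$ bound above a substantial improvement.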
In this paper, we present a low-diameter decomposition algorithm in the LOCAL model of distributed computing that succeeds with probability $1 - 1/\mathrm{poly}(n)$. Specifically, we show how to compute an $\left(\epsilon, O\left(\frac{\log n}{\epsilon}\right)\right)$ low-diameter decomposition in $O\left(\frac{\log^3(1/\epsilon)\log n}{\epsilon}\right)$ rounds. Further developing our techniques, we show new distributed algorithms for approximating general packing and covering integer linear programs in the LOCAL model. For packing problems, our algorithm finds a $(1-\epsilon)$-approximate solution in $O\left(\frac{\log^3 (1/\epsilon) \log n}{\epsilon}\right)$ rounds with probability $1 - 1/\mathrm{poly}(n)$. For covering problems, our algorithm finds a $(1+\epsilon)$-approximate solution in $O\left(\frac{\left(\log \log n + \log (1/\epsilon)\right)^3 \log n}{\epsilon}\right)$ rounds with probability $1 - 1/\mathrm{poly}(n)$. These results improve upon the previous $O\left(\frac{\log^3 n}{\epsilon}\right)$-round algorithm by Ghaffari, Kuhn, and Maus [STOC 2017], which is based on network decompositions. Our algorithms are near-optimal for many fundamental combinatorial graph optimization problems in the LOCAL model, such as minimum vertex cover and minimum dominating set, as their $(1\pm \epsilon)$-approximate solutions require $\Omega\left(\frac{\log n}{\epsilon}\right)$ rounds to compute.
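The $\left(\epsilon, O\left(\frac{\log n}{\epsilon}\right)\right)$ guarantee above is the one achieved by the classical centralized exponential-shift clustering in the spirit of Miller, Peng, and Xu; the sketch below shows that centralized construction only, as a point of reference, and is not the paper's LOCAL algorithm:

```python
import random
from collections import deque

def exponential_shift_ldd(adj, eps, seed=0):
    """Centralised sketch of exponential-shift low-diameter decomposition:
    each vertex v draws a shift delta_v ~ Exp(eps), and every vertex u
    joins the cluster of the vertex v maximising delta_v - dist(v, u).
    Whp this cuts at most an eps fraction of edges and gives clusters of
    diameter O(log n / eps).  Assumes a connected unweighted graph given
    as adjacency lists."""
    rng = random.Random(seed)
    n = len(adj)
    delta = [rng.expovariate(eps) for _ in range(n)]
    def bfs(src):
        d = [None] * n
        d[src] = 0
        q = deque([src])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if d[w] is None:
                    d[w] = d[u] + 1
                    q.append(w)
        return d
    dists = [bfs(v) for v in range(n)]
    def score(v, u):
        d = dists[v][u]
        return float('-inf') if d is None else delta[v] - d
    # cluster[u] = the vertex whose shifted distance to u is largest
    return [max(range(n), key=lambda v: score(v, u)) for u in range(n)]
```

The distributed difficulty the paper addresses is simulating exactly this kind of shifted competition with only local communication and high-probability guarantees.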
The acyclic chromatic number of a graph is the least number of colors needed to properly color its vertices so that none of its cycles has only two colors. The acyclic chromatic index is the analogous graph parameter for edge colorings. We first show that the acyclic chromatic index is at most $2\Delta-1$, where $\Delta$ is the maximum degree of the graph. We then show that for all $\epsilon >0$ and for $\Delta$ large enough (depending on $\epsilon$), the acyclic chromatic number of the graph is at most $\lceil(2^{-1/3} +\epsilon) {\Delta}^{4/3} \rceil +\Delta+ 1$. Both results improve long chains of previous successive advances. Both are algorithmic, in the sense that the colorings are generated by randomized algorithms. However, in contrast with extant approaches, where the randomized algorithms assume the availability of enough colors to guarantee properness deterministically, and use additional colors for randomization in dealing with the bichromatic cycles, our algorithms may initially generate colorings that are not necessarily proper; they only aim at avoiding cycles where all pairs of edges, or vertices, that are one edge, or vertex, apart in a traversal of the cycle are homochromatic (of the same color). When this goal is reached, they check for properness and if necessary they repeat until properness is attained.
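The defining condition can be restated as: the coloring is proper and the union of any two color classes induces a forest. A small checker implementing exactly that restatement (a verification sketch, not the paper's randomized coloring algorithms):

```python
from itertools import combinations

def is_acyclic_coloring(adj, col):
    """Check that `col` is a proper vertex colouring of the graph `adj`
    (adjacency lists) in which no cycle uses only two colours, i.e.
    the union of any two colour classes induces a forest."""
    n = len(adj)
    # properness: no edge joins two vertices of the same colour
    if any(col[u] == col[v] for u in range(n) for v in adj[u]):
        return False
    for a, b in combinations(set(col), 2):
        verts = [v for v in range(n) if col[v] in (a, b)]
        # union-find cycle detection on the induced two-colour subgraph
        parent = {v: v for v in verts}
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x
        for u in verts:
            for v in adj[u]:
                if v > u and col[v] in (a, b):
                    ru, rv = find(u), find(v)
                    if ru == rv:
                        return False      # bichromatic cycle found
                    parent[ru] = rv
    return True
```

On the 4-cycle, the proper 2-coloring is rejected (the whole cycle is bichromatic), while a proper coloring using a third color on one vertex is accepted.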
We consider the adversarial linear contextual bandit setting, which allows for the loss functions associated with each of $K$ arms to change over time without restriction. Assuming the $d$-dimensional contexts are drawn from a fixed known distribution, the worst-case expected regret over the course of $T$ rounds is known to scale as $\tilde O(\sqrt{Kd T})$. Under the additional assumption that the density of the contexts is log-concave, we obtain a second-order bound of order $\tilde O(K\sqrt{d V_T})$ in terms of the cumulative second moment of the learner's losses $V_T$, and a closely related first-order bound of order $\tilde O(K\sqrt{d L_T^*})$ in terms of the cumulative loss of the best policy $L_T^*$. Since $V_T$ or $L_T^*$ may be significantly smaller than $T$, these improve over the worst-case regret whenever the environment is relatively benign. Our results are obtained using a truncated version of the continuous exponential weights algorithm over the probability simplex, which we analyse by exploiting a novel connection to the linear bandit setting without contexts.
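For intuition about the algorithmic ingredient named above, the finite-expert analogue of exponential weights (the Hedge update) is shown below; the paper's algorithm is a truncated *continuous* exponential weights over the probability simplex, which this sketch does not reproduce:

```python
import math

def hedge(losses, eta):
    """Finite-expert exponential weights (Hedge): maintain a weight per
    expert and multiply it by exp(-eta * loss) each round; play the
    normalised weights as a distribution.  Shown only as the discrete
    analogue of the continuous exponential weights used in the paper."""
    K = len(losses[0])
    w = [1.0] * K
    for round_losses in losses:
        w = [wi * math.exp(-eta * li) for wi, li in zip(w, round_losses)]
    Z = sum(w)
    return [wi / Z for wi in w]
```

After a few rounds the distribution concentrates exponentially fast on the expert with the smallest cumulative loss.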
Given a basic block of instructions, finding a schedule that requires the minimum number of registers for evaluation is a well-known problem. The problem is NP-complete when the dependences among instructions form a directed acyclic graph instead of a tree. We strive to find efficient approximation algorithms for this problem not simply because it is an interesting graph optimization problem in theory; a good solution is also an essential component in solving the more complex instruction scheduling problem on GPUs. In this paper, we start by explaining why this problem is important in GPU instruction scheduling. We then explore two different approaches to tackling it. First, we model it as a constraint-programming problem; using a state-of-the-art CP-SAT solver, we can find optimal answers for much larger cases than previous works on a modest desktop PC. Second, guided by the optimal answers, we design and evaluate heuristics that can be applied to polynomial-time list scheduling algorithms. A combination of those heuristics achieves register-pressure results that are, on average, about 17\% higher than the optimal minimum. However, in nearly 6\% of the cases, the register pressure of the heuristic approach is 50\% higher than the optimal minimum.
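The tree case mentioned above is the one solved exactly in polynomial time by the classic Sethi-Ullman labeling, sketched here (binary expression trees only; the DAG generalization is the NP-complete problem the abstract targets):

```python
def sethi_ullman(node):
    """Minimum number of registers needed to evaluate an expression
    *tree* (Sethi-Ullman labelling).  A node is either a leaf value or a
    (left, right) tuple.  If the subtrees need unequal register counts,
    evaluate the more demanding one first and reuse its registers;
    if they need the same count, one extra register is unavoidable."""
    if not isinstance(node, tuple):
        return 1
    l = sethi_ullman(node[0])
    r = sethi_ullman(node[1])
    return max(l, r) if l != r else l + 1
```

For DAGs, shared subexpressions break the "evaluate the heavier child first" argument, which is exactly why heuristics and CP-SAT formulations become necessary.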
Rueppel's conjecture on the linear complexity of the first $n$ terms of the sequence $(1,1,0,1,0^3,1,0^7,1,0^{15},\ldots)$ was first proved by Dai using the Euclidean algorithm. We have previously shown that we can attach a homogeneous (annihilator) ideal of $F[x,z]$ to the first $n$ terms of a sequence over a field $F$ and construct a pair of generating forms for it. This approach gives another proof of Rueppel's conjecture. We also prove additional properties of these forms and deduce the outputs of the LFSR synthesis algorithm applied to the first $n$ terms. Further, dehomogenising the leading generators yields the minimal polynomials of Dai.
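The sequence in question has ones exactly at positions $2^j - 1$, and Rueppel's conjecture (proved by Dai) says its linear complexity profile is perfect: $L(n) = \lceil n/2 \rceil$. This is checkable directly with the Berlekamp-Massey algorithm over $\mathrm{GF}(2)$, the standard LFSR synthesis algorithm; a sketch (not the paper's generating-forms approach):

```python
def linear_complexity(s):
    """Linear complexity of the binary sequence s (list of 0/1) via the
    Berlekamp-Massey algorithm over GF(2)."""
    C, B = [1], [1]      # current and previous connection polynomials
    L, m = 0, 1
    for n in range(len(s)):
        d = s[n]         # discrepancy
        for i in range(1, L + 1):
            d ^= C[i] & s[n - i]
        if d:
            T = C[:]
            if len(C) < len(B) + m:
                C += [0] * (len(B) + m - len(C))
            for i, b in enumerate(B):
                C[i + m] ^= b        # C(x) += x^m * B(x)
            if 2 * L <= n:
                L, B, m = n + 1 - L, T, 1
                continue
        m += 1
    return L

# Rueppel's sequence: 1 at position i iff i+1 is a power of two
rueppel = [1 if (i + 1) & i == 0 else 0 for i in range(64)]
```

Running `linear_complexity` on each prefix traces out the profile $1, 1, 2, 2, 3, 3, \ldots$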
In this paper we obtain complexity bounds for computational problems on algebraic power series over several commuting variables. The power series are specified by systems of polynomial equations: a formalism closely related to weighted context-free grammars. We focus on three problems -- decide whether a given algebraic series is identically zero, determine whether all but finitely many coefficients are zero, and compute the coefficient of a specific monomial. We relate these questions to well-known computational problems on arithmetic circuits and thereby show that all three problems lie in the counting hierarchy. Our main result improves the best known complexity bound on deciding zeroness of an algebraic series. This problem is known to lie in PSPACE by reduction to the decision problem for the existential fragment of the theory of real closed fields. Here we show that the problem lies in the counting hierarchy by reduction to the problem of computing the degree of a polynomial given by an arithmetic circuit. As a corollary we obtain new complexity bounds on multiplicity equivalence of context-free grammars restricted to a bounded language, language inclusion of a nondeterministic finite automaton in an unambiguous context-free grammar, and language inclusion of a non-deterministic context-free grammar in an unambiguous finite automaton.
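To make the formalism concrete: an algebraic series specified by a polynomial equation can have its initial coefficients computed by fixed-point iteration on truncated series, since substitution into the defining system fixes one more coefficient per round. A sketch using the classic example $y = 1 + xy^2$, whose solution is the Catalan generating function (this illustrates the input formalism only, not the paper's counting-hierarchy algorithms):

```python
def series_mul(a, b, N):
    """Product of two power series (coefficient lists), truncated to
    degree N."""
    c = [0] * (N + 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j <= N:
                c[i + j] += ai * bj
    return c

def algebraic_series_coeffs(N):
    """Coefficients 0..N of the algebraic series y defined by the
    polynomial equation y = 1 + x * y^2 (the Catalan generating
    function), by fixed-point iteration on truncated series.  Each
    iteration fixes one further coefficient, so N+1 rounds suffice."""
    y = [0] * (N + 1)
    for _ in range(N + 1):
        y2 = series_mul(y, y, N)
        y = [1] + y2[:N]      # y := 1 + x * y^2, truncated to degree N
    return y
```

The iteration converges coefficient-wise because the right-hand side increases the valuation of any perturbation, the formal-power-series analogue of a contraction.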
We revisit two well-studied problems, Bounded Degree Vertex Deletion and Defective Coloring, where the input is a graph $G$ and a target degree $\Delta$, and we are asked either to edit or to partition the graph so that the maximum degree becomes bounded by $\Delta$. Both are known to be intractable when parameterized by treewidth. We revisit the parameterization by treewidth, as well as several related parameters, and present a more fine-grained picture of the complexity of both problems. Both admit straightforward DP algorithms with table sizes $(\Delta+2)^\mathrm{tw}$ and $(\chi_\mathrm{d}(\Delta+1))^{\mathrm{tw}}$ respectively, where tw is the input graph's treewidth and $\chi_\mathrm{d}$ the number of available colors. We show that both algorithms are optimal under the SETH, even if we replace treewidth by pathwidth. Along the way, we also obtain an algorithm for Defective Coloring with complexity quasi-linear in the table size, thus settling the complexity of both problems for these parameters. We then consider the more restricted parameter tree-depth and bridge the gap left by known lower bounds by showing that neither problem can be solved in time $n^{o(\mathrm{td})}$ under the ETH. In order to do so, we employ a recursive low-tree-depth construction that may be of independent interest. Finally, we show that for both problems a $\mathrm{vc}^{o(\mathrm{vc})}$ algorithm would violate the ETH, so the already known algorithms are optimal. Our proof relies on a new application of the technique of $d$-detecting families introduced by Bonamy et al. Our results, although mostly negative in nature, paint a clear picture of the complexity of both problems in the landscape of parameterized complexity, since in all cases we provide essentially matching upper and lower bounds.
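The Defective Coloring objective can be stated operationally: partition the vertices into color classes so that every class induces a subgraph of maximum degree at most $\Delta$. A brute-force solver for tiny graphs, useful as a reference against the $(\chi_\mathrm{d}(\Delta+1))^{\mathrm{tw}}$ DP discussed above (a sketch; names and the `max_colors` cap are ours):

```python
from itertools import product

def defective_chromatic_number(adj, Delta, max_colors=4):
    """Smallest k such that the graph `adj` (adjacency lists) has a
    Delta-defective k-coloring: every colour class must induce a
    subgraph of maximum degree at most Delta.  Brute force over all
    colourings; tiny graphs only.  Returns None if no k <= max_colors
    works."""
    n = len(adj)
    for k in range(1, max_colors + 1):
        for col in product(range(k), repeat=n):
            if all(sum(1 for v in adj[u] if col[v] == col[u]) <= Delta
                   for u in range(n)):
                return k
    return None
```

For $\Delta = 0$ this is exactly proper coloring, and for $\Delta \geq$ the maximum degree a single color suffices, which bounds the interesting regime in between.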