We show that for every fixed $k\geq 3$, the problems of deciding whether the termination/counter complexity of a given demonic VASS is $\mathcal{O}(n^k)$, $\Omega(n^{k})$, or $\Theta(n^{k})$ are coNP-complete, NP-complete, and DP-complete, respectively. We also classify the complexity of these problems for $k\leq 2$. This shows that the polynomial-time algorithm designed for strongly connected demonic VASS in previous works cannot be extended to the general case. Then, we prove that the same problems for VASS games are PSPACE-complete. Again, we also classify the complexity for $k\leq 2$. Interestingly, tractable subclasses of demonic VASS and VASS games are obtained by bounding certain structural parameters, which opens the way to applications in program analysis despite the presented lower complexity bounds.
Given a permutation $\pi:[k] \to [k]$, a function $f:[n] \to \mathbb{R}$ contains a $\pi$-appearance if there exist indices $1 \leq i_1 < i_2 < \dots < i_k \leq n$ such that for all $s,t \in [k]$, it holds that $f(i_s) < f(i_t)$ if and only if $\pi(s) < \pi(t)$. The function is $\pi$-free if it has no $\pi$-appearances. In this paper, we investigate the problem of testing whether an input function $f$ is $\pi$-free or whether at least $\varepsilon n$ values in $f$ need to be changed in order to make it $\pi$-free. This problem generalizes the well-studied problem of monotonicity testing and was first studied by Newman, Rabinovich, Rajendraprasad and Sohler (Random Structures and Algorithms 2019). We show that for every constant $k \in \mathbb{N}$, $\varepsilon \in (0,1)$, and permutation $\pi:[k] \to [k]$, there is a one-sided error $\varepsilon$-testing algorithm for $\pi$-freeness of functions $f:[n] \to \mathbb{R}$ that makes $\tilde{O}(n^{o(1)})$ queries. We improve significantly upon the previous best upper bound of $O(n^{1 - 1/(k-1)})$ by Ben-Eliezer and Canonne (SODA 2018). Our algorithm is adaptive, while the earlier best upper bound is known to be tight for nonadaptive algorithms.
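For concreteness, a $\pi$-appearance can be detected by exhaustive search over index tuples; the sketch below (a naive $O(n^k)$ check, not the sublinear tester described above; the function name and the one-line-notation encoding of $\pi$ are our own) makes the definition explicit.

```python
from itertools import combinations

def has_pi_appearance(f, pi):
    """f: list of real values, pi: permutation of [k] in one-line notation (a tuple).
    Returns True iff some k indices i_1 < ... < i_k are ordered by value exactly as pi orders [k]."""
    k = len(pi)
    for idx in combinations(range(len(f)), k):
        if all((f[idx[s]] < f[idx[t]]) == (pi[s] < pi[t])
               for s in range(k) for t in range(k)):
            return True
    return False

# pi = (2, 1, 3): the pattern "middle value first, then smallest, then largest".
print(has_pi_appearance([3.0, 1.0, 0.5, 4.0], (2, 1, 3)))   # True, e.g. at indices 0, 2, 3
print(has_pi_appearance([4.0, 3.0, 2.0, 1.0], (2, 1, 3)))   # False: f is strictly decreasing
```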
We study the following problem: given an integer $k \ge 3$ and a simple graph $G$, sample a connected induced $k$-node subgraph of $G$ uniformly at random. This is a fundamental graph mining primitive with applications in social network analysis, bioinformatics, and more. Surprisingly, no efficient algorithm is known for uniform sampling; the only somewhat efficient algorithms available yield samples that are only approximately uniform, with running times that are unclear or suboptimal. In this work we provide: (i) a near-optimal mixing time bound for a well-known random walk technique, (ii) the first efficient algorithm for truly uniform graphlet sampling, and (iii) the first sublinear-time algorithm for $\epsilon$-uniform graphlet sampling.
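As a point of reference, exactly uniform sampling is straightforward if efficiency is ignored; the sketch below (a rejection-sampling baseline over an adjacency-dict graph representation, not one of the algorithms contributed above) draws a uniform $k$-subset of vertices and keeps it only if it induces a connected subgraph, which is exactly uniform but may reject almost every draw on sparse graphs.

```python
import random
from collections import deque

def is_connected(nodes, adj):
    """BFS restricted to `nodes` to test whether the induced subgraph is connected."""
    nodes = set(nodes)
    start = next(iter(nodes))
    seen, queue = {start}, deque([start])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v in nodes and v not in seen:
                seen.add(v)
                queue.append(v)
    return seen == nodes

def uniform_graphlet(adj, k):
    """Exactly uniform connected induced k-node subgraph via rejection sampling."""
    vertices = list(adj)
    while True:
        sample = random.sample(vertices, k)   # uniform over all k-subsets
        if is_connected(sample, adj):         # condition on connectivity
            return sample

# toy example: a 5-cycle as a dict of neighbor sets (assumed input format)
adj = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}
print(uniform_graphlet(adj, 3))
```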
Linear temporal logic (LTL) is a specification language for finite sequences (called traces) widely used in program verification, motion planning in robotics, process mining, and many other areas. We consider the problem of learning LTL formulas for classifying traces; despite growing interest from the research community, existing solutions suffer from two limitations: they do not scale beyond small formulas, and they may exhaust computational resources without returning any result. We introduce a new algorithm addressing both issues: our algorithm is able to construct formulas an order of magnitude larger than previous methods, and it is anytime, meaning that in most cases it successfully outputs a formula, albeit possibly not of minimal size. We evaluate the performance of our algorithm using an open-source implementation against publicly available benchmarks.
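To make the classification task concrete, the sketch below (an assumed tuple-based formula encoding with finite-trace semantics and a "strong" next operator, not the paper's implementation) evaluates an LTL formula on a finite trace represented as a list of sets of atomic propositions.

```python
def holds(phi, trace, i=0):
    """Evaluate formula phi at position i of a finite trace (list of sets of propositions)."""
    op = phi[0]
    if op == "AP":
        return phi[1] in trace[i]
    if op == "not":
        return not holds(phi[1], trace, i)
    if op == "and":
        return holds(phi[1], trace, i) and holds(phi[2], trace, i)
    if op == "X":      # strong next: false at the last position
        return i + 1 < len(trace) and holds(phi[1], trace, i + 1)
    if op == "F":      # eventually
        return any(holds(phi[1], trace, j) for j in range(i, len(trace)))
    if op == "G":      # globally
        return all(holds(phi[1], trace, j) for j in range(i, len(trace)))
    if op == "U":      # until
        return any(holds(phi[2], trace, j) and
                   all(holds(phi[1], trace, k) for k in range(i, j))
                   for j in range(i, len(trace)))
    raise ValueError(f"unknown operator {op}")

# G(req -> F grant), written with not/and: every request is eventually granted.
phi = ("G", ("not", ("and", ("AP", "req"), ("not", ("F", ("AP", "grant"))))))
print(holds(phi, [{"req"}, set(), {"grant"}]))   # True: the request is granted
print(holds(phi, [{"req"}, set(), set()]))       # False: the request is never granted
```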
Given a set of $n$ input integers, the Equal Subset Sum problem asks us to find two distinct subsets with the same sum. In this paper we present an algorithm that runs in time $O^*(3^{0.387n})$ in the~average case, significantly improving over the $O^*(3^{0.488n})$ running time of the best known worst-case algorithm and the Meet-in-the-Middle benchmark of $O^*(3^{0.5n})$. Our algorithm generalizes to a number of related problems, such as the ``Generalized Equal Subset Sum'' problem, which asks us to assign a coefficient $c_i$ from a set $C$ to each input number $x_i$ such that $\sum_{i} c_i x_i = 0$. Our algorithm for the average-case version of this problem runs in~time $|C|^{(0.5-c_0/|C|)n}$ for some positive constant $c_0$, whenever $C=\{0, \pm 1, \dots, \pm d\}$ or $\{\pm 1, \dots, \pm d\}$ for some positive integer $d$ (with $O^*(|C|^{0.45n})$ when $|C|<10$). Our results extend to the~problem of finding ``nearly balanced'' solutions in which the target is a not-too-large nonzero offset $\tau$. Our approach relies on new structural results that characterize the probability that $\sum_{i} c_i x_i = \tau$ has a solution $c \in C^n$ when the $x_i$'s are chosen randomly; these results may be of independent interest. Our algorithm is inspired by the ``representation technique'' introduced by Howgrave-Graham and Joux. This requires several new ideas to overcome preprocessing hurdles that arise in the representation framework, as well as a novel application of dynamic programming in the solution recovery phase of the algorithm.
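For reference, the Generalized Equal Subset Sum decision task can be stated directly as exhaustive search; the sketch below (a trivial $|C|^n$-time check, not the algorithm above; function name and default target are our own) spells out what a solution is, with $C = \{-1, 0, 1\}$ recovering Equal Subset Sum.

```python
from itertools import product

def has_solution(x, C, tau=0):
    """Return a nonzero coefficient vector c in C^n with sum_i c_i * x_i == tau, or None."""
    for c in product(C, repeat=len(x)):
        if any(ci != 0 for ci in c) and sum(ci * xi for ci, xi in zip(c, x)) == tau:
            return c
    return None

# Equal Subset Sum corresponds to C = {-1, 0, 1}: coefficients +1 and -1 pick out the two subsets.
print(has_solution([1, 2, 3, 4], (-1, 0, 1)))   # (-1, -1, 1, 0): the subsets {3} and {1, 2} have equal sums
```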
In this work we analyse the parameterised complexity of propositional inclusion logic (PINC) and propositional independence logic (PIND). The problems of interest are model checking (MC) and satisfiability (SAT). The complexity of these problems is well understood in the classical (non-parameterised) setting. Mahmood and Meier (FoIKS 2020) recently studied the parameterised complexity of propositional dependence logic (PDL). As a continuation of their work, we classify inclusion and independence logic and thereby come closer to completing the picture with respect to the parameterised complexity of the three most studied logics in the propositional team semantics setting. We present results for each problem with respect to 8 different parameterisations. It turns out that for any team-based logic L whose atoms can be evaluated in polynomial time, MC parameterised by teamsize is FPT. As a corollary, we obtain FPT membership under the following parameterisations: formula-size, formula-depth, treewidth, and number of variables. The parameter teamsize shows interesting behavior for SAT: for PINC, the parameter teamsize is not meaningful, whereas for PDL and PIND satisfiability is paraNP-complete. Finally, we prove that when parameterised by arity, both MC and SAT are paraNP-complete for each of the considered logics.
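To illustrate what it means for an atom to be evaluable in polynomial time on a team, the sketch below (an assumed dict-based encoding of teams and the standard single-variable inclusion atom semantics; not code from this work) checks an inclusion atom $x \subseteq y$ on a team, i.e., a set of assignments.

```python
def satisfies_inclusion(team, x, y):
    """team: iterable of dicts mapping propositional variables to truth values.
    The team satisfies x ⊆ y iff every value taken by x in the team is also taken by y in the team."""
    x_values = {s[x] for s in team}
    y_values = {s[y] for s in team}
    return x_values <= y_values

team = [{"x": 0, "y": 1}, {"x": 1, "y": 0}]
print(satisfies_inclusion(team, "x", "y"))   # True: x takes values {0, 1}, all of which y also takes
print(satisfies_inclusion([{"x": 1, "y": 0}], "x", "y"))   # False: x takes value 1, y never does
```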
In this work, we study a range of constrained versions of the $k$-supplier and $k$-center problems, such as capacitated, fault-tolerant, and fair variants. These problems fall under a broad framework of constrained clustering. A unified framework for constrained clustering was proposed by Ding and Xu [SODA 2015] in the context of the $k$-median and $k$-means objectives. In this work, we extend this framework to the $k$-supplier and $k$-center objectives. This unified framework allows us to obtain results simultaneously for the following constrained versions of the $k$-supplier problem: $r$-gather, $r$-capacity, balanced, chromatic, fault-tolerant, strongly private, $\ell$-diversity, and fair $k$-supplier problems, with and without outliers. We obtain the following results. We give $3$- and $2$-approximation algorithms for the constrained $k$-supplier and $k$-center problems, respectively, with $\mathsf{FPT}$ running time $k^{O(k)} \cdot n^{O(1)}$, where $n = |C \cup L|$. Moreover, these approximation guarantees are tight; that is, for any constant $\epsilon>0$, no algorithm can achieve $(3-\epsilon)$- and $(2-\epsilon)$-approximation guarantees for the constrained $k$-supplier and $k$-center problems in $\mathsf{FPT}$ time, assuming $\mathsf{FPT} \neq \mathsf{W}[2]$. Furthermore, we study these constrained problems in the outlier setting. Our algorithm gives $3$- and $2$-approximation guarantees for the constrained outlier $k$-supplier and $k$-center problems, respectively, with $\mathsf{FPT}$ running time $(k+m)^{O(k)} \cdot n^{O(1)}$, where $n = |C \cup L|$ and $m$ is the number of outliers.
The framework outlined in [arXiv:2010.13024] provides an approximation algorithm for computing Nash equilibria of normal form games. Since NASH is a well-known PPAD-complete problem, this framework has potential applications to other PPAD problems. The correctness of this framework has been empirically validated on 4 well-studied $2\times 2$ games: Prisoner's Dilemma, Stag Hunt, Battle, and Chicken. In this paper, we provide the asymptotic time complexities of these methods and, in particular, verify that for $2\times 2$ games the worst-case complexity is linear in the number of actions an agent can choose from.
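For context on the $2\times 2$ setting, the sketch below (the textbook indifference-condition computation, not the framework of [arXiv:2010.13024]; the payoff matrices are an illustrative choice) computes the fully mixed Nash equilibrium of a $2\times 2$ game in constant time, when such an equilibrium exists.

```python
def mixed_nash_2x2(A, B):
    """A, B: 2x2 payoff matrices for the row and column player.
    Returns (p, q): probabilities of playing row 0 and column 0 in a fully mixed equilibrium.
    Assumes the denominators are nonzero, i.e., a fully mixed equilibrium exists."""
    # Row player mixes so that the column player is indifferent between the two columns.
    p = (B[1][1] - B[1][0]) / (B[0][0] - B[0][1] - B[1][0] + B[1][1])
    # Column player mixes so that the row player is indifferent between the two rows.
    q = (A[1][1] - A[0][1]) / (A[0][0] - A[0][1] - A[1][0] + A[1][1])
    return p, q

# Battle of the Sexes with an illustrative 2/1 payoff choice:
# each player attends their preferred event with probability 2/3 in the mixed equilibrium.
A = [[2, 0], [0, 1]]   # row player's payoffs
B = [[1, 0], [0, 2]]   # column player's payoffs
print(mixed_nash_2x2(A, B))   # (0.666..., 0.333...)
```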
We study the computational complexity of two well-known graph transversal problems, namely Subset Feedback Vertex Set and Subset Odd Cycle Transversal, by restricting the input to $H$-free graphs, that is, to graphs that do not contain some fixed graph~$H$ as an induced subgraph. By combining known and new results, we determine the computational complexity of both problems on $H$-free graphs for every graph $H$ except when $H=sP_1+P_4$ for some $s\geq 1$. As part of our approach, we introduce the Subset Vertex Cover problem and prove that it is polynomial-time solvable for $(sP_1+P_4)$-free graphs for every $s\geq 1$.
Many resource allocation problems in the cloud can be described as a basic Virtual Network Embedding Problem (VNEP): finding mappings of request graphs (describing the workloads) onto a substrate graph (describing the physical infrastructure). In the offline setting, the two natural objectives are profit maximization, i.e., embedding a maximal number of request graphs subject to the resource constraints, and cost minimization, i.e., embedding all requests at minimal overall cost. The VNEP can be seen as a generalization of classic routing and call admission problems, in which requests are arbitrary graphs whose communication endpoints are not fixed. Due to its applications, the problem has been studied intensively in the networking community. However, the underlying algorithmic problem is hardly understood. This paper presents the first fixed-parameter tractable approximation algorithms for the VNEP. Our algorithms are based on randomized rounding. Due to the flexible mapping options and the arbitrary request graph topologies, we show that a novel linear programming formulation is required: only this formulation enables the computation of convex combinations of valid mappings, as it must account for the structure of the request graphs. Accordingly, to capture the structure of request graphs, we introduce the graph-theoretic notions of extraction orders and extraction width, and show that the runtime of our algorithms is exponential in the request graphs' maximal extraction width. Hence, for request graphs of fixed extraction width, we obtain the first polynomial-time approximations. Studying the new notion of extraction orders, we show that (i) computing extraction orders of minimal width is NP-hard and (ii) computing decomposable LP solutions is in general NP-hard, even when request graphs are restricted to planar ones.
In this paper, we study the optimal convergence rates for distributed convex optimization problems over networks. We model the communication restrictions imposed by the network as a set of affine constraints and provide optimal complexity bounds for four different setups, namely when the function $F(x) \triangleq \sum_{i=1}^{m} f_i(x)$ is (i) strongly convex and smooth, (ii) strongly convex, (iii) smooth, or (iv) just convex. Our results show that Nesterov's accelerated gradient descent on the dual problem can be executed in a distributed manner and obtains the same optimal rates as in the centralized version of the problem (up to constant or logarithmic factors), with an additional cost related to the spectral gap of the interaction matrix. Finally, we discuss some extensions of the proposed setup, such as proximal-friendly functions, time-varying graphs, and improvement of the condition numbers.
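To illustrate how communication restrictions become affine constraints, the sketch below (plain dual gradient ascent for consensus with quadratic local objectives; the graph, step size, and data are illustrative, and this is the unaccelerated variant rather than the accelerated method discussed above) encodes the network through the edge-node incidence matrix $B$, so that $Bx = 0$ enforces agreement across every communication link.

```python
import numpy as np

edges = [(0, 1), (1, 2), (2, 3)]           # assumed communication graph: a path on 4 nodes
m = 4                                       # number of nodes / local functions
a = np.array([1.0, 2.0, 3.0, 4.0])          # local data; local objectives f_i(x_i) = 0.5*(x_i - a_i)^2

B = np.zeros((len(edges), m))               # edge-node incidence matrix: one row per edge
for r, (i, j) in enumerate(edges):
    B[r, i], B[r, j] = 1.0, -1.0            # B x = 0  <=>  x_i = x_j on every edge

lam = np.zeros(len(edges))                  # dual variables, one per affine constraint
alpha = 1.0 / np.linalg.norm(B @ B.T, 2)    # step size from the smoothness constant of the dual

for _ in range(200):
    x = a - B.T @ lam                       # x*(lam) minimizes sum_i f_i(x_i) + lam^T B x (closed form)
    lam = lam + alpha * (B @ x)             # gradient of the dual function is B x*(lam)

print(x)   # all entries approach the consensus optimum mean(a) = 2.5
```

The number of iterations needed here is governed by the eigenvalues of $B B^\top$, i.e., by the spectral properties of the interaction matrix, which mirrors the spectral-gap dependence mentioned in the abstract.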