Stimulated by practical applications arising in viral marketing, this paper investigates a novel Budgeted $k$-Submodular Maximization problem, defined as follows: given a finite set $V$, a budget $B$, and a $k$-submodular function $f: (k+1)^V \mapsto \mathbb{R}_+$, where each element $e \in V$ incurs a cost $c_i(e)$ when placed into the $i$-th set $S_i$, the problem asks to find a solution $\mathbf{s}=(S_1, S_2, \ldots, S_k)$ whose total cost does not exceed $B$ such that $f(\mathbf{s})$ is maximized. To address this problem, we propose two streaming algorithms with approximation guarantees. In particular, in the case where each element $e$ has the same cost across all $k$ sets, we propose a deterministic streaming algorithm with an approximation ratio of $\frac{1}{4}-\epsilon$ when $f$ is monotone and $\frac{1}{5}-\epsilon$ when $f$ is non-monotone. For the general case, we propose a randomized streaming algorithm with an expected approximation ratio of $\min\{\frac{\alpha}{2}, \frac{(1-\alpha)k}{(1+\beta)k-\beta} \}-\epsilon$ when $f$ is monotone and $\min\{\frac{\alpha}{2}, \frac{(1-\alpha)k}{(1+2\beta)k-2\beta} \}-\epsilon$ when $f$ is non-monotone, where $\beta=\max_{e\in V,\, i, j \in [k],\, i\neq j} \frac{c_i(e)}{c_j(e)}$ and $\epsilon, \alpha$ are fixed inputs.
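To make the interface concrete, the following is a minimal sketch of a single-pass, threshold-style streaming routine for this budgeted setting; the oracle `f`, the costs `c`, and the threshold `tau` are placeholders, and it is not a reproduction of the algorithms analyzed above or of their guarantees.

```python
# Sketch of a single-pass, threshold-based streaming routine for budgeted
# k-submodular maximization. The oracle `f`, the costs `c`, and the threshold
# `tau` are placeholders; this illustrates the problem interface, not the
# paper's specific algorithms or their approximation ratios.

def stream_k_submodular(stream, f, c, k, B, tau):
    """stream: iterable of elements; f: maps dict {e: i} -> value;
    c: c[e][i] = cost of putting e into the i-th set; B: budget; tau: threshold."""
    solution = {}          # partial k-set solution: element -> index in [0, k)
    spent = 0.0
    base = f(solution)
    for e in stream:
        # best position for e by marginal gain per unit cost
        best_i, best_ratio, best_gain = None, 0.0, 0.0
        for i in range(k):
            cost = c[e][i]
            if spent + cost > B:
                continue
            gain = f({**solution, e: i}) - base
            if cost > 0 and gain / cost > best_ratio:
                best_i, best_ratio, best_gain = i, gain / cost, gain
        # keep e only if its gain-per-cost density clears the threshold
        if best_i is not None and best_ratio >= tau:
            solution[e] = best_i
            spent += c[e][best_i]
            base += best_gain
    return solution, spent
```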
We give an algorithm to find a minimum cut in an edge-weighted directed graph with $n$ vertices and $m$ edges in $\tilde O(n\cdot \max(m^{2/3}, n))$ time. This improves on the 30-year-old bound of $\tilde O(nm)$ obtained by Hao and Orlin for this problem. Our main technique is to reduce the directed mincut problem to $\tilde O(\min(n/m^{1/3}, \sqrt{n}))$ calls of {\em any} maxflow subroutine. Using state-of-the-art maxflow algorithms, this yields the above running time. Our techniques also yield fast {\em approximation} algorithms for finding minimum cuts in directed graphs. For both edge and vertex weighted graphs, we give $(1+\epsilon)$-approximation algorithms that run in $\tilde O(n^2 / \epsilon^2)$ time.
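For context, the classical reduction uses $2(n-1)$ maxflow calls: fix any vertex $s$; the global mincut has $s$ on one of its two sides, so it equals $\min_{t\neq s}\min(\mathrm{maxflow}(s,t), \mathrm{maxflow}(t,s))$. A sketch of this baseline using networkx (not the $\tilde O(\min(n/m^{1/3}, \sqrt{n}))$-call reduction above):

```python
# Classical baseline: an edge-weighted directed global mincut equals
# min over t != s of min(maxflow(s, t), maxflow(t, s)) for any fixed s,
# i.e. 2(n-1) maxflow calls. The paper's contribution is reducing this to
# ~O(min(n/m^{1/3}, sqrt(n))) maxflow calls; that reduction is not shown here.
import networkx as nx

def directed_global_mincut(G, capacity="capacity"):
    """G: nx.DiGraph with a `capacity` attribute on each edge."""
    nodes = list(G.nodes)
    s = nodes[0]
    best = float("inf")
    for t in nodes[1:]:
        # the minimum cut has s on one side or the other; try both orientations
        best = min(best,
                   nx.maximum_flow_value(G, s, t, capacity=capacity),
                   nx.maximum_flow_value(G, t, s, capacity=capacity))
    return best
```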
We give two approximation algorithms solving the Stochastic Boolean Function Evaluation (SBFE) problem for symmetric Boolean functions. The first is an $O(\log n)$-approximation algorithm based on the submodular goal-value approach of Deshpande, Hellerstein and Kletenik. Our second algorithm, which is simple, is based on the algorithm solving the SBFE problem for $k$-of-$n$ functions, due to Salloum, Breuer, and Ben-Dov. It achieves an approximation factor of $B-1$, where $B$ is the number of blocks of 0's and 1's in the standard vector representation of the symmetric Boolean function. As part of the design of the first algorithm, we prove that the goal value of any symmetric Boolean function is less than $n(n+1)/2$. Finally, we give an example showing that for symmetric Boolean functions, the minimum expected verification cost and the minimum expected evaluation cost are not necessarily equal. This contrasts with a previous result of Das, Jafarpour, Orlitsky, Pan and Suresh, which showed that equality holds in the unit-cost case.
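As a small illustration of the parameter $B$: a symmetric Boolean function on $n$ variables is determined by its value vector, listing its value at each Hamming weight $0,\ldots,n$, and $B$ counts the maximal runs of equal values in that vector. The helper below only pins down this definition and is not part of the algorithms above.

```python
# The value vector of a symmetric Boolean function f on n variables lists
# f's value for each Hamming weight 0..n. The block count B below is the
# number of maximal runs of equal values, as used in the (B-1) factor.
def block_count(value_vector):
    B = 1
    for prev, cur in zip(value_vector, value_vector[1:]):
        if cur != prev:
            B += 1
    return B

# Example: majority on 5 variables has value vector [0,0,0,1,1,1], so B = 2
# and the second algorithm's factor is B - 1 = 1 (k-of-n functions all have B = 2).
assert block_count([0, 0, 0, 1, 1, 1]) == 2
```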
Continuous DR-submodular functions are a class of generally non-convex/non-concave functions that satisfy the Diminishing Returns (DR) property, which implies that they are concave along non-negative directions. Existing work has studied monotone continuous DR-submodular maximization subject to a convex constraint and provided efficient algorithms with approximation guarantees. In many applications, such as computing the stability number of a graph, the monotone DR-submodular objective function has the additional property of being strongly concave along non-negative directions (i.e., strongly DR-submodular). In this paper, we consider a subclass of $L$-smooth monotone DR-submodular functions that are strongly DR-submodular and have a bounded curvature, and we show how to exploit such additional structure to obtain faster algorithms with stronger guarantees for the maximization problem. We propose a new algorithm that matches the provably optimal $1-\frac{c}{e}$ approximation ratio after only $\lceil\frac{L}{\mu}\rceil$ iterations, where $c\in[0,1]$ and $\mu\geq 0$ are the curvature and the strong DR-submodularity parameter, respectively. Furthermore, we study the Projected Gradient Ascent (PGA) method for this problem, and provide a refined analysis of the algorithm with an improved $\frac{1}{1+c}$ approximation ratio (compared to $\frac{1}{2}$ in prior works) and a linear convergence rate. Experimental results illustrate and validate the efficiency and effectiveness of our proposed algorithms.
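Since the abstract refers to the standard PGA method, here is a minimal sketch of projected gradient ascent with a generic projection oracle; the constant step size and iteration count are illustrative, not the tuned choices behind the $\frac{1}{1+c}$ analysis.

```python
import numpy as np

# Minimal Projected Gradient Ascent sketch for maximizing a smooth function
# over a convex set. `grad` and `project` are problem-specific oracles; the
# constant step size 1/L is illustrative, not the paper's analyzed schedule.
def pga(grad, project, x0, L, iters):
    x = project(np.asarray(x0, dtype=float))
    for _ in range(iters):
        x = project(x + grad(x) / L)   # ascent step, then project back
    return x

# Example: maximize the concave quadratic -||x - 1||^2 / 2 over the box [0,1]^d
d = 3
x_star = pga(grad=lambda x: 1.0 - x,
             project=lambda x: np.clip(x, 0.0, 1.0),
             x0=np.zeros(d), L=1.0, iters=50)
```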
Majority-SAT is the problem of determining whether an input $n$-variable formula in conjunctive normal form (CNF) has at least $2^{n-1}$ satisfying assignments. Majority-SAT and related problems have been studied extensively in various AI communities interested in the complexity of probabilistic planning and inference. Although Majority-SAT has been known to be PP-complete for over 40 years, the complexity of a natural variant has remained open: Majority-$k$SAT, where the input CNF formula is restricted to have clause width at most $k$. We prove that for every $k$, Majority-$k$SAT is in P. In fact, for any positive integer $k$ and rational $\rho \in (0,1)$ with bounded denominator, we give an algorithm that can determine whether a given $k$-CNF has at least $\rho \cdot 2^n$ satisfying assignments, in deterministic linear time (whereas the previous best-known algorithm ran in exponential time). Our algorithms have interesting positive implications for counting complexity and the complexity of inference, significantly reducing the known complexities of related problems such as E-MAJ-$k$SAT and MAJ-MAJ-$k$SAT. At the heart of our approach is an efficient method for solving threshold counting problems by extracting sunflowers found in the corresponding set system of a $k$-CNF. We also show that the tractability of Majority-$k$SAT is somewhat fragile. For the closely related GtMajority-SAT problem (where we ask whether a given formula has greater than $2^{n-1}$ satisfying assignments) which is known to be PP-complete, we show that GtMajority-$k$SAT is in P for $k\le 3$, but becomes NP-complete for $k\geq 4$. These results are counterintuitive, because the ``natural'' classifications of these problems would have been PP-completeness, and because there is a stark difference in the complexity of GtMajority-$k$SAT and Majority-$k$SAT for all $k\ge 4$.
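To pin down the threshold decision problem, here is the exponential brute-force baseline that the linear-time result improves upon; the sunflower-based algorithm itself is the paper's contribution and is not reproduced here.

```python
from itertools import product
from fractions import Fraction

# Exponential brute-force baseline for the threshold decision problem:
# does a CNF over n variables have at least rho * 2^n satisfying assignments?
# (The paper's deterministic linear-time algorithm for k-CNFs is its main
# result and is not shown here.) Clauses are lists of nonzero ints:
# literal v means "variable |v| is True", -v means it is False.
def at_least_rho_fraction(clauses, n, rho):
    count = sum(
        all(any((lit > 0) == assignment[abs(lit) - 1] for lit in clause)
            for clause in clauses)
        for assignment in product([False, True], repeat=n)
    )
    return Fraction(count, 2 ** n) >= rho

# Majority-SAT instance: rho = 1/2. The single clause (x1 or x2) on n = 2
# variables has 3 of 4 satisfying assignments, so this returns True.
assert at_least_rho_fraction([[1, 2]], 2, Fraction(1, 2))
```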
We introduce a new algorithm to efficiently compute the functions belonging to a suitable set $\mathcal F$ defined as follows: $f\in \mathcal F$ means that $f(s,x)$, with $s\in A\subset \mathbb R$ fixed and $x>0$, has a power series expansion centred at $x_0=1$ with radius of convergence greater than or equal to $1$; moreover, it satisfies a difference equation of step $1$, and the Euler-Maclaurin summation formula can be applied to $f$. Denoting Euler's Gamma function by $\Gamma$, we will show, for $x>0$, that $\log \Gamma(x)$, the digamma function $\psi(x)$, the polygamma functions $\psi^{(w)}(x)$, $w\in \mathbb N$, $w\ge1$, and, for fixed $s>1$, the Hurwitz zeta function $\zeta(s,x)$ and its first partial derivative $\frac{\partial\zeta}{\partial s}(s,x)$ are in $\mathcal F$. In all these cases the power series involved will depend on the values of $\zeta(u)$, $u>1$, where $\zeta$ is Riemann's zeta function. As a by-product, we will also show how to compute efficiently the Dirichlet $L$-functions $L(s,\chi)$ and $L^\prime(s,\chi)$, $s>1$, $\chi$ being a primitive Dirichlet character, by inserting the reflection formulae of $\zeta(s,x)$ and $\frac{\partial\zeta}{\partial s}(s,x)$ into the first step of the Fast Fourier Transform algorithm. Moreover, we will obtain some new formulae and algorithms for the Dirichlet $\beta$-function and for the Catalan constant $G$. Finally, we will study the case of the Bateman $G$-function. In the last section we will also describe some tests showing an important performance gain with respect to a standard multiprecision implementation of $\zeta(s,x)$ and $\frac{\partial\zeta}{\partial s}(s,x)$, $s>1$, $x>0$.
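As a concrete instance of this scheme, $\log\Gamma$ has the classical expansion $\log \Gamma(1+z) = -\gamma z + \sum_{u\ge 2} (-1)^u \zeta(u) z^u/u$ for $|z|<1$, and the step-$1$ difference equation $\log\Gamma(x+1)=\log\Gamma(x)+\log x$ shifts any $x>0$ into the disc of convergence. A minimal mpmath sketch follows; the fixed truncation is illustrative only, whereas the algorithm above controls the error rigorously and uses Euler-Maclaurin refinements not shown here.

```python
from mpmath import mp, zeta, euler, log, mpf

mp.dps = 30  # working precision (illustrative)

# log Gamma via the power series centred at 1,
#   log Gamma(1+z) = -euler*z + sum_{u>=2} (-1)^u zeta(u) z^u / u,  |z| < 1,
# combined with the step-1 difference equation
#   log Gamma(x+1) = log Gamma(x) + log x.
def log_gamma(x, terms=120):
    x = mpf(x)
    shift = mpf(0)
    while x > mpf("1.5"):          # shift down into the disc of convergence
        x -= 1                     # log Gamma(x+1) = log Gamma(x) + log x
        shift += log(x)
    while x < mpf("0.5"):          # shift up for small arguments
        shift -= log(x)
        x += 1
    z = x - 1                      # now |z| <= 1/2, so the series converges fast
    s = -euler * z
    for u in range(2, terms):
        s += (-1) ** u * zeta(u) * z ** u / u
    return s + shift

# log_gamma(7.3) should agree with mpmath's loggamma(7.3) at this working precision.
```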
The hop-constrained Steiner tree problem (HSTP) is a generalization of the classical Steiner tree problem. It asks for a minimum-cost subtree that spans some specified nodes of a given graph, such that the number of edges on the path between each node of the tree and its root respects a given hop limit. This NP-hard problem has many variants, often modeled as integer linear programs. Two of these models are the so-called assignment and partial-ordering based models, which yield (to the best of our knowledge) the two best state-of-the-art formulations for the variant Steiner tree problem with revenues, budgets, and hop constraints (STPRBH). The solution of the HSTP and of its variants such as the STPRBH and the hop-constrained minimum spanning tree problem (HMSTP) is a hop-constrained tree, a rooted tree whose depth is bounded by a given hop limit. This paper provides theoretical results that show the polyhedral advantages of the partial-ordering model over the assignment model for this class of problems. Computational results in this paper and in the literature for the HSTP, STPRBH, and HMSTP show that the partial-ordering model outperforms the assignment model in practice, too; it has a stronger linear programming relaxation and solves more instances.
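The feasibility notion shared by the HSTP, STPRBH, and HMSTP is easy to state in code. The small checker below (with the tree given as child adjacency lists) only illustrates the definition of a hop-constrained tree; the assignment and partial-ordering ILP models themselves are not reproduced here.

```python
# A hop-constrained tree is a rooted tree in which every node is within a
# given number of edges (the hop limit) of the root. A small feasibility
# checker, assuming the tree is given as child adjacency lists.
def respects_hop_limit(children, root, hop_limit):
    stack = [(root, 0)]
    while stack:
        node, depth = stack.pop()
        if depth > hop_limit:
            return False
        stack.extend((child, depth + 1) for child in children.get(node, []))
    return True

# Example: the path r - a - b has depth 2, so it is feasible for hop limit 2
# but infeasible for hop limit 1.
tree = {"r": ["a"], "a": ["b"]}
assert respects_hop_limit(tree, "r", 2) and not respects_hop_limit(tree, "r", 1)
```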
We consider the Vector Scheduling problem on identical machines: we have $m$ machines and a set $J$ of $n$ jobs, where each job $j$ has a processing-time vector $p_j\in \mathbb{R}^d_{\geq 0}$. The goal is to find an assignment $\sigma:J\to [m]$ of jobs to machines so as to minimize the makespan $\max_{i\in [m]}\max_{r\in [d]} \sum_{j:\sigma(j)=i}p_{j,r}$. A natural lower bound on the optimal makespan is $\mathrm{lb} :=\max\{\max_{j\in J,r\in [d]}p_{j,r},\ \max_{r\in [d]}\sum_{j\in J}p_{j,r}/m\}$. Our main result is a very simple $O(\log d)$-approximation algorithm for vector scheduling with respect to this lower bound: we devise an algorithm that returns an assignment whose makespan is at most $O(\log d)\cdot\mathrm{lb}$. As an application, we show that the above guarantee leads to an $O(\log\log m)$-approximation for Stochastic Minimum-Norm Load Balancing (StochNormLB). In StochNormLB, we have $m$ identical machines, a set $J$ of $n$ independent stochastic jobs whose processing times are nonnegative random variables, and a monotone, symmetric norm $f:\mathbb{R}^m \to \mathbb{R}_{\geq 0}$. The goal is to find an assignment $\sigma:J\to [m]$ that minimizes the expected $f$-norm of the induced machine-load vector, where the load on machine $i$ is the (random) total processing time assigned to it. Our $O(\log\log m)$-approximation guarantee is in fact much stronger: we obtain an assignment that is simultaneously an $O(\log\log m)$-approximation for StochNormLB with all monotone, symmetric norms. This approximation factor significantly improves upon the $O(\log m/\log\log m)$-approximation of (Ibrahimpur and Swamy, FOCS 2020) for StochNormLB, and is a consequence of a more general black-box reduction that we present: a $\gamma(d)$-approximation for $d$-dimensional vector scheduling with respect to the lower bound $\mathrm{lb}$ yields a simultaneous $\gamma(\log m)$-approximation for StochNormLB with all monotone, symmetric norms.
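As a point of reference, the lower bound $\mathrm{lb}$ and a natural greedy assignment are easy to compute; the greedy rule below is only a baseline for comparison, not the $O(\log d)$-approximation algorithm above.

```python
import numpy as np

# Computes the lower bound lb from the abstract and a natural greedy
# assignment: each job goes to the machine whose resulting load vector has
# the smallest max coordinate. The greedy rule is a baseline for
# illustration, not the paper's O(log d)-approximation algorithm.
def lb_and_greedy(p, m):
    """p: (n, d) array of processing-time vectors; m: number of machines."""
    p = np.asarray(p, dtype=float)
    lb = max(p.max(), p.sum(axis=0).max() / m)   # max entry vs. per-coordinate average
    loads = np.zeros((m, p.shape[1]))
    sigma = []
    for j in range(p.shape[0]):
        # pick the machine minimizing the resulting per-coordinate max load
        i = int(np.argmin((loads + p[j]).max(axis=1)))
        loads[i] += p[j]
        sigma.append(i)
    return lb, sigma, loads.max()   # loads.max() is the achieved makespan
```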
We consider the problem of maximizing submodular functions in the single-pass streaming and secretary-with-shortlists models, both with random arrival order. For cardinality-constrained monotone functions, Agrawal, Shadravan, and Stein gave a single-pass $(1-1/e-\varepsilon)$-approximation algorithm using only linear memory, but the exponential dependence of their algorithm on $\varepsilon$ makes it impractical even for $\varepsilon=0.1$. We simplify both the algorithm and the analysis, obtaining an exponential improvement in the $\varepsilon$-dependence (in particular, $O(k/\varepsilon)$ memory). Extending these techniques, we also give a simple $(1/e-\varepsilon)$-approximation for non-monotone functions in $O(k/\varepsilon)$ memory. For the monotone case, we also give a corresponding unconditional hardness barrier of $1-1/e+\varepsilon$ for single-pass algorithms in randomly ordered streams, even assuming unlimited computation. Finally, we show that the algorithms are simple to implement and work well on real-world datasets.
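For orientation, the following is the standard single-pass threshold ("sieve"-style) baseline for monotone functions under a cardinality constraint, which achieves roughly $1/2-\varepsilon$ even for adversarial arrival order; the $(1-1/e-\varepsilon)$ random-order algorithm above is not reproduced here.

```python
# Standard single-pass threshold baseline for monotone submodular maximization
# under a cardinality constraint k (threshold-greedy in the spirit of
# sieve-streaming): keep an element iff its marginal gain clears OPT/(2k).
# A full implementation runs O(log(k)/eps) guesses of OPT in parallel; a
# single guess is shown here for brevity.
def sieve_streaming(stream, f, k, opt_guess):
    """f: set function on lists; opt_guess: a guess for the optimum value."""
    tau = opt_guess / (2 * k)
    S, val = [], f([])
    for e in stream:
        if len(S) < k and f(S + [e]) - val >= tau:
            S.append(e)
            val = f(S)
    return S, val
```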
This paper studies the expressive power of artificial neural networks (NNs) with rectified linear units. To study them as a model of real-valued computation, we introduce the concept of Max-Affine Arithmetic Programs and show equivalence between them and NNs concerning natural complexity measures. We then use this result to show that two fundamental combinatorial optimization problems can be solved with polynomial-size NNs, which is equivalent to the existence of very special strongly polynomial time algorithms. First, we show that for any undirected graph with $n$ nodes, there is an NN of size $\mathcal{O}(n^3)$ that takes the edge weights as input and computes the value of a minimum spanning tree of the graph. Second, we show that for any directed graph with $n$ nodes and $m$ arcs, there is an NN of size $\mathcal{O}(m^2n^2)$ that takes the arc capacities as input and computes a maximum flow. These results imply in particular that the solutions of the corresponding parametric optimization problems where all edge weights or arc capacities are free parameters can be encoded in polynomial space and evaluated in polynomial time, and that such an encoding is provided by an NN.
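The basic gadget behind such constructions is that two-input max and min are exact ReLU expressions: $\max(a,b) = a + \mathrm{ReLU}(b-a)$ and $\min(a,b) = a - \mathrm{ReLU}(a-b)$. A tiny numpy illustration follows; the $\mathcal{O}(n^3)$-size MST and $\mathcal{O}(m^2n^2)$-size maxflow networks are built from far more elaborate arrangements of such blocks, not shown here.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

# Exact two-input gadgets: max(a, b) = a + relu(b - a), min(a, b) = a - relu(a - b).
# Composed, such ReLU blocks let a network compare and select among edge
# weights or arc capacities, which is the starting point for encoding
# combinatorial computations in polynomial-size networks.
def relu_max(a, b):
    return a + relu(b - a)

def relu_min(a, b):
    return a - relu(a - b)

assert relu_max(3.0, 5.0) == 5.0 and relu_min(3.0, 5.0) == 3.0
```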
For a graph class $\mathcal{C}$, the $\mathcal{C}$-Edge-Deletion problem asks, for a given graph $G$, to delete the minimum number of edges from $G$ in order to obtain a graph in $\mathcal{C}$. We study the $\mathcal{C}$-Edge-Deletion problem where $\mathcal{C}$ is the class of permutation graphs, the class of interval graphs, or one of several related graph classes. It follows from Courcelle's Theorem that these problems are fixed-parameter tractable when parameterized by treewidth. In this paper, we present concrete FPT algorithms for these problems. By giving explicit algorithms and analyzing them in detail, we obtain algorithms that are significantly faster than those obtained by using Courcelle's theorem.
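To fix the problem statement, here is an exponential brute-force reference with a membership oracle for $\mathcal{C}$; the treewidth-based FPT algorithms are the contribution above and are not reproduced here.

```python
from itertools import combinations

# Brute-force reference for C-Edge-Deletion: try deleting ever-larger edge
# sets until the remaining graph lies in C. Exponential in the number of
# edges, so usable only on tiny instances. `in_class` is a membership
# oracle for C (e.g., a recognition routine for interval graphs).
def min_edge_deletion(edges, in_class):
    edges = list(edges)
    for t in range(len(edges) + 1):
        for deleted in combinations(edges, t):
            removed = set(deleted)
            remaining = [e for e in edges if e not in removed]
            if in_class(remaining):
                return t, deleted
    return None  # unreachable if the edgeless graph is in C
```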