Let $\alpha(G)$ denote the cardinality of a maximum independent set and $\mu(G)$ the size of a maximum matching in $G=\left( V,E\right) $. If $\alpha(G)+\mu(G)=\left\vert V\right\vert $, then $G$ is called a K\"{o}nig-Egerv\'{a}ry graph. The critical difference is $d(G)=\max\{d(I):I\in\mathrm{Ind}(G)\}$, where $\mathrm{Ind}(G)$ denotes the family of all independent sets of $G$ and $d(I)=\left\vert I\right\vert -\left\vert N(I)\right\vert $. If $A\in\mathrm{Ind}(G)$ with $d\left( A\right) =d(G)$, then $A$ is a critical independent set. For a graph $G$, let $\mathrm{diadem}(G)=\bigcup\{S:S$ is a critical independent set in $G\}$, and let $\varrho_{v}\left( G\right) $ denote the number of vertices $v\in V\left( G\right) $ such that $G-v$ is a K\"{o}nig-Egerv\'{a}ry graph. A graph is called almost bipartite if it has a unique odd cycle. In this paper, we show that if $G$ is an almost bipartite non-K\"{o}nig-Egerv\'{a}ry graph with the unique odd cycle $C$, then the following assertions are true: 1. every maximum matching of $G$ contains $\left\lfloor \left\vert V(C)\right\vert /2\right\rfloor $ edges belonging to $C$; 2. $V(C)\cup N_{G}\left[ \mathrm{diadem}\left( G\right) \right] =V$ and $V(C)\cap N_{G}\left[ \mathrm{diadem}\left( G\right) \right] =\emptyset$; 3. $\varrho_{v}\left( G\right) =\left\vert \mathrm{corona}\left( G\right) \right\vert -\left\vert \mathrm{diadem}\left( G\right) \right\vert $, where $\mathrm{corona}\left( G\right) $ is the union of all maximum independent sets of $G$; 4. $\varrho_{v}\left( G\right) =\left\vert V\right\vert $ if and only if $G=C_{2k+1}$ for some integer $k\geq1$.
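To make these definitions concrete, here is a brute-force sketch (usable only for tiny graphs) that computes $\alpha(G)$, $\mu(G)$, and the critical difference under the convention $d(I)=|I|-|N(I)|$ stated above; the helper names are illustrative, not from the paper.

```python
# Brute-force check of the Konig-Egervary condition alpha(G) + mu(G) = |V|
# and of the critical difference d(G) = max{ |I| - |N(I)| : I independent }.
# Illustrative sketch for tiny graphs only (exponential enumeration).
from itertools import combinations
import networkx as nx

def is_independent(G, S):
    return not any(G.has_edge(u, v) for u, v in combinations(S, 2))

def neighborhood(G, S):
    return {w for v in S for w in G.neighbors(v)}

def alpha_and_critical_difference(G):
    best_alpha, best_d = 0, 0
    for k in range(len(G) + 1):
        for S in combinations(G.nodes, k):
            if is_independent(G, S):
                best_alpha = max(best_alpha, len(S))
                best_d = max(best_d, len(S) - len(neighborhood(G, S)))
    return best_alpha, best_d

G = nx.cycle_graph(5)                                # C_5, a unique odd cycle
alpha, d = alpha_and_critical_difference(G)
mu = len(nx.max_weight_matching(G, maxcardinality=True))
print(alpha, mu, alpha + mu == G.number_of_nodes())  # 2 2 False
```

Run on $C_5$, this reports $\alpha=2$ and $\mu=2$, so $\alpha+\mu=4<5$, confirming that odd cycles are not K\"{o}nig-Egerv\'{a}ry graphs.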
In batch Kernel Density Estimation (KDE) for a kernel function $f$, we are given as input $2n$ points $x^{(1)}, \cdots, x^{(n)}, y^{(1)}, \cdots, y^{(n)}$ in dimension $m$, as well as a vector $w \in \mathbb{R}^n$. These inputs implicitly define the $n \times n$ kernel matrix $K$ given by $K[i,j] = f(x^{(i)}, y^{(j)})$. The goal is to compute a vector $v$ which approximates $K w$, i.e., with $|| Kw - v||_\infty < \varepsilon ||w||_1$. A recent line of work has proved fine-grained lower bounds conditional on the Strong Exponential Time Hypothesis (SETH). Backurs et al. first showed the hardness of KDE for Gaussian-like kernels with high dimension $m = \Omega(\log n)$ and large scale $B = \Omega(\log n)$. Alman et al. later developed new reductions in roughly this same parameter regime, leading to lower bounds for more general kernels, but only for very small error $\varepsilon < 2^{- \log^{\Omega(1)} (n)}$. In this paper, we refine the approach of Alman et al. to show new lower bounds in all parameter regimes, closing gaps between the known algorithms and lower bounds. In the setting where $m = C\log n$ and $B = o(\log n)$, we prove that Gaussian KDE requires $n^{2-o(1)}$ time to achieve additive error $\varepsilon < \Omega(m/B)^{-m}$, matching the performance of the polynomial method up to low-order terms. In the low-dimensional setting $m = o(\log n)$, we show that Gaussian KDE requires $n^{2-o(1)}$ time to achieve $\varepsilon$ such that $\log \log (\varepsilon^{-1}) > \tilde \Omega ((\log n)/m)$, matching the error bound achievable by the Fast Multipole Method (FMM) up to low-order terms. To our knowledge, no nontrivial lower bound was previously known in this regime. Our new lower bounds make use of an intricate analysis of a special case of the kernel matrix -- the `counting matrix'. As a key technical lemma, we give a novel approach to bounding the entries of its inverse by using Schur polynomials from algebraic combinatorics.
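For reference, the trivial algorithm instantiates $K$ explicitly and multiplies, taking $O(n^2 m)$ time; the lower bounds above say this is essentially unavoidable in the stated regimes. A minimal numpy sketch (kernel choice and names are illustrative):

```python
# Naive O(n^2 m) evaluation of the Gaussian KDE matrix-vector product Kw,
# the baseline that the n^{2-o(1)} lower bounds apply to. Illustrative sketch.
import numpy as np

def gaussian_kde_matvec(X, Y, w, B=1.0):
    # K[i, j] = exp(-B * ||x_i - y_j||^2), computed entry by entry.
    sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)  # (n, n)
    K = np.exp(-B * sq)
    return K @ w

rng = np.random.default_rng(0)
n, m = 256, 8
X, Y, w = rng.normal(size=(n, m)), rng.normal(size=(n, m)), rng.normal(size=n)
v = gaussian_kde_matvec(X, Y, w)
# An approximation algorithm must output v' with ||Kw - v'||_inf < eps * ||w||_1.
print(np.abs(v).max(), np.abs(w).sum())
```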
We devise a deterministic algorithm for minimum Steiner cut that uses $(\log n)^{O(1)}$ maximum flow calls and additional near-linear time. This improves on the algorithm of Li and Panigrahi (FOCS 2020), which uses $(\log n)^{O(1/\epsilon^4)}$ maximum flow calls and additional $O(m^{1+\epsilon})$ time, for any $\epsilon > 0$. Our algorithm thus shows that deterministic minimum Steiner cut can be solved in maximum flow time up to polylogarithmic factors, given any black-box deterministic maximum flow algorithm. Our main technical contribution is a novel deterministic graph decomposition method for terminal vertices that generalizes all existing $s$-strong partitioning methods, which we believe may have future applications.
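A useful contrast is the folklore baseline, which finds a minimum Steiner cut with $|T|-1$ max-flow calls: fix a terminal $s$ and take the cheapest $s$-$t$ min cut over the remaining terminals $t$ (any Steiner cut separates $s$ from some other terminal). A minimal networkx sketch of that baseline, not the paper's algorithm:

```python
# Baseline: minimum Steiner cut via |T| - 1 max-flow calls. The paper reduces
# this to (log n)^{O(1)} calls deterministically; this sketch is only the
# naive reference point.
import networkx as nx

def steiner_cut_value(G, terminals):
    D = G.to_directed()                  # flow routines read 'capacity' attrs
    s, *rest = list(terminals)
    return min(nx.minimum_cut_value(D, s, t) for t in rest)

G = nx.Graph()
G.add_weighted_edges_from([(0, 1, 3), (1, 2, 1), (2, 3, 3), (3, 0, 2)],
                          weight='capacity')
print(steiner_cut_value(G, [0, 2]))      # cheapest cut separating 0 from 2: 3
```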
We prove a lower bound on the communication complexity of computing the $n$-fold XOR of an arbitrary function $f$, in terms of the communication complexity and rank of $f$. We prove that $D(f^{\oplus n}) \geq n \cdot \Big(\frac{\Omega(D(f))}{\log \mathsf{rk}(f)} -\log \mathsf{rk}(f)\Big )$, where $D(f)$ and $D(f^{\oplus n})$ denote deterministic communication complexity, and $\mathsf{rk}(f)$ is the rank of the communication matrix of $f$. Our methods involve a new way to use information theory to reason about deterministic communication complexity.
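One standard fact behind rank-based bounds for XOR compositions: with $\pm 1$ values, the communication matrix of $f^{\oplus n}$ on product inputs is the $n$-fold Kronecker power of the matrix of $f$, so rank is exactly multiplicative. A small numerical illustration of this fact (not the paper's proof):

```python
# With +/-1 values, XOR of outputs becomes a product of signs, so the matrix
# of f xor f is the Kronecker product of f's sign matrix with itself, and
# rank multiplies. Illustrative check on a random sign matrix.
import numpy as np

rng = np.random.default_rng(1)
M = np.where(rng.random((4, 4)) < 0.5, 1, -1)   # sign matrix of some f
M2 = np.kron(M, M)                              # matrix of f xor f
r, r2 = np.linalg.matrix_rank(M), np.linalg.matrix_rank(M2)
print(r, r2, r2 == r ** 2)                      # rank of f, rank of f xor f
```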
The $N$-sum box protocol specifies a class of $\mathbb{F}_d$ linear functions $f(W_1,\cdots,W_K)=V_1W_1+V_2W_2+\cdots+V_KW_K\in\mathbb{F}_d^{m\times 1}$ that can be computed at information-theoretically optimal communication cost (minimum number of qudits $\Delta_1,\cdots,\Delta_K$ sent by the transmitters Alice$_1$, Alice$_2$,$\cdots$, Alice$_K$, respectively, to the receiver, Bob, per computation instance) over a noise-free quantum multiple access channel (QMAC), when the input data streams $W_k\in\mathbb{F}_d^{m_k\times 1}, k\in[K]$, originate at the distributed transmitters, who share quantum entanglement in advance but are not otherwise allowed to communicate with each other. In prior work, this set of optimally computable functions was identified in terms of a strong self-orthogonality (SSO) condition on the transfer function of the $N$-sum box. In this work we consider an `inverted' scenario, where instead of a feasible $N$-sum box transfer function, we are given an arbitrary $\mathbb{F}_d$ linear function, i.e., arbitrary matrices $V_k\in\mathbb{F}_d^{m\times m_k}$ are specified, and the goal is to characterize the set of all feasible communication cost tuples $(\Delta_1,\cdots,\Delta_K)$, not just based on $N$-sum box protocols, but across all possible quantum coding schemes. As our main result, we fully solve this problem for $K=3$ transmitters ($K\geq 4$ settings remain open). Coding schemes based on the $N$-sum box protocol (along with elementary ideas such as treating qudits as classical dits, time-sharing and batch-processing) are shown to be information-theoretically optimal in all cases. As an example, in the symmetric case where $\mathrm{rk}(V_1)=\mathrm{rk}(V_2)=\mathrm{rk}(V_3)\triangleq r_1$, $\mathrm{rk}([V_1, V_2])=\mathrm{rk}([V_2, V_3])=\mathrm{rk}([V_3, V_1])\triangleq r_2$, and $\mathrm{rk}([V_1, V_2, V_3])\triangleq r_3$ ($\mathrm{rk}$ = rank), the minimum total-download cost is $\max \{1.5r_1 + 0.75(r_3 - r_2), r_3\}$.
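The symmetric-case cost is a closed-form expression and easy to evaluate; a toy sketch, where the rank inputs $r_1 \le r_2 \le r_3$ are assumed to come from the specified matrices $V_1, V_2, V_3$:

```python
# Minimum total-download cost in the symmetric K = 3 case, per the formula
# max{1.5 r1 + 0.75 (r3 - r2), r3}. Illustrative helper, not from the paper.
def symmetric_download_cost(r1, r2, r3):
    # Assumes r1 <= r2 <= r3, as forced by the nested column spans.
    return max(1.5 * r1 + 0.75 * (r3 - r2), r3)

# Example: V_1 = V_2 = V_3 of full rank m gives r1 = r2 = r3 = m, cost 1.5 m.
print(symmetric_download_cost(4, 4, 4))   # 6.0
```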
We present a pseudopolynomial-time algorithm for the Knapsack problem that has running time $\widetilde{O}(n + t\sqrt{p_{\max}})$, where $n$ is the number of items, $t$ is the knapsack capacity, and $p_{\max}$ is the maximum item profit. This improves over the $\widetilde{O}(n + t \, p_{\max})$-time algorithm based on the convolution and prediction technique by Bateni et al.~(STOC 2018). Moreover, we give some evidence, based on a strengthening of the Min-Plus Convolution Hypothesis, that our running time might be optimal. Our algorithm uses two new technical tools, which might be of independent interest. First, we generalize the $\widetilde{O}(n^{1.5})$-time algorithm for bounded monotone min-plus convolution by Chi et al.~(STOC 2022) to the \emph{rectangular} case where the range of entries can be different from the sequence length. Second, we give a reduction from general knapsack instances to \emph{balanced} instances, where all items have nearly the same profit-to-weight ratio, up to a constant factor. Using these techniques, we can also obtain algorithms that run in time $\widetilde{O}(n + OPT\sqrt{w_{\max}})$, $\widetilde{O}(n + (nw_{\max}p_{\max})^{1/3}t^{2/3})$, and $\widetilde{O}(n + (nw_{\max}p_{\max})^{1/3} OPT^{2/3})$, where $OPT$ is the optimal total profit and $w_{\max}$ is the maximum item weight.
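As a reference point for the running-time comparison above, here is the textbook $O(nt)$ knapsack dynamic program; a minimal sketch of the classical baseline, not the paper's $\widetilde{O}(n + t\sqrt{p_{\max}})$ algorithm (which relies on rectangular bounded monotone min-plus convolution):

```python
# Classic O(n t) dynamic program for 0/1 Knapsack: dp[c] is the best total
# profit achievable with capacity c. Illustrative baseline implementation.
def knapsack(weights, profits, t):
    dp = [0] * (t + 1)
    for w, p in zip(weights, profits):
        for c in range(t, w - 1, -1):   # descending so each item is used once
            dp[c] = max(dp[c], dp[c - w] + p)
    return dp[t]

print(knapsack([3, 4, 2], [30, 50, 15], 6))  # items of weight 4 and 2: 65
```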
Given a set $P$ of $n$ points and a set $S$ of $n$ weighted disks in the plane, the disk coverage problem is to compute a subset of disks of smallest total weight such that the union of the disks in the subset covers all points of $P$. The problem is NP-hard. In this paper, we consider a line-separable unit-disk version of the problem where all disks have the same radius and their centers are separated from the points of $P$ by a line $\ell$. We present an $O(n^{3/2}\log^2 n)$ time algorithm for the problem, improving the previous best running time of $O(n^2\log n)$. Our result leads to an $O(n^{{7}/{2}}\log^2 n)$ time algorithm for the halfplane coverage problem (i.e., using $n$ weighted halfplanes to cover $n$ points), an improvement over the previous $O(n^4\log n)$ time solution. If all halfplanes are lower halfplanes, our algorithm runs in $O(n^{{3}/{2}}\log^2 n)$ time, while the previous best algorithm takes $O(n^2\log n)$ time. Using duality, the hitting set problems under the same settings can be solved with similar time complexities.
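To pin down the objective the algorithm optimizes, here is an exponential-time brute-force reference for weighted disk coverage; purely illustrative, with hypothetical names and a tiny line-separable instance (centers below $y=0$, points above):

```python
# Brute-force minimum-weight disk cover: try every subset of disks and keep
# the cheapest one whose union covers all points. Reference only; the paper's
# algorithm solves the line-separable unit-disk case in O(n^{3/2} log^2 n).
from itertools import combinations
import math

def covers(disk, point):
    cx, cy, r, _w = disk
    return math.hypot(point[0] - cx, point[1] - cy) <= r

def min_weight_cover(points, disks):
    best = math.inf
    for k in range(1, len(disks) + 1):
        for subset in combinations(disks, k):
            if all(any(covers(d, p) for d in subset) for p in points):
                best = min(best, sum(d[3] for d in subset))
    return best

points = [(0, 1), (2, 1)]                         # points above the line y = 0
disks = [(0, -1, 2.5, 3.0), (2, -1, 2.5, 2.0)]    # equal radii, centers below
print(min_weight_cover(points, disks))            # both disks needed: 5.0
```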
We propose and discuss a paradigm that allows for expressing \emph{data-parallel} rendering with the classically non-parallel ANARI API. We propose this as a new standard for data-parallel scientific visualization rendering, describe two different implementations of this paradigm, and use multiple sample integrations into existing applications to show how easily this paradigm can be adopted and what can be gained from doing so.
Given a (multi)graph $G$ which contains a bipartite subgraph with $\rho$ edges, what is the largest triangle-free subgraph of $G$ that can be found efficiently? We present an SDP-based algorithm that finds one with at least $0.8823 \rho$ edges, thus improving on the subgraph with $0.878 \rho$ edges obtained by the classic Max-Cut algorithm of Goemans and Williamson. On the other hand, by a reduction from H\aa stad's 3-bit PCP we show that it is NP-hard to find a triangle-free subgraph with $(25 / 26 + \epsilon) \rho \approx (0.961 + \epsilon) \rho$ edges. As an application, we classify the Maximum Promise Constraint Satisfaction Problem MaxPCSP($G$,$H$) for all bipartite $G$: Given an input (multi)graph $X$ which admits a $G$-colouring satisfying $\rho$ edges, find an $H$-colouring of $X$ that satisfies $\rho$ edges. This problem is solvable in polynomial time, apart from trivial cases, if $H$ contains a triangle, and is NP-hard otherwise.
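The $0.878\rho$ baseline above is the Goemans-Williamson semidefinite relaxation with random-hyperplane rounding (a cut is bipartite, hence triangle-free). A minimal cvxpy sketch of that baseline, not the paper's refined SDP; the solver choice and seed handling are illustrative:

```python
# Goemans-Williamson max-cut: solve the SDP relaxation, factor the solution
# into unit vectors, and round with a random hyperplane. The cut edges form a
# bipartite (so triangle-free) subgraph. Requires an SDP-capable solver.
import cvxpy as cp
import numpy as np

def gw_cut(n, edges, seed=0):
    X = cp.Variable((n, n), PSD=True)
    obj = sum((1 - X[i, j]) / 2 for i, j in edges)
    cp.Problem(cp.Maximize(obj), [cp.diag(X) == 1]).solve()
    # Factor X = V V^T, then round: vertex i gets sign(<v_i, g>).
    w, U = np.linalg.eigh(X.value)
    V = np.sqrt(np.clip(w, 0, None)) * U      # rows are the unit vectors v_i
    signs = np.sign(V @ np.random.default_rng(seed).normal(size=n))
    return [(i, j) for i, j in edges if signs[i] != signs[j]]

edges = [(0, 1), (1, 2), (2, 0)]              # a triangle: rho = 2
print(len(gw_cut(3, edges)))                  # keeps 2 of the 3 edges
```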
Sorting has a natural generalization where the input consists of: (1) a ground set $X$ of size $n$, (2) a partial oracle $O_P$ specifying some fixed partial order $P$ on $X$ and (3) a linear oracle $O_L$ specifying a linear order $L$ that extends $P$. The goal is to recover the linear order $L$ on $X$ using the fewest linear oracle queries. In this problem, we measure algorithmic complexity through three metrics: oracle queries to $O_L$, oracle queries to $O_P$, and the time spent. Any algorithm requires worst-case $\log_2 e(P)$ linear oracle queries to recover the linear order on $X$, where $e(P)$ denotes the number of linear extensions of $P$. Kahn and Saks presented the first algorithm that uses $\Theta(\log e(P))$ linear oracle queries (using $O(n^2)$ partial oracle queries and exponential time). The state-of-the-art for the general problem is by Cardinal, Fiorini, Joret, Jungers and Munro, who at STOC'10 managed to separate the linear and partial oracle queries into a preprocessing and query phase. They can preprocess $P$ using $O(n^2)$ partial oracle queries and $O(n^{2.5})$ time. Then, given $O_L$, they uncover the linear order on $X$ in $\Theta(\log e(P))$ linear oracle queries and $O(n + \log e(P))$ time -- which is worst-case optimal in the number of linear oracle queries but not in the time spent. For any $c \geq 1$, our algorithm can preprocess $O_P$ using $O(n^{1 + \frac{1}{c}})$ queries and time. Given $O_L$, we uncover $L$ using $\Theta(c \log e(P))$ queries and time. We show a matching lower bound: there exist positive constants $(\alpha, \beta)$ such that for any constant $c \geq 1$, any algorithm that uses at most $\alpha \cdot n^{1 + \frac{1}{c}}$ preprocessing queries must use worst-case at least $\beta \cdot c \log e(P)$ linear oracle queries. Thus, we solve the problem of sorting under partial information through an algorithm that is asymptotically tight across all three metrics.
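The benchmark $\log_2 e(P)$ can be made concrete by counting linear extensions directly; a tiny brute-force sketch (exponential time, illustrative names), only to show what the information-theoretic lower bound measures:

```python
# Count the linear extensions e(P) of a partial order P on {0, ..., n-1},
# given as a set of pairs (a, b) meaning a <_P b. Any sorting algorithm needs
# at least log2(e(P)) linear oracle queries in the worst case.
from itertools import permutations
from math import log2

def num_linear_extensions(n, relations):
    def respects(perm):
        pos = {x: i for i, x in enumerate(perm)}
        return all(pos[a] < pos[b] for a, b in relations)
    return sum(1 for perm in permutations(range(n)) if respects(perm))

e = num_linear_extensions(4, {(0, 1), (0, 2)})   # 0 below 1 and 2; 3 is free
print(e, log2(e))                                # 8 extensions => >= 3 queries
```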
We consider the problem of sampling from the posterior distribution of a $d$-dimensional coefficient vector $\boldsymbol{\theta}$, given linear observations $\boldsymbol{y} = \boldsymbol{X}\boldsymbol{\theta}+\boldsymbol{\varepsilon}$. In general, such posteriors are multimodal, and therefore challenging to sample from. This observation has prompted the exploration of various heuristics that aim at approximating the posterior distribution. In this paper, we study a different approach based on decomposing the posterior distribution into a log-concave mixture of simple product measures. This decomposition allows us to reduce sampling from a multimodal distribution of interest to sampling from a log-concave one, which is tractable and has been investigated in detail. We prove that, under mild conditions on the prior, for random designs, such measure decomposition is generally feasible when the number of samples per parameter $n/d$ exceeds a constant threshold. We thus obtain a provably efficient (polynomial time) sampling algorithm in a regime where this was previously not known. Numerical simulations confirm that the algorithm is practical, and reveal that it has attractive statistical properties compared to state-of-the-art methods.
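The payoff of such a decomposition is that log-concave targets admit simple samplers with convergence guarantees. As a generic illustration of that tractability (not the paper's algorithm), here is unadjusted Langevin dynamics for a smooth log-concave target $\pi \propto e^{-U}$; the step size and the Gaussian example model are illustrative:

```python
# Unadjusted Langevin algorithm (ULA): theta <- theta - h grad U + sqrt(2h) xi.
# For smooth log-concave U this converges to (approximately) pi ~ exp(-U).
import numpy as np

def ula(grad_U, theta0, step=1e-2, iters=5000, seed=0):
    rng = np.random.default_rng(seed)
    theta = np.array(theta0, dtype=float)
    for _ in range(iters):
        theta += -step * grad_U(theta) \
                 + np.sqrt(2 * step) * rng.normal(size=theta.shape)
    return theta

# Toy log-concave posterior: U(theta) = ||X theta - y||^2 / 2 + ||theta||^2 / 2.
rng = np.random.default_rng(1)
X, theta_star = rng.normal(size=(50, 5)), np.ones(5)
y = X @ theta_star + 0.1 * rng.normal(size=50)
grad_U = lambda th: X.T @ (X @ th - y) + th
print(ula(grad_U, np.zeros(5)))   # a sample concentrating near theta_star
```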