Our input is a directed, rooted graph $G = (V \cup \{r\},E)$ where each vertex in $V$ has a partial order preference over its incoming edges. The preferences of a vertex extend naturally to preferences over arborescences rooted at $r$. We seek a popular arborescence in $G$, i.e., one for which there is no "more popular" arborescence. Popular arborescences have applications in liquid democracy and collective decision making; however, they need not exist in every input instance. The popular arborescence problem is to decide whether a given input instance admits a popular arborescence or not. We give a polynomial-time algorithm for this problem, whose computational complexity was previously unknown. Our algorithm is combinatorial and can be regarded as a primal-dual algorithm. It searches for an arborescence along with its dual certificate, a chain of subsets of $E$, witnessing its popularity. In fact, our algorithm solves the more general popular common base problem in the intersection of two matroids, where one matroid is the partition matroid defined by any partition $E = \bigcup_{v\in V} \delta(v)$ and the other is an arbitrary matroid on $E$ of rank $|V|$, with each $v \in V$ having a partial order over the elements of $\delta(v)$. We extend our algorithm to the case with forced or forbidden edges. We also study the related popular colorful forest (or, more generally, popular common independent set) problem, where edges are partitioned into color classes and the task is to find a colorful forest that is popular within the set of all colorful forests. For the case of weak rankings, we formulate the popular colorful forest polytope and thus show that a minimum-cost popular colorful forest can be computed efficiently. By contrast, we prove that it is NP-hard to compute a minimum-cost popular arborescence, even when rankings are strict.
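For concreteness, one standard way to formalize the popularity comparison (stated here as a hedged paraphrase under the usual convention that each vertex compares its unique incoming edge in the two arborescences, with equal or incomparable edges counting for neither side; the paper's precise definition should be consulted): writing
\[
\phi(A',A) \;=\; \bigl|\{\, v \in V : v \text{ strictly prefers its incoming edge in } A' \text{ to its incoming edge in } A \,\}\bigr|,
\]
an arborescence $A$ is popular if $\phi(A',A) \le \phi(A,A')$ for every arborescence $A'$ rooted at $r$.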
A pervasive methodological error is the post-hoc interpretation of $p$-values. A $p$-value $p$ is not the level at which we reject the null; it is the level at which we would have rejected the null had we chosen level $p$. We introduce the notion of a post-hoc $p$-value, which does admit this interpretation. We show that $p$ is a post-hoc $p$-value if and only if $1/p$ is an $e$-value. Among other things, this implies that the product of independent post-hoc $p$-values is a post-hoc $p$-value. Moreover, we generalize post-hoc validity to a sequential setting and find that $(p_t)_{t \geq 1}$ is a post-hoc anytime valid $p$-process if and only if $(1/p_t)_{t \geq 1}$ is an $e$-process. In addition, we show that if we admit randomized procedures, any non-randomized post-hoc $p$-value can be trivially improved. In fact, we find that this in some sense characterizes non-randomized post-hoc $p$-values. Finally, we argue that we need to go beyond $e$-values if we want to consider randomized post-hoc inference in its full generality.
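For intuition, here is a hedged sketch of one direction of the stated equivalence, under the standard definition of an $e$-value as a nonnegative random variable $E$ with $\mathbb{E}[E] \le 1$ under the null: if $E = 1/p$ is an $e$-value, then for every fixed level $\alpha \in (0,1]$, Markov's inequality gives
\[
\Pr\bigl(p \le \alpha\bigr) \;=\; \Pr\bigl(E \ge 1/\alpha\bigr) \;\le\; \alpha\,\mathbb{E}[E] \;\le\; \alpha,
\]
and post-hoc validity strengthens this fixed-level guarantee to levels chosen after seeing the data, in the sense made precise in the paper.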
Complexity classes such as $\#\mathbf{P}$, $\oplus\mathbf{P}$, $\mathbf{GapP}$, $\mathbf{OptP}$, $\mathbf{NPMV}$, and the class of fuzzy languages realised by polynomial-time fuzzy nondeterministic Turing machines, can all be described in terms of a class $\mathbf{NP}[S]$ for a suitable semiring $S$, defined via weighted Turing machines over $S$ just as $\mathbf{NP}$ is defined via classical nondeterministic Turing machines. Other complexity classes of decision problems can be lifted to the quantitative world using the same recipe as well, and the resulting classes relate to the original ones in the same way as weighted automata or logics relate to their unweighted counterparts. The article surveys these too-little-known connexions between weighted automata theory and computational complexity theory implicit in the existing literature, suggests a systematic approach to the study of weighted complexity classes, and presents several new observations strengthening the relation between both fields. In particular, it is proved that a natural extension of the Boolean satisfiability problem to weighted propositional logic is complete for the class $\mathbf{NP}[S]$ when $S$ is a finitely generated semiring. Moreover, a class of semiring-valued functions $\mathbf{FP}[S]$ is introduced for each semiring $S$ as a counterpart to the class $\mathbf{P}$, and the relations between $\mathbf{FP}[S]$ and $\mathbf{NP}[S]$ are considered.
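As a toy illustration of how such a weighted satisfiability problem can be evaluated by brute force (the weighting scheme, names, and semiring encodings below are our own illustrative assumptions, not the paper's formal definitions):
\begin{verbatim}
# Toy illustration (our own assumptions, not the paper's formal definitions):
# evaluating a "weighted SAT" instance over a commutative semiring S by brute
# force, as the semiring sum over all assignments of a product of weights.
from itertools import product

class Semiring:
    def __init__(self, zero, one, add, mul):
        self.zero, self.one, self.add, self.mul = zero, one, add, mul

NAT = Semiring(0, 1, lambda a, b: a + b, lambda a, b: a * b)        # counting
GF2 = Semiring(0, 1, lambda a, b: (a + b) % 2, lambda a, b: a * b)  # parity

def weighted_sat_value(clauses, weights, n_vars, S):
    """clauses: CNF as lists of signed ints; weights[v][b]: weight in S of
    setting variable v (0-indexed) to the Boolean value b."""
    total = S.zero
    for assignment in product([False, True], repeat=n_vars):
        if all(any((lit > 0) == assignment[abs(lit) - 1] for lit in c)
               for c in clauses):
            w = S.one
            for v, b in enumerate(assignment):
                w = S.mul(w, weights[v][b])
            total = S.add(total, w)
    return total

# (x1 or x2) and (not x1 or x3), with all weights equal to the semiring unit:
clauses = [[1, 2], [-1, 3]]
unit = [{False: 1, True: 1} for _ in range(3)]
print(weighted_sat_value(clauses, unit, 3, NAT))  # 4  (number of models)
print(weighted_sat_value(clauses, unit, 3, GF2))  # 0  (their parity)
\end{verbatim}
With the counting semiring this specializes to model counting and with the two-element field to parity, matching the intuition that $\#\mathbf{P}$ and $\oplus\mathbf{P}$ arise as $\mathbf{NP}[S]$ for suitable $S$.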
Normal numbers were introduced by Borel. Normality is certainly a weak notion of randomness; for instance, there are computable numbers which are absolutely normal. In the present paper, we introduce a relativization of normality to a fixed representation system. When we require normality with respect to large sets of such systems, we find variants of normality that imply randomness notions much stronger than absolute normality. The primary classes of numbers investigated in this paper are the supernormal numbers and the highly normal numbers, which we will define. These are relativizations of normality which are robust to all reasonable changes of representation. Among other results, we prove that the highly normal numbers are exactly those of computable dimension 1, which we believe gives a more natural characterization of this interesting class than was previously known.
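For context, recall Borel's classical definition, which the relativized notions above refine: a real $x$ with base-$b$ expansion $x_1 x_2 x_3 \cdots$ is normal in base $b$ if every block $w \in \{0,\dots,b-1\}^k$ occurs with limiting frequency $b^{-k}$, i.e.
\[
\lim_{N\to\infty} \frac{\bigl|\{\, 1 \le i \le N : x_i x_{i+1} \cdots x_{i+k-1} = w \,\}\bigr|}{N} \;=\; b^{-k}
\quad \text{for all } k \ge 1 \text{ and all } w \in \{0,\dots,b-1\}^k,
\]
and absolutely normal if it is normal in every base $b \ge 2$.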
We present a modular approach to \emph{reinforcement learning} (RL) in environments consisting of simpler components evolving in parallel. A monolithic view of such modular environments may be prohibitively large to learn, or may require unrealizable communication between the components in the form of a centralized controller. Our proposed approach is based on the assume-guarantee paradigm, where the optimal control for each individual component is synthesized in isolation by making \emph{assumptions} about the behavior of neighboring components and providing \emph{guarantees} about its own behavior. We express these \emph{assume-guarantee contracts} as regular languages and provide automatic translations to scalar rewards to be used in RL. By combining local probabilities of satisfaction for each component, we provide a lower bound on the probability of satisfaction of the complete system. By solving a Markov game for each component, RL can produce a controller for each component that maximizes this lower bound. The controller utilizes the information it receives through communication, observations, and any knowledge of a coarse model of other agents. We experimentally demonstrate the efficiency of the proposed approach on a variety of case studies.
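As a hedged sketch of one generic way a translation from a regular-language contract to a scalar reward could look (a reward-machine-style construction with our own names and a toy contract; not necessarily the paper's exact translation):
\begin{verbatim}
# Hedged sketch: run a DFA for the contract alongside the environment and pay
# a sparse terminal reward when an episode trace ends in an accepting state.
# The DFA and reward scheme below are illustrative assumptions only.

class ContractDFA:
    def __init__(self, transitions, start, accepting):
        self.transitions = transitions      # dict: (state, symbol) -> state
        self.start = start
        self.accepting = accepting          # set of accepting states

    def run(self, word):
        state = self.start
        for symbol in word:
            state = self.transitions.get((state, symbol), "sink")
        return state

def contract_reward(dfa, trace):
    """1.0 if the observed trace satisfies the contract, else 0.0."""
    return 1.0 if dfa.run(trace) in dfa.accepting else 0.0

# Toy contract over {"req", "grant", "idle"}: every "req" must be followed by
# a "grant" before the episode ends.
dfa = ContractDFA(
    transitions={
        ("ok", "req"): "pending", ("ok", "grant"): "ok", ("ok", "idle"): "ok",
        ("pending", "grant"): "ok", ("pending", "req"): "pending",
        ("pending", "idle"): "pending",
    },
    start="ok",
    accepting={"ok"},
)
print(contract_reward(dfa, ["idle", "req", "idle", "grant"]))  # 1.0
print(contract_reward(dfa, ["req", "idle"]))                   # 0.0
\end{verbatim}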
For a permutation $\pi: [K]\rightarrow [K]$, a sequence $f: \{1,2,\dots, n\}\rightarrow \mathbb R$ contains a $\pi$-pattern of size $K$ if there is a sequence of indices $(i_1, i_2, \dots, i_K)$ with $i_1<i_2<\cdots<i_K$ such that $f(i_a)<f(i_b)$ whenever $\pi(a)<\pi(b)$, for $a,b\in [K]$. Otherwise, $f$ is referred to as $\pi$-free. In the special case where $\pi = (1,2,\dots, K)$, the pattern is referred to as the monotone pattern. \cite{newman2017testing} initiated the study of testing $\pi$-freeness with one-sided error, focusing on two specific problems: testing freeness of the monotone permutations and of the $(1,3,2)$ permutation. For the problem of testing the monotone permutation $(1,2,\dots,K)$, \cite{ben2019finding} improved the $(\log n)^{O(K^2)}$ non-adaptive query complexity of \cite{newman2017testing} to $O((\log n)^{\lfloor \log_{2} K\rfloor})$. Further, \cite{ben2019optimal} proposed an adaptive algorithm with $O(\log n)$ query complexity. However, no progress has yet been made on the problem of testing $(1,3,2)$-freeness. In this work, we present an adaptive algorithm for testing $(1,3,2)$-freeness. The query complexity of our algorithm is $O(\epsilon^{-2}\log^4 n)$, which significantly improves over the $O(\epsilon^{-7}\log^{26}n)$-query adaptive algorithm of \cite{newman2017testing}. This improvement is mainly achieved by identifying a new structure embedded in the patterns.
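To make the definition concrete, here is a brute-force containment check (an illustration of the definition only; the testers discussed above make far fewer queries):
\begin{verbatim}
# Brute-force illustration of the definition only: does f contain a pi-pattern?
from itertools import combinations

def contains_pattern(f, pi):
    """f: list of reals; pi: a permutation of 1..K given as a tuple."""
    K = len(pi)
    for idx in combinations(range(len(f)), K):   # i_1 < i_2 < ... < i_K
        vals = [f[i] for i in idx]
        if all(vals[a] < vals[b]
               for a in range(K) for b in range(K) if pi[a] < pi[b]):
            return True
    return False

print(contains_pattern([1, 5, 3, 4], (1, 3, 2)))  # True: (1, 5, 3) fits (1,3,2)
print(contains_pattern([3, 2, 1], (1, 2)))        # False: no increasing pair
\end{verbatim}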
The Segment Anything Model (SAM) addresses two practical yet challenging segmentation tasks: \textbf{segment anything (SegAny)}, which utilizes a certain point to predict the mask for a single object of interest, and \textbf{segment everything (SegEvery)}, which predicts the masks for all objects in the image. What makes SegAny slow for SAM is its heavyweight image encoder, which has been addressed by MobileSAM via decoupled knowledge distillation. The efficiency bottleneck of SegEvery with SAM, however, lies in its mask decoder, because it needs to first generate numerous masks with redundant grid-search prompts and then perform filtering to obtain the final valid masks. We propose to improve its efficiency by directly generating the final masks with only valid prompts, which can be obtained through object discovery. Our proposed approach not only helps reduce the total time on the mask decoder by at least 16 times but also achieves superior performance. Specifically, our approach yields an average performance boost of 3.6\% (42.5\% vs.\ 38.9\%) for zero-shot object proposal on the LVIS dataset with the mask AR@$K$ metric. Qualitative results show that our approach generates fine-grained masks while avoiding over-segmenting things. This project, which targets faster SegEvery than the original SAM, is termed MobileSAMv2 to differentiate it from MobileSAM, which targets faster SegAny. Moreover, we demonstrate that our new prompt sampling is also compatible with the distilled image encoders in MobileSAM, contributing to a unified framework for efficient SegAny and SegEvery. The code is available at the same link as the MobileSAM project: \href{//github.com/ChaoningZhang/MobileSAM}{\textcolor{red}{//github.com/ChaoningZhang/MobileSAM}}.
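A hedged pseudo-code sketch of the contrast described above, with placeholder names (mask_decoder, object_detector, nms_filter) that are our own and do not reflect the actual SAM or MobileSAMv2 API:
\begin{verbatim}
# Hedged pseudo-code contrast of the two SegEvery prompting strategies
# described in the abstract; mask_decoder, object_detector, and nms_filter
# are hypothetical placeholders, not the actual SAM / MobileSAMv2 API.

def segevery_grid(image, mask_decoder, nms_filter, grid_size=32):
    """Original strategy: dense grid-search prompts, then filter the results."""
    prompts = [(i / grid_size, j / grid_size)
               for i in range(grid_size) for j in range(grid_size)]
    masks = [mask_decoder(image, p) for p in prompts]  # many redundant decoder calls
    return nms_filter(masks)                           # discard redundant masks

def segevery_object_aware(image, mask_decoder, object_detector):
    """Strategy described above: prompt the decoder only with proposals from
    an object-discovery model, so no post-hoc filtering is needed."""
    prompts = object_detector(image)                   # far fewer, valid prompts
    return [mask_decoder(image, p) for p in prompts]
\end{verbatim}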
Given a graph $G$, an integer $k\geq 0$, and a non-negative integral function $f:V(G) \rightarrow \mathbb{N}$, the {\sc Vector Domination} problem asks whether a set $S$ of vertices, of cardinality at most $k$, exists in $G$ so that every vertex $v \in V(G)-S$ has at least $f(v)$ neighbors in $S$. The problem generalizes several domination problems and has also been shown to generalize Bounded-Degree Vertex Deletion. In this paper, the parameterized version of Vector Domination is studied when the input graph is planar, and a linear problem kernel is presented.
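For illustration, a brute-force check of the definition (the paper's contribution is the linear kernel for planar graphs, not this exponential-time check):
\begin{verbatim}
# Brute-force illustration of the Vector Domination definition.
from itertools import combinations

def is_vector_dominating(adj, f, S):
    """adj: vertex -> set of neighbors; f: demand per vertex; S: candidate set."""
    return all(len(adj[v] & S) >= f[v] for v in adj if v not in S)

def has_vector_dominating_set(adj, f, k):
    vertices = list(adj)
    return any(is_vector_dominating(adj, f, set(S))
               for r in range(k + 1) for S in combinations(vertices, r))

# 4-cycle a-b-c-d with every demand equal to 2: {a, c} works, no single vertex does.
adj = {"a": {"b", "d"}, "b": {"a", "c"}, "c": {"b", "d"}, "d": {"c", "a"}}
f = {v: 2 for v in adj}
print(has_vector_dominating_set(adj, f, 2))  # True
print(has_vector_dominating_set(adj, f, 1))  # False
\end{verbatim}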
We study various aspects of Dyck words appearing in binary sequences, where $0$ is treated as a left parenthesis and $1$ as a right parenthesis. We show that binary words that are $7/3$-power-free have bounded nesting level, but this no longer holds for larger repetition exponents. We give an explicit characterization of the factors of the Thue-Morse word that are Dyck, and show how to count them. We also prove tight upper and lower bounds on $f(n)$, the number of Dyck factors of Thue-Morse of length $2n$.
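To make the conventions concrete, a small self-contained illustration (the factor listing is over a finite prefix only and is not meant to reproduce the paper's counts $f(n)$):
\begin{verbatim}
# Illustration of the conventions above: 0 is a left and 1 a right parenthesis;
# a word is Dyck if it is balanced, and its nesting level is the maximum depth.

def dyck_info(word):
    """Return (is_dyck, nesting_level) for a binary string."""
    depth, max_depth = 0, 0
    for c in word:
        depth += 1 if c == "0" else -1
        if depth < 0:
            return False, max_depth
        max_depth = max(max_depth, depth)
    return depth == 0, max_depth

def thue_morse(n):
    """Length-n prefix of the Thue-Morse word."""
    return "".join(str(bin(i).count("1") % 2) for i in range(n))

tm = thue_morse(64)
dyck_factors = sorted({tm[i:i + 4] for i in range(len(tm) - 3)
                       if dyck_info(tm[i:i + 4])[0]})
print(dyck_factors)  # ['0011', '0101'] for this prefix
\end{verbatim}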
We provide an algorithm that maintains, against an adaptive adversary, a $(1-\varepsilon)$-approximate maximum matching in an $n$-node $m$-edge general (not necessarily bipartite) undirected graph undergoing edge deletions, with high probability and with (amortized) $O(\mathrm{poly}(\varepsilon^{-1}, \log n))$ time per update. We also obtain the same update time for maintaining a fractional approximate weighted matching (and hence an approximation to the value of the maximum weight matching) and an integral approximate weighted matching in dense graphs. Our unweighted result improves upon the prior state of the art, which includes a $\mathrm{poly}(\log{n}) \cdot 2^{O(1/\varepsilon^2)}$ update time [Assadi-Bernstein-Dudeja 2022] and an $O(\sqrt{m} \varepsilon^{-2})$ update time [Gupta-Peng 2013], and our weighted result improves upon the $O(\sqrt{m}\varepsilon^{-O(1/\varepsilon)}\log{n})$ update time due to [Gupta-Peng 2013]. To obtain our results, we generalize a recent optimization approach to dynamic algorithms from [Jambulapati-Jin-Sidford-Tian 2022]. We show that repeatedly solving entropy-regularized optimization problems yields a lazy updating scheme for fractional decremental problems with a near-optimal number of updates. To apply this framework we develop optimization methods compatible with it and new dynamic rounding algorithms for the matching polytope.
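As a hedged illustration of the kind of entropy-regularized problem referred to above (a generic form only; the paper's exact objective, constraint set, and parameters may differ), one repeatedly solves, for edge weights $w$ and a regularization parameter $\eta>0$,
\[
\max_{x \in \mathcal{P}} \;\; \sum_{e \in E} w_e x_e \;-\; \frac{1}{\eta} \sum_{e \in E} x_e \log x_e ,
\]
where $\mathcal{P}$ is a relaxation of the matching polytope; intuitively, the entropy term stabilizes the optimizer under edge deletions, which is what a lazy updating scheme can exploit.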
In this paper, we consider the problem of maintaining a $(1-\varepsilon)$-approximate maximum weight matching in a dynamic graph $G$, while the adversary makes changes to the edges of the graph. In the fully dynamic setting, where both edge insertions and deletions are allowed, Gupta and Peng gave an algorithm for this problem with an update time of $\tilde{O}_{\varepsilon}(\sqrt{m})$. We study a natural relaxation of this problem, namely the decremental model, where the adversary is only allowed to delete edges. For the cardinality version of this problem in general (possibly non-bipartite) graphs, Assadi, Bernstein, and Dudeja gave a decremental algorithm with update time $O_{\varepsilon}(\text{poly}(\log n))$. However, beating $\tilde{O}_{\varepsilon}(\sqrt{m})$ update time remained an open problem for the \emph{weighted} version in \emph{general graphs}. In this paper, we bridge the gap between unweighted and weighted general graphs for the decremental setting. We give an $O_{\varepsilon}(\text{poly}(\log n))$ update time algorithm that maintains a $(1-\varepsilon)$-approximate maximum weight matching under adversarial deletions. Like the decremental algorithm of Assadi, Bernstein, and Dudeja, our algorithm is randomized, but works against an adaptive adversary. It also matches the time bound for the cardinality version up to dependencies on $\varepsilon$ and a $\log R$ factor, where $R$ is the ratio between the maximum and minimum edge weight in $G$.