We show that every Borel graph $G$ of subexponential growth has a Borel proper edge-coloring with $\Delta(G) + 1$ colors. We deduce this from a stronger result, namely that an $n$-vertex (finite) graph $G$ of subexponential growth can be properly edge-colored using $\Delta(G) + 1$ colors by an $O(\log^\ast n)$-round deterministic distributed algorithm in the $\mathsf{LOCAL}$ model, where the implied constants in the $O(\cdot)$ notation are determined by a bound on the growth rate of $G$.
A $c$-labeling $\phi: V(G) \rightarrow \{1, 2, \ldots, c\}$ of a graph $G$ is distinguishing if, for every non-trivial automorphism $\pi$ of $G$, there is some vertex $v$ such that $\phi(v) \neq \phi(\pi(v))$. The distinguishing number of $G$, denoted $D(G)$, is the smallest $c$ such that $G$ has a distinguishing $c$-labeling. We consider a compact version of Tyshkevich's graph decomposition theorem in which trivial components are maximally combined to form a complete graph or a graph of isolated vertices. Suppose the compact canonical decomposition of $G$ is $G_{k} \circ G_{k-1} \circ \cdots \circ G_1 \circ G_0$. We prove that $\phi$ is a distinguishing labeling of $G$ if and only if its restriction to $V(G_i)$ is a distinguishing labeling of $G_i$ for $i = 0, \ldots, k$. Thus, $D(G) = \max\{D(G_i) : i = 0, \ldots, k\}$. We then present an algorithm that computes the distinguishing number of a unigraph in linear time.
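To make the definition concrete (this is only an illustration, not the paper's linear-time algorithm for unigraphs), a labeling can be checked against every automorphism by brute force; the Python sketch below assumes a small graph given as a set of ordered edge pairs.

    # Brute-force check that a labeling is distinguishing; exponential in |V|.
    from itertools import permutations

    def is_automorphism(pi, vertices, edges):
        # pi maps each vertex to its image; adjacency must be preserved exactly.
        return all(((pi[u], pi[v]) in edges) == ((u, v) in edges)
                   for u in vertices for v in vertices)

    def is_distinguishing(labeling, vertices, edges):
        for perm in permutations(vertices):
            pi = dict(zip(vertices, perm))
            if all(pi[v] == v for v in vertices):
                continue  # skip the identity automorphism
            if is_automorphism(pi, vertices, edges):
                if all(labeling[v] == labeling[pi[v]] for v in vertices):
                    return False  # a non-trivial automorphism preserves all labels
        return True

    # The path 0-1-2 has D = 2: labeling the endpoints differently works.
    V = (0, 1, 2)
    E = {(0, 1), (1, 0), (1, 2), (2, 1)}
    print(is_distinguishing({0: 1, 1: 1, 2: 2}, V, E))  # True
    print(is_distinguishing({0: 1, 1: 2, 2: 1}, V, E))  # False: swapping 0 and 2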
The Natural Language Processing (NLP) community has been using crowdsourcing techniques to create benchmark datasets such as the General Language Understanding Evaluation (GLUE) benchmark for training modern language models (LMs) such as BERT. GLUE tasks measure reliability scores using inter-annotator metrics, i.e., Cohen's Kappa. However, the reliability aspect of LMs has often been overlooked. To counter this problem, we explore a knowledge-guided LM ensembling approach that leverages reinforcement learning to integrate knowledge from ConceptNet and Wikipedia as knowledge graph embeddings. This approach mimics human annotators resorting to external knowledge to compensate for information deficits in the datasets. Across nine GLUE datasets, our research shows that ensembling strengthens reliability and accuracy scores, outperforming the state of the art.
We solve a problem of Dujmovi\'c and Wood (2007) by showing that a complete convex geometric graph on $n$ vertices cannot be decomposed into fewer than $n-1$ star-forests, each consisting of noncrossing edges. This bound is clearly tight. We also discuss similar questions for abstract graphs.
Adversarial examples (AEs) for DNNs have been shown to be transferable: AEs that successfully fool white-box surrogate models can also deceive other black-box models with different architectures. Although a number of empirical studies have provided guidance on generating highly transferable AEs, many of these findings lack explanations and even lead to inconsistent advice. In this paper, we take a further step towards understanding adversarial transferability, with a particular focus on surrogate aspects. Starting from the intriguing "little robustness" phenomenon, where models adversarially trained with mildly perturbed adversarial samples can serve as better surrogates, we attribute it to a trade-off between two predominant factors: model smoothness and gradient similarity. Our investigations focus on their joint effects rather than their separate correlations with transferability. Through a series of theoretical and empirical analyses, we conjecture that the data distribution shift induced by adversarial training explains the degradation of gradient similarity. Building on these insights, we explore the impacts of data augmentation and gradient regularization on transferability and find that the trade-off generally exists across the various training mechanisms, thus building a comprehensive blueprint for the regulation mechanism behind transferability. Finally, we provide a general route for constructing better surrogates to boost transferability, namely optimizing both model smoothness and gradient similarity simultaneously, e.g., by combining input gradient regularization with sharpness-aware minimization (SAM), which we validate with extensive experiments. In summary, we call for attention to the joint impact of these two factors for launching effective transfer attacks, rather than optimizing one while ignoring the other, and we emphasize the crucial role of manipulating surrogate models.
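As a concrete sketch of one ingredient named above, input gradient regularization adds a penalty on the norm of the loss gradient with respect to the input. The PyTorch fragment below is a minimal illustration under our own assumptions (the model, the batch (x, y), and the weight lam are placeholders; the route described above additionally combines this with SAM):

    # Minimal sketch of input gradient regularization for one training step.
    import torch
    import torch.nn.functional as F

    def igr_loss(model, x, y, lam=0.1):
        x = x.clone().requires_grad_(True)
        loss = F.cross_entropy(model(x), y)
        # Gradient of the loss w.r.t. the input; create_graph=True keeps the
        # penalty differentiable w.r.t. the model parameters.
        (grad_x,) = torch.autograd.grad(loss, x, create_graph=True)
        penalty = grad_x.pow(2).flatten(1).sum(dim=1).mean()
        return loss + lam * penalty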
Shannon proved that almost all Boolean functions require a circuit of size $\Theta(2^n/n)$. We prove a quantum analog of this classical result. Unlike in the classical case, the number of quantum circuits of any fixed size that we allow is uncountably infinite. Our main tool is a classical result in real algebraic geometry bounding the number of realizable sign conditions of any finite set of real polynomials in many variables.
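For context, the classical counting argument can be sketched in a few lines (a standard back-of-the-envelope version, not the quantum argument of the paper): a circuit with $s$ gates on $n$ inputs is specified by choosing each gate's operation and its two inputs, so the number of circuits of size $s$ is at most roughly $(c(n+s)^2)^s$ for a constant $c$, while there are $2^{2^n}$ Boolean functions on $n$ variables. In LaTeX:

    \bigl(c\,(n+s)^2\bigr)^{s} \ge 2^{2^n}
    \;\Longrightarrow\;
    s \log_2\bigl(c\,(n+s)^2\bigr) \ge 2^n
    \;\Longrightarrow\;
    s = \Omega\!\left(\frac{2^n}{n}\right),

since $\log_2(c(n+s)^2) = O(n)$ for any $s \le 2^n$.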
Given a closed simple polygon $P$, we say two points $p,q$ see each other if the segment $pq$ is fully contained in $P$. The art gallery problem seeks a minimum size set $G\subset P$ of guards that sees $P$ completely. The only currently correct algorithm to solve the art gallery problem exactly uses algebraic methods and is attributed to Sharir. As the art gallery problem is $\exists\mathbb{R}$-complete, it seems unlikely that algebraic methods can be avoided without additional assumptions. In this paper, we introduce the notion of vision stability. To describe vision stability, consider an enhanced guard that can see "around the corner" by an angle of $\delta$, or a diminished guard whose vision is "blocked" by an angle of $\delta$ by reflex vertices. A polygon $P$ has vision stability $\delta$ if the optimal number of enhanced guards needed to guard $P$ is the same as the optimal number of diminished guards. We argue that most relevant polygons are vision stable. We describe a one-shot algorithm that computes an optimal guard set for vision-stable polygons in polynomial time by solving a single integer program; it is guaranteed to find the optimal solution for every vision-stable polygon. We also implemented an iterative algorithm and show that its practical performance is slower than, but comparable with, other state-of-the-art algorithms. Our iterative algorithm is inspired by, and closely follows, the one-shot algorithm; it delays several steps and computes them only when deemed necessary. Given a chord $c$ of a polygon, we denote by $n(c)$ the number of vertices visible from $c$. The chord-width of a polygon is the maximum of $n(c)$ over all possible chords $c$. The set of vision-stable polygons admits an FPT algorithm when parameterized by the chord-width. Furthermore, the one-shot algorithm runs in FPT time when parameterized by the number of reflex vertices.
We consider the Shortest Odd Path problem: given an undirected graph $G$, a weight function on its edges, and two vertices $s$ and $t$ in $G$, the aim is to find an $(s,t)$-path of odd length that has minimum weight among all such paths. For the case when the weight function is conservative, i.e., when every cycle has non-negative total weight, the complexity of the Shortest Odd Path problem had been open for 20 years and was recently shown to be NP-hard. We give a polynomial-time algorithm for the special case when the weight function is conservative and the set $E^-$ of negative-weight edges forms a single tree. Our algorithm exploits the strong connection between Shortest Odd Path and the problem of finding two internally vertex-disjoint paths between two terminals in an undirected edge-weighted graph. It also relies on solving an intermediary problem variant called Shortest Parity-Constrained Odd Path, where for certain edges we have parity constraints on their position along the path. We also exhibit two FPT algorithms for solving Shortest Odd Path in graphs with conservative weight functions. The first is parameterized by $|E^-|$, the number of negative-weight edges, or more generally by the maximum size of a matching in the subgraph of $G$ spanned by $E^-$. The second is parameterized by the treewidth of $G$.
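As background on parity constraints (and emphatically not the paper's algorithm), the simpler problem of finding a shortest odd $(s,t)$-walk with non-negative weights reduces to Dijkstra on a parity-layered graph; the Python sketch below illustrates this. Note that it finds walks, which may repeat vertices; enforcing simplicity in the presence of negative-weight edges is exactly what makes Shortest Odd Path hard.

    # Shortest odd-length (s,t)-WALK with non-negative weights via Dijkstra
    # on a parity-layered graph. States are (vertex, parity of edges so far).
    import heapq

    def shortest_odd_walk(adj, s, t):
        # adj: dict mapping v to a list of (neighbor, weight) pairs, weights >= 0.
        dist = {(s, 0): 0}
        heap = [(0, s, 0)]
        while heap:
            d, v, p = heapq.heappop(heap)
            if d > dist.get((v, p), float("inf")):
                continue
            if v == t and p == 1:
                return d  # first settled odd-parity arrival at t is optimal
            for u, w in adj[v]:
                nd, np = d + w, 1 - p
                if nd < dist.get((u, np), float("inf")):
                    dist[(u, np)] = nd
                    heapq.heappush(heap, (nd, u, np))
        return None  # no odd walk from s to t

    # Example: triangle with unit weights; the shortest odd 0-2 walk has weight 1.
    adj = {0: [(1, 1), (2, 1)], 1: [(0, 1), (2, 1)], 2: [(0, 1), (1, 1)]}
    print(shortest_odd_walk(adj, 0, 2))  # 1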
The modified Bessel function of the second kind $K_\nu$ appears in a wide variety of applied scientific fields. While its use is greatly facilitated by an implementation in most numerical libraries, overflow issues can be encountered, especially for large values of $\nu$. After giving some necessary and sufficient conditions for their occurrence, this technical note shows that they can mostly be avoided by directly computing the logarithm of $K_\nu$ thanks to a simple and stable forward recursion. A statistical example based on the Gil-Pelaez inversion formula is given to illustrate the recursive method.
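To make the idea concrete, the standard three-term recurrence $K_{\nu+1}(x) = \frac{2\nu}{x} K_\nu(x) + K_{\nu-1}(x)$ becomes an overflow-free update after taking logarithms. The Python sketch below is our reading of that approach for integer orders, seeded with SciPy's exponentially scaled kve (the note's own recursion may differ in details):

    # Forward recursion for log K_nu(x): with L_nu = log K_nu(x),
    #   L_{nu+1} = L_nu + log(2*nu/x + exp(L_{nu-1} - L_nu)).
    import numpy as np
    from scipy.special import kve  # kve(v, x) = exp(x) * K_v(x)

    def log_kv(nu_max, x):
        L = np.empty(nu_max + 1)
        L[0] = np.log(kve(0, x)) - x  # overflow-safe seeds
        L[1] = np.log(kve(1, x)) - x
        for nu in range(1, nu_max):
            # exp(L[nu-1] - L[nu]) <= 1 because K_nu increases with nu >= 0,
            # so the update never overflows.
            L[nu + 1] = L[nu] + np.log(2 * nu / x + np.exp(L[nu - 1] - L[nu]))
        return L

    # K_500(2) overflows a double, but its logarithm is perfectly representable.
    print(log_kv(500, 2.0)[-1])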
In this letter, we propose a novel construction of type-II $Z$-complementary code sets (ZCCSs) with arbitrary sequence lengths, using the Kronecker product of a complete complementary code (CCC) and mutually orthogonal unimodular sequences. In this construction, Barker sequences are used to reduce the row sequence peak-to-mean envelope power ratio (PMEPR) for some specific sequence lengths, and the column sequence PMEPR for some specific code sizes. The column sequence PMEPR of the proposed type-II ZCCS is upper bounded by a number smaller than $2$. The proposed construction also yields new lengths of type-II $Z$-complementary pairs (ZCPs) and type-II $Z$-complementary sets (ZCSs). Furthermore, the PMEPR of these new type-II ZCPs is lower than that of existing type-II ZCPs.
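For reference, the PMEPR of a length-$N$ unimodular sequence $a$ is $\max_\theta |\sum_{n} a_n e^{jn\theta}|^2 / N$, which can be estimated numerically with an oversampled FFT. The numpy sketch below is purely illustrative (it evaluates a Barker sequence, not the proposed ZCCS construction):

    # Numerical PMEPR estimate of a unimodular sequence via an oversampled FFT.
    import numpy as np

    def pmepr(seq, oversample=16):
        seq = np.asarray(seq, dtype=complex)
        n = len(seq)
        # Zero-padding samples the continuous-frequency envelope densely.
        spectrum = np.fft.fft(seq, n * oversample)
        return np.max(np.abs(spectrum) ** 2) / n

    # Barker sequence of length 13, a classic low-PMEPR example.
    barker13 = [1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1]
    print(pmepr(barker13))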
Visual Question Answering (VQA) models have so far struggled to count objects in natural images. We identify a fundamental problem caused by the use of soft attention in these models. To circumvent this problem, we propose a neural network component that allows robust counting from object proposals. Experiments on a toy task show the effectiveness of this component, and we obtain state-of-the-art accuracy on the number category of the VQA v2 dataset without negatively affecting other categories, even outperforming ensemble models with our single model. On a difficult balanced pair metric, the component improves counting over a strong baseline by a substantial 6.6%.
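The underlying issue can be seen in a two-line toy example: because soft attention weights are normalized to sum to one, the attended feature vector over several identical objects equals that over a single one, so the count is erased. The numpy snippet below is our paraphrase of the problem, not the paper's counting component:

    # Toy illustration: normalized soft attention erases count information.
    import numpy as np

    def attended_feature(features):
        # Identical objects receive identical attention logits, so the
        # softmax weights are uniform and the weighted average is constant.
        weights = np.ones(len(features)) / len(features)
        return weights @ np.asarray(features)

    print(attended_feature([[1.0, 0.0]]))              # [1. 0.]
    print(attended_feature([[1.0, 0.0], [1.0, 0.0]]))  # [1. 0.] -- count lost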