The proper conflict-free chromatic number, $\chi_{pcf}(G)$, of a graph $G$ is the least $k$ such that $G$ has a proper $k$-coloring in which for each non-isolated vertex there is a color appearing exactly once among its neighbors. The proper odd chromatic number, $\chi_{o}(G)$, of $G$ is the least $k$ such that $G$ has a proper $k$-coloring in which for each non-isolated vertex there is a color appearing an odd number of times among its neighbors. We say that a graph class $\mathcal{G}$ is $\chi_{pcf}$-bounded ($\chi_{o}$-bounded) if there is a function $f$ such that $\chi_{pcf}(G) \leq f(\chi(G))$ ($\chi_{o}(G) \leq f(\chi(G))$) for every $G \in \mathcal{G}$. Caro et al. (2022) asked for classes that are linearly $\chi_{pcf}$-bounded (or merely $\chi_{pcf}$-bounded), and as a starting point, they showed that every claw-free graph $G$ satisfies $\chi_{pcf}(G) \le 2\Delta(G)+1$, which implies $\chi_{pcf}(G) \le 4\chi(G)+1$. They also conjectured that every graph $G$ with $\Delta(G) \ge 3$ satisfies $\chi_{pcf}(G) \le \Delta(G)+1$. In this paper, we improve the bound for claw-free graphs to a nearly tight one by showing that every such graph $G$ satisfies $\chi_{pcf}(G) \le \Delta(G)+6$, and even $\chi_{pcf}(G) \le \Delta(G)+4$ if it is a quasi-line graph. Moreover, we show that convex-round graphs and permutation graphs are linearly $\chi_{pcf}$-bounded. For these last two results, we prove a lemma that reduces the problem of deciding whether a hereditary class is linearly $\chi_{pcf}$-bounded to deciding whether the bipartite graphs in the class are $\chi_{pcf}$-bounded by an absolute constant. This lemma complements a theorem of Liu (2022) and motivates us to further study boundedness in bipartite graphs. Among other results, we show that convex bipartite graphs are not $\chi_{o}$-bounded, and we exhibit a class of bipartite circle graphs that is linearly $\chi_{o}$-bounded but not $\chi_{pcf}$-bounded.
This paper studies the extreme singular values of non-harmonic Fourier matrices. Such a matrix of size $m\times s$ can be written as $\Phi=[ e^{-2\pi i j x_k}]_{j=0,1,\dots,m-1,\; k=1,2,\dots,s}$ for some set $\mathcal{X}=\{x_k\}_{k=1}^s$. The main results provide explicit lower bounds for the smallest singular value of $\Phi$ under the assumption $m\geq 6s$ and without any restrictions on $\mathcal{X}$. They show that, for an appropriate scale $\tau$ determined by a density criterion, interactions between elements of $\mathcal{X}$ at scales smaller than $\tau$ are the most significant and depend on the multiscale structure of $\mathcal{X}$ at fine scales, while distances larger than $\tau$ are less important and only depend on the local sparsity of the far-away points. Theoretical and numerical comparisons show that the main results significantly improve upon classical bounds and achieve the same rate that was previously obtained in more restrictive settings.
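As a quick illustration of the setup (not of the paper's bounds), the following NumPy sketch builds a non-harmonic Fourier matrix $\Phi$ for a randomly drawn node set $\mathcal{X}\subset[0,1)$ with $m=6s$ and reports its extreme singular values; the node distribution and sizes are arbitrary choices made only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
s = 10
m = 6 * s                                   # matches the stated assumption m >= 6s
x = np.sort(rng.uniform(0.0, 1.0, s))       # nodes X = {x_k}, no separation imposed
rows = np.arange(m)[:, None]                # frequencies j = 0, ..., m-1

# Phi[j, k] = exp(-2*pi*i * j * x_k), an m x s non-harmonic Fourier matrix
Phi = np.exp(-2j * np.pi * rows * x[None, :])

sigma = np.linalg.svd(Phi, compute_uv=False)
print("sigma_max =", sigma.max(), " sigma_min =", sigma.min())
```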
We prove tight bounds on the site percolation threshold for $k$-uniform hypergraphs of maximum degree $\Delta$, and for $k$-uniform hypergraphs of maximum degree $\Delta$ in which any pair of edges overlaps in at most $r$ vertices. The hypergraphs that achieve these bounds are hypertrees, but unlike in the case of graphs, there are many different $k$-uniform, $\Delta$-regular hypertrees. Determining the extremal tree for given $k, \Delta, r$ involves an optimization problem, and our bounds arise from a convex relaxation of this problem. By combining our percolation bounds with the method of disagreement percolation, we obtain improved bounds on the uniqueness threshold for the hard-core model on hypergraphs satisfying the same constraints. Our uniqueness conditions imply exponential weak spatial mixing, and go beyond the known bounds for rapid mixing of local Markov chains and for the existence of efficient approximate counting and sampling algorithms. Our results lead to natural conjectures regarding these algorithmic tasks, based on the intuition that the uniqueness thresholds of the extremal hypertrees for percolation determine the computational thresholds.
We propose an original approach to investigating the linearity of Gray codes obtained from $\mathbb{Z}_{2^L}$-additive codes by introducing two related binary codes: the associated code and the concatenated code. Once they are defined, one can perform a straightforward analysis of the Schur product between their codewords and determine the linearity of the respective Gray code. This work expands on earlier contributions from the literature, where linearity was established with respect to the kernel of a code and/or operations on $\mathbb{Z}_{2^L}$. The $\mathbb{Z}_{2^L}$-additive codes to which we apply the Gray map and whose linearity we check are the well-known Hadamard, simplex, MacDonald, Kerdock, and Preparata codes. We also present a family of Reed-Muller codes that yield linear Gray codes, and we perform a computational verification of our proposed method applied to other $\mathbb{Z}_{2^L}$-additive codes.
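For readers unfamiliar with the Gray-map setting, the brute-force sketch below (for the classical $\mathbb{Z}_4$ case, i.e. $L=2$) enumerates a small $\mathbb{Z}_4$-additive code from a hypothetical generator set, applies the standard Gray map $0\mapsto 00$, $1\mapsto 01$, $2\mapsto 11$, $3\mapsto 10$, and tests whether the binary image is linear by checking closure under XOR; it does not implement the associated/concatenated-code criterion of the paper.

```python
from itertools import product

# Standard Gray map on Z_4 (the L = 2 case).
GRAY = {0: (0, 0), 1: (0, 1), 2: (1, 1), 3: (1, 0)}

def z4_code(generators):
    """All Z_4-linear combinations of the generator rows."""
    n = len(generators[0])
    return {tuple(sum(c * g[i] for c, g in zip(coeffs, generators)) % 4
                  for i in range(n))
            for coeffs in product(range(4), repeat=len(generators))}

def gray_image(code):
    """Apply the Gray map coordinatewise to every codeword."""
    return {tuple(bit for sym in word for bit in GRAY[sym]) for word in code}

def is_linear(binary_code):
    """A binary code containing 0 is linear iff it is closed under XOR."""
    return all(tuple(a ^ b for a, b in zip(u, v)) in binary_code
               for u in binary_code for v in binary_code)

gens = [(1, 1, 1, 1), (0, 2, 0, 2), (0, 0, 2, 2)]   # hypothetical generators, illustration only
C = z4_code(gens)
print(len(C), "codewords; Gray image linear:", is_linear(gray_image(C)))
```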
We show that under minimal assumptions on a random vector $X\in\mathbb{R}^d$, and with high probability, given $m$ independent copies of $X$, the coordinate distribution of each vector $(\langle X_i,\theta \rangle)_{i=1}^m$ is dictated by the distribution of the true marginal $\langle X,\theta \rangle$. Specifically, we show that with high probability, \[\sup_{\theta \in S^{d-1}} \left( \frac{1}{m}\sum_{i=1}^m \left|\langle X_i,\theta \rangle^\sharp - \lambda^\theta_i \right|^2 \right)^{1/2} \leq c \left( \frac{d}{m} \right)^{1/4},\] where $\lambda^{\theta}_i = m\int_{(\frac{i-1}{m}, \frac{i}{m}]} F_{ \langle X,\theta \rangle }^{-1}(u)\,du$ and $a^\sharp$ denotes the monotone non-decreasing rearrangement of $a$. Moreover, this estimate is optimal. The proof follows from a sharp estimate on the worst-case Wasserstein distance between a marginal of $X$ and its empirical counterpart, $\frac{1}{m} \sum_{i=1}^m \delta_{\langle X_i, \theta \rangle}$.
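The estimate can be checked numerically in a toy case. The sketch below assumes $X\sim N(0,I_d)$ (so that $F_{\langle X,\theta\rangle}^{-1}=\Phi^{-1}$ for every unit $\theta$ and $\lambda^\theta_i$ has a closed form via the Gaussian density), replaces the supremum over $S^{d-1}$ by a maximum over random directions, and compares the result with $(d/m)^{1/4}$; all parameter choices are illustrative.

```python
import numpy as np
from scipy.stats import norm

d, m, trials = 20, 400, 200
rng = np.random.default_rng(1)
X = rng.standard_normal((m, d))             # m independent copies of X ~ N(0, I_d)

# lambda_i = m * int_{(i-1)/m}^{i/m} Phi^{-1}(u) du
#          = m * ( phi(Phi^{-1}((i-1)/m)) - phi(Phi^{-1}(i/m)) )   for the Gaussian marginal
grid = np.arange(m + 1) / m
lam = m * (norm.pdf(norm.ppf(grid[:-1])) - norm.pdf(norm.ppf(grid[1:])))

worst = 0.0
for _ in range(trials):                     # random directions stand in for the sup over S^{d-1}
    theta = rng.standard_normal(d)
    theta /= np.linalg.norm(theta)
    proj = np.sort(X @ theta)               # monotone rearrangement of (<X_i, theta>)_i
    worst = max(worst, np.sqrt(np.mean((proj - lam) ** 2)))

print("worst rearrangement distance:", worst, " (d/m)^(1/4):", (d / m) ** 0.25)
```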
The problem of non-monotone $k$-submodular maximization under a knapsack constraint ($\kSMK$) over a ground set of size $n$ arises in many machine-learning applications, such as data summarization and information propagation. However, existing algorithms for the problem still struggle to handle the non-monotone case and to return a good solution quickly on large data sets. This paper introduces two deterministic approximation algorithms for the problem that improve on the query complexity of existing algorithms. Our first algorithm, $\LAA$, achieves an approximation ratio of $1/19$ within $O(nk)$ query complexity. The second one, $\RLA$, improves the approximation ratio to $1/5-\epsilon$ in $O(nk)$ queries, where $\epsilon$ is an input parameter. Our algorithms are the first to provide constant approximation ratios within only $O(nk)$ query complexity for the non-monotone objective; they therefore require a factor of $\Omega(\log n)$ fewer queries than state-of-the-art algorithms. Besides the theoretical analysis, we evaluate our algorithms experimentally on two instances of the problem, Influence Maximization and Sensor Placement. The results confirm that our algorithms match the solution quality of cutting-edge techniques while significantly reducing the number of queries.
The zeros of type II multiple orthogonal polynomials can be used for quadrature formulas that approximate $r$ integrals of the same function $f$ with respect to $r$ measures $\mu_1,\ldots,\mu_r$, in the spirit of Gaussian quadrature. This was first suggested by Borges in 1994. We give a method to compute the quadrature nodes and weights which extends the Golub-Welsch approach, using the eigenvalues and the left and right eigenvectors of a banded Hessenberg matrix. This method was already described by Coussement and Van Assche in 2005, but it seems to have gone unnoticed. We describe the result in detail for $r=2$ and give some examples.
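For orientation, recall the classical $r=1$ Golub-Welsch procedure that this method extends: the Gaussian nodes are the eigenvalues of the symmetric Jacobi (tridiagonal) matrix of the orthogonal polynomials, and the weights come from the first components of its eigenvectors. Below is a minimal NumPy sketch for Gauss-Legendre quadrature; it illustrates the $r=1$ case only, not the banded-Hessenberg construction for $r\ge 2$.

```python
import numpy as np

def gauss_legendre(n):
    """Golub-Welsch for r = 1: Gauss-Legendre nodes and weights from the Jacobi matrix."""
    k = np.arange(1, n)
    beta = k / np.sqrt(4.0 * k**2 - 1.0)    # Legendre three-term recurrence coefficients
    J = np.diag(beta, 1) + np.diag(beta, -1)
    nodes, V = np.linalg.eigh(J)            # eigenvalues = quadrature nodes
    weights = 2.0 * V[0, :]**2              # mu_0 = int_{-1}^{1} dx = 2
    return nodes, weights

x, w = gauss_legendre(5)
print(np.allclose(x, np.polynomial.legendre.leggauss(5)[0]),
      np.allclose(w, np.polynomial.legendre.leggauss(5)[1]))
```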
A set of vertices of a graph $G$ is said to be decycling if its removal leaves an acyclic subgraph. The size of a smallest decycling set is the decycling number of $G$. In general, at least $\lceil(n+2)/4\rceil$ vertices have to be removed in order to decycle a cubic graph on $n$ vertices. In 1979, Payan and Sakarovitch proved that the decycling number of a cyclically $4$-edge-connected cubic graph of order $n$ equals $\lceil (n+2)/4\rceil$. In addition, they characterised the structure of minimum decycling sets and their complements. If $n\equiv 2\pmod4$, then $G$ has a decycling set which is independent and whose complement induces a tree. If $n\equiv 0\pmod4$, then one of two possibilities occurs: either $G$ has an independent decycling set whose complement induces a forest of two trees, or the decycling set is near-independent (which means that it induces a single edge) and its complement induces a tree. In this paper we strengthen the result of Payan and Sakarovitch by proving that the latter possibility (a near-independent set and a tree) can always be guaranteed. Moreover, we relax the assumption of cyclic $4$-edge-connectivity to a significantly weaker condition expressed through the canonical decomposition of 3-connected cubic graphs into cyclically $4$-edge-connected ones. Our methods make substantial use of a surprising and seemingly distant relationship between the decycling number and the maximum genus of a cubic graph.
We study how to verify specific frequency distributions when we observe a stream of $N$ data items taken from a universe of $n$ distinct items. We introduce the \emph{relative Fr\'echet distance} to compare two frequency functions in a homogeneous manner. We consider two streaming models: insertions only and sliding windows. We present a tester for a certain class of functions which, when $f$ is given explicitly and $g$ is defined by the stream, decides with high probability whether $f$ is close to $g$ or far from $g$. If $f$ is the uniform function, we show an $\Omega(n)$ space lower bound. If $f$ decreases fast enough, we use only $O(\log^2 n\cdot \log\log n)$ space. The analysis relies on the Spacesaving algorithm \cite{MAE2005,Z22} and on sampling the stream.
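For context, here is a minimal Python sketch of the Spacesaving (Space-Saving) algorithm of Metwally, Agrawal and El Abbadi, cited above as \cite{MAE2005}, on which the analysis relies; the counter budget and the toy stream are illustrative only.

```python
def space_saving(stream, k):
    """Space-Saving: maintain at most k counters; every tracked count overestimates the
    true frequency by at most the value of the smallest counter."""
    counts = {}
    for item in stream:
        if item in counts:
            counts[item] += 1
        elif len(counts) < k:
            counts[item] = 1
        else:
            victim = min(counts, key=counts.get)     # evict the item with the smallest counter
            counts[item] = counts.pop(victim) + 1    # new item inherits that count plus one
    return counts

print(space_saving("abracadabra", k=3))
```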
We prove that for any graph $G$ of maximum degree at most $\Delta$, the zeros of its chromatic polynomial $\chi_G(x)$ (in $\mathbb{C}$) lie inside the disc of radius $5.94 \Delta$ centered at $0$. This improves on the previously best known bound of approximately $6.91\Delta$. We also obtain improved bounds for graphs of high girth. We prove that for every $g$ there is a constant $K_g$ such that for any graph $G$ of maximum degree at most $\Delta$ and girth at least $g$, the zeros of its chromatic polynomial $\chi_G(x)$ lie inside the disc of radius $K_g \Delta$ centered at $0$, where $K_g$ is the solution to a certain optimization problem. In particular, $K_g < 5$ when $g \geq 5$, $K_g < 4$ when $g \geq 25$, and $K_g$ tends to approximately $3.86$ as $g \to \infty$. Key to the proof is a classical theorem of Whitney, which allows us to relate the chromatic polynomial of a graph $G$ to the generating function of so-called broken-circuit-free forests in $G$. We also establish a zero-free disc for the generating function of all forests in $G$ (also known as the partition function of the arboreal gas), which may be of independent interest.
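As a small sanity check of the statement (not part of the proof), the following sketch computes the chromatic polynomial of a small graph by deletion-contraction and verifies that all of its complex zeros lie in the disc of radius $5.94\,\Delta$; the example graph is an arbitrary choice.

```python
import numpy as np

def chromatic_poly(n, edges):
    """Coefficients (highest degree first) of the chromatic polynomial, via deletion-contraction."""
    edges = frozenset(frozenset(e) for e in edges)
    if not edges:                                   # empty graph on n vertices: x^n
        return np.array([1.0] + [0.0] * n)
    e = next(iter(edges))
    u, v = tuple(e)
    merge = lambda w: u if w == v else w            # contract e by identifying v with u
    contracted = frozenset(frozenset((merge(a), merge(b)))
                           for a, b in map(tuple, edges - {e}) if merge(a) != merge(b))
    # P(G) = P(G - e) - P(G / e)
    return chromatic_poly(n, edges - {e}) - np.concatenate(([0.0], chromatic_poly(n - 1, contracted)))

# Example: the 5-cycle, maximum degree Delta = 2.
cycle5 = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
coeffs = chromatic_poly(5, cycle5)
print(max(abs(np.roots(coeffs))) <= 5.94 * 2)
```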
Let $X$ be a $p$-variate random vector and $\widetilde{X}$ a knockoff copy of $X$ (in the sense of \cite{CFJL18}). A new approach for constructing $\widetilde{X}$ (henceforth, NA) was introduced in \cite{JSPI}. NA has essentially three advantages: (i) building $\widetilde{X}$ is straightforward; (ii) the joint distribution of $(X,\widetilde{X})$ can be written in closed form; (iii) $\widetilde{X}$ is often optimal under various criteria. However, for NA to apply, $X_1,\ldots, X_p$ must be conditionally independent given some random element $Z$. Our first result is that any probability measure $\mu$ on $\mathbb{R}^p$ can be approximated by a probability measure $\mu_0$ of the form $$\mu_0\bigl(A_1\times\ldots\times A_p\bigr)=E\Bigl\{\prod_{i=1}^p P(X_i\in A_i\mid Z)\Bigr\}.$$ The approximation is in total variation distance when $\mu$ is absolutely continuous, and an explicit formula for $\mu_0$ is provided. If $X\sim\mu_0$, then $X_1,\ldots,X_p$ are conditionally independent. Hence, with a negligible error, one can assume $X\sim\mu_0$ and build $\widetilde{X}$ through NA. Our second result is a characterization of the knockoffs $\widetilde{X}$ obtained via NA. It is shown that $\widetilde{X}$ is of this type if and only if the pair $(X,\widetilde{X})$ can be extended to an infinite sequence so as to satisfy certain invariance conditions. The basic tool for proving this fact is de Finetti's theorem for partially exchangeable sequences. In addition to the results quoted above, an explicit formula for the conditional distribution of $\widetilde{X}$ given $X$ is obtained in a few cases. In one of these cases, it is assumed that $X_i\in\{0,1\}$ for all $i$.