In 1952, Dirac proved the following theorem about long cycles in graphs with large minimum vertex degrees: Every $n$-vertex $2$-connected graph $G$ with minimum vertex degree $\delta\geq 2$ contains a cycle with at least $\min\{2\delta,n\}$ vertices. In particular, if $\delta\geq n/2$, then $G$ is Hamiltonian. The proof of Dirac's theorem is constructive, and it yields an algorithm computing the corresponding cycle in polynomial time. The combinatorial bound of Dirac's theorem is tight in the following sense: there are 2-connected graphs that do not contain cycles of length more than $2\delta+1$, and there are non-Hamiltonian graphs with all vertices but one of degree at least $n/2$. This naturally leads to the following algorithmic questions for $k\geq 1$. (A) How difficult is it to decide whether a 2-connected graph contains a cycle of length at least $\min\{2\delta+k,n\}$? (B) How difficult is it to decide whether a graph $G$ is Hamiltonian when at least $n - k$ vertices of $G$ have degree at least $n/2-k$? The first question was asked by Fomin, Golovach, Lokshtanov, Panolan, Saurabh, and Zehavi. The second question is due to Jansen, Kozma, and Nederlof. Even for the very special case of $k=1$, the existence of a polynomial-time algorithm deciding whether $G$ contains a cycle of length at least $\min\{2\delta+1,n\}$ was open. We resolve both questions by proving the following algorithmic generalization of Dirac's theorem: if all but $k$ vertices of a $2$-connected graph $G$ have degree at least $\delta$, then deciding whether $G$ has a cycle of length at least $\min\{2\delta +k, n\}$ can be done in time $2^{\mathcal{O}(k)}\cdot n^{\mathcal{O}(1)}$. The proof of this algorithmic generalization builds on new graph-theoretic results that are interesting in their own right.
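As a small illustration of the quantities involved (a sketch only, not the paper's $2^{\mathcal{O}(k)}\cdot n^{\mathcal{O}(1)}$ algorithm), the following Python snippet brute-forces the longest cycle of a tiny graph and computes the Dirac-type target $\min\{2\delta+k,n\}$; the function names and the convention of taking $\delta$ as the $(k+1)$-st smallest degree are our own.

```python
from itertools import combinations, permutations

def longest_cycle_length(adj):
    """Exhaustive longest-simple-cycle search; exponential, tiny graphs only."""
    n = len(adj)
    for size in range(n, 2, -1):                  # try the longest lengths first
        for subset in combinations(range(n), size):
            first, rest = subset[0], subset[1:]
            for perm in permutations(rest):
                cyc = (first,) + perm
                if (all(cyc[i + 1] in adj[cyc[i]] for i in range(size - 1))
                        and cyc[0] in adj[cyc[-1]]):
                    return size
    return 0                                      # acyclic

def dirac_target(adj, k):
    """min{2*delta + k, n}, with delta the largest value such that all but
    k vertices have degree at least delta (our convention for illustration)."""
    n = len(adj)
    degrees = sorted(len(adj[v]) for v in adj)
    delta = degrees[k]                            # the k smallest degrees are exempt
    return min(2 * delta + k, n)

C5 = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}   # the 5-cycle, delta = 2
assert longest_cycle_length(C5) >= dirac_target(C5, 0)   # 5 >= min{4, 5}
```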
In 1982, Tuza conjectured that the size $\tau(G)$ of a minimum set of edges that intersects every triangle of a graph $G$ is at most twice the size $\nu(G)$ of a maximum set of edge-disjoint triangles of $G$. This conjecture has been proved for several graph classes. In this paper, we present three results regarding Tuza's conjecture for dense graphs. Using a probabilistic argument, Tuza proved his conjecture for graphs on $n$ vertices with minimum degree at least $\frac{7n}{8}$. We extend this technique to show that Tuza's conjecture is valid for split graphs with minimum degree at least $\frac{3n}{5}$, and that $\tau(G) < \frac{28}{15}\nu(G)$ for every tripartite graph with minimum degree more than $\frac{33n}{56}$. Finally, we show that $\tau(G)\leq \frac{3}{2}\nu(G)$ when $G$ is a complete 4-partite graph; moreover, this bound is tight.
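On small graphs, $\tau(G)$ and $\nu(G)$ can be computed directly from their definitions, which is useful for sanity-checking bounds like those above. A hedged brute-force sketch in Python (exponential time; illustration only), checked on $K_4$ where $\tau=2$ and $\nu=1$:

```python
from itertools import combinations

def triangles(edges):
    E = {frozenset(e) for e in edges}
    verts = sorted({v for e in edges for v in e})
    return [t for t in combinations(verts, 3)
            if all(frozenset(p) in E for p in combinations(t, 2))]

def tau(edges):
    """Minimum number of edges meeting every triangle (exhaustive search)."""
    tris, E = triangles(edges), [frozenset(e) for e in edges]
    for size in range(len(E) + 1):
        for S in map(set, combinations(E, size)):
            if all(any(frozenset(p) in S for p in combinations(t, 2)) for t in tris):
                return size

def nu(edges):
    """Maximum number of pairwise edge-disjoint triangles (exhaustive search)."""
    tris = triangles(edges)
    for size in range(len(tris), 0, -1):
        for T in combinations(tris, size):
            used = [frozenset(p) for t in T for p in combinations(t, 2)]
            if len(used) == len(set(used)):   # no edge shared by two triangles
                return size
    return 0

K4 = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
assert tau(K4) == 2 and nu(K4) == 1       # tau <= 2*nu, as Tuza's conjecture predicts
```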
The classical Kosambi-Cartan-Chern (KCC) theory developed in differential geometry provides a powerful method for analyzing the behaviors of dynamical systems. In the KCC theory, the properties of a dynamical system are described in terms of five geometrical invariants, the second of which corresponds to the so-called Jacobi stability of the system. Unlike Lyapunov stability, which has been studied extensively in the literature, Jacobi stability has been investigated only more recently, using geometrical concepts and tools. The existing work on Jacobi stability analysis remains theoretical, however, and the problem of its algorithmic and symbolic treatment has yet to be addressed. In this paper, we initiate a study of this problem for a class of ODE systems of arbitrary dimension and propose two algorithmic schemes using symbolic computation to check whether a nonlinear dynamical system may exhibit Jacobi stability. The first scheme, based on the construction of the complex root structure of a characteristic polynomial and on the method of quantifier elimination, is capable of detecting the existence of Jacobi stability for the given dynamical system. The second scheme exploits methods for solving semi-algebraic systems and allows one to determine conditions on the parameters under which a given dynamical system has a prescribed number of Jacobi-stable fixed points. Several examples are presented to demonstrate the effectiveness of the proposed algorithmic schemes.
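To give a flavor of the first scheme, here is a small SymPy sketch under the usual KCC criterion that a fixed point is Jacobi stable when all eigenvalues of the deviation curvature matrix have negative real parts; the $2\times 2$ matrix $P$ and the parameter $\mu$ are hypothetical placeholders, and a Routh-Hurwitz test on the characteristic polynomial stands in for full quantifier elimination.

```python
import sympy as sp

# Hypothetical 2x2 deviation curvature matrix P(mu); under the usual KCC
# criterion, a fixed point is Jacobi stable when every eigenvalue of P has
# a negative real part there.
mu = sp.symbols('mu', real=True)
lam = sp.symbols('lambda')
P = sp.Matrix([[-1, mu],
               [mu, -2]])
coeffs = P.charpoly(lam).all_coeffs()      # [1, a1, a0] for lambda^2 + a1*lambda + a0
a1, a0 = coeffs[1], coeffs[2]
# Routh-Hurwitz, degree 2: both roots in the open left half-plane iff a1 > 0 and a0 > 0.
stable = sp.And(a1 > 0, a0 > 0)
print(stable)                               # condition on mu for Jacobi stability
print(sp.solveset(a0 > 0, mu, sp.S.Reals))  # here: Interval.open(-sqrt(2), sqrt(2))
```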
Copyright infringement may occur when a generative model produces samples substantially similar to some copyrighted data that it had access to during the training phase. The notion of access usually refers to including copyrighted samples directly in the training dataset, which one may inspect to identify an infringement. We argue that such visual auditing largely overlooks a concealed form of copyright infringement, in which one constructs a disguise that looks drastically different from the copyrighted sample yet still induces the effect of training Latent Diffusion Models on it. Such disguises require only indirect access to the copyrighted material and cannot be distinguished visually, thus easily circumventing current auditing tools. In this paper, we provide a better understanding of such disguised copyright infringement by uncovering the algorithm that generates the disguises, showing how the disguises can be revealed, and, importantly, showing how to detect them so as to augment the existing auditing toolbox. Additionally, we introduce a broader notion of acknowledgment for comprehending such indirect access.
We revisit the classical problem of multiclass classification with bandit feedback (Kakade, Shalev-Shwartz and Tewari, 2008), where each input belongs to one of $K$ possible labels and feedback is restricted to whether the predicted label is correct or not. Our primary inquiry concerns the dependency on the number of labels $K$, and whether $T$-step regret bounds in this setting can be improved beyond the $\smash{\sqrt{KT}}$ dependence exhibited by existing algorithms. Our main contribution is to show that the minimax regret of bandit multiclass classification is in fact more nuanced, and is of the form $\smash{\widetilde{\Theta}\left(\min \left\{|\mathcal{H}| + \sqrt{T}, \sqrt{KT \log |\mathcal{H}|} \right\} \right) }$, where $\mathcal{H}$ is the underlying (finite) hypothesis class. In particular, we present a new bandit classification algorithm that guarantees regret $\smash{\widetilde{O}(|\mathcal{H}|+\sqrt{T})}$, improving over classical algorithms for moderately-sized hypothesis classes, and give a matching lower bound establishing tightness of the upper bounds (up to log factors) in all parameter regimes.
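For concreteness, the interaction protocol (not the paper's algorithm) is easy to sketch: the learner observes $x_t$, predicts one of the $K$ labels, and receives a single bit indicating correctness. The toy version-space learner below, with a hypothetical finite class $\mathcal{H}$ and realizable labels, is our own illustration and carries no regret guarantee.

```python
import random

class VersionSpaceLearner:
    """Toy learner over a finite hypothesis class H (functions x -> label).
    Illustrates the one-bit bandit feedback model in the realizable case;
    this is NOT the paper's algorithm and carries no regret guarantee."""
    def __init__(self, H):
        self.H = list(H)
    def predict(self, x):
        return random.choice(self.H)(x)            # explore within the version space
    def update(self, x, y_hat, correct):
        if correct:                                # keep hypotheses that agree with y_hat
            self.H = [h for h in self.H if h(x) == y_hat]
        else:                                      # drop hypotheses that also predicted y_hat
            self.H = [h for h in self.H if h(x) != y_hat]

# K = 3 labels; the learner never sees the true label, only a correctness bit.
H = [lambda x, c=c: (x + c) % 3 for c in range(3)]  # hypothetical class of shifts
truth = H[1]
learner = VersionSpaceLearner(H)
for t in range(20):
    x = t % 7
    y_hat = learner.predict(x)
    learner.update(x, y_hat, y_hat == truth(x))
assert truth in learner.H                           # the truth is never eliminated
```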
This paper presents sufficient conditions for the stability and $\ell_2$-gain performance of recurrent neural networks (RNNs) with ReLU activation functions. These conditions are derived by combining Lyapunov/dissipativity theory with Quadratic Constraints (QCs) satisfied by repeated ReLUs. We write a general class of QCs for repeated ReLUs using known properties of the scalar ReLU. Our stability and performance condition uses these QCs along with a "lifted" representation of the ReLU RNN. We show that the positive homogeneity property satisfied by the scalar ReLU does not expand the class of QCs for the repeated ReLU. We present examples to demonstrate the stability/performance condition and study the effect of the lifting horizon.
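The scalar ReLU properties from which the repeated-ReLU QCs are assembled are easy to check numerically. The NumPy sketch below verifies nonnegativity, $v \ge u$, complementarity $v(v-u)=0$, and positive homogeneity for $v=\max(u,0)$ on random inputs; it illustrates the ingredients only, not the paper's lifted stability condition.

```python
import numpy as np

# Sanity check of scalar ReLU properties underlying the repeated-ReLU QCs
# (ingredients only; not the paper's lifted LMI stability condition).
rng = np.random.default_rng(0)
u = rng.normal(size=10_000)
v = np.maximum(u, 0.0)                                  # repeated ReLU, elementwise

assert np.all(v >= 0)                                   # nonnegativity
assert np.all(v >= u)                                   # v >= u
assert np.allclose(v * (v - u), 0.0)                    # complementarity: v(v - u) = 0
assert np.allclose(np.maximum(2.0 * u, 0.0), 2.0 * v)   # positive homogeneity
```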
We present a simple algorithm for differentiable rendering of surfaces represented by Signed Distance Fields (SDF), which makes it easy to integrate rendering into gradient-based optimization pipelines. To tackle visibility-related derivatives that make rendering non-differentiable, existing physically based differentiable rendering methods often rely on elaborate guiding data structures or reparameterization with a global impact on variance. In this article, we investigate an alternative that embraces nonzero bias in exchange for low variance and architectural simplicity. Our method expands the lower-dimensional boundary integral into a thin band that is easy to sample when the underlying surface is represented by an SDF. We demonstrate the performance and robustness of our formulation in end-to-end inverse rendering tasks, where it obtains results that are competitive with or superior to existing work.
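The core geometric idea is easy to prototype: with an SDF, membership in a thin band of width $2\varepsilon$ around the surface is a single function evaluation. The Python sketch below rejection-samples such a band around a 2D circle SDF; the sampler, band width, and domain are our own illustrative choices and omit the paper's estimator and weighting.

```python
import numpy as np

def sdf_circle(p, r=0.5):
    """Signed distance to a circle of radius r centered at the origin."""
    return np.linalg.norm(p, axis=-1) - r

def sample_thin_band(sdf, eps=0.01, n=1000, lo=-1.0, hi=1.0, seed=0):
    """Rejection-sample n points with |sdf(x)| < eps, i.e. a thin band of
    width 2*eps around the zero level set; the SDF reduces the membership
    test to one evaluation per candidate."""
    rng = np.random.default_rng(seed)
    pts = []
    while len(pts) < n:
        cand = rng.uniform(lo, hi, size=(8 * n, 2))
        pts.extend(cand[np.abs(sdf(cand)) < eps])
    return np.asarray(pts[:n])

band = sample_thin_band(sdf_circle)
assert np.all(np.abs(sdf_circle(band)) < 0.01)
```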
The NP-complete graph problem Cluster Editing seeks to transform a static graph into a disjoint union of cliques by making the fewest possible edits to the edges. We introduce a natural interpretation of this problem in temporal graphs, whose edge sets change over time. This problem is NP-complete even when restricted to temporal graphs whose underlying graph is a path, but we obtain two polynomial-time algorithms for restricted cases. In the static setting, it is well-known that a graph is a disjoint union of cliques if and only if it contains no induced copy of $P_3$; we demonstrate that no general characterisation involving sets of at most four vertices can exist in the temporal setting, but obtain a complete characterisation involving forbidden configurations on at most five vertices. This characterisation gives rise to an FPT algorithm parameterised simultaneously by the permitted number of modifications and the lifetime of the temporal graph.
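The static characterization quoted above is straightforward to operationalize. The following Python sketch tests whether a static graph is a disjoint union of cliques by searching for an induced $P_3$ (two adjacent edges $uv$, $vw$ with $u$ and $w$ non-adjacent); the temporal five-vertex configurations are not reproduced here.

```python
from itertools import combinations

def is_cluster_graph(adj):
    """True iff the static graph is a disjoint union of cliques, i.e. it has
    no induced P3: no adjacent edges u-v, v-w with u and w non-adjacent."""
    for v, nbrs in adj.items():
        for u, w in combinations(nbrs, 2):
            if w not in adj[u]:                    # u-v-w is an induced P3
                return False
    return True

assert is_cluster_graph({0: {1}, 1: {0}, 2: set()})       # K2 + K1
assert not is_cluster_graph({0: {1}, 1: {0, 2}, 2: {1}})  # P3 itself
```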
This paper considers a massive connectivity setting in which a base station (BS) aims to communicate sources $(X_1,\ldots,X_k)$ to a randomly activated subset of $k$ users, among a large pool of $n$ users, via a common downlink message. Although the identities of the $k$ active users are assumed to be known at the BS, each active user knows only whether it is itself active and does not know the identities of the other active users. A naive coding strategy is to transmit the sources alongside the identities of the users for which the source information is intended, which would require $H(X_1,\ldots,X_k) + k\log(n)$ bits, because the cost of specifying the identity of a user is $\log(n)$ bits. For large $n$, this overhead can be significant. This paper shows that it is possible to develop coding techniques that eliminate the dependency of the overhead on $n$, provided that the source distribution exhibits certain symmetry. Specifically, if the source distribution is independent and identically distributed (i.i.d.), then the overhead can be reduced to at most $O(\log(k))$ bits, and in the case of uniform i.i.d. sources, the overhead can be further reduced to $O(1)$ bits. For sources that follow a more general exchangeable distribution, the overhead is at most $O(k)$ bits, and in the case of finite-alphabet exchangeable sources, the overhead can be further reduced to $O(\log(k))$ bits. The downlink massive random access problem is closely connected to the study of finite exchangeable sequences. The proposed coding strategy allows bounds on the relative entropy distance between finite exchangeable distributions and i.i.d. mixture distributions to be developed, and gives a new relative entropy version of the finite de Finetti theorem that is scaling optimal.
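A quick back-of-the-envelope computation shows the scale of the naive identity overhead; the values $n=10^6$ and $k=100$ are illustrative choices of ours.

```python
import math

n, k = 10**6, 100                      # illustrative pool size and active-set size
naive = k * math.log2(n)               # ~ k*log(n) bits to name the k recipients
print(f"naive identity overhead: {naive:.0f} bits")         # ~1993 bits
print(f"O(log k) scale:          {math.log2(k):.1f} bits")  # ~6.6 bits for i.i.d. sources
```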
We popularize the question of whether, for $m$ large enough, all $m$-uniform shift-chain hypergraphs are properly $2$-colorable. On the other hand, we show that for every $m$ there are $m$-uniform shift-chains that do not admit a polychromatic $3$-coloring.
The airplane refueling problem is a nonlinear unconstrained optimization problem with $n!$ feasible solutions. Given a fleet of $n$ airplanes capable of mid-air refueling, the goal is to find the refueling policy that lets the last remaining airplane travel the farthest. To handle large-scale refueling instances, we introduce the notion of a sequential feasible solution, derived from structural properties of the refueling process. We prove that if an airplane refueling instance has feasible solutions, then it has sequential feasible solutions, and the optimal feasible solution is the optimal sequential feasible solution. We then propose a sequential search algorithm consisting of two steps. The first step enumerates all sequential feasible solutions; we prove that once the input size $n$ exceeds a certain index number, the number of sequential feasible solutions grows at a polynomial rate. The second step selects the maximal sequential feasible solution by bubble-sorting all sequential feasible solutions. Moreover, we build an efficient computability scheme that forecasts, in polynomial time, the computational complexity of the sequential search algorithm on any given airplane refueling instance, thereby providing a computational strategy for decision makers or algorithm users in light of their available computing resources.
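As a baseline against which the sequential search algorithm can be compared, the brute force over all $n!$ orders can be phrased generically in Python; the distance callback abstracts the instance physics, and the toy objective below is a hypothetical placeholder, not the paper's model.

```python
from itertools import permutations

def best_refueling_order(planes, distance):
    """Exhaustive baseline over all n! refueling orders (feasible for n <= ~9).
    `distance` maps an ordering to the distance reached by the last plane;
    the instance-specific physics is abstracted into this callback. The
    paper's sequential search instead explores only the sequential feasible
    orders, which it proves suffice for optimality."""
    return max(permutations(planes), key=distance)

# Hypothetical toy objective for demonstration only (not the paper's model):
# plane p contributes its fuel shared among the planes still flying.
fuel = {0: 1.0, 1: 2.0, 2: 3.0}
toy = lambda order: sum(fuel[p] / (len(order) - i) for i, p in enumerate(order))
print(best_refueling_order(fuel, toy))   # -> (0, 1, 2) under this toy objective
```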