Let $(X, d)$ be a metric space and let $\mathcal{C} \subseteq 2^X$ be a collection of admissible request sets. In the $(X,d,\mathcal{C})$-chasing problem, an online player receives a sequence of requests $\{B_t\}_{t=1}^T \subseteq \mathcal{C}$ and responds with a trajectory $\{x_t\}_{t=1}^T$, starting from a given point $x_0$, such that $x_t \in B_t$ for every $t$. This response incurs a movement cost $\sum_{t=1}^T d(x_t, x_{t-1})$, and the online player strives to minimize the competitive ratio -- the worst-case ratio, over all input sequences, between the online movement cost and the optimal movement cost in hindsight. Under this setup, we call the $(X,d,\mathcal{C})$-chasing problem $\textit{chaseable}$ if there exists an online algorithm with a finite competitive ratio. In the case of Convex Body Chasing (CBC) over real normed vector spaces, (Bubeck et al. 2019) proved the chaseability of the problem. Furthermore, in the vector space setting, the dimension of the ambient space appears to be the factor controlling the size of the competitive ratio. Indeed, (Sellke 2020) recently provided a $d$-competitive online algorithm over arbitrary real normed vector spaces $(\mathbb{R}^d, \|\cdot\|)$, and we present a general strategy for obtaining novel lower bounds of the form $\Omega(d^c)$, $c > 0$, for CBC in the same setting. In this paper, we also prove that the $\textit{doubling}$ and $\textit{Assouad}$ dimensions of a metric space exert no control over the hardness of ball chasing over that metric space. More specifically, we show that for every sufficiently large $\rho \in \mathbb{R}$, there exists a metric space $(X,d)$ of doubling dimension $\Theta(\rho)$ and Assouad dimension $\rho$ such that no online selector can achieve a finite competitive ratio in the general ball chasing regime.
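In symbols, the competitive ratio of an online algorithm $\mathcal{A}$ referred to above can be written as
\[
\mathrm{CR}(\mathcal{A}) \;=\; \sup_{\{B_t\}_{t=1}^T} \; \frac{\sum_{t=1}^{T} d\bigl(x_t, x_{t-1}\bigr)}{\inf_{y_t \in B_t} \sum_{t=1}^{T} d\bigl(y_t, y_{t-1}\bigr)},
\]
where $\{x_t\}$ is the trajectory produced by $\mathcal{A}$, the infimum ranges over all feasible offline trajectories, and both trajectories start from the same point $y_0 = x_0$.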
The quantization problem aims to find the best possible approximation of probability measures on $\mathbb{R}^d$ using finite, discrete measures. The Wasserstein distance is a typical choice for measuring the quality of the approximation. This contribution investigates the properties and robustness of the entropy-regularized quantization problem, which relaxes the standard quantization problem. The proposed approximation technique naturally employs the softmin function, which is well known for its robustness from both theoretical and practical standpoints. Moreover, we use the entropy-regularized Wasserstein distance to evaluate the quality of the soft quantization problem's approximation, and we implement a stochastic gradient approach to compute optimal solutions. The control parameter in our proposed method allows the difficulty of the optimization problem to be adjusted, providing significant advantages when dealing with exceptionally challenging problems of interest. This contribution also empirically illustrates the performance of the method in various settings.
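To illustrate the role of the softmin, the following minimal sketch fits a fixed number of uniformly weighted atoms to a sample cloud by stochastic gradient descent on the entropy-smoothed squared distance; the squared-Euclidean cost, the uniform weights, and all parameter values are simplifying assumptions made for illustration, not the exact formulation used in the paper.

```python
import numpy as np

def softmin(costs, eps):
    """Entropy-smoothed minimum: -eps * log sum_k exp(-costs_k / eps).
    Tends to min(costs) as eps -> 0 and is smooth for eps > 0."""
    c = costs.min()                       # shift for numerical stability
    return c - eps * np.log(np.exp(-(costs - c) / eps).sum())

def soft_quantize(samples, n_atoms=8, eps=0.1, lr=0.05, n_steps=5000, seed=0):
    """Fit n_atoms support points to a sample cloud by stochastic gradient
    descent on the softmin of squared distances (uniform atom weights)."""
    rng = np.random.default_rng(seed)
    atoms = samples[rng.choice(len(samples), n_atoms, replace=False)].copy()
    for _ in range(n_steps):
        x = samples[rng.integers(len(samples))]           # one random sample
        d2 = ((atoms - x) ** 2).sum(axis=1)               # squared distances
        w = np.exp(-(d2 - d2.min()) / eps)
        w /= w.sum()                                      # soft assignment weights
        # gradient of softmin(d2, eps) w.r.t. atom k is w_k * 2 * (atoms_k - x)
        atoms -= lr * (w[:, None] * 2 * (atoms - x))
    return atoms

if __name__ == "__main__":
    data = np.random.default_rng(1).normal(size=(2000, 2))
    print(soft_quantize(data))
```

Taking eps smaller makes the objective closer to the hard quantization loss but less smooth, which is the sense in which the control parameter adjusts the difficulty of the optimization problem.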
Event-based sensors, distinguished by their high temporal resolution of $1\,\mathrm{\mu s}$ and a dynamic range of $120\,\mathrm{dB}$, stand out as ideal tools for deployment in fast-paced settings like vehicles and drones. Traditional object detection techniques that utilize Artificial Neural Networks (ANNs) face challenges due to the sparse and asynchronous nature of the events these sensors capture. In contrast, Spiking Neural Networks (SNNs) offer a promising alternative, providing a temporal representation that is inherently aligned with event-based data. This paper explores the unique membrane potential dynamics of SNNs and their ability to modulate sparse events. We introduce an innovative spike-triggered adaptive threshold mechanism designed for stable training. Building on these insights, we present a specialized spiking feature pyramid network (SpikeFPN) optimized for automotive event-based object detection. Comprehensive evaluations demonstrate that SpikeFPN surpasses both traditional SNNs and advanced ANNs enhanced with attention mechanisms. Notably, SpikeFPN achieves a mean Average Precision (mAP) of 0.477 on the GEN1 Automotive Detection (GAD) benchmark dataset, marking a significant increase of 9.7\% over the previous best SNN. Moreover, the efficient design of SpikeFPN ensures robust performance while optimizing computational resources, attributed to its innate sparse computation capabilities.
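As a rough, generic illustration of what a spike-triggered adaptive threshold can look like, the sketch below implements a leaky integrate-and-fire neuron whose threshold is raised after each spike and decays back over time; the constants, reset rule, and update are hypothetical and are not taken from SpikeFPN.

```python
import numpy as np

def lif_adaptive_threshold(inputs, tau_mem=0.9, tau_th=0.95,
                           v_th0=1.0, beta=0.5):
    """Generic leaky integrate-and-fire neuron whose firing threshold is
    raised by a fixed amount each time it spikes and decays back afterwards.
    `inputs` is a 1-D array of input currents, one entry per time step."""
    v, theta = 0.0, 0.0              # membrane potential, threshold adaptation
    spikes = []
    for i in inputs:
        v = tau_mem * v + i                  # leaky integration of the input
        s = float(v >= v_th0 + theta)        # spike if potential crosses threshold
        spikes.append(s)
        v = v * (1.0 - s)                    # hard reset after a spike
        theta = tau_th * theta + beta * s    # spike-triggered threshold increase
    return np.array(spikes)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    print(lif_adaptive_threshold(rng.uniform(0, 0.6, size=50)))
```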
We introduce two new classes of measures of information for statistical experiments which generalise and subsume $\phi$-divergences, integral probability metrics, $\mathfrak{N}$-distances (MMD), and $(f,\Gamma)$-divergences between two or more distributions. This enables us to derive a simple geometrical relationship between measures of information and the Bayes risk of a statistical decision problem, thus extending the variational $\phi$-divergence representation to multiple distributions in an entirely symmetric manner. The new families of divergences are closed under the action of Markov operators, which yields an information processing equality that refines and generalises the classical data processing inequality. This equality gives insight into the significance of the choice of the hypothesis class in classical risk minimization.
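For reference, the classical two-distribution variational representation being extended here is
\[
D_\phi(P \,\|\, Q) \;=\; \sup_{g} \Bigl\{ \mathbb{E}_{P}\bigl[g(X)\bigr] \;-\; \mathbb{E}_{Q}\bigl[\phi^{*}\bigl(g(X)\bigr)\bigr] \Bigr\},
\]
where $\phi^*$ denotes the convex conjugate of $\phi$ and the supremum runs over a suitable class of measurable functions $g$.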
In the Bin Packing problem one is given $n$ items with weights $w_1,\ldots,w_n$ and $m$ bins with capacities $c_1,\ldots,c_m$. The goal is to find a partition of the items into sets $S_1,\ldots,S_m$ such that $w(S_j) \leq c_j$ for every bin $j$, where $w(X)$ denotes $\sum_{i \in X}w_i$. Bj\"orklund, Husfeldt and Koivisto (SICOMP 2009) presented an $\mathcal{O}^\star(2^n)$ time algorithm for Bin Packing. In this paper, we show that for every $m \in \mathbf{N}$ there exists a constant $\sigma_m >0$ such that an instance of Bin Packing with $m$ bins can be solved in $\mathcal{O}(2^{(1-\sigma_m)n})$ randomized time. Before our work, such improved algorithms were not known even for $m = 4$. A key step in our approach is the following new result in Littlewood-Offord theory on the additive combinatorics of subset sums: For every $\delta >0$ there exists an $\varepsilon >0$ such that if $|\{ X\subseteq \{1,\ldots,n \} : w(X)=v \}| \geq 2^{(1-\varepsilon)n}$ for some $v$ then $|\{ w(X): X \subseteq \{1,\ldots,n\} \}|\leq 2^{\delta n}$.
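To make the problem statement concrete, here is a naive brute-force feasibility check over all $m^n$ assignments; it is only an illustration of the definition and bears no relation to the $\mathcal{O}^\star(2^n)$ or improved randomized algorithms discussed above.

```python
from itertools import product

def bin_packing_feasible(weights, capacities):
    """Decide Bin Packing by brute force: try every assignment of the n items
    to the m bins and check the capacity constraints.  Runs in O(m^n) time;
    purely illustrative."""
    n, m = len(weights), len(capacities)
    for assignment in product(range(m), repeat=n):
        loads = [0.0] * m
        for item, bin_idx in enumerate(assignment):
            loads[bin_idx] += weights[item]
        if all(loads[j] <= capacities[j] for j in range(m)):
            return True
    return False

if __name__ == "__main__":
    # True: the items split as {4, 3} and {3, 2, 2}, both fitting capacity 7
    print(bin_packing_feasible([4, 3, 3, 2, 2], [7, 7]))
```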
Given $n$-vertex simple graphs $X$ and $Y$, the friends-and-strangers graph $\mathsf{FS}(X, Y)$ has as its vertices all $n!$ bijections from $V(X)$ to $V(Y)$, with bijections $\sigma, \tau$ adjacent if and only if they differ on exactly two adjacent elements of $V(X)$, whose images under $\sigma$ (equivalently, under $\tau$) are adjacent in $Y$. We consider the setting where $X$ and $Y$ are both edge-subgraphs of $K_{r,r}$: due to a parity obstruction, $\mathsf{FS}(X,Y)$ is always disconnected in this setting. Sharpening a result of Bangachev, we show that if $X$ and $Y$ respectively have minimum degrees $\delta(X)$ and $\delta(Y)$ and they satisfy $\delta(X) + \delta(Y) \geq \lfloor 3r/2 \rfloor + 1$, then $\mathsf{FS}(X,Y)$ has exactly two connected components. This proves that the cutoff for $\mathsf{FS}(X,Y)$ to avoid isolated vertices is equal to the cutoff for $\mathsf{FS}(X,Y)$ to have exactly two connected components. We also consider a probabilistic setup in which we fix $Y$ to be $K_{r,r}$, but randomly generate $X$ by including each edge in $K_{r,r}$ independently with probability $p$. Invoking a result of Zhu, we exhibit a phase transition phenomenon with threshold function $(\log r)/r$: below the threshold, $\mathsf{FS}(X,Y)$ has more than two connected components with high probability, while above the threshold, $\mathsf{FS}(X,Y)$ has exactly two connected components with high probability. Altogether, our results settle a conjecture and completely answer two problems of Alon, Defant, and Kravitz.
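Since the definition is completely explicit, $\mathsf{FS}(X, Y)$ can be built by brute force for small examples; the sketch below does so assuming the networkx library and is exponential in $n$, serving only to illustrate the definition rather than any argument from the paper.

```python
from itertools import permutations
import networkx as nx

def friends_and_strangers(X, Y):
    """Build FS(X, Y) directly from the definition: vertices are bijections
    from V(X) to V(Y) (encoded as tuples), and two bijections are adjacent
    when they differ by swapping the images of a single edge of X whose two
    images form an edge of Y."""
    xs, ys = sorted(X.nodes), sorted(Y.nodes)
    FS = nx.Graph()
    for perm in permutations(ys):
        FS.add_node(perm)                       # perm[i] = image of xs[i]
    idx = {v: i for i, v in enumerate(xs)}
    for perm in list(FS.nodes):
        for a, b in X.edges:
            i, j = idx[a], idx[b]
            if Y.has_edge(perm[i], perm[j]):
                other = list(perm)
                other[i], other[j] = other[j], other[i]
                FS.add_edge(perm, tuple(other))
    return FS

if __name__ == "__main__":
    X = nx.path_graph(4)                        # a path on 4 vertices
    Y = nx.complete_bipartite_graph(2, 2)       # K_{2,2}
    FS = friends_and_strangers(X, Y)
    print(nx.number_of_connected_components(FS))
```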
The combined universal probability $\mathbf{m}(D)$ of strings $x$ in sets $D$ is close to $\max_{x \in D} \mathbf{m}(x)$: their logs differ by at most $D$'s information $\mathbf{I}(D:\mathcal{H})$ about the halting sequence $\mathcal{H}$.
An adjacency sketching or implicit labeling scheme for a family $\cal F$ of graphs is a method that, for any $n$-vertex graph $G \in \cal F$, assigns labels to the vertices of $G$ so that the labels of any two vertices determine whether or not they are adjacent. The goal is to come up with labeling schemes that use as few bits as possible to represent the labels. By using randomness when assigning labels, it is sometimes possible to produce adjacency sketches with much smaller label sizes, but this comes at the cost of introducing some probability of error. Both deterministic and randomized labeling schemes have been extensively studied, as they have applications to distributed data structures and deeper connections to universal graphs and communication complexity. The main question of interest is which graph families have schemes using short labels, usually $O(\log n)$ in the deterministic case or constant for randomized sketches. In this work we consider the resilience of probabilistic adjacency sketches against an adversary making adaptive queries to the labels. This differs from the previously analyzed probabilistic setting, which is ``one-shot''. We show that in the adaptive adversarial case the size of the labels is tightly related to the maximal degree of the graphs in $\cal F$. This results in a stronger characterization compared to what is known in the non-adversarial setting. In more detail, we construct sketches that fail with probability $\varepsilon$ for graphs with maximal degree $d$ using $2d\log(1/\varepsilon)$-bit labels, and we show that this is roughly the best that can be done for any specific graph of maximal degree $d$, e.g.\ a $d$-ary tree.
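For intuition, the sketch below gives one simple non-adaptive randomized construction in this spirit: each vertex stores a random ID together with the IDs of its neighbors, giving labels of roughly $O(d\log(d/\varepsilon))$ bits with one-sided error at most $\varepsilon$. This is an illustrative scheme, not the $2d\log(1/\varepsilon)$-bit construction of the paper.

```python
import math, random

def make_sketches(adjacency, eps, seed=0):
    """Randomized adjacency sketch for a graph of maximum degree d: every
    vertex draws a random l-bit ID and stores its own ID together with the
    IDs of its neighbors.  Non-adjacent pairs collide with probability at
    most 2*d/2**l, so l = ceil(log2(2*d/eps)) gives error at most eps."""
    rng = random.Random(seed)
    d = max((len(nbrs) for nbrs in adjacency.values()), default=1)
    l = max(1, math.ceil(math.log2(max(2 * d, 2) / eps)))
    ids = {v: rng.getrandbits(l) for v in adjacency}
    return {v: (ids[v], {ids[u] for u in adjacency[v]}) for v in adjacency}

def query(sketch_u, sketch_v):
    """Declare adjacency if either vertex's ID appears among the other's
    stored neighbor IDs; true edges are never missed."""
    id_u, nbrs_u = sketch_u
    id_v, nbrs_v = sketch_v
    return id_u in nbrs_v or id_v in nbrs_u

if __name__ == "__main__":
    graph = {0: {1}, 1: {0, 2}, 2: {1}, 3: set()}   # a path plus an isolated vertex
    sketches = make_sketches(graph, eps=0.01)
    print(query(sketches[0], sketches[1]), query(sketches[0], sketches[3]))
```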
Given a set $P$ of $n$ points and a set $S$ of $m$ disks in the plane, the disk coverage problem asks for a smallest subset of disks that together cover all points of $P$. The problem is NP-hard. In this paper, we consider a line-separable unit-disk version of the problem where all disks have the same radius and their centers are separated from the points of $P$ by a line $\ell$. We present an $m^{2/3}n^{2/3}2^{O(\log^*(m+n))} + O((n+m)\log (n+m))$ time algorithm for the problem. This improves the previous best result of $O(nm+ n\log n)$ time. Our techniques also solve the line-constrained version of the problem, where the centers of all disks of $S$ are located on a line $\ell$ while the points of $P$ can be anywhere in the plane. Our algorithm runs in $O(m\sqrt{n} + (n+m)\log(n+m))$ time, which improves the previous best result of $O(nm\log(m+n))$ time. In addition, our results lead to an algorithm of $n^{10/3}2^{O(\log^*n)}$ time for a half-plane coverage problem (given $n$ half-planes and $n$ points, find a smallest subset of half-planes covering all points); this improves the previous best algorithm of $O(n^4\log n)$ time. Further, if all half-planes are lower ones, our algorithm runs in $n^{4/3}2^{O(\log^*n)}$ time, while the previous best algorithm takes $O(n^2\log n)$ time.
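For concreteness, the underlying covering problem can be solved on tiny instances by enumerating subsets of disks in increasing size, as in the sketch below; this is purely illustrative and unrelated to the algorithms above.

```python
from itertools import combinations

def min_disk_cover(points, disks):
    """Brute-force smallest subset of disks covering all points; disks are
    (center_x, center_y, radius) triples.  Exponential in the number of disks."""
    def covered(p, d):
        (px, py), (cx, cy, r) = p, d
        return (px - cx) ** 2 + (py - cy) ** 2 <= r ** 2
    for k in range(len(disks) + 1):
        for subset in combinations(disks, k):
            if all(any(covered(p, d) for d in subset) for p in points):
                return list(subset)
    return None  # some point is covered by no disk

if __name__ == "__main__":
    pts = [(0, 0), (2, 0), (4, 0)]
    dks = [(0, 0, 1.5), (3, 0, 1.5), (2, 0, 2.5)]
    print(min_disk_cover(pts, dks))   # the single disk of radius 2.5 suffices
```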
Given a continuous definable function $f: S \to \mathbb{R}$ on a definable set $S$, we study sublevel sets of the form $S^f_t = \{x \in S: f(x) \leq t\}$ for all $t \in \mathbb{R}$. Using o-minimal structures, we prove that the Euler characteristic of $S^f_t$ is right continuous with respect to $t$. Furthermore, when $S$ is compact, we show that $S^f_{t+\delta}$ deformation retracts to $S^f_t$ for all sufficiently small $\delta > 0$. Applying these results, we also characterize the relationship between the concepts of Euler characteristic transform and smooth Euler characteristic transform in topological data analysis.
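A standard example (not taken from the paper) illustrating both statements: take $S = S^1 \subseteq \mathbb{R}^2$, which is compact and definable, and let $f(x,y) = y$ be the height function. Then
\[
\chi(S^f_t) \;=\;
\begin{cases}
0, & t < -1 \quad (\text{empty set}),\\
1, & -1 \le t < 1 \quad (\text{a point or a closed arc}),\\
0, & t \ge 1 \quad (\text{the whole circle}),
\end{cases}
\]
so $t \mapsto \chi(S^f_t)$ is right continuous everywhere but fails to be left continuous at $t = -1$ and $t = 1$; moreover, for all sufficiently small $\delta > 0$, each $S^f_{t+\delta}$ deformation retracts to $S^f_t$.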
We propose and analyze exact and inexact regularized Newton-type methods for finding a global saddle point of \emph{convex-concave} unconstrained min-max optimization problems. Compared to first-order methods, our understanding of second-order methods for min-max optimization is relatively limited, as obtaining global rates of convergence with second-order information is much more involved. In this paper, we examine how second-order information can be used to speed up extra-gradient methods, even under inexactness. Specifically, we show that the proposed algorithms generate iterates that remain within a bounded set and that the averaged iterates converge to an $\epsilon$-saddle point within $O(\epsilon^{-2/3})$ iterations in terms of a restricted gap function. Our algorithms match the theoretically established lower bound in this context, and our analysis provides a simple and intuitive proof of convergence for second-order methods without any boundedness requirements. Finally, we present a series of numerical experiments on synthetic and real data that demonstrate the efficiency of the proposed algorithms.
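For orientation, the sketch below implements the classical first-order extra-gradient baseline on a bilinear convex-concave instance $\min_x \max_y x^\top A y$; the step size, instance, and iterate averaging are illustrative choices, and the code is not one of the proposed Newton-type algorithms.

```python
import numpy as np

def extragradient_bilinear(A, steps=2000, eta=0.1, seed=0):
    """Classical (first-order) extragradient method on the convex-concave
    bilinear saddle problem min_x max_y x^T A y, whose unique saddle point
    is (0, 0).  Each iteration takes a gradient probe at the current point
    and then an update using the gradient at the probe point."""
    rng = np.random.default_rng(seed)
    n, m = A.shape
    x, y = rng.normal(size=n), rng.normal(size=m)
    xs, ys = x.copy(), y.copy()           # running averages of the iterates
    for t in range(1, steps + 1):
        # probe step
        xh = x - eta * (A @ y)
        yh = y + eta * (A.T @ x)
        # update step using gradients evaluated at the probe point
        x = x - eta * (A @ yh)
        y = y + eta * (A.T @ xh)
        xs += (x - xs) / t                # averaged iterates
        ys += (y - ys) / t
    return xs, ys

if __name__ == "__main__":
    A = np.array([[2.0, 1.0], [0.0, 1.0]])
    x_avg, y_avg = extragradient_bilinear(A)
    print(np.linalg.norm(x_avg), np.linalg.norm(y_avg))   # both should be small
```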