Leroux has proved that unreachability in Petri nets can be witnessed by a Presburger separator, i.e., if a marking $\vec{m}_\text{src}$ cannot reach a marking $\vec{m}_\text{tgt}$, then there is a formula $\varphi$ of Presburger arithmetic such that: $\varphi(\vec{m}_\text{src})$ holds; $\varphi$ is forward invariant, i.e., $\varphi(\vec{m})$ and $\vec{m} \rightarrow \vec{m}'$ imply $\varphi(\vec{m}')$; and $\neg \varphi(\vec{m}_\text{tgt})$ holds. While these separators could serve as explanations and as formal certificates of unreachability, they have not been used in this way so far, due to their worst-case size, which is at least Ackermannian, and to the complexity of checking that a formula is a separator, which is at least exponential (in the formula size). We show that both obstacles can be overcome in continuous Petri nets. We introduce locally closed separators, and prove that: (a) unreachability can be witnessed by a locally closed separator computable in polynomial time; (b) checking whether a formula is a locally closed separator is in NC (and hence simpler than unreachability, which is P-complete). We further consider the more general problem of (existential) set-to-set reachability, where two sets of markings are given as convex polytopes. We show that, while our approach does not extend directly, we can efficiently certify unreachability via an altered Petri net.
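As a concrete (hypothetical) illustration of the separator notion, not taken from the paper, consider a toy net with two places and a single transition that moves one token from the first place to the second; the Presburger formula $m_1 + m_2 = 1$ then separates the source $(1,0)$ from the target $(0,2)$. A minimal Python sketch checking the three separator conditions, exhaustively on a bounded grid of discrete markings, could look as follows.

```python
from itertools import product

# Hypothetical toy net (discrete markings for simplicity): two places, one
# transition that moves a token from place 0 to place 1.
def successors(m):
    if m[0] >= 1:
        yield (m[0] - 1, m[1] + 1)

# Candidate Presburger separator: phi(m) := (m[0] + m[1] = 1)
def phi(m):
    return m[0] + m[1] == 1

src, tgt = (1, 0), (0, 2)
assert phi(src)                                   # (1) holds at the source
assert not phi(tgt)                               # (3) fails at the target
# (2) forward invariance, checked exhaustively on a bounded grid of markings
for m in product(range(4), repeat=2):
    if phi(m):
        assert all(phi(m2) for m2 in successors(m))
print(f"phi separates {src} from {tgt}")
```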
Regularising the primal formulation of optimal transport (OT) with a strictly convex term improves its numerical properties and yields a denser transport plan. Many formulations impose a global constraint on the transport plan, for instance by relying on entropic regularisation. Since it is more expensive to diffuse mass from outlier points than from central ones, this typically results in a significant imbalance in how mass is spread across points, which can be detrimental in applications where a minimum amount of smoothing is required per point. To remedy this, we introduce OT with Adaptive RegularIsation (OTARI), a new formulation of OT that imposes constraints on the mass going into and/or out of each point. We then showcase the benefits of this approach for domain adaptation.
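For context, the globally regularised baseline that OTARI contrasts with is entropic OT, typically solved with Sinkhorn iterations. The sketch below is a standard Sinkhorn solver, not OTARI itself (the per-point in/out constraints of OTARI are not shown); `reg` is the global regularisation strength.

```python
import numpy as np

def sinkhorn(a, b, C, reg, n_iters=500):
    """Standard entropic OT (global regularisation), solved by Sinkhorn iterations.
    a: source weights (n,), b: target weights (m,), C: cost matrix (n, m)."""
    K = np.exp(-C / reg)                 # Gibbs kernel
    u, v = np.ones_like(a), np.ones_like(b)
    for _ in range(n_iters):
        u = a / (K @ v)                  # alternating marginal projections
        v = b / (K.T @ u)
    return u[:, None] * K * v[None, :]   # dense transport plan

# Toy usage: two uniform distributions on 3 and 4 points.
rng = np.random.default_rng(0)
C = rng.random((3, 4))
P = sinkhorn(np.full(3, 1/3), np.full(4, 1/4), C, reg=0.1)
print(P.sum(axis=1), P.sum(axis=0))      # marginals approximately match a and b
```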
We provide the first online algorithm for spectral hypergraph sparsification. In the online setting, hyperedges with positive weights arrive in a stream, and upon the arrival of each hyperedge, we must irrevocably decide whether or not to include it in the sparsifier. Our algorithm produces an $(\epsilon, \delta)$-spectral sparsifier with multiplicative error $\epsilon$ and additive error $\delta$ that has $O(\epsilon^{-2} n (\log n)^2 \log(1 + \epsilon W/\delta n))$ hyperedges with high probability, where $\epsilon, \delta \in (0,1)$, $n$ is the number of nodes, and $W$ is the sum of edge weights. The space complexity of our algorithm is $O(n^2)$, whereas previous algorithms require $\Omega(m)$ space, where $m$ is the number of hyperedges. This is an exponential improvement in space complexity, since $m$ can be exponential in $n$.
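For intuition on the online model, the sketch below shows a simplified online spectral sparsifier for ordinary graphs (not hypergraphs, and not the paper's algorithm): each arriving edge is irrevocably kept with probability proportional to a ridge-leverage-type score computed from the current sparsifier, and reweighted to keep the estimate unbiased. The constants and the dense $n \times n$ Laplacian (matching the $O(n^2)$ space regime) are illustrative assumptions.

```python
import numpy as np

def online_sparsify(edge_stream, n, eps, delta, c=8.0, seed=0):
    """Simplified online spectral sparsifier for ordinary graphs (illustrative).
    Each arriving edge (u, v, w) is kept irrevocably with probability proportional
    to its ridge leverage score w.r.t. the current sparsifier, reweighted by 1/p."""
    rng = np.random.default_rng(seed)
    L = delta * np.eye(n)                      # regularised Laplacian of kept edges
    kept = []
    for u, v, w in edge_stream:
        b = np.zeros(n); b[u], b[v] = 1.0, -1.0
        score = w * b @ np.linalg.solve(L, b)  # ridge leverage score of the new edge
        p = min(1.0, c * np.log(n) / eps**2 * score)
        if rng.random() < p:
            kept.append((u, v, w / p))         # reweight to stay unbiased
            L += (w / p) * np.outer(b, b)
    return kept
```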
Neural networks often suffer from a feature preference problem: they tend to rely excessively on specific features to solve a task while disregarding other features, even if those neglected features are essential for the task. Feature preference has primarily been investigated in classification tasks. However, we observe that it also occurs in a high-dimensional regression task, namely source separation. To mitigate feature preference in source separation, we propose FEAture BAlancing by Suppressing Easy feature (FEABASE). This approach enables efficient data utilization by learning hidden information about the neglected feature. We evaluate our method on a multi-channel source separation task, where feature preference arises between spatial features and timbre features.
The support of a flow $x$ in a network is the subdigraph induced by the arcs $ij$ for which $x_{ij}>0$. We discuss a number of results on flows in networks where we place certain restrictions on the structure of the support of the flow. Many of these problems are NP-hard because they generalize linkage problems for digraphs. For example, deciding whether a network ${\cal N}=(D,s,t,c)$ has a maximum flow $x$ such that the maximum out-degree of the support $D_x$ of $x$ is at most 2 is NP-complete, as it contains the 2-linkage problem as a very special case. Another problem that is NP-complete for the same reason is determining the maximum flow we can send from $s$ to $t$ along 2 paths (called a maximum 2-path-flow) in ${\cal N}$. Baier et al. (2005) gave a polynomial algorithm that finds a 2-path-flow $x$ whose value is at least $\frac{2}{3}$ of the value of an optimum 2-path-flow. This is best possible unless P=NP. They also obtained a $\frac{2}{p}$-approximation of the maximum value of a $p$-path-flow for every $p\geq 2$. In this paper we give an algorithm which gets within a factor $\frac{1}{H(p)}$ of the optimum solution, where $H(p)$ is the $p$-th harmonic number ($H(p) \sim \ln(p)$). This improves the approximation bound of Baier et al. when $p\geq 5$. We show that when the network is acyclic, we can find a maximum $p$-path-flow in polynomial time for every $p$. We also determine the complexity of a number of related problems concerning the structure of flows. For the special case of acyclic digraphs, some of the results we obtain are in some sense best possible.
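The crossover point between the two guarantees is easy to verify numerically: the $\frac{1}{H(p)}$ factor exceeds $\frac{2}{p}$ exactly from $p = 5$ onward, e.g. $H(5) = \frac{137}{60}$, so $\frac{1}{H(5)} \approx 0.438 > 0.4 = \frac{2}{5}$. A short check:

```python
from fractions import Fraction

def H(p):
    """p-th harmonic number, computed exactly."""
    return sum(Fraction(1, k) for k in range(1, p + 1))

for p in range(2, 9):
    print(p, float(1 / H(p)), 2 / p, 1 / H(p) > Fraction(2, p))
# The 1/H(p) guarantee overtakes 2/p at p = 5: H(5) = 137/60, 1/H(5) ~ 0.438 > 0.4.
```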
We develop a general theory to optimize the frequentist regret for sequential learning problems, where efficient bandit and reinforcement learning algorithms can be derived from unified Bayesian principles. We propose a novel optimization approach to generate "algorithmic beliefs" at each round, and use Bayesian posteriors to make decisions. The optimization objective used to create these "algorithmic beliefs," which we term the "Algorithmic Information Ratio," is an intrinsic complexity measure that effectively characterizes the frequentist regret of any algorithm. To the best of our knowledge, this is the first systematic approach to making Bayesian-type algorithms prior-free and applicable to adversarial settings, in a generic and optimal manner. Moreover, the algorithms are simple and often efficient to implement. As a major application, we present a novel algorithm for multi-armed bandits that achieves "best-of-all-worlds" empirical performance in stochastic, adversarial, and non-stationary environments. We also illustrate how these principles can be used in linear bandits, bandit convex optimization, and reinforcement learning.
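As a point of reference (not the paper's algorithm), the classical way to "use Bayesian posteriors to make decisions" in a multi-armed bandit is Thompson sampling, sketched below for Bernoulli rewards; the approach described above replaces the fixed prior and posterior with optimized "algorithmic beliefs."

```python
import numpy as np

def thompson_bernoulli(true_means, horizon, seed=0):
    """Classical Thompson sampling for Bernoulli bandits: sample a belief about
    each arm from its Beta posterior, pull the arm whose sampled mean is largest,
    then update that arm's posterior with the observed reward."""
    rng = np.random.default_rng(seed)
    k = len(true_means)
    alpha, beta = np.ones(k), np.ones(k)      # Beta(1, 1) priors
    total = 0.0
    for _ in range(horizon):
        theta = rng.beta(alpha, beta)         # one posterior sample per arm
        arm = int(np.argmax(theta))
        reward = float(rng.random() < true_means[arm])
        alpha[arm] += reward
        beta[arm] += 1.0 - reward
        total += reward
    return total

print(thompson_bernoulli([0.2, 0.5, 0.8], horizon=1000))
```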
We demonstrate the possibility of consensus under the model and conditions used by Fischer, Lynch, and Paterson (FLP) to prove the impossibility of binary consensus, namely complete asynchrony and up to one unannounced process crash failure. We also show that: i) assembling, by every process, a dataset containing the initial values of the individual processes is an inevitable phase of binary consensus; and ii) agreeing on this dataset is sufficient for a quasi-binary consensus. Key findings: there is no direct causal relationship between complete asynchrony and the impossibility of solving consensus. The impossibility of solving consensus is caused only and entirely by the dependence of agreement on the content of the initial values.
We propose an efficient $\epsilon$-differentially private algorithm that, given a simple {\em weighted} $n$-vertex, $m$-edge graph $G$ with a \emph{maximum unweighted} degree $\Delta(G) \leq n-1$, outputs a synthetic graph that approximates the spectrum with a $\widetilde{O}(\min\{\Delta(G), \sqrt{n}\})$ bound on the purely additive error. To the best of our knowledge, this is the first $\epsilon$-differentially private algorithm with a non-trivial additive error for approximating the spectrum of a graph. One of the subroutines of our algorithm precisely simulates the exponential mechanism over a non-convex set, which could be of independent interest given the recent interest in sampling from a {\em log-concave distribution} defined over a convex set. Spectral approximation also allows us to approximate all possible $(S,T)$-cuts, but it incurs an error that depends on the maximum degree, $\Delta(G)$. We further show that, using our sampler, we can also output a synthetic graph that approximates the sizes of all $(S,T)$-cuts of an $n$-vertex weighted graph $G$ with $m$ edges while preserving $(\epsilon,\delta)$-differential privacy, with an additive error of $\widetilde{O}(\sqrt{mn}/\epsilon)$. We also give a matching lower bound (with respect to all the parameters) on private cut approximation for weighted graphs. This closes the gap of $\sqrt{W_{\mathsf{avg}}}$ between the upper and lower bounds of Eli{\'a}{\v{s}}, Kapralov, Kulkarni, and Lee (SODA 2020), where $W_{\mathsf{avg}}$ is the average edge weight.
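For readers unfamiliar with the exponential mechanism, the sketch below shows its textbook form over a finite candidate set; the paper's subroutine samples it exactly over a non-convex continuous set, which is the technically difficult part and is not reproduced here.

```python
import numpy as np

def exponential_mechanism(candidates, score_fn, sensitivity, eps, seed=0):
    """Textbook exponential mechanism over a finite candidate set: output r with
    probability proportional to exp(eps * score(r) / (2 * sensitivity))."""
    rng = np.random.default_rng(seed)
    scores = np.array([score_fn(r) for r in candidates], dtype=float)
    logits = eps * scores / (2.0 * sensitivity)
    logits -= logits.max()                    # numerical stability
    probs = np.exp(logits)
    probs /= probs.sum()
    return candidates[rng.choice(len(candidates), p=probs)]
```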
Humans perceive the world by concurrently processing and fusing high-dimensional inputs from multiple modalities such as vision and audio. Machine perception models, in stark contrast, are typically modality-specific and optimised for unimodal benchmarks; hence, late-stage fusion of final representations or predictions from each modality (`late fusion') is still a dominant paradigm for multimodal video classification. Instead, we introduce a novel transformer-based architecture that uses `fusion bottlenecks' for modality fusion at multiple layers. Compared to traditional pairwise self-attention, our model forces information between different modalities to pass through a small number of bottleneck latents, requiring the model to collate and condense the most relevant information in each modality and share only what is necessary. We find that such a strategy improves fusion performance while reducing computational cost. We conduct thorough ablation studies, and achieve state-of-the-art results on multiple audio-visual classification benchmarks including Audioset, Epic-Kitchens and VGGSound. All code and models will be released.
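A minimal sketch of one such fusion layer (a simplification under assumed shapes, not the released code): each modality self-attends over its own tokens concatenated with a small set of shared bottleneck tokens, and only the bottleneck tokens, averaged across modalities, carry information between the streams.

```python
import torch
import torch.nn as nn

class BottleneckFusionLayer(nn.Module):
    """One fusion layer in the spirit of attention bottlenecks (illustrative).
    Each modality self-attends over its own tokens concatenated with shared
    bottleneck tokens; only the averaged bottlenecks exchange information."""
    def __init__(self, dim, n_heads=8):
        super().__init__()
        self.audio_block = nn.TransformerEncoderLayer(dim, n_heads, batch_first=True)
        self.video_block = nn.TransformerEncoderLayer(dim, n_heads, batch_first=True)

    def forward(self, audio_tokens, video_tokens, bottleneck):
        # audio_tokens: (B, Na, D), video_tokens: (B, Nv, D), bottleneck: (B, Nb, D)
        nb = bottleneck.shape[1]
        a = self.audio_block(torch.cat([audio_tokens, bottleneck], dim=1))
        v = self.video_block(torch.cat([video_tokens, bottleneck], dim=1))
        audio_tokens, bn_a = a[:, :-nb], a[:, -nb:]
        video_tokens, bn_v = v[:, :-nb], v[:, -nb:]
        bottleneck = 0.5 * (bn_a + bn_v)      # the only cross-modal pathway
        return audio_tokens, video_tokens, bottleneck
```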
The notion of "in-domain data" in NLP is often over-simplistic and vague, as textual data varies in many nuanced linguistic aspects such as topic, style or level of formality. In addition, domain labels are many times unavailable, making it challenging to build domain-specific systems. We show that massive pre-trained language models implicitly learn sentence representations that cluster by domains without supervision -- suggesting a simple data-driven definition of domains in textual data. We harness this property and propose domain data selection methods based on such models, which require only a small set of in-domain monolingual data. We evaluate our data selection methods for neural machine translation across five diverse domains, where they outperform an established approach as measured by both BLEU and by precision and recall of sentence selection with respect to an oracle.
It is important to detect anomalous inputs when deploying machine learning systems. The use of larger and more complex inputs in deep learning magnifies the difficulty of distinguishing between anomalous and in-distribution examples. At the same time, diverse image and text data are available in enormous quantities. We propose leveraging these data to improve deep anomaly detection by training anomaly detectors against an auxiliary dataset of outliers, an approach we call Outlier Exposure (OE). This enables anomaly detectors to generalize and detect unseen anomalies. In extensive experiments on natural language processing and small- and large-scale vision tasks, we find that Outlier Exposure significantly improves detection performance. We also observe that cutting-edge generative models trained on CIFAR-10 may assign higher likelihoods to SVHN images than to CIFAR-10 images; we use OE to mitigate this issue. We also analyze the flexibility and robustness of Outlier Exposure, and identify characteristics of the auxiliary dataset that improve performance.
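The core of such an objective can be written compactly: standard cross-entropy on in-distribution data plus a term that pushes the model's predictive distribution on auxiliary outliers towards uniform. The sketch below is one plausible classification-style instantiation; the weight `lam` is an assumed hyperparameter.

```python
import torch.nn.functional as F

def oe_loss(logits_in, labels_in, logits_out, lam=0.5):
    """Sketch of an Outlier Exposure-style objective for classification:
    cross-entropy on in-distribution data plus a term pulling the predictive
    distribution on auxiliary outliers towards uniform (lam is assumed)."""
    ce_in = F.cross_entropy(logits_in, labels_in)
    # cross-entropy to the uniform distribution over classes, up to a constant
    ce_out = -F.log_softmax(logits_out, dim=1).mean()
    return ce_in + lam * ce_out
```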