Liftings of endofunctors on sets to endofunctors on relations are commonly used to capture bisimulation of coalgebras. Lax versions have been used in those cases where strict lifting fails to capture bisimilarity, as well as in modeling other notions of simulation. This paper provides tools for defining and manipulating lax liftings. As a central result, we define a notion of a lax distributive law of a functor over the powerset monad, and show that there is an isomorphism between the lattice of lax liftings and the lattice of lax distributive laws. We also study two functors in detail: (i) we show that the lifting for monotone bisimilarity is the minimal lifting for the monotone neighbourhood functor, and (ii) we show that the lattice of liftings for the (ordinary) neighbourhood functor is isomorphic to P(4), the powerset of a 4-element set.
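To fix intuition, here is a hedged sketch of the shape of the data, in our own notation rather than the paper's: a lax distributive law of a functor $T$ over the powerset monad $\mathcal{P}$ is a family of maps
\[ \lambda_X \colon T\mathcal{P}X \longrightarrow \mathcal{P}TX, \]
natural in $X$, for which the unit and multiplication axioms of an ordinary distributive law are required to hold only up to the pointwise inclusion order on $\mathcal{P}TX$, rather than on the nose. One standard route to the correspondence with liftings represents a relation $R \subseteq X \times Y$ as a map $X \to \mathcal{P}Y$; the precise lax axioms and the exact form of the isomorphism are as in the paper.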
Following complex instructions with a conversational assistant can be daunting, since attention and memory spans are shorter than when reading the same instructions. Hence, when conversational assistants walk users through the steps of complex tasks, the task needs to be structured into manageable pieces of information of the right length and complexity. In this paper, we tackle the recipes domain and convert instructions structured for reading into instructions structured for conversation. We annotated the structure of instructions according to a conversational scenario, which provided insights into what is expected in this setting. To computationally model the characteristics of a conversational step, we tested various Transformer-based architectures, showing that a token-based approach delivers the best results. A further user study showed that users tend to favor steps of manageable complexity and length, and that the proposed methodology can improve the original web-based instructional text. Specifically, 86% of the evaluated tasks were improved from a conversational-suitability point of view.
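One plausible reading of a token-based approach is per-token boundary prediction: label each token as continuing the current conversational step or opening a new one. The sketch below is purely illustrative; the base model, label scheme, and untrained classification head are our assumptions, not the paper's setup.

```python
# Hypothetical token-level step segmenter: label 0 = inside current step,
# label 1 = step boundary. The head below is untrained; the snippet only
# illustrates the shapes involved, not the paper's trained model.
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForTokenClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)

text = "Preheat the oven to 180C. Mix flour and sugar. Fold in the eggs."
enc = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**enc).logits            # (1, seq_len, 2)
boundaries = logits.argmax(-1)[0].tolist()  # per-token boundary decisions
```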
We develop a tractable model for studying strategic interactions between learning algorithms. We uncover a mechanism responsible for the emergence of algorithmic collusion. We observe that algorithms periodically coordinate on actions that are more profitable than static Nash equilibria. This novel collusive channel relies on an endogenous statistical linkage in the algorithms' estimates, which we call spontaneous coupling. The model's parameters predict whether the statistical linkage will appear, and which market structures facilitate algorithmic collusion. We show that spontaneous coupling can sustain collusion in prices and market shares, complementing experimental findings in the literature. Finally, we apply our results to design algorithmic markets.
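To make the setting concrete, here is a toy environment of the kind studied, not the paper's model: two bandit-style learners repeatedly play a pricing game whose static Nash outcome is less profitable than the collusive one. The payoffs, learning rule, and parameters are all illustrative assumptions.

```python
# Toy repeated pricing game (illustrative, not the paper's model):
# action 0 = low (Nash) price, action 1 = high (collusive) price.
# Undercutting pays 3, mutual low pays 1, mutual high pays 2 --
# a prisoner's dilemma, so the static Nash equilibrium is (low, low).
import numpy as np

rng = np.random.default_rng(0)
PAYOFF = np.array([[1, 3],
                   [0, 2]])   # PAYOFF[my_action, rival_action]

class EpsGreedyLearner:
    """Bandit learner keeping a running profit estimate per action."""
    def __init__(self, eps=0.05, lr=0.05):
        self.q = np.zeros(2)
        self.eps, self.lr = eps, lr
    def act(self):
        if rng.random() < self.eps:
            return int(rng.integers(2))
        return int(self.q.argmax())
    def update(self, action, reward):
        self.q[action] += self.lr * (reward - self.q[action])

a, b = EpsGreedyLearner(), EpsGreedyLearner()
for _ in range(100_000):
    pa, pb = a.act(), b.act()
    a.update(pa, PAYOFF[pa, pb])
    b.update(pb, PAYOFF[pb, pa])
print(a.q, b.q)   # the two learners' (possibly coupled) estimates
```

Because each learner's rewards depend on the other's evolving play, their estimates become statistically linked over time; the paper's spontaneous-coupling analysis concerns exactly when such a linkage arises.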
Given an array A[1:n] of n elements drawn from an ordered set, the sorted range selection problem is to build a data structure that can answer the following type of query efficiently: given a pair of indices $i, j$ $(1 \le i \le j \le n)$ and a positive integer k, report the k smallest elements of the sub-array A[i:j] in sorted order. Brodal et al. (Brodal, G.S., Fagerberg, R., Greve, M., and L{\'o}pez-Ortiz, A., Online sorted range reporting. Algorithms and Computation (2009), pp. 173--182) introduced the problem and gave an optimal solution: after O(n log n) preprocessing time, queries are answered in O(k) time using O(n) space. In this paper, we propose the only other possible optimal trade-off for the problem: a linear-space solution that answers a range selection query in O(k log k) time after O(n) preprocessing. Moreover, the proposed algorithm reports the output elements one by one in non-decreasing order. Our solution is simple and practical. We also describe an extremely simple method for range minima queries (most of whose parts are known) that takes almost (but not exactly) linear time; we believe this method may be faster and easier to implement in most practical cases.
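The sketch below is not the authors' structure; it illustrates the query semantics with a classical technique: an O(1)-time range-minimum query plus heap-guided interval splitting, which also yields the k smallest in non-decreasing order in O(k log k) time per query. The sparse table costs O(n log n) preprocessing rather than the paper's O(n), and the code is 0-indexed.

```python
# Hedged sketch: k smallest elements of A[i..j], reported in sorted order.
# Heap invariant: entries are the minima of disjoint unexplored intervals,
# so every pop is the global minimum of what remains.
import heapq

class SortedRangeSelect:
    def __init__(self, A):
        self.A = A
        n = len(A)
        # table[t][i] = index of the minimum of A[i .. i + 2^t - 1]
        self.table = [list(range(n))]
        span = 1
        while 2 * span <= n:
            prev = self.table[-1]
            cur = []
            for i in range(n - 2 * span + 1):
                l, r = prev[i], prev[i + span]
                cur.append(l if A[l] <= A[r] else r)
            self.table.append(cur)
            span *= 2

    def _argmin(self, i, j):
        # Index of the minimum of A[i..j] (inclusive) in O(1).
        t = (j - i + 1).bit_length() - 1
        l, r = self.table[t][i], self.table[t][j - (1 << t) + 1]
        return l if self.A[l] <= self.A[r] else r

    def query(self, i, j, k):
        """Yield the k smallest elements of A[i..j] in non-decreasing order."""
        p = self._argmin(i, j)
        heap = [(self.A[p], p, i, j)]
        for _ in range(min(k, j - i + 1)):
            val, p, lo, hi = heapq.heappop(heap)
            yield val
            # Split the popped interval around p and push the sub-minima.
            if lo <= p - 1:
                q = self._argmin(lo, p - 1)
                heapq.heappush(heap, (self.A[q], q, lo, p - 1))
            if p + 1 <= hi:
                q = self._argmin(p + 1, hi)
                heapq.heappush(heap, (self.A[q], q, p + 1, hi))

rs = SortedRangeSelect([5, 1, 4, 2, 3])
print(list(rs.query(1, 4, 3)))  # -> [1, 2, 3]
```

Each of the k pops pushes at most two intervals, so the heap holds O(k) entries and a query costs O(k log k) after preprocessing.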
We consider a random geometric hypergraph model based on an underlying bipartite graph. Nodes and hyperedges are sampled uniformly in a domain, and a node is assigned to those hyperedges that lie within a certain radius. From a modelling perspective, we explain how the model captures higher-order connections that arise in real data sets. Our main contribution is to study the connectivity properties of the model. In an asymptotic limit where the numbers of nodes and hyperedges grow in tandem, we give a condition on the radius that guarantees connectivity.
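A minimal simulation of the model, under our own assumptions about the domain (the unit square) and with illustrative parameter values, shows how connectivity can be checked via the associated bipartite graph:

```python
# Sample the bipartite random geometric hypergraph and test connectivity.
# Domain, n, m, and r are illustrative choices, not the paper's regime.
import numpy as np
from scipy.sparse import bmat, csr_matrix
from scipy.sparse.csgraph import connected_components

rng = np.random.default_rng(1)
n, m, r = 500, 200, 0.08
nodes = rng.random((n, 2))      # node positions in the unit square
centres = rng.random((m, 2))    # hyperedge positions
dist = np.linalg.norm(nodes[:, None, :] - centres[None, :, :], axis=-1)
B = csr_matrix((dist <= r).astype(np.int8))   # n x m incidence matrix

# The hypergraph is connected iff the associated bipartite graph is.
adj = bmat([[None, B], [B.T, None]], format="csr")
n_comp, _ = connected_components(adj, directed=False)
print("connected" if n_comp == 1 else f"{n_comp} components")
```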
Maximum inner product search (MIPS) over dense and sparse vectors has progressed independently in a bifurcated literature for decades; the latter is better known as top-$k$ retrieval in Information Retrieval. This duality exists because sparse and dense vectors serve different end goals, despite being manifestations of the same mathematical problem. In this work, we ask whether algorithms for dense vectors can be applied effectively to sparse vectors, particularly those that violate the assumptions underlying top-$k$ retrieval methods. We study IVF-based retrieval, where vectors are partitioned into clusters and only a fraction of the clusters is searched during retrieval. We conduct a comprehensive analysis of dimensionality reduction for sparse vectors, and examine standard and spherical KMeans for partitioning. Our experiments demonstrate that IVF serves as an efficient solution for sparse MIPS. As byproducts, we identify two research opportunities and demonstrate their potential. First, we cast the IVF paradigm as a dynamic pruning technique and turn that insight into a novel organization of the inverted index for approximate MIPS over general sparse vectors. Second, we offer a unified regime for MIPS over vectors that have dense and sparse subspaces, and show its robustness to query distributions.
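For readers unfamiliar with the IVF paradigm, here is a hedged toy sketch (class name and parameters are ours, and it operates on dense arrays; per the paper's analysis, sparse vectors would first be densified or dimensionality-reduced):

```python
# Toy IVF index for approximate MIPS: partition with KMeans, then search
# only the clusters whose centroids score highest against the query.
import numpy as np
from sklearn.cluster import KMeans

class IVFIndex:
    def __init__(self, vectors, n_clusters=64):
        self.vectors = vectors
        self.kmeans = KMeans(n_clusters=n_clusters, n_init=10).fit(vectors)
        self.lists = [np.flatnonzero(self.kmeans.labels_ == c)
                      for c in range(n_clusters)]

    def search(self, query, k=10, n_probe=4):
        scores = self.kmeans.cluster_centers_ @ query   # rank clusters
        probe = np.argsort(-scores)[:n_probe]
        cand = np.concatenate([self.lists[c] for c in probe])
        ips = self.vectors[cand] @ query     # exact inner products within
        return cand[np.argsort(-ips)[:k]]    # approximate top-k ids
```

The `n_probe` knob controls the accuracy-latency trade-off: probing more clusters recovers more of the exact top-$k$ at the cost of scoring more candidates.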
Doeblin coefficients are a classical tool for analyzing the ergodicity and exponential convergence rates of Markov chains. Propelled by recent works on contraction coefficients of strong data processing inequalities, we investigate whether Doeblin coefficients also exhibit some of the notable properties of canonical contraction coefficients. In this paper, we present several new structural and geometric properties of Doeblin coefficients. Specifically, we show that Doeblin coefficients form a multi-way divergence, exhibit tensorization, and possess an extremal trace characterization. We then show that they also have extremal coupling and simultaneously maximal coupling characterizations. By leveraging these characterizations, we demonstrate that Doeblin coefficients act as a natural generalization of the well-known total variation (TV) distance to a multi-way divergence, enabling us to measure the "distance" between multiple distributions rather than just two. We then prove that Doeblin coefficients exhibit contraction properties over Bayesian networks similar to other canonical contraction coefficients. We additionally derive several related results and discuss an application of Doeblin coefficients to distribution fusion. Finally, in a complementary vein, we introduce and discuss three new quantities: the max-Doeblin coefficient, max-DeGroot distance, and min-DeGroot distance. The max-Doeblin coefficient shares a connection with the concept of maximal leakage in information security; we explore its properties and provide a coupling characterization. The max-DeGroot and min-DeGroot measures, on the other hand, extend the concept of DeGroot distance to multiple distributions.
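As a hedged illustration in our own notation (not necessarily the paper's): for a row-stochastic kernel $W$ on finite alphabets, the classical Doeblin coefficient is
\[ \alpha(W) = \sum_{y} \min_{x} W(y \mid x), \]
and for two distributions $P$ and $Q$ the analogous quantity satisfies $\sum_{y} \min\{P(y), Q(y)\} = 1 - \|P - Q\|_{\mathrm{TV}}$. This identity is the sense in which a multi-way version over distributions $P_1, \dots, P_k$ generalizes the total variation distance.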
Recent contrastive representation learning methods rely on estimating mutual information (MI) between multiple views of an underlying context. For example, we can derive multiple views of a given image by applying data augmentation, or we can split a sequence into views comprising the past and future of some step in the sequence. Contrastive lower bounds on MI are easy to optimize, but have a strong underestimation bias when estimating large amounts of MI. We propose decomposing the full MI estimation problem into a sum of smaller estimation problems by splitting one of the views into progressively more informed subviews and by applying the chain rule on MI between the decomposed views. This expression contains a sum of unconditional and conditional MI terms, each measuring modest chunks of the total MI, which facilitates approximation via contrastive bounds. To maximize the sum, we formulate a contrastive lower bound on the conditional MI which can be approximated efficiently. We refer to our general approach as Decomposed Estimation of Mutual Information (DEMI). We show that DEMI can capture a larger amount of MI than standard non-decomposed contrastive bounds in a synthetic setting, and learns better representations in a vision domain and for dialogue generation.
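Concretely, in our notation (the paper's may differ): writing the split view as $Y = (Y_1, \dots, Y_m)$, the chain rule gives
\[ I(X; Y) = I(X; Y_1) + \sum_{t=2}^{m} I(X; Y_t \mid Y_1, \dots, Y_{t-1}), \]
and DEMI estimates each unconditional or conditional term with its own contrastive lower bound, so that no single estimator has to cover the full MI.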
Embedding entities and relations into a continuous multi-dimensional vector space has become the dominant method for knowledge graph embedding in representation learning. However, most existing models fail to represent hierarchical knowledge, such as the similarities and dissimilarities of entities within one domain. We propose to learn domain representations on top of existing knowledge graph embedding models, such that entities that have similar attributes are organized into the same domain. Such hierarchical knowledge of domains can provide further evidence for link prediction. Experimental results show that domain embeddings yield a significant improvement over recent state-of-the-art baseline knowledge graph embedding models.
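One hypothetical instantiation of the idea (the class, scoring function, and penalty below are our illustrative assumptions, not the paper's formulation) attaches a vector to each domain and pulls entity embeddings towards their domain's vector, so same-domain entities cluster together:

```python
# Hypothetical sketch: TransE-style KG embedding plus domain vectors.
import torch
import torch.nn as nn

class TransEWithDomains(nn.Module):
    def __init__(self, n_ent, n_rel, n_dom, dim=100):
        super().__init__()
        self.ent = nn.Embedding(n_ent, dim)
        self.rel = nn.Embedding(n_rel, dim)
        self.dom = nn.Embedding(n_dom, dim)   # one vector per domain

    def score(self, h, r, t):
        # TransE: plausible triples have small ||h + r - t||.
        return (self.ent(h) + self.rel(r) - self.ent(t)).norm(dim=-1)

    def domain_penalty(self, e, d):
        # Pull entity e towards the representation of its domain d.
        return (self.ent(e) - self.dom(d)).norm(dim=-1).mean()
```

Training would combine a standard margin-ranking loss on `score` with a weighted `domain_penalty` term; the weighting is a tunable assumption.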
Benefiting from the rapid development of deep learning techniques, salient object detection has achieved remarkable progress recently. However, two major challenges still hinder its application on embedded devices: low-resolution output and heavy model weight. To this end, this paper presents an accurate yet compact deep network for efficient salient object detection. More specifically, given a coarse saliency prediction in the deepest layer, we first employ residual learning to learn side-output residual features for saliency refinement, which can be achieved with very limited convolutional parameters while preserving accuracy. Secondly, we propose reverse attention to guide such side-output residual learning in a top-down manner. By erasing the currently predicted salient regions from side-output features, the network eventually explores the missing object parts and details, yielding high-resolution and accurate predictions. Experiments on six benchmark datasets demonstrate that the proposed approach compares favorably against state-of-the-art methods, with advantages in simplicity, efficiency (45 FPS), and model size (81 MB).
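A hedged sketch of one refinement stage, following the abstract's description rather than the exact released model (shapes, names, and the `refine_conv` block are illustrative):

```python
# Reverse attention on one side-output stage: erase already-detected
# salient regions, then let a small conv block predict a residual fix.
import torch
import torch.nn.functional as F

def reverse_attention_stage(side_feat, coarse_pred, refine_conv):
    """side_feat:   (B, C, H, W) side-output features
       coarse_pred: (B, 1, h, w) saliency logits from the deeper stage
       refine_conv: small conv block mapping (B, C, H, W) -> (B, 1, H, W)"""
    up = F.interpolate(coarse_pred, size=side_feat.shape[-2:],
                       mode="bilinear", align_corners=False)
    rev = 1.0 - torch.sigmoid(up)          # erase already-detected regions
    attended = side_feat * rev             # focus on missing parts/details
    residual = refine_conv(attended)       # residual saliency correction
    return up + residual                   # refined, higher-resolution logits
```

Because each stage only predicts a residual over an upsampled coarse map, the per-stage convolutions can stay small, which is consistent with the compactness claims above.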