We study deterministic distributed algorithms for broadcasting on multiple-access channels. Packet injection is modeled by leaky-bucket adversaries. There is a fixed set of stations attached to a channel. Additional features of the communication model include an upper bound on the number of stations activated in a round, individual injection rates, and randomness in generating and injecting packets. We demonstrate that some broadcast algorithms designed for ad-hoc channels, when executed on channels with a fixed number of stations against adversaries that can activate at most one station per round, have bounded latency for wider ranges of injection rates than on ad-hoc channels. Individual injection rates are shown to impact latency, as compared to the model of general leaky-bucket adversaries. We report the outcomes of experiments comparing the performance of broadcast algorithms against randomized adversaries; the experiments cover both deterministic algorithms and randomized backoff algorithms.
Determining whether approximate degree composes for Boolean functions remains a significant unsolved problem in Boolean function complexity. In recent decades, researchers have concentrated on proving that approximate degree composes for special types of inner and outer functions. An important and extensively studied class of functions is the recursive functions, i.e.~functions obtained by composing a base function with itself a number of times. Let $h^d$ denote the standard $d$-fold composition of the base function $h$. The main result of this work is to show that approximate degree composes if either of the following conditions holds: \begin{itemize} \item The outer function $f:\{0,1\}^n\to \{0,1\}$ is a recursive function of the form $h^d$, with $h$ being any base function and $d= \Omega(\log\log n)$. \item The inner function is a recursive function of the form $h^d$, with $h$ being any constant-arity base function (other than AND and OR) and $d= \Omega(\log\log n)$, where $n$ is the arity of the outer function. \end{itemize} In terms of proof techniques, we first observe that the lower bound for composition can be obtained by introducing majority in between the inner and the outer functions. We then show that majority can be \emph{efficiently eliminated} if the inner or outer function is a recursive function.
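For context, the composition question asks whether approximate degree multiplies under block composition; in standard notation (our rendering, not taken verbatim from the paper), the conjectured lower bound reads
\[
\widetilde{\mathrm{deg}}(f \circ g) \;=\; \Omega\big(\widetilde{\mathrm{deg}}(f) \cdot \widetilde{\mathrm{deg}}(g)\big),
\]
where $f \circ g$ applies $f$ to $n$ independent copies of $g$, and a recursive function is the special case $f = h^d$ with $h^d = h \circ h^{d-1}$ and $h^1 = h$.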
Argumentation is a formalism for reasoning with contradictory information by modeling arguments and their interactions. An increasing number of gradual semantics have emerged, together with impact measures that facilitate the interpretation of their outcomes. An impact measure assesses, for each argument, the impact of the other arguments on its score. In this paper, we refine an existing impact measure from Delobelle and Villata and introduce a new impact measure rooted in Shapley values. We introduce several principles to evaluate those two impact measures w.r.t. some well-known gradual semantics. This comprehensive analysis provides deeper insights into their functionality and desirability.
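To illustrate the Shapley-style construction (a sketch of one natural instantiation; the paper's exact definition may differ), treat each other argument $b$ as a player and the score of a fixed argument $a$, under the chosen gradual semantics restricted to a subset $S$ of arguments, as the coalition value $\sigma_S(a)$:
\[
\mathrm{Imp}(b, a) \;=\; \sum_{S \subseteq A \setminus \{a,b\}} \frac{|S|!\,(|A|-|S|-2)!}{(|A|-1)!}\,\big(\sigma_{S \cup \{b\}}(a) - \sigma_S(a)\big),
\]
where $A$ is the set of arguments; this averages, over all orders in which arguments can be added, the marginal effect of $b$ on the score of $a$.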
A dynamic graph algorithm is a data structure that supports edge insertions, edge deletions, and problem-specific queries. While extensive research exists on dynamic algorithms for graph problems solvable in polynomial time, most of these algorithms have not been implemented or empirically evaluated. This work addresses the NP-hard maximum weight and maximum cardinality independent set problems in a dynamic setting, applicable to areas like dynamic map labeling and vehicle routing. Real-world instances can be vast, with millions of vertices and edges, making it challenging to find near-optimal solutions quickly. Exact solvers can find optimal solutions but have exponential worst-case runtimes. Conversely, heuristic algorithms use local search techniques to improve solutions by repeatedly changing the assignment of individual vertices. In this work, we introduce a novel local search technique called optimal neighborhood exploration, sketched below. This technique creates independent subproblems that are solved to optimality, leading to improved overall solutions. Through numerous experiments, we assess the effectiveness of our approach and compare it with other state-of-the-art dynamic solvers. Our algorithm features a parameter, the subproblem size, that trades running time against solution quality. With a suitable subproblem size, our configuration matches state-of-the-art performance for the cardinality independent set problem; by increasing the parameter, we significantly enhance solution quality.
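A minimal sketch of the optimal-neighborhood-exploration idea, under our own assumptions (unweighted graph stored as a dict of adjacency sets, brute-force exact solver, illustrative function names; not the paper's implementation):

```python
from itertools import combinations

def is_independent(graph, vertices):
    """True if no two of the given vertices are adjacent."""
    return all(v not in graph[u] for u, v in combinations(vertices, 2))

def exact_mis(graph, vertices):
    """Brute-force maximum independent set on a small vertex subset."""
    vertices = list(vertices)
    for size in range(len(vertices), 0, -1):
        for cand in combinations(vertices, size):
            if is_independent(graph, cand):
                return set(cand)
    return set()

def neighborhood_exploration_step(graph, solution, v, k):
    """Carve out a subproblem of at most k vertices around v, solve it
    to optimality, and splice the result back if it improves things."""
    # Collect a bounded neighborhood around v via breadth-first search.
    sub, frontier = {v}, [v]
    while frontier and len(sub) < k:
        u = frontier.pop(0)
        for w in graph[u]:
            if w not in sub and len(sub) < k:
                sub.add(w)
                frontier.append(w)
    # Exclude subproblem vertices adjacent to a selected vertex outside
    # the subproblem, so the local optimum stays globally independent.
    free = {u for u in sub
            if not any(w in solution and w not in sub for w in graph[u])}
    best_local, old_local = exact_mis(graph, free), solution & sub
    if len(best_local) > len(old_local):
        return (solution - old_local) | best_local
    return solution
```

The subproblem size k is the quality/time knob the abstract describes: larger k makes the brute-force step exponentially more expensive but lets each step escape deeper local optima.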
We study a relational perspective on graph database querying. Such a perspective underlies various graph database systems, but few theoretical investigations have been conducted on it. This perspective offers a powerful and unified framework for studying graph database querying, in which algorithms and complexity bounds follow from classical results. We provide two concrete applications. The first is querying property graphs. The property graph data model supersedes previously proposed graph models and underlies the new standard GQL for graph query languages. We show that this standard can be, by and large, expressed by extensions of relational calculus with transitive closure operators (FO[TC]) and existential second-order quantifiers (ESO). With this, we obtain optimal data complexity bounds, along with extensions including schema validation. The second application is incorporating data from concrete domains (e.g., numbers) into graph database querying. We use embedded finite model theory and, by exploiting a generic Restricted Quantifier Collapse (RQC) result for FO[TC] and ESO, we obtain optimal data complexity bounds for GQL with arithmetic and comparisons. Moreover, we show that Regular Data Path Querying with operations on data (i.e., using register automata formalisms) can be captured in FO[TC] over embedded finite graphs while preserving nondeterministic logspace data complexity.
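As a hedged illustration (ours, not an example from the paper) of the logics involved: in FO[TC], reachability along the edge relation $E$ is expressed with a transitive closure operator,
\[
\varphi(x, y) \;=\; \big[\mathrm{TC}_{u,v}\; E(u,v)\big](x, y),
\]
and, in the same spirit, richer regular path patterns of the kind found in GQL can be compiled into FO[TC] formulas over a relational encoding of the graph.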
We study operations on fixpoint equation systems (FES) over arbitrary complete lattices. We investigate under which conditions these operations, such as substituting variables by their definitions or swapping the order of equations, preserve the solution of an FES. We provide rigorous, computer-checked proofs. Along the way, we list a number of known and new identities and inequalities on extremal fixpoints in complete lattices.
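For concreteness, a small illustrative example of such a system (our notation, not taken from the paper): an FES over a complete lattice $L$ is a sequence of equations, each marked as a least ($\mu$) or greatest ($\nu$) fixpoint, e.g.
\[
x_1 =_{\mu} f_1(x_1, x_2), \qquad x_2 =_{\nu} f_2(x_1, x_2),
\]
with monotone $f_1, f_2 : L \times L \to L$; the operations studied then ask, for instance, whether replacing $x_1$ inside $f_2$ by its defining expression $f_1(x_1, x_2)$, or listing the $\nu$-equation first, leaves the solution unchanged.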
MLsub is a minimal language with a type system combining subtyping and parametric polymorphism, and a type inference algorithm that infers compact principal types. Simple-sub is an alternative inference algorithm that can be implemented efficiently and is easier to understand. MLsub supports explicitly typed records, which are not extensible. Here we extend Simple-sub with extensible records, meaning that we can add new fields to a previously defined record. For this we add row variables to our type language and extend the type constraint solving method of our type inference algorithm accordingly, preserving the decidability of type inference.
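As a hedged illustration of the row-variable idea (our notation, which need not match Simple-sub's concrete syntax): a record known to contain at least a field x of type int receives a type whose tail is a row variable $\rho$ standing for the remaining, unknown fields,
\[
\{\, x : \mathrm{int} \mid \rho \,\},
\]
so that extending the record with a new field y during constraint solving instantiates $\rho$ to a fresh row $\{\, y : \tau \mid \rho' \,\}$ rather than failing, which is what keeps previously defined records open to new fields.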
Geometric deep learning (GDL), which is based on neural network architectures that incorporate and process symmetry information, has recently emerged as a paradigm in artificial intelligence. GDL holds particular promise for molecular modeling applications, in which various molecular representations with different symmetry properties and levels of abstraction exist. This review provides a structured and harmonized overview of molecular GDL, highlighting its applications in drug discovery, chemical synthesis prediction, and quantum chemistry. Emphasis is placed on the relevance of the learned molecular features and their complementarity to well-established molecular descriptors. Finally, the review outlines current challenges and opportunities and presents a forecast of the future of GDL for the molecular sciences.
Humans perceive the world by concurrently processing and fusing high-dimensional inputs from multiple modalities such as vision and audio. Machine perception models, in stark contrast, are typically modality-specific and optimised for unimodal benchmarks; hence late-stage fusion of final representations or predictions from each modality (`late fusion') is still the dominant paradigm for multimodal video classification. Instead, we introduce a novel transformer-based architecture that uses `fusion bottlenecks' for modality fusion at multiple layers. Compared to traditional pairwise self-attention, our model forces information between different modalities to pass through a small number of bottleneck latents, requiring the model to collate and condense the most relevant information in each modality and share only what is necessary. We find that this strategy improves fusion performance while simultaneously reducing computational cost. We conduct thorough ablation studies and achieve state-of-the-art results on multiple audio-visual classification benchmarks, including Audioset, Epic-Kitchens and VGGSound. All code and models will be released.
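A minimal sketch of the bottleneck-fusion idea in plain NumPy (single attention head, no learned projections or residuals; all names are ours), showing how each modality attends only over its own tokens plus a small set of shared bottleneck tokens, so cross-modal information must flow through the bottleneck:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(tokens):
    """Single-head attention with identity projections (Q = K = V)."""
    d = tokens.shape[-1]
    scores = tokens @ tokens.T / np.sqrt(d)
    return softmax(scores) @ tokens

def bottleneck_fusion_layer(audio, video, bottleneck):
    """One fusion layer: each modality attends over itself plus the
    shared bottleneck tokens; per-modality bottleneck updates are averaged."""
    a = self_attention(np.concatenate([audio, bottleneck]))
    v = self_attention(np.concatenate([video, bottleneck]))
    new_audio, bn_a = a[: len(audio)], a[len(audio):]
    new_video, bn_v = v[: len(video)], v[len(video):]
    return new_audio, new_video, (bn_a + bn_v) / 2

# Toy usage: 8 audio tokens, 12 video tokens, 4 bottleneck tokens, dim 16.
rng = np.random.default_rng(0)
audio, video = rng.normal(size=(8, 16)), rng.normal(size=(12, 16))
bottleneck = rng.normal(size=(4, 16))
audio, video, bottleneck = bottleneck_fusion_layer(audio, video, bottleneck)
```

Because the two modalities never attend to each other directly, the cost of fusion grows with the (small, fixed) number of bottleneck tokens rather than with the product of the two token counts, which is the source of the computational saving the abstract mentions.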
Recent contrastive representation learning methods rely on estimating mutual information (MI) between multiple views of an underlying context. For example, we can derive multiple views of a given image by applying data augmentation, or we can split a sequence into views comprising the past and future of some step in the sequence. Contrastive lower bounds on MI are easy to optimize, but have a strong underestimation bias when the amount of MI is large. We propose decomposing the full MI estimation problem into a sum of smaller estimation problems by splitting one of the views into progressively more informed subviews and applying the chain rule for MI to the decomposed views. The resulting expression is a sum of unconditional and conditional MI terms, each measuring a modest chunk of the total MI, which facilitates approximation via contrastive bounds. To maximize the sum, we formulate a contrastive lower bound on conditional MI that can be approximated efficiently. We refer to our general approach as Decomposed Estimation of Mutual Information (DEMI). We show that DEMI can capture a larger amount of MI than standard non-decomposed contrastive bounds in a synthetic setting, and that it learns better representations in a vision domain and for dialogue generation.
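The decomposition rests on the chain rule of mutual information: splitting a view $Y$ into subviews $Y_1, Y_2$ gives
\[
I(X; Y_1, Y_2) \;=\; I(X; Y_1) + I(X; Y_2 \mid Y_1),
\]
so each term on the right-hand side carries only part of the total MI and can be approximated by a contrastive lower bound; summing the bounds yields the objective DEMI maximizes.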
Embedding entities and relations into a continuous multi-dimensional vector space has become the dominant approach to knowledge graph embedding in representation learning. However, most existing models fail to represent hierarchical knowledge, such as the similarities and dissimilarities of entities within one domain. We propose to learn domain representations on top of existing knowledge graph embedding models, such that entities with similar attributes are organized into the same domain. Such hierarchical knowledge of domains can provide further evidence for link prediction. Experimental results show that domain embeddings yield a significant improvement over the most recent state-of-the-art baseline knowledge graph embedding models.