We introduce a notion of compatibility between constraint encoding and compositional structure. Phrased in the language of category theory, it is given by a "composable constraint encoding". We show that every composable constraint encoding can be used to construct an equivalent notion of a constrained category in which morphisms are supplemented with the constraints they satisfy. We further describe how to express the compatibility of constraints with additional categorical structures of their targets, such as parallel composition, compactness, and time-symmetry. We present a variety of concrete examples. Some are familiar in the study of quantum protocols and quantum foundations, such as signalling and sectorial constraints; others arise by construction from basic categorical notions. We use the developed language to discuss the notion of intersectability of constraints and the simplifications it affords when present, and to show that any time-symmetric theory of relational constraints admits a faithful notion of intersection.
Orthology and paralogy relations are often inferred by methods based on gene similarity, which usually yield a graph depicting the relationships between gene pairs. Such relation graphs are known to frequently contain errors, as they cannot be explained via a gene tree that both contains the depicted orthologs/paralogs and is consistent with a species tree $S$. This idea of detecting errors through inconsistency with a species tree has mostly been studied under models that allow only speciation and duplication events. In this work, we ask: could the given set of relations be consistent if we allow lateral gene transfers in the evolutionary model? We formalize this question and provide a variety of algorithmic results regarding the underlying problems. Namely, we show that deciding if a relation graph $R$ is consistent with a given species network $N$ is NP-hard, and that it is W[1]-hard under the parameter "minimum number of transfers". However, we present an FPT algorithm based on the degree of the $DS$-tree associated with $R$. We also study analogous problems in the case where the transfer highways on a species tree are unknown.
Ultrafinitism postulates that we can only compute on relatively short objects, and that numbers beyond a certain value are not available. This approach would also forbid many forms of infinitary reasoning and would allow us to remove certain paradoxes stemming from enumeration theorems. However, philosophers still disagree about whether such a finitist logic would be consistent. We present preliminary work on a proof system based on the Curry-Howard isomorphism. We also present some well-known theorems that cease to hold in such systems, while their opposite statements become provable. This approach recasts certain impossibility results as logical paradoxes stemming from a profligate use of transfinite reasoning.
Learning symbolic music representations, especially disentangled representations with probabilistic interpretations, has been shown to benefit both music understanding and generation. However, most models are only applicable to short-term music, and learning long-term music representations remains a challenging task. Several studies have attempted to learn hierarchical representations directly in an end-to-end manner, but these models have not achieved the desired results and their training is unstable. In this paper, we propose a novel approach to learning long-term symbolic music representations through contextual constraints. First, we use contrastive learning to pre-train a long-term representation by constraining its difference from the short-term representation (extracted by an off-the-shelf model). Then, we fine-tune the long-term representation with a hierarchical prediction model such that a good long-term representation (e.g., an 8-bar representation) can reconstruct the corresponding short-term ones (e.g., the 2-bar representations within the 8-bar range). Experiments show that our method stabilizes both the pre-training and fine-tuning steps. In addition, the designed contextual constraints benefit both reconstruction and disentanglement, significantly outperforming the baselines.
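As a rough illustration of this two-stage recipe, the sketch below ties a long-term embedding to the short-term embeddings inside its window with a contrastive loss, and then fine-tunes it through a hierarchical predictor that reconstructs the short-term embeddings. It is not the paper's implementation; the module names, dimensions, and the InfoNCE/MSE loss choices are assumptions made for brevity.

```python
# Hedged sketch of the two-stage idea: contrastive pre-training of a long-term
# embedding against short-term embeddings, then fine-tuning with a hierarchical
# predictor. All module names and dimensions are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

D = 128          # embedding size (assumption)
N_SHORT = 4      # e.g. four 2-bar segments inside one 8-bar window

class LongTermEncoder(nn.Module):
    """Summarises the short-term embeddings of a window into one vector."""
    def __init__(self):
        super().__init__()
        self.rnn = nn.GRU(D, D, batch_first=True)

    def forward(self, short_embs):           # (B, N_SHORT, D)
        _, h = self.rnn(short_embs)
        return h.squeeze(0)                  # (B, D)

def contrastive_pretrain_loss(long_emb, short_embs, tau=0.1):
    """InfoNCE-style loss: a window's long-term embedding should be closer to
    the mean of its own short-term embeddings than to those of other windows."""
    target = F.normalize(short_embs.mean(dim=1), dim=-1)   # (B, D)
    query = F.normalize(long_emb, dim=-1)                  # (B, D)
    logits = query @ target.t() / tau                      # (B, B)
    labels = torch.arange(long_emb.size(0))
    return F.cross_entropy(logits, labels)

class HierarchicalPredictor(nn.Module):
    """Fine-tuning head: reconstruct each short-term embedding from the
    long-term one plus a position index (a stand-in for the paper's model)."""
    def __init__(self):
        super().__init__()
        self.pos = nn.Embedding(N_SHORT, D)
        self.net = nn.Sequential(nn.Linear(2 * D, D), nn.ReLU(), nn.Linear(D, D))

    def forward(self, long_emb):                            # (B, D)
        outs = []
        for i in range(N_SHORT):
            p = self.pos(torch.tensor(i)).expand_as(long_emb)
            outs.append(self.net(torch.cat([long_emb, p], dim=-1)))
        return torch.stack(outs, dim=1)                     # (B, N_SHORT, D)

# Toy usage: short_embs would come from a frozen off-the-shelf 2-bar encoder.
short_embs = torch.randn(8, N_SHORT, D)
enc, pred = LongTermEncoder(), HierarchicalPredictor()
long_emb = enc(short_embs)
loss = contrastive_pretrain_loss(long_emb, short_embs) \
     + F.mse_loss(pred(long_emb), short_embs)
loss.backward()
```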
Explainable AI constitutes a fundamental step towards establishing fairness and addressing bias in algorithmic decision-making. Despite the large body of work on the topic, the benefit of proposed solutions is mostly evaluated from a conceptual or theoretical point of view, and their usefulness for real-world use cases remains uncertain. In this work, we aim to state clear user-centric desiderata for explainable AI that reflect common explainability needs experienced in the statistical production systems of the European Central Bank. We link the desiderata to archetypical user roles and give examples of techniques and methods that can be used to address the users' needs. To this end, we provide two concrete use cases from the domain of statistical data production in central banks: the detection of outliers in the Centralised Securities Database and the data-driven identification of data quality checks for the Supervisory Banking data system.
Jacobs' hypernormalisation is a construction on finitely supported discrete probability distributions, obtained by generalising certain patterns occurring in quantitative information theory. In this paper, we generalise Jacobs' notion in turn, by describing a notion of hypernormalisation in the abstract setting of a symmetric monoidal category endowed with a linear exponential monad -- a structure arising in the categorical semantics of linear logic. We show that Jacobs' hypernormalisation arises in this fashion from the finitely supported probability measure monad on the category of sets, which can be seen as a linear exponential monad with respect to a non-standard monoidal structure on sets which we term the convex monoidal structure. We give the construction of this monoidal structure in terms of a quantum-algebraic notion known as a tricocycloid. Besides the motivating example, and its natural generalisations to the continuous context, we give a range of other instances of our abstract hypernormalisation, which swap out the side-effect of probabilistic choice for other important side-effects such as non-deterministic choice, logical choice via tests in a Boolean algebra, and input from a stream of values. Finally, we exploit our framework to describe a normalisation-by-trace-evaluation process for behaviours of various kinds of coalgebraic generative systems, including labelled transition systems, probabilistic generative systems, and stream processors.
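For readers who prefer a concrete picture, the following sketch spells out the finitely supported case of hypernormalisation as we read it: a distribution on a disjoint union is split into the total mass of each component together with the normalised conditional distribution on that component. The dict-based encoding and function names are our own illustrative choices, not notation from the paper.

```python
# A hedged, concrete reading of hypernormalisation for finitely supported
# distributions, represented as dicts mapping outcomes to probabilities.
# A distribution on a disjoint union X_1 + ... + X_n is given as a dict whose
# keys are (tag, outcome) pairs; hypernormalisation returns, per tag, the total
# mass of that component together with its normalised conditional distribution.

def hypernormalise(omega):
    """omega: dict mapping (tag, outcome) -> probability, summing to 1.
    Returns: dict mapping tag -> (weight, conditional distribution)."""
    totals = {}
    for (tag, _), p in omega.items():
        totals[tag] = totals.get(tag, 0.0) + p
    result = {}
    for tag, total in totals.items():
        if total > 0:
            conditional = {x: p / total for (t, x), p in omega.items() if t == tag}
            result[tag] = (total, conditional)
    return result

# Example: a distribution on the disjoint union {a, b} + {c}.
omega = {("L", "a"): 0.2, ("L", "b"): 0.3, ("R", "c"): 0.5}
print(hypernormalise(omega))
# -> {'L': (0.5, {'a': 0.4, 'b': 0.6}), 'R': (0.5, {'c': 1.0})}
```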
We investigate the complexity of explicit construction problems, where the goal is to produce a particular object of size $n$ possessing some pseudorandom property in time polynomial in $n$. We give overwhelming evidence that $\bf{APEPP}$, defined originally by Kleinberg et al., is the natural complexity class associated with explicit constructions of objects whose existence follows from the probabilistic method, by placing a variety of such construction problems in this class. We then demonstrate that a result of Je\v{r}\'{a}bek on provability in Bounded Arithmetic, when reinterpreted as a reduction between search problems, shows that constructing a truth table of high circuit complexity is complete for $\bf{APEPP}$ under $\bf{P}^{\bf{NP}}$ reductions. This illustrates that Shannon's classical proof of the existence of hard boolean functions is in fact a $\textit{universal}$ probabilistic existence argument: derandomizing his proof implies a generic derandomization of the probabilistic method. As a corollary, we prove that $\bf{EXP}^{\bf{NP}}$ contains a language of circuit complexity $2^{n^{\Omega(1)}}$ if and only if it contains a language of circuit complexity $\frac{2^n}{2n}$. Finally, for several of the problems shown to lie in $\bf{APEPP}$, we demonstrate direct polynomial time reductions to the explicit construction of hard truth tables.
We study constrained reinforcement learning (CRL) from a novel perspective by setting constraints directly on state density functions, rather than on the value functions considered by previous works. State density has a clear physical and mathematical interpretation and is able to express a wide variety of constraints, such as resource limits and safety requirements. Density constraints can also avoid the time-consuming process of designing and tuning the cost functions that value-function-based constraints require to encode system specifications. We leverage the duality between density functions and Q functions to develop an effective algorithm that solves the density-constrained RL problem optimally while guaranteeing that the constraints are satisfied. We prove that the proposed algorithm converges to a near-optimal solution with a bounded error even when the policy update is imperfect. We use a set of comprehensive experiments to demonstrate the advantages of our approach over state-of-the-art CRL methods, on a wide range of density-constrained tasks as well as standard CRL benchmarks such as Safety-Gym.
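To make the density-constraint idea tangible, here is a toy primal-dual sketch (not the paper's algorithm): per-state multipliers are folded into the reward seen by a tabular Q-learner and raised by dual ascent wherever the estimated visitation density exceeds its threshold. The environment, thresholds, and step sizes are invented for illustration.

```python
# A minimal primal-dual illustration (not the paper's exact algorithm) of how a
# per-state density constraint rho_pi(s) <= c[s] can be folded into the reward
# via multipliers, exploiting the fact that densities enter the Lagrangian
# linearly. Environment, thresholds, and step sizes are toy assumptions.
import numpy as np

n_states, n_actions, gamma = 5, 2, 0.9
c = np.full(n_states, 0.3)          # density thresholds (assumption)
lam = np.zeros(n_states)            # dual variables, one per state
Q = np.zeros((n_states, n_actions))
rng = np.random.default_rng(0)

def step(s, a):                      # toy MDP dynamics and reward
    s2 = (s + (1 if a == 1 else -1)) % n_states
    return s2, float(s2 == 0)

for it in range(200):
    # Roll out an epsilon-greedy policy and estimate the state visitation density.
    visits, s = np.zeros(n_states), rng.integers(n_states)
    for t in range(50):
        a = int(np.argmax(Q[s])) if rng.random() > 0.1 else int(rng.integers(n_actions))
        s2, r = step(s, a)
        # Q-learning on the Lagrangian reward r - lam[s2]: high-density states
        # become less attractive when their multiplier grows.
        td = (r - lam[s2]) + gamma * Q[s2].max() - Q[s, a]
        Q[s, a] += 0.1 * td
        visits[s2] += 1
        s = s2
    rho = visits / visits.sum()
    # Dual ascent: raise multipliers where the density constraint is violated.
    lam = np.maximum(0.0, lam + 0.5 * (rho - c))

print("estimated density:", np.round(rho, 2), "multipliers:", np.round(lam, 2))
```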
The attention model has become an important concept in neural networks and has been researched across diverse application domains. This survey provides a structured and comprehensive overview of developments in modeling attention. In particular, we propose a taxonomy that groups existing techniques into coherent categories. We review salient neural architectures in which attention has been incorporated and discuss applications in which modeling attention has shown a significant impact. Finally, we describe how attention has been used to improve the interpretability of neural networks. We hope this survey will provide a succinct introduction to attention models and guide practitioners in developing approaches for their applications.
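As a minimal reference point for the mechanisms such a survey covers, the snippet below implements plain scaled dot-product attention in NumPy; shapes and inputs are illustrative only.

```python
# Minimal NumPy sketch of scaled dot-product attention, the building block
# behind most attention-based architectures; shapes are illustrative.
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Q: (n_q, d), K: (n_k, d), V: (n_k, d_v) -> (n_q, d_v)."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                    # similarity of queries to keys
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over keys
    return weights @ V                               # weighted sum of values

rng = np.random.default_rng(0)
Q, K, V = rng.normal(size=(3, 8)), rng.normal(size=(5, 8)), rng.normal(size=(5, 4))
print(scaled_dot_product_attention(Q, K, V).shape)   # (3, 4)
```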
Developing classification algorithms that are fair with respect to sensitive attributes of the data has become an important problem due to the growing deployment of classification algorithms in various social contexts. Several recent works have focused on fairness with respect to a specific metric, modeled the corresponding fair classification problem as a constrained optimization problem, and developed tailored algorithms to solve it. Despite this, there remain important metrics for which we do not have fair classifiers, and many of the aforementioned algorithms do not come with theoretical guarantees, perhaps because the resulting optimization problem is non-convex. The main contribution of this paper is a new meta-algorithm for classification that takes as input a large class of fairness constraints, with respect to multiple non-disjoint sensitive attributes, and comes with provable guarantees. This is achieved by first developing a meta-algorithm for a large family of classification problems with convex constraints, and then showing that classification problems with general types of fairness constraints can be reduced to problems in this family. We present empirical results showing that our algorithm can achieve near-perfect fairness with respect to various fairness metrics, and that the loss in accuracy due to the imposed fairness constraints is often small. Overall, this work unifies several prior works on fair classification, presents a practical algorithm with theoretical guarantees, and can handle fairness metrics that previous methods could not.
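The sketch below is not the paper's meta-algorithm; it merely illustrates, on synthetic data, the general flavour of such constrained formulations: one fairness metric (statistical parity) is treated as a constraint on a logistic-regression relaxation and enforced through a Lagrange multiplier. All data, group assignments, and step sizes are assumptions.

```python
# An illustrative (not the paper's) Lagrangian treatment of statistical parity
# as a constraint on a logistic-regression relaxation; everything is synthetic.
import numpy as np

rng = np.random.default_rng(0)
n, d = 400, 5
X = rng.normal(size=(n, d))
group = (rng.random(n) < 0.5).astype(int)             # sensitive attribute
y = (X[:, 0] + 0.8 * group + 0.3 * rng.normal(size=n) > 0).astype(float)

w, lam, tau = np.zeros(d), 0.0, 0.05                  # tau: allowed parity gap

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for it in range(2000):
    p = sigmoid(X @ w)
    # Parity gap of the (relaxed) classifier between the two groups.
    gap = p[group == 1].mean() - p[group == 0].mean()
    # Gradient of the logistic loss plus the multiplier times the gap constraint.
    grad_loss = X.T @ (p - y) / n
    s = p * (1 - p)
    grad_gap = (X[group == 1].T @ s[group == 1]) / max((group == 1).sum(), 1) \
             - (X[group == 0].T @ s[group == 0]) / max((group == 0).sum(), 1)
    w -= 0.1 * (grad_loss + lam * np.sign(gap) * grad_gap)
    # Dual ascent on the constraint |gap| <= tau.
    lam = max(0.0, lam + 0.5 * (abs(gap) - tau))

print(f"final parity gap: {gap:+.3f}, accuracy: {((p > 0.5) == y).mean():.3f}")
```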
Despite the impressive quality improvements yielded by neural machine translation (NMT) systems, controlling their translation output to adhere to user-provided terminology constraints remains an open problem. We describe our approach to constrained neural decoding based on finite-state machines and multi-stack decoding, which supports target-side constraints as well as constraints with corresponding aligned input text spans. We demonstrate the performance of our framework on multiple translation tasks and motivate the need for constrained decoding with attention as a means of reducing misplacement and duplication when translating user constraints.
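The toy code below is our own illustration, not the authors' decoder: it shows the finite-state view of terminology constraints, where each constraint phrase advances through a small automaton as target tokens are emitted, and hypotheses can then be grouped into stacks by how many constraint tokens they have covered. All class and method names are hypothetical.

```python
# A toy illustration (not the authors' system) of the finite-state view of
# terminology constraints: each constraint phrase compiles to a chain of states,
# and partial hypotheses can be binned into "stacks" by coverage, so that
# constraint satisfaction can be tracked and encouraged during beam search.
from dataclasses import dataclass, field

@dataclass
class ConstraintFSM:
    phrases: list                                  # e.g. [["neural", "network"]]
    progress: list = field(default_factory=list)   # next token index per phrase

    def __post_init__(self):
        self.progress = [0] * len(self.phrases)

    def advance(self, token):
        """Consume one output token, advancing any phrase it continues."""
        for i, phrase in enumerate(self.phrases):
            if self.progress[i] < len(phrase) and phrase[self.progress[i]] == token:
                self.progress[i] += 1
            elif self.progress[i] < len(phrase):
                self.progress[i] = 1 if phrase[0] == token else 0
        return self

    def covered(self):
        return sum(self.progress)                  # constraint tokens matched so far

    def done(self):
        return all(p == len(ph) for p, ph in zip(self.progress, self.phrases))

# Usage: a decoder would keep one FSM per hypothesis and group hypotheses
# into stacks by covered(), as in multi-stack constrained decoding.
fsm = ConstraintFSM([["neural", "network"], ["decoder"]])
for tok in ["the", "neural", "network", "uses", "a"]:
    fsm.advance(tok)
print(fsm.covered(), fsm.done())   # 2 False  (still missing "decoder")
```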