The problem Level Planarity asks for a crossing-free drawing of a graph in the plane such that vertices are placed at prescribed y-coordinates (called levels) and such that every edge is realized as a y-monotone curve. In the variant Constrained Level Planarity (CLP), each level $y$ is equipped with a partial order $\prec_y$ on its vertices and in the desired drawing the left-to-right order of vertices on level $y$ has to be a linear extension of $\prec_y$. Ordered Level Planarity (OLP) corresponds to the special case of CLP where the given partial orders $\prec_y$ are total orders. Previous results by Br\"uckner and Rutter [SODA 2017] and Klemz and Rote [ACM Trans. Alg. 2019] state that both CLP and OLP are NP-hard even in severely restricted cases. In particular, they remain NP-hard even when restricted to instances whose width (the maximum number of vertices that may share a common level) is at most two. In this paper, we focus on the other dimension: we study the parameterized complexity of CLP and OLP with respect to the height (the number of levels). We show that OLP parameterized by the height is complete with respect to the complexity class XNLP, which was first studied by Elberfeld et al. [Algorithmica 2015] (under a different name) and recently made more prominent by Bodlaender et al. [FOCS 2021]. It contains all parameterized problems that can be solved nondeterministically in time $f(k) n^{O(1)}$ and space $f(k) \log n$ (where $f$ is a computable function, $n$ is the input size, and $k$ is the parameter). If a problem is XNLP-complete, it lies in XP, but is W[$t$]-hard for every $t$. In contrast to OLP, which lies in XP when parameterized by the height, CLP turns out to be NP-hard even when restricted to instances of height 4. We complement this result by showing that CLP can be solved in polynomial time for instances of height at most 3.
Clustering methods must be tailored to the dataset they operate on, as there is no objective or universal definition of ``cluster''; nevertheless, arbitrariness in the clustering method must be minimized. This paper develops a quantitative ``stability'' method for determining clusters, in which stable or persistent clustering signals are taken to indicate that real structures have been identified in the underlying dataset. The method is based on modulating clustering methods by controlling a parameter: through a thermodynamic analogy, the modulation parameter is regarded as ``time'' and the evolving clustering methodologies as a ``heat flow.'' When the information entropy of the heat flow is stable over a wide range of times -- either globally or in the local sense which we define -- we interpret this stability as an indication that essential features of the data have been found, and we create clusters on this basis.
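To make the idea concrete, here is a minimal sketch, not the paper's actual construction: it assumes the modulation parameter is simply the scale parameter of an off-the-shelf clustering routine (DBSCAN's eps, standing in for the "time" of the heat flow), tracks the information entropy of the resulting cluster-size distribution as that parameter is swept, and flags ranges where the entropy stays flat. The plateau window and tolerance are illustrative choices.

```python
import numpy as np
from scipy.stats import entropy
from sklearn.cluster import DBSCAN
from sklearn.datasets import make_blobs

# Toy data with a known cluster structure.
X, _ = make_blobs(n_samples=500, centers=4, cluster_std=0.6, random_state=0)

# Sweep the modulation parameter and record the entropy of the cluster sizes.
eps_grid = np.linspace(0.05, 2.0, 60)
entropies = []
for eps in eps_grid:
    labels = DBSCAN(eps=eps, min_samples=5).fit_predict(X)
    _, counts = np.unique(labels, return_counts=True)
    entropies.append(entropy(counts / counts.sum()))
entropies = np.array(entropies)

# Crude plateau detector: windows over which the entropy barely changes.
stable = [i for i in range(len(eps_grid) - 5)
          if np.ptp(entropies[i:i + 5]) < 0.02]
print("stable eps values start at:", eps_grid[stable][:5])
```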
This work explores the hypothesis that subjectively attributed meaning constitutes the phenomenal content of conscious experience. That is, phenomenal content is semantic. This form of subjective meaning manifests as an intrinsic and non-representational character of qualia. Empirically, subjective meaning is ubiquitous in conscious experiences. We point to phenomenological studies that provide evidence in support of this. Furthermore, this notion of meaning closely relates to what Frege refers to as "sense", in metaphysics and philosophy of language. It also aligns with Peirce's "interpretant", in semiotics. We discuss how Frege's sense can also be extended to the raw feels of consciousness. Sense and reference both play a role in phenomenal experience. Moreover, within the context of the mind-matter relation, we provide a formalization of subjective meaning associated with one's mental representations. Identifying the precise maps between the physical and mental domains, we argue that syntactic and semantic structures transcend language, and are realized within each of these domains. Formally, meaning is a relational attribute, realized via a map that interprets syntactic structures of a formal system within an appropriate semantic space. The image of this map within the mental domain is what is relevant for experience, and thus comprises the phenomenal content of qualia. We conclude with possible implications this may have for experience-based theories of consciousness.
In this paper we study the $k$-stroll, point-to-point (P2P) orienteering, and deadline TSP problems on graphs with bounded doubling dimension and on graphs with bounded treewidth, and present approximation schemes for them. We are given a weighted graph $G=(V,E)$ with distances $d:E\rightarrow \mathbb{Q}^+$, a start node $s\in V$, and an integer $k$. In the $k$-stroll problem the goal is to find a path of minimum length starting at $s$ that visits at least $k$ vertices. The dual problem to $k$-stroll is rooted orienteering, in which instead of $k$ we are given a budget $B$ and the goal is to find a walk of length at most $B$ starting at $s$ that visits as many vertices as possible. In P2P orienteering we are given both a start node $s$ and an end node $t$ for the path. In deadline TSP we are given a deadline $D(v)$ for each $v\in V$ and the goal is to find a walk starting at $s$ that visits as many vertices as possible before their deadlines. The best known approximation for rooted or P2P orienteering is a $(2+\epsilon)$-approximation [12], and the best known for deadline TSP is an $O(\log n)$-approximation [3]. There is no known approximation scheme for deadline TSP on any metric (not even on trees). Our main result is the first approximation scheme for deadline TSP on metrics with bounded doubling dimension. To this end, we first show that if $G$ is a metric with doubling dimension $\kappa$ and aspect ratio $\Delta$, there is a $(1+\epsilon)$-approximation that runs in time $n^{O\left(\left(\log\Delta/\epsilon\right)^{2\kappa+1}\right)}$. We then extend this to obtain an approximation scheme for deadline TSP when the distances and deadlines are integers, running in time $n^{O\left(\left(\log \Delta/\epsilon\right)^{2\kappa+2}\right)}$. For graphs with treewidth $\omega$ we show how to solve $k$-stroll and P2P orienteering exactly in polynomial time, and we give a $(1+\epsilon)$-approximation for deadline TSP with running time $n^{O((\omega\log\Delta/\epsilon)^2)}$.
An approach is established for maximizing the Lower bound on the Mismatch capacity (hereafter abbreviated as the LM rate), a key performance bound in mismatched decoding, by optimizing the channel input probability distribution. Under a fixed channel input probability distribution, the computation of the corresponding LM rate is a convex optimization problem. When the channel input probability distribution is also optimized, however, the corresponding optimization problem takes a max-min form, which is generally non-convex and intractable with standard approaches. To solve this problem, a novel dual form of the LM rate is proposed, thereby transforming the max-min formulation into an equivalent double maximization. In the new formulation, the optimization over each individual direction is a convex problem. Consequently, an alternating maximization algorithm is established to solve it. Each step of the algorithm involves only a closed-form update, which is efficiently implemented with standard optimization procedures. Numerical experiments show that the proposed approach for optimizing the LM rate leads to noticeable rate gains.
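The closed-form updates above are specific to the proposed dual form of the LM rate. Purely as a hedged illustration of the same algorithmic pattern (alternating maximization in which each coordinate update is closed-form), the sketch below runs the classical Blahut-Arimoto algorithm for the matched channel capacity on a toy channel; it is a named stand-in, not the paper's iteration.

```python
import numpy as np

def blahut_arimoto(W, iters=200):
    """W[x, y]: channel transition probabilities P(y|x), assumed positive.
    Returns the capacity (in nats) and the maximizing input distribution."""
    nx, ny = W.shape
    p = np.full(nx, 1.0 / nx)
    for _ in range(iters):
        # Closed-form update 1: reverse channel q(x|y) for the current p(x).
        q = p[:, None] * W
        q /= q.sum(axis=0, keepdims=True)
        # Closed-form update 2: input distribution p(x) for the current q(x|y).
        logit = (W * np.log(q + 1e-300)).sum(axis=1)
        p = np.exp(logit - logit.max())
        p /= p.sum()
    # Mutual information achieved by the final input distribution.
    q = p[:, None] * W
    q /= q.sum(axis=0, keepdims=True)
    cap = np.sum(p[:, None] * W * np.log((q + 1e-300) / (p[:, None] + 1e-300)))
    return cap, p

W = np.array([[0.9, 0.1],
              [0.2, 0.8]])   # toy binary-input, binary-output channel
print(blahut_arimoto(W))
```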
We build on the view of the Exact Renormalization Group (ERG) as an instantiation of Optimal Transport described by a functional convection-diffusion equation. We provide a new information theoretic perspective for understanding the ERG through the intermediary of Bayesian Statistical Inference. This connection is facilitated by the Dynamical Bayesian Inference scheme, which encodes Bayesian inference in the form of a one-parameter family of probability distributions solving an integro-differential equation derived from Bayes' law. In this note, we demonstrate how the Dynamical Bayesian Inference equation is, itself, equivalent to a diffusion equation, which we dub Bayesian Diffusion. Identifying the features that define Bayesian Diffusion and mapping them onto the features that define the ERG, we obtain a dictionary outlining how renormalization can be understood as the inverse of statistical inference.
We propose a system for visual scene analysis and recognition based on encoding the sparse, latent feature-representation of an image into a high-dimensional vector that is subsequently factorized to parse scene content. The sparse feature representation is learned from image statistics via convolutional sparse coding, while scene parsing is performed by a resonator network. The integration of sparse coding with the resonator network increases the capacity of distributed representations and reduces collisions in the combinatorial search space during factorization. We find that for this problem the resonator network is capable of fast and accurate vector factorization, and we develop a confidence-based metric that assists in tracking the convergence of the resonator network.
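As a concrete illustration of the factorization step, the following minimal sketch implements a standard bipolar resonator network that recovers three factors from their elementwise (Hadamard) binding. The vector dimension, codebook sizes, and iteration count are illustrative assumptions, and the sketch omits the sparse-coding front end and the confidence-based convergence metric described above.

```python
import numpy as np

rng = np.random.default_rng(0)
D, M = 2000, 20                      # vector dimension, codewords per factor

# Random bipolar codebooks for three factors (e.g., shape, position, color).
codebooks = [rng.choice([-1, 1], size=(M, D)) for _ in range(3)]

# Compose a scene vector by binding (elementwise product) one code per factor.
true_idx = [int(rng.integers(M)) for _ in range(3)]
s = np.prod([cb[i] for cb, i in zip(codebooks, true_idx)], axis=0)

# Initialize each factor estimate as the (signed) superposition of its codebook.
estimates = [np.sign(cb.sum(axis=0)) for cb in codebooks]
estimates = [np.where(e == 0, 1.0, e) for e in estimates]

# Resonator iteration: refresh each factor by unbinding the other current
# estimates from s and cleaning up against that factor's codebook.
for _ in range(50):
    for f in range(3):
        others = np.prod([estimates[g] for g in range(3) if g != f], axis=0)
        unbound = s * others                     # unbinding (bipolar: x*x = 1)
        estimates[f] = np.sign(codebooks[f].T @ (codebooks[f] @ unbound))
        estimates[f][estimates[f] == 0] = 1.0

decoded = [int(np.argmax(cb @ e)) for cb, e in zip(codebooks, estimates)]
print("decoded:", decoded, "true:", true_idx)
```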
The dominant NLP paradigm of training a strong neural predictor to perform one task on a specific dataset has led to state-of-the-art performance in a variety of applications (e.g., sentiment classification, span-prediction based question answering, or machine translation). However, it builds upon the assumption that the data distribution is stationary, i.e., that the data is sampled from a fixed distribution both at training and test time. This way of training is inconsistent with how we as humans are able to learn from and operate within a constantly changing stream of information. Moreover, it is ill-adapted to real-world use cases where the data distribution is expected to shift over the course of a model's lifetime. The first goal of this thesis is to characterize the different forms this shift can take in the context of natural language processing, and to propose benchmarks and evaluation metrics to measure its effect on current deep learning architectures. We then proceed to take steps to mitigate the effect of distributional shift on NLP models. To this end, we develop methods based on parametric reformulations of the distributionally robust optimization framework. Empirically, we demonstrate that these approaches yield more robust models on a selection of realistic problems. In the third and final part of this thesis, we explore ways of efficiently adapting existing models to new domains or tasks. Our contribution to this topic takes inspiration from information geometry to derive a new gradient update rule which alleviates catastrophic forgetting issues during adaptation.
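The thesis's methods are parametric reformulations of distributionally robust optimization; as a hedged point of reference only, the sketch below shows the basic non-parametric group-DRO training loop, in which an adversarial distribution over data groups is pushed toward the highest-loss groups and the model minimizes the reweighted loss. The model, groups, step size, and data are toy placeholders, not the thesis's setup.

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 2)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()
n_groups, eta = 3, 0.5
q = torch.full((n_groups,), 1.0 / n_groups)   # adversary's weights over groups

for step in range(100):
    # One toy batch per group (random placeholders for real domain data).
    group_losses = torch.stack([
        loss_fn(model(torch.randn(16, 10)), torch.randint(0, 2, (16,)))
        for _ in range(n_groups)])
    with torch.no_grad():
        # Exponentiated-gradient ascent: upweight the worst-off groups.
        q = q * torch.exp(eta * group_losses)
        q = q / q.sum()
    loss = (q * group_losses).sum()           # model minimizes the reweighted loss
    opt.zero_grad(); loss.backward(); opt.step()
```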
AI is undergoing a paradigm shift with the rise of models (e.g., BERT, DALL-E, GPT-3) that are trained on broad data at scale and are adaptable to a wide range of downstream tasks. We call these models foundation models to underscore their critically central yet incomplete character. This report provides a thorough account of the opportunities and risks of foundation models, ranging from their capabilities (e.g., language, vision, robotics, reasoning, human interaction) and technical principles (e.g., model architectures, training procedures, data, systems, security, evaluation, theory) to their applications (e.g., law, healthcare, education) and societal impact (e.g., inequity, misuse, economic and environmental impact, legal and ethical considerations). Though foundation models are based on standard deep learning and transfer learning, their scale results in new emergent capabilities, and their effectiveness across so many tasks incentivizes homogenization. Homogenization provides powerful leverage but demands caution, as the defects of the foundation model are inherited by all the adapted models downstream. Despite the impending widespread deployment of foundation models, we currently lack a clear understanding of how they work, when they fail, and what they are even capable of due to their emergent properties. To tackle these questions, we believe much of the critical research on foundation models will require deep interdisciplinary collaboration commensurate with their fundamentally sociotechnical nature.
Deep Convolutional Neural Networks have pushed the state of the art for semantic segmentation, provided that a large set of images together with pixel-wise annotations is available. Data collection is expensive, and transfer learning is one way to alleviate it. Transfer learning reduces the amount of annotated data required to train the network, but it does not eliminate this heavy annotation step. We propose a method of transfer learning without annotations on the target task, for datasets with redundant content and distinct pixel distributions. Our method takes advantage of the approximate content alignment of the images between the two datasets when the approximation error prevents the reuse of annotations from one dataset in the other. Given the annotations for only one dataset, we train a first network in a supervised manner. This network autonomously learns to generate deep data representations relevant to the semantic segmentation. Then, using the images of the new dataset, we train a second network to generate a deep data representation that matches the one produced by the first network on the previous dataset. The training consists of a regression between feature maps and does not require any annotations on the new dataset. We show that this method reaches performance similar to classic transfer learning on the PASCAL VOC dataset with synthetic transformations.
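A minimal sketch of the feature-map regression step, under several assumptions: the backbones, layer choice, and data below are toy placeholders (random tensors stand in for content-aligned image pairs from the two datasets), and only the unsupervised regression loop itself follows the description above.

```python
import torch
import torch.nn as nn

# Toy feature extractors standing in for the real segmentation backbones.
def backbone():
    return nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())

teacher = backbone()   # trained with supervision on the annotated dataset
student = backbone()   # to be adapted to the new, unannotated dataset
teacher.eval()

opt = torch.optim.Adam(student.parameters(), lr=1e-3)
mse = nn.MSELoss()

for step in range(100):
    # Roughly content-aligned image pairs from the two datasets
    # (random tensors used here as placeholders).
    src_img = torch.rand(4, 3, 64, 64)   # annotated-domain image
    tgt_img = torch.rand(4, 3, 64, 64)   # aligned new-domain image
    with torch.no_grad():
        target_feat = teacher(src_img)   # deep representation to imitate
    loss = mse(student(tgt_img), target_feat)   # feature-map regression
    opt.zero_grad(); loss.backward(); opt.step()
```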
Image segmentation is an important component of many image understanding systems. It aims to group pixels in a spatially and perceptually coherent manner. Typically, these algorithms have a collection of parameters that control the degree of over-segmentation produced. It still remains a challenge to properly select such parameters for human-like perceptual grouping. In this work, we exploit the diversity of segments produced by different choices of parameters. We scan the segmentation parameter space and generate a collection of image segmentation hypotheses (from highly over-segmented to under-segmented). These are fed into a cost minimization framework that produces the final segmentation by selecting segments that: (1) better describe the natural contours of the image, and (2) are more stable and persistent among all the segmentation hypotheses. We compare our algorithm's performance with state-of-the-art algorithms, showing that we can achieve improved results. We also show that our framework is robust to the choice of segmentation kernel that produces the initial set of hypotheses.
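To illustrate the hypothesis-generation side, the sketch below scans the scale parameter of an off-the-shelf graph-based segmenter (Felzenszwalb's, chosen here only as a stand-in for the segmentation kernel) and scores the segments of one hypothesis with a toy persistence measure; the paper's full cost-minimization selection is not reproduced here.

```python
import numpy as np
from skimage import data, segmentation

# Scan the scale parameter to produce a stack of segmentation hypotheses,
# from over- to under-segmented (downsampled image to keep the toy example fast).
image = data.astronaut()[::4, ::4]
scales = [50, 100, 200, 400, 800]
hypotheses = [segmentation.felzenszwalb(image, scale=s, sigma=0.8, min_size=20)
              for s in scales]

def persistence(seg_ref, hypotheses):
    """For each segment of seg_ref, the mean best Jaccard overlap with the
    segments of every other hypothesis (higher = more stable/persistent)."""
    scores = {}
    for label in np.unique(seg_ref):
        mask = seg_ref == label
        overlaps = []
        for seg in hypotheses:
            if seg is seg_ref:
                continue
            best = 0.0
            for other in np.unique(seg[mask]):
                inter = np.logical_and(mask, seg == other).sum()
                union = np.logical_or(mask, seg == other).sum()
                best = max(best, inter / union)
            overlaps.append(best)
        scores[int(label)] = float(np.mean(overlaps))
    return scores

print(persistence(hypotheses[2], hypotheses))
```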