A linear inference is a valid inequality of Boolean algebra in which each variable occurs at most once on each side. In this work we leverage recently developed graphical representations of linear formulae to build an implementation that is capable of more efficiently searching for switch-medial-independent inferences. We use it to find four `minimal' 8-variable independent inferences and also prove that no smaller ones exist; in contrast, a previous approach based directly on formulae reached computational limits already at 7 variables. Two of these new inferences derive some previously found independent linear inferences. The other two (which are dual) exhibit structure seemingly beyond the scope of previous approaches we are aware of; in particular, their existence contradicts a conjecture of Das and Strassburger. We were also able to identify 10 minimal 9-variable linear inferences independent of all the aforementioned inferences, comprising 5 dual pairs, and present applications of our implementation to recent `graph logics'.
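As a point of reference for the search problem described above, the following is a minimal Python sketch (not the graph-based implementation of the paper) that checks the defining property of a linear inference by brute force: a candidate inequality between two formulae is valid iff no truth assignment satisfies the left-hand side while falsifying the right-hand side. The formula encoding and the `switch' example are illustrative.

```python
from itertools import product

# A formula is a nested tuple: ('var', name), ('and', f, g), or ('or', f, g).
def evaluate(formula, assignment):
    """Evaluate a Boolean formula under a truth assignment (dict name -> bool)."""
    tag = formula[0]
    if tag == 'var':
        return assignment[formula[1]]
    left, right = evaluate(formula[1], assignment), evaluate(formula[2], assignment)
    return (left and right) if tag == 'and' else (left or right)

def variables(formula):
    if formula[0] == 'var':
        return {formula[1]}
    return variables(formula[1]) | variables(formula[2])

def is_valid_inference(lhs, rhs):
    """Check that lhs <= rhs holds under every assignment (brute force, 2^n checks)."""
    names = sorted(variables(lhs) | variables(rhs))
    for bits in product([False, True], repeat=len(names)):
        assignment = dict(zip(names, bits))
        if evaluate(lhs, assignment) and not evaluate(rhs, assignment):
            return False
    return True

# Example: 'switch', x /\ (y \/ z) -> (x /\ y) \/ z, is a linear inference
# (valid, and each variable occurs at most once on each side).
x, y, z = ('var', 'x'), ('var', 'y'), ('var', 'z')
print(is_valid_inference(('and', x, ('or', y, z)), ('or', ('and', x, y), z)))  # True
```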
The Independent Cutset problem asks whether there is a set of vertices in a given graph that is both independent and a cutset. This problem is $\textsf{NP}$-complete even when the input graph is planar and has maximum degree five. In this paper, we first present an $\mathcal{O}^*(1.4423^{n})$-time algorithm for the problem. We also show how to compute a minimum independent cutset (if one exists) in the same running time. Since the property of having an independent cutset is MSO$_1$-expressible, our main results concern structural parameterizations of the problem with parameters that are not bounded by a function of the clique-width of the input. We present $\textsf{FPT}$-time algorithms for the problem parameterized by: the dual of the maximum degree, the dual of the solution size, the size of a dominating set (where a dominating set is given as an additional input), the size of an odd cycle transversal, the distance to chordal graphs, and the distance to $P_5$-free graphs. We close by introducing the notion of $\alpha$-domination, which allows us to identify further fixed-parameter tractable and polynomial-time solvable cases.
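As a companion to the definition above, here is a minimal brute-force sketch, assuming the networkx library, that verifies whether a vertex set is an independent cutset and finds a minimum one by exhaustive search; it is purely illustrative and carries none of the guarantees of the $\mathcal{O}^*(1.4423^{n})$-time or FPT algorithms of the paper.

```python
import networkx as nx
from itertools import combinations

def is_independent_cutset(G, S):
    """Check that S is an independent set whose removal disconnects G."""
    S = set(S)
    if any(G.has_edge(u, v) for u, v in combinations(S, 2)):
        return False  # not independent
    H = G.copy()
    H.remove_nodes_from(S)
    return H.number_of_nodes() > 0 and not nx.is_connected(H)

def minimum_independent_cutset(G):
    """Brute force over all vertex subsets by increasing size (exponential; illustration only)."""
    nodes = list(G.nodes)
    for k in range(1, len(nodes)):
        for S in combinations(nodes, k):
            if is_independent_cutset(G, S):
                return set(S)
    return None

# A path 0-1-2-3-4: removing vertex 1 separates 0 from {2, 3, 4}.
print(minimum_independent_cutset(nx.path_graph(5)))  # {1}
```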
Tensor networks have been an important concept and technique in many research areas, such as quantum computation and machine learning. We study the exponential complexity of contracting tensor networks on two special graph structures: planar graphs and finite element graphs. We prove that any finite element graph has an edge separator of size $O(d\sqrt{\max\{\Delta,d\}N})$. Building on the $2^{O(\sqrt{\Delta N})}$-time algorithm \cite{fastcounting} for planar Boolean tensor network contraction, we develop a $2^{O(d\sqrt{\max\{\Delta,d\}N})}$-time algorithm for contracting a tensor network consisting of $N$ Boolean tensors whose underlying graph is a finite element graph with maximum degree $\Delta$ and no face with more than $d$ boundary edges in the planar skeleton. We use two methods to accelerate these exponential-time algorithms by converting high-dimensional tensors into low-dimensional ones. First, we construct an $O(k)$-size planar gadget for any Boolean symmetric tensor of dimension $k$, where the gadget consists only of Boolean tensors of dimension at most $5$. Second, we decompose an arbitrary tensor into a series of vectors (unary functions) according to its \emph{CP decomposition} \cite{tensor-rank}. We also prove a sub-exponential time lower bound for contracting tensor networks under the counting \emph{Exponential Time Hypothesis} (\#ETH).
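For concreteness, the toy sketch below shows what contracting a Boolean tensor network means, using numpy.einsum for a naive contraction; the constraint tensors are illustrative, and the separator-based sub-exponential algorithms of the paper are not implemented here.

```python
import numpy as np

# A tiny Boolean tensor network on three edges (variables) x, y, z.
# Each tensor is the 0/1 indicator of a constraint on its incident edges;
# the contraction value is the number of assignments satisfying all constraints.
OR   = np.array([[0, 1], [1, 1]], dtype=np.int64)   # OR(x, y)
XOR  = np.array([[0, 1], [1, 0]], dtype=np.int64)   # XOR(y, z)
NAND = np.array([[1, 1], [1, 0]], dtype=np.int64)   # NAND(z, x)

# Naive contraction: sum over x, y, z of OR[x, y] * XOR[y, z] * NAND[z, x].
value = np.einsum('xy,yz,zx->', OR, XOR, NAND)
print(value)  # 2 satisfying assignments: (x, y, z) = (0, 1, 0) and (1, 1, 0)
```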
We consider estimation and inference for a regression coefficient in panels with interactive fixed effects (i.e., with a factor structure). We show that previously developed estimators and confidence intervals (CIs) might be heavily biased and size-distorted when some of the factors are weak. We propose estimators with improved rates of convergence and bias-aware CIs that are uniformly valid regardless of whether the factors are strong or not. Our approach applies the theory of minimax linear estimation to form a debiased estimate using a nuclear norm bound on the error of an initial estimate of the interactive fixed effects. We use the obtained estimate to construct a bias-aware CI taking into account the remaining bias due to weak factors. In Monte Carlo experiments, we find a substantial improvement over conventional approaches when factors are weak, with little cost to estimation error when factors are strong.
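The following toy Monte Carlo sketch (illustrative only, not the paper's minimax-linear debiased estimator or its bias-aware CIs) simulates a panel with an interactive fixed effect $\lambda_i f_t$ and shows why a regression that ignores the factor structure is biased when the regressor is correlated with it; all parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T, beta = 200, 50, 1.0
strength = 1.0   # scale of the factor loadings; shrinking this toward 0 mimics a weak factor

lam = strength * rng.normal(size=(N, 1))   # loadings lambda_i
f = rng.normal(size=(1, T))                # factor f_t
interactive = lam @ f                      # interactive fixed effect lambda_i * f_t

# Regressor correlated with the interactive fixed effect (the source of the bias).
X = interactive + rng.normal(size=(N, T))
Y = beta * X + interactive + rng.normal(size=(N, T))

x = X.ravel()
beta_naive = (x @ Y.ravel()) / (x @ x)                    # pooled OLS ignoring the factors: biased
beta_oracle = (x @ (Y - interactive).ravel()) / (x @ x)   # infeasible: true interactive effect removed

print(beta_naive, beta_oracle)   # the naive estimate drifts well above 1.0; the oracle stays close
```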
Various neural network architectures rely on pooling operators to aggregate information coming from different sources. It is often implicitly assumed in such contexts that vectors encode epistemic states, i.e. that vectors capture the evidence that has been obtained about some properties of interest, and that pooling these vectors yields a vector that combines this evidence. We study, for a number of standard pooling operators, under what conditions they are compatible with this idea, which we call the epistemic pooling principle. While we find that all the considered pooling operators can satisfy the epistemic pooling principle, this only holds when embeddings are sufficiently high-dimensional and, for most pooling operators, when the embeddings satisfy particular constraints (e.g. having non-negative coordinates). We furthermore show that these constraints have important implications for how the embeddings can be used in practice. In particular, we find that when the epistemic pooling principle is satisfied, in most cases it is impossible to verify the satisfaction of propositional formulas using linear scoring functions, with two exceptions: (i) max-pooling with embeddings that are upper-bounded and (ii) Hadamard pooling with non-negative embeddings. This finding helps to clarify, among other things, why Graph Neural Networks sometimes underperform in reasoning tasks. Finally, we also study an extension of the epistemic pooling principle to weighted epistemic states, which are important in the context of non-monotonic reasoning, where max-pooling emerges as the most suitable operator.
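The following toy sketch illustrates the epistemic pooling principle for max-pooling under a hypothetical encoding in which each coordinate records the evidence obtained for one propositional atom and is upper-bounded by 1; the weights of the linear scoring function are likewise illustrative and do not reproduce the paper's formal construction.

```python
import numpy as np

# Toy epistemic states over three atoms (p, q, r): coordinate i is 1 if the
# source has established atom i, and 0 if it has no evidence about it.
source_a = np.array([1.0, 0.0, 0.0])   # knows p
source_b = np.array([0.0, 1.0, 0.0])   # knows q
source_c = np.array([0.0, 1.0, 0.0])   # also knows q

# Max-pooling combines the evidence: the pooled state knows whatever some source knows.
pooled = np.maximum.reduce([source_a, source_b, source_c])

# Linear scoring function for the formula p AND q: accept iff w . v >= threshold.
w, threshold = np.array([1.0, 1.0, 0.0]), 2.0
print(pooled, bool(w @ pooled >= threshold))   # [1. 1. 0.] True: p AND q is entailed
```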
In this paper, we devise a scheme for kernelizing, in sublinear space and polynomial time, various problems on planar graphs. The scheme exploits planarity to ensure that the resulting algorithms run in polynomial time and use O((sqrt(n) + k) log n) bits of space, where n is the number of vertices in the input instance and k is the intended solution size. As examples, we apply the scheme to Dominating Set and Vertex Cover. For Dominating Set, we also show that a well-known kernelization algorithm due to Alber et al. (JACM 2004) can be carried out in polynomial time and space O(k log n). Along the way, we devise restricted-memory procedures for computing region decompositions and approximating the aforementioned problems, which might be of independent interest.
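For comparison with the space-bounded scheme above, here is a minimal sketch of the classical Buss kernelization for Vertex Cover (high-degree and isolated-vertex rules), assuming the networkx library; it is not the sublinear-space planar procedure of the paper.

```python
import networkx as nx

def buss_kernel(G, k):
    """Classical Buss kernelization for Vertex Cover.
    Returns (kernel graph, remaining budget), or None if (G, k) is a no-instance."""
    G = G.copy()
    while True:
        high = [v for v in G if G.degree(v) > k]
        if not high:
            break
        # A vertex of degree > k must be in every vertex cover of size <= k.
        G.remove_nodes_from(high)
        k -= len(high)
        if k < 0:
            return None
    G.remove_nodes_from([v for v in list(G) if G.degree(v) == 0])  # isolated vertices are irrelevant
    if G.number_of_edges() > k * k:
        return None  # a yes-instance kernel has at most k^2 edges
    return G, k

# A star plus a triangle: the star centre has high degree and is forced into the cover.
G = nx.star_graph(10)
G.add_edges_from([(11, 12), (12, 13), (13, 11)])
kernel, budget = buss_kernel(G, 3)
print(sorted(kernel.nodes), budget)   # [11, 12, 13] 2
```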
We consider the problem of inference for projection parameters in linear regression with increasing dimensions. This problem has been studied under a variety of assumptions in the literature. The classical asymptotic normality result for the least squares estimator of the projection parameter only holds when the dimension $d$ of the covariates is of smaller order than $n^{1/2}$, where $n$ is the sample size. Traditional sandwich estimator-based Wald intervals are asymptotically valid in this regime. In this work, we propose a bias correction for the least squares estimator and prove the asymptotic normality of the resulting debiased estimator as long as $d = o(n^{2/3})$, with an explicit bound on the rate of convergence to normality. We leverage recent methods of statistical inference that do not require a variance estimator in order to perform asymptotically valid inference. We provide a discussion of how our techniques can be generalized to increase the allowable range of $d$ even further.
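The sketch below implements the classical baseline mentioned above: the least squares estimator of a projection parameter together with an HC0 sandwich covariance and a Wald interval for one coordinate. The simulated design is illustrative, and the paper's bias-corrected estimator is not shown.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 2000, 10
X = rng.normal(size=(n, d))
beta = np.zeros(d); beta[0] = 1.0
# Heteroskedastic errors; the projection parameter argmin_b E[(y - x'b)^2] is still well defined.
y = X @ beta + (1 + 0.5 * np.abs(X[:, 0])) * rng.normal(size=n)

# Least squares estimate of the projection parameter.
XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ (X.T @ y)

# HC0 sandwich covariance: (X'X)^{-1} X' diag(e^2) X (X'X)^{-1}.
resid = y - X @ beta_hat
meat = X.T @ (X * resid[:, None] ** 2)
cov = XtX_inv @ meat @ XtX_inv

# 95% Wald interval for the first coordinate (1.96 = standard normal 97.5% quantile).
se = np.sqrt(cov[0, 0])
print(beta_hat[0], (beta_hat[0] - 1.96 * se, beta_hat[0] + 1.96 * se))
```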
Predicting human gaze is important in Human-Computer Interaction (HCI). However, to practically serve HCI applications, gaze prediction models must be scalable, fast, and accurate in their spatial and temporal gaze predictions. Recent scanpath prediction models focus on goal-directed attention (search). Such models are limited in their application because they commonly rely on trained target detectors for all possible search objects and on the availability of human gaze data for training, neither of which scales. In response, we pose a new task called ZeroGaze, a variant of zero-shot learning in which gaze is predicted for never-before-searched objects, and we develop a novel model, Gazeformer, to solve the ZeroGaze problem. In contrast to existing methods that use object detector modules, Gazeformer encodes the target using a natural language model, thus leveraging semantic similarities in scanpath prediction. We use a transformer-based encoder-decoder architecture because transformers are particularly useful for generating contextual representations. Gazeformer surpasses other models by a large margin in the ZeroGaze setting. It also outperforms existing target-detection models on standard gaze prediction for both target-present and target-absent search tasks. In addition to its improved performance, Gazeformer is more than five times faster than the state-of-the-art target-present visual search model.
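A hedged architectural sketch, assuming PyTorch, of the kind of encoder-decoder described above: image patch features and a language-model embedding of the target name are encoded jointly, and learned fixation queries are decoded into (x, y, duration) triples. Module names, dimensions, and the fixed number of fixation slots are illustrative and do not reproduce the released Gazeformer code.

```python
import torch
import torch.nn as nn

class ScanpathTransformer(nn.Module):
    """Illustrative encoder-decoder: image patch features plus a language embedding of the
    target category in, a fixed-length sequence of fixations (x, y, duration) out."""
    def __init__(self, d_model=256, lang_dim=768, n_fixations=6):
        super().__init__()
        self.lang_proj = nn.Linear(lang_dim, d_model)          # project the language-model target embedding
        enc_layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        dec_layer = nn.TransformerDecoderLayer(d_model, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=3)
        self.decoder = nn.TransformerDecoder(dec_layer, num_layers=3)
        self.queries = nn.Parameter(torch.randn(n_fixations, d_model))  # one query per fixation slot
        self.head = nn.Linear(d_model, 3)                      # (x, y, duration) per fixation

    def forward(self, image_feats, target_embedding):
        # image_feats: (B, P, d_model) patch features; target_embedding: (B, lang_dim).
        target_tok = self.lang_proj(target_embedding).unsqueeze(1)       # (B, 1, d_model)
        memory = self.encoder(torch.cat([target_tok, image_feats], dim=1))
        queries = self.queries.unsqueeze(0).expand(image_feats.size(0), -1, -1)
        return self.head(self.decoder(queries, memory))                  # (B, n_fixations, 3)

model = ScanpathTransformer()
print(model(torch.randn(2, 49, 256), torch.randn(2, 768)).shape)  # torch.Size([2, 6, 3])
```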
Let $G$ be an undirected graph, and let $s,t$ be distinguished vertices of $G$. A minimal $s,t$-separator is an inclusion-wise minimal vertex set whose removal places $s$ and $t$ in distinct connected components. We present an algorithm that lists the minimal $s,t$-separators of a graph in non-decreasing order of cardinality with polynomial delay. This problem finds applications in various algorithms parameterized by treewidth, including query evaluation in relational databases, probabilistic inference, and many more. In the process, we prove several results that are of independent interest. We establish a new island of tractability for the intensively studied 2-Disjoint Connected Subgraphs problem, which is NP-complete even for restricted graph classes that include planar graphs, and we prove new characterizations of minimal $s,t$-separators. Ours is the first ranked enumeration algorithm for minimal separators whose delay is polynomial in the size of the input graph.
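The following small sketch, assuming the networkx library, verifies the definition of a minimal $s,t$-separator and extracts one such separator from a minimum vertex cut (a minimum cut is in particular inclusion-wise minimal); the paper's polynomial-delay ranked enumeration is not reproduced here.

```python
import networkx as nx

def separates(G, S, s, t):
    """True if removing the vertex set S places s and t in different connected components."""
    H = G.copy()
    H.remove_nodes_from(S)
    return not nx.has_path(H, s, t)

def is_minimal_separator(G, S, s, t):
    """S is a minimal s,t-separator if it separates s from t and no proper subset does."""
    return separates(G, S, s, t) and all(not separates(G, S - {v}, s, t) for v in S)

# On a 6-cycle, s = 0 and t = 3 are separated by any pair of vertices taken from opposite sides.
G = nx.cycle_graph(6)
S = set(nx.minimum_node_cut(G, 0, 3))        # a minimum cut is inclusion-wise minimal
print(S, is_minimal_separator(G, S, 0, 3))   # a 2-vertex separator such as {1, 4}, True
```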
A mainstream type of current self-supervised learning methods pursues a general-purpose representation that can be well transferred to downstream tasks, typically by optimizing on a given pretext task such as instance discrimination. In this work, we argue that existing pretext tasks inevitably introduce biases into the learned representation, which in turn leads to biased transfer performance on various downstream tasks. To cope with this issue, we propose Maximum Entropy Coding (MEC), a more principled objective that explicitly optimizes on the structure of the representation, so that the learned representation is less biased and thus generalizes better to unseen downstream tasks. Inspired by the principle of maximum entropy in information theory, we hypothesize that a generalizable representation should be the one that admits the maximum entropy among all plausible representations. To make the objective end-to-end trainable, we propose to leverage the minimal coding length in lossy data coding as a computationally tractable surrogate for the entropy, and further derive a scalable reformulation of the objective that allows fast computation. Extensive experiments demonstrate that MEC learns a more generalizable representation than previous methods based on specific pretext tasks. It achieves state-of-the-art performance consistently on various downstream tasks, including not only ImageNet linear probing, but also semi-supervised classification, object detection, instance segmentation, and object tracking. Interestingly, we show that existing batch-wise and feature-wise self-supervised objectives can be seen as low-order approximations of MEC. Code and pre-trained models are available at //github.com/xinliu20/MEC.
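As an illustration of the entropy surrogate described above, the sketch below computes the standard lossy coding length $L(Z) = \frac{m+d}{2}\log\det\big(I + \frac{d}{m\epsilon^2} Z Z^\top\big)$ of a batch of embeddings and shows that spread-out representations receive a larger coding length than collapsed ones; the normalization and the value of $\epsilon$ are illustrative and may differ from the released MEC code.

```python
import numpy as np

def coding_length(Z, eps=0.5):
    """Lossy coding length of the columns of Z (shape d x m), a tractable entropy surrogate:
    L(Z) = ((m + d) / 2) * logdet(I + d / (m * eps^2) * Z @ Z.T)."""
    d, m = Z.shape
    _, logdet = np.linalg.slogdet(np.eye(d) + (d / (m * eps ** 2)) * (Z @ Z.T))
    return 0.5 * (m + d) * logdet

rng = np.random.default_rng(0)
collapsed = np.outer(rng.normal(size=16), np.ones(256))   # all 256 embeddings identical (collapse)
spread = rng.normal(size=(16, 256))                       # embeddings fill the 16-dim space
collapsed /= np.linalg.norm(collapsed, axis=0, keepdims=True)   # unit-norm columns
spread /= np.linalg.norm(spread, axis=0, keepdims=True)

# A maximum-entropy objective prefers the spread representation: its coding length is larger.
print(coding_length(collapsed), coding_length(spread))
```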
As soon as abstract mathematical computations were adapted to computation on digital computers, the problem of efficient representation, manipulation, and communication of the numerical values in those computations arose. Strongly related to the problem of numerical representation is the problem of quantization: in what manner should a set of continuous real-valued numbers be distributed over a fixed discrete set of numbers to minimize the number of bits required and also to maximize the accuracy of the attendant computations? This perennial problem of quantization is particularly relevant whenever memory and/or computational resources are severely restricted, and it has come to the forefront in recent years due to the remarkable performance of Neural Network models in computer vision, natural language processing, and related areas. Moving from floating-point representations to low-precision fixed integer values represented in four bits or less holds the potential to reduce the memory footprint and latency by a factor of 16x; and, in fact, reductions of 4x to 8x are often realized in practice in these applications. Thus, it is not surprising that quantization has emerged recently as an important and very active sub-area of research in the efficient implementation of computations associated with Neural Networks. In this article, we survey approaches to the problem of quantizing the numerical values in deep Neural Network computations, covering the advantages/disadvantages of current methods. With this survey and its organization, we hope to have presented a useful snapshot of the current research in quantization for Neural Networks and to have given an intelligent organization to ease the evaluation of future research in this area.
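As a concrete instance of the low-precision mapping discussed above, the following minimal example performs uniform affine (asymmetric) quantization of a float32 tensor to a 4-bit integer grid via a scale and zero-point, then dequantizes to measure the rounding error; the bit-width and the tensor are illustrative.

```python
import numpy as np

def quantize(x, num_bits=4):
    """Uniform affine quantization: map real values to integers in [0, 2^b - 1]
    via a scale and zero-point, then dequantize to measure the rounding error."""
    qmin, qmax = 0, 2 ** num_bits - 1
    scale = (x.max() - x.min()) / (qmax - qmin)
    zero_point = int(round(qmin - x.min() / scale))
    q = np.clip(np.round(x / scale + zero_point), qmin, qmax).astype(np.int8)
    x_hat = scale * (q.astype(np.float32) - zero_point)   # dequantized approximation
    return q, x_hat

rng = np.random.default_rng(0)
w = rng.normal(scale=0.1, size=1000).astype(np.float32)   # a float32 weight tensor
q, w_hat = quantize(w, num_bits=4)
print(q.min(), q.max(), np.abs(w - w_hat).max())          # integers in [0, 15], small rounding error
```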