This paper introduces the logical zonotope, a new set representation for binary vectors. A logical zonotope is constructed by XOR-ing a binary vector with a combination of other binary vectors called generators; using only n generators, it can represent up to 2^n binary vectors. We show that logical operations over sets of binary vectors can be performed directly on the zonotopes' generators, which significantly reduces the computational complexity of various logical operations (e.g., XOR, NAND, AND, OR, and semi-tensor products). Just as traditional zonotopes are used in the formal verification of dynamical systems over real vector spaces, logical zonotopes can be used to efficiently analyze discrete dynamical systems defined over binary vector spaces. We illustrate the approach and its ability to reduce computational complexity in two use cases: (1) encryption-key discovery of a linear feedback shift register and (2) safety verification of a road traffic intersection protocol.
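To make the representation concrete, here is a minimal Python sketch of a set of binary vectors encoded as a centre XOR-ed with subsets of generators, together with an exact XOR of two such sets performed on the generators; the class and method names are illustrative, not the paper's API.

```python
# Minimal sketch of a logical zonotope: a centre binary vector XOR-ed with
# any subset of generator vectors (here encoded as integer bitmasks).
from itertools import combinations


class LogicalZonotope:
    def __init__(self, center, generators):
        self.center = center              # binary vector as an int bitmask
        self.generators = list(generators)

    def enumerate(self):
        """All represented vectors: centre XOR every subset of generators."""
        vectors = set()
        for r in range(len(self.generators) + 1):
            for subset in combinations(self.generators, r):
                v = self.center
                for g in subset:
                    v ^= g
                vectors.add(v)
        return vectors

    def xor(self, other):
        """Exact XOR of two zonotopes: XOR the centres, concatenate generators."""
        return LogicalZonotope(self.center ^ other.center,
                               self.generators + other.generators)


# Two generators can represent up to 2^2 = 4 binary vectors.
z1 = LogicalZonotope(0b0011, [0b0100, 0b1000])
z2 = LogicalZonotope(0b0001, [0b0010])
print(sorted(bin(v) for v in z1.enumerate()))
print(sorted(bin(v) for v in z1.xor(z2).enumerate()))
```

The point of the sketch is that the XOR acts on the generators in time linear in their number, rather than on the (possibly exponentially large) enumerated sets.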
It is important to choose the geographical distribution of public resources in a fair and equitable manner. However, quantifying the equity of such a distribution is complicated; important factors include distances to resource sites, the availability of transportation, and ease of travel. We use persistent homology, a tool from topological data analysis, to study the effective availability and coverage of polling sites. The information from persistent homology allows us to infer holes in the distribution of polling sites. We analyze and compare the coverage of polling sites in Los Angeles County and five cities (Atlanta, Chicago, Jacksonville, New York City, and Salt Lake City), and we conclude that computing persistent homology appears to be a reasonable approach to analyzing resource coverage.
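As a rough illustration of the kind of computation involved, the following sketch (assuming the ripser Python package and synthetic coordinates rather than the paper's polling-site data) computes a Vietoris-Rips persistence diagram for a point cloud and reads off long-lived 1-dimensional features, which correspond to holes in coverage.

```python
# Sketch: persistent homology of a synthetic set of "sites" with a hole.
import numpy as np
from ripser import ripser

rng = np.random.default_rng(0)
# Synthetic sites scattered on a ring, leaving an uncovered region inside.
theta = rng.uniform(0, 2 * np.pi, 200)
sites = np.column_stack([np.cos(theta), np.sin(theta)])
sites += rng.normal(scale=0.1, size=sites.shape)

# Vietoris-Rips persistence up to homological dimension 1 (loops).
dgms = ripser(sites, maxdim=1)['dgms']

# A long-lived H1 feature (large death - birth) suggests a coverage hole.
h1 = dgms[1]
persistence = h1[:, 1] - h1[:, 0]
print("most persistent 1-cycle lifetime:", persistence.max())
```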
Prophet inequalities are a central object of study in optimal stopping theory. A gambler is presented with values online, sampled from an instance of independent distributions, in an adversarial, random, or selected order, depending on the model. Upon observing each value, the gambler either accepts it as a reward or irrevocably rejects it and proceeds to observe the next value. The goal of the gambler, who cannot see the future, is to maximise the expected value of the reward while competing against the expectation of a prophet (the offline maximum). In other words, one seeks to maximise the gambler-to-prophet ratio of the expectations. The model in which the gambler first selects the arrival order and then observes the values is known as Order Selection. Recently, it has been shown that in this model a ratio of $0.7251$ can be attained for any instance. If the gambler instead chooses the arrival order uniformly at random, we obtain the Random Order model, whose worst-case ratio over all possible instances has been studied extensively for at least $40$ years. Still, it has not been known whether carefully choosing the order gives the gambler any advantage over simply taking it at random. We prove that, in the Random Order model, no algorithm can achieve a ratio larger than $0.7235$, thus showing for the first time that there is a real benefit in choosing the order.
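For readers unfamiliar with the quantity being bounded, the following simulation sketch estimates a gambler-to-prophet ratio in the Random Order model for one synthetic instance, using the classical single-threshold rule (accept the first value above half the expected maximum) rather than the algorithms discussed in the abstract.

```python
# Sketch: empirical gambler-to-prophet ratio for a single-threshold gambler
# under a uniformly random arrival order, on one synthetic instance.
import numpy as np

rng = np.random.default_rng(1)
n, trials = 20, 50_000

# One instance: n independent exponential distributions with distinct scales.
scales = 1.0 / np.arange(1, n + 1)


def sample_instance():
    return rng.exponential(scales)


# Estimate E[max] to set the classical threshold E[max] / 2.
est_max = np.mean([sample_instance().max() for _ in range(10_000)])
threshold = est_max / 2

gambler, prophet = 0.0, 0.0
for _ in range(trials):
    values = sample_instance()
    rng.shuffle(values)                       # random arrival order
    above = np.nonzero(values > threshold)[0]
    gambler += values[above[0]] if above.size else 0.0
    prophet += values.max()

print("empirical gambler-to-prophet ratio:", gambler / prophet)
```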
Critical points mark locations in the domain where the level-set topology of a scalar function undergoes fundamental changes and thus indicate potentially interesting features in the data. Established methods exist to locate and relate such points in a deterministic setting, but it is less well understood how the concept of critical points can be extended to the analysis of uncertain data. Most methods for this task aim at finding likely locations of critical points or at estimating the probability of their occurrence locally, but they do not indicate whether critical points at potentially different locations in different realizations of a stochastic process are manifestations of the same feature, which is required to characterize the spatial uncertainty of critical points. Previous work on relating critical points across different realizations reported challenges in interpreting the resulting spatial distribution of critical points but did not investigate the causes. In this work, we provide a mathematical formulation of the problem of finding critical points with spatial uncertainty and of computing their spatial distribution, which leads us to the notion of uncertain critical points. We analyze the theoretical properties of these structures and highlight connections to existing work on special classes of uncertain fields. We derive conditions under which well-interpretable results can be obtained and discuss the implications of these restrictions for the field of visualization. We demonstrate that the discussed limitations are not purely academic but also arise in real-world data.
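A toy sketch of the underlying phenomenon, under simple assumptions (a 1D field with additive Gaussian noise), is given below: each realization has its own local maxima, and collecting their locations yields a spatial distribution, while deciding which maxima across realizations are manifestations of the same feature is exactly the correspondence problem discussed above.

```python
# Sketch: spatial distribution of local maxima across noisy realizations
# of a simple 1D scalar field.
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(0, 1, 200)
mean_field = np.sin(2 * np.pi * x)            # deterministic mean, one maximum

locations = []
for _ in range(500):
    f = mean_field + rng.normal(scale=0.3, size=x.size)
    # interior local maxima of this realization
    idx = np.nonzero((f[1:-1] > f[:-2]) & (f[1:-1] > f[2:]))[0] + 1
    locations.extend(x[idx])

hist, edges = np.histogram(locations, bins=20, range=(0, 1))
print("critical-point counts per spatial bin:", hist)
```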
Our modern world relies on a growing number of interconnected and interacting devices, leading to a plethora of logs that establish audit trails for all kinds of events. At the same time, logs are becoming increasingly important for forensic investigations, so an adversary will aim to alter logs to avoid culpability, e.g., by compromising the devices that generate and store them. It is therefore essential to ensure that no one can tamper with any logs without being detected. However, existing approaches to establishing tamper evidence of logs do not scale and cannot protect the increasingly large number of devices found today, as they impose large storage or network overheads. Additionally, most schemes do not provide an efficient mechanism for proving that individual events have been logged, which is needed to establish accountability when different devices interact. This paper introduces a novel scheme for practical large-scale tamper-evident logging with the help of a trusted third party. To achieve this, we present a new binary hash tree construction designed around timestamps that achieves constant storage overhead at a configurable temporal resolution. Additionally, our design enables the efficient construction of shareable proofs that an event was indeed logged. Our evaluation shows that, using practical parameters, our scheme can localize any tampering of logs with sub-second resolution and a constant overhead of ~8KB per hour per device.
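The scheme itself is not reproduced here, but the following minimal Merkle-tree sketch illustrates the general principle of timestamp-bucketed tamper-evident logging that such constructions build on: events are hashed into per-time-bucket leaves and a single root is retained, so any later modification of a bucket changes the root.

```python
# Sketch: hash events into time-bucket leaves and combine them into a
# Merkle root that a trusted party can retain for later verification.
import hashlib


def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()


def leaf(bucket_events):
    """Chain-hash all events that fall into one time bucket into a leaf."""
    acc = h(b"leaf")
    for e in bucket_events:
        acc = h(acc + e.encode())
    return acc


def merkle_root(leaves):
    level = list(leaves)
    while len(level) > 1:
        if len(level) % 2:                 # duplicate the last node if odd
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]


# Events grouped into (say) hourly buckets; tampering with any bucket
# afterwards would no longer reproduce the published root.
buckets = [["boot", "login alice"], ["login bob"], ["logout alice"], []]
root = merkle_root([leaf(b) for b in buckets])
print(root.hex())
```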
Inferring the posterior distribution in SLAM is critical for evaluating the uncertainty in localization and mapping, as well as for supporting subsequent planning tasks that aim to reduce uncertainty for safe navigation. However, real-time full posterior inference techniques, such as Gaussian approximation and particle filters, either lack the expressiveness to represent non-Gaussian posteriors or suffer from performance degeneracy when estimating high-dimensional posteriors. Inspired by the complementary strengths of Gaussian approximation and particle filters (scalability and non-Gaussian estimation, respectively), we blend these two approaches to infer marginal posteriors in SLAM. Specifically, the Gaussian approximation provides robot pose distributions on which particle filters are conditioned to sample landmark marginals. In return, the maximum a posteriori point among these samples can be used to reset the linearization points in the nonlinear optimization solver of the Gaussian approximation, facilitating the pursuit of global optima. We demonstrate the scalability, generalizability, and accuracy of our algorithm for real-time full posterior inference on real-world range-only SLAM and object-based bearing-only SLAM datasets.
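The following toy sketch illustrates the blending idea under strong simplifications: pose samples drawn from a Gaussian approximation condition a particle set for a single landmark, which is then weighted by a range-only measurement. All names and noise parameters are illustrative, not the paper's implementation.

```python
# Sketch: weight landmark particles by a range measurement, conditioned on
# robot poses sampled from a Gaussian pose approximation.
import numpy as np

rng = np.random.default_rng(3)

# Gaussian approximation of the 2D robot pose (mean and covariance).
pose_mean = np.array([0.0, 0.0])
pose_cov = np.diag([0.05, 0.05])

# Particle representation of one landmark's marginal.
particles = rng.uniform(-5, 5, size=(2000, 2))

# One range measurement to the landmark, with known noise level.
measured_range, sigma = 3.0, 0.2

# Condition the particles on poses drawn from the Gaussian approximation.
poses = rng.multivariate_normal(pose_mean, pose_cov, size=particles.shape[0])
predicted = np.linalg.norm(particles - poses, axis=1)
weights = np.exp(-0.5 * ((predicted - measured_range) / sigma) ** 2)
weights /= weights.sum()

# A MAP-like point among the samples is what would be fed back to reset
# linearization points in the nonlinear solver, as the abstract describes.
map_point = particles[np.argmax(weights)]
print("highest-weight landmark sample:", map_point)
```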
The SZZ algorithm is used to connect bug-fixing commits to the earlier commits that introduced the bugs. This algorithm has many applications, and many variants have been devised. However, some types of commits cannot be traced by the SZZ algorithm; these are referred to as "ghost commits". The evaluation of how these ghost commits impact the SZZ algorithm remains limited. Moreover, these algorithms have been evaluated on datasets created by software engineering researchers from information in bug trackers and version control histories. Since October 2013, Linux kernel developers have, as a standard practice, labelled bug-fixing patches with the commit identifiers of the corresponding bug-inducing commit(s). As of v6.1-rc5, 76,046 pairs of bug-fixing patches and bug-inducing commits are available. This provides a unique opportunity to evaluate the SZZ algorithm on a large dataset that has been created and reviewed by project developers, entirely independently of the biases of software engineering researchers. In this paper, we apply six SZZ algorithms to these 76,046 pairs of bug-fixing patches and bug-inducing commits from the Linux kernel. Our findings reveal that the SZZ algorithms experience a more significant decline in recall on our dataset (13.8%) compared to the prior findings reported by Rosa et al., and the disparities between the individual SZZ algorithms diminish. Moreover, we find that 17.47% of bug-fixing commits are ghost commits. Finally, we propose Tracing-Commit SZZ (TC-SZZ), which traces all commits in the change history of the lines modified or deleted in bug-fixing commits. Applying TC-SZZ to all failure cases, excluding ghost commits, we found that TC-SZZ could identify 17.7% of them. Our further analysis found that 34.6% of bug-inducing commits were in the function history, 27.5% in the file history (but not in the function history), and...
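A rough sketch of the line-history tracing behind TC-SZZ is given below; it assumes a local git checkout and uses `git log -L` to list every commit in the change history of a line range prior to a bug-fixing commit. The function and the example arguments are illustrative, not the paper's tool.

```python
# Sketch: list the commits in the change history of a line range before a
# given fix commit, using git's built-in line-range log.
import re
import subprocess


def commits_touching_lines(repo, path, start, end, fix_commit):
    """Commits in the change history of path:start-end prior to fix_commit."""
    out = subprocess.run(
        ["git", "-C", repo, "log",
         f"-L{start},{end}:{path}", f"{fix_commit}^"],
        capture_output=True, text=True, check=True).stdout
    # `git log -L` prints "commit <sha>" headers followed by the range's diff.
    return re.findall(r"^commit ([0-9a-f]{40})$", out, flags=re.MULTILINE)


# Hypothetical example (placeholder repo path, file, and commit id):
# print(commits_touching_lines("/path/to/linux", "kernel/fork.c",
#                              100, 120, "HEAD"))
```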
In pace with developments in the research field of artificial intelligence, knowledge graphs (KGs) have attracted a surge of interest from both academia and industry. As a representation of semantic relations between entities, KGs have proven to be particularly relevant for natural language processing (NLP), experiencing a rapid spread and wide adoption within recent years. Given the increasing amount of research work in this area, several KG-related approaches have been surveyed in the NLP research community. However, a comprehensive study that categorizes established topics and reviews the maturity of individual research streams remains absent to this day. Contributing to closing this gap, we systematically analyzed 507 papers from the literature on KGs in NLP. Our survey encompasses a multifaceted review of tasks, research types, and contributions. As a result, we present a structured overview of the research landscape, provide a taxonomy of tasks, summarize our findings, and highlight directions for future work.
Visual recognition is currently one of the most important and active research areas in computer vision, pattern recognition, and even the general field of artificial intelligence. It has great fundamental importance and strong industrial demand. Deep neural networks (DNNs) have greatly boosted performance on many concrete tasks, with the help of large amounts of training data and new, powerful computational resources. Although recognition accuracy is usually the first concern for new methods, efficiency is actually rather important, and sometimes critical, for both academic research and industrial applications. Moreover, insightful views on the opportunities and challenges of efficiency are also needed by the entire community. While general surveys on the efficiency of DNNs have been conducted from various perspectives, as far as we are aware, scarcely any of them have focused systematically on visual recognition, so it is unclear which advances are applicable to it and what else should be considered. In this paper, we review recent advances and offer suggestions on possible new directions towards improving the efficiency of DNN-based visual recognition approaches. We investigate not only the model but also the data point of view (which is not the case in existing surveys), and we focus on the three most studied data types (images, videos, and points). This paper attempts to provide a systematic summary via a comprehensive survey that can serve as a valuable reference and inspire both researchers and practitioners who work on visual recognition problems.
To deploy a deep learning model in production, it needs to be both accurate and compact in order to meet latency and memory constraints. This usually results in a network that is deep (to ensure performance) and yet thin (to improve computational efficiency). In this paper, we propose an efficient method to train a deep thin network with a theoretical guarantee. Our method is motivated by model compression and consists of three stages. In the first stage, we sufficiently widen the deep thin network and train it until convergence. In the second stage, we use this well-trained deep wide network to warm up (or initialize) the original deep thin network. This is achieved by letting the thin network imitate the intermediate outputs of the wide network from layer to layer. In the last stage, we further fine-tune this well-initialized deep thin network. The theoretical guarantee is established using mean-field analysis, which shows the advantage of layerwise imitation over training deep thin networks from scratch by backpropagation. We also conduct large-scale empirical experiments to validate our approach. Trained with our method, ResNet50 can outperform ResNet101, and BERT_BASE can be comparable to BERT_LARGE, where the latter models are trained via the standard procedures reported in the literature.
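A minimal PyTorch-style sketch of the second (layerwise imitation) stage is shown below, assuming toy fully connected layers: the wide network's intermediate outputs supervise the thin network layer by layer through small projection heads. The shapes, layer counts, and the projection itself are illustrative assumptions, not the paper's exact setup.

```python
# Sketch: warm up a thin network by matching a (fixed) wide network's
# intermediate outputs layer by layer via learned projections.
import torch
import torch.nn as nn

depth, wide, thin, d_in = 4, 256, 64, 32

wide_layers = nn.ModuleList([nn.Linear(d_in if i == 0 else wide, wide)
                             for i in range(depth)])
thin_layers = nn.ModuleList([nn.Linear(d_in if i == 0 else thin, thin)
                             for i in range(depth)])
# Linear maps that align thin features with the frozen wide features.
projections = nn.ModuleList([nn.Linear(thin, wide) for _ in range(depth)])

opt = torch.optim.Adam(list(thin_layers.parameters())
                       + list(projections.parameters()), lr=1e-3)

for step in range(100):                        # imitation / warm-up stage
    x = torch.randn(128, d_in)
    h_wide, h_thin, loss = x, x, 0.0
    for lw, lt, proj in zip(wide_layers, thin_layers, projections):
        h_wide = torch.relu(lw(h_wide)).detach()   # wide net stays fixed
        h_thin = torch.relu(lt(h_thin))
        loss = loss + nn.functional.mse_loss(proj(h_thin), h_wide)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

After this warm-up, the thin network would be fine-tuned on the original task, as described in the third stage above.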
Neural machine translation (NMT) is a deep learning based approach to machine translation that yields state-of-the-art translation performance when large-scale parallel corpora are available. Although high-quality, domain-specific translation is crucial in the real world, domain-specific corpora are usually scarce or nonexistent, and vanilla NMT therefore performs poorly in such scenarios. Domain adaptation, which leverages both out-of-domain parallel corpora and monolingual corpora for in-domain translation, is thus very important for domain-specific translation. In this paper, we give a comprehensive survey of the state-of-the-art domain adaptation techniques for NMT.