Image recognition tasks typically use deep learning and require enormous processing power, thus relying on hardware accelerators like GPUs and TPUs for fast, timely processing. Failure in real-time image recognition tasks can occur due to sub-optimal mapping on hardware accelerators during model deployment, which may lead to timing uncertainty and erroneous behavior. Mapping on hardware accelerators is done using multiple software components like deep learning frameworks, compilers, and device libraries, which we refer to as the computational environment. Owing to the increased use of image recognition tasks in safety-critical applications like autonomous driving and medical imaging, it is imperative to assess their robustness to changes in the computational environment, as the impact of parameters like deep learning frameworks, compiler optimizations, and hardware devices on model performance and correctness is not yet well understood. In this paper, we present a differential testing framework, DeltaNN, that allows us to assess the impact of different computational environment parameters on the performance of image recognition models during deployment, post training. DeltaNN generates different implementations of a given image recognition model for variations in environment parameters, namely deep learning frameworks, compiler optimizations, and hardware devices, and analyzes the resulting differences in model performance. Using DeltaNN, we conduct an empirical robustness study of three popular image recognition models using the ImageNet dataset. We report the impact in terms of misclassifications and inference time differences across different settings. In total, we observed output label differences of up to 72% across deep learning frameworks, and unexpected inference time degradation of up to 81% when applying compiler optimizations.
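As a rough illustration of the kind of differential comparison such a framework performs (the backend callables below are hypothetical placeholders, not DeltaNN's API), one can count top-1 label disagreements between two deployments of the same trained model:

```python
import numpy as np

def label_difference_rate(backend_a, backend_b, images):
    """Fraction of inputs whose top-1 label differs between two deployments.

    backend_a / backend_b are hypothetical callables mapping a batch of
    images to class logits; in a DeltaNN-style setting they would be built
    from the same trained model via different frameworks, compiler
    optimization levels, or target devices.
    """
    preds_a = np.argmax(backend_a(images), axis=1)
    preds_b = np.argmax(backend_b(images), axis=1)
    return float(np.mean(preds_a != preds_b))
```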
Quantum programming languages aim to reduce the burden of manipulating hardware-level logic gates when implementing a quantum algorithm. A hurdle to this goal is the difficulty of expressing control flow, such as branching and iteration, that depends on the value of data in quantum superposition. To implement algorithms for factorization, search, and simulation that contain control flow, quantum languages often require the use of bit-level logic gates as opposed to the high-level constructs provided by classical languages. The reason for this gap is that whereas a classical computer supports imperative abstractions for control flow via a program counter that can depend on data and functional abstractions via terms in the $\lambda$-calculus, the typical architecture of a quantum computer does not provide a program counter that can depend on data in superposition, nor a physical representation of $\lambda$-terms in superposition. In principle, a quantum architecture supporting such abstractions would simplify the implementation of control flow in quantum programs. However, in this work, we identify a fundamental obstacle to control flow in quantum programming, which is that a quantum computer cannot correctly support the conventional conditional jump instruction in superposition, nor the $\beta$-reduction of $\lambda$-terms in superposition. We formally prove that programming abstractions with non-injective state transition semantics, such as the above, produce incorrect results in superposition. As a way forward, we present the necessary and sufficient conditions for control flow in superposition to be correctly realizable in a program. We introduce the quantum control machine, an instruction set architecture that satisfies these conditions, and show how it enables the use of control flow to implement algorithms such as phase estimation, quantum walk, and physical simulation.
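A minimal illustration of the obstacle, in our own notation rather than the paper's formalism: when two control paths converge at the same program point with identical data, the program's transition function $T$ sends distinct states to the same successor, e.g.
\[
T(\mathit{pc}_1, x) \;=\; (\ell, x) \;=\; T(\mathit{pc}_2, x), \qquad \mathit{pc}_1 \neq \mathit{pc}_2,
\]
where $\mathit{pc}_1$ is the address of a jump targeting $\ell$ and $\mathit{pc}_2 = \ell - 1$ falls through (assuming the fall-through instruction leaves $x$ unchanged). Such a $T$ is not injective, so no unitary $U$ can satisfy $U\,|\mathit{pc}, x\rangle = |T(\mathit{pc}, x)\rangle$ for all states: a unitary must map the orthogonal states $|\mathit{pc}_1, x\rangle$ and $|\mathit{pc}_2, x\rangle$ to orthogonal states, not to the single state $|\ell, x\rangle$, so any linear extension of $T$ acts incorrectly on superpositions of the two.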
Anomaly detection is the task of identifying abnormal samples in large unlabeled datasets. While progress in deep learning and the advent of foundation models have produced powerful unsupervised anomaly detection methods, their deployment in practice is often hindered by the lack of labeled data -- without it, the detection accuracy of an anomaly detector cannot be evaluated reliably. In this work, we propose a general-purpose framework for evaluating image-based anomaly detectors with synthetically generated validation data. Our method assumes access to a small support set of normal images, which are processed with a pre-trained diffusion model (our method requires no training or fine-tuning) to produce synthetic anomalies. When mixed with normal samples from the support set, the synthetic anomalies create detection tasks that compose a validation framework for anomaly detection evaluation and model selection. In an extensive empirical study, ranging from natural images to industrial applications, we find that our synthetic validation framework selects the same models and hyper-parameters as selection with a ground-truth validation set. In addition, we find that prompts selected by our method for CLIP-based anomaly detection outperform all other prompt selection strategies, and lead to the overall best detection accuracy, even on the challenging MVTec-AD dataset.
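As a hedged sketch of the ingredients involved (not the paper's exact procedure; the checkpoint, prompt, and strength value below are illustrative assumptions), an off-the-shelf image-to-image diffusion pipeline can perturb a normal support image into a synthetic anomaly without any training or fine-tuning:

```python
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

# The checkpoint and prompt are illustrative choices, not the paper's.
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

normal = Image.open("support/normal_000.png").convert("RGB").resize((512, 512))
synthetic_anomaly = pipe(
    prompt="the same object, but damaged and defective",
    image=normal,
    strength=0.4,        # how far the result is pushed away from the normal image
    guidance_scale=7.5,
).images[0]
# Mixing such images with held-out normal support images yields a labeled
# validation task on which candidate anomaly detectors can be ranked.
```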
Optimal recovery is a mathematical framework for learning functions from observational data by adopting a worst-case perspective tied to model assumptions on the functions to be learned. Working in a finite-dimensional Hilbert space, we consider model assumptions based on approximability and observation inaccuracies modeled as additive errors bounded in $\ell_2$. We focus on the local recovery problem, which amounts to the determination of Chebyshev centers. Earlier work by Beck and Eldar presented a semidefinite recipe for the determination of Chebyshev centers. The result was valid in the complex setting but not necessarily in the real setting, since it relied on the S-procedure with two quadratic constraints, which offers a tight relaxation only in the complex setting. Our contribution consists in proving that this semidefinite recipe is exact in the real setting, too, at least in the particular instance where the quadratic constraints involve orthogonal projectors. Our argument exploits a previous work of ours, where exact Chebyshev centers were obtained in a different way. We conclude by stating some open questions and by commenting on other recent results in optimal recovery.
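In generic notation (ours, for illustration): given observations $y = \Lambda f + e$ with $\|e\|_2 \le \eta$ and the approximability model set $\mathcal{K} = \{ f : \operatorname{dist}(f, \mathcal{V}) \le \epsilon \}$ for a subspace $\mathcal{V}$, the local recovery problem asks for a Chebyshev center of the set of model- and data-consistent elements,
\[
z^\star \in \underset{z}{\operatorname{argmin}} \;\; \max \big\{ \, \|f - z\| \; : \; f \in \mathcal{K}, \ \|\Lambda f - y\|_2 \le \eta \, \big\},
\]
i.e., the center of the smallest ball containing every $f$ that could have produced the observed data.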
In the context of computer models, calibration is the process of estimating unknown simulator parameters from observational data. Calibration is variously referred to as model fitting, parameter estimation/inference, an inverse problem, and model tuning. The need for calibration arises in most areas of science and engineering, and calibration has been used to estimate hard-to-measure parameters in climate science, cardiology, drug therapy response, hydrology, and many other disciplines. Although the statistical method used for calibration can vary substantially, the underlying approach is essentially the same and can be considered abstractly. In this survey, we review the decisions that need to be taken when calibrating a model, and discuss a range of computational methods that can be used to compute Bayesian posterior distributions.
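In its simplest generic form (our notation, not a specific method from the survey), calibration treats the field observations $y$ as noisy simulator outputs at the unknown parameter $\theta$, say $y = f(\theta) + \varepsilon$ with $\varepsilon \sim \mathcal{N}(0, \sigma^2 I)$, and the Bayesian approach combines this likelihood with a prior,
\[
\pi(\theta \mid y) \;\propto\; \pi(y \mid \theta)\, \pi(\theta),
\]
with most of the computational difficulty coming from the cost of evaluating the simulator $f$ and, in richer formulations, from an additional model-discrepancy term.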
Index structures often materialize one or multiple levels of explicit indirections (aka pointers) to allow for a quick traversal to the data of interest. Unfortunately, dereferencing a pointer to go from one level to the next is costly since, in addition to following the address, it involves two address translations from virtual memory to physical memory under the hood. In the worst case, such an address translation is resolved by an index access itself, namely by a lookup into the page table, a central hardware-accelerated index structure of the OS. However, if the page table is constantly queried anyway, this raises the question of whether we can actively incorporate it into our database indexes and make it work for us. Specifically, instead of materializing indirections in the form of pointers, we propose to express these indirections directly in the page table wherever possible. By introducing such shortcuts, we (a) effectively reduce the height of the traversal during lookups and (b) exploit the hardware acceleration of lookups in the page table. In this work, we analyze the strengths and considerations of this approach and showcase its effectiveness in the case of extendible hashing, a real-world indexing scheme.
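The following Linux-only sketch (not the paper's implementation, which operates inside a native index structure) merely demonstrates the underlying mechanism: the page table itself can carry the indirection, so that two virtual addresses resolve to the same physical page without the program following any pointer:

```python
import mmap
import os

# Create one page of anonymous memory backed by an in-memory file (Linux only).
fd = os.memfd_create("bucket")
os.ftruncate(fd, 4096)

# Map the same physical page at two different virtual addresses.
# The "indirection" now lives in the page table and is resolved by the MMU.
view_a = mmap.mmap(fd, 4096)
view_b = mmap.mmap(fd, 4096)

view_a[:5] = b"hello"
assert view_b[:5] == b"hello"   # both views alias the same physical page
```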
The escalating complexity of software systems and accelerating development cycles pose a significant challenge in managing code errors and implementing business logic. Traditional techniques, while a cornerstone of software quality assurance, exhibit limitations in handling intricate business logic and extensive codebases. To address these challenges, we introduce the Intelligent Code Analysis Agent (ICAA), a novel concept combining AI models, engineering process designs, and traditional non-AI components. The ICAA employs the capabilities of large language models (LLMs) such as GPT-3 or GPT-4 to automatically detect and diagnose code errors and business logic inconsistencies. In our exploration of this concept, we observed a substantial improvement in bug detection accuracy, reducing the false-positive rate to 66\% from the baseline's 85\%, and a promising recall rate of 60.8\%. However, the token consumption cost associated with LLMs, particularly the average cost for analyzing each line of code, remains a significant consideration for widespread adoption. Despite this challenge, our findings suggest that the ICAA holds considerable potential to revolutionize software quality assurance, significantly enhancing the efficiency and accuracy of bug detection in the software development process. We hope this pioneering work will inspire further research and innovation in this field, focusing on refining the ICAA concept and exploring ways to mitigate the associated costs.
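A minimal sketch of the kind of LLM-backed check such an agent could perform; the `call_llm` helper and the prompt wording are hypothetical placeholders, not the ICAA's actual components or prompts:

```python
import json

def call_llm(prompt: str) -> str:
    """Hypothetical wrapper around an LLM API (e.g., GPT-3/GPT-4); not ICAA code."""
    raise NotImplementedError

def review_snippet(code: str, business_rule: str) -> dict:
    # Ask the model to flag bugs or violations of the stated business rule.
    prompt = (
        "You are a code reviewer. Given the business rule and the code below, "
        "report any bug or rule violation as JSON with keys "
        "'has_issue' (bool) and 'explanation' (string).\n\n"
        f"Business rule: {business_rule}\n\nCode:\n{code}\n"
    )
    return json.loads(call_llm(prompt))
```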
The rapid development of deep learning has brought great progress to segmentation, one of the fundamental tasks of computer vision. However, current segmentation algorithms mostly rely on the availability of pixel-level annotations, which are often expensive, tedious, and laborious to obtain. To alleviate this burden, recent years have witnessed increasing attention to building label-efficient, deep-learning-based segmentation algorithms. This paper offers a comprehensive review of label-efficient segmentation methods. To this end, we first develop a taxonomy to organize these methods according to the supervision provided by different types of weak labels (including no supervision, coarse supervision, incomplete supervision and noisy supervision) and supplemented by the types of segmentation problems (including semantic segmentation, instance segmentation and panoptic segmentation). Next, we summarize the existing label-efficient segmentation methods from a unified perspective that discusses an important question: how to bridge the gap between weak supervision and dense prediction -- current methods are mostly based on heuristic priors, such as cross-pixel similarity, cross-label constraint, cross-view consistency, and cross-image relation. Finally, we share our opinions about future research directions for label-efficient deep segmentation.
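As one concrete instance of such a heuristic prior, a cross-view consistency term (sketched below in our own notation, assuming photometric-only augmentations so the two views stay pixel-aligned) penalizes disagreement between the predictions for two augmented views of the same unlabeled image:

```python
import torch
import torch.nn.functional as F

def cross_view_consistency_loss(model, x_weak, x_strong):
    """Consistency between segmentations of two augmented views of one image.

    Assumes photometric augmentations only, so the (N, C, H, W) prediction maps
    are spatially aligned; geometric augmentations would need to be inverted.
    """
    with torch.no_grad():                        # pseudo-target from the weak view
        target = model(x_weak).softmax(dim=1)
    log_pred = model(x_strong).log_softmax(dim=1)
    return F.kl_div(log_pred, target, reduction="batchmean")
```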
Since hardware resources are limited, the objective of training deep learning models is typically to maximize accuracy subject to the time and memory constraints of training and inference. We study the impact of model size in this setting, focusing on Transformer models for NLP tasks that are limited by compute: self-supervised pretraining and high-resource machine translation. We first show that even though smaller Transformer models execute faster per iteration, wider and deeper models converge in significantly fewer steps. Moreover, this acceleration in convergence typically outpaces the additional computational overhead of using larger models. Therefore, the most compute-efficient training strategy is to counterintuitively train extremely large models but stop after a small number of iterations. This leads to an apparent trade-off between the training efficiency of large Transformer models and the inference efficiency of small Transformer models. However, we show that large models are more robust to compression techniques such as quantization and pruning than small models. Consequently, one can get the best of both worlds: heavily compressed, large models achieve higher accuracy than lightly compressed, small models.
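For concreteness, the two compression steps mentioned above can be sketched with standard PyTorch utilities on a toy model (the layer sizes and sparsity level are arbitrary stand-ins, not the paper's configuration):

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# Toy stand-in for a "large" model; the paper's Transformers are far bigger.
model = nn.Sequential(nn.Linear(1024, 4096), nn.ReLU(), nn.Linear(4096, 1024))

# Magnitude pruning: zero out 60% of the smallest-magnitude weights per layer.
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.6)
        prune.remove(module, "weight")          # bake the sparsity into the weights

# Post-training dynamic quantization of the remaining weights to int8.
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)
```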
This work considers the question of how convenient access to copious data impacts our ability to learn causal effects and relations. In what ways is learning causality in the era of big data different from -- or the same as -- the traditional one? To answer this question, this survey provides a comprehensive and structured review of both traditional and frontier methods for learning causality and relations, along with the connections between causality and machine learning. This work points out on a case-by-case basis how big data facilitates, complicates, or motivates each approach.
We introduce a multi-task setup of identifying and classifying entities, relations, and coreference clusters in scientific articles. We create SciERC, a dataset that includes annotations for all three tasks, and develop a unified framework called Scientific Information Extractor (SciIE) with shared span representations. The multi-task setup reduces cascading errors between tasks and leverages cross-sentence relations through coreference links. Experiments show that our multi-task model outperforms previous models in scientific information extraction without using any domain-specific features. We further show that the framework supports construction of a scientific knowledge graph, which we use to analyze information in scientific literature.
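A minimal sketch of a shared span representation in the spirit of such span-based models (our simplification; the actual SciIE representation also includes, e.g., an attention-weighted head-word component):

```python
import torch
import torch.nn as nn

def span_representations(hidden, starts, ends, width_embedding):
    """hidden: (seq_len, d) contextualized token states shared by all tasks.

    Each span is encoded as [h_start ; h_end ; width embedding] and then fed
    to task-specific scorers for entities, relations, and coreference.
    """
    widths = ends - starts
    return torch.cat(
        [hidden[starts], hidden[ends], width_embedding(widths)], dim=-1
    )

# Example usage with arbitrary sizes (illustrative only).
hidden = torch.randn(50, 256)
width_embedding = nn.Embedding(30, 32)
starts = torch.tensor([3, 10])
ends = torch.tensor([5, 14])
spans = span_representations(hidden, starts, ends, width_embedding)   # (2, 544)
```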