Misaligned incentives in secure software development have long been the focus of research in the economics of security. Product liability, a powerful legal framework in other industries, was largely ineffective for software products until recently. However, the rapid regulatory responses of both the United States and the European Union to recent global cyberattacks, together with the (relative) success of the General Data Protection Regulation in defining both a duty and a standard of care for software vendors, may finally enable regulators to use liability to re-align incentives for the benefit of the digital society. Specifically, the recently proposed United States National Cybersecurity Strategy shifts responsibility for cyber incidents back to software vendors. In doing so, the strategy also puts forward the concept of a liability waiver: if a software company voluntarily undergoes and passes an IT security audit, its liability is waived. In this paper, we analyze this audit scenario from the perspective of the software vendor. We propose a mechanism in which the vendor undergoes a repeated auditing process and, at each stage, decides whether to quit early or to stay and make an additional security investment. We show that the optimal strategy for an opt-in vendor is never to quit, and to make cumulative investments in either a "one-and-done" or an "incremental" manner. We relate the audit mechanism to a liability waiver insurance policy and reveal its effect on reshaping the vendor's risk perception. We also discuss the influence of audit quality on the vendor's incentives and pinpoint that a desirable audit rule should be highly accurate but not overly strict.
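To make the stage-wise quit-or-invest decision above concrete, the following is a minimal Python sketch of a stylized version of the mechanism; the number of stages, the per-stage costs c, and the expected liability L are placeholder assumptions of ours, not the paper's calibrated model.

    # Stylized sketch (our assumptions): the vendor faces K audit stages; at each
    # stage it either quits (bearing expected liability L) or pays an additional
    # investment c[t] to continue; passing all K stages waives liability.
    # Backward induction yields the cost-minimizing stay/quit decision per stage.
    def optimal_plan(c, L):
        K = len(c)
        cost_to_go = [0.0] * (K + 1)   # after the final stage, liability is waived
        decision = [None] * K
        for t in reversed(range(K)):
            stay = c[t] + cost_to_go[t + 1]
            decision[t] = "stay" if stay <= L else "quit"
            cost_to_go[t] = min(stay, L)
        return decision, cost_to_go[0]

    plan, cost = optimal_plan(c=[1.0, 1.5, 2.0], L=10.0)
    print(plan, cost)   # with cheap stages and high liability, staying at every stage dominates

Under these toy numbers, the never-quit behavior stated in the abstract falls directly out of the backward induction.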
Anecdotal evidence of cannabis use by professional programmers abounds. Recent studies have found that some professionals regularly use cannabis while programming, even for work-related tasks. However, accounts of the impacts of cannabis on programming vary widely and are often contradictory. For example, some programmers claim that it impairs their ability to generate correct solutions, while others claim it enhances creativity and focus. There remains a need for an empirical understanding of the true impacts of cannabis on programming. This paper presents the first controlled observational study of the effects of cannabis on programming ability. Based on a within-subjects design with over 70 participants, we find that, at ecologically valid dosages, cannabis significantly impairs programming performance. Programs implemented while high contain more bugs and take longer to write (p < 0.05), a small to medium effect (0.22 <= d <= 0.44). We also found no evidence that high programmers generate more divergent solutions. However, programmers can accurately assess differences in their programming performance (r = 0.59), even when under the influence of cannabis. We hope that this research will facilitate evidence-based policies and help developers make informed decisions regarding cannabis use while programming.
This work focuses on advancing security research in the hardware design space by formally defining the realistic problem of Hardware Trojan (HT) detection. The goal is to model HT detection more closely to real-world conditions, i.e., by describing the problem as "The Seeker's Dilemma" (an extension of Hide&Seek on a graph), in which a detecting agent does not know whether circuits are infected by HTs or not. Using this theoretical problem formulation, we create a benchmark consisting of a mixture of HT-free and HT-infected restructured circuits that preserve their original functionalities. The restructured circuits are randomly infected by HTs, so the defender is uncertain whether any given circuit is infected. We believe that our dataset will help the community better judge the detection quality of different methods by comparing their success rates in circuit classification. We use our benchmark to evaluate three state-of-the-art HT detection tools and report baseline results for this approach. We use Principal Component Analysis to assess the strength of our benchmark, and observe that some restructured HT-infected circuits are mapped closely to HT-free circuits, leading to significant label misclassification by detectors.
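As an illustration of the PCA-based check mentioned above, here is a small sketch that assumes circuits have already been turned into numeric feature vectors; the random features below are placeholders of ours, not the benchmark's actual feature extraction.

    # Illustrative only: project circuit feature vectors with PCA and inspect how
    # HT-free (label 0) and HT-infected (label 1) samples overlap in 2D.
    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(0)
    features = rng.normal(size=(200, 32))   # placeholder circuit feature vectors
    labels = rng.integers(0, 2, size=200)   # 0 = HT-free, 1 = HT-infected

    proj = PCA(n_components=2).fit_transform(features)
    for cls in (0, 1):
        print(cls, proj[labels == cls].mean(axis=0))
    # Heavy overlap between the two classes in the projection indicates infected
    # circuits that "hide" among clean ones, i.e., a harder task for detectors.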
Extracting information efficiently from quantum systems is a major component of quantum information processing tasks. Randomized measurements, or classical shadows, enable predicting many properties of arbitrary quantum states using few measurements. While random single-qubit measurements are experimentally friendly and suitable for learning low-weight Pauli observables, they perform poorly for nonlocal observables. Prepending a shallow random quantum circuit before measurement preserves this experimental friendliness while also achieving favorable sample complexity for observables beyond low-weight Paulis, including high-weight Paulis and global low-rank properties such as fidelity. However, in realistic scenarios, quantum noise accumulated with each additional layer of the shallow circuit biases the results. To address this challenge, we propose the robust shallow shadows protocol. Our protocol uses Bayesian inference to learn the experimentally relevant noise model and mitigate it in postprocessing. This mitigation introduces a bias-variance trade-off: correcting for noise-induced bias comes at the cost of a larger estimator variance. Despite this increased variance, as we demonstrate on a superconducting quantum processor, our protocol correctly recovers state properties such as expectation values, fidelity, and entanglement entropy, while maintaining a lower sample complexity than the random single-qubit measurement scheme. We also theoretically analyze the effects of noise on sample complexity and show how the optimal choice of the shallow-shadow depth varies with noise strength. This combined theoretical and experimental analysis positions the robust shallow shadow protocol as a scalable, robust, and sample-efficient protocol for characterizing quantum states on current quantum computing platforms.
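For reference, the depth-0 baseline the abstract compares against (classical shadows from random single-qubit Pauli-basis measurements) can be sketched as follows; this is the textbook snapshot formula, not the robust shallow-shadow estimator itself, whose learned noise-model inversion is not shown here.

    # One classical-shadow snapshot from random single-qubit measurements:
    # rho_hat = tensor_i ( 3 * U_i^dagger |b_i><b_i| U_i - I ).
    import numpy as np

    I2 = np.eye(2)
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # measure X: apply H, then measure Z
    S = np.diag([1, 1j])
    U_BASIS = {"X": H, "Y": H @ S.conj().T, "Z": I2}

    def snapshot(bases, bits):
        """Snapshot from per-qubit bases ('X'/'Y'/'Z') and measured bits (0/1)."""
        rho = np.array([[1.0]])
        for basis, b in zip(bases, bits):
            U = U_BASIS[basis]
            ket = np.zeros(2); ket[b] = 1.0
            local = 3 * U.conj().T @ np.outer(ket, ket) @ U - I2   # inverted measurement channel
            rho = np.kron(rho, local)
        return rho

    # Averaging many snapshots gives an unbiased estimate of the state, so
    # Tr(O @ mean_snapshot) estimates <O>; shallow shadows replace the local
    # basis rotations with a short random circuit and a different inverse map.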
Audits are critical mechanisms for identifying the risks and limitations of deployed artificial intelligence (AI) systems. However, the effective execution of AI audits remains incredibly difficult, and practitioners rely on various tools to support their efforts. Drawing on interviews with 35 AI audit practitioners and a landscape analysis of 390 tools, we map the current ecosystem of available AI audit tools. While there are many tools designed to assist practitioners with setting standards and evaluating AI systems, these tools often fall short of supporting the accountability goals of AI auditing in practice. We thus highlight areas for future tool development beyond evaluation -- from harms discovery to advocacy -- and outline the challenges practitioners face in their efforts to use AI audit tools. We conclude that resources are lacking to adequately support the full scope of needs for many AI audit practitioners and recommend that the field move beyond tools for evaluation alone, towards more comprehensive infrastructure for AI accountability.
As one of the most successful and effective software testing techniques of recent years, fuzz testing has uncovered numerous bugs and vulnerabilities in modern software, including network protocol software. In contrast to other fuzzing targets, network protocol software exhibits distinct characteristics and challenges, introducing a plethora of research questions that need to be addressed in the design and implementation of network protocol fuzzers. While some research has evaluated and systematized the knowledge of general fuzzing techniques at a high level, a similar analysis and summarization is lacking for fuzzing research specific to network protocols. This paper offers a comprehensive exposition of the fuzzing-related features of network protocol software and conducts a systematic review of representative advances in network protocol fuzzing since its inception. We summarize state-of-the-art strategies and solutions across various aspects, propose a unified protocol fuzzing process model, and introduce the techniques involved in each stage of the model. Finally, we summarize promising research directions in the landscape of protocol fuzzing to foster exploration within the community toward more efficient and intelligent modern network protocol fuzzing techniques.
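As a point of reference for the process model mentioned above, here is a bare-bones mutate-send-observe loop that network protocol fuzzers build upon; the host, port, seed, and mutation strategy are illustrative placeholders of ours, and real fuzzers layer state tracking, grammar-aware mutation, coverage feedback, and crash triage on top, which are the aspects the survey covers.

    # Minimal mutation-based loop against a network target (illustrative only).
    import random
    import socket

    def mutate(msg: bytes) -> bytes:
        b = bytearray(msg)
        for _ in range(random.randint(1, 4)):
            b[random.randrange(len(b))] = random.randrange(256)   # random byte flips
        return bytes(b)

    def fuzz_once(host: str, port: int, seed: bytes):
        payload = mutate(seed)
        with socket.create_connection((host, port), timeout=2) as s:
            s.sendall(payload)
            try:
                reply = s.recv(4096)
            except socket.timeout:
                reply = b""        # a hang (or a crash) is exactly what the fuzzer watches for
        return payload, reply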
Security vulnerabilities are increasingly prevalent in modern software and have broad consequences for society. Various approaches to defending against these vulnerabilities have been proposed, among which those leveraging deep learning (DL) avoid major barriers faced by other techniques and have therefore attracted growing attention in recent years. However, DL-based approaches face critical challenges, including the lack of sizable, quality-labeled, task-specific datasets and their inability to generalize well to unseen, real-world scenarios. Lately, large language models (LLMs) have demonstrated impressive potential in various domains in ways that overcome those challenges, especially through chain-of-thought (CoT) prompting. In this paper, we explore how to leverage LLMs and CoT to address three key software vulnerability analysis tasks: identifying vulnerabilities of a given type, discovering vulnerabilities of any type, and patching detected vulnerabilities. We instantiate the general CoT methodology in the context of these tasks through VSP, our unified, vulnerability-semantics-guided prompting approach, and conduct extensive experiments assessing VSP against five baselines for the three tasks on three LLMs and two datasets. Results show the substantial superiority of our CoT-inspired prompting (553.3%, 36.5%, and 30.8% higher F1 for vulnerability identification, discovery, and patching, respectively, on CVE datasets) over the baselines. Through in-depth case studies of VSP failures, we also reveal current gaps in LLM/CoT for challenging vulnerability cases, and propose and validate respective improvements.
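For illustration, a chain-of-thought prompt in the spirit of vulnerability-semantics-guided prompting might be assembled as below; the step wording, the example code, and the CWE label are placeholders of ours, not the paper's actual VSP templates or few-shot examples.

    # Illustrative prompt construction; the resulting string would be sent to an
    # LLM of choice (the paper evaluates three LLMs, not reproduced here).
    def build_vsp_style_prompt(code: str, cwe: str) -> str:
        return (
            f"You are analyzing C code for {cwe}.\n"
            "Step 1: Identify statements that could trigger this vulnerability type.\n"
            "Step 2: Trace how untrusted data reaches those statements.\n"
            "Step 3: Check whether sanitization or bounds checks break that path.\n"
            "Step 4: Conclude VULNERABLE or SAFE and cite the statements involved.\n\n"
            f"Code:\n{code}\n"
        )

    print(build_vsp_style_prompt("strcpy(buf, argv[1]);", "CWE-787 (out-of-bounds write)"))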
As research and deployment of AI grow, the computational burden to support and sustain its progress inevitably grows too. To train or fine-tune state-of-the-art models in NLP, computer vision, and other fields, some form of AI hardware acceleration is virtually a requirement. Recent large language models require considerable resources to train and deploy, resulting in significant energy usage, potential carbon emissions, and massive demand for GPUs and other hardware accelerators. This surge carries large implications for energy sustainability at the HPC/datacenter level. In this paper, we study the aggregate effect of power-capping GPUs on GPU temperature and power draw at a research supercomputing center. We show that, with an appropriate power cap, both temperature and power draw decrease significantly, reducing power consumption and potentially improving hardware lifespan with minimal impact on job performance. While power-capping reduces power draw by design, the aggregate system-wide effect on overall energy consumption is less clear; for instance, if users notice job performance degradation from GPU power caps, they may request additional GPU jobs to compensate, negating any energy savings or even worsening energy consumption. To our knowledge, our work is the first to conduct and make available a detailed analysis of the effects of GPU power-capping at supercomputing scale. We hope our work will inspire HPCs/datacenters to further explore, evaluate, and communicate the impact of power-capping AI hardware accelerators for more sustainable AI.
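As a concrete illustration of the quantities studied here, the sketch below polls per-GPU power draw, the configured power cap, and temperature through NVML; it is our own minimal monitoring snippet, not the center's telemetry pipeline, and setting the cap itself typically requires admin rights (e.g. nvidia-smi -pl <watts>).

    # Poll power draw, the configured power cap, and temperature for each GPU.
    import time
    import pynvml

    pynvml.nvmlInit()
    handles = [pynvml.nvmlDeviceGetHandleByIndex(i)
               for i in range(pynvml.nvmlDeviceGetCount())]

    for _ in range(5):
        for i, h in enumerate(handles):
            power_w = pynvml.nvmlDeviceGetPowerUsage(h) / 1000.0           # mW to W
            cap_w = pynvml.nvmlDeviceGetPowerManagementLimit(h) / 1000.0   # mW to W
            temp_c = pynvml.nvmlDeviceGetTemperature(h, pynvml.NVML_TEMPERATURE_GPU)
            print(f"gpu{i}: {power_w:.0f} W / cap {cap_w:.0f} W, {temp_c} C")
        time.sleep(1)

    pynvml.nvmlShutdown()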
Solving complicated AI tasks that span different domains and modalities is a key step toward artificial general intelligence. While abundant AI models are available for different domains and modalities, they cannot handle complicated AI tasks on their own. Considering that large language models (LLMs) have exhibited exceptional abilities in language understanding, generation, interaction, and reasoning, we advocate that LLMs could act as a controller to manage existing AI models for solving complicated AI tasks, with language serving as a generic interface to empower this. Based on this philosophy, we present HuggingGPT, a framework that leverages LLMs (e.g., ChatGPT) to connect various AI models in machine learning communities (e.g., Hugging Face) to solve AI tasks. Specifically, upon receiving a user request, we use ChatGPT to conduct task planning, select models according to the function descriptions available in Hugging Face, execute each subtask with the selected AI model, and summarize the response according to the execution results. By leveraging the strong language capability of ChatGPT and the abundant AI models in Hugging Face, HuggingGPT covers numerous sophisticated AI tasks across different modalities and domains and achieves impressive results in language, vision, speech, and other challenging tasks, paving a new way towards artificial general intelligence.
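To show the four-stage flow described above (task planning, model selection, task execution, response generation) in one place, here is a schematic Python sketch; the stage prompts, the toy "LLM", and the model identifier are placeholders of ours and do not reproduce HuggingGPT's actual prompts or Hugging Face API calls.

    def plan_tasks(llm, request):
        # 1. Task planning: the LLM decomposes the user request into subtasks.
        return llm({"stage": "plan", "request": request})

    def select_model(llm, task, catalog):
        # 2. Model selection: pick an expert model by its function description.
        return llm({"stage": "select", "task": task, "catalog": catalog})

    def execute(model, task):
        # 3. Task execution: run the selected model (e.g., via an inference API).
        return f"<{model} output for {task!r}>"

    def respond(llm, request, results):
        # 4. Response generation: the LLM summarizes all execution results.
        return llm({"stage": "respond", "request": request, "results": results})

    def pipeline(llm, catalog, request):
        tasks = plan_tasks(llm, request)
        results = {t: execute(select_model(llm, t, catalog), t) for t in tasks}
        return respond(llm, request, results)

    # toy stand-in for a real LLM call, just to make the control flow executable
    toy_llm = lambda m: (["detect objects", "caption the image"] if m["stage"] == "plan"
                         else m["catalog"][0] if m["stage"] == "select"
                         else f"summary of {m['results']}")
    print(pipeline(toy_llm, ["facebook/detr-resnet-50"], "describe photo.jpg"))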
We present VeriX, a first step towards verified explainability of machine learning models in safety-critical applications. Specifically, our sound and optimal explanations can guarantee prediction invariance against bounded perturbations. We utilise constraint solving techniques together with feature sensitivity ranking to efficiently compute these explanations. We evaluate our approach on image recognition benchmarks and a real-world scenario of autonomous aircraft taxiing.
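To make the combination of constraint solving and feature sensitivity ranking more tangible, here is a simplified sketch of a greedy explanation loop; the prediction_invariant oracle stands in for the actual verifier queries, and the traversal details and signatures are our assumptions rather than VeriX's exact algorithm.

    def compute_explanation(num_features, sensitivity, prediction_invariant):
        """Return indices that must stay fixed for the prediction to be invariant
        under bounded perturbation of all remaining (free) features."""
        order = sorted(range(num_features), key=lambda i: sensitivity[i])
        explanation, free = set(), set()
        for i in order:                      # visit features in sensitivity order
            trial = free | {i}
            if prediction_invariant(free=trial):
                free = trial                 # i is irrelevant: perturbing it cannot flip the label
            else:
                explanation.add(i)           # i is needed to guarantee the prediction
        return explanation

    # prediction_invariant(free=...) would be answered by a neural-network
    # verifier encoding the model, the input, and the perturbation bound.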
Recent years have witnessed a resurgence of knowledge engineering, featured by the fast growth of knowledge graphs. However, most existing knowledge graphs are represented with pure symbols, which limits machines' capability to understand the real world. The multi-modalization of knowledge graphs is an inevitable key step towards realizing human-level machine intelligence, and the results of this endeavor are Multi-modal Knowledge Graphs (MMKGs). In this survey of MMKGs constructed from texts and images, we first give definitions of MMKGs, followed by preliminaries on multi-modal tasks and techniques. We then systematically review the challenges, progress, and opportunities in the construction and the application of MMKGs, respectively, with detailed analyses of the strengths and weaknesses of different solutions. We conclude this survey with open research problems relevant to MMKGs.