Static analysis is a classical technique for improving software security and software quality in general. Fairly recently, a new static analyzer was implemented in the GNU Compiler Collection (GCC). The present paper uses GCC's analyzer to empirically examine popular Linux packages. The dataset used is based on those packages in the Gentoo Linux distribution that are either written in C or contain C code. In total, $3,538$ such packages are covered. According to the results, uses of uninitialized variables and NULL pointer dereferences are the most common problems flagged by the analyzer. Classical memory management issues are relatively rare. The warnings also follow a long-tailed probability distribution across the packages; a few packages are highly warning-prone, whereas no warnings are present for as many as 89% of the packages. Furthermore, the warnings do not vary across different application domains. With these results, the paper contributes to the domain of large-scale empirical research on software quality and security. In addition, a discussion is presented about the practical implications of the results.
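To make the two most common warning types concrete, the following minimal C snippet (an illustration, not code from the studied packages) triggers both when compiled with the analyzer enabled, e.g. gcc -fanalyzer -c example.c; the exact warning names and the required GCC version may vary between releases.

  /* Minimal examples of the two warning classes reported as most common;
   * compile with: gcc -fanalyzer -c example.c */
  #include <stdlib.h>

  int use_uninitialized(void)
  {
      int x;          /* never assigned */
      return x + 1;   /* analyzer: use of uninitialized value 'x' */
  }

  void possible_null_dereference(size_t n)
  {
      int *buf = malloc(n * sizeof *buf);
      buf[0] = 42;    /* analyzer: 'buf' may be NULL if malloc fails */
      free(buf);
  }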
Mixed methods research is often used in software engineering, but researchers outside of the social or human sciences often lack experience with these designs. This paper provides guidelines and advice on how to design mixed methods research, aiming to encourage its intentional, rigorous, and innovative use in software engineering. It also presents key characteristics of core mixed methods research designs. Through a number of fictitious but recognizable software engineering research scenarios, we showcase how to choose suitable designs and consider the inevitable trade-offs any design choice leads to. We describe several antipatterns that illustrate what to avoid in mixed methods research, and discuss when mixed methods research should be considered over other approaches.
Academic research in static analysis produces software implementations. These implementations are time-consuming to develop, and some need to be maintained in order to enable building further research upon them. While necessary, these processes can quickly become challenging. This article documents the tools and techniques we have developed to simplify the maintenance of Mopsa since 2017. Mopsa is a static analysis platform that aims to be sound. First, we describe an automated way to measure precision that does not require any baseline of true bugs obtained by manually inspecting the results. In addition, it improves the transparency of the analysis and helps discover regressions during continuous integration. Second, we have taken inspiration from standard tools observing the concrete execution of a program to design custom tools observing the abstract execution of the analyzed program itself, such as abstract debuggers and profilers. Finally, we report on some cases of automated testcase reduction.
Quantum software is a key enabler for the revolutionary applications promised by Quantum Computing (QC), a field poised to transform industries and attract significant investment. However, quantum software engineering (QSE) faces unique ethical challenges that must be addressed early to ensure responsible and equitable development. This vision paper explores the role of ethics, and specifically of diversity and inclusion (D&I) considerations, in QSE, emphasising the importance of embedding these principles throughout the quantum software lifecycle. Neglecting D&I could lead to power imbalances, algorithmic biases, and limited innovation, ultimately hindering QC's potential to benefit society equitably. By drawing lessons from classical software and artificial intelligence, we discuss our vision for integrating D&I into QSE, toward a more transparent, fair, and accountable future for quantum technologies. This paper aims to initiate discussion within the research community and lay the foundation for the ethical development of quantum software that promotes social equity.
This study investigates the impact of integrating DevSecOps and Generative Artificial Intelligence (GAI) on software delivery performance within technology firms. Utilizing a qualitative research methodology, the research involved semi-structured interviews with industry practitioners and analysis of case studies from organizations that have successfully implemented these methodologies. The findings reveal significant enhancements in research and development (R&D) efficiency, improved source code management, and heightened software quality and security. The integration of GAI facilitated automation of coding tasks and predictive analytics, while DevSecOps ensured that security measures were embedded throughout the development lifecycle. Despite the promising results, the study identifies gaps related to the generalizability of the findings due to the limited sample size and the qualitative nature of the research. This paper contributes valuable insights into the practical implementation of DevSecOps and GAI, highlighting their potential to transform software delivery processes in technology firms. Future research directions include quantitative assessments of the impact on specific business outcomes and comparative studies across different industries.
Probabilistic programs are a powerful and convenient approach to formalise distributions over system executions. A classical verification problem for probabilistic programs is temporal inference: to compute the likelihood that the execution traces satisfy a given temporal property. This paper presents a general framework for temporal inference, which applies to a rich variety of quantitative models including those that arise in the operational semantics of probabilistic and weighted programs. The key idea underlying our framework is that in a variety of existing approaches, the main construction that enables temporal inference is that of a product between the system of interest and the temporal property. We provide a unifying mathematical definition of product constructions, enabled by the realisation that 1) both systems and temporal properties can be modelled as coalgebras and 2) product constructions are distributive laws in this context. Our categorical framework leads us to our main contribution: a sufficient condition for correctness, which is precisely what enables the use of the product construction for temporal inference. We show that our framework can be instantiated to naturally recover a number of disparate approaches from the literature including, e.g., partial expected rewards in Markov reward models, resource-sensitive reachability analysis, and weighted optimization problems. Further, we demonstrate a product of weighted programs and weighted temporal properties as a new instance, showing the scalability of our approach.
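As a concrete, textbook-style instance of the product constructions unified above (a sketch under standard definitions, not the paper's coalgebraic formulation), consider a labelled Markov chain $M = (S, P, L)$ and a deterministic automaton $A = (Q, \delta)$ encoding the temporal property; their product runs both in lockstep:
\[
  M \otimes A = \big(S \times Q,\; P^{\otimes}\big), \qquad
  P^{\otimes}\big((s,q),(s',q')\big) =
  \begin{cases}
    P(s,s') & \text{if } q' = \delta\big(q, L(s')\big),\\
    0 & \text{otherwise.}
  \end{cases}
\]
The likelihood that the traces of $M$ satisfy the property then reduces to a reachability (or expected-reward) computation over the accepting states of the product.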
[Context and Motivation] Several studies have investigated the attributes of great software practitioners. However, such an investigation is still missing in Requirements Engineering (RE). The current knowledge on attributes of great software practitioners might not translate easily to the context of RE because its activities are usually less technical and more human-centered than other software engineering activities. [Question/Problem] This work investigates the attributes of great requirements engineers, the relationships between them, and the strategies that can be employed to acquire these attributes. We follow a method composed of a survey with 18 practitioners and follow-up interviews with 11 of them. [Principal Ideas/Results] Investigative ability in talking to stakeholders, judiciousness, and understanding of the business are the most commonly mentioned attributes among the set of 22 attributes identified, which were grouped into four categories. We also found 38 strategies to improve RE skills; examples are training, talking to all stakeholders, and acquiring domain knowledge. [Contribution] The attributes, their categories, and their relationships are organized into a map. The relations between attributes and strategies are represented in a Sankey diagram. Software practitioners can use our findings to improve their understanding of the role and responsibilities of requirements engineers.
The paper examines the handling times of software vulnerabilities in CPython, the reference implementation and interpreter of what is today likely the most popular programming language, Python. The background comes from so-called vulnerability life cycle analysis, the literature on bug fixing times, and recent research on the security of Python software. Based on regression analysis, the associated vulnerability fixing times can be explained very well merely by knowing who reported the vulnerabilities. Severity, proof-of-concept code, commits made to a version control system, comments posted on a bug tracker, and references to other sources do not explain the vulnerability fixing times. With these results, the paper contributes to the recent effort to better understand the security of the Python ecosystem.
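A generic form of such a regression (a sketch based only on the variables named above; the paper's actual specification may differ) regresses the logarithm of the fixing times on reporter indicators and the remaining covariates:
\[
  \ln t_i = \alpha + \sum_{j} \beta_j\, r_{ij} + \boldsymbol{\gamma}^{\top}\mathbf{x}_i + \varepsilon_i,
\]
where $t_i$ is the fixing time of vulnerability $i$, $r_{ij}$ indicates whether it was reported by reporter $j$, and $\mathbf{x}_i$ collects severity, proof-of-concept code, commits, comments, and references; the reported finding corresponds to the $\beta_j$ terms carrying nearly all of the explanatory power.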
Despite extensive efforts to create fairer machine learning (ML) datasets, there remains a limited understanding of the practical aspects of dataset curation. Drawing from interviews with 30 ML dataset curators, we present a comprehensive taxonomy of the challenges and trade-offs encountered throughout the dataset curation lifecycle. Our findings underscore overarching issues within the broader fairness landscape that impact data curation. We conclude with recommendations aimed at fostering systemic changes to better facilitate fair dataset curation practices.
Mathematical reasoning is a fundamental aspect of human intelligence and is applicable in various fields, including science, engineering, finance, and everyday life. The development of artificial intelligence (AI) systems capable of solving math problems and proving theorems has garnered significant interest in the fields of machine learning and natural language processing. For example, mathematics serves as a testbed for aspects of reasoning that are challenging for powerful deep learning models, driving new algorithmic and modeling advances. On the other hand, recent advances in large-scale neural language models have opened up new benchmarks and opportunities to use deep learning for mathematical reasoning. In this survey paper, we review the key tasks, datasets, and methods at the intersection of mathematical reasoning and deep learning over the past decade. We also evaluate existing benchmarks and methods, and discuss future research directions in this domain.
In pace with developments in the research field of artificial intelligence, knowledge graphs (KGs) have attracted a surge of interest from both academia and industry. As a representation of semantic relations between entities, KGs have proven to be particularly relevant for natural language processing (NLP), experiencing rapid spread and wide adoption in recent years. Given the increasing amount of research work in this area, several KG-related approaches have been surveyed in the NLP research community. However, a comprehensive study that categorizes established topics and reviews the maturity of individual research streams remains absent to this day. Contributing to closing this gap, we systematically analyzed 507 papers from the literature on KGs in NLP. Our survey encompasses a multifaceted review of tasks, research types, and contributions. As a result, we present a structured overview of the research landscape, provide a taxonomy of tasks, summarize our findings, and highlight directions for future work.