Background: Continuous experimentation (CE) has been proposed as a data-driven approach to software product development. Several challenges with this approach have been described in large organisations, but its application in smaller companies with early-stage products remains largely unexplored. Aims: The goal of this study is to understand what factors could affect the adoption of CE in early-stage software startups. Method: We present a descriptive multiple-case study of five startups in Finland which differ in their utilisation of experimentation. Results: We find that practices often mentioned as prerequisites for CE, such as iterative development and continuous integration and delivery, were used in the case companies. However, CE was not widely recognised or used as described in the literature. Only one company performed experiments and used experimental data systematically. Conclusions: Our study indicates that small companies may be unlikely to adopt CE unless 1) at least some company employees have prior experience with the practice, 2) its adoption does not exceed the company's limited available resources, and 3) the practice solves a problem the company is currently experiencing, or the company perceives an almost immediate benefit from adopting it. We discuss implications for advancing CE in early-stage startups and outline directions for future research on the approach.
A candidate explanation for the good empirical performance of deep neural networks is the implicit regularization effect of first-order optimization methods. Inspired by this, we prove a convergence theorem for nonconvex composite optimization and apply it to a general learning problem covering many machine learning applications, including supervised learning. We then present a deep multilayer perceptron model and prove that, when sufficiently wide, it $(i)$ leads to the convergence of gradient descent to a global optimum at a linear rate, $(ii)$ benefits from the implicit regularization effect of gradient descent, $(iii)$ is subject to novel bounds on the generalization error, $(iv)$ exhibits the lazy training phenomenon and $(v)$ enjoys learning rate transfer across different widths. The corresponding coefficients, such as the convergence rate, improve as the width is increased further, and depend on the even-order moments of the data-generating distribution up to an order that depends on the number of layers. The only non-mild assumption we make is that the smallest eigenvalue of the neural tangent kernel at initialization concentrates away from zero, which has been shown to hold for a number of less general models in contemporary works. We present empirical evidence supporting this assumption as well as our theoretical claims.
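To make the key assumption and claim $(i)$ concrete, the following is a schematic rendering of the standard NTK-based linear-rate argument; it is an illustrative sketch under simplified conditions, and the precise constants, loss, and assumptions in the paper may differ.

```latex
% Empirical neural tangent kernel (NTK) at initialization $\theta_0$,
% evaluated on training inputs $x_i, x_j$:
K_{ij} \;=\; \big\langle \nabla_\theta f(x_i;\theta_0),\, \nabla_\theta f(x_j;\theta_0) \big\rangle .
% Assumption: the smallest eigenvalue concentrates away from zero,
\lambda_{\min}(K) \;\ge\; \lambda_0 \;>\; 0 .
% Then, for sufficiently wide networks and a suitable step size $\eta$,
% gradient descent on the squared loss $L$ contracts at a linear rate:
L(\theta_t) \;\le\; \big(1 - \eta \lambda_0\big)^{t}\, L(\theta_0) .
```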
Artificial Intelligence (AI) systems have gained significant traction in the recent past, creating new challenges in requirements engineering (RE) when building AI software systems. RE practices for AI have not been studied much, and empirical studies are scarce. Additionally, many AI software solutions tend to focus on the technical aspects and ignore human-centred values. In this paper, we report on a case study of eliciting and modeling requirements using our framework and a supporting tool for human-centred RE for AI systems. Our case study is a mobile health application for encouraging people with type-2 diabetes to reduce their sedentary behavior. We conducted our study with three experts from the app team -- a software engineer, a project manager and a data scientist. We found that most human-centred aspects were not originally considered when developing the first version of the application. We also report on other insights and challenges faced in RE for the health application, e.g., frequently changing requirements.
This paper presents a data-driven framework to improve the trustworthiness of US tax preparation software systems. Given the legal implications that bugs in such software have for its users, ensuring the compliance and trustworthiness of tax preparation software is of paramount importance. The key barriers to developing debugging aids for tax preparation systems are the unavailability of explicit specifications and the difficulty of obtaining oracles. We posit that, since US tax law adheres to the legal doctrine of precedent, the specifications about the outcome of tax preparation software for an individual taxpayer must be viewed in comparison with individuals who are deemed similar. Consequently, these specifications are naturally available as properties of the software requiring that similar inputs produce similar outputs. Inspired by the metamorphic testing paradigm, we dub these relations metamorphic relations. In collaboration with legal and tax experts, we explicated metamorphic relations for a set of challenging properties from various US Internal Revenue Service (IRS) publications, including Publication 596 (Earned Income Tax Credit), Schedule 8812 (Qualifying Children/Other Dependents), and Form 8863 (Education Credits). We focus on an open-source tax preparation software package for our case study and develop a randomized test-case generation strategy to systematically validate the correctness of tax preparation software guided by metamorphic relations. We further aid this test-case generation by visually explaining the behavior of the software on suspicious instances using easy-to-interpret decision-tree models. Our tool uncovered several accountability bugs of varying severity, ranging from non-robust behavior in corner cases (unreliable behavior when tax returns are close to zero) to missing eligibility conditions in updated versions of the software.
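As an illustration of the kind of metamorphic relation and randomized test described above, the sketch below checks a "similar inputs, similar outputs" property of an Earned Income Tax Credit computation. The `compute_eitc` schedule, the thresholds and the tolerance are hypothetical stand-ins, not the studied software's interface or the authors' actual relations.

```python
import random

def compute_eitc(earned_income: float, num_children: int) -> float:
    """Toy, illustrative EITC schedule (NOT the real IRS rules): the credit
    phases in linearly, plateaus, then phases out. In the actual study this
    would be a call into the tax preparation software under test."""
    rate_in, rate_out = 0.34, 0.16
    plateau_start = 11000 + 4000 * num_children
    plateau_end = 21000 + 4000 * num_children
    if earned_income <= plateau_start:
        return rate_in * earned_income
    cap = rate_in * plateau_start
    if earned_income <= plateau_end:
        return cap
    return max(0.0, cap - rate_out * (earned_income - plateau_end))

def test_similar_inputs_similar_outputs(trials: int = 1000) -> None:
    """Metamorphic relation: two taxpayers who differ only by a small amount
    of earned income should receive similar credits (no unexplained jumps)."""
    delta, tolerance = 50.0, 60.0
    for _ in range(trials):
        income = random.uniform(0, 60000)      # randomized test-case generation
        children = random.randint(0, 3)
        base = compute_eitc(income, children)
        perturbed = compute_eitc(income + delta, children)
        assert abs(perturbed - base) <= tolerance, (
            f"suspicious jump near income={income:.2f}, children={children}")

if __name__ == "__main__":
    test_similar_inputs_similar_outputs()
    print("metamorphic relation held on all sampled cases")
```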
Explainable machine learning provides tools to better understand predictive models and their decisions, but many such methods are limited to producing insights with respect to a single class. When generating explanations for several classes, reasoning over them to obtain a complete view may be difficult since they can present competing or contradictory evidence. To address this issue we introduce a novel paradigm of multi-class explanations. We outline the theory behind such techniques and propose a local surrogate model based on multi-output regression trees -- called LIMEtree -- which offers faithful and consistent explanations of multiple classes for individual predictions while being post-hoc, model-agnostic and data-universal. In addition to strong fidelity guarantees, our implementation supports (interactive) customisation of the explanatory insights and delivers a range of diverse explanation types, including the counterfactual statements favoured in the literature. We evaluate our algorithm with a collection of quantitative experiments, a qualitative analysis based on explainability desiderata and a preliminary user study on an image classification task, comparing it to LIME. Our contributions demonstrate the benefits of multi-class explanations and the wide-ranging advantages of our method across a diverse set of scenarios.
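The following is a minimal sketch of the core idea behind a multi-output surrogate of this kind: fit a single multi-output regression tree to the black-box class probabilities of samples perturbed around the instance being explained, so the explanations of all classes share one consistent structure. It uses scikit-learn's multi-output `DecisionTreeRegressor` and is an assumed simplification, not the authors' LIMEtree implementation.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def multi_class_surrogate(black_box_predict_proba, instance, n_samples=2000,
                          scale=0.3, max_depth=4, random_state=0):
    """Fit one multi-output regression tree approximating ALL class
    probabilities of the black box in a neighbourhood of `instance`."""
    rng = np.random.default_rng(random_state)
    # Perturb the instance locally (Gaussian noise around the explained point).
    X_local = instance + rng.normal(scale=scale,
                                    size=(n_samples, instance.shape[0]))
    # Query the black box: one probability vector per perturbed sample.
    Y_local = black_box_predict_proba(X_local)   # shape (n_samples, n_classes)
    # A single tree regresses all class probabilities jointly, so the
    # explanation for every class comes from the same set of splits.
    tree = DecisionTreeRegressor(max_depth=max_depth, random_state=random_state)
    tree.fit(X_local, Y_local)
    return tree

# Example usage with any probabilistic classifier, e.g. a random forest:
if __name__ == "__main__":
    from sklearn.datasets import load_iris
    from sklearn.ensemble import RandomForestClassifier

    X, y = load_iris(return_X_y=True)
    clf = RandomForestClassifier(random_state=0).fit(X, y)
    surrogate = multi_class_surrogate(clf.predict_proba, X[0])
    print(surrogate.predict(X[:1]))   # approximate per-class probabilities
```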
A main characteristic of crowdsourcing software development (CSD) is the complexity of the tasks and of the skills workers need to complete them successfully. The tasks proposed to the crowd in CSD are checked to ensure they are manageable and achievable. In general, individual tasks originate from larger goal-oriented projects. There are practices for breaking software projects down into manageable tasks, known as task decomposition. This study identified task decomposition techniques in software engineering, particularly in the context of CSD. We then defined the role of the experienced developers who guide the requester in decomposing the project, preparing tasks, and reviewing submissions. The study explored and addressed decomposition approaches in CSD. Next, we selected projects on TopCoder to identify the task decomposition process in the CSD context. Finally, we conclude with future research directions for investigating decomposition approaches and their effects in the CSD context, to help ensure successful crowdsourced software projects.
Background. The mass transition to remote work amid the COVID-19 pandemic profoundly affected software professionals, who abruptly shifted into ostensibly temporary home offices. The effects of this transition on these professionals are complex, depending on the particularities of the context and individuals. Recent studies advocate for remote structures to create opportunities for many equity-deserving groups; however, remote work can also be challenging for some individuals, such as women and individuals with disabilities. Objective. This study aims to investigate the effects of remote work on LGBTQIA+ software professionals. Method. Grounded theory methodology was applied based on information collected from two main sources: a survey questionnaire with a sample of 57 LGBTQIA+ software professionals and nine follow-up interviews with individuals from this sample. This sample included professionals of different genders, ethnicities, sexual orientations, and levels of experience. Findings. Our findings demonstrate that (1) remote work benefits LGBTQIA+ people by increasing security and visibility; (2) remote work harms LGBTQIA+ software professionals through isolation and invisibility; (3) the benefits outweigh the drawbacks; and (4) the drawbacks can be mitigated by supportive measures developed by software companies. Conclusion. This paper investigated how remote work can affect LGBTQIA+ software professionals and presented a set of recommendations on how software companies can address the benefits and limitations associated with this work model. In summary, we concluded that remote work is crucial in increasing diversity and inclusion in the software industry.
The ubiquity of distributed machine learning (ML) in sensitive public domain applications calls for algorithms that protect data privacy, while being robust to faults and adversarial behaviors. Although privacy and robustness have been extensively studied independently in distributed ML, their synthesis remains poorly understood. We present the first tight analysis of the error incurred by any algorithm ensuring robustness against a fraction of adversarial machines, as well as differential privacy (DP) for honest machines' data against any other curious entity. Our analysis exhibits a fundamental trade-off between privacy, robustness, and utility. Surprisingly, we show that the cost of this trade-off is marginal compared to that of the classical privacy-utility trade-off. To prove our lower bound, we consider the case of mean estimation, subject to distributed DP and robustness constraints, and devise reductions to centralized estimation of one-way marginals. We prove our matching upper bound by presenting a new distributed ML algorithm using a high-dimensional robust aggregation rule. The latter amortizes the dependence on the dimension in the error (caused by adversarial workers and DP), while being agnostic to the statistical properties of the data.
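As a rough illustration of how robustness and distributed DP can be combined (not the paper's specific high-dimensional aggregation rule or privacy analysis), the sketch below has each honest worker clip and noise its gradient before a server applies a coordinate-wise trimmed mean that discards the most extreme values contributed by up to `f` adversarial workers.

```python
import numpy as np

def privatize_gradient(grad, clip_norm, noise_multiplier, rng):
    """Worker-side step: clip the gradient and add Gaussian noise so the
    shared message satisfies (a flavour of) differential privacy."""
    norm = np.linalg.norm(grad)
    clipped = grad * min(1.0, clip_norm / (norm + 1e-12))
    return clipped + rng.normal(scale=noise_multiplier * clip_norm,
                                size=grad.shape)

def trimmed_mean(messages, f):
    """Server-side robust aggregation: per coordinate, drop the f largest and
    f smallest values, then average the rest. Tolerates up to f adversaries."""
    stacked = np.sort(np.stack(messages), axis=0)   # shape (n_workers, dim)
    kept = stacked[f: stacked.shape[0] - f]
    return kept.mean(axis=0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    dim, n_honest, f = 10, 8, 2
    honest = [privatize_gradient(rng.normal(size=dim), 1.0, 0.5, rng)
              for _ in range(n_honest)]
    adversarial = [np.full(dim, 100.0) for _ in range(f)]  # crude attack
    print(trimmed_mean(honest + adversarial, f))           # attack is trimmed away
```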
Because of the constrained nature of devices and networks in the Internet of Things (IoT), secure yet lightweight communication protocols are paramount. QUIC is an emerging contender in this arena, and it provides several benefits over TCP. The tuning of TCP for IoT has recently been studied and guidelines are provided in RFC 9006; the same is not true of QUIC -- a much newer protocol with a learning curve. The aim of this paper is to provide empirically based insights into parameterization considerations of QUIC for IoT. To this end, we rigorously tested two modes of MQTT-over-QUIC as well as a pure-HTTP/3 publish-subscribe architecture (of our own design) under various conditions. A suite of 8 metrics relating to device and network overhead and performance was employed, in addition to root cause analysis on a hardware testbed. We identified a number of tuning considerations and concluded that HTTP/3 was preferable for reliable time-sensitive applications.
Artificial intelligence (AI) has become a part of everyday conversation and of our lives. It is considered the new electricity that is revolutionizing the world. AI attracts heavy investment in both industry and academia. However, there is also a lot of hype in the current AI debate. AI based on so-called deep learning has achieved impressive results in many problems, but its limits are already visible. AI has been under research since the 1940s, and the field has seen many ups and downs due to over-expectations and the disappointments that have followed. The purpose of this book is to give a realistic picture of AI, its history, its potential and its limitations. We believe that AI is a helper, not a ruler of humans. We begin by describing what AI is and how it has evolved over the decades. After the fundamentals, we explain the importance of massive data for the current mainstream of artificial intelligence. The most common representations, methods, and machine learning approaches in AI are covered. In addition, the main application areas are introduced. Computer vision has been central to the development of AI. The book provides a general introduction to computer vision and includes an exposure to the results and applications of our own research. Emotions are central to human intelligence, but they have so far seen little use in AI. We present the basics of emotional intelligence and our own research on the topic. We discuss super-intelligence that transcends human understanding, explaining why such an achievement seems impossible on the basis of present knowledge, and how AI could be improved. Finally, we summarize the current state of AI and what should be done in the future. In the appendix, we look at the development of AI education, especially from the perspective of its contents at our own university.
Causal inference has been a critical research topic across many domains, such as statistics, computer science, education, public policy and economics, for decades. Nowadays, estimating causal effects from observational data has become an appealing research direction owing to the large amount of available data and the low budget required, compared with randomized controlled trials. Fueled by the rapidly developing machine learning area, various causal effect estimation methods for observational data have sprung up. In this survey, we provide a comprehensive review of causal inference methods under the potential outcome framework, one of the well-known causal inference frameworks. The methods are divided into two categories depending on whether they require all three assumptions of the potential outcome framework or not. For each category, both the traditional statistical methods and the recent machine learning enhanced methods are discussed and compared. The plausible applications of these methods are also presented, including applications in advertising, recommendation, medicine and so on. Moreover, the commonly used benchmark datasets as well as the open-source codes are also summarized, which helps researchers and practitioners to explore, evaluate and apply causal inference methods.
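As a concrete example of an estimator that falls under the potential outcome framework discussed in the survey, the sketch below computes an inverse-propensity-weighted (IPW) estimate of the average treatment effect from observational data; the simulated dataset and the logistic propensity model are illustrative assumptions, not taken from the survey.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def ipw_ate(X, treatment, outcome):
    """Inverse propensity weighting (IPW) estimate of the average treatment
    effect, valid under the potential-outcome assumptions (unconfoundedness
    given X, overlap, and SUTVA)."""
    propensity = LogisticRegression().fit(X, treatment).predict_proba(X)[:, 1]
    propensity = np.clip(propensity, 0.01, 0.99)     # avoid extreme weights
    treated = treatment * outcome / propensity
    control = (1 - treatment) * outcome / (1 - propensity)
    return np.mean(treated - control)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 5000
    X = rng.normal(size=(n, 3))
    # Treatment assignment depends on covariates (confounding); true effect = 2.
    p_treat = 1 / (1 + np.exp(-X[:, 0]))
    treatment = rng.binomial(1, p_treat)
    outcome = 2.0 * treatment + X[:, 0] + rng.normal(size=n)
    print(f"IPW ATE estimate: {ipw_ate(X, treatment, outcome):.2f}")  # close to 2.0
```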