An agile organization adapts what it is building to match its customers' evolving needs. Agile teams also adapt to changes in their organization's work environment. The latest such change is the shift to "hybrid" work - a mix of in-person and virtual staff. Team members might sometimes work together in the office, from home, or from other locations, and they may struggle to sustain a high level of collaboration and innovation. It isn't just pandemic social distancing - many of us want to work from home to eliminate our commute and spend more time with family. Are there lessons and best practices that organizations can use to become and stay effective in a hybrid world? An XP 2022 panel organized by Steven Fraser (Innoxec) discussed these questions in June 2022. The panel was facilitated by Hendrik Esser (Ericsson) and featured Alistair Cockburn (Heart of Agile), Sandy Mamoli (Nomad8), Nils Brede Moe (SINTEF), Jaana Nyfjord (Spotify), and Darja Smite (Blekinge Institute of Technology).
Blockchain is a promising technology for enabling distributed and reliable data sharing at the network edge. The high security of blockchain is undoubtedly a critical factor for networks that handle important data items. On the other hand, according to the inherent dilemma in blockchain, an overemphasis on distributed security leads to poor transaction-processing capability, which limits the application of blockchain in data sharing scenarios with high-throughput and low-latency requirements. To enable demand-oriented distributed services, this paper investigates the relationship between capability and security in blockchain from the perspective of block propagation and the forking problem. First, a Markov chain is introduced to analyze gossip-based block propagation among edge servers, with the aim of deriving the block propagation delay and forking probability. Then, we study the impact of forking on blockchain capability and security metrics, in terms of transaction throughput, confirmation delay, fault tolerance, and the probability of malicious modification. The analytical results show that by adjusting the block generation time or block size, transaction throughput improves at the sacrifice of fault tolerance, and vice versa. Meanwhile, the decline in security can be offset by adjusting the confirmation threshold, at the cost of increased confirmation delay. The analysis of the capability-security trade-off provides a theoretical guideline for managing blockchain performance based on the requirements of data sharing scenarios.
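To make the throughput-versus-forking tension concrete, the sketch below uses the common Poisson block-arrival approximation rather than the paper's Markov-chain model: shorter block intervals raise throughput but also raise the chance that another block is mined while one is still propagating. All parameter values are illustrative assumptions.

```python
import math

def fork_probability(propagation_delay_s: float, block_interval_s: float) -> float:
    """Probability that another block is mined while a block is still propagating,
    assuming Poisson block arrivals with mean interval block_interval_s."""
    return 1.0 - math.exp(-propagation_delay_s / block_interval_s)

def throughput_tps(block_size_bytes: int, avg_tx_bytes: int, block_interval_s: float) -> float:
    """Transactions per second for a given block size and block interval."""
    return (block_size_bytes / avg_tx_bytes) / block_interval_s

# Example: shrinking the block interval raises throughput but also forking probability.
for interval in (600, 60, 10):
    print(interval, "s interval:",
          round(throughput_tps(1_000_000, 500, interval), 1), "tps,",
          "fork prob", round(fork_probability(10, interval), 3))
```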
MITRE ATT&CK is a widely used ontology that specifies the tactics, techniques, and procedures (TTPs) typical of malware behaviour, making it possible to exploit such TTPs for malware identification. However, this is far from an easy task, given that benign use of software can also match some of these TTPs. In this paper, we present RADAR, a system that can identify malicious behaviour in network traffic in two stages: first, RADAR extracts MITRE ATT&CK TTPs from arbitrary network traffic captures, and, second, it deploys decision trees to differentiate between malicious and benign uses of the detected TTPs. To evaluate RADAR, we created a dataset comprising 2,286,907 malicious and benign samples, for a total of 84,792,452 network flows. The experimental analysis confirms that RADAR is able to (i) match samples to multiple different TTPs, and (ii) effectively detect malware with an AUC score of 0.868. Besides being effective, RADAR is also highly configurable, interpretable, privacy-preserving, and efficient, and it can be easily integrated with existing security infrastructure to complement its capabilities.
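As a rough illustration of the second stage, the sketch below trains a scikit-learn decision tree on binary "TTP observed" features and reports ROC AUC; the feature matrix, labelling rule, and hyperparameters are hypothetical stand-ins, not RADAR's actual pipeline or dataset.

```python
# Illustrative only: a decision tree over binary TTP-presence features, scored with AUC.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_samples, n_ttps = 5000, 20                        # rows = captures, cols = ATT&CK TTPs
X = rng.integers(0, 2, size=(n_samples, n_ttps))    # 1 = TTP matched in the traffic
y = (X[:, :3].sum(axis=1) >= 2).astype(int)         # toy labelling rule for "malicious"

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```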
Firewalls are security devices that perform network traffic filtering. They are ubiquitous in industry and are a common means of enforcing organizational security policy. Security policy is specified at a high level of abstraction, with statements such as "web browsing is allowed only on workstations inside the office network", and needs to be translated into low-level firewall rules to be enforceable. There has been much work on the optimization, analysis, and platform independence of firewall rules, but an area that has seen much less success is the automatic translation of high-level security policies into firewall rules. In addition to improving the readability of the rules, such translation would make it easier to detect errors. This paper surveys over twenty papers that aim to generate firewall rules according to a security policy specified at a higher level of abstraction. It also presents an overview of similar features in modern firewall systems. Most approaches define specialized domain languages that get compiled into firewall rule sets, with some of them relying on formal specifications, ontologies, or graphical models. The approaches have improved over time, but many drawbacks still need to be addressed before wider adoption is possible.
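To illustrate what such a translation layer might look like, here is a minimal sketch that compiles a high-level policy statement ("web browsing is allowed only on workstations inside the office network") into iptables-style rules; the policy schema, zone/service mappings, and generated rule strings are illustrative assumptions, not the language of any surveyed system.

```python
# A minimal sketch of compiling a high-level policy statement into low-level rules.
from dataclasses import dataclass

@dataclass
class PolicyRule:
    service: str        # e.g. "web browsing"
    src_zone: str       # e.g. "office network"
    action: str         # "allow" or "deny"

ZONES = {"office network": "10.0.1.0/24"}              # hypothetical address mapping
SERVICES = {"web browsing": [("tcp", 80), ("tcp", 443)]}

def compile_rule(rule: PolicyRule) -> list[str]:
    target = "ACCEPT" if rule.action == "allow" else "DROP"
    src = ZONES[rule.src_zone]
    return [
        f"iptables -A FORWARD -s {src} -p {proto} --dport {port} -j {target}"
        for proto, port in SERVICES[rule.service]
    ]

for line in compile_rule(PolicyRule("web browsing", "office network", "allow")):
    print(line)
```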
Learning on big data has brought success to artificial intelligence (AI), but the annotation and training costs are expensive. In the future, learning on small data will be one of the ultimate goals of AI, requiring machines to recognize objects and scenarios from small data, as humans do. A series of machine learning models follows this direction, such as active learning, few-shot learning, and deep clustering. However, there are few theoretical guarantees for their generalization performance. Moreover, most of their settings are passive, that is, the label distribution is explicitly controlled by one specified sampling scenario. This survey follows agnostic active sampling under a PAC (Probably Approximately Correct) framework to analyze the generalization error and label complexity of learning on small data in supervised and unsupervised fashions. With these theoretical analyses, we categorize small data learning models from two geometric perspectives: Euclidean and non-Euclidean (hyperbolic) mean representations, and we present and discuss their optimization solutions. Later, some potential learning scenarios that may benefit from small data learning are summarized and analyzed. Finally, some challenging applications that may benefit from learning on small data, such as computer vision and natural language processing, are also surveyed.
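For reference, the textbook agnostic PAC sample-complexity bound below gives the flavour of the guarantees such an analysis targets; it is the standard result for a hypothesis class of VC dimension d, not the survey's specific label-complexity bounds for active sampling.

```latex
% Standard agnostic PAC bound: with probability at least 1-\delta, empirical risk
% minimization over a class of VC dimension d returns a hypothesis whose error is
% within \epsilon of the best in the class, provided the sample size satisfies
m(\epsilon,\delta) \;=\; O\!\left(\frac{d + \ln(1/\delta)}{\epsilon^{2}}\right).
```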
In practically every industry today, artificial intelligence is one of the most effective ways for machines to assist humans. Since its inception, a large number of researchers around the globe have pioneered applications of artificial intelligence in medicine. Although artificial intelligence may seem to be a 21st-century concept, Alan Turing laid its conceptual foundations in the 1940s. Artificial intelligence in medicine has a huge variety of applications that researchers continue to explore. The tremendous increase in computing and human resources has hastened progress in the 21st century, and it will continue to do so for many years to come. This literature review highlights the emerging field of artificial intelligence in medicine and its current level of development.
Autonomic computing investigates how systems can achieve user-specified control outcomes on their own, without the intervention of a human operator. The fundamentals of autonomic computing have been substantially influenced by control theory for closed- and open-loop systems. In practice, complex systems may exhibit a number of concurrent and inter-dependent control loops. Despite research into autonomic models for managing computing resources, ranging from individual resources (e.g., web servers) to resource ensembles (e.g., multiple resources within a data center), integrating Artificial Intelligence (AI) and Machine Learning (ML) to improve resource autonomy and performance at scale remains a fundamental challenge. The integration of AI/ML to achieve such autonomic self-management of systems can occur at different levels of granularity, from full automation to human-in-the-loop automation. In this article, leading academics, researchers, practitioners, engineers, and scientists in the fields of cloud computing, AI/ML, and quantum computing join to discuss current research and potential future directions for these fields. Further, we discuss challenges and opportunities for leveraging AI and ML in next-generation computing for emerging computing paradigms, including cloud, fog, edge, serverless, and quantum computing environments.
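As a deliberately simplified picture of a single autonomic control loop, the sketch below runs a MAPE-K-style cycle (Monitor, Analyze, Plan, Execute over shared Knowledge) that scales a hypothetical server pool; the metrics source, thresholds, and actuator are illustrative assumptions only.

```python
# Toy MAPE-K control loop for autoscaling a hypothetical web-server pool.
import random

knowledge = {"replicas": 2, "target_cpu": 0.6}

def monitor():
    # Stand-in for reading CPU utilisation from a metrics system.
    return random.uniform(0.2, 0.95)

def analyze(cpu):
    if cpu > knowledge["target_cpu"] + 0.1:
        return "scale_out"
    if cpu < knowledge["target_cpu"] - 0.1:
        return "scale_in"
    return "steady"

def plan(decision):
    delta = {"scale_out": +1, "scale_in": -1, "steady": 0}[decision]
    return max(1, knowledge["replicas"] + delta)

def execute(replicas):
    knowledge["replicas"] = replicas    # stand-in for calling the resource manager

for tick in range(5):
    cpu = monitor()
    execute(plan(analyze(cpu)))
    print(f"tick {tick}: cpu={cpu:.2f} replicas={knowledge['replicas']}")
```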
Artificial Intelligence (AI) is rapidly being integrated into military Command and Control (C2) systems as a strategic priority for many defence forces. The successful implementation of AI promises to herald a significant leap in C2 agility through automation. However, realistic expectations need to be set on what AI can achieve in the foreseeable future. This paper argues that AI could lead to a fragility trap, whereby the delegation of C2 functions to an AI could increase the fragility of C2, resulting in catastrophic strategic failures. This calls for a new framework for AI in C2 that avoids this trap. We argue that antifragility, along with agility, should form the core design principles for AI-enabled C2 systems. This duality is termed Agile, Antifragile, AI-Enabled Command and Control (A3IC2). An A3IC2 system continuously improves its capacity to perform in the face of shocks and surprises through overcompensation from feedback during the C2 decision-making cycle. An A3IC2 system will not only survive within a complex operational environment, it will also thrive, benefiting from the inevitable shocks and volatility of war.
AI is undergoing a paradigm shift with the rise of models (e.g., BERT, DALL-E, GPT-3) that are trained on broad data at scale and are adaptable to a wide range of downstream tasks. We call these models foundation models to underscore their critically central yet incomplete character. This report provides a thorough account of the opportunities and risks of foundation models, ranging from their capabilities (e.g., language, vision, robotics, reasoning, human interaction) and technical principles (e.g., model architectures, training procedures, data, systems, security, evaluation, theory) to their applications (e.g., law, healthcare, education) and societal impact (e.g., inequity, misuse, economic and environmental impact, legal and ethical considerations). Though foundation models are based on standard deep learning and transfer learning, their scale results in new emergent capabilities, and their effectiveness across so many tasks incentivizes homogenization. Homogenization provides powerful leverage but demands caution, as the defects of the foundation model are inherited by all the adapted models downstream. Despite the impending widespread deployment of foundation models, we currently lack a clear understanding of how they work, when they fail, and what they are even capable of due to their emergent properties. To tackle these questions, we believe much of the critical research on foundation models will require deep interdisciplinary collaboration commensurate with their fundamentally sociotechnical nature.
Deep learning has penetrated all aspects of our lives and brought us great convenience. However, the process of building a high-quality deep learning system for a specific task is not only time-consuming but also requires considerable resources and relies on human expertise, which hinders the development of deep learning in both industry and academia. To alleviate this problem, a growing number of research projects focus on automated machine learning (AutoML). In this paper, we provide a comprehensive and up-to-date study of the state of the art in AutoML. First, we introduce AutoML techniques in detail according to the stages of the machine learning pipeline. Then we summarize existing Neural Architecture Search (NAS) research, which is one of the most popular topics in AutoML, and compare the models generated by NAS algorithms with human-designed models. Finally, we present several open problems for future research.
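As a minimal illustration of the NAS loop, the sketch below runs random search, a common NAS baseline, over a toy architecture space; the search space and the evaluation stub are hypothetical and stand in for training each candidate and measuring validation accuracy.

```python
# Random-search sketch over a toy architecture space (a stand-in for a NAS loop).
import random

SEARCH_SPACE = {
    "num_layers": [2, 4, 8],
    "width": [64, 128, 256],
    "activation": ["relu", "gelu"],
}

def sample_architecture():
    return {k: random.choice(v) for k, v in SEARCH_SPACE.items()}

def evaluate(arch):
    # Placeholder for "train the candidate and return validation accuracy".
    return random.random()

best_arch, best_score = None, float("-inf")
for _ in range(20):                       # evaluation budget
    arch = sample_architecture()
    score = evaluate(arch)
    if score > best_score:
        best_arch, best_score = arch, score
print(best_arch, round(best_score, 3))
```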
Nowadays, Convolutional Neural Networks (CNNs) have achieved impressive performance on many computer vision tasks, such as object detection, image recognition, and image retrieval. These achievements stem from CNNs' outstanding capability to learn input features through deep layers of neurons and an iterative training process. However, the learned features are hard to identify and interpret from a human vision perspective, leading to a limited understanding of CNNs' internal working mechanisms. To improve CNN interpretability, CNN visualization is widely used as a qualitative analysis method that translates internal features into visually perceptible patterns. Many CNN visualization works have been proposed in the literature to interpret CNNs in terms of network structure, operation, and semantic concept. In this paper, we provide a comprehensive survey of several representative CNN visualization methods, including Activation Maximization, Network Inversion, Deconvolutional Neural Networks (DeconvNet), and Network Dissection based visualization. These methods are presented in terms of their motivations, algorithms, and experimental results. Based on these visualization methods, we also discuss their practical applications to demonstrate the significance of CNN interpretability in areas such as network design, optimization, and security enhancement.
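As a pointer to how Activation Maximization works in practice, the sketch below performs gradient ascent on an input image to maximize one class logit of a pretrained torchvision ResNet-18; the regularizers and preprocessing that published visualization methods rely on are omitted, so this is a bare-bones illustration rather than any surveyed method's implementation.

```python
# Bare-bones Activation Maximization: gradient ascent on the input to raise one logit.
import torch
from torchvision.models import resnet18, ResNet18_Weights

model = resnet18(weights=ResNet18_Weights.DEFAULT).eval()
for p in model.parameters():
    p.requires_grad_(False)

target_class = 130                        # arbitrary ImageNet class index
img = torch.randn(1, 3, 224, 224, requires_grad=True)
optimizer = torch.optim.Adam([img], lr=0.05)

for step in range(200):
    optimizer.zero_grad()
    loss = -model(img)[0, target_class]   # ascend the target logit
    loss.backward()
    optimizer.step()

print("final logit:", model(img)[0, target_class].item())
```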