The arrival of deep learning techniques able to infer patterns from large datasets has dramatically improved the performance of Artificial Intelligence (AI) systems. However, the rapid development and adoption of deep learning, led in large part by large technology companies, has created concerns that the technological trajectory of AI research is narrowing prematurely around deep learning despite its weaknesses, which include lack of robustness, high environmental costs, and potentially unfair outcomes. We seek to improve the evidence base with a semantic analysis of AI research in arXiv, a popular preprint database. We study the evolution of the thematic diversity of AI research, compare the thematic diversity of AI research in academia and the private sector, and measure the influence of private companies in AI research through the citations they receive and their collaborations with other institutions. Our results suggest that diversity in AI research has stagnated in recent years, and that AI research involving the private sector tends to be less diverse and more influential than research in academia. We also find that private-sector AI researchers tend to specialise in data-hungry and computationally intensive deep learning methods at the expense of research involving other AI methods, research that considers the societal and ethical implications of AI, and applications in sectors such as health. Our results provide a rationale for policy action to prevent a premature narrowing of AI research that could constrain its societal benefits, but we note the informational, incentive, and scale hurdles standing in the way of such interventions.
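One common way to summarise thematic diversity (an illustration of the kind of metric such studies use, not necessarily this paper's measure) is the Shannon entropy of a corpus's topic distribution: the higher the entropy, the more evenly research activity is spread across topics.

    import numpy as np

    def shannon_diversity(topic_shares):
        """Shannon entropy of a topic distribution; higher means more thematically diverse."""
        p = np.asarray(topic_shares, dtype=float)
        p = p / p.sum()            # normalise to a probability distribution
        p = p[p > 0]               # empty topics contribute nothing (0 * log 0 = 0)
        return float(-(p * np.log(p)).sum())

    # Hypothetical topic shares for two corpora, e.g. academic vs. corporate AI papers
    print(shannon_diversity([0.25, 0.20, 0.20, 0.15, 0.10, 0.10]))
    print(shannon_diversity([0.60, 0.20, 0.10, 0.05, 0.05]))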
Interacting agents receive public information at no cost and flexibly acquire private information at a cost proportional to entropy reduction. When a policymaker provides more public information, agents acquire less private information, thus lowering information costs. Does more public information raise or reduce the uncertainty faced by agents? Is it beneficial or detrimental to welfare? To address these questions, we examine the impact of public information on flexible information acquisition in a linear-quadratic-Gaussian game with arbitrary quadratic material welfare. More public information raises uncertainty if and only if the game exhibits strategic complementarity, which can be harmful to welfare. However, when agents acquire a large amount of information, providing more public information increases welfare through a substantial reduction in the cost of information. We give a necessary and sufficient condition for welfare to increase with public information and identify the optimal public information disclosure, which is either full or partial disclosure depending on the welfare function and the slope of the best response.
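In the rational-inattention tradition, a cost "proportional to entropy reduction" is typically the expected reduction in entropy about the payoff-relevant state achieved by the agent's private signal, i.e. their mutual information scaled by a unit cost. A standard formulation (our paraphrase, not necessarily the paper's exact notation) is:

    \[
      C(s_i) \;=\; \lambda \bigl[ H(\theta) - \mathbb{E}\, H(\theta \mid s_i) \bigr]
             \;=\; \lambda \, I(\theta; s_i), \qquad \lambda > 0,
    \]

where $\theta$ is the unknown state, $s_i$ is agent $i$'s private signal, $H(\cdot)$ denotes (differential) entropy, and $\lambda$ is the marginal cost of information. When better public information already pins down $\theta$ more precisely, the residual entropy is smaller, so less private acquisition is needed and the cost term shrinks.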
In a sports competition, a team might lose a powerful incentive to exert full effort if its final rank does not depend on the outcomes of the matches still to be played. Therefore, the organiser should reduce the probability of such a situation as far as possible. Our paper provides a classification scheme to identify these weakly (where one team is indifferent) or strongly (where both teams are indifferent) stakeless games. A statistical model is estimated to simulate the UEFA Champions League groups and to compare the candidate schedules used in the 2021/22 season according to the competitiveness of the matches played in the last round(s). The option followed in four of the eight groups is found to be optimal under a wide set of parameters. We also verify that minimising the number of strongly stakeless matches is a likely goal of the computer draw of the fixture, a procedure that remains hidden from the public.
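To make the classification concrete, the sketch below (our illustration, with hypothetical inputs) labels a last-round match by checking, for each team, whether its final group rank can change with the match outcome; a team whose rank is the same under every outcome is indifferent.

    def classify_last_round_match(home_ranks, away_ranks):
        """home_ranks / away_ranks map each possible outcome of the match
        ('home_win', 'draw', 'away_win') to that team's resulting final group rank."""
        home_indifferent = len(set(home_ranks.values())) == 1
        away_indifferent = len(set(away_ranks.values())) == 1
        if home_indifferent and away_indifferent:
            return "strongly stakeless"   # both teams' final ranks are already fixed
        if home_indifferent or away_indifferent:
            return "weakly stakeless"     # exactly one team is indifferent
        return "competitive"

    # Example: the home team finishes third whatever happens; the away team can still move up.
    print(classify_last_round_match(
        {"home_win": 3, "draw": 3, "away_win": 3},
        {"home_win": 2, "draw": 2, "away_win": 1},
    ))  # -> weakly stakeless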
With the advent of open source software, a veritable treasure trove of previously proprietary software development data was made available. This opened the field of empirical software engineering research to anyone in academia. Data mined from software projects, however, requires extensive processing and needs to be handled with the utmost care to ensure valid conclusions. Since software development practices and tools have changed over the past two decades, we aim to understand current research workflows and to highlight potential challenges. We conduct a systematic literature review, sampling over one thousand papers from leading conferences and analyzing the 286 most relevant papers from the perspective of data workflows, methodologies, reproducibility, and tools. We found that dataset selection, an important part of the research workflow, was particularly problematic, which raises questions about the generality of the results in the existing literature. Furthermore, we found that a considerable number of papers provide little or no reproducibility information -- a substantial deficiency for a data-intensive field. In fact, 33% of papers provide no information on how their data was retrieved. Based on these findings, we propose ways to address these shortcomings via existing tools and provide recommendations to improve research workflows and the reproducibility of research.
Incorporating interdisciplinary perspectives is seen as an essential step towards enhancing artificial intelligence (AI) ethics. In this regard, the field of arts is perceived to play a key role in elucidating diverse historical and cultural narratives, serving as a bridge across research communities. Most works that examine the interplay between the arts and AI ethics concern digital artworks, largely exploring the potential of computational tools to surface biases in AI systems. In this paper, we investigate a complementary direction--that of uncovering the unique socio-cultural perspectives embedded in human-made art, which, in turn, can be valuable in expanding the horizon of AI ethics. Through qualitative interviews with sixteen artists, art scholars, and researchers of diverse Indian art forms such as music, sculpture, painting, floor drawings, and dance, we explore how {\it non-Western} ethical abstractions, methods of learning, and participatory practices observed in Indian arts, one of the most ancient yet enduring and influential art traditions, can inform the FAccT community. Insights from our study suggest (1) the need for incorporating holistic perspectives (informed both by data-driven observations and by prior beliefs encapsulating structural models of the world) in designing ethical AI algorithms, (2) the need for integrating multimodal data formats in the design, development, and evaluation of ethical AI systems, (3) the need for viewing AI ethics as a dynamic, cumulative, shared process rather than as a self-contained framework, so as to facilitate adaptability without annihilation of values, (4) the need for consistent life-long learning to enhance AI accountability, and (5) the need for identifying ethical commonalities across cultures and infusing them into AI system design, so as to enhance applicability across geographies.
Driven by the rapid development of artificial intelligence, which has achieved broad success in pattern recognition, robotics, and bioinformatics, neuroscience is also making tremendous progress. Spiking neural networks, a class of networks with biological interpretability, are gradually receiving wide attention and are regarded as one of the directions toward general artificial intelligence. This review covers the biological background and theoretical basis of spiking neurons, different neuronal models, the connectivity of neural circuits, and the mainstream learning mechanisms and network architectures for spiking neural networks. We hope this review attracts researchers from different fields and advances the development of brain-inspired intelligence and artificial intelligence.
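As an example of the neuronal models such reviews discuss, a leaky integrate-and-fire (LIF) neuron can be simulated in a few lines; the parameter values below are purely illustrative.

    import numpy as np

    def simulate_lif(input_current, dt=1.0, tau=20.0, v_rest=0.0,
                     v_reset=0.0, v_threshold=1.0):
        """Leaky integrate-and-fire: the membrane potential leaks toward rest,
        integrates the input current, and emits a spike (then resets) at threshold."""
        v = v_rest
        spikes = []
        for current in input_current:
            v += dt / tau * (-(v - v_rest) + current)   # leaky integration step
            if v >= v_threshold:
                spikes.append(1)
                v = v_reset                              # reset after spiking
            else:
                spikes.append(0)
        return np.array(spikes)

    spikes = simulate_lif(np.full(200, 1.5))             # constant supra-threshold drive
    print(int(spikes.sum()), "spikes in 200 time steps")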
In humans, attention is a core property of all perceptual and cognitive operations. Given our limited capacity to process competing sources of information, attentional mechanisms select, modulate, and focus on the information most relevant to behavior. For decades, the concepts and functions of attention have been studied in philosophy, psychology, neuroscience, and computing. Over the last six years, this property has been widely explored in deep neural networks, and the current state of the art in deep learning is represented by neural attention models in several application domains. This survey provides a comprehensive overview and analysis of developments in neural attention models. We systematically reviewed hundreds of architectures in the area, identifying and discussing those in which attention has shown a significant impact. We also developed and made public an automated methodology to facilitate the development of reviews in the area. By critically analyzing 650 works, we describe the primary uses of attention in convolutional networks, recurrent networks, and generative models, identifying common subgroups of uses and applications. Furthermore, we describe the impact of attention in different application domains and its effect on the interpretability of neural networks. Finally, we list possible trends and opportunities for further research, hoping that this review will provide a succinct overview of the main attentional models in the area and guide researchers in developing future approaches that will drive further improvements.
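For reference, the operation at the heart of most of the attention models such a survey covers is scaled dot-product attention; a minimal NumPy sketch (ours, not taken from the survey) is:

    import numpy as np

    def scaled_dot_product_attention(Q, K, V):
        """Q: (n_q, d), K: (n_k, d), V: (n_k, d_v) -> (n_q, d_v)."""
        d = Q.shape[-1]
        scores = Q @ K.T / np.sqrt(d)                        # query-key similarity
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)       # softmax over keys
        return weights @ V                                   # weighted sum of values

    rng = np.random.default_rng(0)
    Q, K, V = rng.normal(size=(4, 8)), rng.normal(size=(6, 8)), rng.normal(size=(6, 8))
    print(scaled_dot_product_attention(Q, K, V).shape)       # (4, 8)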
Reinforcement learning is one of the core components in designing an artificially intelligent system that emphasizes real-time response. Reinforcement learning drives a system to take actions in an arbitrary environment, with or without prior knowledge of the environment model. In this paper, we present a comprehensive study of reinforcement learning covering its challenges, recent developments in state-of-the-art techniques, and future directions. The fundamental objective of this paper is to present the available reinforcement learning methods in a way that is informative and simple to follow for new researchers and academics in this domain, while reflecting the latest concerns. First, we illustrate the core techniques of reinforcement learning in an easily understandable and comparable way. We then analyze and depict recent developments in reinforcement learning approaches. Our analysis indicates that most models focus on tuning policy values rather than on other aspects of a particular reasoning state.
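As a concrete anchor for the discussion of tuning policy values, the tabular Q-learning update, one of the core techniques such a study covers, can be written in a few lines (illustrative code, not from the paper):

    import numpy as np

    def q_learning_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.99):
        """One temporal-difference update of the action-value table Q."""
        td_target = r + gamma * Q[s_next].max()          # bootstrap from the next state
        Q[s, a] += alpha * (td_target - Q[s, a])         # move Q(s, a) toward the target
        return Q

    Q = np.zeros((5, 2))                                 # 5 states, 2 actions
    Q = q_learning_update(Q, s=0, a=1, r=1.0, s_next=2)
    print(Q[0])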
Deep learning has penetrated all aspects of our lives and brought us great convenience. However, the process of building a high-quality deep learning system for a specific task is not only time-consuming but also requires considerable resources and human expertise, which hinders the development of deep learning in both industry and academia. To alleviate this problem, a growing number of research projects focus on automated machine learning (AutoML). In this paper, we provide a comprehensive and up-to-date study of state-of-the-art AutoML. First, we introduce the AutoML techniques in detail according to the machine learning pipeline. Then we summarize existing Neural Architecture Search (NAS) research, one of the most popular topics in AutoML, and compare the models generated by NAS algorithms with human-designed models. Finally, we present several open problems for future research.
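As a minimal illustration of the NAS idea (a hedged sketch with a hypothetical search space and a toy scoring function, not a method from the paper), random search over architecture configurations already captures the sample-then-evaluate loop that most NAS algorithms refine:

    import random

    SEARCH_SPACE = {
        "num_layers":  [2, 4, 8],
        "hidden_size": [64, 128, 256],
        "activation":  ["relu", "gelu", "tanh"],
    }

    def sample_architecture():
        return {k: random.choice(v) for k, v in SEARCH_SPACE.items()}

    def evaluate(arch):
        """Placeholder for training the candidate and returning validation accuracy;
        a toy score is used here so the sketch runs end to end."""
        return arch["num_layers"] * 0.01 + arch["hidden_size"] * 0.001 + random.random()

    best = max((sample_architecture() for _ in range(20)), key=evaluate)
    print("best sampled architecture:", best)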
Many natural language processing tasks rely solely on sparse dependencies between a few tokens in a sentence. Soft attention mechanisms show promising performance in modeling local/global dependencies via soft probabilities between every pair of tokens, but they are neither effective nor efficient when applied to long sentences. By contrast, hard attention mechanisms directly select a subset of tokens but are difficult and inefficient to train due to their combinatorial nature. In this paper, we integrate both soft and hard attention into one context fusion model, "reinforced self-attention (ReSA)", so that each benefits the other. In ReSA, a hard attention trims a sequence for a soft self-attention to process, while the soft attention feeds reward signals back to facilitate the training of the hard one. For this purpose, we develop a novel hard attention mechanism called "reinforced sequence sampling (RSS)", which selects tokens in parallel and is trained via policy gradient. Using two RSS modules, ReSA efficiently extracts the sparse dependencies between each pair of selected tokens. We finally propose an RNN/CNN-free sentence-encoding model, "reinforced self-attention network (ReSAN)", based solely on ReSA. It achieves state-of-the-art performance on both the Stanford Natural Language Inference (SNLI) and Sentences Involving Compositional Knowledge (SICK) datasets.
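To illustrate the division of labour the abstract describes (a sketch of the general idea, not the authors' implementation), the code below applies a hard binary mask to select tokens and then runs soft self-attention only among the selected ones; in ReSA the mask would come from the RSS module trained with policy gradients.

    import numpy as np

    def soft_self_attention(X):
        """Standard softmax self-attention over the rows of X: (n, d) -> (n, d)."""
        scores = X @ X.T / np.sqrt(X.shape[-1])
        w = np.exp(scores - scores.max(axis=-1, keepdims=True))
        w /= w.sum(axis=-1, keepdims=True)
        return w @ X

    def hard_then_soft_attention(X, keep_mask):
        """keep_mask: boolean vector choosing which tokens the soft attention sees;
        unselected tokens are passed through unchanged."""
        out = X.copy()
        selected = np.where(keep_mask)[0]
        if selected.size > 0:
            out[selected] = soft_self_attention(X[selected])
        return out

    rng = np.random.default_rng(0)
    X = rng.normal(size=(10, 16))                 # 10 tokens, 16-dim embeddings
    mask = rng.random(10) < 0.5                   # stand-in for the learned RSS selection
    print(hard_then_soft_attention(X, mask).shape)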