This article focuses on the application of artificial intelligence (AI) to non-orthogonal multiple access (NOMA), aiming to achieve automated, adaptive, and highly efficient multi-user communications towards next-generation multiple access (NGMA). First, the limitations of current scenario-specific multi-antenna NOMA schemes are discussed, and the importance of AI for NGMA is highlighted. Then, to achieve the vision of NGMA, a novel cluster-free NOMA framework is proposed to provide scenario-adaptive NOMA communications, and several promising machine learning solutions are identified. To elaborate further, novel centralized and distributed machine learning paradigms are conceived for efficiently employing the proposed cluster-free NOMA framework in single-cell and multi-cell networks, and numerical results are provided to demonstrate their effectiveness. Furthermore, the interplay between the proposed cluster-free NOMA framework and emerging wireless techniques is presented. Finally, several open research issues of AI-enabled NGMA are discussed.
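To ground the discussion, the following toy sketch computes the achievable rates of the conventional cluster-based two-user downlink NOMA baseline with successive interference cancellation (SIC), whose scenario-specific limitations motivate the cluster-free framework. All channel gains and power levels are hypothetical, and this is a textbook illustration, not the article's proposed scheme.

```python
import numpy as np

# Textbook two-user downlink NOMA with SIC (hypothetical gains and powers):
# user 1 is the weak user (allocated more power), user 2 the strong user.
g1, g2 = 0.3, 2.0          # channel gains (weak, strong)
a1, a2 = 0.8, 0.2          # power-allocation coefficients, a1 + a2 = 1
P, sigma2 = 10.0, 1.0      # transmit power and noise power

# The weak user decodes its own signal, treating the strong user's as noise.
R1 = np.log2(1 + a1 * P * g1 / (a2 * P * g1 + sigma2))
# The strong user first cancels the weak user's signal via SIC, then decodes.
R2 = np.log2(1 + a2 * P * g2 / sigma2)
print(f"R1 = {R1:.2f}, R2 = {R2:.2f} bits/s/Hz")
```

The cluster-free framework, by contrast, avoids committing users to such fixed clusters and rigid SIC decoding orders.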
Multilevel regression and poststratification (MRP) has become a popular approach for selection bias adjustment in subgroup estimation, with widespread applications from the social sciences to public health. We examine the finite population inferential validity of MRP in connection with poststratification and model specification. The success of MRP depends prominently on the availability of auxiliary information strongly related to the outcome. To improve the fit of the outcome model, we recommend modeling inclusion mechanisms conditional on auxiliary variables and adding flexible functions of the estimated inclusion probabilities as predictors in the mean structure. We present a framework for statistical data integration and robust inference for probability and nonprobability surveys, providing solutions to various challenges in practical applications. Our simulation studies indicate the statistical validity of MRP, with a tradeoff between bias and variance; the improvement over alternative methods is concentrated in subgroup estimates with small sample sizes. Our development is motivated by the Adolescent Brain Cognitive Development (ABCD) Study, which has collected children's information across 21 U.S. geographic locations for national representation but is subject to selection bias as a nonprobability sample. We apply the methods for population inference to evaluate the cognition measure of diverse groups of children in the ABCD Study and demonstrate that the use of auxiliary variables affects the inferential findings.
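As a concrete illustration of the recommended recipe, the toy sketch below estimates inclusion probabilities from an auxiliary variable, feeds a flexible function of them (here simply a log transform) into the outcome mean structure, and poststratifies the cell predictions to population counts. All data are synthetic, and for brevity the multilevel (partially pooled) regression at the heart of MRP is replaced by ordinary regressions.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

# Synthetic population with cells defined by one auxiliary variable X;
# inclusion in the nonprobability sample depends on X (selection bias).
rng = np.random.default_rng(0)
N_pop = 100_000
X_pop = rng.integers(0, 4, size=N_pop)              # 4 poststratification cells
incl = 1 / (1 + np.exp(-(X_pop - 1.5)))             # inclusion depends on X
sampled = rng.random(N_pop) < 0.02 * incl
X, y = X_pop[sampled], 2.0 * X_pop[sampled] + rng.normal(0, 1, sampled.sum())

# Step 1: model the inclusion mechanism conditional on the auxiliary variable.
prop = LogisticRegression().fit(X_pop.reshape(-1, 1), sampled.astype(int))
pi_hat = prop.predict_proba(X.reshape(-1, 1))[:, 1]

# Step 2: add a flexible function of the estimated inclusion probability
# (here log(pi_hat)) as a predictor in the outcome mean structure.
features = np.column_stack([X, np.log(pi_hat)])
outcome = LinearRegression().fit(features, y)

# Step 3: poststratify -- predict each cell mean and weight by census counts.
cells = np.arange(4)
pi_cell = prop.predict_proba(cells.reshape(-1, 1))[:, 1]
cell_pred = outcome.predict(np.column_stack([cells, np.log(pi_cell)]))
N_cell = np.bincount(X_pop, minlength=4)
mrp_estimate = np.sum(N_cell * cell_pred) / N_pop
print(f"MRP-style estimate: {mrp_estimate:.3f} (true mean = {2.0 * X_pop.mean():.3f})")
```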
Rate-splitting multiple access (RSMA) has emerged as a novel, general, and powerful framework for the design and optimization of non-orthogonal transmission, multiple access (MA), and interference management strategies for future wireless networks. Through information- and communication-theoretic analysis, RSMA has been shown to be optimal (from a Degrees-of-Freedom region perspective) in several transmission scenarios. Compared to the conventional MA strategies used in 5G, RSMA enables spectral efficiency (SE), energy efficiency (EE), coverage, user fairness, reliability, and quality of service (QoS) enhancements for a wide range of network loads (including both underloaded and overloaded regimes) and user channel conditions. Furthermore, it enjoys higher robustness against imperfect channel state information at the transmitter (CSIT) and entails lower feedback overhead and complexity. Despite its great potential to fundamentally change the physical (PHY) layer and media access control (MAC) layer of wireless communication networks, RSMA still faces many challenges on the road towards standardization. In this paper, we present the first comprehensive overview of RSMA by surveying the pertinent state-of-the-art research, detailing its architecture, taxonomy, and various appealing applications, and comparing it with existing MA schemes in terms of their overall frameworks, performance, and complexity. An in-depth discussion of future RSMA research challenges is also provided to inspire future research on RSMA-aided wireless communication for beyond-5G systems.
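As a worked illustration of the core idea, the sketch below computes the achievable rates of 1-layer rate splitting for a two-user MISO downlink: each message is split into a common part, decoded by all users, and a private part, decoded only by its intended user after cancelling the common stream. The precoders here are simple heuristic choices for illustration; in practice they would be optimized.

```python
import numpy as np

def rsma_rates(h, p_c, p_priv, sigma2=1.0):
    """Achievable rates of 1-layer rate splitting.

    Each user first decodes the common stream, treating all private
    streams as noise, then removes it (SIC) and decodes its own private
    stream, treating the other private streams as noise.
    """
    K = h.shape[0]
    # Common-stream SINR at each user; the common rate is the minimum.
    sinr_c = [abs(h[k] @ p_c) ** 2 /
              (sum(abs(h[k] @ p_priv[j]) ** 2 for j in range(K)) + sigma2)
              for k in range(K)]
    R_c = min(np.log2(1 + s) for s in sinr_c)
    # Private-stream rates after the common stream has been cancelled.
    R_p = [np.log2(1 + abs(h[k] @ p_priv[k]) ** 2 /
                   (sum(abs(h[k] @ p_priv[j]) ** 2
                        for j in range(K) if j != k) + sigma2))
           for k in range(K)]
    return R_c, R_p

rng = np.random.default_rng(1)
h = rng.normal(size=(2, 4)) + 1j * rng.normal(size=(2, 4))   # 2 users, 4 antennas
p_priv = [hk.conj() / np.linalg.norm(hk) for hk in h]        # MRT private precoders
p_c = (h[0] + h[1]).conj()
p_c /= np.linalg.norm(p_c)                                   # heuristic common precoder
R_c, R_p = rsma_rates(h, p_c, p_priv)
print(f"common rate {R_c:.2f}, private rates {R_p[0]:.2f}, {R_p[1]:.2f} bits/s/Hz")
```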
Rate Splitting Multiple Access (RSMA) has emerged as an effective interference management scheme for applications that require high data rates. Although RSMA has shown advantages in rate enhancement and spectral efficiency, it is not yet ready for latency-sensitive applications such as virtual reality streaming, an essential building block of future 6G networks. Unlike conventional High-Definition streaming, virtual reality streaming imposes not only stringent latency requirements but also computational demands on the transmitter, which must respond quickly to users' dynamic demands. Thus, conventional RSMA approaches usually fail to address the challenges caused by computational demands at the transmitter, let alone the dynamic nature of virtual reality streaming applications. To overcome these challenges, we first formulate RSMA-assisted virtual reality streaming as a joint communication and computation optimization problem. A novel multicast approach is then proposed to cluster users into different groups based on a Field-of-View metric and transmit multicast streams in a hierarchical manner, as sketched below. After that, we propose a deep reinforcement learning approach to solve the optimization problem. Extensive simulations show that our framework can meet millisecond-level latency requirements, achieving much lower latency than baseline schemes.
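The following toy sketch illustrates one plausible reading of the Field-of-View-based grouping step: users whose current viewports (sets of video tiles) overlap sufficiently are served by a shared multicast stream. The metric (Jaccard overlap), threshold, and greedy assignment here are illustrative assumptions, not the paper's exact algorithm.

```python
def fov_clusters(viewports, overlap_thresh=0.5):
    """Greedy FoV-based clustering (illustrative). viewports[i] is the set
    of tile indices user i currently watches; a user joins the first
    cluster whose head's viewport overlaps its own above the threshold."""
    clusters = []                        # list of (head_user, member_list)
    for u, tiles in enumerate(viewports):
        for head, members in clusters:
            inter = len(tiles & viewports[head])
            union = len(tiles | viewports[head])
            if union and inter / union >= overlap_thresh:
                members.append(u)        # share the head's multicast stream
                break
        else:
            clusters.append((u, [u]))    # start a new cluster
    return clusters

# Hypothetical example: four users and the tile sets of their Fields of View.
viewports = [{1, 2, 3, 4}, {2, 3, 4, 5}, {10, 11, 12}, {3, 4, 5, 6}]
print(fov_clusters(viewports))   # -> [(0, [0, 1]), (2, [2]), (3, [3])]
```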
Distributed data analytics platforms (e.g., Apache Spark, Hadoop) enable cost-effective storage and processing by distributing data and computation across multiple nodes. Since these frameworks were designed primarily for performance and usability, most of them were assumed to operate in non-malicious settings. Hence, they allow users to execute arbitrary code to analyze the data. To make matters worse, they neither support fine-grained access control inherently nor offer any plugin mechanism to enable it, which makes them risky to use in multi-tier organizational settings. There have been attempts to build "add-on" solutions to enable fine-grained access control for distributed data analytics platforms. In this paper, we show that an attacker who knows the nature of such a solution can evade its access control by maliciously using the platform-provided APIs. Specifically, we crafted several attack vectors to evade such solutions. Next, we systematically analyze the threats and potentially risky APIs and propose a two-layered (i.e., proactive and reactive) defense to protect against those attacks. Our proactive security layer utilizes state-of-the-art program analysis to detect potentially malicious user code. The reactive security layer consists of binary integrity checking, instrumentation-based runtime checks, and sandboxed execution. Finally, using this solution, we provide a secure implementation of SecureDL, a new framework-agnostic, fine-grained, attribute-based access control framework for Apache Spark. To the best of our knowledge, this is the first work that provides secure fine-grained attribute-based access control for distributed data analytics platforms that allow arbitrary code execution. Performance evaluation showed that the overhead due to the added security is low.
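To make the evasion pattern concrete, the hypothetical PySpark snippet below shows how user code can sidestep an add-on access-control layer that mediates the DataFrame API: by dropping to the underlying RDD, arbitrary code runs per record and can read fields the policy was supposed to hide. The dataset path is illustrative; the API calls are standard PySpark.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("evasion-demo").getOrCreate()
df = spark.read.parquet("/data/patients")   # illustrative dataset path

# An add-on layer might intercept high-level calls such as df.select(...),
# but user code reaching the RDD API executes arbitrary logic per record:
leaked = df.rdd.map(lambda row: row.asDict()).collect()   # full rows, unfiltered
```

A proactive defense of the kind described above would use program analysis to flag such raw-RDD access and arbitrary lambdas before the job is admitted.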
With the rapid development of the artificial intelligence (AI) community, education in AI is receiving more and more attention. There have been many AI-related courses on algorithms and applications, while few courses at the system level are seriously taken into consideration. In order to bridge the gap between AI and computing systems, we explore how to conduct AI education from the perspective of computing systems. In this paper, a course practice in intelligent computing architectures is presented to demonstrate systems education in the AI era. The motivation for this course practice is first introduced, as well as its learning objectives. The main goal of the course is to teach students to design AI accelerators on FPGA platforms. The course contents include lecture notes and related technical materials; in particular, several practical labs and projects are illustrated in detail. Finally, teaching experiences and outcomes are discussed, along with potential future improvements.
Meta-learning, or learning to learn, has gained renewed interest in recent years within the artificial intelligence community. However, meta-learning is remarkably prevalent in nature, has deep roots in cognitive science and psychology, and is currently studied in various forms within neuroscience. The aim of this review is to recast previous lines of research in the study of biological intelligence through the lens of meta-learning, placing these works into a common framework. More recent points of interaction between AI and neuroscience are discussed, as well as interesting new directions that arise under this perspective.
Edge intelligence refers to a set of connected systems and devices that use artificial intelligence to collect, cache, process, and analyse data in locations close to where the data is captured. The aim of edge intelligence is to enhance the quality and speed of data processing and to protect the privacy and security of the data. Although it emerged only recently, around 2011, this field of research has shown explosive growth over the past five years. In this paper, we present a thorough and comprehensive survey of the literature surrounding edge intelligence. We first identify four fundamental components of edge intelligence, namely edge caching, edge training, edge inference, and edge offloading, based on theoretical and practical results pertaining to proposed and deployed systems. We then aim for a systematic classification of the state of existing solutions by examining research results and observations for each of the four components, and present a taxonomy that includes practical problems, adopted techniques, and application goals. For each category, we elaborate on, compare, and analyse the literature from the perspectives of adopted techniques, objectives, performance, advantages and drawbacks, etc. This survey article provides a comprehensive introduction to edge intelligence and its application areas. In addition, we summarise the development of this emerging research field and the current state of the art, and discuss important open issues and possible theoretical and technical solutions.
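For the edge offloading component, the classic decision rule can be stated in a few lines: offload a task when transmission time plus edge computation time beats local computation time. The sketch below is a textbook formulation with hypothetical numbers, not a specific scheme from the surveyed literature.

```python
def should_offload(task_cycles, data_bits, f_local, f_edge, uplink_bps):
    """Latency-based offloading rule (textbook sketch): offload when
    transmit time plus edge compute time beats local compute time."""
    t_local = task_cycles / f_local
    t_edge = data_bits / uplink_bps + task_cycles / f_edge
    return t_edge < t_local

# Hypothetical numbers: 1 Gcycle task, 2 MB input, 1 GHz device CPU,
# 10 GHz edge server, 50 Mbit/s uplink.
print(should_offload(1e9, 2 * 8e6, 1e9, 10e9, 50e6))   # True: 0.42 s < 1.0 s
```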
Interest in the field of Explainable Artificial Intelligence has been growing for decades and has accelerated recently. As Artificial Intelligence models have become more complex, and often more opaque, with the incorporation of complex machine learning techniques, explainability has become more critical. Recently, researchers have been investigating and tackling explainability with a user-centric focus, seeking explanations that account for trustworthiness, comprehensibility, explicit provenance, and context-awareness. In this chapter, we leverage our survey of explanation literature in Artificial Intelligence and closely related fields and use these past efforts to generate a set of explanation types that we feel reflect the expanded needs of explanation for today's artificial intelligence applications. We define each type and provide an example question that would motivate the need for this style of explanation. We believe this set of explanation types will help future system designers in their generation and prioritization of requirements and further help generate explanations that are better aligned to users' and situational needs.
In recent years, Artificial Intelligence (AI) has gained notable momentum and may deliver on high expectations across many application sectors. For this to occur, the entire community faces the barrier of explainability, an inherent problem of AI techniques brought by sub-symbolism (e.g., ensembles or Deep Neural Networks) that was not present in the previous wave of AI. Paradigms addressing this problem fall within the so-called eXplainable AI (XAI) field, which is acknowledged as a crucial feature for the practical deployment of AI models. This overview examines the existing literature in the field of XAI, including a prospective outlook on what remains to be achieved. We summarize previous efforts to define explainability in Machine Learning, establishing a novel definition that covers prior conceptual propositions with a major focus on the audience for which explainability is sought. We then propose and discuss a taxonomy of recent contributions related to the explainability of different Machine Learning models, including a second taxonomy devoted to Deep Learning methods. This literature analysis serves as the background for a series of challenges faced by XAI, such as the crossroads between data fusion and explainability. Our prospects lead toward the concept of Responsible Artificial Intelligence, namely, a methodology for the large-scale implementation of AI methods in real organizations with fairness, model explainability, and accountability at its core. Our ultimate goal is to provide newcomers to XAI with reference material to stimulate future research advances, but also to encourage experts and professionals from other disciplines to embrace the benefits of AI in their activity sectors, without prior bias stemming from its lack of interpretability.
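As one concrete instance of the post-hoc, model-agnostic techniques such a taxonomy covers, the sketch below applies permutation feature importance to an otherwise opaque ensemble model using scikit-learn; the dataset and model choices are illustrative.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)  # opaque model

# Shuffle one feature at a time and measure the accuracy drop: features
# whose permutation hurts the most are the ones the model relies on.
result = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
for i in result.importances_mean.argsort()[::-1][:5]:
    print(f"{X.columns[i]}: {result.importances_mean[i]:.3f}")
```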
In recent years, mobile devices have developed rapidly, with stronger computation capabilities and larger storage. Some computation-intensive machine learning and deep learning tasks can now run on mobile devices. To take advantage of the resources available on mobile devices and preserve users' privacy, the idea of mobile distributed machine learning has been proposed. It uses local hardware resources and local data to solve machine learning sub-problems on mobile devices, and uploads only computation results, rather than the original data, to contribute to the optimization of the global model. This architecture not only relieves the computation and storage burden on servers, but also protects users' sensitive information. Another benefit is bandwidth reduction, as various kinds of local data can participate in the training process without being uploaded to the server. In this paper, we provide a comprehensive survey of recent studies of mobile distributed machine learning. We survey a number of widely used mobile distributed machine learning methods and present an in-depth discussion of the challenges and future directions in this area. We believe this survey provides a clear overview of mobile distributed machine learning and offers guidelines for applying it to real applications.
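The architecture described above is exemplified by federated-averaging-style training. In the toy sketch below, each simulated device runs a few local gradient steps on its private data and uploads only the resulting weights, which the server aggregates by a weighted average; the data, model, and hyperparameters are synthetic illustrations.

```python
import numpy as np

def local_update(w, X, y, lr=0.1, epochs=5):
    """One device's contribution: a few gradient steps of linear regression
    on its private data. Only the updated weights leave the device."""
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

rng = np.random.default_rng(0)
w_true = np.array([1.5, -2.0])
devices = []
for _ in range(10):                      # 10 simulated mobile devices
    X = rng.normal(size=(50, 2))
    devices.append((X, X @ w_true + rng.normal(0, 0.1, 50)))

w_global = np.zeros(2)
for _ in range(20):                      # communication rounds
    local_ws = [local_update(w_global.copy(), X, y) for X, y in devices]
    sizes = [len(y) for _, y in devices]
    w_global = np.average(local_ws, axis=0, weights=sizes)  # server-side averaging
print(w_global)   # approaches w_true; raw data never left the devices
```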