Recent developments in the Internet of Things (IoT) and real-time applications have led to unprecedented growth in connected devices and the data they generate. Traditionally, this sensor data is transferred to and processed in the cloud, and control signals are sent back to the relevant actuators as part of IoT applications. This cloud-centric IoT model has resulted in increased latencies, higher network load, and compromised privacy. To address these problems, the term Fog Computing was coined by Cisco in 2012, a decade ago; it utilizes proximal computational resources for processing sensor data. Since its proposal, fog computing has attracted significant attention, and the research community has focused on addressing challenges such as fog frameworks, simulators, resource management, placement strategies, quality-of-service aspects, and fog economics. However, after a decade of research, we still do not see large-scale deployments of public/private fog networks that can be utilized to realize interesting IoT applications. In the literature, we only see pilot case studies, small-scale testbeds, and the use of simulators to demonstrate the scalability of proposed models addressing the respective technical challenges. There are several reasons for this; most importantly, fog computing has not yet presented a clear business case for companies and participating individuals. This paper summarizes the technical, non-functional, and economic challenges that have hindered the adoption of fog computing, consolidating them into distinct clusters. The paper also summarizes the relevant academic and industrial contributions addressing these challenges and provides future research directions for realizing real-time fog computing applications, also considering emerging trends such as federated learning and quantum computing.
Recent advancements in deep learning and computer vision have led to a surge of interest in generating realistic talking heads. This paper presents a comprehensive survey of state-of-the-art methods for talking head generation. We systematically categorise these methods into four main approaches: image-driven, audio-driven, video-driven, and others (including neural radiance field (NeRF) and 3D-based methods). We provide an in-depth analysis of each method, highlighting their unique contributions, strengths, and limitations. Furthermore, we thoroughly compare publicly available models, evaluating them on key aspects such as inference time and human-rated quality of the generated outputs. Our aim is to provide a clear and concise overview of the current landscape in talking head generation, elucidating the relationships between different approaches and identifying promising directions for future research. This survey will serve as a valuable reference for researchers and practitioners interested in this rapidly evolving field.
With the rapid advancement of the Internet of Things (IoT) and mobile communication technologies, a multitude of devices are becoming interconnected, marking the onset of an era where all things are connected. While this growth opens opportunities for novel products and applications, it also leads to increased energy demand and battery reliance in IoT devices, creating a significant bottleneck that hinders sustainable progress. Traditional energy harvesting (EH) techniques, although promising, face limitations such as insufficient efficiency, high costs, and practical constraints that impede widespread adoption. Backscatter communication (BackCom), a low-power and passive method, emerges as a promising solution to these challenges, directly addressing the stranded-energy impasse by reducing manufacturing costs and energy consumption in IoT devices. We perform an in-depth analysis of three primary BackCom architectures: monostatic, bistatic, and ambient BackCom. In our exploration, we identify fundamental challenges, such as complex interference environments (including direct-link interference and mutual interference among tags) and double-path fading, which contribute to suboptimal performance in BackCom systems. This review aims to furnish a comprehensive examination of existing solutions designed to combat complex interference environments and double-path fading, offering insightful analysis and comparison to select effective strategies to address these challenges. We also delve into emerging trends and challenges in BackCom, forecasting potential paths for technological advancement and providing insights into navigating the intricate landscape of future communication needs. Our work provides researchers and engineers with a clear and comprehensive perspective, enabling them to better understand and creatively tackle the ongoing challenges in BackCom systems.
Graphs represent interconnected structures prevalent in a myriad of real-world scenarios. Effective graph analytics, such as graph learning methods, enables users to gain profound insights from graph data, underpinning various tasks including node classification and link prediction. However, these methods often suffer from data imbalance, a common issue in graph data where certain segments possess abundant data while others are scarce, thereby leading to biased learning outcomes. This necessitates the emerging field of imbalanced learning on graphs, which aims to correct these data distribution skews for more accurate and representative learning outcomes. In this survey, we embark on a comprehensive review of the literature on imbalanced learning on graphs. We begin by providing a definitive understanding of the concept and related terminology, establishing a strong foundation for readers. Following this, we propose two comprehensive taxonomies: (1) the problem taxonomy, which describes the forms of imbalance we consider, the associated tasks, and potential solutions; (2) the technique taxonomy, which details key strategies for addressing these imbalances and aids readers in their method selection process. Finally, we suggest prospective future directions for both problems and techniques within the sphere of imbalanced learning on graphs, fostering further innovation in this critical area.
Blockchain has become a popular decentralized paradigm for various applications in zero-trust environments. The core of a blockchain is its consensus protocol, which establishes consensus among all participants. PoW (Proof-of-Work) is one of the most popular consensus protocols. However, the PoW consensus protocol, which incentivizes participants to use their computing power to solve a meaningless hash puzzle, is continually criticized as energy-wasting. To address this issue, we propose an efficient and secure consensus protocol based on proof of useful federated learning for blockchain (called FedChain). We first propose a secure and robust blockchain architecture that takes federated learning tasks as proof of work. A pool aggregation mechanism is then integrated to improve the efficiency of the FedChain architecture. To protect the model parameter privacy of each participant within a mining pool, a secret-sharing-based ring-allreduce architecture is designed. We also introduce a data-distribution-based federated learning model optimization algorithm to improve the model performance of FedChain. Finally, a zero-knowledge-proof-based federated learning model verification is introduced to preserve the privacy of federated learning participants while proving their model performance. Our approach has been tested and validated through extensive experiments, demonstrating its effectiveness.
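The secret-sharing-based ring-allreduce mentioned above is only named at the abstract level; as a rough, hypothetical illustration of the underlying idea (our own toy simplification in Python, not FedChain's actual design; the function names `make_shares` and `aggregate` are invented for this sketch), the snippet below shows how additive secret sharing lets a pool aggregate model updates without any single party seeing another participant's raw parameters.

```python
import numpy as np

# Toy additive secret sharing for private aggregation of model updates.
# Each participant splits its parameter vector into n shares that sum back
# to the original vector, so no single share reveals the raw parameters.

def make_shares(params, n_parties, rng):
    """Split `params` into n_parties additive shares (share_1 + ... + share_n == params)."""
    shares = [rng.normal(size=params.shape) for _ in range(n_parties - 1)]
    shares.append(params - sum(shares))
    return shares

def aggregate(all_shares):
    """Each party sums the shares it received; summing those partial sums
    recovers the exact sum of all participants' parameters."""
    n_parties = len(all_shares)
    partial_sums = [sum(all_shares[p][q] for p in range(n_parties)) for q in range(n_parties)]
    return sum(partial_sums)

rng = np.random.default_rng(0)
local_updates = [rng.normal(size=4) for _ in range(3)]              # three pool members
shares = [make_shares(u, n_parties=3, rng=rng) for u in local_updates]
aggregated = aggregate(shares)
assert np.allclose(aggregated, sum(local_updates))                  # matches plain summation
```

The aggregate equals the plain sum of all updates, while each party only ever handles random-looking shares; a real deployment would additionally need secure channels and a ring-ordered exchange of shares.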
Multi-Sensor Fusion (MSF) based perception systems have been foundational in supporting many industrial applications and domains, such as self-driving cars, robotic arms, and unmanned aerial vehicles. Over the past few years, the fast progress in data-driven artificial intelligence (AI) has brought a fast-growing trend of empowering MSF systems with deep learning techniques to further improve their performance, especially in intelligent systems and their perception components. Although quite a few AI-enabled MSF perception systems and techniques have been proposed, few benchmarks focusing on MSF perception are publicly available to date. Given that many intelligent systems, such as self-driving cars, operate in safety-critical contexts where perception systems play an important role, there is an urgent need for a more in-depth understanding of the performance and reliability of these MSF systems. To bridge this gap, we take an early step in this direction and construct a public benchmark of AI-enabled MSF-based perception systems covering three commonly adopted tasks (i.e., object detection, object tracking, and depth completion). Based on this, to comprehensively understand the robustness and reliability of MSF systems, we design 14 common and realistic corruption patterns to synthesize large-scale corrupted datasets, and then systematically evaluate these systems at scale. Our results reveal the vulnerability of current AI-enabled MSF perception systems, calling for researchers and practitioners to take robustness and reliability into account when designing AI-enabled MSF.
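As a rough, hypothetical illustration of what one such corruption pattern might look like (the benchmark's actual 14 patterns and their severity settings are not reproduced here), the short sketch below adds Gaussian noise to a stand-in camera frame, a common way to synthesize corrupted inputs for robustness testing.

```python
import numpy as np

# Hypothetical single corruption pattern: additive Gaussian noise on a camera
# frame with pixel values in [0, 1].

def gaussian_noise(image, severity=0.1, seed=0):
    """Corrupt an image with zero-mean Gaussian noise of standard deviation `severity`."""
    rng = np.random.default_rng(seed)
    noisy = image + rng.normal(scale=severity, size=image.shape)
    return np.clip(noisy, 0.0, 1.0)

frame = np.random.default_rng(1).uniform(size=(224, 224, 3))   # stand-in camera frame
corrupted = gaussian_noise(frame, severity=0.2)
print(f"mean absolute change: {np.abs(corrupted - frame).mean():.3f}")
```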
Many engineering organizations are reimplementing and extending deep neural networks from the research community. We describe this process as deep learning model reengineering. Deep learning model reengineering - reusing, reproducing, adapting, and enhancing state-of-the-art deep learning approaches - is challenging for reasons including under-documented reference models, changing requirements, and the cost of implementation and testing. In addition, individual engineers may lack expertise in software engineering, yet teams must apply knowledge of software engineering and deep learning to succeed. Prior work has examined DL systems from a "product" view, studying defects from projects regardless of the engineers' purpose. Our study takes a "process" view of reengineering activities, focusing on engineers specifically engaged in the reengineering process. Our goal is to understand the characteristics and challenges of deep learning model reengineering. We conducted a case study of this phenomenon, focusing on the context of computer vision. Our results draw from two data sources: defects reported in open-source reengineering projects, and interviews conducted with open-source project contributors and the leaders of a reengineering team. Our results describe how deep learning-based computer vision techniques are reengineered, analyze the distribution of defects in this process, and discuss challenges and practices. Integrating our quantitative and qualitative data, we propose a novel reengineering workflow. Our findings inform several future directions, including: measuring additional unknown aspects of model reengineering; standardizing engineering practices to facilitate reengineering; and developing tools to support model reengineering and model reuse.
Recently, there has been significant progress in the development of large models. Following the success of ChatGPT, numerous language models have been introduced, demonstrating remarkable performance. Similar advancements have also been observed in image generation models, such as Google's Imagen, OpenAI's DALL-E 2, and Stable Diffusion, which have exhibited impressive capabilities in generating images. However, similar to large language models, these models still encounter unresolved challenges. Fortunately, the availability of open-source Stable Diffusion models and their underlying mathematical principles has enabled the academic community to extensively analyze the performance of current image generation models and make improvements based on the Stable Diffusion framework. This survey aims to examine the existing issues and current solutions pertaining to image generation models.
In pace with developments in the research field of artificial intelligence, knowledge graphs (KGs) have attracted a surge of interest from both academia and industry. As a representation of semantic relations between entities, KGs have proven to be particularly relevant for natural language processing (NLP), experiencing rapid spread and wide adoption in recent years. Given the increasing amount of research work in this area, several KG-related approaches have been surveyed in the NLP research community. However, a comprehensive study that categorizes established topics and reviews the maturity of individual research streams remains absent to this day. Contributing to closing this gap, we systematically analyzed 507 papers from the literature on KGs in NLP. Our survey encompasses a multifaceted review of tasks, research types, and contributions. As a result, we present a structured overview of the research landscape, provide a taxonomy of tasks, summarize our findings, and highlight directions for future work.
With the advances of data-driven machine learning research, a wide variety of prediction problems have been tackled. It has become critical to explore how machine learning and specifically deep learning methods can be exploited to analyse healthcare data. A major limitation of existing methods has been the focus on grid-like data; however, the structure of physiological recordings is often irregular and unordered, which makes it difficult to conceptualise them as a matrix. As such, graph neural networks have attracted significant attention by exploiting implicit information that resides in a biological system, with interactive nodes connected by edges whose weights can be either temporal associations or anatomical junctions. In this survey, we thoroughly review the different types of graph architectures and their applications in healthcare. We provide an overview of these methods in a systematic manner, organized by their domain of application, including functional connectivity, anatomical structure, and electrical-based analysis. We also outline the limitations of existing techniques and discuss potential directions for future research.
Deep neural networks (DNNs) are successful in many computer vision tasks. However, the most accurate DNNs require millions of parameters and operations, making them energy-, computation-, and memory-intensive. This impedes the deployment of large DNNs in low-power devices with limited compute resources. Recent research improves DNN models by reducing the memory requirement, energy consumption, and number of operations without significantly decreasing the accuracy. This paper surveys the progress of low-power deep learning and computer vision, specifically with regard to inference, and discusses methods for compacting and accelerating DNN models. The techniques can be divided into four major categories: (1) parameter quantization and pruning, (2) compressed convolutional filters and matrix factorization, (3) network architecture search, and (4) knowledge distillation. For each category, we analyze the accuracy, advantages, and disadvantages of the techniques, along with potential solutions to their problems. We also discuss new evaluation metrics as a guideline for future research.
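As a minimal, illustrative sketch of the first category above, the snippet below shows magnitude-based weight pruning applied to a single layer's weight matrix; the function name and sparsity setting are our own assumptions for illustration, not drawn from any particular surveyed method.

```python
import numpy as np

# Illustrative magnitude-based pruning: zero out the lowest-magnitude fraction
# of a layer's weights, one instance of the parameter-pruning category.

def magnitude_prune(weights, sparsity):
    """Return a copy of `weights` with roughly `sparsity` of the entries
    (those smallest in magnitude) set to zero."""
    k = int(sparsity * weights.size)
    if k == 0:
        return weights.copy()
    threshold = np.sort(np.abs(weights), axis=None)[k - 1]
    pruned = weights.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned

rng = np.random.default_rng(0)
layer = rng.normal(size=(64, 64)).astype(np.float32)
pruned = magnitude_prune(layer, sparsity=0.9)       # keep only the largest ~10% of weights
print(f"non-zero weights: {np.count_nonzero(pruned)} / {layer.size}")
```

In practice such pruning is typically followed by fine-tuning to recover accuracy, and the resulting sparse weights can be stored and executed more efficiently on low-power hardware.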