Digital identity has always been considered the keystone for implementing secure and trustworthy communications among parties. The ever-evolving digital landscape has gone through many technological transformations that have also affected the way entities are digitally identified. During this digital evolution, identity management has shifted from centralized to decentralized approaches. The latest stage of this journey is the emerging Self-Sovereign Identity (SSI), which gives users full control over their data. SSI leverages decentralized identifiers (DIDs) and verifiable credentials (VCs), which have recently been standardized by the World Wide Web Consortium (W3C). These technologies have the potential to build more secure and decentralized digital identity systems, contributing substantially to strengthening the security of communications that typically involve many distributed participants. It is worth noting that the scope of DIDs and VCs extends beyond individuals, encompassing a broad range of entities including cloud, edge, and Internet of Things (IoT) resources. However, due to their novelty, the existing literature lacks a comprehensive survey of how DIDs and VCs have been employed in application domains beyond SSI systems. This paper provides readers with a comprehensive overview of these technologies from different perspectives. Specifically, we first provide background on DIDs and VCs. Then, we analyze available implementations and offer an in-depth review of how these technologies have been employed across different use-case scenarios. Furthermore, we examine recent regulations and initiatives that have been emerging worldwide. Finally, we present some challenges that hinder their adoption in real-world scenarios, along with future research directions.
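To make the DID and VC concepts concrete: per the W3C DID Core data model, a DID is a three-part identifier (`did:<method>:<method-specific-id>`) that resolves to a DID document describing verification material. The sketch below is a minimal illustration only; `did:example` is the specification's reserved example method, and the key value shown is a placeholder, not real key material.

```python
# Minimal illustration of the W3C DID Core data model (a sketch, not tied to
# any production DID method; the key value is a placeholder, not a real key).
did = "did:example:123456789abcdefghi"

did_document = {
    "@context": "https://www.w3.org/ns/did/v1",
    "id": did,
    # Public key material the controller can use to prove control of the DID.
    "verificationMethod": [{
        "id": did + "#keys-1",
        "type": "Ed25519VerificationKey2020",
        "controller": did,
        "publicKeyMultibase": "zExampleOnlyNotARealKey",
    }],
    # References the verification method usable for authentication.
    "authentication": [did + "#keys-1"],
}

# A DID splits into three colon-separated parts:
# scheme, method, and method-specific identifier.
scheme, method, method_specific_id = did.split(":", 2)
```

A verifiable credential then binds claims to such an identifier and is signed with one of the listed verification methods, which is what allows the holder to present it without contacting the issuer.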
Previous stance detection studies typically concentrate on evaluating stances within individual instances, and thus fall short of modeling multi-party discussions about the same topic, as naturally occur in real social media interactions. This limitation arises primarily from the scarcity of datasets that authentically replicate real social media contexts, hindering research progress in conversational stance detection. In this paper, we introduce a new multi-turn conversational stance detection dataset (called \textbf{MT-CSD}), which encompasses multiple targets for conversational stance detection. To derive stances from this challenging dataset, we propose a global-local attention network (\textbf{GLAN}) to address both the long- and short-range dependencies inherent in conversational data. Notably, even state-of-the-art stance detection methods, exemplified by GLAN, achieve an accuracy of only 50.47\%, highlighting the persistent challenges of conversational stance detection. Furthermore, our MT-CSD dataset serves as a valuable resource to catalyze advances in cross-domain stance detection, where a classifier is adapted from a different yet related target. We believe that MT-CSD will contribute to advancing real-world applications of stance detection research. Our source code, data, and models are available at \url{//github.com/nfq729/MT-CSD}.
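The global-local idea can be pictured with plain scaled dot-product attention: a "global" pass attends over every turn in the thread, while a "local" pass restricts the keys to a recent window. The sketch below is purely illustrative; the function names and the windowing rule are our assumptions, not GLAN's actual architecture.

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attend(query, keys, values):
    """Scaled dot-product attention over a list of key/value vectors."""
    scale = math.sqrt(len(query))
    weights = softmax([sum(q * k for q, k in zip(query, key)) / scale
                       for key in keys])
    dim = len(values[0])
    return [sum(w * v[i] for w, v in zip(weights, values)) for i in range(dim)]

def global_local(query, turns, window=2):
    """'Global' attention over all turns vs. 'local' attention over the last
    `window` turns (a hypothetical windowing rule, for illustration only)."""
    glob = attend(query, turns, turns)
    loc = attend(query, turns[-window:], turns[-window:])
    return glob, loc
```

The local pass captures the immediate reply context of a comment, while the global pass lets a turn deep in the thread still condition on the root post; combining both is what addresses the long- and short-range dependencies mentioned above.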
As discussions around 6G begin, it is important to carefully quantify the spectral efficiency gains actually realized by deployed 5G networks as compared to 4G through various enhancements such as higher modulation, beamforming, and MIMO. This will inform the design of future cellular systems, especially in the mid-bands, which provide a good balance between bandwidth and propagation. Similar to 4G, 5G also utilizes low-band (<1 GHz) and mid-band spectrum (1 to 6 GHz), and hence comparing the performance of 4G and 5G in these bands will provide insights into how further improvements can be attained. In this work, we address a crucial question: is the performance boost in 5G compared to 4G primarily a result of increased bandwidth, or do the other enhancements play significant roles, and if so, under what circumstances? Hence, we conduct city-wide measurements of 4G and 5G cellular networks deployed in low- and mid-bands in Chicago and Minneapolis, and carefully quantify the contributions of different aspects of 5G advancements to its improved throughput performance. Our analyses show that (i) compared to 4G, the throughput improvement in 5G today is mainly driven by the wider channel bandwidth, both from single channels and channel aggregation, (ii) in addition to wider channels, improved 5G throughput requires better signal conditions, which can be delivered by denser deployment and/or use of beamforming in mid-bands, (iii) the channel rank in real-world environments rarely supports the full 4 layers of 4x4 MIMO, and (iv) advanced features such as MU-MIMO and higher-order modulation such as 1024-QAM have yet to be widely deployed. These observations lead us to suggest designing the next generation of cellular systems with wider channels, improved channel aggregation, and dense deployments with more beams.
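The dominance of bandwidth over other enhancements follows directly from a back-of-the-envelope Shannon estimate: at equal signal quality, capacity scales linearly with channel width but only logarithmically with SINR. The channel widths, SINR, and efficiency factor below are illustrative assumptions, not figures from the measurement study.

```python
import math

def est_throughput_mbps(bandwidth_mhz, sinr_db, layers=1, efficiency=0.75):
    """Back-of-the-envelope estimate: layers * BW * log2(1 + SINR) * overhead.
    All numbers are illustrative, not measured values."""
    sinr = 10 ** (sinr_db / 10)  # convert dB to linear scale
    return layers * bandwidth_mhz * math.log2(1 + sinr) * efficiency

# Hypothetical comparison: a 20 MHz 4G carrier vs. a 100 MHz 5G mid-band
# channel at the same SINR. The 5x wider channel yields exactly 5x the
# estimate; improving SINR by 3 dB would add well under one extra bit/s/Hz.
lte = est_throughput_mbps(20, 15)
nr = est_throughput_mbps(100, 15)
```

This is why the measurement findings above attribute most of today's 5G gain to bandwidth, with better signal conditions (denser sites, beamforming) needed to unlock the remaining, sublinear improvements.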
Academic and industrial sectors have been engaged in a fierce competition to develop quantum technologies, fueled by explosive advances in quantum hardware. While universal quantum computers have been shown to support up to hundreds of qubits, quantum annealers have reached the order of thousands of qubits. Quantum algorithms are therefore becoming increasingly popular in a variety of fields, with optimization being one of the most prominent. This work explores quantum optimization by comprehensively evaluating the technologies provided by D-Wave Systems. To do so, a model for the energy optimization of data centers is proposed as a benchmark. D-Wave quantum and hybrid solvers are compared in order to identify the most suitable one for the considered application. The selected D-Wave hybrid solver is then contrasted with CPLEX, a highly efficient classical solver, to highlight its performance capabilities and solving potential.
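D-Wave's annealers and hybrid solvers minimize quadratic unconstrained binary optimization (QUBO) objectives, which for tiny instances can be checked exhaustively. The sketch below shows the generic QUBO form only; the toy matrix is our own assumption and is not the paper's data-center model.

```python
from itertools import product

def solve_qubo(Q):
    """Exhaustively minimize x^T Q x over binary vectors x in {0,1}^n.
    Only feasible for very small n; annealers target much larger instances."""
    n = len(Q)
    best_x, best_e = None, float("inf")
    for bits in product([0, 1], repeat=n):
        energy = sum(Q[i][j] * bits[i] * bits[j]
                     for i in range(n) for j in range(n))
        if energy < best_e:
            best_x, best_e = bits, energy
    return best_x, best_e

# Toy instance (our assumption): reward activating either of two resources
# (diagonal -1 terms) but penalize activating both (off-diagonal +2 term),
# loosely mimicking a "use exactly one server" constraint.
Q = [[-1, 2],
     [0, -1]]
x, e = solve_qubo(Q)
```

In practice the same `Q` would be handed to a D-Wave sampler rather than enumerated; the exhaustive version is useful only as a ground-truth check when validating small fragments of a model.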
Electrochemical communication is a mechanism that enables intercellular interaction among bacteria within communities. Bacteria achieve synchronization and coordinate collective actions at the population level through the use of electrochemical signals. In this work, we investigate the response of bacterial biofilms to artificial potassium concentration stimulation. We introduce signal inputs at a specific location within the biofilm and observe their transmission to other regions, facilitated by intermediary cells that amplify and relay the signal. We analyze the output signals when biofilm regions are subjected to different input signal types and explore their impact on biofilm growth. Furthermore, we investigate how the temporal gap between input pulses influences output signal characteristics, demonstrating that an appropriate gap yields distinct and well-defined output signals. Our research sheds light on the potential of bacterial biofilms as communication nodes in electrochemical communication networks.
Large language models (LLMs) have demonstrated impressive performance in answering medical questions, for example achieving passing scores on medical licensing examinations. However, medical board exam questions and general clinical questions do not capture the complexity of realistic clinical cases. Moreover, the lack of reference explanations means we cannot easily evaluate the reasoning behind model decisions, a crucial component of supporting doctors in making complex medical decisions. To address these challenges, we construct two new datasets: JAMA Clinical Challenge and Medbullets. JAMA Clinical Challenge consists of questions based on challenging clinical cases, while Medbullets comprises USMLE Step 2 and Step 3 style clinical questions. Both datasets are structured as multiple-choice question-answering tasks, where each question is accompanied by an expert-written explanation. We evaluate four LLMs on the two datasets using various prompts. Experiments demonstrate that our datasets are harder than previous benchmarks. The inconsistency between automatic and human evaluations of model-generated explanations highlights the need to develop new metrics to support future research on explainable medical QA.
When a robotic system is redundant with respect to a given task, the remaining degrees of freedom can be used to satisfy additional objectives. With current robotic systems having more and more degrees of freedom, this can lead to an entire hierarchy of tasks that need to be solved according to given priorities. In this paper, the first compliant control strategy is presented that can handle an arbitrary number of equality and inequality tasks while still preserving the natural inertia of the robot. The approach is therefore a generalization of a passivity-based controller to the case of an arbitrary number of equality and inequality tasks. The key idea of the method is to use a Weighted Hierarchical Quadratic Problem to extract the set of active tasks and then use this set to perform a coordinate transformation that inertially decouples the tasks, thereby unifying the lines of research on optimization-based and passivity-based multi-task controllers. The method is validated in simulation.
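For two strict-priority equality tasks, the textbook null-space resolution (a classical construction, not the paper's weighted hierarchical QP, which additionally handles inequality tasks and compliance) solves the secondary task only in directions that leave the primary task untouched. A minimal sketch for scalar (1-DoF) tasks on an n-DoF robot:

```python
def pinv_row(J):
    """Pseudoinverse of a nonzero 1xn Jacobian row: J^+ = J^T / (J J^T)."""
    n2 = sum(j * j for j in J)
    return [j / n2 for j in J]

def two_task_priority(J1, x1, J2, x2):
    """Joint velocity realizing task 1 exactly and task 2 as well as
    possible in the null space of task 1 (classical strict priorities)."""
    n = len(J1)
    p1 = pinv_row(J1)
    dq = [x1 * p for p in p1]                      # dq = J1^+ x1
    # Null-space projector of task 1: N1 = I - J1^+ J1
    N1 = [[(1.0 if i == j else 0.0) - p1[i] * J1[j] for j in range(n)]
          for i in range(n)]
    # Secondary task restricted to the null space: J2 N1
    J2N = [sum(J2[k] * N1[k][j] for k in range(n)) for j in range(n)]
    r = x2 - sum(J2[j] * dq[j] for j in range(n))  # residual of task 2
    n2 = sum(j * j for j in J2N)
    if n2 > 1e-12:  # task 2 is feasible within the remaining freedom
        dq = [d + r * j / n2 for d, j in zip(dq, J2N)]
    return dq
```

When the two tasks conflict (`J2 N1 = 0`), the secondary task is simply dropped, which is the strict-priority behavior that hierarchical QP formulations generalize to weighted trade-offs and inequality constraints.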
Graphs are important data representations for describing objects and their relationships, and they appear in a wide diversity of real-world scenarios. As a critical problem in this area, graph generation considers learning the distribution of given graphs and generating novel graphs. Generative models for graphs have a rich history; however, they have traditionally been hand-crafted and capable of modeling only a few statistical properties of graphs. Recent advances in deep generative models for graph generation are an important step towards improving the fidelity of generated graphs and pave the way for new kinds of applications. This article provides an extensive overview of the literature in the field of deep generative models for graph generation. Firstly, the formal definition of deep generative models for graph generation and the preliminary knowledge are provided. Secondly, taxonomies of deep generative models for unconditional and conditional graph generation are proposed, and the existing works in each category are compared and analyzed. After that, an overview of the evaluation metrics in this specific domain is provided. Finally, the applications that deep graph generation enables are summarized and five promising future research directions are highlighted.
A community reveals features and connections of its members that differ from those in other communities in a network. Detecting communities is of great significance in network analysis. Beyond the classical spectral clustering and statistical inference methods, deep learning techniques for community detection have developed significantly in recent years, with advantages in handling high-dimensional network data. Hence, a comprehensive overview of the latest progress in community detection through deep learning is timely for both academics and practitioners. This survey devises and proposes a new taxonomy covering different categories of state-of-the-art methods, including models based upon deep neural networks, deep nonnegative matrix factorization, and deep sparse filtering. The main category, deep neural networks, is further divided into convolutional networks, graph attention networks, generative adversarial networks, and autoencoders. The survey also summarizes the popular benchmark data sets, model evaluation metrics, and open-source implementations to address experimentation settings. We then discuss the practical applications of community detection in various domains and point to implementation scenarios. Finally, we outline future directions by suggesting challenging topics in this fast-growing deep learning field.
Graph Neural Networks (GNNs) have recently become increasingly popular due to their ability to learn complex systems of relations or interactions arising in a broad spectrum of problems ranging from biology and particle physics to social networks and recommendation systems. Despite the plethora of different models for deep learning on graphs, few approaches have been proposed thus far for dealing with graphs that present some sort of dynamic nature (e.g. evolving features or connectivity over time). In this paper, we present Temporal Graph Networks (TGNs), a generic, efficient framework for deep learning on dynamic graphs represented as sequences of timed events. Thanks to a novel combination of memory modules and graph-based operators, TGNs are able to significantly outperform previous approaches while being more computationally efficient at the same time. We furthermore show that several previous models for learning on dynamic graphs can be cast as specific instances of our framework. We perform a detailed ablation study of different components of our framework and devise the best configuration that achieves state-of-the-art performance on several transductive and inductive prediction tasks for dynamic graphs.
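The core idea of an event-driven memory can be sketched in a few lines: each node keeps a state vector that is updated whenever it takes part in a timed interaction. The update rule below is a simple moving-average stand-in, purely illustrative; TGN itself uses learnable message functions and a recurrent cell (e.g. a GRU) rather than this fixed rule.

```python
def update_on_event(memory, src, dst, event_feat, alpha=0.5):
    """Schematic memory update for one timed interaction (src, dst, features).
    memory: dict mapping node id -> state vector (list of floats).
    A fixed averaging rule stands in for TGN's learned message/update cell."""
    # Build a message from both endpoint states and the event features.
    msg = [(s + d + e) / 3.0
           for s, d, e in zip(memory[src], memory[dst], event_feat)]
    # Blend the message into each participating node's state; nodes not
    # involved in the event keep their memory unchanged.
    for node in (src, dst):
        memory[node] = [(1 - alpha) * m + alpha * x
                        for m, x in zip(memory[node], msg)]
    return memory
```

Processing a stream of such events in timestamp order gives every node a compressed summary of its interaction history, which downstream graph operators can then combine with current neighborhood information to make predictions.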
Deep neural models have been successful in almost every field in recent years, including on extremely complex problem statements. However, these models are huge in size, with millions (and even billions) of parameters, thus demanding heavy computational power and failing to be deployable on edge devices. Besides, the performance boost is highly dependent on large amounts of labeled data. To achieve faster speeds and to handle the problems caused by the lack of data, knowledge distillation (KD) has been proposed to transfer information learned from one model to another. KD is often characterized by the so-called `Student-Teacher' (S-T) learning framework and has been broadly applied in model compression and knowledge transfer. This paper is about KD and S-T learning, which have been actively studied in recent years. First, we aim to provide explanations of what KD is and how/why it works. Then, we provide a comprehensive survey of recent progress on KD methods together with S-T frameworks, typically for vision tasks. In general, we consider some fundamental questions that have been driving this research area and thoroughly summarize the research progress and technical details. Additionally, we systematically analyze the research status of KD in vision applications. Finally, we discuss the potential and open challenges of existing methods and outline future directions for KD and S-T learning.
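In its most common form (the classical temperature-based formulation; the temperature value below is an arbitrary choice), KD trains the student to mimic the teacher's softened output distribution:

```python
import math

def softmax(logits, T=1.0):
    """Temperature-scaled softmax; higher T spreads probability mass
    over more classes, exposing the teacher's 'dark knowledge'."""
    m = max(logits)
    exps = [math.exp((z - m) / T) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def distillation_loss(student_logits, teacher_logits, T=4.0):
    """KL divergence between the teacher's and student's softened
    distributions, scaled by T^2 to keep gradient magnitudes comparable
    across temperatures (as in the classical formulation)."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return T * T * sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
```

The loss is zero when the student's logits match the teacher's and grows as the distributions diverge; in practice it is combined with an ordinary cross-entropy term on the ground-truth labels.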