Direct-to-satellite (DtS) communication has recently gained importance as a means of supporting globally connected Internet of Things (IoT) networks. However, the relatively long distances between ground devices and the densely deployed satellites orbiting the Earth cause high path loss. In addition, since high-complexity operations such as beamforming, tracking, and equalization must be performed at least partially by the IoT devices, both their hardware complexity and their need for high-capacity batteries increase. Reconfigurable intelligent surfaces (RISs) have the potential to increase energy efficiency and to shift complex signal processing from IoT devices into the transmission environment. However, RISs require cascaded channel information in order to adjust the phase of the incident signal. This study models the pilot signals as a graph and feeds this information into a graph attention network (GAT) to track the phase relation through pilot signaling. The proposed GAT-based channel estimator is evaluated on DtS IoT networks with different RIS configurations to address this challenging channel estimation problem. It is shown that the proposed GAT achieves higher performance and greater robustness under changing conditions, at lower computational complexity, than conventional deep learning methods. Moreover, the bit error rate is investigated for RIS designs with discrete and non-uniform phase shifts under channel estimation based on the proposed method. A key finding of this study is that the channel models of the operating environment and the performance of the channel estimation method must be taken into account during RIS design in order to realize the full achievable performance gains.
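To make the GAT component concrete, the following is a minimal single-head graph attention layer in PyTorch applied to pilot-symbol nodes; the graph construction, feature dimensions, and output head are illustrative assumptions rather than the architecture from the paper:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GATLayer(nn.Module):
    """Minimal single-head graph attention layer (Velickovic et al., 2018)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.W = nn.Linear(in_dim, out_dim, bias=False)
        self.a = nn.Linear(2 * out_dim, 1, bias=False)

    def forward(self, h, adj):
        # h: (N, in_dim) node features; adj: (N, N) 0/1 mask with self-loops
        z = self.W(h)
        n = z.size(0)
        zi = z.unsqueeze(1).expand(n, n, -1)
        zj = z.unsqueeze(0).expand(n, n, -1)
        e = F.leaky_relu(self.a(torch.cat([zi, zj], dim=-1)).squeeze(-1), 0.2)
        e = e.masked_fill(adj == 0, float('-inf'))   # attend only to neighbors
        alpha = torch.softmax(e, dim=-1)             # attention coefficients
        return alpha @ z                             # aggregated node features

# Toy use (dimensions are hypothetical): nodes are pilot symbols with
# 8 real-valued features; the 2-dim output is read as Re/Im of a
# cascaded-channel coefficient.
n_nodes = 16
feats = torch.randn(n_nodes, 8)
adj = (torch.rand(n_nodes, n_nodes) < 0.3).float()
adj.fill_diagonal_(1.0)
channel_est = GATLayer(8, 2)(feats, adj)             # (16, 2)
```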
Management of crowd information in public transportation (PT) systems is crucial, both to foster sustainable mobility by increasing users' comfort and satisfaction during normal operation, and to cope with emergency situations such as pandemic crises, as recently experienced with COVID-19 limitations. This paper presents a taxonomy and review of Internet of Things (IoT)-based sensing technologies for real-time crowd analysis, which can be adopted in the different segments of the PT system (buses/trams/trains, railway/metro stations, and bus stops). To discuss these technologies from a clear, systematic perspective, we introduce a reference architecture for crowd management that employs modern information and communication technologies (ICT) to: (i) monitor and predict crowding events; (ii) implement crowd-aware policies for real-time and adaptive operation control in intelligent transportation systems (ITSs); and (iii) inform users in real time of the crowding status of the PT system, by means of electronic displays installed inside vehicles or at bus stops/stations, and/or mobile transport applications. It is envisioned that the innovative crowd management functionalities enabled by ICT/IoT sensing technologies can be implemented incrementally as an add-on to state-of-the-art ITS platforms already in use by major PT companies operating in urban areas. Moreover, it is argued that, within this new framework, additional services can be delivered to passengers, such as online ticketing, vehicle access control and reservation in severely crowded situations, and evolved crowd-aware route planning.
Satellite communication in Low Earth Orbit (LEO) constellations is an emerging topic of interest. Due to the high number of LEO satellites in a typical constellation, a centralized algorithm for minimum-delay packet routing would incur significant signaling and computational overhead. The deterministic topology of the satellite constellation can be exploited to calculate the minimum-delay path between any two nodes in the satellite network, but this does not take into account the traffic conditions at the nodes along that path. We propose a distributed probabilistic congestion control scheme to minimize end-to-end delay. In the proposed scheme, each satellite, while sending a packet to a neighbor, adds a header containing a simple metric indicating its own congestion level. Routing decisions are then made based on the latest traffic information received from the neighbors. We build this algorithm on top of the Datagram Routing Algorithm (DRA), which provides the minimum-delay path, while the next hop is selected by the congestion control algorithm. We compare the proposed mechanism with the existing congestion control used by the DRA via simulations and show improvements over it.
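As an illustration of the idea, the sketch below picks a next hop among minimum-delay candidates while biasing away from congested neighbors; the metric range, weighting rule, and the parameter beta are assumptions for illustration, not the scheme's exact specification:

```python
import random

def choose_next_hop(candidates, congestion, beta=4.0):
    """Probabilistically pick a next hop among minimum-delay candidates
    supplied by the routing layer (e.g., DRA), biased away from congested
    neighbors. congestion[n] is the latest queue-occupancy metric in [0, 1]
    piggybacked in packet headers by neighbor n (illustrative convention)."""
    weights = [(1.0 - congestion.get(n, 0.0)) ** beta for n in candidates]
    total = sum(weights)
    if total == 0:                       # all neighbors saturated: pick uniformly
        return random.choice(candidates)
    r, acc = random.uniform(0, total), 0.0
    for n, w in zip(candidates, weights):
        acc += w
        if r <= acc:
            return n
    return candidates[-1]

# Example: two minimum-delay neighbors, one heavily loaded.
print(choose_next_hop(["sat_A", "sat_B"], {"sat_A": 0.9, "sat_B": 0.2}))
```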
\cite{rohe2016co} proposed the stochastic co-blockmodel (ScBM) as a tool for detecting the community structure of binary directed graph data in network studies. However, ScBM completely ignores edge weights and thus cannot explain the block structure of directed weighted networks, which appear in various areas such as biology, sociology, physiology, and computer science. Here, to model directed weighted networks, we introduce a directed distribution-free model by relaxing ScBM's distributional restriction. We also build an extension of the proposed model that accounts for variation in node degree. Our models do not require a specific distribution for generating the elements of the adjacency matrix, only a block structure in the expected adjacency matrix. Spectral algorithms with theoretical guarantees of consistent node-label estimation are presented to identify communities. The proposed methods are illustrated on simulated and empirical examples.
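To illustrate the spectral approach, the sketch below clusters the rows and columns of a directed weighted adjacency matrix via its top-k singular vectors; the use of scikit-learn's KMeans and the scaling by singular values are illustrative choices, and the regularization details of the actual algorithms are omitted:

```python
import numpy as np
from numpy.linalg import svd
from sklearn.cluster import KMeans

def directed_spectral_communities(A, k):
    """Cluster rows (sending communities) and columns (receiving communities)
    of a directed, weighted adjacency matrix A via its rank-k SVD. A sketch
    in the spirit of co-blockmodel spectral methods, not the exact algorithm."""
    U, s, Vt = svd(A, full_matrices=False)
    row_emb = U[:, :k] * s[:k]           # sending-pattern embedding
    col_emb = Vt[:k, :].T * s[:k]        # receiving-pattern embedding
    row_labels = KMeans(n_clusters=k, n_init=10).fit_predict(row_emb)
    col_labels = KMeans(n_clusters=k, n_init=10).fit_predict(col_emb)
    return row_labels, col_labels

# Toy example: two planted blocks with nonnegative Poisson edge weights.
rng = np.random.default_rng(0)
B = np.array([[3.0, 0.5], [0.5, 2.0]])   # block mean matrix
z = rng.integers(0, 2, size=60)          # planted labels
A = rng.poisson(B[z][:, z])              # weighted directed adjacency
print(directed_spectral_communities(A, 2)[0])
```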
Signed and directed networks are ubiquitous in real-world applications. However, there has been relatively little work proposing spectral graph neural networks (GNNs) for such networks. Here we introduce a signed directed Laplacian matrix, which we call the magnetic signed Laplacian, as a natural generalization of both the signed Laplacian on signed graphs and the magnetic Laplacian on directed graphs. We then use this matrix to construct a novel efficient spectral GNN architecture and conduct extensive experiments on both node clustering and link prediction tasks. In these experiments, we consider tasks related to signed information, tasks related to directional information, and tasks related to both signed and directional information. We demonstrate that our proposed spectral GNN is effective for incorporating both signed and directional information, and attains leading performance on a wide range of data sets. Additionally, we provide a novel synthetic network model, which we refer to as the signed directed stochastic block model, and a number of novel real-world data sets based on lead-lag relationships in financial time series.
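For reference, a minimal NumPy sketch of the construction follows, assuming the standard magnetic-Laplacian recipe extended with signed weights: the symmetrized signed adjacency supplies magnitudes, while a complex phase encodes edge direction. The paper's exact normalization and parameter choices may differ.

```python
import numpy as np

def magnetic_signed_laplacian(A, q=0.25):
    """Sketch of a magnetic signed Laplacian for a signed, directed
    adjacency matrix A. With q = 0 this reduces to the signed Laplacian;
    with an unsigned A it reduces to the magnetic Laplacian."""
    As = (A + A.T) / 2.0                  # symmetrized part, keeps +/- signs
    theta = 2.0 * np.pi * q * (A - A.T)   # antisymmetric phase matrix
    H = As * np.exp(1j * theta)           # Hermitian "signed magnetic" matrix
    D = np.diag(np.abs(As).sum(axis=1))   # degrees from absolute weights
    return D - H                          # Hermitian Laplacian

A = np.array([[0, 1, 0], [0, 0, -1], [1, 0, 0]], dtype=float)
L = magnetic_signed_laplacian(A)
print(np.allclose(L, L.conj().T))         # Hermitian check -> True
```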
Reconfigurable intelligent surfaces (RISs) are envisioned as a disruptive wireless communication technique capable of reconfiguring the wireless propagation environment. In this paper, we study a free-space RIS-assisted multiple-input single-output (MISO) communication system operating in the far field. To maximize the received power from a physical and electromagnetic standpoint, a comprehensive optimization over the transmitter beamforming and the phase shifts, orientation, and position of the RIS is formulated and addressed. By exploiting the properties of line-of-sight (LoS) links, we derive closed-form solutions for the beamforming and phase shifts. For the non-trivial problem of optimizing the RIS position in arbitrary three-dimensional space, a dimension-reduction result is proved. Simulation results show that the proposed closed-form beamforming and phase shifts approach the upper bound of the received power. The robustness of the proposed solutions to perturbations is also verified. Moreover, the RIS significantly enhances the performance of mmWave/THz communication systems.
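For LoS links, the classical power-maximizing phase-shift rule amounts to co-phasing the cascaded channel so that all reflections add coherently, as the sketch below illustrates; array geometry and transmit beamforming are abstracted into random unit-modulus LoS phases, so the setup is illustrative rather than the paper's full model:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 64                                        # number of RIS elements
# Illustrative unit-modulus LoS channels (geometry abstracted away):
h = np.exp(1j * rng.uniform(0, 2*np.pi, N))   # transmitter -> RIS
g = np.exp(1j * rng.uniform(0, 2*np.pi, N))   # RIS -> receiver

# Co-phasing rule: each element cancels the phase of its cascaded path
# h_n * g_n, so the N reflected paths combine coherently at the receiver.
phi = -np.angle(h * g)
received = np.sum(h * np.exp(1j * phi) * g)

print(abs(received))                          # ~N: coherent combining bound
print(abs(np.sum(h * g)))                     # unoptimized: much smaller
```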
For a multi-robot team that collaboratively explores an unknown environment, it is of vital importance that collected information is efficiently shared among robots in order to support exploration and navigation tasks. Practical constraints of wireless channels, such as limited bandwidth and bit-rate, urge robots to carefully select the information to be transmitted. In this paper, we consider the case where environmental information is modeled using a 3D Scene Graph, a hierarchical map representation that describes geometric and semantic aspects of the environment. We then leverage graph-theoretic tools, namely graph spanners, to design heuristic strategies that efficiently compress 3D Scene Graphs to enable communication under bandwidth constraints. Our compression strategies are navigation-oriented in that they are designed to approximately preserve shortest paths between locations of interest while meeting a user-specified communication budget constraint. The effectiveness of the proposed algorithms is demonstrated via extensive numerical analysis and synthetic robot navigation experiments in a realistic simulator.
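As background, the classic greedy t-spanner construction that such spanner-based compression builds on is sketched below; this is the textbook algorithm (Althöfer et al.), not the authors' navigation-oriented heuristics:

```python
import heapq

def greedy_spanner(nodes, edges, t):
    """Classic greedy t-spanner: process edges by increasing weight and keep
    (u, v, w) only if the current spanner's shortest u-v path exceeds t * w.
    All pairwise distances are then stretched by at most a factor t."""
    adj = {v: [] for v in nodes}
    spanner = []
    for w, u, v in sorted((w, u, v) for u, v, w in edges):
        if shortest_path(adj, u, v) > t * w:
            adj[u].append((v, w))
            adj[v].append((u, w))
            spanner.append((u, v, w))
    return spanner

def shortest_path(adj, src, dst):
    """Dijkstra distance from src to dst in the partial spanner."""
    dist, pq = {src: 0.0}, [(0.0, src)]
    while pq:
        d, x = heapq.heappop(pq)
        if x == dst:
            return d
        if d > dist.get(x, float('inf')):
            continue
        for y, w in adj[x]:
            nd = d + w
            if nd < dist.get(y, float('inf')):
                dist[y] = nd
                heapq.heappush(pq, (nd, y))
    return float('inf')

# Example: 2-spanner of a small weighted graph drops two redundant edges.
nodes = ['a', 'b', 'c', 'd']
edges = [('a','b',1), ('b','c',1), ('c','d',1), ('a','d',2), ('a','c',3)]
print(greedy_spanner(nodes, edges, t=2))
```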
Recent advances in artificial intelligence have promoted a wide range of computer vision applications across many domains. Digital cameras, acting as human eyes, can perceive fundamental object properties such as shape and color, and can further be used for high-level tasks such as image classification and object detection. Human perception has been widely treated as the ground truth for training and evaluating computer vision models. However, in some cases humans can be deceived by what they see: well-functioning human vision relies on stable external lighting, and unnatural illumination can distort the perceived characteristics of goods. To evaluate the effects of illumination on human and computer perception, we present a novel dataset, the Food Vision Dataset (FVD), which serves as an evaluation benchmark for quantifying illumination effects and for advancing illumination estimation methods toward fair and reliable prediction of consumer acceptability from food appearance. FVD consists of 675 images captured under 3 different power settings and 5 different temperature settings, collected every other day over five such days.
The Internet of Things (IoT) boom has revolutionized almost every corner of people's daily lives: healthcare, home, transportation, manufacturing, supply chain, and so on. With the recent development of sensor and communication technologies, IoT devices, including smart wearables, cameras, smartwatches, and autonomous vehicles, can accurately measure and perceive their surrounding environment. Continuous sensing generates massive amounts of data and presents challenges for machine learning. Deep learning models (e.g., convolutional neural networks and recurrent neural networks) have been extensively employed to solve IoT tasks by learning patterns from multi-modal sensory data. Graph neural networks (GNNs), an emerging and fast-growing family of neural network models, can capture complex interactions within sensor topology and have been demonstrated to achieve state-of-the-art results in numerous IoT learning tasks. In this survey, we present a comprehensive review of recent advances in applying GNNs to the IoT field, including a deep-dive analysis of GNN design in various IoT sensing environments, an overarching list of public data and source code from the collected publications, and future research directions. To keep track of newly published works, we collect representative papers and their open-source implementations and maintain a GitHub repository at //github.com/GuiminDong/GNN4IoT.
Deep neural networks have revolutionized many machine learning tasks in power systems, ranging from pattern recognition to signal processing. The data in these tasks are typically represented in Euclidean domains. Nevertheless, a growing number of power system applications collect data from non-Euclidean domains and represent them as graph-structured data with high-dimensional features and interdependencies among nodes. The complexity of graph-structured data poses significant challenges for existing deep neural networks defined in Euclidean domains. Recently, many studies extending deep neural networks to graph-structured data in power systems have emerged. In this paper, a comprehensive overview of graph neural networks (GNNs) in power systems is presented. Specifically, several classical paradigms of GNN architectures (e.g., graph convolutional networks, graph recurrent neural networks, graph attention networks, graph generative networks, spatial-temporal graph convolutional networks, and hybrid forms of GNNs) are summarized, and key applications in power systems, such as fault diagnosis, power prediction, power flow calculation, and data generation, are reviewed in detail. Furthermore, the main issues and research trends concerning applications of GNNs in power systems are discussed.
Graphs, which describe pairwise relations between objects, are essential representations for many real-world data such as social networks. In recent years, graph neural networks, which extend neural network models to graph data, have attracted increasing attention and have been applied to advance many graph-related tasks, such as reasoning about the dynamics of physical systems, graph classification, and node classification. Most existing graph neural network models have been designed for static graphs, while many real-world graphs are inherently dynamic; for example, social networks naturally evolve as new users join and new relations are created. Current graph neural network models cannot utilize this dynamic information, even though it has been shown to enhance the performance of many graph analytical tasks such as community detection and link prediction. Hence, it is necessary to design dedicated graph neural networks for dynamic graphs. In this paper, we propose DGNN, a new {\bf D}ynamic {\bf G}raph {\bf N}eural {\bf N}etwork model, which can model dynamic information as the graph evolves. In particular, the proposed framework keeps node information up to date by coherently capturing the sequential order of edges, the time intervals between edges, and information propagation. Experimental results on various dynamic graphs demonstrate the effectiveness of the proposed framework.
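A minimal sketch of such a time-aware update mechanism follows, assuming a GRU cell and a learned interval gate; both are illustrative stand-ins rather than the exact DGNN components:

```python
import torch
import torch.nn as nn

class DynamicNodeUpdater(nn.Module):
    """Sketch of the core idea: when an edge (u, v) arrives at time t,
    update both endpoints with a recurrent cell, discounting each node's
    stale state by the time elapsed since its last event. The sigmoid
    interval gate and the GRUCell are illustrative choices."""
    def __init__(self, dim):
        super().__init__()
        self.cell = nn.GRUCell(dim, dim)
        self.decay = nn.Linear(1, 1)    # maps time interval -> forgetting gate

    def forward(self, h, last_t, u, v, t, edge_feat):
        for node, other in ((u, v), (v, u)):
            dt = torch.tensor([[t - last_t[node]]])       # time since last event
            g = torch.sigmoid(self.decay(dt)).squeeze()   # scalar gate in (0, 1)
            msg = (edge_feat + h[other]).unsqueeze(0)     # message from the event
            hx = (g * h[node]).unsqueeze(0)               # decayed previous state
            h[node] = self.cell(msg, hx).squeeze(0)
            last_t[node] = t
        return h

# Toy stream of timestamped edges over 3 nodes.
dim, n = 4, 3
upd = DynamicNodeUpdater(dim)
h, last_t = torch.zeros(n, dim), [0.0, 0.0, 0.0]
with torch.no_grad():
    for (u, v, t) in [(0, 1, 1.0), (1, 2, 2.5), (0, 2, 4.0)]:
        h = upd(h, last_t, u, v, t, torch.randn(dim))
```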