Forecasting the power consumption of integrated electrical, heat, and gas network systems is essential for operating the whole energy network more efficiently. Multi-energy systems are increasingly seen as a key component of future energy systems and a valuable source of flexibility, which can significantly contribute to a cleaner and more sustainable whole energy system. There is therefore a pressing need to develop novel, high-performance models for forecasting the multi-energy demand of integrated energy systems that account for the different types of interacting energy vectors and the coupling between them. Previous efforts in demand forecasting have focused mainly on electrical power consumption alone or, more recently, on heat or gas power consumption alone. To address this gap, this paper develops six novel prediction models based on Convolutional Neural Networks (CNNs) for either individual or joint prediction of multi-energy power consumption: the single-input/single-output CNN model with an optimized number of epochs (CNN_1), the multiple-input/single-output CNN model (CNN_2), the single-input/single-output CNN model with training/validation/testing datasets (CNN_3), the joint-prediction CNN model (CNN_4), the multiple-building input/output CNN model (CNN_5), and the federated learning CNN model (CNN_6). All six CNN models are applied comprehensively to a novel integrated electrical, heat, and gas network system that has only recently begun to be used for forecasting. The forecast horizon is short-term (the next half hour), and all prediction results are evaluated in terms of the Signal-to-Noise Ratio (SNR) and the Normalized Root Mean Square Error (NRMSE), while the Mean Absolute Percentage Error (MAPE) is used for comparison with existing results from the literature.
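As a hedged illustration (not the authors' code), the three evaluation metrics named above can be computed for a half-hourly series as follows; note that NRMSE normalizations vary in the literature, and normalization by the range of the actual series is only one common choice:

```python
import numpy as np

def snr_db(actual, predicted):
    """Signal-to-Noise Ratio in dB: signal power over prediction-error power."""
    noise = actual - predicted
    return 10.0 * np.log10(np.sum(actual ** 2) / np.sum(noise ** 2))

def nrmse(actual, predicted):
    """Root Mean Square Error normalized by the range of the actual series."""
    rmse = np.sqrt(np.mean((actual - predicted) ** 2))
    return rmse / (actual.max() - actual.min())

def mape(actual, predicted):
    """Mean Absolute Percentage Error (assumes no zero actual values)."""
    return 100.0 * np.mean(np.abs((actual - predicted) / actual))
```

Higher SNR and lower NRMSE/MAPE indicate a better forecast; MAPE is the scale-free metric most often reported by earlier forecasting work, which is why it serves for cross-paper comparison.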
Safe operation of multi-robot systems is critical, especially in communication-degraded environments such as underwater for seabed mapping, underground caves for navigation, and extraterrestrial missions for assembly and construction. We address the safety of networked autonomous systems in which the information exchanged between robots incurs communication delays. We formalize a notion of distributed control barrier function (CBF) for multi-robot systems, a safety certificate amenable to a distributed implementation, which provides formal grounds for using graph neural networks to learn safe distributed controllers. Further, we observe that learning a distributed controller while ignoring delays can severely degrade safety. Our main contribution is a predictor-based framework for training a safe distributed controller under communication delays, in which the current state of nearby robots is predicted from received data and age-of-information. Numerical experiments on multi-robot collision avoidance show that our predictor-based approach significantly improves the safety of a learned distributed controller under communication delays.
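To make the predictor idea concrete, here is a minimal sketch (ours, not the paper's implementation) of a pairwise collision-avoidance barrier evaluated on a constant-velocity prediction of a neighbor's state, reconstructed from its last received message and that message's age-of-information:

```python
import numpy as np

def barrier(xi, xj, d_min=1.0):
    # Pairwise collision-avoidance CBF: h >= 0 certifies safe separation.
    diff = xi - xj
    return float(np.dot(diff, diff)) - d_min ** 2

def predicted_state(last_pos, last_vel, age):
    # Constant-velocity extrapolation from the last received state,
    # using the message's age-of-information (delay in seconds).
    return last_pos + age * last_vel
```

With a neighbor last reported at position [2, 0] moving at velocity [-1, 0] and 1.2 s old information, the barrier computed on the stale state is positive (apparently safe) while the barrier on the predicted state is negative, illustrating how ignoring delays can mask an imminent safety violation.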
An adequate fusion of the most significant salient information from multiple input channels is essential for many aerial imaging tasks. While multispectral recordings reveal features in various spectral ranges, synthetic aperture sensing makes occluded features visible. We present the first hybrid (model- and learning-based) architecture for fusing the most significant features from conventional aerial images with those from integral aerial images produced by synthetic aperture sensing to remove occlusion. It combines the environment's spatial references with features of unoccluded targets that would normally be hidden by dense vegetation. Our method outperforms state-of-the-art two-channel and multi-channel fusion approaches both visually and quantitatively in common metrics such as mutual information, visual information fidelity, and peak signal-to-noise ratio. The proposed model requires no manually tuned parameters, can be extended to an arbitrary number and arbitrary combinations of spectral channels, and is reconfigurable for addressing different use cases. We demonstrate examples for search and rescue, wildfire detection, and wildlife observation.
Stacked intelligent metasurfaces (SIM) are capable of emulating reconfigurable physical neural networks by relying on electromagnetic (EM) waves as carriers. They can also perform various complex computational and signal processing tasks. A SIM is fabricated by densely integrating multiple metasurface layers, each consisting of a large number of small meta-atoms that can control the EM waves passing through it. In this paper, we harness a SIM for two-dimensional (2D) direction-of-arrival (DOA) estimation. In contrast to conventional designs, an advanced SIM in front of the receiver array automatically carries out the 2D discrete Fourier transform (DFT) as the incident waves propagate through it. As a result, the receiver array directly observes the angular spectrum of the incoming signal. In this context, the DOA estimates can be readily obtained by using probes to detect the energy distribution on the receiver array. This avoids the need for power-hungry radio frequency (RF) chains. To enable the SIM to perform the 2D DFT, we formulate the optimization problem of minimizing the fitting error between the SIM's EM response and the 2D DFT matrix. Furthermore, a gradient descent algorithm is customized for iteratively updating the phase shift of each meta-atom in the SIM. To further improve the DOA estimation accuracy, we configure the phase shift pattern in the zeroth layer of the SIM to generate a set of 2D DFT matrices associated with orthogonal spatial frequency bins. Additionally, we analytically evaluate the performance of the proposed SIM-based DOA estimator by deriving a tight upper bound for the mean square error (MSE). Our numerical simulations verify the capability of a well-trained SIM to perform DOA estimation and corroborate our theoretical analysis. It is demonstrated that a SIM, operating at optical computational speed, achieves an MSE of $10^{-4}$ for DOA estimation.
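As a loose single-layer sketch of the fitting step (the paper optimizes a multi-layer SIM with an EM propagation model; the propagation matrix `G` below is a hypothetical stand-in), gradient descent on the meta-atom phase shifts can reduce the Frobenius-norm error to a target DFT matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 8
# Target: unitary N-point DFT matrix (one dimension shown for brevity).
n = np.arange(N)
F = np.exp(-2j * np.pi * np.outer(n, n) / N) / np.sqrt(N)
# Hypothetical fixed propagation matrix from the metasurface to the array.
G = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(2 * N)

theta = rng.uniform(0.0, 2.0 * np.pi, N)   # one phase shift per meta-atom

def loss(theta):
    # Frobenius-norm fitting error between the SIM response and the target.
    A = np.exp(1j * theta)[:, None] * G
    return np.linalg.norm(A - F) ** 2

losses = [loss(theta)]
for _ in range(300):
    A = np.exp(1j * theta)[:, None] * G
    R = A - F
    # dL/dtheta_k = 2 Re[ sum_l conj(R_kl) * j * e^{j theta_k} * G_kl ]
    grad = 2.0 * np.real(
        np.sum(np.conj(R) * (1j * np.exp(1j * theta))[:, None] * G, axis=1)
    )
    theta -= 0.05 * grad
    losses.append(loss(theta))
```

A single phase-only layer cannot fit a full DFT matrix exactly, which is precisely why the paper stacks several metasurface layers; the sketch only shows the gradient update on the per-atom phases.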
Network telemetry based on data models is expected to become the standard mechanism for collecting operational data from network devices efficiently. However, the wide variety of standard and proprietary data models, together with the differing implementations of telemetry protocols offered by network vendors, becomes a barrier when monitoring heterogeneous network infrastructures. To facilitate the integration and sharing of context information related to model-driven telemetry, this work proposes a semantic network inventory that integrates new information models specifically developed to capture context information in a vendor-agnostic fashion using current standards defined for context management. To automate the integration of this context information within the network inventory, a reference architecture is designed. Finally, a prototype of the solution is implemented and validated through a case study that illustrates how the network inventory can ease the operation of model-driven telemetry in multi-vendor networks.
Sponge attacks aim to increase the energy consumption and computation time of neural networks deployed on hardware accelerators. Existing sponge attacks can be performed during inference via sponge examples or during training via Sponge Poisoning. Sponge examples leverage perturbations added to the model's input to increase energy and latency, while Sponge Poisoning alters the objective function of a model to induce inference-time energy/latency effects. In this work, we propose a novel sponge attack called SpongeNet. SpongeNet is the first sponge attack that is performed directly on the parameters of a pre-trained model. Our experiments show that SpongeNet can successfully increase the energy consumption of vision models with fewer samples than Sponge Poisoning requires. Our experiments also indicate that existing poisoning defenses are ineffective unless adjusted specifically to counter Sponge Poisoning (i.e., to decrease batch normalization bias values). Our work shows that SpongeNet is more effective on StarGAN than the state of the art. Additionally, SpongeNet is stealthier than the previous Sponge Poisoning attack, as it does not require significant changes to the victim model's weights. Our experiments indicate that the SpongeNet attack can be performed even when an attacker has access to only 1% of the entire dataset, achieving an energy increase of up to 11%.
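A hedged illustration of the mechanism implied above: on accelerators that skip zero operands, the fraction of non-zero post-ReLU activations is a common proxy for energy cost, and raising a batch-normalization-style bias increases that fraction. The names here are ours, not SpongeNet's code:

```python
import numpy as np

def activation_density(pre_activations):
    # Fraction of non-zero post-ReLU activations: a standard proxy for
    # energy cost on sparsity-exploiting hardware accelerators.
    post = np.maximum(pre_activations, 0.0)
    return np.count_nonzero(post) / post.size

rng = np.random.default_rng(42)
pre = rng.standard_normal(10_000)   # pre-activations of one layer
dense_pre = pre + 2.0               # effect of an increased bias term
```

Shifting roughly zero-mean pre-activations upward by the bias pushes most of them above the ReLU threshold, so nearly every multiply-accumulate involves a non-zero operand and the zero-skipping hardware loses its savings.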
The total energy cost of computing activities is steadily increasing, and projections indicate that it will be one of the dominant global energy consumers in the coming decades. However, perhaps due to its relative youth, the video game sector has not yet developed the same level of environmental awareness as other computing technologies, despite the estimated three billion regular video game players in the world. This work evaluates the energy consumption of the most widely used industry-scale video game engines: Unity and Unreal Engine. Specifically, our work uses three scenarios representing relevant aspects of video games (Physics, Static Meshes, and Dynamic Meshes) to compare the energy consumption of the engines. The aim is to determine the influence of each of the two engines on energy consumption. Our research has confirmed significant differences in the energy consumption of video game engines: 351% in Physics in favor of Unity, 17% in Static Meshes in favor of Unity, and 26% in Dynamic Meshes in favor of Unreal Engine. These results represent an opportunity for worldwide potential savings of at least 51 TWh per year, equivalent to the annual consumption of nearly 13 million European households, which might encourage a new branch of research on energy-efficient video game engines.
Graph neural networks (GNNs) have been demonstrated to be a powerful algorithmic model in broad application fields for their effectiveness in learning over graphs. To scale GNN training up for large-scale and ever-growing graphs, the most promising solution is distributed training, which distributes the workload of training across multiple computing nodes. However, the workflows, computational patterns, communication patterns, and optimization techniques of distributed GNN training are still only preliminarily understood. In this paper, we provide a comprehensive survey of distributed GNN training by investigating the various optimization techniques it uses. First, distributed GNN training is classified into several categories according to workflow, and the computational patterns, communication patterns, and optimization techniques proposed by recent work are introduced for each. Second, the software frameworks and hardware platforms of distributed GNN training are also introduced for a deeper understanding. Third, distributed GNN training is compared with distributed training of deep neural networks, emphasizing the uniqueness of distributed GNN training. Finally, interesting issues and opportunities in this field are discussed.
Graph Convolutional Networks (GCNs) have been widely applied in various fields due to their significant power in processing graph-structured data. Typical GCNs and their variants work under a homophily assumption (i.e., nodes of the same class are prone to connect to each other), while ignoring the heterophily that exists in many real-world networks (i.e., nodes of different classes tend to form edges). Existing methods deal with heterophily mainly by aggregating higher-order neighborhoods or combining intermediate representations, which introduces noise and irrelevant information into the result. However, these methods do not change the propagation mechanism itself, which operates under the homophily assumption and is a fundamental part of GCNs. This makes it difficult to distinguish the representations of nodes from different classes. To address this problem, in this paper we design a novel propagation mechanism that can automatically adapt the propagation and aggregation process according to the homophily or heterophily between node pairs. To adaptively learn the propagation process, we introduce two measurements of the homophily degree between node pairs, learned from topological and attribute information, respectively. We then incorporate the learnable homophily degree into the graph convolution framework, which is trained in an end-to-end manner, enabling it to go beyond the assumption of homophily. More importantly, we theoretically prove that our model can constrain the similarity of representations between nodes according to their homophily degree. Experiments on seven real-world datasets demonstrate that this new approach outperforms the state-of-the-art methods under heterophily or low homophily, and achieves competitive performance under homophily.
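A simplified, non-learnable sketch of the idea (the paper learns homophily degrees from topology and attributes end-to-end; here we substitute fixed cosine similarity of node attributes as the homophily degree):

```python
import numpy as np

def attribute_homophily(X, A):
    # Signed homophily degree in [-1, 1] for each existing edge,
    # approximated here by cosine similarity of node attributes.
    Xn = X / (np.linalg.norm(X, axis=1, keepdims=True) + 1e-12)
    return (Xn @ Xn.T) * A

def adaptive_propagate(X, A):
    # Neighbors contribute with a signed weight: homophilous neighbors pull
    # representations together, heterophilous ones push them apart.
    S = attribute_homophily(X, A)
    deg = np.abs(S).sum(axis=1, keepdims=True) + 1e-12
    return X + (S @ X) / deg
```

On a toy graph where a node has one similar and one dissimilar neighbor, this signed aggregation moves its representation toward the similar neighbor and away from the dissimilar one, instead of averaging both in as a standard homophily-based GCN layer would.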
Deep neural networks have revolutionized many machine learning tasks in power systems, ranging from pattern recognition to signal processing. The data in these tasks is typically represented in Euclidean domains. Nevertheless, there is an increasing number of applications in power systems where data are collected from non-Euclidean domains and represented as graph-structured data with high-dimensional features and interdependency among nodes. The complexity of graph-structured data has brought significant challenges to existing deep neural networks defined in Euclidean domains. Recently, many studies on extending deep neural networks to graph-structured data in power systems have emerged. In this paper, a comprehensive overview of graph neural networks (GNNs) in power systems is presented. Specifically, several classical paradigms of GNN structures (e.g., graph convolutional networks, graph recurrent neural networks, graph attention networks, graph generative networks, spatial-temporal graph convolutional networks, and hybrid forms of GNNs) are summarized, and key applications in power systems, such as fault diagnosis, power prediction, power flow calculation, and data generation, are reviewed in detail. Furthermore, the main issues and some research trends concerning the applications of GNNs in power systems are discussed.
Detecting carried objects is one of the requirements for developing systems that reason about activities involving people and objects. We present an approach to detect carried objects from a single video frame with a novel method that incorporates features from multiple scales. Initially, a foreground mask in a video frame is segmented into multi-scale superpixels. The human-like regions in the segmented area are then identified by matching a set of features extracted from the superpixels against learned features in a codebook. A carried object probability map is generated using the complement of the matching probabilities of superpixels to human-like regions, together with background information. A group of superpixels with high carried-object probability and strong edge support is then merged to obtain the shape of the carried object. We applied our method to two challenging datasets, and the results show that our method is competitive with or better than the state of the art.