Nowadays, improving the energy efficiency of high-performance computing (HPC) systems is one of the main drivers of scientific and technological research. Since large-scale HPC systems require some fault-tolerance method, the opportunities this creates to reduce energy consumption should be explored. In particular, rollback-recovery methods based on uncoordinated checkpoints avoid having all processes re-execute when a failure occurs. In this context, it is possible to take actions to reduce the energy consumption of the nodes whose processes do not re-execute. This work extends a previous one, in which we proposed a series of strategies to manage energy consumption at failure time. Here, we have enriched our simulator and the experimentation by including non-blocking communications (with and without system buffering) and a larger number of candidate processes to analyze. We call the latter \textit{cascade analysis}, because it includes processes that become blocked indirectly through communication with the failed process. The simulations show that the savings were negligible in the worst case, but in some scenarios significant savings were possible; the maximum saving achieved was 90\% over a time interval of 16 minutes. As a result, we show the feasibility of improving the energy efficiency of HPC systems in the presence of a failure.
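As a rough illustration of what a \textit{cascade analysis} might look like, the sketch below walks a hypothetical communication-dependency map to collect every process transitively blocked on the failed one; the data model, names, and traversal are our own simplification for intuition, not the simulator's actual implementation.

```python
from collections import deque

def cascade_blocked(failed, waits_on):
    """Return the set of processes transitively blocked on `failed`.

    `waits_on` maps each process to the processes it is blocked on
    (e.g., pending receives without system buffering). This dependency
    model is hypothetical and for illustration only.
    """
    # Invert the dependency map: who is waiting on each process?
    blocked_by = {}
    for proc, deps in waits_on.items():
        for dep in deps:
            blocked_by.setdefault(dep, set()).add(proc)

    # Breadth-first traversal from the failed process: any process
    # waiting on it, directly or indirectly, is a candidate for
    # energy-saving actions until recovery completes.
    seen, queue = {failed}, deque([failed])
    while queue:
        current = queue.popleft()
        for waiter in blocked_by.get(current, ()):
            if waiter not in seen:
                seen.add(waiter)
                queue.append(waiter)
    seen.discard(failed)
    return seen

# Example: P2 waits on P1 (failed), P3 waits on P2 -> both are candidates.
print(cascade_blocked("P1", {"P2": {"P1"}, "P3": {"P2"}}))  # {'P2', 'P3'}
```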
Integrating robot systems into manufacturing lines is a time-consuming process. In the era of digitalization, the research and development of new technologies is crucial for improving integration processes. Numerous challenges, including a lack of standardization and intricate stakeholder relationships, complicate the integration of robotic systems. This process typically consists of the acquisition, integration, and deployment of robot systems. This thesis focuses on three areas that help automate and simplify robotic systems integration. In the first area, related to acquisition, a constraint-based configurator is demonstrated that resolves compatibility challenges between robot devices and automates the configuration process. This reduces the risk of integrating incompatible devices and decreases the need for experts during the configuration phase. In the second area, related to integration, the interoperable modeling format Unified Robot Description Format (URDF) is investigated through a detailed analysis that reveals significant inconsistencies and identifies critical improvements. This format is widely used for the kinematic modeling and 3D visualization of robots, and its models can be reused across simulation tools. Improving this format benefits a wide range of users, including robotics engineers, researchers, and students. In the third area, related to deployment, Digital Twins (DTs) for robot systems are explored, as these improve efficiency and reduce downtime. A comprehensive literature review of DTs is conducted, and a case study of modular robot systems is developed. This research can accelerate the adoption of DTs in the robotics industry. These insights and approaches improve the process of robotic systems integration, offering valuable contributions that future research can build upon, ultimately driving efficiency and reducing costs.
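To make the configurator idea concrete, here is a minimal constraint-satisfaction sketch of device compatibility checking. The device names, attributes, and constraints are invented for illustration; the thesis's actual configurator and constraint model are not shown here.

```python
from itertools import combinations

# Toy device catalog: attributes a configurator might reason over.
devices = {
    "arm":     {"mount": "ISO-9409-50", "voltage": 48, "bus": "EtherCAT"},
    "gripper": {"mount": "ISO-9409-50", "voltage": 24, "bus": "EtherCAT"},
}

constraints = [
    # Each constraint takes two device specs and returns True if compatible.
    lambda a, b: a["mount"] == b["mount"],   # mechanical interface must match
    lambda a, b: a["bus"] == b["bus"],       # devices must share a fieldbus
]

def compatible(selection):
    """Check every pair of selected devices against every constraint."""
    return all(
        c(devices[x], devices[y])
        for x, y in combinations(selection, 2)
        for c in constraints
    )

print(compatible(["arm", "gripper"]))  # True under these toy constraints
```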
Wireless powered communication (WPC) integrates energy harvesting and data transmission, allowing devices to communicate without constant battery replacements or wired power sources. Reconfigurable intelligent surfaces (RISs) can dynamically manipulate radio signals. In this paper, we explore the use of active RIS elements to mitigate the double-fading challenge inherent in RIS-aided links. By combining an active RIS with WPC, we enhance the reliability of an energy-constrained user. The theoretical closed-form analysis, which covers transmission rate, harvested energy, and outage probability, provides valuable insights that inform parameter selection.
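For intuition about the outage metric, the toy Monte Carlo sketch below estimates the outage probability of an active RIS-aided link. The channel model (independent Rayleigh fading per hop, a fixed amplification gain, lumped noise terms) and all parameter values are illustrative assumptions, not the paper's system model or closed-form results.

```python
import numpy as np

rng = np.random.default_rng(0)
N, trials = 16, 100_000             # RIS elements, Monte Carlo samples
p_tx, gain, r_min = 1.0, 4.0, 1.0   # transmit power, element gain, rate threshold

h = rng.rayleigh(scale=1.0, size=(trials, N))   # Tx -> RIS hop amplitudes
g = rng.rayleigh(scale=1.0, size=(trials, N))   # RIS -> Rx hop amplitudes

# Coherent combining across elements; active elements add amplification
# noise, modeled here as a fixed extra noise term at the receiver.
signal = p_tx * gain * (h * g).sum(axis=1) ** 2
noise = 1.0 + gain * N                          # receiver + amplification noise
rate = np.log2(1.0 + signal / noise)            # achievable rate per trial

print("outage probability ~", np.mean(rate < r_min))
```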
Researchers worldwide are working on control solutions that enhance robots' ability to navigate dynamic environments autonomously. However, robots still have limited capabilities, and many navigation tasks on Earth and other planets remain difficult. This paper presents the development of a control system for a differential drive-wheeled mobile robot that autonomously controls its position, heading, and speed based on given destination information and surrounding data gathered through mounted proximity and GPS sensors. The intelligence of this control system is implemented using a fuzzy logic algorithm, a powerful tool for handling unmodeled systems such as the dynamically changing environment considered in this research. The fuzzy controller is used to address the problems associated with navigation in an obstacle-strewn environment, including position estimation, path planning, and obstacle avoidance. In this study, the modeling, design, and simulation of the system were carried out. Simulation results show that the developed mobile robot travels successfully from any starting location to the destination without colliding with obstacles.
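The following minimal sketch illustrates the fuzzy-logic idea behind the obstacle-avoidance behavior: fuzzify an obstacle distance, apply two rules, and defuzzify to a steering command. The membership breakpoints and rule outputs are illustrative choices, not the controller tuned in the paper.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def steering(distance_m):
    near = tri(distance_m, 0.0, 0.0, 1.0)   # degree to which obstacle is near
    far = tri(distance_m, 0.5, 2.0, 2.0)    # degree to which obstacle is far
    # Rule 1: IF near THEN turn hard (45 deg); Rule 2: IF far THEN go straight.
    # Weighted-average defuzzification over the two rule outputs:
    total = near + far
    return (near * 45.0 + far * 0.0) / total if total else 0.0

for d in (0.2, 0.8, 1.5):
    print(f"distance {d} m -> steer {steering(d):.1f} deg")
```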
Quantum computing is finding promising applications in optimization, machine learning, and physics, leading to the development of various models for representing quantum information. Because these representations are often studied in different contexts (many-body physics, machine learning, formal verification, simulation), little is known about fundamental trade-offs between their succinctness and the runtime of operations to update them. We therefore analytically investigate three widely used quantum state representations: matrix product states (MPS), decision diagrams (DDs), and restricted Boltzmann machines (RBMs). We map the relative succinctness of these data structures and provide the complexity of relevant query and manipulation operations. Further, to chart the balance between succinctness and operation efficiency, we extend the concept of rapidity with support for the non-canonical data structures studied in this work, showing in particular that MPS is at least as rapid as some DDs. By providing a knowledge compilation map for quantum state representations, this paper contributes to the understanding of the inherent time and space efficiency trade-offs in this area.
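As a small illustration of the MPS data structure, the sketch below stores one 3-index tensor per site and reads off a basis-state amplitude with a chain of small matrix products. The GHZ construction is a standard textbook example, used here only to show the representation the paper analyzes, not to reproduce its complexity results.

```python
import numpy as np

def amplitude(tensors, bits):
    """Contract an MPS (tensors of shape (Dl, 2, Dr)) against basis state `bits`."""
    vec = np.ones(1)
    for tensor, bit in zip(tensors, bits):
        vec = vec @ tensor[:, bit, :]   # one O(D^2) matrix-vector step per site
    return vec.item()

def ghz_mps(n):
    """Bond-dimension-2 MPS for the n-qubit GHZ state (|0...0> + |1...1>)/sqrt(2)."""
    first = np.zeros((1, 2, 2)); first[0, 0, 0] = first[0, 1, 1] = 2 ** -0.5
    core = np.zeros((2, 2, 2)); core[0, 0, 0] = core[1, 1, 1] = 1.0  # bond copies the bit
    last = np.zeros((2, 2, 1)); last[0, 0, 0] = last[1, 1, 0] = 1.0
    return [first] + [core] * (n - 2) + [last]

mps = ghz_mps(4)
print(amplitude(mps, [0, 0, 0, 0]))  # ~0.707
print(amplitude(mps, [0, 1, 0, 0]))  # 0.0
```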
Spiking neural networks (SNNs) take inspiration from the brain to enable energy-efficient computations. Since the advent of Transformers, SNNs have struggled to compete with artificial networks on modern sequential tasks, as they inherit limitations from recurrent neural networks (RNNs), with the added challenge of training with non-differentiable binary spiking activations. However, a recent renewed interest in efficient alternatives to Transformers has given rise to state-of-the-art recurrent architectures named state space models (SSMs). This work systematically investigates, for the first time, the intersection of state-of-the-art SSMs with SNNs for long-range sequence modelling. Results suggest that SSM-based SNNs can outperform the Transformer on all tasks of a well-established long-range sequence modelling benchmark. It is also shown that SSM-based SNNs can outperform current state-of-the-art SNNs with fewer parameters on sequential image classification. Finally, a novel feature mixing layer is introduced, improving SNN accuracy while challenging assumptions about the role of binary activations in SNNs. This work paves the way for deploying powerful SSM-based architectures, such as large language models, to neuromorphic hardware for energy-efficient long-range sequence modelling.
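To sketch how an SSM and spiking activations can be combined, the toy example below runs a diagonal linear recurrence that produces membrane potentials and thresholds them into binary spikes with a reset. The parameters and threshold rule are illustrative; the paper's trained architectures (which would also require surrogate gradients for the non-differentiable spikes) are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
T, d = 100, 8                            # sequence length, state size
A = np.exp(-rng.uniform(0.01, 0.5, d))   # stable diagonal state transitions
B = rng.normal(size=d) * 0.3             # input projection
threshold = 0.5

u = rng.normal(size=T)                   # input sequence
x = np.zeros(d)                          # hidden state (membrane potentials)
spikes = np.zeros((T, d))
for t in range(T):
    x = A * x + B * u[t]                 # linear SSM recurrence
    spikes[t] = (x > threshold)          # binary spiking activation
    x = np.where(spikes[t] > 0, 0.0, x)  # reset potentials that fired

print("spike rate:", spikes.mean())
```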
The success of artificial intelligence (AI), and deep learning models in particular, has led to their widespread adoption across various industries due to their ability to process huge amounts of data and learn complex patterns. However, due to their lack of explainability, there are significant concerns regarding their use in critical sectors, such as finance and healthcare, where decision-making transparency is of paramount importance. In this paper, we provide a comparative survey of methods that aim to improve the explainability of deep learning models within the context of finance. We categorize the collection of explainable AI methods according to their corresponding characteristics, and we review the concerns and challenges of adopting explainable AI methods, together with future directions we deem appropriate and important.
Analyzing observational data from multiple sources can be useful for increasing statistical power to detect a treatment effect; however, practical constraints such as privacy considerations may restrict individual-level information sharing across data sets. This paper develops federated methods that only utilize summary-level information from heterogeneous data sets. Our federated methods provide doubly-robust point estimates of treatment effects as well as variance estimates. We derive the asymptotic distributions of our federated estimators, which are shown to be asymptotically equivalent to the corresponding estimators from the combined, individual-level data. We show that to achieve these properties, federated methods should be adjusted based on conditions such as whether models are correctly specified and stable across heterogeneous data sets.
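The sketch below illustrates the doubly-robust (AIPW) building block and one simple way to combine summary-level results across sites, namely inverse-variance weighting. This combination rule is a generic stand-in for exposition; the paper's federated adjustments for model misspecification and cross-site instability are not reproduced here.

```python
import numpy as np

def aipw(y, a, propensity, mu1, mu0):
    """Per-site AIPW summary (estimate, variance) for the average treatment effect."""
    scores = (mu1 - mu0
              + a * (y - mu1) / propensity
              - (1 - a) * (y - mu0) / (1 - propensity))
    return scores.mean(), scores.var(ddof=1) / len(y)

def federate(estimates, variances):
    """Combine site-level summaries by inverse-variance weighting."""
    w = 1.0 / np.asarray(variances)
    return (w * np.asarray(estimates)).sum() / w.sum(), 1.0 / w.sum()

# Toy data: two sites, true effect 1.0, known propensity 0.5,
# correctly specified outcome models E[Y|A=1]=1 and E[Y|A=0]=0.
rng = np.random.default_rng(0)
summaries = []
for n in (500, 800):
    a = rng.integers(0, 2, n)
    y = 1.0 * a + rng.normal(size=n)
    summaries.append(aipw(y, a, 0.5, np.ones(n), np.zeros(n)))

est, var = federate(*zip(*summaries))
print(f"federated ATE ~ {est:.3f} (se {var ** 0.5:.3f})")
```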
Embedding entities and relations into a continuous multi-dimensional vector space has become the dominant method for knowledge graph embedding in representation learning. However, most existing models fail to represent hierarchical knowledge, such as the similarities and dissimilarities of entities within one domain. We propose to learn domain representations on top of existing knowledge graph embedding models, such that entities with similar attributes are organized into the same domain. Such hierarchical knowledge of domains can provide further evidence for link prediction. Experimental results show that domain embeddings yield a significant improvement over the most recent state-of-the-art baseline knowledge graph embedding models.
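One generic way to picture a domain representation layered on entity embeddings is shown below: each domain gets a center vector, and entities assigned to that domain are pulled toward it so same-domain entities cluster together. This construction is purely for intuition and is not the paper's exact model or training objective.

```python
import numpy as np

rng = np.random.default_rng(0)
dim, n_entities = 16, 6
entity = rng.normal(size=(n_entities, dim)) * 0.1
domain_of = np.array([0, 0, 0, 1, 1, 1])       # toy domain assignment
domain = rng.normal(size=(2, dim)) * 0.1       # one center vector per domain

lr = 0.1
for _ in range(200):
    # Pull each entity toward its domain center, and each center toward
    # the mean of its members, so same-domain entities converge together.
    centers = domain[domain_of]
    grad = entity - centers
    entity -= lr * grad
    for d in range(2):
        domain[d] += lr * grad[domain_of == d].mean(axis=0)

# Same-domain entities end up much closer than cross-domain ones.
print(np.linalg.norm(entity[0] - entity[1]),   # small (same domain)
      np.linalg.norm(entity[0] - entity[3]))   # larger (different domains)
```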
How can we estimate the importance of nodes in a knowledge graph (KG)? A KG is a multi-relational graph that has proven valuable for many tasks, including question answering and semantic search. In this paper, we present GENI, a method for tackling the problem of estimating node importance in KGs, which enables several downstream applications such as item recommendation and resource allocation. While a number of approaches have been developed to address this problem for general graphs, they do not fully utilize the information available in KGs, or lack the flexibility needed to model complex relationships between entities and their importance. To address these limitations, we explore supervised machine learning algorithms. In particular, building upon recent advances in graph neural networks (GNNs), we develop GENI, a GNN-based method designed to deal with the distinctive challenges involved in predicting node importance in KGs. Our method aggregates importance scores, instead of node embeddings, via a predicate-aware attention mechanism and flexible centrality adjustment. In our evaluation of GENI and existing methods on predicting node importance in real-world KGs with different characteristics, GENI achieves 5-17% higher NDCG@100 than the state of the art.
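The conceptual sketch below shows the core idea the abstract describes: a node's importance is a weighted average of its neighbors' importance scores, with attention weights that depend on the connecting predicate, followed by a centrality adjustment. The toy graph, fixed per-predicate weights, and logarithmic degree adjustment are illustrative stand-ins for GENI's learned parameters.

```python
import math

# Toy KG: (neighbor, predicate) lists per node.
neighbors = {
    "a": [("b", "directed"), ("c", "acted_in")],
    "b": [("a", "directed")],
    "c": [("a", "acted_in")],
}
pred_weight = {"directed": 2.0, "acted_in": 1.0}  # stand-in for learned attention
score = {"a": 1.0, "b": 3.0, "c": 0.5}            # current importance scores

def aggregate(node):
    """One step of predicate-aware score aggregation for `node`."""
    nbrs = neighbors[node]
    logits = [pred_weight[p] for _, p in nbrs]
    zmax = max(logits)
    exp = [math.exp(l - zmax) for l in logits]    # softmax over neighbors
    total = sum(exp)
    agg = sum(e / total * score[n] for e, (n, _) in zip(exp, nbrs))
    return agg * math.log(1 + len(nbrs))          # degree-based centrality adjustment

print(aggregate("a"))
```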
Recently, graph neural networks (GNNs) have revolutionized the field of graph representation learning through effectively learned node embeddings, and achieved state-of-the-art results in tasks such as node classification and link prediction. However, current GNN methods are inherently flat and do not learn hierarchical representations of graphs---a limitation that is especially problematic for the task of graph classification, where the goal is to predict the label associated with an entire graph. Here we propose DiffPool, a differentiable graph pooling module that can generate hierarchical representations of graphs and can be combined with various graph neural network architectures in an end-to-end fashion. DiffPool learns a differentiable soft cluster assignment for nodes at each layer of a deep GNN, mapping nodes to a set of clusters, which then form the coarsened input for the next GNN layer. Our experimental results show that combining existing GNN methods with DiffPool yields an average improvement of 5-10% accuracy on graph classification benchmarks, compared to all existing pooling approaches, achieving a new state-of-the-art on four out of five benchmark data sets.
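A minimal numpy sketch of one DiffPool step follows: a soft assignment matrix S maps n nodes to k clusters, coarsening both features and adjacency (X' = S^T Z and A' = S^T A S, as described in the paper). Here the embedding and assignment matrices come from random linear maps standing in for the two GNNs that would normally produce them.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 6, 4, 2                        # nodes, feature dim, clusters

A = (rng.random((n, n)) < 0.4).astype(float)
A = np.triu(A, 1); A = A + A.T           # symmetric adjacency, no self-loops
X = rng.normal(size=(n, d))              # node features

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

W_embed = rng.normal(size=(d, d))        # stand-in for the embedding GNN
W_pool = rng.normal(size=(d, k))         # stand-in for the assignment GNN
Z = np.tanh(A @ X @ W_embed)             # node embeddings at this layer
S = softmax(A @ X @ W_pool)              # soft cluster assignments (n x k)

X_coarse = S.T @ Z                       # (k x d) pooled features for next layer
A_coarse = S.T @ A @ S                   # (k x k) coarsened adjacency

print(X_coarse.shape, A_coarse.shape)    # (2, 4) (2, 2)
```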