Since the term blockchain was coined in 2008, interest in it has grown steadily in the community. This interest stems from the fact that it provides anonymity, security, and integrity without any central third-party organisation controlling data and transactions. It has attracted considerable research attention due to its advances across various platforms, as well as its limitations and challenges. Various Distributed Ledger Technologies exhibit distinctive features that overcome the limitations of other platforms. However, implementations of distributed ledger technologies differ substantially in their data structures, consensus protocols, and fault tolerance, among other aspects. Due to these variations, they differ considerably in cost, performance, latency, and security. In this paper, the workings of the major distributed ledger technologies and an in-depth comparison of their distinctive features, strengths, and weaknesses are presented and discussed against a set of identified criteria.
Robots that assist humans will need to interact with articulated objects such as cabinets or microwaves. Early systems for doing so used proprioceptive sensing to estimate joint mechanisms during contact. Nowadays, however, almost all systems rely on vision alone and no longer consider proprioceptive information during contact. We believe that proprioceptive information during contact is a valuable source of information, and we found no clear motivation in the literature for abandoning it. Therefore, in this paper, we create a system that, starting from a given grasp, uses proprioceptive sensing to open cabinets with a position-controlled robot and a parallel gripper. We perform a qualitative evaluation of this system and find that slip between the gripper and the handle limits performance. Nonetheless, the system already performs quite well. This poses the question: should we make more use of proprioceptive information during contact in articulated object manipulation systems, or is it not worth the added complexity, and can we manage with vision alone? We do not have an answer to this question, but we hope to spark some discussion on the matter. The codebase and videos of the system are available at //tlpss.github.io/revisiting-proprioception-for-articulated-manipulation/.
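The authors' actual pipeline lives in the linked codebase. Purely as a hypothetical illustration of what proprioceptive articulation estimation can look like, the sketch below fits a hinge (revolute joint) to gripper positions recorded while a door opens; the function name, the plane-plus-circle fit, and the input format are assumptions for the example, not the paper's method.

```python
import numpy as np

def estimate_hinge_from_ee_positions(positions):
    """Estimate a revolute-joint (hinge) axis from end-effector positions
    recorded while the gripper holds the handle of an opening door.

    positions: (N, 3) array of gripper positions in the robot base frame.
    Returns (center, axis, radius): a point on the hinge axis, the axis
    direction, and the handle-to-hinge radius.
    """
    P = np.asarray(positions, dtype=float)
    centroid = P.mean(axis=0)
    # The handle moves in a plane; the plane normal is the hinge axis direction.
    _, _, vt = np.linalg.svd(P - centroid)
    axis = vt[-1]                      # direction of least variance
    # Build an in-plane basis and project the points into 2D.
    u = vt[0]
    v = np.cross(axis, u)
    xy = np.stack([(P - centroid) @ u, (P - centroid) @ v], axis=1)
    # Algebraic (Kasa) circle fit: x^2 + y^2 = 2*a*x + 2*c*y + d.
    A = np.column_stack([2 * xy[:, 0], 2 * xy[:, 1], np.ones(len(xy))])
    b = (xy ** 2).sum(axis=1)
    (a, c, d), *_ = np.linalg.lstsq(A, b, rcond=None)
    radius = np.sqrt(d + a ** 2 + c ** 2)
    center = centroid + a * u + c * v  # a point on the hinge axis
    return center, axis, radius
```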
As an evolving successor to the mobile Internet, the Metaverse creates the impression of an immersive environment, integrating the virtual as well as the real world. In contrast to the traditional server-based mobile Internet, the Metaverse is constructed by billions of cooperating users harnessing their smart edge devices, which have limited communication and computation resources. In this immersive environment, an unprecedented amount of multi-modal data has to be processed. To circumvent this impending bottleneck, low-rate semantic communication might be harnessed in support of the Metaverse. But given that private multi-modal data is exchanged in the Metaverse, we have to guard against security breaches and privacy invasions. Hence we conceive a trustworthy semantic communication system for the Metaverse based on a federated learning architecture, exploiting its distributed decision-making and privacy-preserving capabilities. We conclude by identifying a suite of promising research directions and open issues.
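The abstract names a federated learning architecture without giving details. As a generic, minimal sketch of the aggregation step such architectures typically rely on (FedAvg-style weighted averaging of client models, not the paper's actual system; function name and data layout are assumptions), consider:

```python
import numpy as np

def federated_average(client_weights, client_sizes):
    """Minimal FedAvg-style aggregation: average client model parameters,
    weighted by each client's local dataset size.

    client_weights: list (one per client) of lists of np.ndarray parameters.
    client_sizes:   list of int, number of local samples per client.
    """
    total = float(sum(client_sizes))
    coeffs = np.array(client_sizes) / total          # proportional weighting
    global_weights = []
    for layer_params in zip(*client_weights):        # iterate layer by layer
        stacked = np.stack(layer_params)             # (num_clients, ...) per layer
        global_weights.append(np.tensordot(coeffs, stacked, axes=1))
    return global_weights
```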
This paper proposes a novel method for estimating the set of plausible poses of a rigid object from a set of points with volumetric information, such as whether each point is in free space or on the surface of the object. In particular, we study how pose can be estimated from force and tactile data arising from contact. Using data derived from contact is challenging because it is inherently less information-dense than visual data, and thus the pose estimation problem is severely under-constrained when there are few contacts. Rather than attempting to estimate the true pose of the object, which is not tractable without a large number of contacts, we seek to estimate a plausible set of poses which obey the constraints imposed by the sensor data. Existing methods struggle to estimate this set because they are either designed for single pose estimates or require informative priors to be effective. Our approach to this problem, Constrained pose Hypothesis Set Elimination (CHSEL), has three key attributes: 1) It considers volumetric information, which allows us to account for known free space; 2) It uses a novel differentiable volumetric cost function to take advantage of powerful gradient-based optimization tools; and 3) It uses methods from the Quality Diversity (QD) optimization literature to produce a diverse set of high-quality poses. To our knowledge, QD methods have not been used previously for pose registration. We also show how to update our plausible pose estimates online as more data is gathered by the robot. Our experiments suggest that CHSEL shows large performance improvements over several baseline methods for both simulated and real-world data.
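The abstract describes CHSEL's differentiable volumetric cost only in words. As a rough, hypothetical illustration of such a cost (not the authors' implementation), the snippet below scores a candidate pose against known surface contacts and known free-space points using a signed-distance function; the sphere SDF, function names, and penalty form are assumptions for the example, and in the actual method gradients would come from an autodiff framework while QD optimization maintains a diverse pose set.

```python
import numpy as np

def example_sphere_sdf(points, radius=0.05):
    """Signed distance to a sphere of the given radius, centred at the object origin."""
    return np.linalg.norm(points, axis=-1) - radius

def volumetric_pose_cost(R, t, sdf, surface_pts, free_pts, free_margin=0.0):
    """Cost of a candidate object pose (R, t) given volumetric contact data.

    R (3x3), t (3,): candidate pose mapping object frame -> world frame.
    sdf:         signed-distance function of the object, defined in the object frame.
    surface_pts: (N, 3) world points known to lie on the object surface (contacts).
    free_pts:    (M, 3) world points known to lie in free space (e.g. swept by the robot).
    """
    # Express the observed points in the object frame of the candidate pose.
    to_obj = lambda p: (p - t) @ R          # per-point equivalent of R.T @ (p - t)
    d_surf = sdf(to_obj(surface_pts))
    d_free = sdf(to_obj(free_pts))
    # Surface points should have (near-)zero signed distance.
    surface_cost = np.mean(d_surf ** 2) if len(surface_pts) else 0.0
    # Free-space points must not fall inside the object: hinge penalty when sdf < margin.
    free_cost = np.mean(np.maximum(free_margin - d_free, 0.0) ** 2) if len(free_pts) else 0.0
    return surface_cost + free_cost
```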
Research challenges such as climate change and the search for habitable planets increasingly use academic and commercial computing resources distributed across different institutions and physical sites. Furthermore, such analyses often require a level of automation that precludes direct human interaction, and securing these workflows involves adherence to security policies across institutions. In this paper, we present a decentralized authorization and security framework that enables researchers to utilize resources across different sites while allowing service providers to maintain autonomy over their secrets and authorization policies. We describe this framework as part of the Tapis platform, a web-based, hosted API used by researchers from multiple institutions, and we measure the performance of various authorization and security queries, including cross-site queries. We conclude with two use case studies -- a project at the University of Hawaii to study climate change and the NASA NEID telescope project that searches the galaxy for exoplanets.
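Tapis's actual protocol is not described in the abstract. As a generic sketch of the kind of cross-site, token-based authorization check the abstract alludes to, in which each site keeps its signing secret private and peers verify with public keys, one might write the following (PyJWT assumed; key material, claim names, and the function itself are placeholders, not the Tapis API):

```python
import jwt  # PyJWT

# Public keys of trusted peer sites; each site keeps its private signing key to itself.
SITE_PUBLIC_KEYS = {
    "site-a.example.org": "-----BEGIN PUBLIC KEY-----\n...\n-----END PUBLIC KEY-----",
}

def authorize_cross_site_request(token, required_role):
    """Verify a token issued by a peer site and evaluate a role claim locally."""
    issuer = jwt.decode(token, options={"verify_signature": False}).get("iss")
    key = SITE_PUBLIC_KEYS[issuer]                     # reject unknown issuers via KeyError
    claims = jwt.decode(token, key, algorithms=["RS256"])
    return required_role in claims.get("roles", [])
```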
In the modern era of digital transformation, the evolution of the fifth-generation (5G) wireless network has played a pivotal role in revolutionizing communication technology and accelerating the growth of smart technology applications. Enabled by the high-speed, low-latency characteristics of 5G, these applications have shown significant potential in various sectors, from healthcare and transportation to energy management and beyond. As a crucial component of smart technology, IoT systems for service delivery often face concept drift in network data stream analytics due to dynamic IoT environments, resulting in performance degradation. In this article, we propose a drift-adaptive framework called Adaptive Exponentially Weighted Average Ensemble (AEWAE), consisting of three stages: IoT data preprocessing, base model learning, and online ensembling. It is a data stream analytics framework that dynamically adjusts its ensemble method to tackle various drift scenarios. Experimental results on two public IoT datasets demonstrate that our proposed framework outperforms state-of-the-art methods, achieving high accuracy and efficiency in IoT data stream analytics.
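The abstract does not specify AEWAE's weight-update rule. The toy sketch below shows one common way an exponentially weighted online ensemble can adapt to drift; the class name, the Hedge-style update, and the scikit-learn-like predict interface of the base models are all assumptions for illustration, not the AEWAE algorithm itself.

```python
import numpy as np

class ExponentiallyWeightedEnsemble:
    """Toy online ensemble for streaming classification: each base model's
    weight decays exponentially with its recent mistakes, so models that
    cope better with the current concept dominate the vote."""

    def __init__(self, models, eta=0.5):
        self.models = models                      # pre-trained base classifiers
        self.weights = np.ones(len(models))
        self.eta = eta                            # learning rate of the weight update

    def predict(self, x):
        votes = {}
        for w, m in zip(self.weights, self.models):
            y = m.predict([x])[0]
            votes[y] = votes.get(y, 0.0) + w
        return max(votes, key=votes.get)          # weighted majority vote

    def update(self, x, y_true):
        # Exponentially down-weight models that misclassified the latest sample.
        for i, m in enumerate(self.models):
            if m.predict([x])[0] != y_true:
                self.weights[i] *= np.exp(-self.eta)
        self.weights /= self.weights.sum()        # keep weights normalised
```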
This paper systematizes log-based Transparency Enhancing Technologies. Drawing on established work on transparency from multiple disciplines, we outline the purpose, usefulness, and pitfalls of transparency. We then describe the mechanisms through which log-based transparency enhancing technologies are implemented: logging mechanisms, sanitisation mechanisms and their trade-offs with privacy, and data release and query mechanisms. We also discuss how transparency relates to the external mechanisms that provide the ability to contest a system and hold system operators accountable. We illustrate the role these mechanisms play with two case studies, Certificate Transparency and cryptocurrencies, showing both the role that transparency plays in their function and the issues these systems face in delivering transparency.
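To make the logging mechanism concrete, here is a minimal, simplified sketch of the append-only Merkle-tree log that underlies Certificate Transparency; it is not the RFC 6962 construction (which handles unbalanced trees differently, among other details), and the function names are ours.

```python
import hashlib

def _h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Root hash of an append-only log (domain-separated leaf and node hashes)."""
    level = [_h(b"\x00" + leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:                         # simplification: duplicate last node if odd
            level.append(level[-1])
        level = [_h(b"\x01" + level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def inclusion_proof(leaves, index):
    """Sibling hashes proving that leaves[index] is included under merkle_root(leaves)."""
    level = [_h(b"\x00" + leaf) for leaf in leaves]
    proof = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        proof.append(level[index ^ 1])             # sibling at this level
        level = [_h(b"\x01" + level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify_inclusion(leaf, index, proof, root):
    """Recompute the root from a leaf and its sibling path; anyone can audit the log."""
    node = _h(b"\x00" + leaf)
    for sibling in proof:
        node = _h(b"\x01" + node + sibling) if index % 2 == 0 else _h(b"\x01" + sibling + node)
        index //= 2
    return node == root
```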
In recent years, decentralized currencies built on blockchains have become increasingly popular because of their transparent nature and the absence of a central controlling authority. Although a great deal of computation power, disk space, and energy is used to run these systems, most of these resources are dedicated to keeping bad actors away through consensus mechanisms such as Proof of Work, Proof of Stake, and Proof of Space. In this paper, we discuss a way to combine these consensus mechanisms and repurpose this defensive effort to create actual value for end-users, by providing a solution for securely storing their data in a decentralized manner without compromising the integrity of the blockchain.
Concept-based interpretability methods aim to explain deep neural network model predictions using a predefined set of semantic concepts. These methods evaluate a trained model on a new, "probe" dataset and correlate model predictions with the visual concepts labeled in that dataset. Despite their popularity, they suffer from limitations that are not well understood or articulated in the literature. In this work, we analyze three commonly overlooked factors in concept-based explanations. First, the choice of the probe dataset has a profound impact on the generated explanations. Our analysis reveals that different probe datasets may lead to very different explanations, and suggests that the explanations do not generalize outside the probe dataset. Second, we find that concepts in the probe dataset are often less salient and harder to learn than the classes they claim to explain, calling into question the correctness of the explanations. We argue that only visually salient concepts should be used in concept-based explanations. Finally, while existing methods use hundreds or even thousands of concepts, our human studies reveal a much stricter upper bound of 32 concepts or fewer, beyond which the explanations are much less practically useful. We make suggestions for future development and analysis of concept-based interpretability methods. Code for our analysis and user interface can be found at \url{//github.com/princetonvisualai/OverlookedFactors}
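As a hedged illustration of the general recipe these methods follow (correlating model predictions on a probe dataset with concept annotations), the sketch below fits a sparse linear model from binary concept labels to the target model's predictions and ranks concepts by coefficient magnitude; it is not any specific method from the paper, and the function name and hyperparameters are illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def concept_importances(concept_labels, model_predictions, concept_names):
    """Toy concept-based explanation: predict the target model's decisions on a
    probe dataset from binary concept annotations, then read the learned
    coefficients as concept importances.

    concept_labels:    (N, C) binary matrix, probe images x annotated concepts.
    model_predictions: (N,) binary predictions of the model being explained.
    concept_names:     list of C human-readable concept names.
    """
    clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)  # sparse surrogate
    clf.fit(concept_labels, model_predictions)
    coefs = clf.coef_.ravel()
    # Rank concepts by how strongly they drive the surrogate's decisions.
    return sorted(zip(concept_names, coefs), key=lambda nc: -abs(nc[1]))
```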
Graph neural networks (GNNs) are a class of deep learning models that learn over graphs and have been successfully applied in many domains. Despite their effectiveness, it remains challenging for GNNs to scale efficiently to large graphs. As a remedy, distributed computing has become a promising solution for training large-scale GNNs, since it can provide abundant computing resources. However, dependencies imposed by the graph structure make high-efficiency distributed GNN training difficult, as training suffers from massive communication and workload imbalance. In recent years, many efforts have been made on distributed GNN training, and an array of training algorithms and systems have been proposed. Yet there is a lack of a systematic review of the optimization techniques spanning graph processing to distributed execution. In this survey, we analyze three major challenges in distributed GNN training: massive feature communication, loss of model accuracy, and workload imbalance. We then introduce a new taxonomy of the optimization techniques in distributed GNN training that address these challenges. The new taxonomy classifies existing techniques into four categories: GNN data partition, GNN batch generation, GNN execution model, and GNN communication protocol. We carefully discuss the techniques in each category. Finally, we summarize existing distributed GNN systems for multi-GPU, GPU-cluster, and CPU-cluster settings, respectively, and discuss future directions toward scalable GNNs.
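As a small illustrative aid, not drawn from any surveyed system, the sketch below estimates the per-iteration feature traffic implied by a vertex-to-worker assignment, which is why the GNN data partition category matters for the massive-feature-communication challenge; the function name and the float32 assumption are ours.

```python
import numpy as np

def cross_partition_communication(edges, assignment, feature_dim):
    """Rough estimate of per-iteration feature communication for a partitioning.

    Every edge whose endpoints live on different workers forces the source node's
    feature vector to be sent over the network during neighbour aggregation.

    edges:       (E, 2) array of (src, dst) node indices.
    assignment:  (N,) array mapping each node to a worker id.
    feature_dim: number of floats per node feature vector.
    """
    src_part = assignment[edges[:, 0]]
    dst_part = assignment[edges[:, 1]]
    remote_edges = int(np.count_nonzero(src_part != dst_part))
    return remote_edges * feature_dim * 4          # bytes, assuming float32 features
```

A better partition (fewer cut edges) directly lowers this estimate, which is the intuition behind the data-partition techniques the taxonomy groups together.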
Many current applications use recommendations to modify natural user behavior, for example to increase the number of sales or the time spent on a website. This creates a gap between the final recommendation objective and the classical setup, in which recommendation candidates are evaluated by their coherence with past user behavior, by predicting either the missing entries in the user-item matrix or the most likely next event. To bridge this gap, we optimize a recommendation policy for the task of increasing the desired outcome relative to organic user behavior. We show that this is equivalent to learning to predict recommendation outcomes under a fully random recommendation policy. To this end, we propose a new domain adaptation algorithm that learns from logged data containing outcomes from a biased recommendation policy and predicts recommendation outcomes under random exposure. We compare our method against state-of-the-art factorization methods as well as recent causal recommendation approaches, and show significant improvements.
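The paper's domain adaptation algorithm is not spelled out in the abstract. As a minimal sketch of the underlying idea of re-weighting logged, biased feedback toward random exposure (plain inverse propensity scoring, not the authors' method; names and inputs are assumptions), consider:

```python
import numpy as np

def ips_estimate_random_policy(rewards, logging_propensities, n_items):
    """Estimate the average outcome a uniformly random recommendation policy
    would obtain, using logs collected under a biased policy.

    rewards:              (N,) observed outcomes (e.g. click / purchase) of logged recommendations.
    logging_propensities: (N,) probability the biased policy assigned to the logged item.
    n_items:              size of the recommendation catalogue.
    """
    target_prob = 1.0 / n_items                      # uniform random target policy
    weights = target_prob / logging_propensities     # importance weights toward random exposure
    return float(np.mean(weights * rewards))
```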