Centralized PKI is not a suitable solution for providing identities in large-scale IoT systems. The main problem is the high cost of managing X.509 certificates throughout their lifecycle, from installation to regular updates and revocation. Self-Sovereign Identity (SSI) is a decentralised alternative that reduces the need for human intervention and therefore has the potential to significantly reduce the complexity and cost associated with identity management in large-scale IoT systems. However, to leverage the full potential of SSI, the authentication of IoT nodes needs to be moved from the application level to the Transport Layer Security (TLS) level. This paper contributes to the adoption of SSI in large-scale IoT systems by addressing, for the first time, the extension of the original TLS 1.3 handshake to support two new SSI authentication modes while maintaining interoperability with nodes implementing the original handshake protocol. An open-source implementation of the new TLS 1.3 handshake protocol in OpenSSL is used to experimentally demonstrate the feasibility of the approach.
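The key interoperability idea described here is that peers must agree on an authentication mode and fall back to the original X.509-based handshake when one side does not support SSI. The following is a minimal, hypothetical sketch of such a negotiation step in Python; the mode names ("vc", "did", "x509") and the function are illustrative only and are not taken from the paper or from any IANA registry.

```python
def negotiate_auth_mode(client_supported, server_supported, default="x509"):
    """Pick the first authentication mode in the client's preference-ordered
    list that the server also supports; otherwise fall back to plain X.509 so
    that legacy TLS 1.3 peers still interoperate.

    client_supported / server_supported: preference-ordered lists such as
    ["vc", "did", "x509"]; purely illustrative names."""
    for mode in client_supported:
        if mode in server_supported:
            return mode
    return default  # original TLS 1.3 behaviour preserved for legacy nodes


# Example: a server that only speaks classic TLS forces the X.509 fallback.
assert negotiate_auth_mode(["vc", "x509"], ["x509"]) == "x509"
```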
While operating communication networks adaptively may improve utilization and performance, frequent adjustments also introduce an algorithmic challenge: re-optimizing traffic engineering solutions is time-consuming and may limit the granularity at which a network can be adjusted. This paper is motivated by the question of whether the reactivity of a network can be improved by re-optimizing solutions dynamically rather than from scratch, especially if inputs such as link weights do not change significantly. We explore to what extent dynamic algorithms can be used to speed up fundamental tasks in network operations. We specifically investigate optimizations related to traffic engineering (namely shortest-path and maximum-flow computations), but also consider spanning tree and matching applications. While prior work on dynamic graph algorithms focuses on link insertions and deletions, we are interested in the practical problem of link weight changes. We revisit existing upper bounds in the weight-dynamic model and present several novel lower bounds on the amortized runtime for recomputing solutions. In general, we find that the potential performance gains depend on the application, and there are also strict limitations on what can be achieved, even if link weights change only slightly.
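To make the "re-optimize dynamically instead of from scratch" idea concrete, here is a minimal sketch (not taken from the paper) of a weight-dynamic shortest-path update: when a single link weight decreases, only the vertices whose distance actually improves are re-relaxed, instead of rerunning Dijkstra over the whole graph. The data layout (`graph` as a dict of dicts, a precomputed `dist` array) is an assumption for illustration.

```python
import heapq

def decrease_edge_weight(graph, dist, u, v, new_w):
    """Update single-source shortest-path distances after the weight of edge
    (u, v) decreases to new_w.

    graph: node -> {neighbor: weight}; dist: distances from a fixed source,
    computed beforehand for every node. Only the affected region is re-relaxed,
    which is where the speed-up over recomputation from scratch comes from."""
    graph[u][v] = new_w
    if dist[u] + new_w >= dist[v]:
        return dist  # the cheaper edge opens no new shortcut; nothing to do
    dist[v] = dist[u] + new_w
    heap = [(dist[v], v)]
    while heap:
        d, x = heapq.heappop(heap)
        if d > dist[x]:
            continue  # stale heap entry
        for y, w in graph[x].items():
            if d + w < dist[y]:
                dist[y] = d + w
                heapq.heappush(heap, (dist[y], y))
    return dist
```

Weight increases are the harder direction (improvements may have to be retracted), which is one reason lower bounds in the weight-dynamic model are of interest.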
Classical locally recoverable codes (LRCs) have become indispensable in distributed storage systems, as they enable efficient recovery from localized errors. Quantum LRCs have very recently been introduced for their potential application in quantum data storage. In this paper, we use classical LRCs to investigate quantum LRCs. We prove that the parameters of quantum LRCs are bounded by those of their classical counterparts, and we deduce bounds on the parameters of quantum LRCs from known bounds on the classical ones. We establish a characterization of optimal pure quantum LRCs in terms of classical codes with specific properties. Using well-crafted classical LRCs as ingredients in the construction of quantum CSS codes, we give the first construction of several families of optimal pure quantum LRCs.
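For orientation, two standard classical facts underlie this kind of construction; the quantum-LRC bounds themselves are the paper's contribution and are not reproduced here. First, the Singleton-like bound for an $[n,k,d]$ LRC with locality $r$, and second, the parameters of the CSS construction from a pair of nested classical codes:

```latex
% Singleton-like bound for a classical [n,k,d] LRC with locality r:
d \le n - k - \left\lceil \frac{k}{r} \right\rceil + 2 .

% CSS construction: for classical codes C_2 \subseteq C_1 \subseteq \mathbb{F}_q^n
% of dimensions k_2 < k_1, one obtains an [[n,\, k_1 - k_2,\, d]]_q quantum code with
d \ \ge\ \min\bigl\{ \operatorname{wt}(c) \;:\; c \in (C_1 \setminus C_2) \cup (C_2^{\perp} \setminus C_1^{\perp}) \bigr\}.
```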
Small obstacles on the ground often lead to a fall when caught by commercial prosthetic feet. Although some recently developed feet can actively control the ankle angle, for instance on slopes, their flat and rigid sole remains a cause of instability on uneven ground. Soft robotic feet were recently proposed to tackle this issue; however, they lack consistent experimental validation. This paper therefore describes the experimental setup built to test soft and rigid prosthetic feet with lower-limb prosthesis users. It includes a wooden walkway and differently shaped obstacles. It was preliminarily validated with an able-bodied subject, the same subject walking on commercial prostheses through modified walking boots, and a prosthesis user. They walked first on even ground and then on even ground while stepping on one of the obstacles. Results in terms of vertical ground reaction force and knee moments in both the sagittal and frontal planes show how the poor performance of commonly used prostheses is exacerbated in the presence of obstacles. The prosthesis user, indeed, relies noticeably on the sound leg to compensate for the stiff and unstable interaction of the prosthetic limb with the obstacle. Since, as expected, the limitations of non-adaptive prosthetic feet in dealing with obstacles emerge from the experiments, this study justifies the use of the setup for investigating the performance of soft feet on uneven ground and in obstacle negotiation.
Existing out-of-distribution (OOD) detection methods have shown great success on balanced datasets but become ineffective in long-tailed recognition (LTR) scenarios, where 1) OOD samples are often wrongly classified into head classes and/or 2) tail-class samples are treated as OOD samples. To address these issues, current studies fit a prior distribution of auxiliary/pseudo OOD data to the long-tailed in-distribution (ID) data. However, it is difficult to obtain such an accurate prior distribution given that real OOD samples are unknown and the class imbalance in LTR is heavy. A straightforward way to avoid the need for this prior is to learn an outlier class that encapsulates the OOD samples. The main challenge is then to tackle the aforementioned confusion between OOD samples and head/tail-class samples when learning the outlier class. To this end, we introduce a novel calibrated outlier class learning (COCL) approach, in which 1) a debiased large-margin learning method is introduced in the outlier class learning to distinguish OOD samples from both head and tail classes in the representation space, and 2) an outlier-class-aware logit calibration method is defined to enhance long-tailed classification confidence. Extensive empirical results on three popular benchmarks, CIFAR10-LT, CIFAR100-LT, and ImageNet-LT, demonstrate that COCL substantially outperforms state-of-the-art OOD detection methods in LTR while also improving classification accuracy on ID data. Code is available at //github.com/mala-lab/COCL.
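As a rough illustration of the two ingredients named in the abstract (an explicit outlier class with a margin separating OOD from ID logits, and prior-aware logit calibration for the long tail), here is a hedged PyTorch sketch. The function name, the hyperparameters, and the exact loss form are assumptions for illustration; the authors' COCL objective may differ, so consult the linked repository for the real implementation.

```python
import torch
import torch.nn.functional as F

def outlier_class_losses(logits, labels, is_ood, class_prior, margin=0.5, tau=1.0):
    """Illustrative losses for a (C+1)-way head whose last logit is the outlier class.
    logits: (B, C+1); labels: ID class indices (ignored for OOD rows);
    is_ood: bool mask marking auxiliary outlier samples; class_prior: (C,) tensor
    of training class frequencies."""
    C = logits.size(1) - 1
    # 1) Outlier-class learning: OOD rows are labelled with the extra class C.
    targets = torch.where(is_ood, torch.full_like(labels, C), labels)
    # 2) Prior-aware calibration: shift ID logits by the log class prior so that
    #    rare (tail) classes are not suppressed relative to head classes.
    adjusted = logits.clone()
    adjusted[:, :C] = adjusted[:, :C] + tau * torch.log(class_prior + 1e-12)
    ce = F.cross_entropy(adjusted, targets)
    # 3) Margin term: push the outlier logit of OOD rows above their best ID logit.
    ood_logit = logits[:, C]
    best_id = logits[:, :C].max(dim=1).values
    if is_ood.any():
        margin_loss = F.relu(margin - (ood_logit - best_id))[is_ood].mean()
    else:
        margin_loss = logits.sum() * 0.0
    return ce + margin_loss
```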
Solving partially observable Markov decision processes (POMDPs) with high-dimensional and continuous observations, such as camera images, is required for many real-life robotics and planning problems. Recent research has suggested machine-learned probabilistic models as observation models, but their use is currently too computationally expensive for online deployment. We address the question of what the implications are of using simplified observation models for planning while retaining formal guarantees on the quality of the solution. Our main contribution is a novel probabilistic bound based on a statistical total variation distance of the simplified model. By generalizing recent results on particle-belief MDP concentration bounds, we show that it bounds the theoretical POMDP value under the original model in terms of the empirical value planned with the simplified model. Our calculations can be separated into offline and online parts, and we arrive at formal guarantees without having to access the costly model at all during planning, which is also a novel result. Finally, we demonstrate in simulation how to integrate the bound into the routine of an existing continuous online POMDP solver.
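The standard fact that makes total variation distance a natural gap measure between the original observation model $Z$ and a simplified model $\hat{Z}$ is the following (the paper's actual bound is probabilistic, builds on particle-belief MDP concentration results, and is not reproduced here):

```latex
% For any bounded function f and any state-action pair (s,a):
\Bigl| \; \mathbb{E}_{o \sim Z(\cdot \mid s,a)}\![f(o)] \;-\; \mathbb{E}_{o \sim \hat{Z}(\cdot \mid s,a)}\![f(o)] \; \Bigr|
\;\le\; 2\, \|f\|_{\infty} \; D_{\mathrm{TV}}\!\bigl( Z(\cdot \mid s,a),\, \hat{Z}(\cdot \mid s,a) \bigr).
```

Accumulating such per-step discrepancies over the planning horizon yields a bound on the value difference between planning with $Z$ and with $\hat{Z}$; the TV term can be estimated offline, which is what enables the offline/online split mentioned above.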
With the rapid progress of virtual reality (VR) technology, the scope of VR applications has greatly expanded across various domains. However, the superiority of VR training over traditional methods and its impact on learning efficacy are still uncertain. To investigate whether VR training is more effective than traditional methods, we designed virtual training systems for mechanical assembly on both VR and desktop platforms and subsequently conducted pre-test and post-test experiments. A cohort of 53 students, all enrolled in an engineering drawing course and without prior differences in relevant knowledge, was randomly divided into three groups: physical training, desktop virtual training, and immersive VR training. We used analysis of covariance (ANCOVA) to examine the differences in post-test scores among the three groups while controlling for pre-test scores. The group that received VR training achieved the highest post-test scores. Another facet of our study examined the sense of presence induced by the virtual system, for which we developed a specialized scale tailored to our research objectives. Our findings indicate that VR training can enhance the sense of presence, particularly in terms of sensory factors and realism factors. Moreover, correlation analysis uncovers connections between the various dimensions of presence. This study confirms that VR training can improve learning efficacy and presence in the context of mechanical assembly, surpassing traditional training methods. Furthermore, it provides empirical evidence supporting the integration of VR technology into higher education and engineering training, serving as a reference for the practical application of VR technology in different fields.
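For readers unfamiliar with ANCOVA in this setting, the analysis amounts to regressing post-test scores on the group factor while including the pre-test score as a covariate. Here is a minimal sketch using statsmodels; the file name and the column names ('post', 'pre', 'group') are hypothetical, not taken from the study.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical data layout: one row per student with columns
# 'post' (post-test score), 'pre' (pre-test score),
# 'group' in {'physical', 'desktop', 'vr'}.
df = pd.read_csv("scores.csv")

# ANCOVA: group effect on post-test scores, controlling for pre-test scores.
model = ols("post ~ pre + C(group)", data=df).fit()
anova_table = sm.stats.anova_lm(model, typ=2)  # Type-II sums of squares
print(anova_table)

# Covariate-adjusted group comparisons can then be made with pairwise
# contrasts on the fitted model.
```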
Using machine learning (ML) techniques to predict material properties is a crucial research topic, and these properties depend on both numerical data and semantic factors. Due to the limitations of small-sample datasets, existing methods typically either apply ML algorithms to regress numerical properties or transfer pre-trained knowledge graphs (KGs) to the material domain. However, these methods cannot handle semantic and numerical information simultaneously. In this paper, we propose a numerical reasoning method for material KGs (NR-KG), which constructs a cross-modal KG using semantic nodes and numerical proxy nodes. It captures both types of information by projecting the KG into a canonical KG and uses a graph neural network to predict material properties. In this process, a novel projection prediction loss is proposed to extract semantic features from numerical information. NR-KG enables end-to-end processing of cross-modal data, mines relationships and cross-modal information in small-sample datasets, and makes full use of valuable experimental data to enhance material property prediction. We further propose two new High-Entropy Alloy (HEA) property datasets with semantic descriptions. NR-KG outperforms state-of-the-art (SOTA) methods, achieving relative improvements of 25.9% and 16.1% on the two material datasets. In addition, NR-KG surpasses SOTA methods on two public physical-chemistry molecular datasets, showing improvements of 22.2% and 54.3%, highlighting its potential applicability and generalizability. We hope the proposed datasets, algorithms, and pre-trained models can benefit the KG and AI-for-materials communities.
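To give a flavour of what "numerical proxy nodes alongside semantic nodes" could look like in code, here is a hedged PyTorch sketch: scalar property values are projected into the same embedding space as semantic entities so that a single GNN can message-pass over both. This is only a guess at the general idea; it is not the NR-KG architecture, and all class and parameter names are invented for illustration.

```python
import torch
import torch.nn as nn

class NumericProxyEmbedding(nn.Module):
    """Map a scalar measurement attached to a proxy node into the shared
    node-embedding space used by the semantic entities."""
    def __init__(self, dim):
        super().__init__()
        self.proj = nn.Sequential(nn.Linear(1, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, values):            # values: (num_proxy_nodes, 1)
        return self.proj(values)

class SimpleKGLayer(nn.Module):
    """One round of mean-aggregation message passing over a dense adjacency;
    semantic and numerical-proxy node embeddings are treated uniformly."""
    def __init__(self, dim):
        super().__init__()
        self.lin = nn.Linear(2 * dim, dim)

    def forward(self, h, adj):            # h: (N, dim), adj: (N, N) 0/1 matrix
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
        msg = adj @ h / deg
        return torch.relu(self.lin(torch.cat([h, msg], dim=-1)))
```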
Integrated sensing and communication (ISAC) offers efficient spectrum utilization and low hardware cost. It is a promising candidate for the fifth-generation-advanced (5G-A) and sixth-generation (6G) mobile communication systems, with the potential to support intelligent applications that require both communication and high-accuracy sensing capabilities. As the fundamental enabler of ISAC, the ISAC signal directly determines the performance of both sensing and communication. This article systematically reviews the literature on ISAC signals from the perspective of mobile communication systems, covering ISAC signal design, ISAC signal processing algorithms, and ISAC signal optimization. We first review ISAC signal design based on 5G, 5G-A, and 6G mobile communication systems. We then review radar signal processing methods for ISAC signals, mainly including the channel-information-matrix method, spectral-line estimator methods, and super-resolution methods. In terms of signal optimization, we summarize peak-to-average power ratio (PAPR) optimization, interference management, and adaptive signal optimization for ISAC signals. This article may serve as a guideline for research on ISAC signals in 5G-A and 6G mobile communication systems.
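As a concrete reference point for the channel-information-matrix style of processing, here is a hedged NumPy sketch of the textbook "divide-and-FFT" approach for OFDM-based sensing: the element-wise quotient of received and transmitted modulation symbols estimates the channel information matrix, an IFFT over subcarriers resolves range (delay), and an FFT over OFDM symbols resolves Doppler. Function and variable names are illustrative; this is one standard approach, not a specific paper's algorithm.

```python
import numpy as np

def ofdm_range_doppler(tx_symbols, rx_symbols, n_range=None, n_doppler=None):
    """tx_symbols, rx_symbols: (N_subcarriers, M_symbols) complex matrices of
    transmitted and received modulation symbols for one ISAC frame.
    Returns a range-Doppler periodogram whose peaks indicate targets."""
    channel = rx_symbols / tx_symbols                      # channel information matrix
    n_range = n_range or channel.shape[0]
    n_doppler = n_doppler or channel.shape[1]
    range_profile = np.fft.ifft(channel, n=n_range, axis=0)          # delay / range axis
    range_doppler = np.fft.fft(range_profile, n=n_doppler, axis=1)   # Doppler axis
    return np.abs(range_doppler) ** 2
```

Super-resolution and spectral-line estimators replace the FFTs with subspace or parametric estimators when finer accuracy than the FFT grid is needed.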
We propose a novel method for automatic reasoning on knowledge graphs based on debate dynamics. The main idea is to frame the task of triple classification as a debate game between two reinforcement learning agents, which extract arguments -- paths in the knowledge graph -- with the goal of promoting the fact being true (thesis) or the fact being false (antithesis), respectively. Based on these arguments, a binary classifier, called the judge, decides whether the fact is true or false. The two agents can be viewed as sparse, adversarial feature generators that present interpretable evidence for either the thesis or the antithesis. In contrast to other black-box methods, the arguments allow users to understand the decision of the judge. Since the focus of this work is to create an explainable method that maintains competitive predictive accuracy, we benchmark our method on the triple classification and link prediction tasks and find that it outperforms several baselines on the benchmark datasets FB15k-237, WN18RR, and Hetionet. We also conduct a survey and find that the extracted arguments are informative for users.
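To make the judge's role tangible, here is a hedged PyTorch sketch of a toy judge that scores a triple from pre-encoded argument paths produced by the two agents. The pooling scheme and the assumption that each path is already encoded as a fixed-size vector are illustrative simplifications, not the authors' model.

```python
import torch
import torch.nn as nn

class DebateJudge(nn.Module):
    """Toy judge: pools the pro (thesis) and con (antithesis) argument-path
    encodings and outputs the probability that the queried triple is true."""
    def __init__(self, dim):
        super().__init__()
        self.scorer = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, 1))

    def forward(self, pro_paths, con_paths):   # shapes: (N_pro, dim), (N_con, dim)
        pro = pro_paths.mean(dim=0)            # aggregate thesis arguments
        con = con_paths.mean(dim=0)            # aggregate antithesis arguments
        return torch.sigmoid(self.scorer(torch.cat([pro, con], dim=-1)))
```

Because the judge only sees the extracted paths, each prediction comes with a human-readable set of supporting and opposing arguments.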
Object detection typically assumes that training and test data are drawn from an identical distribution, which, however, does not always hold in practice. Such a distribution mismatch leads to a significant performance drop. In this work, we aim to improve the cross-domain robustness of object detection. We tackle the domain shift on two levels: 1) the image-level shift, such as image style and illumination, and 2) the instance-level shift, such as object appearance and size. We build our approach on the recent state-of-the-art Faster R-CNN model and design two domain adaptation components, at the image level and the instance level, to reduce the domain discrepancy. The two domain adaptation components are based on H-divergence theory and are implemented by learning a domain classifier in an adversarial training manner. The domain classifiers at different levels are further reinforced with a consistency regularization to learn a domain-invariant region proposal network (RPN) in the Faster R-CNN model. We evaluate our approach on multiple datasets, including Cityscapes, KITTI, and SIM10K. The results demonstrate the effectiveness of our approach for robust object detection in various domain shift scenarios.
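Adversarially trained domain classifiers of this kind are commonly realized with a gradient reversal layer: the classifier learns to tell source from target features, while the reversed gradient pushes the backbone toward domain-invariant features. The PyTorch sketch below shows this common realization in a generic form; the layer sizes and class names are assumptions for illustration and are not taken from the paper's implementation.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass, gradient negation (scaled by lambd) in the
    backward pass, so the feature extractor is trained to fool the domain classifier."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

class ImageLevelDomainClassifier(nn.Module):
    """Toy image-level domain classifier on backbone feature maps; an
    instance-level classifier would act analogously on per-RoI features."""
    def __init__(self, in_channels, lambd=1.0):
        super().__init__()
        self.lambd = lambd
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 256, kernel_size=1), nn.ReLU(inplace=True),
            nn.Conv2d(256, 1, kernel_size=1),  # per-location source/target logit
        )

    def forward(self, feat):
        feat = GradReverse.apply(feat, self.lambd)
        return self.net(feat)
```

The consistency regularization mentioned above would then penalize disagreement between the image-level and instance-level domain predictions for the same image.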