Automatic sensor-based detection of motor failures such as bearing faults is crucial for predictive maintenance in various industries. Numerous methodologies have been developed over the years to detect bearing faults; among them, vibration-based methods have become the de facto standard and the most commonly used techniques. However, acquiring reliable vibration signals, especially from rotating machinery, can sometimes be prohibitively difficult due to challenging installation and operational conditions (e.g., variations in accelerometer placement on the motor body), which not only alter the signal patterns significantly but may also induce severe artifacts. Moreover, sensors are costly and require periodic maintenance to sustain reliable signal acquisition. To address these drawbacks and obviate the need for vibration sensors, in this study we propose a novel sound-to-vibration transformation method that can synthesize realistic vibration signals directly from sound measurements regardless of the working conditions, fault type, and fault severity. As a result, using this transformation, data acquired by a simple sound recorder, e.g., a mobile phone, can be transformed into a vibration signal, which can then be used for fault detection by a pre-trained model. The proposed method is extensively evaluated on the Qatar University Dual-Machine Bearing Fault Benchmark (QU-DMBF) dataset, which encapsulates sound and vibration data from two different machines operating under various conditions. Experimental results show that this novel approach can synthesize vibration signals realistic enough to be used directly for reliable and highly accurate motor health monitoring.
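A minimal sketch of the idea, not the authors' architecture: a 1D convolutional encoder-decoder maps a sound segment to a paired vibration segment and is trained with a simple reconstruction loss; layer sizes and the loss are assumptions for illustration only.

```python
# Sketch (assumed architecture): sound waveform in, synthetic vibration waveform out.
import torch
import torch.nn as nn

class SoundToVibration(nn.Module):
    def __init__(self, channels=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(1, channels, kernel_size=15, stride=2, padding=7),
            nn.ReLU(),
            nn.Conv1d(channels, channels * 2, kernel_size=15, stride=2, padding=7),
            nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose1d(channels * 2, channels, kernel_size=16, stride=2, padding=7),
            nn.ReLU(),
            nn.ConvTranspose1d(channels, 1, kernel_size=16, stride=2, padding=7),
        )

    def forward(self, sound):              # sound: (batch, 1, samples)
        return self.decoder(self.encoder(sound))

model = SoundToVibration()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
sound = torch.randn(8, 1, 4096)            # placeholder paired sound segments
vibration = torch.randn(8, 1, 4096)         # placeholder target vibration segments
loss = nn.functional.l1_loss(model(sound), vibration)
loss.backward()
optimizer.step()
```

Once trained on paired recordings, such a model would let a vibration-domain fault classifier run on audio captured by a phone.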
Leveraging the SIMD capability of modern CPU architectures is mandatory to take full advantage of their increasing performance. To exploit this feature, binary executables must be explicitly vectorized, either by the developers or by an automatic vectorization tool. This is why the compilation research community has created several strategies to transform scalar code into a vectorized implementation. However, the majority of these approaches focus on regular algorithms, such as affine loops, that can be vectorized with few data transformations. In this paper, we present a new approach that automatically vectorizes scalar codes with chaotic data accesses, as long as their operations can be statically inferred. We describe how our method transforms a graph of scalar instructions into a vectorized one using different heuristics aimed at reducing the number or cost of the instructions. Finally, we demonstrate the benefits of our approach on various computational kernels using Intel AVX-512 and ARM SVE.
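An illustrative Python sketch of the general shape of such a transformation, not the paper's algorithm: when the chaotic access pattern is statically known, isomorphic scalar operations can be grouped into packs and emitted as gather / vector-op / scatter sequences, which is what AVX-512 or SVE would ultimately execute.

```python
# Conceptual packing heuristic (illustrative only): group scalar ops of the same
# kind, cut them into packs of VLEN, and emit gather/compute/scatter vector ops.
VLEN = 8  # hypothetical vector length

def pack_and_emit(scalar_ops):
    """scalar_ops: list of (op, dst_index, src_a_index, src_b_index)."""
    vector_code = []
    by_op = {}
    for op, dst, a, b in scalar_ops:            # group isomorphic instructions
        by_op.setdefault(op, []).append((dst, a, b))
    for op, triples in by_op.items():
        for i in range(0, len(triples), VLEN):   # cut each group into packs
            pack = triples[i:i + VLEN]
            dsts, srcs_a, srcs_b = zip(*pack)
            vector_code.append(("gather", "v0", list(srcs_a)))
            vector_code.append(("gather", "v1", list(srcs_b)))
            vector_code.append((op + "_vec", "v2", "v0", "v1"))
            vector_code.append(("scatter", list(dsts), "v2"))
    return vector_code

# Eight independent scalar adds with irregular indices become four vector ops.
ops = [("add", d, a, b) for d, a, b in [(3, 17, 5), (9, 2, 40), (0, 8, 8),
                                        (21, 1, 13), (7, 30, 6), (4, 11, 25),
                                        (15, 9, 2), (12, 20, 33)]]
print(pack_and_emit(ops))
```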
Recent years have witnessed the fast penetration of Virtual Reality (VR) and Augmented Reality (AR) systems into our daily life, and the security and privacy issues of VR/AR applications have been attracting considerable attention. Most VR/AR systems adopt head-mounted devices (i.e., smart headsets) to interact with users, and these devices usually store the users' private data. Hence, authentication schemes are desired for head-mounted devices. Traditional knowledge-based authentication schemes for general personal devices have been proven vulnerable to shoulder-surfing attacks, especially considering that headsets may block the users' sight. Although the robustness of knowledge-based authentication can be improved by designing complicated secret codes in virtual space, this approach compromises usability. Another choice is to leverage the users' biometrics; however, this either relies on highly advanced equipment that may not always be available in commercial headsets or introduces a heavy cognitive load on users. In this paper, we propose a vibration-based authentication scheme, VibHead, for smart headsets. Since the propagation of vibration signals through human heads presents unique patterns for different individuals, VibHead employs a CNN-based model to classify registered legitimate users based on features extracted from the vibration signals. We also design a two-step authentication scheme in which the above user classifiers are used to distinguish the legitimate user from illegitimate ones. We implement VibHead on a Microsoft HoloLens equipped with a linear motor and an IMU sensor, which are commonly found in off-the-shelf personal smart devices. According to the results of our extensive experiments, with short vibration signals ($\leq 1s$), VibHead achieves outstanding authentication accuracy; both the false acceptance rate (FAR) and the false rejection rate (FRR) are around 5%.
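A minimal sketch of the two-step idea, with layer sizes and the decision threshold as assumptions rather than the paper's exact design: a small 1D CNN classifies a short vibration recording among registered users, and authentication accepts the claimed identity only if its softmax score clears a threshold.

```python
# Sketch (assumed network): classify registered users from ~1 s of vibration data,
# then accept or reject a claimed identity based on classifier confidence.
import torch
import torch.nn as nn

class VibrationClassifier(nn.Module):
    def __init__(self, num_users):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=9, padding=4), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=9, padding=4), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(32, num_users)

    def forward(self, x):                      # x: (batch, 1, samples)
        return self.classifier(self.features(x).squeeze(-1))

def authenticate(model, signal, claimed_user, threshold=0.9):
    # Step 1: classify; step 2: accept only confident matches to the claimed user.
    probs = torch.softmax(model(signal), dim=-1)
    return bool(probs[0, claimed_user] >= threshold)

model = VibrationClassifier(num_users=10)
sample = torch.randn(1, 1, 1000)                # placeholder 1 s vibration recording
print(authenticate(model, sample, claimed_user=3))
```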
Despite its importance in both industrial and service robotics, mobile manipulation remains a significant challenge, as it requires seamless integration of end-effector trajectory generation with navigation skills as well as reasoning over long horizons. Existing methods struggle to control the large configuration space and to navigate dynamic and unknown environments. In previous work, we proposed decomposing mobile manipulation tasks into a simplified motion generator for the end-effector in task space and a trained reinforcement learning agent for the mobile base that accounts for the kinematic feasibility of the motion. In this work, we introduce Neural Navigation for Mobile Manipulation (N$^2$M$^2$), which extends this decomposition to complex obstacle environments and enables it to tackle a broad range of tasks in real-world settings. The resulting approach can perform unseen, long-horizon tasks in unexplored environments while instantly reacting to dynamic obstacles and environmental changes. At the same time, it provides a simple way to define new mobile manipulation tasks. We demonstrate the capabilities of our approach in extensive simulation and real-world experiments on multiple kinematically diverse mobile manipulators. Code and videos are publicly available at //mobile-rl.cs.uni-freiburg.de.
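A conceptual sketch of the decomposition only; the interfaces below are assumptions, not the N$^2$M$^2$ implementation. A simple task-space motion generator drives the end-effector toward its goal, while a learned policy (here a random placeholder) chooses base velocities at each control step.

```python
# Sketch: end-effector motion generator + learned base policy, per control step.
import numpy as np

def ee_motion_generator(ee_pose, goal_pose, gain=1.0, max_vel=0.1):
    """Straight-line end-effector velocity toward the goal in task space."""
    vel = gain * (goal_pose - ee_pose)
    norm = np.linalg.norm(vel)
    return vel if norm <= max_vel else vel / norm * max_vel

def control_step(policy, observation, ee_pose, goal_pose):
    ee_vel = ee_motion_generator(ee_pose, goal_pose)
    base_vel = policy(observation)              # trained RL agent (placeholder here)
    return ee_vel, base_vel

random_policy = lambda obs: np.random.uniform(-0.2, 0.2, size=2)  # (v, omega)
ee_vel, base_vel = control_step(random_policy,
                                observation=np.zeros(10),
                                ee_pose=np.array([0.5, 0.0, 0.8]),
                                goal_pose=np.array([1.5, 0.4, 1.0]))
print(ee_vel, base_vel)
```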
Recent studies have demonstrated the susceptibility of deep neural networks to backdoor attacks. Given a backdoored model, its prediction for a poisoned sample containing the trigger will be dominated by the trigger information, even though trigger information and benign information coexist. Inspired by the mechanism of an optical polarizer, which passes light waves with particular polarizations while filtering out those with other polarizations, we propose a novel backdoor defense method that inserts a learnable neural polarizer into the backdoored model as an intermediate layer, purifying poisoned samples by filtering out trigger information while preserving benign information. The neural polarizer is instantiated as one lightweight linear transformation layer, which is learned by solving a well-designed bi-level optimization problem based on a limited clean dataset. Compared to other fine-tuning-based defense methods, which often adjust all parameters of the backdoored model, the proposed method only needs to learn one additional layer, making it more efficient and less demanding of clean data. Extensive experiments demonstrate the effectiveness and efficiency of our method in removing backdoors across various neural network architectures and datasets, especially in the case of very limited clean data.
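A minimal, simplified sketch (the paper's bi-level objective is not reproduced here): a lightweight learnable linear layer, initialized to the identity, is inserted between two blocks of a frozen backdoored network and is the only module updated on a small clean set.

```python
# Sketch: insert a learnable 1x1-conv "polarizer" into a frozen backdoored model.
import torch
import torch.nn as nn

class NeuralPolarizer(nn.Module):
    def __init__(self, num_channels):
        super().__init__()
        # A 1x1 convolution is a per-location linear transformation; identity init
        # means training starts from the unmodified backdoored model.
        self.linear = nn.Conv2d(num_channels, num_channels, kernel_size=1)
        nn.init.dirac_(self.linear.weight)
        nn.init.zeros_(self.linear.bias)

    def forward(self, x):
        return self.linear(x)

def insert_polarizer(front, back, num_channels):
    for p in front.parameters():
        p.requires_grad_(False)                 # keep backdoored weights frozen
    for p in back.parameters():
        p.requires_grad_(False)
    polarizer = NeuralPolarizer(num_channels)
    return nn.Sequential(front, polarizer, back), polarizer

front = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
back = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 10))
model, polarizer = insert_polarizer(front, back, num_channels=16)
optimizer = torch.optim.Adam(polarizer.parameters(), lr=1e-3)
clean_x, clean_y = torch.randn(8, 3, 32, 32), torch.randint(0, 10, (8,))
loss = nn.functional.cross_entropy(model(clean_x), clean_y)   # clean-data term only
loss.backward()
optimizer.step()
```

Only the polarizer's few parameters receive gradients, which is why the defense needs far less clean data than full fine-tuning.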
Anomaly detection is an important field that aims to identify unexpected patterns or data points, and it is closely related to many real-world problems, particularly applications in finance, manufacturing, and cyber security. While anomaly detection has been studied extensively in various fields, detecting future anomalies before they occur remains unexplored territory. In this paper, we present a novel type of anomaly detection, called \emph{\textbf{P}recursor-of-\textbf{A}nomaly} (PoA) detection. Unlike conventional anomaly detection, which focuses on determining whether a given time series observation is an anomaly, PoA detection aims to detect future anomalies before they happen. To solve both problems at the same time, we present a neural controlled differential equation-based neural network and its multi-task learning algorithm. We conduct experiments using 17 baselines and 3 datasets, including regular and irregular time series, and demonstrate that our method outperforms the baselines in almost all cases. Our ablation studies also indicate that the multi-task training method significantly enhances the overall performance for both anomaly and PoA detection.
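An illustrative multi-task sketch, in which a GRU stands in for the paper's neural controlled differential equation encoder: one shared encoder feeds two heads, one predicting whether the current window is an anomaly and one predicting whether an anomaly will follow (the precursor-of-anomaly label), trained with a summed loss.

```python
# Sketch: shared sequence encoder with anomaly and precursor-of-anomaly heads.
import torch
import torch.nn as nn

class PoADetector(nn.Module):
    def __init__(self, input_dim, hidden_dim=64):
        super().__init__()
        self.encoder = nn.GRU(input_dim, hidden_dim, batch_first=True)
        self.anomaly_head = nn.Linear(hidden_dim, 1)
        self.precursor_head = nn.Linear(hidden_dim, 1)

    def forward(self, x):                       # x: (batch, time, input_dim)
        _, h = self.encoder(x)
        h = h.squeeze(0)
        return self.anomaly_head(h), self.precursor_head(h)

model = PoADetector(input_dim=5)
x = torch.randn(16, 50, 5)                      # placeholder time-series windows
anomaly_label = torch.randint(0, 2, (16, 1)).float()
precursor_label = torch.randint(0, 2, (16, 1)).float()
anomaly_logit, precursor_logit = model(x)
loss = (nn.functional.binary_cross_entropy_with_logits(anomaly_logit, anomaly_label)
        + nn.functional.binary_cross_entropy_with_logits(precursor_logit, precursor_label))
loss.backward()
```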
Electronic Health Records (EHRs) have undergone numerous technical improvements in recent years, including the integration of mobile devices with cloud computing technologies to facilitate medical data exchange between patients and healthcare professionals. This cutting-edge architecture enables cyber-physical systems hosted in the cloud to provide healthcare services with minimal operational costs, high flexibility, security, and EHR accessibility. However, if patient health information is stored in a hospital database, there will always be a risk of intrusion, i.e., unauthorized file access and information modification by attackers. To address this concern, we propose a decentralized EHR system based on blockchain technology. To facilitate secure EHR exchange among patients and medical providers, we develop a reliable access control method based on smart contracts. We incorporate cryptocurrency, specifically Ethereum, into the proposed system to protect sensitive health information from potential attackers. In our approach, both physicians and patients are required to be authenticated. Patients can register, upon which a block with a unique hash value is generated. Once the patient discusses the disease with the physician, the physician can check the patient's condition and prescribe medication. For our experiments, we employ the public blockchain Ganache and Solidity smart contracts developed in Remix to protect privacy. Ether is used as the cryptocurrency.
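A plain-Python illustration of the flow described above; the actual system uses Solidity smart contracts on Ethereum, whereas this only mimics the registration and access-control logic with hash-linked records.

```python
# Illustration only: hash-linked registration blocks and per-patient access lists.
import hashlib, json, time

class EHRLedger:
    def __init__(self):
        self.blocks = []
        self.access = {}                        # patient -> set of authorized physicians

    def register_patient(self, patient_id):
        previous_hash = self.blocks[-1]["hash"] if self.blocks else "0" * 64
        payload = {"patient": patient_id, "time": time.time(), "prev": previous_hash}
        block_hash = hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()
        self.blocks.append({**payload, "hash": block_hash})
        self.access[patient_id] = set()
        return block_hash                       # the patient's unique block hash

    def grant_access(self, patient_id, physician_id):
        self.access[patient_id].add(physician_id)

    def can_read(self, patient_id, physician_id):
        return physician_id in self.access.get(patient_id, set())

ledger = EHRLedger()
ledger.register_patient("patient-001")
ledger.grant_access("patient-001", "dr-smith")
print(ledger.can_read("patient-001", "dr-smith"))   # True
print(ledger.can_read("patient-001", "dr-jones"))   # False
```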
Artificial intelligence (AI) and deep learning techniques have gained significant traction in recent years, owing to their remarkable capability of achieving high performance across a broad range of applications. However, a crucial challenge in training such models is the acquisition of vast amounts of data, which is often limited in fields like healthcare. In this domain, medical data is typically scattered across various sources such as hospitals, clinics, and wearable devices. Aggregating the data collected from these multiple sources would be sufficient for training advanced deep learning models; however, the sources are frequently hesitant to share such data due to privacy considerations. To address this challenge, researchers have proposed integrating blockchain and federated learning to develop systems that facilitate the secure sharing of medical records. This work provides a succinct review of the current state of the art in the use of blockchain and federated learning in the decentralized healthcare domain.
Face recognition technology has advanced significantly in recent years, due largely to the availability of large and increasingly complex training datasets for use in deep learning models. These datasets, however, typically comprise images scraped from news sites or social media platforms and, therefore, have limited utility in more advanced security, forensics, and military applications. These applications must contend with lower resolutions, longer ranges, and elevated viewpoints. To meet these critical needs, we collected and curated the first and second subsets of a large multi-modal biometric dataset designed for use in the research and development (R&D) of biometric recognition technologies under extremely challenging conditions. Thus far, the dataset includes more than 350,000 still images and over 1,300 hours of video footage of approximately 1,000 subjects. To collect this data, we used Nikon DSLR cameras, a variety of commercial surveillance cameras, specialized long-range R&D cameras, and Group 1 and Group 2 UAV platforms. The goal is to support the development of algorithms capable of accurately recognizing people at ranges up to 1,000 m and from high angles of elevation. These advances will include improvements to the state of the art in face recognition and will support new research in the area of whole-body recognition using methods based on gait and anthropometry. This paper describes the methods used to collect and curate the dataset, and the dataset's characteristics at the current stage.
Deployment of Internet of Things (IoT) devices and data fusion techniques have gained popularity in public and government domains. This usually requires capturing and consolidating data from multiple sources. Because datasets do not necessarily originate from identical sensors, fused data typically presents a complex data problem. Since the military is investigating how heterogeneous IoT devices can aid its processes and tasks, we investigate a multi-sensor approach. Moreover, we propose a signal-to-image encoding approach that transforms and integrates (fuses) signals from IoT wearable devices into an image that is invertible and easier to visualize, supporting decision making. Furthermore, we investigate the challenge of enabling intelligent identification and detection operations and demonstrate the feasibility of the proposed deep learning and anomaly detection models that can support future applications utilizing hand gesture data from wearable devices.
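A minimal sketch of one possible invertible signal-to-image encoding (the exact encoding used in the paper may differ): each sensor channel is min-max normalized and reshaped into a block of image rows, and keeping the scaling parameters makes the mapping exactly reversible.

```python
# Sketch: reversible encoding of a multi-channel wearable signal into an image.
import numpy as np

def signal_to_image(signal, rows_per_channel=16):
    # signal: (channels, samples); samples must be divisible by rows_per_channel.
    mins = signal.min(axis=1, keepdims=True)
    maxs = signal.max(axis=1, keepdims=True)
    normalized = (signal - mins) / (maxs - mins + 1e-12)
    channels, samples = signal.shape
    image = normalized.reshape(channels * rows_per_channel, samples // rows_per_channel)
    return image, (mins, maxs, signal.shape)

def image_to_signal(image, meta):
    mins, maxs, shape = meta
    normalized = image.reshape(shape)
    return normalized * (maxs - mins + 1e-12) + mins

signal = np.random.randn(3, 256)                 # e.g. a 3-axis accelerometer window
image, meta = signal_to_image(signal)
recovered = image_to_signal(image, meta)
print(image.shape, np.allclose(signal, recovered))   # (48, 16) True
```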
With the advances of data-driven machine learning research, a wide variety of prediction problems have been tackled. It has become critical to explore how machine learning, and specifically deep learning, methods can be exploited to analyse healthcare data. A major limitation of existing methods has been the focus on grid-like data; however, the structure of physiological recordings is often irregular and unordered, which makes it difficult to conceptualise them as a matrix. As such, graph neural networks have attracted significant attention by exploiting implicit information that resides in a biological system, with interacting nodes connected by edges whose weights can be either temporal associations or anatomical junctions. In this survey, we thoroughly review the different types of graph architectures and their applications in healthcare. We provide an overview of these methods in a systematic manner, organized by their domain of application, including functional connectivity, anatomical structure, and electrical-based analysis. We also outline the limitations of existing techniques and discuss potential directions for future research.
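An illustrative single graph-convolution step, using a generic GCN layer rather than any one surveyed method: node features could be per-electrode or per-region signals, and edge weights could be temporal correlations or anatomical connections.

```python
# Sketch: one GCN propagation step over a small weighted graph.
import numpy as np

def gcn_layer(node_features, adjacency, weight):
    # Symmetrically normalize the adjacency (with self-loops), then propagate.
    a_hat = adjacency + np.eye(adjacency.shape[0])
    deg_inv_sqrt = 1.0 / np.sqrt(a_hat.sum(axis=1))
    a_norm = a_hat * deg_inv_sqrt[:, None] * deg_inv_sqrt[None, :]
    return np.maximum(a_norm @ node_features @ weight, 0.0)   # ReLU activation

num_nodes, in_dim, out_dim = 6, 8, 4
features = np.random.randn(num_nodes, in_dim)          # e.g. per-electrode features
adjacency = np.abs(np.random.randn(num_nodes, num_nodes))
adjacency = (adjacency + adjacency.T) / 2               # symmetric edge weights
weight = np.random.randn(in_dim, out_dim) * 0.1
print(gcn_layer(features, adjacency, weight).shape)     # (6, 4)
```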