With the spread of image- and video-based technologies, protecting privacy-sensitive information is becoming increasingly important. In industrial and production facilities, image and video recordings are valuable for documentation, tracing production errors, and coordinating workflows. Individuals appearing in such recordings must be anonymized, yet the anonymized data should remain usable for further applications. In this work, we apply DeepPrivacy2, a deep learning-based full-body anonymization framework that generates artificial identities, to industrial image and video data, and compare its performance with conventional anonymization techniques. To this end, we consider the quality of identity generation, temporal consistency, and the applicability of pose estimation and action recognition.
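For contrast with DeepPrivacy2's synthetic-identity replacement, the following minimal Python sketch shows the two conventional baselines most often used, Gaussian blurring and pixelation, applied to a hypothetical person bounding box (the detector output and coordinates are assumptions for illustration, not part of the original work):

```python
import cv2
import numpy as np

def blur_region(img, box, ksize=51):
    """Gaussian-blur a detected person region (conventional anonymization)."""
    x, y, w, h = box
    roi = img[y:y+h, x:x+w]
    img[y:y+h, x:x+w] = cv2.GaussianBlur(roi, (ksize, ksize), 0)
    return img

def pixelate_region(img, box, blocks=8):
    """Pixelate a detected person region by down- and up-sampling it."""
    x, y, w, h = box
    roi = img[y:y+h, x:x+w]
    small = cv2.resize(roi, (blocks, blocks), interpolation=cv2.INTER_LINEAR)
    img[y:y+h, x:x+w] = cv2.resize(small, (w, h), interpolation=cv2.INTER_NEAREST)
    return img

frame = np.full((480, 640, 3), 127, dtype=np.uint8)  # placeholder frame
person_box = (200, 100, 120, 240)                    # hypothetical detector output
anonymized = pixelate_region(frame.copy(), person_box)
```

Unlike these filters, which destroy body pose and appearance cues, a generative replacement keeps the region realistic, which is what makes downstream pose estimation and action recognition worth comparing.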
To meet the ever-increasing demand for higher data rates, 5G and 6G technologies are shifting transceivers to higher carrier frequencies to support wider bandwidths and more antenna elements. However, this shift poses several key challenges: i) increasing the carrier frequency and bandwidth leads to greater channel selectivity in the time and frequency domains, and ii) the larger the number of antennas, the greater the pilot overhead for channel estimation and the more prohibitively complex it becomes to determine the optimal precoding matrix. This paper presents two deep learning frameworks to address these issues. First, we propose a 3D convolutional neural network (CNN), based on image super-resolution, that captures the correlations between the transmit antennas, the receive antennas, and the frequency domain to combat frequency selectivity. Second, we devise a deep learning-based framework to combat the time selectivity of the channel, treating channel aging as a distortion that can be mitigated through deep learning-based image restoration techniques. Simulation results show that combining both frameworks yields a significant performance improvement over existing techniques with little increase in complexity.
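As a rough illustration of the first framework's idea, the sketch below (PyTorch, with hypothetical antenna and subcarrier dimensions) treats a coarse pilot-interpolated channel grid as a low-resolution "image" and refines it with a residual 3D CNN; the paper's actual architecture is not reproduced here:

```python
import torch
import torch.nn as nn

class Channel3DCNN(nn.Module):
    """Minimal 3D-CNN sketch in the spirit of super-resolution-based channel
    estimation: tensors are (batch, 2, tx, rx, subcarriers), with the 2
    channels holding the real and imaginary parts of the channel matrix."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(2, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(64, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(32, 2, kernel_size=3, padding=1),  # residual correction
        )

    def forward(self, h_coarse):
        # Refine the coarse estimate; 3D kernels span tx, rx, and frequency
        # jointly, capturing correlations across all three dimensions.
        return h_coarse + self.net(h_coarse)

# Hypothetical shapes: 8 tx antennas, 4 rx antennas, 64 subcarriers.
coarse = torch.randn(1, 2, 8, 4, 64)   # e.g. pilots interpolated to a full grid
refined = Channel3DCNN()(coarse)
```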
Emotion mining has become a crucial tool for understanding human emotions during disasters, leveraging the extensive data generated on social media platforms. This paper summarizes existing research on emotion mining in disaster contexts, highlighting both significant findings and persistent issues. On the one hand, emotion mining techniques have achieved acceptable accuracy, enabling applications such as rapid damage assessment and mental health surveillance. On the other hand, with many studies adopting purely data-driven approaches, several methodological issues remain. These include arbitrary emotion classification; ignoring biases inherent in data collection from social media, such as the overrepresentation of individuals of higher socioeconomic status on Twitter; and the lack of application of theoretical frameworks such as cross-cultural comparisons. These problems amount to a notable lack of theory-driven research that overlooks insights from the social and behavioral sciences. This paper underscores the need for interdisciplinary collaboration between computer scientists and social scientists to develop more robust and theoretically grounded approaches to emotion mining. By addressing these gaps, we aim to enhance the effectiveness and reliability of emotion mining methodologies, ultimately contributing to improved disaster preparedness, response, and recovery.
Keywords: emotion mining, sentiment analysis, natural disasters, psychology, technological disasters
In the following contribution, a method is introduced that integrates domain-expert-centric ontology design with the Cross-Industry Standard Process for Data Mining (CRISP-DM). The approach aims to efficiently build an application-specific ontology tailored to the corrective maintenance of Cyber-Physical Systems (CPSs). The proposed method is divided into three phases. In phase one, ontology requirements are systematically specified, defining the relevant knowledge scope. In phase two, CPS life cycle data is accordingly contextualized using domain-specific ontological artifacts. This formalized domain knowledge is then utilized within CRISP-DM to efficiently extract new insights from the data. Finally, the newly developed data-driven model is employed to populate and expand the ontology: in phase three, information extracted from this model is semantically annotated and aligned with the existing ontology. The applicability of the method has been evaluated in an anomaly detection case study for a modular process plant.
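A minimal sketch of the phase-three population step, assuming rdflib and an entirely hypothetical CPS maintenance namespace and vocabulary: an anomaly found by the data-driven model is added as an individual and linked to existing ontology elements:

```python
from rdflib import Graph, Literal, Namespace, RDF
from rdflib.namespace import XSD

# Hypothetical namespace and terms for a CPS maintenance ontology.
CPS = Namespace("http://example.org/cps-maintenance#")

g = Graph()
g.bind("cps", CPS)

# Annotate an anomaly reported by the data-driven model and align it
# with existing ontology individuals (model and affected module).
anomaly = CPS["anomaly_001"]
g.add((anomaly, RDF.type, CPS.Anomaly))
g.add((anomaly, CPS.detectedBy, CPS["isolation_forest_model"]))
g.add((anomaly, CPS.affectsModule, CPS["dosing_module"]))
g.add((anomaly, CPS.anomalyScore, Literal(0.87, datatype=XSD.float)))

print(g.serialize(format="turtle"))
```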
One of the challenges of Industry 4.0 is to determine and optimize the flow of data, products, and materials in manufacturing companies. To address this challenge, many solutions have been proposed, such as the use of automated guided vehicles (AGVs). However, being guided is a handicap that prevents these vehicles from fully meeting the requirements of Industry 4.0 in terms of adaptability and flexibility: the autonomy of vehicles cannot be reduced to predetermined trajectories. It is therefore necessary to develop their autonomy, which will be possible by designing new generations of industrial autonomous vehicles (IAVs) in the form of intelligent and cooperative autonomous mobile robots.

In the field of road transport, research on making cars autonomous is very active. Many algorithms that solve problematic traffic situations similar to those occurring in an industrial environment can be transposed to the industrial field and thus to IAVs. Technologies standardized by dedicated bodies (e.g., ETSI TC ITS), such as those concerning the exchange of messages between vehicles to increase their awareness or their ability to cooperate, can also be transposed to the industrial context. The deployment of intelligent autonomous vehicle fleets raises several challenges: acceptability by employees, vehicle localization, traffic fluidity, vehicle perception of changing (dynamic) environments, vehicle-infrastructure cooperation, and vehicle heterogeneity. In this context, developing the autonomy of IAVs requires a suitable working method. Identifying algorithms that can be reused or adapted to the various problems raised by increasing the autonomy of IAVs is not sufficient; it is also necessary to model, simulate, test, and experiment with the proposed solutions. Simulation is essential, since it allows both adapting and validating the algorithms as well as designing and preparing the experiments.

To improve the autonomy of a fleet, we consider an approach relying on collective intelligence to make vehicle behaviours adaptive. In this chapter, we focus on a class of problems faced by IAVs related to collision and obstacle avoidance. Among these problems, we are particularly interested in the situation where two vehicles need to cross an intersection at the same time, known as a deadlock situation, and in the situation where obstacles present in the aisles must be avoided safely by the vehicles.
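To make the deadlock situation concrete, here is a toy Python sketch of a priority-based crossing protocol; the tie-breaking rule and class names are illustrative assumptions, not the chapter's actual algorithms:

```python
class Intersection:
    """Toy mutual-exclusion protocol for a shared intersection: vehicles
    request access, and ties are broken by a deterministic priority rule
    (here: lowest vehicle id first), which avoids the symmetric deadlock
    of two vehicles waiting for each other forever."""
    def __init__(self):
        self.requests = []

    def request(self, vehicle_id):
        self.requests.append(vehicle_id)

    def grant(self):
        winner = min(self.requests)   # deterministic tie-break
        self.requests.remove(winner)
        return winner

crossing = Intersection()
for vid in (7, 3):                    # two AGVs arrive at the same time
    crossing.request(vid)
print([crossing.grant() for _ in range(2)])  # [3, 7]: crossing order is well defined
```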
State machines are used in engineering many types of software-intensive systems. UML State Machines extend simple finite state machines with powerful constructs. Among the many extensions, there is one seemingly simple and innocent language construct that fundamentally changes state machines' reactive model of computation: doActivity behaviors. DoActivity behaviors describe behavior that is executed independently from the state machine once a given state is entered, typically modeling complex computation or communication as background tasks. However, neither the UML specification nor textbooks give clear guidance on how the doActivity construct should be used. This lack of guidance is a severe issue because, when improperly used, doActivities can cause concurrent, non-deterministic bugs that are especially challenging to find and could ruin a seemingly correct software design. The Precise Semantics of UML State Machines (PSSM) specification introduced detailed operational semantics for state machines. To the best of our knowledge, there is no rigorous review yet of doActivity's semantics as specified in PSSM. We analyzed the semantics by collecting evidence from cross-checking the text of the specification, its semantic model and executable test cases, and the simulators supporting PSSM. We synthesized insights about subtle details and emergent behaviors relevant to tool developers and advanced modelers. We reported inconsistencies and missing clarifications in more than 20 issues to the standardization committee. Based on these insights, we studied 11 patterns for doActivities, detailing the consequences of using a doActivity in a given situation and discussing countermeasures or alternative design choices. We hope that our analysis of the semantics and the patterns will help vendors develop conformant simulators or verification tools and help engineers design better state machine models.
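The core intuition, a doActivity running concurrently once its state is entered and being interrupted on state exit, can be sketched in Python as follows; this is a deliberately simplified analogy that omits PSSM details such as event pools and completion events:

```python
import threading
import time

class State:
    """Minimal sketch of a UML-like state whose doActivity runs on its own
    thread once the state is entered and is interrupted on exit, mimicking
    the PSSM intuition (event pools, completion events, etc. are omitted)."""
    def __init__(self, name, do_activity):
        self.name = name
        self._do = do_activity
        self._stop = threading.Event()
        self._thread = None

    def enter(self):
        self._stop.clear()
        self._thread = threading.Thread(target=self._do, args=(self._stop,))
        self._thread.start()          # doActivity runs concurrently

    def exit(self):
        self._stop.set()              # request interruption of the doActivity
        self._thread.join()           # wait for it to wind down

def poll_sensor(stop):
    while not stop.is_set():          # cooperative cancellation point
        time.sleep(0.1)               # stand-in for background work

s = State("Monitoring", poll_sensor)
s.enter(); time.sleep(0.3); s.exit()
```

Even this tiny sketch hints at the pitfalls the paper studies: the doActivity shares state with the rest of the machine and only stops at cooperative cancellation points, which is exactly where concurrent, non-deterministic bugs creep in.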
Justifying the correct implementation of the non-functional requirements (e.g., safety, security) of mission-critical systems is crucial to prevent system failure. The latter could have severe consequences, such as the death of people and financial losses. Assurance cases can be used to prevent system failure. They are structured arguments that allow extensively arguing about and communicating the requirements of safety-critical systems, as well as checking the compliance of such systems with industrial standards to support their certification. Still, the creation of assurance cases is usually manual, error-prone, and time-consuming. Besides, it may involve numerous alterations as the system evolves. To overcome these bottlenecks, existing approaches usually promote the reuse of common structured evidence-based arguments (i.e., patterns) to aid the creation of assurance cases. To gain insights into the advancements of the research on assurance case patterns, we relied on SEGRESS to conduct a bibliometric analysis of 92 primary studies published within the past two decades. This allows us to capture the evolutionary trends and patterns characterizing the research in that field. Our findings notably indicate the emergence of new assurance case patterns to support the assurance of ML-enabled systems, which are characterized by their evolving requirements (e.g., cybersecurity and ethics).
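For readers unfamiliar with assurance cases, the following toy sketch models a pattern as a GSN-style goal tree in Python; the class design and the ML safety example are illustrative assumptions, not taken from the surveyed studies:

```python
from dataclasses import dataclass, field

@dataclass
class Claim:
    """A claim is supported either by direct evidence or by sub-claims;
    a reusable pattern is such a tree with placeholders to instantiate."""
    statement: str
    evidence: list[str] = field(default_factory=list)
    subclaims: list["Claim"] = field(default_factory=list)

    def supported(self) -> bool:
        if self.evidence:
            return True
        return bool(self.subclaims) and all(c.supported() for c in self.subclaims)

pattern = Claim(
    "The ML component is acceptably safe",
    subclaims=[
        Claim("Training data is representative", evidence=["dataset audit report"]),
        Claim("Model robustness is verified", evidence=["adversarial test results"]),
    ],
)
print(pattern.supported())  # True: every leaf claim is backed by evidence
```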
In dynamic environments, the ability to detect and track moving objects in real time is crucial for autonomous robots to navigate safely and effectively. Traditional methods for dynamic object detection rely on high-accuracy odometry and maps. However, these methods are not suitable for long-term operation in dynamic environments where the surroundings are constantly changing. To address this problem, we propose a novel system for detecting and tracking dynamic objects in real time using only LiDAR data. By emphasizing the extraction of low-frequency components from LiDAR data as feature points for foreground objects, our method significantly reduces the time required for object clustering and movement analysis. Additionally, we have developed a tracking approach that employs intensity-based ego-motion estimation along with a sliding-window technique to assess object movements. This enables the precise identification of moving objects and enhances the system's resilience to odometry drift. Our experiments show that the system can detect and track dynamic objects in real time with an average detection accuracy of 88.7% and a recall rate of 89.1%. Furthermore, our system demonstrates resilience against the prolonged drift typically associated with front-end-only LiDAR odometry. All of the source code, the labeled dataset, and the annotation tool are available at: https://github.com/MISTLab/lidar_dynamic_objects_detection.git
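As a stand-in illustration of the clustering stage, the sketch below groups already-segmented foreground LiDAR points into object candidates with DBSCAN; the paper's low-frequency feature extraction and intensity-based ego-motion estimation are not reproduced here:

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Hypothetical foreground points (x, y, z) already separated from the
# static background, simulating two compact moving objects.
rng = np.random.default_rng(0)
foreground = np.vstack([
    rng.normal(loc=( 5.0, 2.0, 0.5), scale=0.2, size=(60, 3)),  # object A
    rng.normal(loc=(-3.0, 8.0, 0.4), scale=0.2, size=(40, 3)),  # object B
])

# Density-based clustering turns the point cloud into object candidates;
# label -1 marks noise points that belong to no cluster.
labels = DBSCAN(eps=0.6, min_samples=10).fit_predict(foreground)
for k in sorted(set(labels) - {-1}):
    centroid = foreground[labels == k].mean(axis=0)
    print(f"object {k}: {np.sum(labels == k)} points, centroid {centroid}")
```

Tracking such centroids over a sliding window of frames, after compensating for the robot's own motion, is what separates truly moving objects from static structure.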
As data are increasingly stored in different silos and societies become more aware of data privacy issues, the traditional centralized training of artificial intelligence (AI) models is facing efficiency and privacy challenges. Recently, federated learning (FL) has emerged as an alternative solution and continues to thrive in this new reality. Existing FL protocol designs have been shown to be vulnerable to adversaries within or outside of the system, compromising data privacy and system robustness. Besides training powerful global models, it is of paramount importance to design FL systems that have privacy guarantees and are resistant to different types of adversaries. In this paper, we conduct the first comprehensive survey on this topic. Through a concise introduction to the concept of FL and a unique taxonomy covering: 1) threat models; 2) poisoning attacks against robustness and the corresponding defenses; and 3) inference attacks against privacy and the corresponding defenses, we provide an accessible review of this important topic. We highlight the intuitions, key techniques, and fundamental assumptions adopted by the various attacks and defenses. Finally, we discuss promising future research directions towards robust and privacy-preserving federated learning.
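As background for the threat models discussed, here is a minimal sketch of the vanilla FedAvg aggregation step that these attacks and defenses target, with toy NumPy vectors standing in for real model weights:

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    """One FedAvg aggregation round: data-size-weighted average of client
    updates. A poisoning adversary controls some entries of client_weights;
    an inference adversary observes individual updates before averaging."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Hypothetical toy round: three clients with two-parameter "models".
weights = [np.array([1.0, 2.0]), np.array([1.2, 1.8]), np.array([0.9, 2.1])]
sizes = [100, 50, 50]
global_model = fed_avg(weights, sizes)
print(global_model)  # weighted mean of the client models
```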
Deep neural networks (DNNs) are successful in many computer vision tasks. However, the most accurate DNNs require millions of parameters and operations, making them energy-, computation-, and memory-intensive. This impedes the deployment of large DNNs in low-power devices with limited compute resources. Recent research improves DNN models by reducing the memory requirement, energy consumption, and number of operations without significantly decreasing the accuracy. This paper surveys the progress of low-power deep learning and computer vision, specifically with regard to inference, and discusses methods for compacting and accelerating DNN models. The techniques can be divided into four major categories: (1) parameter quantization and pruning, (2) compressed convolutional filters and matrix factorization, (3) network architecture search, and (4) knowledge distillation. For the techniques in each category, we analyze their accuracy, advantages, and disadvantages, as well as potential solutions to their problems. We also discuss new evaluation metrics as a guideline for future research.
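Two of the category (1) techniques can be illustrated directly with PyTorch's built-in utilities; the layer sizes below are arbitrary toy values:

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# (1a) Unstructured magnitude pruning: zero out the 50% smallest weights.
layer = nn.Linear(128, 64)
prune.l1_unstructured(layer, name="weight", amount=0.5)
sparsity = (layer.weight == 0).float().mean().item()
print(f"sparsity after pruning: {sparsity:.0%}")

# (1b) Post-training dynamic quantization: int8 weights for Linear modules,
# trading a small accuracy loss for memory and compute savings at inference.
model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)
print(quantized)
```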
The recent proliferation of knowledge graphs (KGs), coupled with incomplete or partial information in the form of missing relations (links) between entities, has fueled a lot of research on knowledge base completion (also known as relation prediction). Several recent works suggest that convolutional neural network (CNN)-based models generate richer and more expressive feature embeddings and hence also perform well on relation prediction. However, we observe that these KG embeddings treat triples independently and thus fail to capture the complex and hidden information that is inherently implicit in the local neighborhood surrounding a triple. To this end, our paper proposes a novel attention-based feature embedding that captures both entity and relation features in any given entity's neighborhood. Additionally, we encapsulate relation clusters and multi-hop relations in our model. Our empirical study offers insights into the efficacy of our attention-based model, and we show marked performance gains in comparison to state-of-the-art methods on all datasets.
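A minimal sketch of the core idea, attention-weighted aggregation over a target entity's neighborhood, is given below in PyTorch; it illustrates the mechanism rather than the paper's exact architecture:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NeighborhoodAttention(nn.Module):
    """Each neighbor is encoded from its (entity, relation) embeddings,
    scored against the center entity, and aggregated with softmax weights,
    so the triple is no longer treated independently of its neighborhood."""
    def __init__(self, dim):
        super().__init__()
        self.encode = nn.Linear(2 * dim, dim)   # fuse neighbor entity + relation
        self.score = nn.Linear(2 * dim, 1)      # attention logit vs. the center

    def forward(self, center, nbr_entities, nbr_relations):
        msgs = torch.tanh(self.encode(torch.cat([nbr_entities, nbr_relations], -1)))
        ctx = center.expand_as(msgs)
        alpha = F.softmax(self.score(torch.cat([ctx, msgs], -1)), dim=0)
        return (alpha * msgs).sum(dim=0)        # attention-weighted aggregation

dim = 16
att = NeighborhoodAttention(dim)
center = torch.randn(1, dim)                    # embedding of the target entity
out = att(center, torch.randn(5, dim), torch.randn(5, dim))  # 5 neighbors
```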