As the Internet of Things (IoT) continues to expand, data security has become increasingly important for ensuring privacy and safety, especially given the sensitive and sometimes critical nature of the data handled by IoT devices. Hardware-based trusted execution environments exist to protect data, but they are not compatible with low-cost devices that lack hardware-assisted security features. The research in this paper presents software-based protection and encryption mechanisms explicitly designed for embedded devices. The proposed architecture is designed to work with low-cost, low-end devices without requiring changes to the underlying hardware. It protects against hardware attacks and supports runtime updates, enabling devices to write data to protected memory. The proposed solution offers an alternative data security approach for low-cost IoT devices that does not compromise performance or functionality. Our work underscores the importance of developing secure and cost-effective solutions for protecting data in the context of IoT.
Bluetooth Low Energy (BLE) has become a primary transmission medium for the Internet of Things (IoT) and smart wearable devices thanks to its extremely low energy consumption, good network range, and adequate data transfer speed. With the exponential growth of the IoT and of the BLE connection protocol comes a need for defensive techniques, grounded in practical security analysis, to protect it. Unfortunately, IoT-BLE is at risk of spoofing attacks in which an attacker can pose as a device and provide its users with harmful information. Furthermore, the simplified design of this protocol introduces many security and privacy vulnerabilities. This quantitative security analysis is structured around the STRIDE methodology to create a framework for dealing with protection issues affecting IoT-BLE sensors. The analysis therefore provides probable attack scenarios for the various exposures and offers mitigation strategies. In light of this, the authors performed STRIDE threat modeling to understand the attack surface of smart wearable devices supporting BLE. The study evaluates different exploitation scenarios, namely denial of service (DoS), elevation of privilege, information disclosure, spoofing, tampering, and repudiation, on the MI Band, One plus Band, Boat Storm smartwatch, and Fire Bolt Invincible.
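To make the STRIDE-style enumeration concrete, the following is a minimal illustrative sketch (not taken from the paper): it pairs assumed BLE attack-surface elements of a wearable with each STRIDE category, producing the skeleton of a threat model that scenarios and mitigations can then be attached to. The surface names and placeholder fields are assumptions for illustration only.

```python
# Illustrative STRIDE enumeration sketch; surfaces and fields are assumptions.
from dataclasses import dataclass

STRIDE = [
    "Spoofing", "Tampering", "Repudiation",
    "Information disclosure", "Denial of service", "Elevation of privilege",
]

@dataclass
class Threat:
    surface: str      # e.g., "advertising channel", "pairing exchange"
    category: str     # one of the six STRIDE categories
    scenario: str     # probable attack scenario (to be filled in by the analyst)
    mitigation: str   # proposed countermeasure (to be filled in by the analyst)

def build_threat_model(surfaces):
    """Pair every attack-surface element with every STRIDE category."""
    return [Threat(s, c, scenario="TBD", mitigation="TBD")
            for s in surfaces for c in STRIDE]

if __name__ == "__main__":
    model = build_threat_model(
        ["advertising channel", "pairing exchange", "GATT read/write"])
    for t in model[:3]:
        print(f"{t.surface}: {t.category}")
```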
Localizing root causes for multi-dimensional data is critical to ensuring the reliability of online service systems. When a fault occurs, only the measure values within specific attribute combinations are abnormal. Such attribute combinations are substantial clues to the underlying root causes and are thus called root causes of multi-dimensional data. This paper proposes a generic and robust root cause localization approach for multi-dimensional data, PSqueeze. We propose a generic property of root causes for multi-dimensional data, the generalized ripple effect (GRE). Based on it, we propose a novel probabilistic clustering method and a robust heuristic search method. Moreover, we identify the importance of determining external root causes and propose an effective method for doing so, the first in the literature. Our experiments on two real-world datasets with 5400 faults show that the F1-score of PSqueeze outperforms baselines by 32.89%, while the localization time stays around 10 seconds across all cases. The F1-score of PSqueeze in determining external root causes reaches 0.90. Furthermore, case studies in several production systems demonstrate that PSqueeze is helpful for fault diagnosis in the real world.
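As a rough illustration of the underlying idea (this is an assumption-laden sketch, not PSqueeze's actual GRE-based scoring or its probabilistic clustering), candidate attribute combinations can be ranked by how much of the gap between forecast and actual measure values they explain; the record fields and the scoring function below are illustrative only.

```python
# Sketch: rank candidate attribute combinations by explained anomaly fraction.
# Not PSqueeze's real algorithm; field names and scoring are assumptions.

def deviation_score(records, candidate):
    """Fraction of the total forecast-vs-actual gap covered by `candidate`
    (a dict of attribute -> value defining one attribute combination)."""
    covered = [r for r in records
               if all(r["attrs"].get(k) == v for k, v in candidate.items())]
    gap = sum(abs(r["actual"] - r["forecast"]) for r in covered)
    total = sum(abs(r["actual"] - r["forecast"]) for r in records) or 1.0
    return gap / total

def localize(records, candidates):
    """Return candidate combinations ranked by explained anomaly fraction."""
    return sorted(candidates, key=lambda c: deviation_score(records, c),
                  reverse=True)

records = [
    {"attrs": {"province": "Beijing",  "isp": "Mobile"}, "forecast": 100, "actual": 20},
    {"attrs": {"province": "Beijing",  "isp": "Unicom"}, "forecast": 80,  "actual": 78},
    {"attrs": {"province": "Shanghai", "isp": "Mobile"}, "forecast": 90,  "actual": 90},
]
print(localize(records, [{"province": "Beijing"}, {"isp": "Mobile"}])[0])
```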
Achieving resource efficiency while preserving end-user experience is non-trivial for cloud application operators. As cloud applications progressively adopt microservices, resource managers are faced with two distinct levels of system behavior: the end-to-end application latency and per-service resource usage. Translation between these two levels, however, is challenging because user requests traverse heterogeneous services that collectively (but unevenly) contribute to the end-to-end latency. This paper presents Autothrottle, a bi-level learning-assisted resource management framework for SLO-targeted microservices. It architecturally decouples the mechanisms of application SLO feedback and service resource control, and bridges them with the notion of performance targets. This decoupling enables targeted control policies for the two mechanisms, where we combine lightweight heuristics and learning techniques. We evaluate Autothrottle on three microservice applications, with workload traces from production scenarios. Results show superior CPU resource savings, up to 26.21% over the best-performing baseline and up to 93.84% over all baselines.
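The following is a deliberately simplified sketch of the bi-level idea, with the control law, parameter names, and numbers all assumed for illustration (they are not Autothrottle's actual policy): an application-level loop adjusts a shared performance target from SLO feedback, and lightweight per-service logic translates that target into CPU allocations.

```python
# Illustrative bi-level control sketch; control law and constants are assumptions.

def update_performance_target(target, observed_p99_ms, slo_ms, step=0.05):
    """Application level: tighten the target on SLO violation, relax it otherwise."""
    if observed_p99_ms > slo_ms:
        return max(0.1, target * (1 - step))   # demand more headroom per service
    return min(2.0, target * (1 + step))       # reclaim CPU when there is slack

def per_service_allocation(cpu_usage_cores, target):
    """Service level: provision CPU as recent usage scaled by the current target."""
    return cpu_usage_cores / target

# One control interval across two hypothetical services.
target = update_performance_target(target=1.0, observed_p99_ms=210, slo_ms=200)
allocations = {svc: per_service_allocation(usage, target)
               for svc, usage in {"frontend": 1.8, "cart": 0.6}.items()}
print(target, allocations)
```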
Modern society is becoming accustomed to the Internet of Things (IoT) and Cyber-Physical Systems (CPS) for a variety of applications that involve security-critical user data and information transfers. At the lower end of the spectrum, these devices are resource-constrained and lack attack protection. They become a soft target for malicious code modification attacks that steal device data and misuse it in malicious activities. A resilient system requires continuous detection, prevention, and/or recovery, together with correct code execution (including in degraded mode). By and large, existing security primitives (e.g., secure boot, Remote Attestation (RA), Control Flow Attestation (CFA), and Data Flow Attestation (DFA)) focus on detection and prevention, leaving proof of code execution and recovery unanswered. To this end, the proposed work presents RARES, a lightweight Runtime Attack Resilient Embedded System design using verified Proof-of-Execution. It presents the first custom hardware control register (Ctrl_register) based technique for classifying and detecting runtime memory modification attacks. It further demonstrates a Proof of Concept (PoC) implementation of use-case-specific attack prevention and onboard recovery techniques. The prototype implementation on an Artix-7 Field Programmable Gate Array (FPGA) and a comparison with the state of the art demonstrate the very low (2.3%) resource overhead and the efficacy of the proposed solution.
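To convey the flavor of a control-register-based detection check, here is a software simulation sketch (not the RARES hardware design itself): writes are classified by target region, and any write landing in an assumed protected region raises an alert, mimicking the detection bit a hardware Ctrl_register would set. Region names and boundaries are invented for illustration.

```python
# Simulated sketch of control-register-style write monitoring; regions are assumptions.

PROTECTED_REGIONS = {
    "code":    (0x0000, 0x3FFF),   # application code, assumed immutable at runtime
    "secrets": (0x8000, 0x80FF),   # assumed key/attestation storage
}

def classify_write(addr):
    """Return the protected region a write targets, or None if unprotected."""
    for name, (lo, hi) in PROTECTED_REGIONS.items():
        if lo <= addr <= hi:
            return name
    return None

def ctrl_register_monitor(write_trace):
    """Flag runtime memory modification attempts as (address, region) pairs."""
    return [(addr, region) for addr in write_trace
            if (region := classify_write(addr)) is not None]

alerts = ctrl_register_monitor([0x9000, 0x0010, 0x8020])  # flags 0x0010 and 0x8020
print([(hex(a), r) for a, r in alerts])
```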
Applications with low data reuse and frequent irregular memory accesses, such as graph or sparse linear algebra workloads, fail to scale well due to memory bottlenecks and poor core utilization. While prior work with prefetching, decoupling, or pipelining can mitigate memory latency and improve core utilization, memory bottlenecks persist due to limited off-chip bandwidth. Approaches that do processing in-memory (PIM) with the Hybrid Memory Cube (HMC) overcome bandwidth limitations but fail to achieve high core utilization due to poor task scheduling and synchronization overheads. Moreover, the high memory-per-core ratio available with HMC limits strong scaling. We introduce Dalorex, a hardware-software co-design that achieves high parallelism and energy efficiency, demonstrating strong scaling with >16,000 cores when processing graph and sparse linear algebra workloads. Compared with prior PIM work, with both designs using 256 cores, Dalorex improves performance and energy efficiency by two orders of magnitude through (1) a tile-based distributed-memory architecture where each processing tile holds an equal amount of data, and all memory operations are local; (2) a task-based parallel programming model where tasks are executed by the processing unit that is co-located with the target data; (3) a network design optimized for irregular traffic, where all communication is one-way, and messages do not contain routing metadata; (4) novel traffic-aware task scheduling hardware that maintains high core utilization; and (5) a data placement strategy that improves work balance. This work proposes architectural and software innovations to provide the greatest scalability to date for running graph algorithms while still being programmable for other domains.
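The task-based, data-local execution model in item (2) can be illustrated with a conceptual sketch (simplified and assumed; it reflects neither Dalorex's ISA, its network, nor its scheduler): vertices are partitioned across tiles, and a one-way message invokes the task for a vertex on the tile that owns it, so all of that task's memory accesses stay local.

```python
# Conceptual sketch of tile-local task execution for a BFS-style traversal.
# Placement, graph layout, and the message loop are illustrative assumptions.

NUM_TILES = 4
GRAPH = {0: [1, 2], 1: [3], 2: [3], 3: []}   # toy adjacency list; in a real
                                              # design edges live in the owning tile

def owner(vertex):
    return vertex % NUM_TILES                 # assumed round-robin data placement

class Tile:
    def __init__(self, tid):
        self.dist = {v: float("inf") for v in GRAPH if owner(v) == tid}  # local state
        self.inbox = []

    def visit_task(self, vertex, dist):
        """Relaxation runs only where `vertex` lives; emits new one-way messages."""
        if dist < self.dist[vertex]:
            self.dist[vertex] = dist
            return [(nbr, dist + 1) for nbr in GRAPH[vertex]]
        return []

tiles = [Tile(t) for t in range(NUM_TILES)]
tiles[owner(0)].inbox.append((0, 0))          # seed the source vertex
pending = True
while pending:
    pending = False
    for tile in tiles:
        while tile.inbox:
            v, d = tile.inbox.pop()
            for nv, nd in tile.visit_task(v, d):
                tiles[owner(nv)].inbox.append((nv, nd))
                pending = True
```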
With the increasing popularity of Internet of Things (IoT) devices, security concerns have become a major challenge: confidential information is constantly being transmitted (sometimes inadvertently) from user devices to untrusted cloud services. This work proposes a design to enhance security and privacy in IoT-based systems by isolating hardware peripheral drivers in a trusted execution environment (TEE) and leveraging secure machine learning classification techniques to filter out sensitive data (e.g., speech, images) from the associated peripheral devices before it reaches an untrusted party in the cloud.
Clustering is at the very core of machine learning, and its applications proliferate with the increasing availability of data. However, as datasets grow, comparing clusterings with an adjustment for chance becomes computationally difficult, preventing unbiased ground-truth comparisons and solution selection. We propose FastAMI, a Monte Carlo-based method to efficiently approximate the Adjusted Mutual Information (AMI), and extend it to the Standardized Mutual Information (SMI). The approach is compared with the exact calculation and with a recently developed variant of the AMI based on pairwise permutations, using both synthetic and real data. In contrast to the exact calculation, our method is fast enough to enable these adjusted information-theoretic comparisons for large datasets, while remaining considerably more accurate than the pairwise approach.
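To illustrate the general idea of adjustment for chance, the sketch below estimates the expected mutual information under a permutation null by naive label shuffling and plugs it into the usual AMI formula with a max-entropy normalizer. This is only a conceptual stand-in: FastAMI's actual estimator samples far more cleverly and comes with accuracy guarantees, and the normalizer choice here is an assumption.

```python
# Naive Monte Carlo sketch of chance-adjusted mutual information (not FastAMI itself).
import numpy as np
from sklearn.metrics import mutual_info_score

def approx_adjusted_mi(labels_a, labels_b, n_samples=200, seed=None):
    rng = np.random.default_rng(seed)
    mi = mutual_info_score(labels_a, labels_b)
    # Estimate E[MI] under the permutation model by shuffling one labeling.
    emi = np.mean([mutual_info_score(labels_a, rng.permutation(labels_b))
                   for _ in range(n_samples)])
    entropy = lambda x: mutual_info_score(x, x)      # H(X) = MI(X, X)
    denom = max(entropy(labels_a), entropy(labels_b)) - emi
    return (mi - emi) / denom if denom else 0.0

print(approx_adjusted_mi([0, 0, 1, 1, 2, 2], [1, 1, 0, 0, 2, 2], seed=0))
```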
Embedded real-time devices for monitoring, controlling, and collaboration purposes in cyber-physical systems are now commonly equipped with IP networking capabilities. However, the reception and processing of IP packets generate workloads at unpredictable frequencies, as networks are outside of a developer's control and difficult to anticipate, especially when they are connected to the internet. As of now, embedded network controllers and IP stacks are not designed for real-time operation, even when used in real-time environments and operating systems. Our work focuses on real-time-aware packet reception from open network connections, without a real-time networking infrastructure. This article presents two experimentally evaluated modifications to the IP processing subsystem and embedded network interface controllers of constrained IoT devices. The first, our software approach, introduces early packet classification and priority-aware processing in the network driver. In our experiments, this allowed the network subsystem to remain active under a seven-fold increase in network traffic load before disabling receive interrupts as a last resort. The second, our hardware approach, makes changes to the network interface controller, applying interrupt moderation based on real-time priorities to minimize the number of network-generated interrupts. Furthermore, this article provides an outlook on how the software and hardware approaches can be combined in a co-designed packet receive architecture.
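The software approach can be pictured with a simplified sketch (not the actual driver code; the classification rule, queue sizes, and port numbers are assumptions): packets are classified as early as possible in the receive path and steered into priority queues, so low-priority traffic can be shed under load before it competes with real-time tasks.

```python
# Simplified receive-path sketch: early classification plus priority queues.
from collections import deque

HIGH, LOW = 0, 1
queues = {HIGH: deque(maxlen=64), LOW: deque(maxlen=32)}

def classify(packet):
    """Early classification on cheap header fields (assumed: CoAP/MQTT ports)."""
    return HIGH if packet.get("dst_port") in {5683, 1883} else LOW

def rx_interrupt(packet):
    prio = classify(packet)
    q = queues[prio]
    if prio == LOW and len(q) == q.maxlen:
        return                      # shed low-priority load instead of masking IRQs
    q.append(packet)

def process_next():
    """Drain high-priority traffic before any low-priority packet is touched."""
    for prio in (HIGH, LOW):
        if queues[prio]:
            return queues[prio].popleft()
    return None

rx_interrupt({"dst_port": 1883})
rx_interrupt({"dst_port": 80})
print(process_next())               # MQTT packet is processed first
```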
The rapid expansion of Internet of Things (IoT) devices in smart homes has significantly improved the quality of life, offering enhanced convenience, automation, and energy efficiency. However, this proliferation of connected devices raises critical concerns regarding the security and privacy of user data. In this paper, we propose a differential privacy-based system to ensure comprehensive security for data generated by smart homes. We employ the randomized response technique on the data and utilize Local Differential Privacy (LDP) to achieve data privacy. The data is then transmitted to an aggregator, where an obfuscation method is applied to ensure individual anonymity. Furthermore, we implement the Hidden Markov Model (HMM) technique at the aggregator level and apply differential privacy to the private data received from smart homes. Consequently, our approach achieves a dual layer of privacy protection, addressing the security concerns associated with IoT devices in smart cities.
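For intuition on the randomized response step, here is a minimal local differential privacy sketch, assuming a binary reading per device and an illustrative epsilon (the paper's exact encoding and parameters may differ): each device perturbs its bit before sending, and the aggregator debiases the noisy counts.

```python
# Minimal randomized-response sketch for LDP; epsilon and encoding are illustrative.
import math
import random

def randomize(bit, epsilon):
    """Report the true bit with probability e^eps / (e^eps + 1), else flip it."""
    p = math.exp(epsilon) / (math.exp(epsilon) + 1)
    return bit if random.random() < p else 1 - bit

def estimate_count(reports, epsilon):
    """Unbiased estimate of how many devices truly reported 1."""
    p = math.exp(epsilon) / (math.exp(epsilon) + 1)
    n, ones = len(reports), sum(reports)
    return (ones - n * (1 - p)) / (2 * p - 1)

true_bits = [1, 0, 1, 1, 0, 0, 1, 0]
reports = [randomize(b, epsilon=1.0) for b in true_bits]
print(estimate_count(reports, epsilon=1.0))   # close to 4 in expectation
```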
Interest in the field of Explainable Artificial Intelligence has been growing for decades and has accelerated recently. As Artificial Intelligence models have become more complex, and often more opaque, with the incorporation of complex machine learning techniques, explainability has become more critical. Recently, researchers have been investigating and tackling explainability with a user-centric focus, looking for explanations to consider trustworthiness, comprehensibility, explicit provenance, and context-awareness. In this chapter, we leverage our survey of explanation literature in Artificial Intelligence and closely related fields and use these past efforts to generate a set of explanation types that we feel reflect the expanded needs of explanation for today's artificial intelligence applications. We define each type and provide an example question that would motivate the need for this style of explanation. We believe this set of explanation types will help future system designers in their generation and prioritization of requirements and further help generate explanations that are better aligned to users' and situational needs.