
The Internet of Things (IoT) has boomed in recent years, with an ever-growing number of connected devices and a corresponding exponential increase in network traffic. As a result, IoT devices have become potential witnesses of the surrounding environment and the people living in it, creating a vast new source of forensic evidence. To address this need, a new field called IoT Forensics has emerged. In this paper, we present \textit{CSI Sniffer}, a tool that integrates the collection and management of Channel State Information (CSI) in Wi-Fi Access Points. CSI is a physical-layer indicator that enables human sensing, including occupancy monitoring and activity recognition. After describing the tool's architecture and implementation, we demonstrate its capabilities through two application scenarios that use binary classification techniques to classify user behavior based on CSI features extracted from IoT traffic. Our results show that the proposed tool can enhance forensic investigations by providing additional sources of evidence. Wi-Fi Access Points integrated with \textit{CSI Sniffer} can be used by ISPs or network managers to facilitate the collection of information from IoT devices and the surrounding environment. We conclude by analyzing the storage requirements of CSI sample collection and discussing the impact of lossy compression techniques on classification performance.
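As a rough illustration of the binary-classification step described above, the sketch below feeds per-subcarrier CSI amplitude statistics into an off-the-shelf classifier. The windowing, feature choice, and random placeholder data are illustrative assumptions, not the actual \textit{CSI Sniffer} pipeline.

```python
# Minimal sketch: binary classification of user behavior (e.g., room occupied vs. empty)
# from Wi-Fi CSI amplitude features. Feature extraction and window size are illustrative
# assumptions, not the actual CSI Sniffer pipeline; the data are random placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def extract_features(csi_window: np.ndarray) -> np.ndarray:
    """csi_window: (n_packets, n_subcarriers) complex CSI samples for one time window."""
    amp = np.abs(csi_window)                      # per-subcarrier amplitude
    return np.concatenate([amp.mean(axis=0),      # average amplitude per subcarrier
                           amp.std(axis=0)])      # temporal variability per subcarrier

# Placeholder CSI windows (100 packets x 52 subcarriers each) with random binary labels.
csi_windows = [np.random.randn(100, 52) + 1j * np.random.randn(100, 52) for _ in range(200)]
labels = np.random.randint(0, 2, size=200)

X = np.stack([extract_features(w) for w in csi_windows])
X_train, X_test, y_train, y_test = train_test_split(X, labels, test_size=0.3, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```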

Related Content

The journal 《計算機信息》 publishes high-quality papers that broaden the scope of operations research and computing. It seeks original research papers on theory, methodology, experiments, systems, and applications, as well as novel survey and tutorial papers and papers describing new and useful software tools.
July 5, 2023

A major security threat to an integrated circuit (IC) design is the Hardware Trojan attack, a malicious modification of the design. Several previous papers have investigated side-channel analysis for detecting the presence of Hardware Trojans, proposing it as an alternative to conventional logic testing for detecting malicious modifications in a design. Conventional logic testing has been found to be ineffective at detecting small Trojans because its sensitivity decreases under the process variations encountered in manufacturing. The main paper considered in this survey report proposes a new technique for detecting Trojans using multiple-parameter side-channel analysis, and this novel idea is explained thoroughly in the report. We also examine several other papers on single-parameter analysis and how they are implemented, analyze the shortcomings of those single-parameter techniques, and show how the multi-parameter analysis technique improves upon them. Finally, we discuss the combined side-channel analysis and logic testing approach, which achieves higher detection coverage for hardware Trojan circuits of different types and sizes.
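To make the multi-parameter idea concrete, the following sketch jointly examines two side-channel parameters (dynamic power and maximum operating frequency) so that correlated process-variation shifts can be separated from Trojan-induced excess power. The linear trend model, the threshold, and the synthetic measurements are illustrative assumptions, not the surveyed paper's procedure.

```python
# Minimal sketch of multi-parameter side-channel Trojan detection: process variation shifts
# power and frequency together, while a Trojan typically adds power without a matching
# frequency shift, so chips far from the golden power-vs-frequency trend are flagged.
# All values and the 4-sigma threshold are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Golden (Trojan-free) chips: power scales roughly linearly with frequency across process corners.
freq_golden = rng.normal(1.0, 0.05, 200)                      # normalized Fmax
power_golden = 2.0 * freq_golden + rng.normal(0, 0.02, 200)   # normalized dynamic power

# Fit the expected power-vs-frequency trend from the golden measurements.
slope, intercept = np.polyfit(freq_golden, power_golden, deg=1)
residual_std = np.std(power_golden - (slope * freq_golden + intercept))

def is_trojan_suspect(freq: float, power: float, k: float = 4.0) -> bool:
    """Flag a chip whose power deviates from the golden trend by more than k sigma."""
    expected = slope * freq + intercept
    return (power - expected) > k * residual_std

# A chip under test: frequency within the process spread, but extra Trojan leakage in power.
print(is_trojan_suspect(freq=0.98, power=2.10))   # True  -> suspect
print(is_trojan_suspect(freq=0.98, power=1.97))   # False -> consistent with golden trend
```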

Intelligent reflecting surfaces (IRSs), active and/or passive, can be densely deployed in complex environments to significantly enhance wireless network coverage for both wireless information transfer (WIT) and wireless power transfer (WPT). In this letter, we study the downlink WIT/WPT from a multi-antenna base station to a single-antenna user over a multi-active/passive IRS (AIRS/PIRS)-enabled wireless link. In particular, we aim to optimize the location of the AIRS with those of the other PIRSs being fixed to maximize the received signal-to-noise ratio (SNR) and signal power at the user in the cases of WIT and WPT, respectively. We derive the optimal solutions for these two cases in closed form, which reveals that the optimal AIRS deployment is generally different for WIT versus WPT. Furthermore, both analytical and numerical results are provided to show the conditions under which the proposed AIRS deployment strategy yields superior performance to other baseline deployment strategies as well as the conventional all-PIRS-enabled WIT/WPT.
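The following sketch illustrates the kind of placement problem the letter solves analytically: sweeping the AIRS position along the BS-to-user line and picking the location that maximizes the received SNR of the cascaded link. The distance-product path-loss model, the amplifier noise term, and all parameter values are illustrative assumptions rather than the letter's closed-form solution.

```python
# Minimal sketch of the AIRS placement problem: sweep the active-IRS position along a
# BS -> AIRS -> PIRS -> user line and pick the location maximizing received SNR under a
# simple distance-product path-loss model. All parameter values are illustrative assumptions.
import numpy as np

D = 100.0            # BS-to-user distance (m), PIRS fixed near the user
d_pirs = 90.0        # fixed PIRS location
beta = 1e-3          # reference path gain at 1 m
gain_airs = 1e3      # active-IRS amplification power gain
sigma2 = 1e-12       # receiver noise power
sigma2_airs = 1e-11  # noise injected by the active-IRS amplifier

def received_snr(d_airs: float) -> float:
    """SNR of the BS -> AIRS -> PIRS -> user cascade for an AIRS placed at distance d_airs."""
    g1 = beta / max(d_airs, 1.0) ** 2                    # BS -> AIRS
    g2 = beta / max(abs(d_pirs - d_airs), 1.0) ** 2      # AIRS -> PIRS
    g3 = beta / max(D - d_pirs, 1.0) ** 2                # PIRS -> user
    signal = gain_airs * g1 * g2 * g3
    noise = sigma2 + gain_airs * g2 * g3 * sigma2_airs   # amplified AIRS noise also reaches the user
    return signal / noise

positions = np.linspace(1.0, d_pirs - 1.0, 500)
best = positions[np.argmax([received_snr(d) for d in positions])]
print(f"best AIRS position: {best:.1f} m from the BS")
```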

The Fifth Generation (5G) of mobile networks offers new and advanced services with stricter requirements. Multi-access Edge Computing (MEC) is a key technology that enables these new services by deploying multiple devices with computing and storage capabilities at the edge of the network, close to end-users. MEC enhances network efficiency by reducing latency, enabling real-time awareness of the local environment, allowing cloud offloading, and reducing traffic congestion. New mission-critical applications require high security and dependability, which are rarely addressed alongside performance. This survey paper fills this gap by examining 5G MEC from three aspects: security, dependability, and performance. The paper provides an overview of MEC and, for each aspect, introduces a taxonomy, the state of the art, and the related challenges. Finally, the paper discusses the challenges of jointly addressing these three aspects.

Unmanned aerial vehicles (UAVs) have become extremely popular for both military and civilian applications due to their ease of deployment, cost-effectiveness, high maneuverability, and availability. Both applications, however, need reliable communication for command and control (C2) and/or data transmission. Utilizing commercial cellular networks for drone communication can enable beyond-visual-line-of-sight (BVLOS) operation, high-data-rate transmission, and secure communication. However, deploying cellular-connected drones over commercial LTE/5G networks still presents various challenges, such as sparse coverage outside urban areas and interference caused to the network because the UAV is visible to many towers. Commercial 5G networks can offer various features for aerial user equipment (UE) far beyond what LTE could provide by taking advantage of mmWave, flexible numerology, slicing, and the capability of applying AI-based solutions. Limited experimental data is available on the operation of aerial UEs over current commercial 5G networks, without any modification, particularly in suburban and non-urban areas. In this paper, we perform a comprehensive study of drone communications over existing low-band and mid-band 5G networks in a suburban area for different velocities and elevations, comparing the performance against that of LTE. It is important to acknowledge that the network examined in this research is primarily designed and optimized to meet the requirements of terrestrial users and may not adequately address the needs of aerial users. This paper not only reports the Key Performance Indicators (KPIs) compared among all combinations of the test cases but also provides recommendations for aerial users to enhance their communication quality by controlling their trajectory.
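A minimal sketch of the KPI aggregation used in such a measurement study is given below: flight-test log samples are grouped by radio access technology and altitude, and throughput and RSRP are summarized per group. The column names and the handful of synthetic rows are placeholders, not the paper's dataset or results.

```python
# Minimal sketch of KPI aggregation for a flight-test comparison: group measurement samples
# by radio access technology and altitude, then summarize RSRP and downlink throughput.
# Column names and the synthetic placeholder rows are illustrative assumptions only.
import pandas as pd

logs = pd.DataFrame({
    "rat":           ["LTE", "LTE", "5G-lowband", "5G-lowband", "5G-midband", "5G-midband"],
    "altitude_m":    [30, 90, 30, 90, 30, 90],
    "speed_mps":     [5, 10, 5, 10, 5, 10],
    "rsrp_dbm":      [-95, -101, -92, -98, -88, -96],      # synthetic placeholder values
    "dl_tput_mbps":  [38.0, 26.5, 61.2, 44.8, 142.3, 97.6], # synthetic placeholder values
})

kpis = (logs
        .groupby(["rat", "altitude_m"])
        .agg(mean_rsrp_dbm=("rsrp_dbm", "mean"),
             mean_dl_tput_mbps=("dl_tput_mbps", "mean"),
             samples=("dl_tput_mbps", "size"))
        .reset_index())
print(kpis)
```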

Identity and Access Management (IAM) is an access control service in cloud platforms. To securely manage cloud resources, customers are required to configure IAM to specify the access control rules for their cloud organizations. However, IAM misconfiguration may be exploited to perform privilege escalation attacks, which can cause severe economic loss. To detect privilege escalations due to IAM misconfigurations, existing third-party cloud security services apply whitebox penetration testing techniques, which require access to the complete IAM configurations. Since these configurations contain sensitive information, this requirement places a considerable burden on customers, who must spend substantial manual effort anonymizing their configurations before disclosure. In this paper, we propose a precise greybox penetration testing approach called TAC for third-party services to detect IAM privilege escalations. To mitigate the dual challenges of labor-intensive anonymization and potential sensitive information disclosure, TAC interacts with customers by selectively querying only the essential information needed. To accomplish this, we first propose abstract IAM modeling, enabling TAC to detect IAM privilege escalations based on the partial information collected from queries. Moreover, to improve the efficiency and applicability of TAC, we minimize the interactions with customers by applying Reinforcement Learning (RL) with Graph Neural Networks (GNNs), allowing TAC to learn to make as few queries as possible. To pretrain and evaluate TAC on sufficiently diverse tasks, we propose an IAM privilege escalation task generator called IAMVulGen. Experimental results on both our task set and the only publicly available task set, IAM Vulnerable, show that, in comparison to state-of-the-art whitebox approaches, TAC detects IAM privilege escalations with competitively low false negative rates while issuing a limited number of queries.
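The sketch below illustrates the greybox querying idea in miniature: instead of receiving the full IAM configuration, the tester asks about one permission edge at a time and stops once an escalation path is confirmed. The toy permission graph, query oracle, and breadth-first exploration are illustrative assumptions, not TAC's abstract IAM model or its RL/GNN query policy.

```python
# Minimal sketch of greybox privilege-escalation detection: maintain a partial permission
# graph and query the customer only about edges reachable from the current frontier,
# stopping when a path from a low-privilege principal to an admin-equivalent node is found.
# The graph contents and query oracle are illustrative assumptions.
from collections import deque

# Ground truth held by the customer (unknown to the tester): which permission edges exist.
customer_config = {
    ("user:dev", "role:ci"): True,              # dev can assume the CI role
    ("role:ci", "perm:iam:PassRole"): True,     # CI role can pass roles
    ("perm:iam:PassRole", "role:admin"): True,  # passing roles reaches an admin role
    ("user:dev", "role:admin"): False,
}

def query_customer(edge):
    """One interaction with the customer: does this single edge exist?"""
    return customer_config.get(edge, False)

def escalation_found(start, target, candidate_edges):
    """BFS over the permission graph, querying each candidate edge at most once."""
    known, frontier, queries = {}, deque([start]), 0
    while frontier:
        node = frontier.popleft()
        if node == target:
            return True, queries
        for (src, dst) in candidate_edges:
            if src == node and (src, dst) not in known:
                known[(src, dst)] = query_customer((src, dst))
                queries += 1
                if known[(src, dst)]:
                    frontier.append(dst)
    return False, queries

found, n = escalation_found("user:dev", "role:admin", list(customer_config))
print(f"escalation path found: {found} after {n} queries")
```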

This paper presents a novel multi-agent deep reinforcement learning (MADRL)-based positioning algorithm for multiple collaborating unmanned aerial vehicles (UAVs), i.e., UAVs working as mobile base stations. The primary objective of the proposed algorithm is to establish dependable mobile access networks for cellular vehicle-to-everything (C-V2X) communication, thereby facilitating the realization of high-quality intelligent transportation systems (ITS). Reliable mobile access services can be achieved in the following two ways: i) energy-efficient UAV operation and ii) reliable wireless communication services. For energy-efficient UAV operation, the reward of the proposed MADRL algorithm incorporates UAV energy consumption models in order to encourage efficient operation. For reliable wireless communication services, the quality of service (QoS) requirements of individual users are considered as part of the reward, and 60 GHz mmWave radio is used for mobile access. This paper adopts 60 GHz mmWave access to exploit the benefits of i) ultra-wide bandwidth for multi-Gbps high-speed communication and ii) highly directional transmission, which enables spatial reuse that is particularly beneficial for densely deployed users. Lastly, a comprehensive and data-intensive performance evaluation of the proposed MADRL-based multi-UAV positioning algorithm is conducted. The results demonstrate that the proposed algorithm outperforms existing algorithms.
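A minimal sketch of a per-step reward combining the two objectives above (an energy penalty and a QoS-satisfaction bonus) is shown below. The energy model, QoS threshold, and weights are illustrative assumptions, not the paper's exact reward formulation.

```python
# Minimal sketch of a per-UAV, per-step reward that penalizes energy consumption and
# rewards the number of users whose QoS requirement is met. Values are illustrative.
from dataclasses import dataclass

@dataclass
class StepObservation:
    hover_energy_j: float          # energy spent hovering during the step
    move_energy_j: float           # energy spent repositioning during the step
    user_rates_mbps: list          # achieved downlink rate of each served user

def step_reward(obs: StepObservation,
                qos_target_mbps: float = 100.0,   # assumed per-user QoS requirement
                w_energy: float = 0.01,
                w_qos: float = 1.0) -> float:
    energy_penalty = w_energy * (obs.hover_energy_j + obs.move_energy_j)
    qos_bonus = w_qos * sum(1.0 for r in obs.user_rates_mbps if r >= qos_target_mbps)
    return qos_bonus - energy_penalty

# Example: 3 of 4 users meet the 100 Mbps target, with modest energy use this step.
obs = StepObservation(hover_energy_j=50.0, move_energy_j=20.0,
                      user_rates_mbps=[120.0, 95.0, 150.0, 110.0])
print(step_reward(obs))   # 3.0 - 0.7 = 2.3
```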

Recent results of machine learning for automatic vulnerability detection have been very promising: given only the source code of a function $f$, models trained by machine learning techniques can decide whether $f$ contains a security flaw with up to 70% accuracy. But how do we know that these results are general and not specific to the datasets? To study this question, researchers proposed to amplify the testing set by injecting semantic-preserving changes and found that the model's accuracy drops significantly. In other words, the model relies on some unrelated features during classification. To increase the robustness of the model, researchers proposed to train on amplified training data, and indeed model accuracy returned to previous levels. In this paper, we replicate and extend this investigation, and provide an actionable model-benchmarking methodology to help researchers better evaluate advances in machine learning for vulnerability detection. Specifically, we propose (i) a cross-validation algorithm, in which a semantic-preserving transformation is applied during the amplification of either the training set or the testing set, and (ii) the amplification of the testing set with code snippets in which the vulnerabilities are fixed. Using 11 transformations, 3 ML techniques, and 2 datasets, we find that the improved robustness only applies to the specific transformations used during training-data amplification. In other words, the robustified models still rely on unrelated features for predicting vulnerabilities in the testing data. Additionally, we find that the trained models are unable to generalize to the modified setting, which requires distinguishing vulnerable functions from their patches.
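The sketch below shows the test-set amplification idea in miniature: apply a semantics-preserving identifier renaming to each test function and measure how much a model's accuracy drops, exposing reliance on surface features. The toy transformation, toy keyword model, and two example functions are illustrative assumptions, not the paper's 11 transformations or trained models.

```python
# Toy sketch of test-set amplification with a semantics-preserving transformation: local
# identifiers are renamed to generic names, and accuracy is compared on the original versus
# the transformed test set. The renaming rule, the keyword "model", and the two example
# functions are illustrative only.
import re

KEYWORDS = {"int", "char", "return", "if", "strcpy"}   # keep keywords and library calls intact

def rename_identifiers(source: str) -> str:
    """Rename every local identifier to var_i; keywords and library calls are preserved."""
    names = sorted(set(re.findall(r"\b[a-z_][a-z0-9_]*\b", source)) - KEYWORDS)
    for i, name in enumerate(names):
        source = re.sub(rf"\b{name}\b", f"var_{i}", source)
    return source

class KeywordModel:
    """Toy stand-in for a classifier that latched onto a surface feature (the name 'buf')."""
    def predict(self, source: str) -> int:
        return int("buf" in source)

def accuracy(model, functions, labels):
    return sum(model.predict(f) == y for f, y in zip(functions, labels)) / len(labels)

def robustness_gap(model, test_functions, test_labels):
    """Accuracy drop caused by amplifying the testing set with the transformation."""
    original = accuracy(model, test_functions, test_labels)
    amplified = accuracy(model, [rename_identifiers(f) for f in test_functions], test_labels)
    return original - amplified

funcs = ["int f(char *s){ char buf[8]; strcpy(buf, s); return 0; }",   # vulnerable (label 1)
         "int g(int x){ return x + 1; }"]                               # not vulnerable (label 0)
print(robustness_gap(KeywordModel(), funcs, [1, 0]))                    # 0.5: the model relied on 'buf'
```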

Edge computing facilitates low-latency services at the network's edge by distributing computation, communication, and storage resources within the geographic proximity of mobile and Internet-of-Things (IoT) devices. The recent advancement in Unmanned Aerial Vehicles (UAVs) technologies has opened new opportunities for edge computing in military operations, disaster response, or remote areas where traditional terrestrial networks are limited or unavailable. In such environments, UAVs can be deployed as aerial edge servers or relays to facilitate edge computing services. This form of computing is also known as UAV-enabled Edge Computing (UEC), which offers several unique benefits such as mobility, line-of-sight, flexibility, computational capability, and cost-efficiency. However, the resources on UAVs, edge servers, and IoT devices are typically very limited in the context of UEC. Efficient resource management is, therefore, a critical research challenge in UEC. In this article, we present a survey on the existing research in UEC from the resource management perspective. We identify a conceptual architecture, different types of collaborations, wireless communication models, research directions, key techniques and performance indicators for resource management in UEC. We also present a taxonomy of resource management in UEC. Finally, we identify and discuss some open research challenges that can stimulate future research directions for resource management in UEC.

The Internet of Things (IoT) boom has revolutionized almost every corner of people's daily lives: healthcare, home, transportation, manufacturing, supply chain, and so on. With the recent development of sensor and communication technologies, IoT devices including smart wearables, cameras, smartwatches, and autonomous vehicles can accurately measure and perceive their surrounding environment. Continuous sensing generates massive amounts of data and presents challenges for machine learning. Deep learning models (e.g., convolutional neural networks and recurrent neural networks) have been extensively employed in solving IoT tasks by learning patterns from multi-modal sensory data. Graph Neural Networks (GNNs), an emerging and fast-growing family of neural network models, can capture complex interactions within sensor topology and have been demonstrated to achieve state-of-the-art results in numerous IoT learning tasks. In this survey, we present a comprehensive review of recent advances in the application of GNNs to the IoT field, including a deep-dive analysis of GNN design in various IoT sensing environments, an overarching list of public data and source code from the collected publications, and future research directions. To keep track of newly published works, we collect representative papers and their open-source implementations and create a GitHub repository at //github.com/GuiminDong/GNN4IoT.
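As a minimal illustration of why GNNs suit sensor topologies, the sketch below runs one graph-convolution step in which each sensor aggregates features from its neighbors in the topology before a linear transform. The adjacency matrix, feature dimensions, and random weights are illustrative assumptions, not a model from any of the surveyed papers.

```python
# Minimal sketch of one message-passing step over an IoT sensor graph: each node combines
# its own reading with those of adjacent sensors via symmetric-normalized aggregation.
# Adjacency, feature sizes, and weights are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# 4 sensors, 3 features each (e.g., temperature, humidity, battery level).
X = rng.normal(size=(4, 3))

# Undirected sensor topology: 0-1, 1-2, 2-3 (e.g., physical proximity or radio links).
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

def gcn_layer(X, A, W):
    """One graph-convolution step: normalized neighborhood averaging, then a linear map + ReLU."""
    A_hat = A + np.eye(A.shape[0])                 # add self-loops
    deg_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    A_norm = A_hat * deg_inv_sqrt[:, None] * deg_inv_sqrt[None, :]
    return np.maximum(A_norm @ X @ W, 0.0)         # aggregated features per sensor

W = rng.normal(size=(3, 8))                        # learnable weights (random here)
H = gcn_layer(X, A, W)
print(H.shape)                                     # (4, 8): one embedding per sensor node
```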

Data processing and analytics are fundamental and pervasive. Algorithms play a vital role in data processing and analytics, and many algorithm designs have incorporated heuristics and general rules from human knowledge and experience to improve their effectiveness. Recently, reinforcement learning, and deep reinforcement learning (DRL) in particular, has been increasingly explored and exploited in many areas because it can learn better strategies than statically designed algorithms in the complicated environments it interacts with. Motivated by this trend, we provide a comprehensive review of recent works on using DRL to improve data processing and analytics. First, we present an introduction to key concepts, theories, and methods in DRL. Next, we discuss DRL deployment on database systems, which facilitates data processing and analytics in various aspects, including data organization, scheduling, tuning, and indexing. Then, we survey the application of DRL in data processing and analytics, ranging from data preparation and natural language processing to healthcare, fintech, etc. Finally, we discuss important open challenges and future research directions for using DRL in data processing and analytics.
