
The evolution of communication technologies, exemplified by the Internet of Things (IoT) and cloud computing, has significantly enhanced the speed and accessibility of Public Safety (PS) services, which are critical to ensuring the safety and security of our environment. However, these advancements also introduce inherent security and privacy challenges. In response, this research presents a novel and adaptable access control scheme tailored to PS services in cloud-supported IoT environments. Our proposed access control protocol combines the strengths of Key-Policy Attribute-Based Encryption (KP-ABE) and Identity-Based Broadcast Encryption (IDBB) to establish a robust security framework for cloud-supported IoT in the context of PS services. Through an Elliptic Curve Diffie-Hellman (ECDH) scheme between entities, we ensure entity authentication, data confidentiality, and integrity, addressing fundamental security requirements. A noteworthy aspect of our lightweight protocol is the delegation of user private key generation within the KP-ABE scheme to an untrusted cloud entity. This strategic offloading of computational and communication overhead preserves data privacy, as the cloud is precluded from accessing sensitive information. To achieve this, we employ an IDBB scheme to generate secret private keys for system users based on their roles, requiring the logical conjunction ('AND') of user attributes to access data. This architecture effectively conceals user identities from the cloud service provider. Comprehensive analysis validates the efficacy of the proposed protocol, confirming its ability to ensure system security and availability within acceptable parameters.
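
As an illustration of the ECDH handshake such a protocol builds on, the sketch below derives a shared session key between two entities using Python's cryptography package; the curve choice, the HKDF parameters, and the entity names are illustrative assumptions, not the paper's exact design.

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ec
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF

    # Each entity (e.g., an IoT node and a PS server) generates an EC key pair.
    device_priv = ec.generate_private_key(ec.SECP256R1())
    server_priv = ec.generate_private_key(ec.SECP256R1())

    # Each side combines its own private key with the peer's public key and
    # arrives at the same shared secret without ever transmitting it.
    device_shared = device_priv.exchange(ec.ECDH(), server_priv.public_key())
    server_shared = server_priv.exchange(ec.ECDH(), device_priv.public_key())
    assert device_shared == server_shared

    # Derive a symmetric session key for confidentiality and integrity.
    session_key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                       info=b"ps-iot-session").derive(device_shared)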

Related Content

Owing to its promising ability to save hardware cost and spectrum resources, integrated sensing and communication (ISAC) is regarded as a revolutionary technology for future sixth-generation (6G) networks. The mono-static ISAC systems considered in most existing works can achieve only limited sensing performance due to the single observation angle and easily blocked transmission links, which motivates researchers to investigate cooperative ISAC networks. To further improve the degrees of freedom (DoFs) of cooperative ISAC networks, the transmitter-receiver selection problem, i.e., base station (BS) mode selection, merits study. However, to the best of our knowledge, this crucial problem has not been extensively studied in existing works. In this paper, we consider the joint BS mode selection, transmit beamforming, and receive filter design for cooperative cell-free ISAC networks, where multiple BSs cooperatively serve communication users and detect targets. We aim to maximize the sum of the sensing signal-to-interference-plus-noise ratio (SINR) under communication SINR requirements, a total power budget, and constraints on the numbers of transmit/receive BSs. An efficient joint beamforming design algorithm and three different heuristic BS mode selection methods are proposed to solve this non-convex NP-hard problem. Simulation results demonstrate the advantages of cooperative ISAC networks, the importance of BS mode selection, and the effectiveness of the proposed algorithms.
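
To make the mode selection problem concrete, the toy sketch below enumerates transmit/receive assignments under the cardinality constraints and scores each with a placeholder gain matrix; the paper's actual objective requires solving the joint beamforming and filter design, which this surrogate deliberately sidesteps.

    import itertools
    import numpy as np

    rng = np.random.default_rng(0)
    n_bs, n_tx, n_rx = 6, 3, 3                 # assumed network size / mode budgets
    gain = rng.rayleigh(size=(n_bs, n_bs))     # toy bistatic sensing gains

    best_score, best_mode = -np.inf, None
    for tx in itertools.combinations(range(n_bs), n_tx):
        rest = [b for b in range(n_bs) if b not in tx]
        for rx in itertools.combinations(rest, n_rx):
            # Surrogate for the sum sensing SINR over all Tx/Rx BS pairs.
            score = sum(gain[t, r] for t in tx for r in rx)
            if score > best_score:
                best_score, best_mode = score, (tx, rx)

    print("Tx BSs:", best_mode[0], "Rx BSs:", best_mode[1])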

Video instance segmentation, also known as multi-object tracking and segmentation, is an emerging computer vision research area introduced in 2019 that aims at detecting, segmenting, and tracking instances in videos simultaneously. By tackling video instance segmentation through effective analysis and use of the visual information in videos, a range of computer vision-enabled applications (e.g., human action recognition, medical image processing, autonomous vehicle navigation, and surveillance) can be implemented. As deep-learning techniques take a dominant role across computer vision, a plethora of deep-learning-based video instance segmentation schemes have been proposed. This survey offers a multifaceted view of deep-learning schemes for video instance segmentation, covering the various architectural paradigms along with comparisons of functional performance, model complexity, and computational overhead. In addition to the common architectural designs, auxiliary techniques for improving the performance of deep-learning models for video instance segmentation are compiled and discussed. Finally, we discuss major challenges and directions for further investigation to help advance this promising research field.
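
For intuition about the tracking component of the task, here is a generic frame-to-frame association sketch based on mask IoU and Hungarian matching; it illustrates a common linking step in tracking-by-segmentation pipelines, not any specific method from the survey.

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def mask_iou(a, b):
        inter = np.logical_and(a, b).sum()
        union = np.logical_or(a, b).sum()
        return inter / union if union else 0.0

    def associate(prev_masks, curr_masks, iou_thresh=0.5):
        # Cost matrix of (1 - IoU); Hungarian matching minimizes total cost.
        cost = np.array([[1.0 - mask_iou(p, c) for c in curr_masks]
                         for p in prev_masks])
        rows, cols = linear_sum_assignment(cost)
        return [(r, c) for r, c in zip(rows, cols)
                if 1.0 - cost[r, c] >= iou_thresh]

    prev = [np.zeros((4, 4), bool)]; prev[0][:2] = True
    curr = [np.zeros((4, 4), bool)]; curr[0][:3] = True
    print(associate(prev, curr))   # [(0, 0)] -- IoU 2/3 links the pair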

The rapid development of intelligent transportation systems and connected vehicles has highlighted the need for secure and efficient key management systems (KMS). In this paper, we introduce VDKMS (Vehicular Decentralized Key Management System), a novel decentralized key management infrastructure designed specifically for Cellular Vehicle-to-Everything (V2X) networks and built on a blockchain-based approach. The proposed VDKMS addresses the challenges of secure communication, privacy preservation, and efficient key management in V2X scenarios. It integrates blockchain technology, Self-Sovereign Identity (SSI) principles, and Decentralized Identifiers (DIDs) to enable secure and trustworthy V2X applications among vehicles, infrastructure, and networks. We first provide a comprehensive overview of the system architecture, components, protocols, and workflows, covering aspects such as provisioning, registration, verification, and authorization. We then present a detailed performance evaluation and a security analysis, discussing the security properties and compatibility of the proposed solution. Finally, we present potential applications in the vehicular ecosystem that can leverage the advantages of our approach.
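
As a rough illustration of DID-based key material (not the paper's actual VDKMS protocol or DID method), the sketch below generates an Ed25519 key pair, forms a minimal DID-style document, and signs a message; the identifier scheme and field names follow generic SSI conventions and are assumptions here.

    import base64
    import hashlib
    from cryptography.hazmat.primitives import serialization
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    priv = Ed25519PrivateKey.generate()
    pub_bytes = priv.public_key().public_bytes(
        serialization.Encoding.Raw, serialization.PublicFormat.Raw)

    # A vehicle's identifier derived from its public key (toy "did:example").
    did = "did:example:" + hashlib.sha256(pub_bytes).hexdigest()[:16]
    did_document = {
        "id": did,
        "verificationMethod": [{
            "id": did + "#key-1",
            "type": "Ed25519VerificationKey2020",
            "publicKeyBase64": base64.b64encode(pub_bytes).decode(),
        }],
    }

    # A V2X message can now be signed and verified against the published key.
    signature = priv.sign(b"basic-safety-message")
    priv.public_key().verify(signature, b"basic-safety-message")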

An experimental Quantum Key Distribution (QKD) implementation requires advanced, costly hardware that is unavailable in most research environments, making protocol testing and performance evaluation complicated. Historically, this has been a major motivation for the development of QKD simulation frameworks, which allow researchers to obtain insight before proceeding to practical implementations. Several simulators have been introduced over recent years; however, only four are publicly available, and only one of them models equipment imperfections. Currently, no open-source simulator includes all of the following capabilities: channel attenuation modeling, equipment imperfections and their effect on key rates, estimation of elapsed time during quantum channel processes, use of truly random binary sequences for qubits and measurement bases, and shared-bit fraction customization. In this paper, we present NuQKD, an open-source, modular, intuitive simulator featuring all of the above capabilities. NuQKD establishes communication between two computer terminals, accepts custom inputs (iterations, raw key size, interception rate, etc.), and evaluates the sifted key length, Quantum Bit Error Rate (QBER), elapsed communication time, and more. NuQKD's capabilities include optical fiber and free-space simulation, modeling of equipment/channel imperfections, bitstrings from a true random number generator, modular design, and automated evaluation of performance metrics. We expect NuQKD to enable convenient and accurate representation of actual experimental conditions.
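
For a flavor of the quantities such a simulator reports, the toy BB84 round below maps a raw key size and interception rate to a sifted key length and QBER. The model and parameter names are illustrative and are not NuQKD's API, and Python's secrets module (a CSPRNG) stands in for a true random number generator.

    import secrets

    def bb84_round(raw_bits=1024, intercept_rate=0.25):
        bits   = [secrets.randbelow(2) for _ in range(raw_bits)]   # Alice's bits
        a_base = [secrets.randbelow(2) for _ in range(raw_bits)]   # Alice's bases
        b_base = [secrets.randbelow(2) for _ in range(raw_bits)]   # Bob's bases

        received = []
        for bit, ab in zip(bits, a_base):
            if secrets.randbelow(100) < intercept_rate * 100:
                # Eve measures in a random basis; a wrong basis randomizes the bit.
                if secrets.randbelow(2) != ab:
                    bit = secrets.randbelow(2)
            received.append(bit)

        # Sifting: keep only positions where Alice's and Bob's bases agree.
        sifted = [(a, r) for a, r, ab, bb in zip(bits, received, a_base, b_base)
                  if ab == bb]
        errors = sum(a != r for a, r in sifted)
        return len(sifted), errors / len(sifted) if sifted else 0.0

    # Roughly half the raw bits survive sifting; QBER grows with interception
    # (about intercept_rate / 4 in this simplified intercept-resend model).
    print(bb84_round())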

The widespread use of smartphones has made Inertial Measurement Units broadly available, providing a wide range of sensory data that can be advantageous for the detection of transportation modes. The objective of this study is to propose a novel end-to-end approach that effectively exploits a reduced amount of sensory data collected from a smartphone to achieve accurate mode detection in common daily traveling activities. Our approach, called Feature Pyramid biLSTM (FPbiLSTM), reduces the number of sensors required and the processing demands, resulting in a more efficient modeling process without sacrificing the quality of the outcomes relative to other current models. FPbiLSTM extends an existing CNN-biLSTM model with a Feature Pyramid Network, leveraging both the richness of shallow layers and the feature resilience of deeper layers to capture temporal movement patterns across various transportation modes. It achieves excellent performance using data collected from only three of the seven sensors in the 2018 Sussex-Huawei Locomotion (SHL) challenge dataset, i.e., the accelerometer, gyroscope, and magnetometer, attaining a noteworthy accuracy of 95.1% and an F1-score of 94.7% in detecting eight different transportation modes.
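
A rough PyTorch sketch of the architectural idea (not the paper's exact configuration) appears below: CNN features from two depths are fused FPN-style, passed through a bidirectional LSTM over time, and classified into eight modes. All layer sizes are assumptions.

    import torch
    import torch.nn as nn

    class FPbiLSTMSketch(nn.Module):
        def __init__(self, in_ch=9, hidden=64, n_classes=8):
            super().__init__()
            self.conv1 = nn.Sequential(nn.Conv1d(in_ch, 32, 5, padding=2), nn.ReLU())
            self.conv2 = nn.Sequential(nn.Conv1d(32, 64, 5, stride=2, padding=2),
                                       nn.ReLU())
            self.lateral = nn.Conv1d(32, 64, 1)    # project shallow features
            self.up = nn.Upsample(scale_factor=2)  # match temporal resolution
            self.bilstm = nn.LSTM(64, hidden, batch_first=True, bidirectional=True)
            self.head = nn.Linear(2 * hidden, n_classes)

        def forward(self, x):                # x: (batch, channels, time)
            c1 = self.conv1(x)               # shallow, high-resolution features
            c2 = self.conv2(c1)              # deeper, coarser features
            fused = self.lateral(c1) + self.up(c2)  # FPN-style top-down fusion
            out, _ = self.bilstm(fused.transpose(1, 2))
            return self.head(out[:, -1])     # classify from the last time step

    # Three sensors x three axes = 9 input channels over a 128-sample window.
    logits = FPbiLSTMSketch()(torch.randn(2, 9, 128))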

Over the past few years, the explosion in sparse tensor algebra workloads has led to a corresponding rise in domain-specific accelerators to service them. Due to the irregularity present in sparse tensors, these accelerators employ a wide variety of novel solutions to achieve good performance. At the same time, prior work on design-flexible sparse accelerator modeling does not express this full range of design features, making it difficult to understand the impact of each design choice and compare or extend the state-of-the-art. To address this, we propose TeAAL: a language and simulator generator for the concise and precise specification and evaluation of sparse tensor algebra accelerators. We use TeAAL to represent and evaluate four disparate state-of-the-art accelerators -- ExTensor, Gamma, OuterSPACE, and SIGMA -- and verify that it reproduces their performance with high accuracy. Finally, we demonstrate the potential of TeAAL as a tool for designing new accelerators by showing how it can be used to speed up vertex-centric programming accelerators -- achieving $1.9\times$ on BFS and $1.2\times$ on SSSP over GraphDynS.
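
The workload class these accelerators target can be illustrated with a plain-Python CSR sparse-matrix/vector kernel, shown below; this conveys only the computation and its irregularity, not TeAAL's specification language, whose syntax is not reproduced here.

    import numpy as np

    def csr_spmv(data, indices, indptr, x):
        y = np.zeros(len(indptr) - 1)
        for row in range(len(y)):
            # Only stored nonzeros of this row contribute; the variable number
            # of nonzeros per row is the irregularity accelerators must handle.
            for k in range(indptr[row], indptr[row + 1]):
                y[row] += data[k] * x[indices[k]]
        return y

    # The 3x3 matrix [[1,0,2],[0,0,3],[4,0,0]] in CSR form:
    y = csr_spmv([1, 2, 3, 4], [0, 2, 2, 0], [0, 2, 3, 4], np.ones(3))
    print(y)  # [3. 3. 4.]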

Over the past few years, the rapid development of deep learning technologies for computer vision has greatly advanced the performance of medical image segmentation (MedISeg). However, recent MedISeg publications usually focus on presenting their major contributions (e.g., network architectures, training strategies, and loss functions) while unwittingly ignoring some marginal implementation details (also known as "tricks"), leading to potentially unfair comparisons of experimental results. In this paper, we collect a series of MedISeg tricks for different model implementation phases (i.e., pre-training, data pre-processing, data augmentation, model implementation, model inference, and result post-processing) and experimentally explore their effectiveness on consistent baseline models. Compared to paper-driven surveys that focus only on the advantages and limitations of segmentation models, our work provides a large number of solid experiments and is more technically operable. With extensive experimental results on representative 2D and 3D medical image datasets, we explicitly clarify the effect of these tricks. Moreover, based on the surveyed tricks, we have also open-sourced a strong MedISeg repository, in which each component is plug-and-play. We believe that this work not only provides a comprehensive and complementary survey of state-of-the-art MedISeg approaches, but also offers a practical guide for addressing future medical image processing challenges, including but not limited to small-dataset learning, class-imbalance learning, multi-modality learning, and domain adaptation. The code has been released at: //github.com/hust-linyi/MedISeg
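
As one example of the kind of trick the paper catalogs in the inference/post-processing phase, the sketch below applies horizontal-flip test-time augmentation to a generic segmentation model; model here is any network returning per-pixel logits, and the benefit of the trick on a given dataset would need to be verified experimentally.

    import torch

    @torch.no_grad()
    def flip_tta(model, image):            # image: (batch, channels, H, W)
        logits = model(image)
        flipped = model(torch.flip(image, dims=[-1]))
        # Un-flip the second prediction so the two logit maps are aligned,
        # then average; this often smooths boundary errors at small extra cost.
        return 0.5 * (logits + torch.flip(flipped, dims=[-1]))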

With the advent of 5G commercialization, the need for more reliable, faster, and more intelligent telecommunication systems is envisaged for next-generation, beyond-5G (B5G) radio access technologies. Artificial Intelligence (AI) and Machine Learning (ML) are not only immensely popular in service-layer applications but have also been proposed as essential enablers in many aspects of B5G networks, from IoT devices and edge computing to cloud-based infrastructures. However, most existing surveys on B5G security focus on the performance and accuracy of AI/ML models while often overlooking the accountability and trustworthiness of the models' decisions. Explainable AI (XAI) methods are promising techniques that allow system developers to inspect the internal workings of AI/ML black-box models. The goal of using XAI in the B5G security domain is to make security decision-making processes transparent and comprehensible to stakeholders, holding the systems accountable for automated actions. This survey emphasizes the role of XAI in every facet of the forthcoming B5G era, including technologies such as the RAN, zero-touch network management, and E2E slicing, together with the use cases that general users would ultimately enjoy. Furthermore, we present the lessons learned from recent efforts and future research directions, building on currently conducted projects involving XAI.
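
As a small, generic example of the post-hoc explainability such a survey advocates, the sketch below attributes a tree-based traffic classifier's decisions to its input features with SHAP; the data and the anomaly-detection framing are synthetic placeholders, not a B5G testbed.

    import numpy as np
    import shap
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 6))                  # e.g., flow-level statistics
    y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)  # benign vs. anomalous

    model = RandomForestClassifier(n_estimators=50).fit(X, y)
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X[:10])
    # Per-feature contributions for each decision let an operator audit why a
    # flow was flagged, supporting accountability for automated actions.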

Deep learning has revolutionized the fields of computer vision, natural language understanding, speech recognition, information retrieval, and more. However, with the progressive improvements in deep learning models, their number of parameters, latency, and resources required to train have all increased significantly. Consequently, it has become important to pay attention to these footprint metrics of a model, not just its quality. We present and motivate the problem of efficiency in deep learning, followed by a thorough survey of the five core areas of model efficiency (spanning modeling techniques, infrastructure, and hardware) and the seminal work in each. We also present an experiment-based guide, along with code, for practitioners to optimize their model training and deployment. We believe this is the first comprehensive survey in the efficient deep learning space that covers the landscape of model efficiency from modeling techniques to hardware support. Our hope is that this survey provides the reader with the mental model and the understanding of the field needed to apply generic efficiency techniques for immediate, significant improvements, and also equips them with ideas for further research and experimentation to achieve additional gains.
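
As one concrete technique from the compression family such surveys cover, the sketch below applies PyTorch's post-training dynamic quantization to a small model's linear layers; the model is a placeholder, and the accuracy/footprint trade-off depends on the workload.

    import torch
    import torch.nn as nn

    model = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 10))
    quantized = torch.quantization.quantize_dynamic(
        model, {nn.Linear}, dtype=torch.qint8)  # weights stored as int8

    out = quantized(torch.randn(1, 512))  # same interface, reduced footprint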

Cross-domain recommendation is an effective way of alleviating data sparsity in recommender systems by leveraging knowledge from relevant domains, with transfer learning as the class of algorithms underlying these techniques. In this paper, we propose a novel transfer learning approach for cross-domain recommendation that uses neural networks as the base model. We assume that the hidden layers of the two base networks are connected by cross mappings, leading to the collaborative cross networks (CoNet). CoNet enables dual knowledge transfer across domains by introducing cross connections from one base network to the other and vice versa. CoNet is realized in multi-layer feedforward networks by adding dual connections and joint loss functions, and can be trained efficiently by back-propagation. The proposed model is evaluated on two real-world datasets, outperforming baseline models by relative improvements of 3.56% in MRR and 8.94% in NDCG.
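
A minimal PyTorch sketch of the cross-connection idea follows: two MLP towers, one per domain, exchange hidden activations through learned cross mappings and are trained with a joint loss. Layer widths and the placeholder loss are assumptions, not the paper's setup.

    import torch
    import torch.nn as nn

    class CoNetSketch(nn.Module):
        def __init__(self, dim_in=64, hidden=32, n_items=100):
            super().__init__()
            self.a1, self.b1 = nn.Linear(dim_in, hidden), nn.Linear(dim_in, hidden)
            self.a2, self.b2 = nn.Linear(hidden, hidden), nn.Linear(hidden, hidden)
            self.cross_ab = nn.Linear(hidden, hidden, bias=False)  # A -> B transfer
            self.cross_ba = nn.Linear(hidden, hidden, bias=False)  # B -> A transfer
            self.out_a = nn.Linear(hidden, n_items)
            self.out_b = nn.Linear(hidden, n_items)

        def forward(self, xa, xb):
            ha, hb = torch.relu(self.a1(xa)), torch.relu(self.b1(xb))
            # Each tower's next layer also receives the other tower's activations.
            ha2 = torch.relu(self.a2(ha) + self.cross_ba(hb))
            hb2 = torch.relu(self.b2(hb) + self.cross_ab(ha))
            return self.out_a(ha2), self.out_b(hb2)

    net = CoNetSketch()
    scores_a, scores_b = net(torch.randn(8, 64), torch.randn(8, 64))
    loss = scores_a.mean() + scores_b.mean()  # placeholder for the joint loss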
