
Physical layer security (PLS) technologies are expected to play an important role in next-generation wireless networks by providing secure communication that protects critical and sensitive information from illegitimate devices. In this paper, we propose a novel secure communication scheme in which the legitimate receiver uses full-duplex (FD) technology to transmit jamming signals, assisted by a simultaneously transmitting and reflecting reconfigurable intelligent surface (STAR-RIS) that can operate under the energy splitting (ES) model or the mode switching (MS) model, to interfere with the eavesdropper's undesired reception. We aim to maximize the secrecy capacity by jointly optimizing the FD beamforming vectors, the amplitude and phase shift coefficients of the ES-RIS, and the mode selection and phase shift coefficients of the MS-RIS. With this optimization, the proposed scheme concentrates the jamming signals on the eavesdropper while simultaneously eliminating self-interference (SI) at the desired receiver. To tackle the coupling among multiple variables, we propose an alternating optimization algorithm that solves the problem iteratively. Furthermore, we handle the non-convexity of the problem via the successive convex approximation (SCA) scheme for the beamforming optimization, the amplitude and phase shift optimization of the ES-RIS, and the phase shift optimization of the MS-RIS. In addition, we adopt semi-definite relaxation (SDR) with a Gaussian randomization process to overcome the difficulty introduced by the binary nature of the MS-RIS mode optimization. Simulation results validate the performance of the proposed schemes, as well as the efficacy of adopting both types of STAR-RIS for enhancing secure communications, when compared with traditional self-interference cancellation technology.
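For concreteness, the sketch below illustrates the Gaussian randomization step that typically follows SDR when a binary decision (here, the MS-RIS transmit/reflect mode of each element) must be recovered from a relaxed solution. This is a generic illustration under stated assumptions, not the paper's exact procedure; `V` stands in for the PSD matrix returned by the relaxed SDP, and `secrecy_rate` is a placeholder for the true objective.

```python
import numpy as np

# Hypothetical sketch of SDR + Gaussian randomization for the binary
# mode selection of the MS-RIS. `V` is assumed to be the positive
# semidefinite matrix from the relaxed SDP; `secrecy_rate` is a
# stand-in for evaluating the original (non-relaxed) objective.

def gaussian_randomization(V, secrecy_rate, num_samples=1000, seed=None):
    """Draw samples from N(0, V), round each to a binary mode vector
    (1 = transmit, 0 = reflect), and keep the best-scoring one."""
    rng = np.random.default_rng(seed)
    n = V.shape[0]
    samples = rng.multivariate_normal(np.zeros(n), V, size=num_samples)
    best_mode, best_rate = None, -np.inf
    for xi in samples:
        mode = (xi >= 0).astype(int)      # project onto {0, 1}^n
        rate = secrecy_rate(mode)         # evaluate original objective
        if rate > best_rate:
            best_mode, best_rate = mode, rate
    return best_mode, best_rate

# Toy usage with a random PSD matrix and a dummy objective.
A = np.random.randn(8, 8)
V = A @ A.T
mode, rate = gaussian_randomization(V, secrecy_rate=lambda m: float(m.sum()))
print(mode, rate)
```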

Related Content

Existing distributed denial of service (DDoS) attack solutions cannot handle highly aggregated data rates; thus, they are unsuitable for Internet service provider (ISP) core networks. This article proposes a digital twin-enabled intelligent DDoS detection mechanism that uses an online learning method for autonomous systems. Our contributions are three-fold: first, we design a DDoS detection architecture based on a digital twin for ISP core networks; second, we implement a Yet Another Next Generation (YANG) model and an automated feature selection (AutoFS) module to handle core network data; third, we use an online learning approach to update the model instantly and efficiently, improve the learning model quickly, and ensure accurate predictions. Finally, we show that our proposed solution successfully detects DDoS attacks and updates the feature selection method and learning model with a true classification rate of 97%. Our proposed solution can detect an attack within approximately 15 minutes after the DDoS attack starts.
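To make the online-learning idea concrete, here is a minimal sketch, assuming a plain SGD-trained logistic-regression detector over per-flow features; the feature names, update rule, and thresholds are illustrative assumptions, not the article's actual design.

```python
import numpy as np

# Minimal sketch of online DDoS detection: a logistic-regression
# classifier updated one flow record at a time, so the model adapts
# as soon as new labeled traffic arrives.

class OnlineLogisticDetector:
    def __init__(self, n_features, lr=0.05):
        self.w = np.zeros(n_features)
        self.b = 0.0
        self.lr = lr

    def predict_proba(self, x):
        return 1.0 / (1.0 + np.exp(-(self.w @ x + self.b)))

    def learn_one(self, x, y):
        """Single SGD step on the logistic loss for one labeled sample."""
        err = self.predict_proba(x) - y
        self.w -= self.lr * err * x
        self.b -= self.lr * err

# Toy stream: 3 assumed features per flow (e.g., packet rate, entropy, flow count).
rng = np.random.default_rng(0)
det = OnlineLogisticDetector(n_features=3)
for _ in range(1000):
    attack = rng.random() < 0.3
    x = rng.normal(loc=2.0 if attack else 0.0, size=3)
    det.learn_one(x, float(attack))       # update immediately on arrival
print("P(attack | suspicious flow):", det.predict_proba(np.array([2.0, 2.0, 2.0])))
```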

Biometric verification systems are deployed in various security-based access-control applications that require user-friendly and reliable person verification. Among the different biometric characteristics, fingervein biometrics has been extensively studied owing to its reliable verification performance. Furthermore, fingervein patterns reside inside the skin and are not visible outside; therefore, they possess inherent resistance to presentation attacks and to degradation from external factors. In this paper, we introduce a novel fingervein verification technique using a convolutional multihead attention network called VeinAtnNet. The proposed VeinAtnNet is designed to be lightweight, with a small number of learnable parameters, while extracting discriminant information from both normal and enhanced fingervein images. The proposed VeinAtnNet was trained on a newly constructed fingervein dataset with 300 unique fingervein patterns captured in multiple sessions to obtain 92 samples per unique fingervein. Extensive experiments were performed on the newly collected FV-300 dataset and the publicly available FV-USM and FV-PolyU fingervein datasets. The performance of the proposed method was compared with that of five state-of-the-art fingervein verification systems, indicating the efficacy of the proposed VeinAtnNet.
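As a rough illustration of the "convolutional multihead attention" ingredient (not the authors' architecture), the sketch below combines a small convolutional stem with PyTorch's built-in multi-head self-attention to produce a verification embedding; all layer sizes are assumptions.

```python
import torch
import torch.nn as nn

# Illustrative conv + multi-head attention block in the spirit of
# VeinAtnNet: a conv stem extracts local vein features, self-attention
# mixes them globally, and a linear head yields a compact embedding.

class ConvAttnBlock(nn.Module):
    def __init__(self, channels=32, heads=4, embed_dim=128):
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv2d(1, channels, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)
        self.head = nn.Linear(channels, embed_dim)   # verification embedding

    def forward(self, x):                 # x: (B, 1, H, W) grayscale vein image
        f = self.stem(x)                  # (B, C, H/4, W/4)
        b, c, h, w = f.shape
        tokens = f.flatten(2).transpose(1, 2)        # (B, h*w, C) token grid
        attn_out, _ = self.attn(tokens, tokens, tokens)
        return self.head(attn_out.mean(dim=1))       # pooled embedding

emb = ConvAttnBlock()(torch.randn(2, 1, 64, 128))    # two dummy vein images
print(emb.shape)                                     # torch.Size([2, 128])
```

Verification would then compare embeddings of a probe and an enrolled sample, e.g. by cosine similarity against a threshold.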

As a new technology that reconfigures the wireless communication environment through software-controlled signal reflection, the intelligent reflecting surface (IRS) has attracted considerable attention in recent years. Compared with a conventional relay system, an IRS-aided relay system can effectively reduce cost and energy consumption while significantly enhancing system performance. However, the phase quantization error introduced by an IRS with discrete phase shifters may degrade the receiving performance of the receiver. To analyze the performance loss caused by the IRS phase quantization error, closed-form expressions for the signal-to-noise ratio (SNR) performance loss and the achievable rate of the IRS-aided amplify-and-forward (AF) relay network, which depend on the number of phase-shifter quantization bits, are derived under line-of-sight (LoS) channels and Rayleigh channels, respectively, based on the law of large numbers and the Rayleigh distribution. Moreover, approximate closed-form expressions for these performance losses are also derived based on Taylor series expansion. Simulation results show that the SNR and achievable-rate losses decrease gradually as the number of quantization bits increases. When the number of quantization bits is greater than or equal to 3, the SNR performance loss of the system is smaller than 0.23 dB and the achievable-rate loss is less than 0.04 bits/s/Hz, under both LoS and Rayleigh channels.
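A quick numerical sanity check reproduces the reported order of magnitude. If the per-element phase error of a b-bit shifter is modeled as uniform on [-π/2^b, π/2^b] and many elements combine coherently (a generic law-of-large-numbers argument, not the paper's exact derivation), the average SNR loss behaves like -20·log10(E[cos δ]):

```python
import numpy as np

# Monte Carlo check: uniform phase error on [-pi/2^b, pi/2^b] and
# SNR loss of roughly -20*log10(E[cos(delta)]) for a large IRS.
for b in range(1, 5):
    delta_max = np.pi / (2 ** b)
    delta = np.random.uniform(-delta_max, delta_max, size=1_000_000)
    loss_db = -20 * np.log10(np.mean(np.cos(delta)))
    print(f"b = {b}: SNR loss ~ {loss_db:.3f} dB")
# For b = 3 this gives about 0.22-0.23 dB, consistent with the abstract.
```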

Model predictive control (MPC) may provide local motion planning for mobile robotic platforms. The challenging aspect is the analytic representation of the collision cost when both the obstacle map and the robot footprint are arbitrary. We propose the Neural Potential Field: a neural network model that returns a differentiable collision cost based on the robot pose, the obstacle map, and the robot footprint. The differentiability of our model allows its use within an MPC solver. Because problems with a very large number of parameters are computationally hard to solve, our architecture includes neural image encoders that transform obstacle maps and robot footprints into embeddings, reducing the problem dimensionality by two orders of magnitude. The reference data for network training are generated by algorithmic calculation of a signed distance function. Comparative experiments showed that the proposed approach is comparable with existing local planners: it provides trajectories with superior smoothness, comparable path length, and a safe distance from obstacles. An experiment on a Husky UGV mobile robot showed that our approach allows real-time, safe local planning. The code for our approach is presented at //github.com/cog-isa/NPField together with a demo video.
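The sketch below shows the general shape of such a model, assuming two small CNN encoders and an MLP head; all layer sizes and names are illustrative assumptions, not the released architecture. The key property is that the cost is differentiable with respect to the pose, which is what an MPC solver needs.

```python
import torch
import torch.nn as nn

# Hedged sketch of the Neural Potential Field idea: encoders compress
# the obstacle map and robot footprint into embeddings, and an MLP maps
# (pose, map embedding, footprint embedding) to a scalar collision cost.

class NeuralPotentialField(nn.Module):
    def __init__(self, emb=64):
        super().__init__()
        def encoder():
            return nn.Sequential(
                nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, emb))
        self.map_enc, self.fp_enc = encoder(), encoder()
        self.head = nn.Sequential(
            nn.Linear(3 + 2 * emb, 128), nn.ReLU(), nn.Linear(128, 1))

    def forward(self, pose, obstacle_map, footprint):
        z = torch.cat([pose, self.map_enc(obstacle_map), self.fp_enc(footprint)], -1)
        return self.head(z)               # scalar collision cost per pose

npf = NeuralPotentialField()
pose = torch.tensor([[1.0, 2.0, 0.5]], requires_grad=True)   # (x, y, yaw)
cost = npf(pose, torch.rand(1, 1, 64, 64), torch.rand(1, 1, 32, 32))
cost.backward()                           # pose gradient for the MPC solver
print(pose.grad)
```

Note that the embeddings are computed once per planning cycle, so only the three pose variables enter the solver's decision space.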

Industrial wireless sensor networks enable real-time data collection, analysis, and control by interconnecting diverse industrial devices. In these industrial settings, power outlets are not always available, and reliance on battery power can be impractical due to the need for frequent battery replacement or stringent safety regulations. Battery-less energy harvesters present a suitable alternative for powering these devices. However, these energy harvesters, equipped with supercapacitors instead of batteries, suffer from intermittent on-off behavior due to their limited energy storage capacity. As a result, they struggle with extended or frequent energy-consuming phases of multi-hop network formation, such as network joining and synchronization. To address these challenges, our work proposes three strategies for integrating battery-less energy harvesting devices into industrial multi-hop wireless sensor networks. In contrast to other works, ours prioritizes the mitigation of intermittency-related issues rather than focusing solely on average energy consumption, as is typically done for battery-powered devices. For each of the proposed strategies, we provide an in-depth discussion of its suitability based on several critical factors, including the type of energy source, storage capacity, device mobility, latency, and reliability.
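The intermittency problem itself is easy to see in a toy simulation. The sketch below, with entirely illustrative numbers, shows a supercapacitor-powered node that can only attempt an energy-hungry join/synchronization phase when its stored energy exceeds a turn-on threshold, and otherwise sits dark while harvesting:

```python
import random

# Toy model of supercapacitor intermittency: the node harvests a small,
# fluctuating amount of energy and turns off whenever the stored energy
# cannot cover the next network operation. All quantities are assumptions.
random.seed(0)
energy, cap = 0.0, 10.0            # stored energy and capacity (mJ)
JOIN_COST, ON_THRESHOLD = 6.0, 7.0 # cost of a join/sync attempt, turn-on level

for t in range(20):
    energy = min(cap, energy + random.uniform(0.0, 2.0))   # harvest
    if energy >= ON_THRESHOLD:
        energy -= JOIN_COST                                 # attempt joining
        state = "on: join/sync attempt"
    else:
        state = "off: accumulating energy"
    print(f"t={t:02d} E={energy:4.1f} mJ  {state}")
```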

To address privacy concerns and reduce network latency, there has been a recent trend of compressing cumbersome recommendation models trained on the cloud and deploying compact recommender models on resource-limited devices for real-time recommendation. Existing solutions generally overlook device heterogeneity and user heterogeneity: they either require all devices to share the same compressed model or require devices with the same resource budget to share the same model. However, even users with the same devices may have different preferences. In addition, they assume that the resources (e.g., memory) available to the recommender on a device are constant, which does not reflect reality. In light of device and user heterogeneity as well as dynamic resource constraints, this paper proposes a Personalized Elastic Embedding Learning framework (PEEL) for on-device recommendation, which generates personalized embeddings for devices with various memory budgets in a once-for-all manner, efficiently adapts to new or dynamic budgets, and effectively addresses user preference diversity by assigning personalized embeddings to different groups of users. Specifically, PEEL pretrains on user-item interaction instances to generate the global embedding table and cluster users into groups. It then refines the embedding tables with local interaction instances within each group. A personalized elastic embedding is generated from the group-wise embedding blocks and their weights, which indicate the contribution of each embedding block to the local recommendation performance. PEEL efficiently generates personalized elastic embeddings by selecting the embedding blocks with the largest weights, making it adaptable to dynamic memory budgets. Extensive experiments are conducted on two public datasets, and the results show that PEEL yields superior performance on devices with heterogeneous and dynamic memory budgets.
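The budget-aware selection step lends itself to a very short sketch. The greedy rule, weight values, and block sizes below are illustrative assumptions; the point is only that picking blocks by descending weight lets the embedding shrink or grow as the device's memory budget changes:

```python
# Minimal sketch of weight-based block selection under a memory budget.

def select_blocks(weights, block_bytes, budget_bytes):
    """Greedily keep embedding blocks by descending weight while they fit."""
    order = sorted(range(len(weights)), key=lambda i: weights[i], reverse=True)
    chosen, used = [], 0
    for i in order:
        if used + block_bytes[i] <= budget_bytes:
            chosen.append(i)
            used += block_bytes[i]
    return sorted(chosen)

weights = [0.9, 0.1, 0.7, 0.4]          # learned contribution of each block
block_bytes = [4096, 4096, 4096, 4096]  # assumed uniform block size
print(select_blocks(weights, block_bytes, budget_bytes=8192))   # -> [0, 2]
```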

Unmanned aerial vehicles (UAVs) can provide wireless access to terrestrial users regardless of geographical constraints and will be an important part of future communication systems. In this paper, a multi-user downlink dual-UAV-enabled covert communication system is investigated, in which one UAV transmits secure information to ground users in the presence of multiple wardens while a friendly jammer UAV transmits artificial jamming signals to confuse the wardens. The scenario in which the wardens are outfitted with a single antenna is considered first, and the detection error probability (DEP) of wardens with finite observations is analyzed. Then, considering the uncertainty of the wardens' locations, a robust optimization problem with a worst-case covertness constraint is formulated to maximize the average covert rate by jointly optimizing the power allocation and the trajectory. To cope with the optimization problem, an algorithm based on successive convex approximation methods is proposed. Thereafter, the results are extended to the case where all the wardens are equipped with multiple antennas. After analyzing the DEP in this scenario, a tractable lower bound on the DEP is obtained by utilizing Pinsker's inequality. Subsequently, the resulting non-convex optimization problem is formulated and efficiently solved by utilizing an algorithm similar to that of the single-antenna scenario. Numerical results indicate the effectiveness of our proposed algorithm.
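For reference, the Pinsker-based bound mentioned above is standard in covert communication analyses; a sketch of the usual form (assuming the paper follows the common convention) is:

```latex
% P_0 and P_1 are the warden's observation distributions without and with
% a covert transmission, \mathcal{V}_T is total variation distance, and
% \xi^{*} is the minimum detection error probability (sum of false-alarm
% and missed-detection probabilities) of an optimal detector.
\xi^{*} \;=\; 1 - \mathcal{V}_T(P_0, P_1)
        \;\ge\; 1 - \sqrt{\tfrac{1}{2}\, D(P_0 \,\|\, P_1)}
```

The right-hand side is tractable because the Kullback-Leibler divergence D(P_0 || P_1) often admits a closed form, whereas the total variation distance does not.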

Cross-Domain Recommendation (CDR) stands as a pivotal technology for addressing data sparsity and cold-start issues by transferring general knowledge from a source domain to a target domain. However, existing CDR models suffer from limited adaptability across various scenarios due to their inherent complexity. To tackle this challenge, recent advancements introduce universal CDR models that leverage shared embeddings to capture general knowledge across domains and transfer it through "Multi-task Learning" or "Pre-train, Fine-tune" paradigms. However, these models often overlook the broader structural topology that spans domains and fail to align training objectives, potentially leading to negative transfer. To address these issues, we propose MOP, a motif-based prompt learning framework that introduces motif-based shared embeddings to encapsulate generalized domain knowledge, catering to both intra-domain and inter-domain CDR tasks. Specifically, we devise three typical motifs: butterfly, triangle, and random walk, and encode them through a motif-based encoder to obtain motif-based shared embeddings. Moreover, we train MOP under the "Pre-training & Prompt Tuning" paradigm. By unifying the pre-training and recommendation tasks as a common motif-based similarity learning task, and by integrating adaptable prompt parameters to guide the model in downstream recommendation tasks, MOP excels at transferring domain knowledge effectively. Experimental results on four distinct CDR tasks demonstrate that MOP outperforms state-of-the-art models.
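As a toy illustration of what motif statistics look like (not MOP's actual encoder input), the snippet below extracts triangle participation counts and a short random walk from a small graph with networkx; butterfly motifs, which live on bipartite user-item graphs, are omitted here:

```python
import random
import networkx as nx

# Per-node triangle counts and a random-walk motif on a toy graph.
G = nx.Graph()
G.add_edges_from([(0, 1), (1, 2), (0, 2),     # one triangle motif
                  (2, 3), (3, 4), (4, 2)])    # a second triangle
print(nx.triangles(G))                        # node -> #triangles it joins
                                              # {0: 1, 1: 1, 2: 2, 3: 1, 4: 1}
random.seed(1)
walk = [0]
for _ in range(5):                            # a short random-walk motif
    walk.append(random.choice(list(G.neighbors(walk[-1]))))
print(walk)
```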

The vast amount of data generated by networks of sensors, wearables, and Internet of Things (IoT) devices underscores the need for advanced modeling techniques that leverage the spatio-temporal structure of decentralized data, a need driven by edge computation and licensing (data access) issues. While federated learning (FL) has emerged as a framework for model training without direct data sharing and exchange, effectively modeling the complex spatio-temporal dependencies to improve forecasting capabilities remains an open problem. On the other hand, state-of-the-art spatio-temporal forecasting models assume unfettered access to the data, neglecting constraints on data sharing. To bridge this gap, we propose a federated spatio-temporal model, the Cross-Node Federated Graph Neural Network (CNFGNN), which explicitly encodes the underlying graph structure using a graph neural network (GNN)-based architecture under the constraint of cross-node federated learning, which requires that data in a network of nodes be generated locally on each node and remain decentralized. CNFGNN operates by disentangling the temporal dynamics modeling on the devices from the spatial dynamics modeling on the server, utilizing alternating optimization to reduce the communication cost and facilitate computation on the edge devices. Experiments on the traffic flow forecasting task show that CNFGNN achieves the best forecasting performance in both transductive and inductive learning settings with no extra computation cost on edge devices, while incurring a modest communication cost.
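A schematic sketch of the device/server split may help; the shapes, the GRU temporal encoder, and the single normalized-adjacency propagation step below are assumptions chosen for brevity, not CNFGNN's exact architecture. The point is that only hidden states, never raw series, leave each node:

```python
import torch
import torch.nn as nn

# On-device temporal encoders (private) + server-side spatial step (shared).
n_nodes, t_len, hidden = 4, 24, 16
device_encoders = [nn.GRU(1, hidden, batch_first=True) for _ in range(n_nodes)]

# Device step: each node encodes its own local time series.
local_series = torch.randn(n_nodes, t_len, 1)
h = torch.stack([device_encoders[i](local_series[i:i+1])[1][0, 0]
                 for i in range(n_nodes)])          # (n_nodes, hidden)

# Server step: propagate the uploaded hidden states over the sensor graph.
A = torch.tensor([[0, 1, 1, 0], [1, 0, 1, 0],
                  [1, 1, 0, 1], [0, 0, 1, 0]], dtype=torch.float32)
A_hat = A + torch.eye(n_nodes)
A_hat = A_hat / A_hat.sum(dim=1, keepdim=True)      # row-normalized adjacency
server_gnn = nn.Linear(hidden, hidden)
h_spatial = torch.relu(server_gnn(A_hat @ h))       # (n_nodes, hidden)
print(h_spatial.shape)
```

In the full method, the spatially smoothed states would be sent back to the devices for forecasting, with the two sides trained by alternating optimization.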

The prevalence of networked sensors and actuators in many real-world systems, such as smart buildings, factories, power plants, and data centers, generates substantial amounts of multivariate time series data for these systems. The rich sensor data can be continuously monitored for intrusion events through anomaly detection. However, conventional threshold-based anomaly detection methods are inadequate due to the dynamic complexities of these systems, while supervised machine learning methods are unable to exploit the large amounts of data due to the lack of labels. On the other hand, current unsupervised machine learning approaches have not fully exploited the spatial-temporal correlations and other dependencies among the multiple variables (sensors/actuators) in the system for detecting anomalies. In this work, we propose an unsupervised multivariate anomaly detection method based on Generative Adversarial Networks (GANs). Instead of treating each data stream independently, our proposed MAD-GAN framework considers the entire variable set concurrently to capture the latent interactions among the variables. We also fully exploit both the generator and the discriminator produced by the GAN, using a novel anomaly score called the DR-score to detect anomalies through discrimination and reconstruction. We tested our proposed MAD-GAN on two recent datasets collected from real-world cyber-physical systems (CPS): the Secure Water Treatment (SWaT) and the Water Distribution (WADI) datasets. Our experimental results show that the proposed MAD-GAN is effective in reporting anomalies caused by various cyber-intrusions in these complex real-world systems.
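A minimal sketch of a discrimination-plus-reconstruction score in this spirit is shown below. The tiny stand-in models, the latent-inversion loop, and the blending weight `lam` are all illustrative assumptions rather than MAD-GAN's published formulation:

```python
import torch
import torch.nn as nn

class G(nn.Module):                       # toy generator: latent -> window
    latent_dim = 8
    def __init__(self, win=16):
        super().__init__()
        self.net = nn.Linear(self.latent_dim, win)
    def forward(self, z):
        return self.net(z)

class D(nn.Module):                       # toy discriminator: window -> [0, 1]
    def __init__(self, win=16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(win, 1), nn.Sigmoid())
    def forward(self, x):
        return self.net(x)

def dr_score(x, gen, disc, lam=0.5, steps=100, lr=0.05):
    """Invert the generator to best reconstruct window x, then blend
    reconstruction error with (1 - discriminator score); higher = more anomalous."""
    z = torch.zeros(1, gen.latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):                # search latent space
        opt.zero_grad()
        loss = torch.mean((gen(z) - x) ** 2)
        loss.backward()
        opt.step()
    recon = torch.mean((gen(z) - x) ** 2).item()
    return lam * recon + (1 - lam) * (1 - disc(x).item())

x = torch.randn(1, 16)                    # stand-in for a multivariate window
print(dr_score(x, G(), D()))
```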
