
Radio-frequency (RF) energy harvesting (EH) in wireless relaying networks has attracted considerable recent interest, especially for supplying energy to relay nodes in Internet-of-Things (IoT) systems to assist the information exchange between a source and a destination. Moreover, the limited hardware, computational resources, and energy availability of IoT devices raise various security challenges. To this end, physical layer security (PLS) has been proposed as an effective alternative to cryptographic methods for providing information security. In this study, we propose a PLS approach for simultaneous wireless information and power transfer (SWIPT)-based half-duplex (HD) amplify-and-forward (AF) relaying systems in the presence of an eavesdropper. Furthermore, we consider both static power splitting relaying (SPSR) and dynamic power splitting relaying (DPSR) to thoroughly investigate the benefits of each. To further enhance secure communication, we employ multiple friendly jammers to help prevent wiretapping attacks by the eavesdropper. More specifically, we provide a reliability and security analysis by deriving closed-form expressions for the outage probability (OP) and intercept probability (IP), respectively, for both the SPSR and DPSR schemes. Simulations are then performed to validate our analysis and the effectiveness of the proposed schemes. Specifically, numerical results illustrate the non-trivial trade-off between the reliability and security of the proposed system. In addition, we conclude from the simulation results that the proposed DPSR scheme outperforms the SPSR-based scheme in terms of OP and IP under the influence of various system parameters.
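To make the reliability/security trade-off concrete, the following Python sketch estimates OP and IP for the SPSR scheme by Monte Carlo simulation. The end-to-end AF SNR model, unit-mean Rayleigh fading, and all parameter values (P_s, P_j, eta, rho, R_th, K) are illustrative assumptions rather than the paper's exact setup.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 10**6                # Monte Carlo trials
P_s, P_j = 1.0, 0.1      # source and per-jammer transmit power (assumed)
eta, rho = 0.7, 0.5      # EH efficiency and static power-splitting ratio
R_th = 1.0               # target rate threshold (bits/s/Hz)
K = 2                    # number of friendly jammers

# Unit-mean Rayleigh fading -> exponential channel power gains (assumed)
g_sr = rng.exponential(1.0, N)                    # source -> relay
g_rd = rng.exponential(1.0, N)                    # relay -> destination
g_re = rng.exponential(1.0, N)                    # relay -> eavesdropper
g_je = rng.exponential(1.0, (K, N)).sum(axis=0)   # jammers -> eavesdropper

P_r = eta * rho * P_s * g_sr              # relay power harvested under SPSR
snr_r = (1 - rho) * P_s * g_sr            # first-hop SNR (unit noise power)
snr_d = snr_r * P_r * g_rd / (snr_r + P_r * g_rd + 1)   # end-to-end AF SNR
snr_e = P_r * g_re / (P_j * g_je + 1)     # eavesdropper SINR under jamming

gamma = 2 ** (2 * R_th) - 1               # HD relaying spends two time slots
print(f"OP ~ {np.mean(snr_d < gamma):.4f}, IP ~ {np.mean(snr_e > gamma):.4f}")
```

Sweeping rho in this sketch reproduces the qualitative trade-off: a larger splitting ratio boosts relay power (better OP) but also strengthens the signal the eavesdropper overhears (worse IP).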

Related Content

In the modern digital world, a user of a smart system is surrounded by, and observed by, a number of tiny IoT devices around the clock, almost everywhere. Unfortunately, the ability of these devices to sense and share various physical parameters, although key to these smart systems, also threatens to breach the privacy of their users. Existing solutions for privacy-preserving computation in decentralized systems either use overly complex cryptographic techniques or rely on an extremely high degree of message passing and are therefore unsuitable for the resource-constrained IoT devices that constitute a significant fraction of a smart system. In this work, we propose LiPI, a novel lightweight strategy for Privacy-Preserving Data Aggregation in low-power IoT systems. The strategy is based on decentralized, collaborative data obfuscation and has no dependency on any trusted third party. In addition, besides minimizing the communication requirements, our design makes appropriate use of recent advances in Synchronous-Transmission (ST)-based protocols to accomplish this goal efficiently. Extensive evaluation based on comprehensive experiments in both simulation platforms and publicly available WSN/IoT testbeds demonstrates that our strategy works up to 51.7% faster and consumes 50.5% less energy than existing state-of-the-art strategies.
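The abstract does not spell out LiPI's obfuscation protocol, so the sketch below only illustrates the general idea of decentralized, collaborative obfuscation without a trusted third party: nodes add pairwise masks, derived from shared seeds, that cancel in the aggregate. All names and values are hypothetical.

```python
import numpy as np

def obfuscate(values, pair_seeds):
    """Each node adds pairwise masks that cancel in the aggregate.

    values[i]       : private reading of node i
    pair_seeds[i][j]: seed shared by nodes i and j (i < j), assumed to be
                      established once, e.g. during network bootstrapping
    """
    n = len(values)
    masked = np.array(values, dtype=float)
    for i in range(n):
        for j in range(i + 1, n):
            mask = np.random.default_rng(pair_seeds[i][j]).normal()
            masked[i] += mask   # node i adds the shared mask
            masked[j] -= mask   # node j subtracts it, so the sum is unchanged
    return masked

readings = [21.5, 19.8, 22.1, 20.4]            # private sensor values
seeds = {i: {j: 1000 * i + j for j in range(4)} for i in range(4)}
masked = obfuscate(readings, seeds)
print(masked)                        # individual values are obfuscated
print(masked.sum(), sum(readings))   # but the aggregate is preserved
```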

Rate Splitting Multiple Access (RSMA) has emerged as an effective interference management scheme for applications that require high data rates. Although RSMA has shown advantages in rate enhancement and spectral efficiency, it is not yet ready for latency-sensitive applications such as virtual reality streaming, an essential building block of future 6G networks. Unlike conventional high-definition streaming, virtual reality streaming imposes not only stringent latency requirements but also demands on the computation capability of the transmitter to respond quickly to dynamic users' demands. Thus, conventional RSMA approaches usually fail to address the challenges caused by computational demands at the transmitter, let alone the dynamic nature of virtual reality streaming applications. To overcome these challenges, we first formulate the RSMA-assisted virtual reality streaming problem as a joint communication and computation optimization problem. A novel multicast approach is then proposed to cluster users into groups based on a Field-of-View metric and transmit multicast streams in a hierarchical manner. After that, we propose a deep reinforcement learning approach to obtain a solution to the optimization problem. Extensive simulations show that our framework meets the millisecond latency requirement and achieves much lower latency than baseline schemes.
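A minimal sketch of the FoV-based grouping step, under assumptions: users are described by the sets of FoV tiles they request, overlap is measured with a Jaccard score, and groups are formed greedily. The paper's actual metric and clustering rule may differ.

```python
def fov_overlap(a, b):
    """Jaccard overlap between two users' sets of requested FoV tiles."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

def cluster_by_fov(user_tiles, threshold=0.5):
    """Greedily group users whose FoV overlap exceeds a threshold.

    Users in one group can then be served by a shared multicast stream,
    with per-group streams sent hierarchically (common tiles first).
    """
    groups = []
    for uid, tiles in user_tiles.items():
        for g in groups:
            rep = g[0]  # compare against the group's first member
            if fov_overlap(tiles, user_tiles[rep]) >= threshold:
                g.append(uid)
                break
        else:
            groups.append([uid])
    return groups

# Tile indices each user currently looks at (illustrative values)
demo = {0: [1, 2, 3, 4], 1: [2, 3, 4, 5], 2: [10, 11, 12], 3: [11, 12, 13]}
print(cluster_by_fov(demo))   # -> [[0, 1], [2, 3]]
```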

Reconfigurable intelligent surface (RIS)-aided terahertz (THz) communications are regarded as a promising candidate for future 6G networks because of their ultra-wide bandwidth and ultra-low power consumption. However, a beam split problem arises, especially when the base station (BS) or RIS employs a large-scale antenna array, which can lead to serious array gain loss. Therefore, in this paper, we investigate the beam split and beamforming design problems in THz RIS communications. Specifically, we first analyze the beam split effect caused by different RIS sizes, shapes, and deployments. On this basis, we apply a fully connected time-delayer phase-shifter hybrid beamforming architecture at the BS and deploy distributed RISs to cooperatively mitigate the beam split effect. We aim to maximize the achievable sum rate by jointly optimizing the hybrid analog/digital beamforming and time delays at the BS and the reflection coefficients at the RISs. To solve the formulated problem, we first design the analog beamforming and time delays based on the physical directions of the different RISs, which reduces the problem to the joint optimization of the digital beamforming and reflection coefficients. We then propose an alternating iterative optimization algorithm to solve it: for given reflection coefficients, we obtain the digital beamforming with an iterative algorithm based on the minimum mean square error (MMSE) technique; afterwards, we apply the LDR and MCQT methods to transform the original problem into a quadratically constrained quadratic program (QCQP), which is solved with the ADMM technique to obtain the reflection coefficients. Finally, the digital beamforming and reflection coefficients are obtained by repeating the above steps until convergence. Simulation results verify that the proposed scheme can effectively alleviate the beam split effect and improve the system capacity.
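The beam split effect itself is easy to reproduce numerically. The sketch below compares the per-subcarrier array gain of a uniform linear array whose phase shifters are configured only for the carrier frequency against one aided by true time delays, for assumed values of the array size, carrier, bandwidth, and steering direction.

```python
import numpy as np

N = 256                       # BS antennas (illustrative)
fc = 300e9                    # carrier frequency: 300 GHz
B = 30e9                      # bandwidth: 30 GHz
theta = np.deg2rad(45)        # target physical direction

f = np.linspace(fc - B / 2, fc + B / 2, 101)   # subcarrier frequencies
n = np.arange(N)

# Phase shifters are set for fc only, so subcarriers at f != fc see a
# residual phase error that grows with N -- the "beam split" gain loss.
resid = np.pi * np.outer(f / fc - 1, n) * np.sin(theta)
gain_ps = np.abs(np.exp(1j * resid).sum(axis=1)) / N

# True time delays (as in a time-delayer phase-shifter hybrid architecture)
# scale with frequency, cancelling the error at every subcarrier.
gain_ttd = np.ones_like(f)

print(f"edge-subcarrier gain, phase shifters only: {gain_ps[0]:.3f}")
print(f"edge-subcarrier gain, with time delays:    {gain_ttd[0]:.3f}")
```

With these numbers the phase-shifter-only array retains under a tenth of its gain at the band edge, which is why the paper pairs time delays at the BS with distributed RISs.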

The fifth generation (5G) of wireless communication networks is required to support a range of use cases, such as enhanced mobile broadband (eMBB), ultra-reliable low-latency communications (URLLC), and massive machine-type communications (mMTC), with heterogeneous data rate, delay, and power requirements. The 4G LTE air interface uses extra overhead to enable scheduled access, which is not justified for small payload sizes. We employ a random access communication model with retransmissions for multiple users with small payloads in the low spectral efficiency regime. The radio resources are split non-orthogonally in the time and frequency dimensions. Retransmissions are combined via Hybrid Automatic Repeat reQuest (HARQ) methods, namely Chase Combining and Incremental Redundancy, with a finite buffer size constraint $C_{\sf buf}$. We determine the best scaling of the spectral efficiency (SE) versus signal-to-noise ratio (SNR) per bit, and of the user density versus SNR per bit, for the sum-optimal regime and when interference is treated as noise, using a Shannon capacity approximation. Numerical results show that the scaling results are applicable over a range of $\eta$, $T$, $C_{\sf buf}$, and $J$ at low received SNR values. The proposed analytical framework provides insights for resource allocation in general random access systems and in specific 5G use cases for massive URLLC uplink access.
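The two HARQ combining rules can be illustrated with the same Shannon capacity approximation used in the analysis. The sketch below counts the retransmission rounds needed to decode under each rule; it ignores interference, the buffer constraint $C_{\sf buf}$, and the density scaling, and its parameter values are illustrative.

```python
import numpy as np

def rounds_to_decode(snr, rate, T, scheme="chase"):
    """Retransmission rounds needed under a Shannon-capacity approximation.

    chase: received SNRs add coherently (maximal-ratio combining),
           decode once log2(1 + t * snr) >= rate.
    ir   : mutual information accumulates across rounds,
           decode once t * log2(1 + snr) >= rate.
    T is the maximum number of rounds before an outage is declared.
    """
    for t in range(1, T + 1):
        acc = np.log2(1 + t * snr) if scheme == "chase" else t * np.log2(1 + snr)
        if acc >= rate:
            return t
    return None   # outage after T rounds

snr, rate, T = 0.2, 1.0, 8    # low received SNR regime (illustrative)
print("Chase Combining:", rounds_to_decode(snr, rate, T, "chase"))   # 5
print("Incremental Redundancy:", rounds_to_decode(snr, rate, T, "ir"))  # 4
```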

Designing effective routing strategies for mobile wireless networks is challenging due to the need to seamlessly adapt routing behavior to spatially diverse and temporally changing network conditions. In this work, we use deep reinforcement learning (DeepRL) to learn a scalable and generalizable single-copy routing strategy for such networks. We make the following contributions: i) we design a reward function that enables the DeepRL agent to explicitly trade off competing network goals, such as minimizing delay vs. the number of transmissions per packet; ii) we propose a novel set of relational neighborhood, path, and context features to characterize mobile wireless networks and model device mobility independently of a specific network topology; and iii) we use a flexible training approach that allows us to combine data from all packets and devices into a single offline centralized training set to train a single DeepRL agent. To evaluate generalizability and scalability, we train our DeepRL agent on one mobile network scenario and then test it on other mobile scenarios, varying the number of devices and transmission ranges. Our results show that our learned single-copy routing strategy outperforms all strategies except the optimal one in terms of delay, even on scenarios on which the DeepRL agent was not trained.
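As a sketch of contribution i), a reward of the following shape lets the agent trade off delay against transmissions per packet. The functional form, weights, and drop penalty are assumptions for illustration, not the paper's definition.

```python
def routing_reward(delivered, delay, transmissions, alpha=1.0, beta=0.1):
    """Per-packet reward trading off delay against transmission count.

    alpha and beta weight the competing objectives; tuning them shifts the
    learned routing strategy between delay-optimal and energy-frugal.
    """
    if not delivered:
        return -100.0            # large penalty for a dropped packet
    return -(alpha * delay + beta * transmissions)

# A route that delivers in 5 hops / 12 time units beats one that delivers
# in 3 hops / 40 time units when delay is weighted heavily:
print(routing_reward(True, delay=12, transmissions=5))   # -12.5
print(routing_reward(True, delay=40, transmissions=3))   # -40.3
```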

The capacity sharing problem in Radio Access Network (RAN) slicing deals with the distribution of the capacity available in each RAN node among various RAN slices to satisfy their traffic demands and efficiently use the radio resources. While several capacity sharing algorithmic solutions have been proposed in the literature, their practical implementation remains a gap. In this paper, the implementation of a Reinforcement Learning-based capacity sharing algorithm over the O-RAN architecture is discussed, providing insights into the operation of the involved interfaces and the containerization of the solution. Moreover, the testbed implemented to validate the solution is described, and some performance and validation results are presented.
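For orientation, the capacity sharing problem in the first sentence can be stated as a one-line baseline: split each node's resource budget among slices in proportion to demand. The paper's RL-based algorithm learns this split instead; the sketch below is only the static baseline it would be compared against, with hypothetical slice names and PRB counts.

```python
def share_capacity(capacity_prbs, demands):
    """Split a cell's PRB budget among slices in proportion to demand.

    A static proportional baseline; an RL agent would instead adapt the
    split over time from observed slice-level performance.
    """
    total = sum(demands.values())
    if total <= capacity_prbs:
        return dict(demands)               # every slice is satisfied
    return {s: capacity_prbs * d / total for s, d in demands.items()}

print(share_capacity(100, {"eMBB": 80, "URLLC": 30, "mMTC": 10}))
```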

In this paper, we consider multiple solar-powered wireless nodes that utilize harvested solar energy to transmit collected data to multiple unmanned aerial vehicles (UAVs) in the uplink. In this context, we jointly design the UAV flight trajectories, UAV-node communication associations, and uplink power control to effectively utilize the harvested energy and manage co-channel interference within a finite time horizon. To ensure fairness among the wireless nodes, the design goal is to maximize the worst user rate. The joint design problem is highly non-convex and requires non-causal (future) knowledge of the instantaneous energy state information (ESI) and channel state information (CSI), which is difficult to predict in reality. To overcome these challenges, we propose an offline method based on convex optimization that utilizes only the average ESI and CSI. The problem is solved via three convex subproblems with successive convex approximation (SCA) and alternating optimization. We further design an online convex-assisted reinforcement learning (CARL) method to improve the system performance based on real-time environmental information. We propose the idea of multi-UAV regulated flight corridors, based on the optimal offline UAV trajectories, to avoid unnecessary flight exploration by the UAVs, which improves the learning efficiency and system performance compared with a conventional reinforcement learning (RL) method. Computer simulations verify the effectiveness of the proposed methods: the CARL method provides 25% and 12% improvements in the worst user rate over the offline and conventional RL methods, respectively.
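A minimal sketch of the regulated-flight-corridor idea behind CARL: RL-proposed waypoints are projected back into a region around the offline convex trajectory. The box-shaped corridor and all coordinates are assumptions for illustration; the paper's corridor geometry may differ.

```python
import numpy as np

def clip_to_corridor(waypoint, corridor_center, half_width):
    """Project an RL-proposed UAV waypoint into its regulated flight corridor.

    The corridor is modeled as an axis-aligned box around the optimal
    offline trajectory. Restricting exploration this way avoids wasted
    flight and speeds up learning relative to unconstrained RL.
    """
    lo = corridor_center - half_width
    hi = corridor_center + half_width
    return np.clip(waypoint, lo, hi)

offline_wp = np.array([120.0, 80.0])     # from the offline convex solution
proposed = np.array([240.0, 75.0])       # raw RL action
print(clip_to_corridor(proposed, offline_wp, half_width=50.0))  # [170. 75.]
```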

The 5G radio access network (RAN) with its network slicing methodology plays a key role in the development of next-generation network systems. RAN slicing focuses on splitting the substrate's resources into a set of self-contained programmable RAN slices. Leveraging network function virtualization (NFV), a RAN slice is constituted by various virtual network functions (VNFs) and virtual links that are embedded as instances on substrate nodes. In this work, we focus on the following fundamental tasks: i) establishing the theoretical foundation for constructing a VNF mapping plan for RAN slice recovery optimization, and ii) developing the algorithms needed to map/embed VNFs efficiently. In particular, we propose four efficient algorithms, the Resource-based Algorithm (RBA), Connectivity-based Algorithm (CBA), Group-based Algorithm (GBA), and Group-Connectivity-based Algorithm (GCBA), to solve the resource allocation and VNF mapping problem. Extensive experiments are conducted to validate the robustness of RAN slicing via the proposed algorithms.
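The abstract does not detail the four algorithms, so the following sketch only conveys the flavor of a resource-based greedy mapping (loosely in the spirit of RBA): each VNF is placed on the substrate node with the most remaining capacity. The VNF names, demands, and capacities are hypothetical.

```python
def map_vnfs_rba(vnf_demands, node_capacities):
    """Greedy resource-based VNF mapping sketch.

    Places each VNF, largest demand first, on the substrate node with the
    most remaining capacity; the paper's exact rules (and its
    connectivity- and group-based variants) differ.
    """
    remaining = dict(node_capacities)
    mapping = {}
    for vnf, demand in sorted(vnf_demands.items(), key=lambda kv: -kv[1]):
        node = max(remaining, key=remaining.get)
        if remaining[node] < demand:
            raise ValueError(f"no node can host {vnf}")
        mapping[vnf] = node
        remaining[node] -= demand
    return mapping

print(map_vnfs_rba({"vCU": 4, "vDU": 3, "vRU": 1}, {"n1": 6, "n2": 5}))
```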

Advancements in semiconductor technology have reduced the dimensions and cost of chipsets while improving their performance and capacity. In addition, advances in AI frameworks and libraries make it possible to accommodate more AI at the resource-constrained edge in consumer IoT devices. Sensors are nowadays an integral part of our environment, providing continuous data streams for building intelligent applications; an example is a smart home with multiple interconnected devices. In such smart environments, for convenience and quick access to web-based services and personal information such as calendars, notes, emails, reminders, and banking, users link third-party skills, or skills from the Amazon store, to their smart speakers. Several smart home products, such as smart security cameras, video doorbells, smart plugs, smart carbon monoxide monitors, and smart door locks, are also interlinked with a modern smart speaker through custom skill addition. Since smart speakers are linked to such services and devices through the smart speaker user's account, they can be used by anyone with physical access to the smart speaker via voice commands, compromising the user's data privacy, home security, and more. The recently launched Tensor Cam's AI Camera, Toshiba's Symbio, and Facebook's Portal are camera-enabled smart speakers with AI functionalities; although they are camera-enabled, they have no authentication scheme beyond calling out the wake word. This paper provides an overview of the cybersecurity risks smart speaker users face due to the lack of an authentication scheme and discusses the development of a state-of-the-art camera-enabled, microphone-array-based modern Alexa smart speaker prototype to address these risks.
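A minimal sketch of the kind of camera-based gate such a prototype could add in front of sensitive skills, under assumptions: the face embeddings, the cosine-similarity check, and the 0.7 threshold are all hypothetical placeholders for an actual face-recognition pipeline, and the command list is illustrative.

```python
import numpy as np

SENSITIVE = {"unlock front door", "read my email", "transfer money"}

def cosine_sim(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def handle_voice_command(command, face_embedding, enrolled_embedding,
                         threshold=0.7):
    """Execute a command only after camera-based user verification.

    Wake-word detection alone (the status quo this paper criticizes)
    would skip this check entirely.
    """
    if command in SENSITIVE:
        if cosine_sim(face_embedding, enrolled_embedding) < threshold:
            return "Sorry, I can only do that for an enrolled user."
    return f"Executing: {command}"

rng = np.random.default_rng(2)
owner = rng.normal(size=128)
stranger = rng.normal(size=128)
print(handle_voice_command("unlock front door", owner, owner))      # allowed
print(handle_voice_command("unlock front door", stranger, owner))   # refused
```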

Clustering is one of the most fundamental and widespread techniques in exploratory data analysis. Yet the basic approach to clustering has not really changed: a practitioner hand-picks a task-specific clustering loss to optimize and fits the given data to reveal the underlying cluster structure. Some losses, such as k-means, its non-linear kernelized version (both centroid-based), and DBSCAN (density-based), are popular choices due to their good empirical performance on a range of applications. Every so often, however, the clustering output obtained with these standard losses fails to reveal the underlying structure, and the practitioner has to custom-design their own variation. In this work, we take an intrinsically different approach to clustering: rather than fitting a dataset to a specific clustering loss, we train a recurrent model that learns how to cluster. The model is trained on pairs of example datasets (as input) and their corresponding cluster identities (as output). By providing multiple types of training datasets as inputs, our model gains the ability to generalize well to unseen datasets (new clustering tasks). Our experiments reveal that by training on simple synthetically generated datasets, or on existing real datasets, we can achieve better clustering performance on unseen real-world datasets than standard benchmark clustering techniques. Our meta-clustering model works well even for small datasets, where the usual deep learning models tend to perform worse.
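A sketch of how the meta-training pairs described above could be generated, assuming scikit-learn's synthetic dataset generators as the dataset families; the recurrent clustering model itself is omitted.

```python
import numpy as np
from sklearn.datasets import make_blobs, make_moons

def make_training_pair(rng):
    """Sample one (dataset, cluster-identity) training pair.

    Mixing dataset families (blob-like and moon-like here) is what lets a
    learned clustering model generalize to unseen clustering tasks.
    """
    if rng.random() < 0.5:
        X, y = make_blobs(n_samples=64, centers=int(rng.integers(2, 5)),
                          random_state=int(rng.integers(10**6)))
    else:
        X, y = make_moons(n_samples=64, noise=0.05,
                          random_state=int(rng.integers(10**6)))
    # Normalize so the model sees scale-free inputs.
    X = (X - X.mean(axis=0)) / X.std(axis=0)
    return X, y

rng = np.random.default_rng(3)
pairs = [make_training_pair(rng) for _ in range(1000)]   # meta-training set
X0, y0 = pairs[0]
print(X0.shape, set(y0))
```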
