
Reconfigurable Intelligent Surfaces (RISs) are considered one of the key disruptive technologies for future 6G networks. RISs revolutionize the traditional wireless communication paradigm by controlling the wave propagation properties of impinging signals as required. A major roadblock for RISs, however, is the need for a fast and complex control channel to continuously adapt to ever-changing wireless channel conditions. In this paper, we ask ourselves the question: would it be feasible to remove the need for control channels for RISs? We analyze the feasibility of devising Self-Configuring Smart Surfaces that can be easily and seamlessly installed throughout the environment, following the new Internet-of-Surfaces (IoS) paradigm, without requiring modifications to the deployed mobile network. To this end, we design MARISA, a self-configuring metasurface absorption and reflection solution, and show that it achieves better-than-expected performance, rivaling control-channel-driven RISs.
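
As a rough illustration of how a surface could steer a reflection toward a desired direction, the sketch below applies a standard phase-gradient (generalized Snell's law) profile across a uniform row of elements. The element count, spacing, carrier frequency, and angles are hypothetical placeholders, and this is not MARISA's actual self-configuration procedure.

```python
import numpy as np

# Hypothetical parameters, not from the paper: 32 elements, half-wavelength spacing.
c = 3e8
f = 28e9                       # carrier frequency (Hz)
lam = c / f                    # wavelength (m)
d = lam / 2                    # element spacing (m)
n = np.arange(32)              # element indices

theta_in = np.deg2rad(20.0)    # estimated angle of arrival
theta_out = np.deg2rad(-35.0)  # desired reflection angle

# Phase-gradient profile (generalized Snell's law) that redirects the incident
# wavefront toward theta_out; phases wrapped to [0, 2*pi).
phi = (-2 * np.pi / lam) * d * n * (np.sin(theta_out) - np.sin(theta_in))
phi = np.mod(phi, 2 * np.pi)
print(np.round(np.rad2deg(phi), 1))
```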

Related content

Reconfigurable Intelligent Surfaces (RISs) are an emerging technology for future wireless communication systems, enabling improved coverage in an energy-efficient manner. RISs are usually metasurfaces consisting of two-dimensional arrangements of metamaterial elements, whose individual response is commonly modeled in the literature as an adjustable phase shifter. However, this model holds only for narrowband communications; when wideband transmissions are utilized, one has to account for the frequency selectivity of metamaterials, whose response follows a Lorentzian profile. In this paper, we consider the uplink of a wideband RIS-empowered multi-user Multiple-Input Multiple-Output (MIMO) wireless system with Orthogonal Frequency Division Multiplexing (OFDM) signaling, while accounting for the frequency selectivity of RISs. In particular, we focus on designing the controllable parameters dictating the Lorentzian response of each RIS metamaterial element in order to maximize the achievable sum-rate. We devise a scheme combining block coordinate descent with penalty dual decomposition to tackle the resulting challenging optimization problem. Our simulation results reveal the rates achievable with realistically frequency-selective RISs in wideband settings, and quantify the performance loss incurred by state-of-the-art methods that assume the RIS elements behave as frequency-flat phase shifters.
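
For intuition on the frequency selectivity mentioned above, the following sketch evaluates a Lorentzian-shaped element response over a toy set of wideband subcarriers. The functional form is the commonly used Lorentzian polarizability model, and the oscillator strength, resonance frequency, and damping factor are illustrative placeholders rather than values from the paper.

```python
import numpy as np

# Illustrative Lorentzian response of a single metamaterial element:
#   alpha(f) = F * f^2 / (f0^2 - f^2 + 1j * kappa * f)
# F (oscillator strength), f0 (resonance), kappa (damping) are hypothetical.
F, f0, kappa = 1.0, 28.0e9, 1.0e9

subcarriers = np.linspace(27.5e9, 28.5e9, 8)   # toy wideband OFDM grid
alpha = F * subcarriers**2 / (f0**2 - subcarriers**2 + 1j * kappa * subcarriers)

# Magnitude and phase vary strongly across frequency near the resonance,
# unlike the frequency-flat phase-shifter model.
for f, a in zip(subcarriers, alpha):
    print(f"{f/1e9:.3f} GHz: |alpha| = {abs(a):.2f}, phase = {np.degrees(np.angle(a)):.1f} deg")
```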

Fast and reliable wireless communication has become a critical demand in human life. When natural disasters strike, providing ubiquitous connectivity with traditional wireless networks becomes challenging. In this context, unmanned aerial vehicle (UAV) based aerial networks offer a promising alternative for fast, flexible, and reliable wireless communications in mission-critical (MC) scenarios. Thanks to unique characteristics such as mobility, flexible deployment, and rapid reconfiguration, drones can dynamically change location to provide on-demand communications to users on the ground in emergency scenarios. As a result, the use of UAV base stations (UAV-BSs) has been considered an appropriate approach for providing rapid connectivity in MC scenarios. In this paper, we study how to control a UAV-BS in both static and dynamic environments. We investigate a situation in which a macro BS is destroyed as a result of a natural disaster and a UAV-BS is deployed using integrated access and backhaul (IAB) technology to provide coverage for users in the disaster area. We present a data collection system, signaling procedures, and machine learning applications for this use case. A deep reinforcement learning algorithm is developed to jointly optimize the tilts of the access and backhaul antennas of the UAV-BS as well as its three-dimensional placement. Evaluation results show that the proposed algorithm can autonomously navigate and configure the UAV-BS to satisfactorily serve the MC users on the ground.
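
To make the joint optimization concrete, here is a minimal, hypothetical sketch of the agent's interface: a discrete action set covering 3D movement plus access/backhaul tilt adjustments, and a coverage-based reward gated by backhaul quality. The action names, state fields, and SINR threshold are assumptions for illustration, not the paper's actual design.

```python
import random

# Hypothetical discrete action set: move the UAV-BS in 3D or adjust an antenna tilt.
ACTIONS = ["x+", "x-", "y+", "y-", "z+", "z-",
           "access_tilt+", "access_tilt-", "backhaul_tilt+", "backhaul_tilt-"]

def reward(user_sinrs_db, backhaul_sinr_db, sinr_min_db=0.0):
    """Fraction of MC users above an SINR threshold, gated by backhaul quality."""
    if backhaul_sinr_db < sinr_min_db:        # no usable backhaul -> no service
        return 0.0
    served = sum(1 for s in user_sinrs_db if s >= sinr_min_db)
    return served / max(len(user_sinrs_db), 1)

# Placeholder state and a random policy where a trained DRL agent would act.
state = {"pos": (0.0, 0.0, 100.0), "access_tilt": 10.0, "backhaul_tilt": 5.0}
action = random.choice(ACTIONS)
print(state, action, reward([3.2, -1.5, 7.8], backhaul_sinr_db=12.0))
```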

Mobile edge computing (MEC) is considered a novel paradigm for computation-intensive and delay-sensitive tasks in fifth-generation (5G) networks and beyond. However, its uncertainty, i.e., the dynamics and randomness arising on the mobile device, wireless channel, and edge network sides, results in high-dimensional, nonconvex, nonlinear, and NP-hard optimization problems. Thanks to evolved reinforcement learning (RL), by iteratively interacting with the dynamic and random environment, a trained agent can intelligently obtain the optimal policy in MEC. Furthermore, evolved variants such as deep RL (DRL) can achieve faster convergence and higher learning accuracy through parametric approximation of the large-scale state-action space. This paper provides a comprehensive review of RL-enabled MEC and offers insight for development in this area. More importantly, in connection with free mobility, dynamic channels, and distributed services, we identify the MEC challenges that can be addressed by different kinds of RL algorithms and describe how RL solutions tackle them in diverse mobile applications. Finally, open challenges are discussed to provide helpful guidance for future research on RL-enabled MEC.
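
As a concrete instance of the RL loop the survey covers, the sketch below applies a tabular Q-learning update to a toy binary offloading decision (execute locally vs. offload to the edge). The state discretization, latency numbers, and hyperparameters are purely illustrative, not drawn from any specific MEC scheme.

```python
import random
from collections import defaultdict

actions = [0, 1]                       # 0: compute locally, 1: offload to the edge
Q = defaultdict(lambda: [0.0, 0.0])    # Q-table indexed by a discretized state
alpha, gamma, eps = 0.1, 0.9, 0.1      # learning rate, discount, exploration rate

def step(state, action):
    """Toy environment: reward is negative task latency (illustrative numbers only)."""
    channel, queue = state
    latency = 4.0 if action == 0 else (1.0 + queue if channel == "good" else 6.0)
    next_state = (random.choice(["good", "bad"]), random.randint(0, 3))
    return next_state, -latency

state = ("good", 0)
for _ in range(5000):
    a = random.choice(actions) if random.random() < eps else max(actions, key=lambda x: Q[state][x])
    nxt, r = step(state, a)
    # Standard Q-learning update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
    Q[state][a] += alpha * (r + gamma * max(Q[nxt]) - Q[state][a])
    state = nxt
print(dict(Q))
```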

Deployment of small cells in dense urban areas as part of heterogeneous networks (HetNets), together with associated relay nodes for improved backhauling, is expected to be an important structural element in the design of beyond-5G (B5G) and 6G wireless access networks. A key operational aspect in HetNets is how to optimally implement the wireless backhaul links to efficiently support the traffic demand. In this work, we utilize the recently proposed Robotic Aerial Small Cells (RASCs), which are able to grasp different tall urban landforms, as wireless relay nodes for backhauling. This can be considered an alternative to fixed small cells (FSCs), which lack flexibility since their position cannot be altered once installed. More specifically, on-demand deployment of RASCs is considered for constructing a millimeter-wave (mmWave) backhaul network that optimizes the available network capacity, using a network flow-based mixed integer linear programming (MILP) formulation. Numerical investigations reveal that, for the same required achievable throughput, the number of RASCs required is 25% to 65% lower than the number of required FSCs. This result can have significant implications for reducing the wireless network equipment (CAPEX) needed to provide a given network capacity and allows for efficient and flexible network densification.
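
To illustrate the flavor of a network-flow MILP for on-demand relay placement, here is a heavily simplified sketch using the PuLP modeling library. The toy topology, link capacities, and the budget on deployed RASCs are hypothetical and much smaller than any realistic scenario; the paper's actual formulation is not reproduced here.

```python
import pulp

# Toy topology (hypothetical): fiber point "F", candidate RASC sites "R1"/"R2",
# and a small cell "S" that needs backhaul. Capacities are in arbitrary units.
links = {("F", "R1"): 10, ("F", "R2"): 6, ("R1", "S"): 8, ("R2", "S"): 8, ("F", "S"): 3}
sites = ["R1", "R2"]
max_rascs = 1                                    # budget on deployed RASCs

prob = pulp.LpProblem("rasc_backhaul", pulp.LpMaximize)
flow = {e: pulp.LpVariable(f"f_{e[0]}_{e[1]}", lowBound=0) for e in links}
deploy = {s: pulp.LpVariable(f"deploy_{s}", cat="Binary") for s in sites}

# Objective: maximize backhaul throughput delivered to the small cell S.
prob += pulp.lpSum(flow[e] for e in links if e[1] == "S")

# Capacity constraints; links touching a candidate site are usable only if it is deployed.
for e, cap in links.items():
    gated = [deploy[v] for v in e if v in deploy]
    prob += (flow[e] <= cap * gated[0]) if gated else (flow[e] <= cap)

# Flow conservation at candidate sites and the RASC deployment budget.
for s in sites:
    prob += pulp.lpSum(flow[e] for e in links if e[1] == s) == pulp.lpSum(
        flow[e] for e in links if e[0] == s)
prob += pulp.lpSum(deploy.values()) <= max_rascs

prob.solve()
print(pulp.value(prob.objective), {s: deploy[s].value() for s in sites})
```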

Reorientation (turning in plane) plays a critical role for all robots in field applications, especially those operating in confined spaces. While important, reorientation remains a relatively unstudied problem for robots, including limbless mechanisms, often called snake robots. Instead of looking at snakes, we take inspiration from observations of the turning behavior of the tiny nematode worm C. elegans. Our previous work presented an in-place and in-plane turning gait for limbless robots, called an omega turn, and prescribed it using a novel two-wave template. In this work, we advance omega turn-inspired controllers in three aspects: 1) we use geometric methods to vary the joint angle amplitudes and the forward-wave spatial frequency in our turning equation, establishing wide-range and precise amplitude and frequency modulation of the omega turn; 2) we use this new relationship to enable robots with fewer internal degrees of freedom (i.e., fewer joints in the body) to achieve the desired performance; and 3) we apply compliant control methods to this relationship to handle unmodelled effects in the environment. We experimentally validate our approach on a limbless robot, showing that the omega turn produces effective and robust turning motion in various types of environments, such as granular media and rock piles.
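
The sketch below evaluates a two-wave joint-angle template of the kind described above, superposing two sinusoidal body waves with independently tunable amplitudes and spatial frequencies. The specific functional form and all parameter values are illustrative assumptions, not the exact template or gait parameters from the paper.

```python
import numpy as np

def omega_turn_angles(t, n_joints=12, A1=0.6, A2=0.4, xi1=1.5, xi2=0.5,
                      omega=2 * np.pi, phase=np.pi / 2):
    """Joint angles (rad) from a superposition of two traveling body waves.

    A1/A2 are wave amplitudes, xi1/xi2 spatial frequencies (waves along the body),
    omega the temporal frequency, and phase the offset between the two waves.
    All values are illustrative, not the paper's template.
    """
    s = np.arange(n_joints) / n_joints           # normalized position along the body
    wave1 = A1 * np.sin(2 * np.pi * xi1 * s - omega * t)
    wave2 = A2 * np.sin(2 * np.pi * xi2 * s - omega * t + phase)
    return wave1 + wave2

# Sample the turning gait at four instants over one period.
for t in np.linspace(0.0, 1.0, 4, endpoint=False):
    print(np.round(omega_turn_angles(t), 2))
```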

In this paper, we propose a novel wireless architecture, mounted on a high-altitude aerial platform, that is enabled by a reconfigurable intelligent surface (RIS). By installing the RIS on the aerial platform, rich line-of-sight and full-area coverage can be achieved, thereby overcoming the limitations of conventional terrestrial RISs. We consider a scenario in which a sudden increase in traffic in an urban area triggers authorities to rapidly deploy unmanned aerial vehicle base stations (UAV-BSs) to serve the ground users. In this scenario, since the direct backhaul link from the ground source can be blocked by obstacles in the urban area, we propose reflecting the backhaul signal off the aerial-RIS so that it successfully reaches the UAV-BSs. We jointly optimize the placement and array-partitioning strategies of the aerial-RIS and the phases of the RIS elements, which increases the energy efficiency of every UAV-BS. We show that the complexity of our algorithm is bounded by quadratic order, implying high computational efficiency. We verify the performance of the proposed algorithm via extensive numerical evaluations and show that our method achieves outstanding energy efficiency compared to benchmark schemes.
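
The standard phase-alignment rule underlying such designs is to set each element's phase so that the source-to-RIS and RIS-to-destination paths add coherently. The sketch below shows that rule on randomly drawn toy channels; the channel model and array size are assumptions, and the paper's joint placement and array-partitioning optimization is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64                                              # number of RIS elements (toy value)

# Toy Rayleigh channels: ground source -> RIS (h) and RIS -> one UAV-BS (g).
h = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
g = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)

# Coherent phase alignment: cancel the phase of each cascaded path h_n * g_n.
theta = -np.angle(h * g)
gain_aligned = np.abs(np.sum(h * np.exp(1j * theta) * g)) ** 2
gain_random = np.abs(np.sum(h * np.exp(1j * rng.uniform(0, 2 * np.pi, N)) * g)) ** 2

print(f"aligned phases: {gain_aligned:.1f}  vs  random phases: {gain_random:.1f}")
```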

In recent years, Bi-Level Optimization (BLO) techniques have received extensive attention from both the learning and vision communities. A variety of BLO models arising in complex and practical tasks have a non-convex follower structure (a.k.a. without Lower-Level Convexity, LLC for short). However, this challenging class of BLOs lacks developments in both efficient solution strategies and solid theoretical guarantees. In this work, we propose a new algorithmic framework, named Initialization Auxiliary and Pessimistic Trajectory Truncated Gradient Method (IAPTT-GM), to partially address these issues. In particular, by introducing an auxiliary variable as initialization to guide the optimization dynamics and designing a pessimistic trajectory truncation operation, we construct a reliable approximate version of the original BLO in the absence of the LLC hypothesis. Our theoretical investigations establish the convergence of the solutions returned by IAPTT-GM towards those of the original BLO without LLC. As an additional bonus, we also theoretically justify the quality of IAPTT-GM embedded with Nesterov's accelerated dynamics under LLC. The experimental results confirm both the convergence of our algorithm without LLC and the theoretical findings under LLC.
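
A very reduced sketch of the two core ingredients, the auxiliary initialization and the pessimistic trajectory truncation, is given below on a toy quadratic problem in PyTorch. The objectives, step sizes, and inner iteration count are illustrative only, and the full IAPTT-GM scheme and its analysis are not reflected here.

```python
import torch

# Toy bilevel problem (illustrative only): upper-level F(x, y), lower-level g(x, y).
a = torch.tensor([2.0, -1.0])
def upper(x, y): return ((y - a) ** 2).sum()        # upper-level objective F
def lower(x, y): return ((y - x) ** 2).sum()        # lower-level objective g

x = torch.zeros(2, requires_grad=True)   # upper-level variable
z = torch.zeros(2, requires_grad=True)   # auxiliary initialization for the inner solve
opt = torch.optim.Adam([x, z], lr=0.05)

K, alpha = 15, 0.2
for _ in range(100):
    opt.zero_grad()
    y, traj = z, [z]
    for _ in range(K):                    # inner gradient descent, graph retained
        gy = torch.autograd.grad(lower(x, y), y, create_graph=True)[0]
        y = y - alpha * gy
        traj.append(y)
    # Pessimistic trajectory truncation: keep the iterate with the worst upper-level value.
    k_star = max(range(len(traj)), key=lambda k: upper(x, traj[k]).item())
    upper(x, traj[k_star]).backward()     # hypergradient w.r.t. x and z
    opt.step()

print(x.detach(), z.detach())
```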

Accurate segmentation of the prostate from magnetic resonance (MR) images provides useful information for prostate cancer diagnosis and treatment. However, automated prostate segmentation from 3D MR images still faces several challenges. For instance, the lack of a clear edge between the prostate and other anatomical structures makes it challenging to accurately extract the boundaries. The complex background texture and the large variation in the size, shape, and intensity distribution of the prostate itself complicate segmentation even further. With deep learning, especially convolutional neural networks (CNNs), emerging as the commonly used approach for medical image segmentation, the difficulty of obtaining large numbers of annotated medical images for training CNNs has become more pronounced than ever before. Since large-scale datasets are one of the critical components for the success of deep learning, the lack of sufficient training data makes it difficult to fully train complex CNNs. To tackle these challenges, in this paper we propose a boundary-weighted domain adaptive neural network (BOWDA-Net). To make the network more sensitive to boundaries during segmentation, a boundary-weighted segmentation loss (BWL) is proposed. Furthermore, an advanced boundary-weighted transfer learning approach is introduced to address the problem of small medical imaging datasets. We evaluate the proposed model on the publicly available MICCAI 2012 Prostate MR Image Segmentation (PROMISE12) challenge dataset. Our experimental results demonstrate that the proposed model is more sensitive to boundary information and outperforms other state-of-the-art methods.
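
One common way to realize a boundary-weighted loss is to upweight pixels near the ground-truth boundary, extracted here with a simple dilation-minus-erosion map computed via max-pooling. This is a generic sketch of the idea, not the BWL from BOWDA-Net, and the weighting factor and kernel size are placeholders.

```python
import torch
import torch.nn.functional as F

def boundary_weighted_bce(pred_logits, target, boundary_weight=5.0, k=3):
    """Binary cross-entropy with extra weight on pixels near the mask boundary.

    pred_logits, target: tensors of shape (B, 1, H, W); target is a 0/1 mask.
    The boundary map is dilation(mask) - erosion(mask), computed via max-pooling.
    boundary_weight and kernel size k are illustrative choices.
    """
    pad = k // 2
    dilated = F.max_pool2d(target, k, stride=1, padding=pad)
    eroded = -F.max_pool2d(-target, k, stride=1, padding=pad)
    boundary = (dilated - eroded).clamp(0, 1)            # 1 near edges, 0 elsewhere
    weights = 1.0 + boundary_weight * boundary
    return F.binary_cross_entropy_with_logits(pred_logits, target, weight=weights)

# Toy example: a 1x1x8x8 square mask and random predictions.
target = torch.zeros(1, 1, 8, 8)
target[:, :, 2:6, 2:6] = 1.0
pred = torch.randn(1, 1, 8, 8)
print(boundary_weighted_bce(pred, target).item())
```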

Model quantization is a widely used technique to compress and accelerate deep neural network (DNN) inference. Emerging DNN hardware accelerators have begun to support flexible bitwidths (1-8 bits) to further improve computation efficiency, which raises the great challenge of finding the optimal bitwidth for each layer: it requires domain experts to explore the vast design space, trading off accuracy, latency, power, and model size, which is both time-consuming and sub-optimal. Conventional quantization algorithms ignore the differences between hardware architectures and quantize all the layers in a uniform way. In this paper, we introduce the Hardware-Aware Automated Quantization (HAQ) framework, which leverages reinforcement learning to automatically determine the quantization policy, and we take the hardware accelerator's feedback into the design loop. Rather than relying on proxy signals such as FLOPs and model size, we employ a hardware simulator to generate direct feedback signals to the RL agent. Compared with conventional methods, our framework is fully automated and can specialize the quantization policy for different neural network and hardware architectures. Our framework reduces latency by 1.4-1.95x and energy consumption by 1.9x with negligible loss of accuracy compared with fixed-bitwidth (8-bit) quantization. Our framework reveals that the optimal policies on different hardware architectures (i.e., edge and cloud architectures) under different resource constraints (i.e., latency, power, and model size) are drastically different. We interpret the implications of the different quantization policies, which offer insights for both neural network architecture design and hardware architecture design.
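
For reference, the per-layer operation that a bitwidth policy controls can be as simple as symmetric linear quantization. The sketch below quantizes a weight tensor to k bits and reports the reconstruction error; the layer shape and the hand-picked bitwidths are arbitrary examples, not HAQ's learned policy or its hardware-in-the-loop setup.

```python
import numpy as np

def quantize_symmetric(w, bits):
    """Symmetric linear quantization of a weight tensor to `bits` bits."""
    qmax = 2 ** (bits - 1) - 1                  # e.g. 127 for 8 bits
    scale = np.abs(w).max() / qmax
    q = np.clip(np.round(w / scale), -qmax, qmax)
    return q * scale                            # de-quantized (simulated) weights

rng = np.random.default_rng(0)
layer = rng.standard_normal((64, 64)).astype(np.float32)

# A per-layer bitwidth choice (hand-picked here; HAQ learns it with RL and hardware feedback).
for bits in (8, 4, 2):
    err = np.mean((layer - quantize_symmetric(layer, bits)) ** 2)
    print(f"{bits}-bit quantization, MSE = {err:.5f}")
```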

The mobile network that millions of people use every day is one of the most complex real-world systems. Optimizing the mobile network to meet exploding customer demand and reduce CAPEX/OPEX poses greater challenges than those tackled in prior works. Indeed, learning to solve complex real-world problems to benefit everyone and make the world better has long been an ultimate goal of AI. However, applying deep reinforcement learning (DRL) to complex real-world problems remains an unsolved challenge, due to imperfect information, data scarcity, complex rules, potential negative real-world impact, and so on. To bridge this reality gap, we propose a sim-to-real framework that directly transfers learning from simulation to the real world without any training in the real world. First, we distill the temporal-spatial relationships between cells and mobile users into a scalable, 3D image-like tensor that best characterizes the partially observed mobile network. Second, inspired by AlphaGo, we introduce a novel self-play mechanism that empowers DRL agents to gradually improve their intelligence by competing for the best record on multiple tasks, just as athletes compete for world records in the decathlon. Third, a decentralized DRL method is proposed to coordinate multiple agents to compete and cooperate as a team in order to maximize the global reward and minimize potential negative impact. Using 7693 unseen test tasks over 160 unseen mobile networks in another simulator, as well as 6 field trials on 4 commercial mobile networks in the real world, we demonstrate the capability of this sim-to-real framework to directly transfer learning not only from one simulator to another, but also from simulation to the real world. This is the first time a DRL agent has successfully transferred its learning directly from simulation to very complex real-world problems with imperfect information, complex rules, huge state/action spaces, and multi-agent interactions.
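
As a rough illustration of an "image-like tensor" state encoding, the sketch below rasterizes user positions and a per-cell metric onto a small grid with separate channels. The grid size, channels, and metrics are assumptions made purely for illustration and do not reflect the paper's actual encoding.

```python
import numpy as np

def encode_state(users_xy, cells, grid=16, area=1000.0):
    """Rasterize users and cells into a (channels, grid, grid) tensor.

    Channel 0: user count per pixel; channel 1: a cell-load value painted at the
    cell location. All design choices here are illustrative, not the paper's encoding.
    """
    state = np.zeros((2, grid, grid), dtype=np.float32)
    scale = grid / area
    for x, y in users_xy:
        i, j = min(int(y * scale), grid - 1), min(int(x * scale), grid - 1)
        state[0, i, j] += 1.0
    for (x, y), load in cells:
        i, j = min(int(y * scale), grid - 1), min(int(x * scale), grid - 1)
        state[1, i, j] = load
    return state

users = [(120.0, 340.0), (880.0, 90.0), (500.0, 505.0)]     # toy user positions (m)
cells = [((250.0, 250.0), 0.7), ((750.0, 750.0), 0.3)]      # toy (position, load) pairs
print(encode_state(users, cells).shape)
```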
