
Communication can improve the control of important system parameters by allowing different grid components to exchange their states. This information exchange requires a reliable and fast communication infrastructure, and 5G communication is a viable means to achieve it. This paper investigates the performance of several smart grid applications over a 5G radio access network. Different scenarios, including set-point changes and transients, are evaluated, and the results indicate that the system maintains stability when a 5G network is used to communicate system states.
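
To make the role of network latency concrete, here is a minimal sketch (not from the paper) of a discrete-time control loop in which the controller sees the plant state only after a communication delay; `delay_steps` standing in for network latency, the PI gains, and the first-order plant are all illustrative assumptions:

```python
# Minimal sketch (not from the paper): a discrete-time control loop in which
# the controller sees the plant state only after a communication delay.
# "delay_steps", the PI gains, and the first-order plant are assumptions.
from collections import deque

def simulate(delay_steps, setpoint=1.0, steps=600, dt=0.01):
    x = 0.0                                   # plant state (e.g., a voltage)
    buf = deque([x] * (delay_steps + 1), maxlen=delay_steps + 1)
    kp, ki, integ = 2.0, 1.0, 0.0
    for _ in range(steps):
        measured = buf[0]                     # oldest sample in the link
        err = setpoint - measured
        integ += err * dt
        u = kp * err + ki * integ             # PI control on delayed state
        x += dt * (-x + u)                    # stable first-order plant
        buf.append(x)                         # newest sample enters the link
    return x

print(simulate(delay_steps=1))    # ~5G-like latency: tight tracking
print(simulate(delay_steps=50))   # slow link: visibly degraded tracking
```

With a short delay the loop tracks the set point tightly; stretching the delay degrades tracking, which is the behavior a low-latency 5G link is meant to avoid.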

Related Content

The field of Human-Robot Collaboration (HRC) has seen considerable progress in recent years. Thanks in part to advances in control and perception algorithms, robots have started to work in increasingly unstructured environments, where they operate side by side with humans to achieve shared tasks. However, little progress has been made toward systems that are truly effective in supporting the human, proactive in their collaboration, and able to autonomously take care of part of the task. In this work, we present a collaborative system capable of assisting a human worker despite limited manipulation capabilities, an incomplete model of the task, and partial observability of the environment. Our framework leverages information from a high-level, hierarchical model that is shared between the human and robot and that enables transparent synchronization between the peers and mutual understanding of each other's plan. More precisely, we first derive a partially observable Markov model from the high-level task representation; we then use an online Monte-Carlo solver to compute a short-horizon, robot-executable plan. The resulting policy is capable of interactive replanning on the fly, dynamic error recovery, and identification of hidden user preferences. We demonstrate that the system robustly supports the human in a realistic furniture construction task.
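
A minimal sketch (not the authors' implementation) of the online Monte-Carlo planning idea: the part the human wants next is hidden, so the robot keeps a particle belief over it, simulates rollouts for each candidate action, and executes the action with the best average return. The toy assembly task and reward values are assumptions:

```python
# Minimal sketch (not the authors' code) of online Monte-Carlo planning for
# a toy, partially observable assembly task: the part the human wants next
# is hidden, so the robot keeps a particle belief over it and picks the
# action with the best average simulated return. Rewards are assumptions.
import random

PARTS = ["leg", "top", "bracket"]

def rollout(pref, action, depth=3):
    """Simulated return: fetching the preferred part helps, a wrong part costs."""
    total = 1.0 if action == pref else -0.5
    for _ in range(depth - 1):                # crude random continuation
        total += 1.0 if random.choice(PARTS) == pref else -0.1
    return total

def plan(belief_particles, n_sims=200):
    scores = {a: 0.0 for a in PARTS}
    for a in PARTS:
        for _ in range(n_sims):
            pref = random.choice(belief_particles)   # sample the hidden state
            scores[a] += rollout(pref, a)
    return max(scores, key=scores.get)

belief = ["leg"] * 7 + ["top"] * 2 + ["bracket"]     # belief skewed to "leg"
print(plan(belief))                                  # almost surely "leg"
```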

The evolution of future beyond-5G/6G networks toward service-aware networking is built on network slicing technology. With network slicing, communication service providers seek to meet all the requirements imposed by verticals, including ultra-reliable low-latency communication (URLLC) services. In addition, the open radio access network (O-RAN) architecture paves the way for flexible sharing of network resources by introducing more programmability into the RAN. RAN slicing is an essential part of end-to-end network slicing, since it ensures efficient sharing of communication and computation resources. However, the stringent requirements of URLLC services and the dynamics of the RAN environment make RAN slicing challenging. In this article, we propose a two-level RAN slicing approach based on the O-RAN architecture to allocate communication and computation RAN resources among URLLC end-devices. For each slicing level, we model the resource-slicing problem as a single-agent Markov decision process (MDP) and design a deep reinforcement learning algorithm to solve it. Simulation results demonstrate the efficiency of the proposed approach in meeting the desired quality-of-service requirements.
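
As a toy illustration of casting one slicing level as a single-agent MDP, the sketch below uses tabular Q-learning as a stand-in for the paper's deep RL agents; the i.i.d. load model, the reward shaping, and the block count are assumptions:

```python
# Minimal sketch: tabular Q-learning as a stand-in for the paper's deep RL
# agents at one slicing level. The i.i.d. load model, the reward shaping,
# and the block count R are illustrative assumptions.
import random

R = 8                                   # resource blocks available
STATES = range(R + 1)                   # observed URLLC load level
ACTIONS = range(R + 1)                  # blocks granted to the URLLC slice

def reward(load, grant):
    met = grant >= load                 # enough blocks -> latency target met
    return (1.0 if met else -1.0) - 0.05 * grant   # penalize over-provisioning

Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
alpha, gamma, eps = 0.1, 0.9, 0.1
load = random.choice(STATES)
for _ in range(20000):
    a = (random.choice(ACTIONS) if random.random() < eps
         else max(ACTIONS, key=lambda x: Q[(load, x)]))
    r = reward(load, a)
    nxt = random.choice(STATES)         # next load arrives i.i.d. (assumption)
    best_next = max(Q[(nxt, b)] for b in ACTIONS)
    Q[(load, a)] += alpha * (r + gamma * best_next - Q[(load, a)])
    load = nxt

for s in STATES:                        # learned policy: grant ~= load
    print(s, max(ACTIONS, key=lambda a: Q[(s, a)]))
```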

Radio access network (RAN) slicing is a key technology that enables 5G networks to support the heterogeneous requirements of generic services, namely ultra-reliable low-latency communication (URLLC) and enhanced mobile broadband (eMBB). In this paper, we propose a two-time-scale RAN slicing mechanism to optimize the performance of URLLC and eMBB services. On the large time scale, an SDN controller allocates radio resources to gNodeBs according to the requirements of the eMBB and URLLC services. On the short time scale, each gNodeB allocates its available resources to its end-users and, if needed, requests additional resources from adjacent gNodeBs. We formulate this problem as a non-linear binary program and prove its NP-hardness. Next, for each time scale, we model the problem as a Markov decision process (MDP): the large time scale as a single-agent MDP and the short time scale as a multi-agent MDP. We leverage the exponential-weight algorithm for exploration and exploitation (EXP3) to solve the single-agent MDP of the large time scale and the multi-agent deep Q-learning (DQL) algorithm to solve the multi-agent MDP of the short-time-scale resource allocation. Extensive simulations show that our approach is efficient under different network parameter configurations and outperforms recent benchmark solutions.
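
EXP3 itself is compact enough to show in full; the sketch below runs it on a toy choice among three allocation configurations with Bernoulli rewards, where the arm means are an assumption for illustration, not anything from the paper:

```python
# Minimal sketch of EXP3, the bandit algorithm used on the large time scale,
# run on a toy choice among three allocation configurations. The Bernoulli
# arm means are an assumption for illustration.
import math, random

means = [0.2, 0.5, 0.8]                # unknown mean reward of each arm
K, T, gamma = len(means), 5000, 0.1
w = [1.0] * K                          # exponential weights
for _ in range(T):
    total = sum(w)
    p = [(1 - gamma) * wi / total + gamma / K for wi in w]  # mix in uniform
    arm = random.choices(range(K), weights=p)[0]
    r = 1.0 if random.random() < means[arm] else 0.0
    w[arm] *= math.exp(gamma * (r / p[arm]) / K)   # importance-weighted update
print(max(range(K), key=lambda i: w[i]))           # concentrates on arm 2
```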

As more and more autonomous vehicles (AVs) are deployed on public roads, designing socially compatible behaviors for them is becoming increasingly important. To generate safe and efficient actions, AVs need not only to predict the future behaviors of other traffic participants, but also to be aware of the uncertainties associated with such behavior prediction. In this paper, we propose an uncertainty-aware integrated prediction and planning (UAPP) framework. It allows AVs to infer the characteristics of other road users online and to generate behaviors optimizing not only their own rewards, but also their courtesy to others and their confidence regarding the prediction uncertainties. We first propose definitions for courtesy and confidence, and then explore their influence on AV behavior in interactive driving scenarios. Moreover, we evaluate the proposed algorithm on naturalistic human driving data by comparing the generated behavior against ground truth. Results show that online inference can significantly improve the human-likeness of the generated behaviors. Furthermore, we find that human drivers show great courtesy to others, even toward those without the right-of-way. We also find that such driving preferences vary significantly across cultures.
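
One way to picture how courtesy and confidence can enter a planner's objective is the toy scoring function below; the specific terms and weights are illustrative assumptions, not the paper's exact formulation:

```python
# Minimal sketch (illustrative, not the paper's exact formulation): score a
# candidate ego plan by its own reward, a courtesy term (extra cost the plan
# imposes on another driver), and a confidence term that discounts reward
# earned under uncertain predictions. All weights are assumptions.

def plan_score(ego_reward, other_cost_with_plan, other_cost_alone,
               pred_confidence, w_courtesy=0.5):
    # Courtesy: how much worse off the other driver is than if left unimpeded.
    courtesy_penalty = max(0.0, other_cost_with_plan - other_cost_alone)
    # Confidence: trust in the prediction the plan relies on, in [0, 1].
    return pred_confidence * ego_reward - w_courtesy * courtesy_penalty

# An assertive merge under a shaky prediction vs. a yielding plan:
print(plan_score(1.0, other_cost_with_plan=0.8, other_cost_alone=0.2,
                 pred_confidence=0.4))   # 0.10
print(plan_score(0.7, other_cost_with_plan=0.2, other_cost_alone=0.2,
                 pred_confidence=0.9))   # 0.63 -> the courteous plan wins
```

Under a shaky prediction, the confidence term deflates the value of the assertive plan, so the more courteous, more certain plan scores higher.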

We present a multi-agent system in which agents cooperate to solve a system of dependent tasks, with the capability to explore a solution space, make inferences, and query for information under a limited budget. An agent re-explores the solution space when an older solution expires, allowing the system to adapt to dynamic changes in the environment. We investigate the effects of task dependencies using the highly dependent graph $G_{40}$ (a well-known program graph of $40$ highly interlinked nodes, each representing a task) and the less dependent graph $G_{18}$ (a program graph of $18$ tasks with fewer links), while varying agent speeds, the complexity of the problem space, and the query budgets available to agents. Specifically, we evaluate trade-offs between an agent's speed and its query budget. During the experiments, we observed that increasing the speed of a single agent improves system performance only up to a point, and that increasing the number of faster agents may not improve system performance because of task dependencies. Favoring faster agents during budget allocation enhances system performance, in line with the "Matthew effect." We also observe that allocating more budget to a faster agent gives better performance for a less dependent system, whereas increasing the number of faster agents gives better performance for a highly dependent system.
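
Why extra speed stops paying off under dependencies can be seen in a minimal list-scheduling sketch over a toy graph (a small stand-in for the program graphs above; the unit-work tasks and agent speeds are assumptions): once the remaining tasks form a chain, a second or faster agent simply idles:

```python
# Minimal list-scheduling sketch over a toy dependent task graph (a small
# stand-in for G_40/G_18; unit-work tasks and agent speeds are assumptions).
# Tasks D and E form a chain, so extra agents cannot parallelize them.
import heapq

deps = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"], "E": ["D"]}
speeds = [2.0, 1.0]                     # agent 0 is twice as fast

finish = {}                             # task -> completion time
agent_free = [(0.0, i) for i in range(len(speeds))]
heapq.heapify(agent_free)
pending = set(deps)
while pending:
    # earliest-ready task whose predecessors already have finish times
    task = min((k for k in pending if all(d in finish for d in deps[k])),
               key=lambda k: max([finish[d] for d in deps[k]], default=0.0))
    t_free, agent = heapq.heappop(agent_free)
    start = max(t_free, max([finish[d] for d in deps[task]], default=0.0))
    finish[task] = start + 1.0 / speeds[agent]   # unit work / agent speed
    heapq.heappush(agent_free, (finish[task], agent))
    pending.remove(task)

print(finish)   # the dependency chain, not agent speed, bounds the makespan
```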

It is known that fixed-rate adaptive quantizers can be used to stabilize an open-loop-unstable linear system driven by unbounded noise. These quantizers can be designed to have near-optimal rate, and the resulting system will be stable in the sense of having an invariant probability measure (ergodicity) as well as a bounded second moment of the state. However, results on minimizing the state's second moment with such quantizers, an important goal in practice, do not seem to be available. In this paper, we construct a two-part adaptive coding scheme that is asymptotically optimal in terms of the second moments. The first part, as in prior work, yields ergodicity (via positive Harris recurrence), and the second part attains order optimality of the invariant second moment, resulting in near-optimal performance at high rates.
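
The flavor of such schemes can be seen in a minimal "zooming" quantizer, a simple illustration in the spirit of adaptive quantization rather than the paper's two-part code: the quantizer range expands when the state saturates it and contracts otherwise, keeping the second moment of an unstable scalar system bounded. The rate, zoom factors, and range floor are assumptions:

```python
# Minimal sketch in the spirit of adaptive ("zooming") quantization, not the
# paper's two-part scheme: a fixed-rate uniform quantizer whose half-range
# delta grows when the state saturates it and shrinks otherwise, keeping the
# second moment of x_{t+1} = a*x_t + u_t + w_t bounded despite a > 1.
# The rate (4 bits), zoom factors, and range floor are assumptions.
import random

a, bits, T = 2.0, 4, 5000
levels = 2 ** bits
x, delta, second_moment = 0.0, 4.0, 0.0
for _ in range(T):
    if abs(x) >= delta:                 # overflow: zoom out faster than a
        q = 0.0
        delta *= a
    else:                               # in range: quantize, then zoom in
        step = 2 * delta / levels
        q = round(x / step) * step
        delta = max(0.5 * delta, 4.0)   # floor sized to unit-variance noise
    x = a * x - a * q + random.gauss(0.0, 1.0)   # control u = -a*q
    second_moment += x * x
print(second_moment / T)                # stays bounded; open loop diverges
```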

Wireless energy transfer (WET) is a ground-breaking technology for cutting the last wire between mobile sensors and power grids in smart cities. Yet WET offers effective transmission of energy only over short distances. Robotic WET is an emerging paradigm that mounts the energy transmitter on a mobile robot and navigates the robot through different regions of a large area to charge remote energy harvesters. However, determining the robotic charging strategy in an unknown and dynamic environment is challenging due to the uncertainty of obstacles. This paper proposes a hardware-in-the-loop joint optimization framework that offers three distinctive features: 1) efficient model updates and re-optimization based on the last round of experimental data; 2) iterative refinement of the anchor list for adaptation to different environments; and 3) verification of algorithms in a high-fidelity Gazebo simulator and on a multi-robot testbed. Experimental results show that the proposed framework significantly reduces WET mission completion time while satisfying collision-avoidance and energy-harvesting constraints.
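
The hardware-in-the-loop pattern itself is simple to sketch; everything below (the energy model, the update rule, the pruning threshold) is a hypothetical stand-in for the paper's framework, with `run_round` taking the place of a Gazebo or testbed experiment:

```python
# Minimal sketch of the hardware-in-the-loop pattern with hypothetical names:
# optimize the anchor list on the current model, "run" the plan to collect
# measurements, refit the model, and repeat. run_round stands in for a
# Gazebo or testbed experiment; the energy model and threshold are assumptions.
import random

def run_round(anchors):
    """Measured energy per anchor from the last experimental round."""
    return {a: max(0.0, 1.0 / (1 + abs(a - 5)) + random.gauss(0.0, 0.02))
            for a in anchors}

model = {a: 0.5 for a in range(10)}      # prior harvested-energy estimates
anchors = list(model)                    # first plan visits every anchor
for round_no in range(3):
    data = run_round(anchors)            # 1) last-round experimental data
    for a, e in data.items():            #    ...drives the model update
        model[a] = 0.5 * model[a] + 0.5 * e
    anchors = [a for a in model if model[a] > 0.3]   # 2) refine anchor list
    print(round_no, sorted(anchors))     # list shrinks to productive anchors
```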

5G millimeter wave (mmWave) cellular networks have been deployed by carriers mostly in dense urban areas of the US and have been reported to deliver a throughput of around 1-2 Gbps per device. These throughput numbers are captured via speed-testing applications that run for only a few seconds at a time and are not indicative of the sustained throughput obtained while downloading a large volume of data, which can take several minutes to complete. In this paper we report the first detailed measurements, in three cities (Miami, Chicago, and San Francisco), of the impact of skin temperature, as measured by the device, on the throughput the device can sustain over many minutes. We report results from experiments on deployed 5G mmWave networks that show the effect of thermal throttling on a 5G mmWave communication link when a large file download saturates the link, i.e., the device is connected to four mmWave downlink channels, each 100 MHz wide. We observe a gradual increase in skin temperature within 1-2 minutes that correlates with a decrease in data rate as the number of aggregated mmWave channels drops from 4 to 1, before the device finally switches to 4G LTE. Finally, we identify messaging in the Radio Resource Control (RRC) layer confirming that the throughput reduction was throttling triggered by the skin-temperature rise, and we further demonstrate that cooling the device restores throughput performance. These results are extremely important because they indicate that the Gbps throughput deliverable by 5G mmWave to an end-user device will be limited not by network characteristics but by device thermal management.
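
The reported behavior is easy to caricature in a toy model; the heating rate, temperature thresholds, and per-carrier rates below are illustrative assumptions, not the measured values:

```python
# Minimal toy model of the reported behavior; the heating rate, thresholds,
# and per-carrier rates are illustrative assumptions, not measured values.

def step(temp_c, downloading):
    # crude thermal model: steady heating under load, cooling when idle
    temp_c += 0.05 if downloading else -0.10
    if temp_c < 39.0:
        channels, gbps = 4, 4 * 0.45    # four aggregated 100 MHz carriers
    elif temp_c < 41.0:
        channels, gbps = 1, 0.45        # throttled to a single carrier
    else:
        channels, gbps = 0, 0.05        # fallback to 4G LTE
    return temp_c, channels, gbps

temp = 35.0
for t in range(180):                    # three minutes of saturating download
    temp, ch, rate = step(temp, downloading=True)
    if t % 30 == 0:
        print(f"t={t:3d}s temp={temp:.1f}C carriers={ch} rate={rate:.2f} Gbps")
```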

Most Deep Reinforcement Learning (Deep RL) algorithms require a prohibitively large number of training samples to learn complex tasks. Many recent works on speeding up Deep RL have focused on distributed training and simulation. While distributed training is often done on the GPU, simulation is not. In this work, we propose GPU-accelerated RL simulation as an alternative to CPU-based simulation. Using NVIDIA Flex, a GPU-based physics engine, we show promising speed-ups in learning various continuous-control locomotion tasks. With a single GPU and CPU core, we are able to train the Humanoid running task in less than 20 minutes, using 10-1000x fewer CPU cores than previous works. We also demonstrate that our simulator scales to multi-GPU settings for training more challenging locomotion tasks.
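
The core idea, stepping thousands of simulation instances as one batched array operation rather than one environment per CPU process, can be sketched with NumPy (a CPU stand-in for illustration; the paper uses NVIDIA Flex on the GPU, and the pendulum dynamics here are an assumption):

```python
# Minimal NumPy sketch of batched simulation: one array op advances all
# environments at once. This is a CPU stand-in for the idea; the paper uses
# NVIDIA Flex on the GPU, and the pendulum dynamics here are an assumption.
import numpy as np

N, DT = 4096, 0.01                      # 4096 parallel pendulum-like envs
state = np.zeros((N, 2))                # columns: angle, angular velocity
state[:, 0] = np.random.uniform(-0.1, 0.1, N)

def step(state, action):
    theta, omega = state[:, 0], state[:, 1]
    omega = omega + DT * (-9.8 * np.sin(theta) + action)   # batched physics
    theta = theta + DT * omega
    reward = -theta ** 2 - 0.1 * omega ** 2
    return np.stack([theta, omega], axis=1), reward

actions = np.random.uniform(-1.0, 1.0, N)   # a policy batch would go here
state, reward = step(state, actions)
print(state.shape, reward.mean())           # one call stepped all 4096 envs
```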

Internet of Things (IoT) infrastructure within the physical library environment is the basis for an integrative, hybrid approach to digital resource recommenders. The IoT infrastructure provides mobile, dynamic wayfinding support for items in the collection, including features for location-based recommendations. The evaluation and analysis herein clarify the nature of users' requests for recommendations based on their location and describe the subject areas of the library for which users request recommendations. The results indicate that users of IoT-based recommendations are interested in a broad distribution of subjects, with a short-head concentration in this collection's American and English Literature holdings. A long-tail finding shows the diversity of topics recommended to users in the library book stacks through IoT-powered recommendations.
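
One plausible shape for such a location-based recommender (the shelf map, catalog entries, and popularity ranking below are hypothetical, not the study's system) is to map the nearest IoT beacon to a call-number range and rank items within it:

```python
# Minimal sketch with a hypothetical shelf map and catalog (not the study's
# system): the IoT beacon nearest the user maps to a call-number range, and
# items in that range are ranked by popularity for recommendation.

SHELF_MAP = {                       # beacon id -> Library of Congress classes
    "beacon-12": ("PS", "PR"),      # American and English Literature stacks
    "beacon-07": ("QA",),           # Mathematics / Computer Science
}
CATALOG = [                         # (call number, title, checkout count)
    ("PS3545", "Collected Stories", 412),
    ("PR6023", "Selected Novels", 598),
    ("QA76", "Introduction to Computing", 251),
]

def recommend(beacon_id, k=2):
    classes = SHELF_MAP[beacon_id]
    nearby = [item for item in CATALOG if item[0][:2] in classes]
    return sorted(nearby, key=lambda item: -item[2])[:k]

print(recommend("beacon-12"))       # literature titles for a user in PS/PR
```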
