
Service Function Chain (SFC) provisioning stands as a pivotal technology in the realm of 5G and future networks. Its essence lies in orchestrating Virtual Network Functions (VNFs) in a specified sequence for different types of SFC requests. Efficient SFC provisioning requires fast, reliable, and automatic VNF placement, especially in networks where massive numbers of SFC requests with ultra-reliable and low-latency communication (URLLC) requirements are generated. While much research has been done in this area, including Artificial Intelligence (AI) and Machine Learning (ML)-based solutions, this work presents an advanced Deep Reinforcement Learning (DRL)-based simulation model for SFC provisioning that reflects a realistic environment. The proposed simulation platform can handle massive heterogeneous SFC requests with different characteristics in terms of VNF chain, bandwidth, and latency constraints. The model is also flexible enough to apply to networks with different configurations in terms of the number of data centers (DCs), the logical connections among DCs, and service demands. The simulation model components and the workflow for processing VNFs in SFC requests are described in detail. Numerical results demonstrate that, using this simulation setup and the proposed algorithm, realistic SFC provisioning can be achieved with an optimal SFC acceptance ratio while minimizing E2E latency and resource consumption.
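
As a rough illustration of the kind of environment such a DRL agent interacts with, the sketch below defines a toy SFC-placement environment: the state is the remaining CPU of each DC plus the demand of the next VNF, and the action is the DC chosen to host it. The class name, resource figures, and reward shaping are illustrative assumptions, not the paper's simulator.

```python
# Hypothetical sketch, not the authors' simulator: a DRL-style environment that
# places the VNFs of one SFC request onto data centers (DCs) one by one.
import numpy as np

class ToySfcPlacementEnv:
    def __init__(self, n_dcs=4, dc_cpu=10.0, hop_delay_ms=2.0, latency_budget_ms=20.0):
        self.n_dcs = n_dcs
        self.dc_cpu = dc_cpu
        self.hop_delay_ms = hop_delay_ms          # assumed uniform inter-DC delay
        self.latency_budget_ms = latency_budget_ms

    def reset(self, vnf_cpu_demands=(2.0, 1.5, 3.0)):
        self.cpu = np.full(self.n_dcs, self.dc_cpu)   # remaining CPU per DC
        self.demands = list(vnf_cpu_demands)          # VNF chain of this SFC request
        self.idx = 0                                  # index of the next VNF to place
        self.prev_dc = None
        self.latency_ms = 0.0
        return self._obs()

    def _obs(self):
        return np.concatenate([self.cpu, [self.demands[self.idx]]])

    def step(self, dc):
        demand = self.demands[self.idx]
        if self.cpu[dc] < demand:                     # not enough capacity: reject SFC
            return self._terminal(reward=-1.0)
        if self.prev_dc is not None and dc != self.prev_dc:
            self.latency_ms += self.hop_delay_ms      # one inter-DC hop added
        if self.latency_ms > self.latency_budget_ms:  # latency constraint violated
            return self._terminal(reward=-1.0)
        self.cpu[dc] -= demand
        self.prev_dc = dc
        self.idx += 1
        if self.idx == len(self.demands):             # whole chain placed: accepted
            return self._terminal(reward=1.0 - 0.01 * self.latency_ms)
        return self._obs(), 0.0, False, {}

    def _terminal(self, reward):
        return None, reward, True, {}

# Random rollout standing in for a trained DRL policy.
env = ToySfcPlacementEnv()
obs, done, total = env.reset(), False, 0.0
while not done:
    obs, r, done, _ = env.step(np.random.randint(env.n_dcs))
    total += r
print("episode reward:", total)
```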

Related Content

The advent of 6G networks will mark a pivotal juncture in the evolution of telecommunications, characterized by the proliferation of devices, dynamic service requests, and the integration of edge and cloud computing. In response to these transformative shifts, this paper proposes a service and resource discovery architecture as part of service provisioning for the future 6G edge-cloud continuum. Through the architecture's orchestration and platform components, users gain access to services efficiently and on time. Blockchain underpins trust in this inherently trustless environment, while semantic networking dynamically extracts context from service requests, fostering efficient communication and service delivery. A key innovation lies in dynamic overlay zoning, which not only optimizes resource allocation but also endows our architecture with scalability, adaptability, and resilience. Notably, our architecture excels at prediction, harnessing learning algorithms to anticipate user and service-instance behavior, thereby enhancing network responsiveness and preserving service continuity. This comprehensive architecture paves the way for resource optimization, latency reduction, and seamless service delivery, positioning it as an instrumental pillar in the unfolding 6G landscape. Simulation results show that our architecture provides near-optimal, timely responses that significantly improve the network's potential, offering scalable and efficient service and resource discovery.
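
As a toy illustration of one ingredient above, the sketch below groups edge-node locations into overlay zones with a plain k-means stand-in, so that discovery queries can be scoped to a zone rather than the whole continuum; the function form_zones and all coordinates are hypothetical and do not reflect the paper's zoning mechanism.

```python
# Illustrative stand-in for dynamic overlay zoning: cluster edge nodes by proximity.
import numpy as np

def form_zones(node_xy, k=3, iters=20, rng=None):
    """Cluster edge-node coordinates into k overlay zones; returns (centers, labels)."""
    rng = rng or np.random.default_rng(0)
    centers = node_xy[rng.choice(len(node_xy), size=k, replace=False)]
    for _ in range(iters):
        # Assign each node to its nearest zone center, then recompute the centers.
        d = np.linalg.norm(node_xy[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for z in range(k):
            if np.any(labels == z):
                centers[z] = node_xy[labels == z].mean(axis=0)
    return centers, labels

rng = np.random.default_rng(1)
nodes = rng.uniform(0, 100, size=(60, 2))          # hypothetical edge-node locations
centers, labels = form_zones(nodes, k=3, rng=rng)
print("nodes per zone:", np.bincount(labels, minlength=3))
```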

This paper proposes a novel Reinforcement Learning (RL) approach for sim-to-real policy transfer of Vertical Take-Off and Landing Unmanned Aerial Vehicles (VTOL-UAVs). The proposed approach is designed for VTOL-UAV landing on offshore docking stations in maritime operations. VTOL-UAVs in maritime operations face limits on their operational range, primarily stemming from constraints imposed by their battery capacity. The concept of autonomous landing on a charging platform presents an intriguing prospect for mitigating these limitations by facilitating battery charging and data transfer. However, current Deep Reinforcement Learning (DRL) methods exhibit drawbacks, including lengthy training times and modest success rates. In this paper, we tackle these concerns by decomposing the landing procedure into a sequence of more manageable tasks: an approach phase and a landing phase. The proposed architecture uses a model-based control scheme for the approach phase, in which the VTOL-UAV approaches the offshore docking station. In the landing phase, DRL agents are trained offline to learn the optimal policy for docking on the offshore station. The Joint North Sea Wave Project (JONSWAP) spectrum model is employed to create a wave model for each episode, enhancing policy generalization for sim-to-real transfer. A set of DRL algorithms has been tested through numerical simulations, including value-based and policy-based agents such as Deep Q-Networks (DQN) and Proximal Policy Optimization (PPO), respectively. The numerical experiments show that the PPO agent can learn complicated and efficient policies to land in uncertain environments, which in turn enhances the likelihood of successful sim-to-real transfer.
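
The JONSWAP spectrum mentioned above has a standard textbook form, sketched below for randomizing the sea state per training episode; the peak frequency, peak-enhancement factor, and episode length are placeholder values rather than the paper's settings.

```python
# Standard JONSWAP spectral density plus a random-phase wave synthesis per episode.
import numpy as np

def jonswap_spectrum(omega, alpha=0.0081, gamma=3.3, omega_p=0.8, g=9.81):
    """Spectral density S(omega) [m^2 s] for angular frequencies omega [rad/s]."""
    sigma = np.where(omega <= omega_p, 0.07, 0.09)
    r = np.exp(-((omega - omega_p) ** 2) / (2.0 * sigma ** 2 * omega_p ** 2))
    return (alpha * g ** 2 / omega ** 5
            * np.exp(-1.25 * (omega_p / omega) ** 4)
            * gamma ** r)

def sample_wave_elevation(t, n_components=128, rng=None):
    """Synthesize a surface-elevation time series by superposing random-phase cosines."""
    rng = rng or np.random.default_rng()
    omega = np.linspace(0.3, 3.0, n_components)
    d_omega = omega[1] - omega[0]
    amp = np.sqrt(2.0 * jonswap_spectrum(omega) * d_omega)
    phase = rng.uniform(0.0, 2.0 * np.pi, n_components)
    return (amp[:, None] * np.cos(omega[:, None] * t[None, :] + phase[:, None])).sum(axis=0)

t = np.linspace(0.0, 60.0, 1200)   # one hypothetical 60 s training episode
eta = sample_wave_elevation(t)     # proxy for the docking platform's vertical motion
print(eta.shape, float(eta.std()))
```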

Exchangeability concerning a continuous exposure, X, may be assumed in order to identify average exposure effects of X, AEE(X). When X is measured with error (Xep), three challenges arise. First, exchangeability regarding Xep does not equal exchangeability regarding X. Second, the non-differential error assumption (NDEA) can be overly stringent in practice. Third, a definition of exchangeability that allows AEE(Xep) to differ from AEE(X) is lacking. To address these challenges, this article unifies exchangeability and exposure/confounder measurement errors with three novel concepts. The first, Probabilistic Exchangeability (PE), is an exchangeability assumption that allows for a difference between AEE(Xep) and AEE(X). The second, Emergent Pseudo Confounding (EPC), describes the bias introduced by exposure measurement error through confounding-like mechanisms. The third, Emergent Confounding, describes when bias due to confounder measurement error arises. PE requires adjustment for E(P)C, which can be performed in the same way as confounding adjustment. Under PE, the coefficient of determination (R2) in the regression of Xep against X may sometimes be sufficient to measure the difference between AEE(Xep) and AEE(X) on the risk-difference and risk-ratio scales. This paper provides comprehensive insight into when AEE(Xep) is a surrogate for AEE(X). Differential errors can be addressed and may not compromise causal inference.
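
A minimal simulation, written here purely for illustration, shows the flavor of the R2 claim on the difference scale under classical, non-differential error: the slope estimated with Xep equals the true slope times the R2 of the regression of Xep against X. The data-generating values below are arbitrary.

```python
# Toy demonstration of attenuation by R^2 under classical measurement error.
import numpy as np

rng = np.random.default_rng(0)
n, beta = 200_000, 2.0
x = rng.normal(size=n)                      # true exposure X
xep = x + rng.normal(scale=0.7, size=n)     # error-prone exposure Xep = X + e
y = beta * x + rng.normal(size=n)           # outcome generated from X only

def ols_slope(a, b):
    """Slope of the least-squares regression of b on a."""
    return np.cov(a, b)[0, 1] / np.var(a, ddof=1)

r2 = np.corrcoef(x, xep)[0, 1] ** 2         # R^2 of Xep regressed on X
print("true effect          :", beta)
print("effect using Xep     :", round(ols_slope(xep, y), 3))
print("true effect times R^2:", round(beta * r2, 3))   # matches the line above
```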

Narrowband Internet of Things (NB-IoT) is a promising technology designed specifically by the 3rd Generation Partnership Project (3GPP) to meet the growing demand for massive machine-type communications (mMTC). More and more industrial companies choose NB-IoT networks as the solution for mMTC due to the unique design and technical specifications released by 3GPP. To evaluate the performance of the NB-IoT network, we design a system-level simulation for NB-IoT networks in this paper. In particular, the structure of the system-level simulator is divided into four parts, i.e., initialization, pre-generation, the main simulation loop, and post-processing. Moreover, three key techniques are developed in the implementation of the NB-IoT network, accounting for enhanced coverage, massive connectivity, and low power consumption. Simulation results demonstrate that the cumulative distribution function curves of the signal-to-interference-and-noise ratio are fully compliant with the industrial standard, and the throughput performance shows how the NB-IoT network achieves massive connectivity at the cost of data rate.
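
The four-part simulator structure can be pictured with the skeleton below; the functions, traffic model, and link budget are placeholders and are not calibrated to 3GPP parameters.

```python
# Skeleton echoing the four stages: initialization, pre-generation, main loop, post-processing.
import numpy as np

def initialize(n_cells=7, ues_per_cell=50, seed=1):
    rng = np.random.default_rng(seed)
    return {"rng": rng, "n_ues": n_cells * ues_per_cell, "tx_power_dbm": 23.0}

def pre_generate(cfg, n_tti=1000):
    # Pre-draw large-scale fading and traffic arrivals so the main loop stays lightweight.
    rng = cfg["rng"]
    pathloss_db = 120.0 + 10.0 * rng.random(cfg["n_ues"])         # placeholder path loss
    arrivals = rng.poisson(lam=0.01, size=(n_tti, cfg["n_ues"]))  # packets per TTI
    return pathloss_db, arrivals

def main_loop(cfg, pathloss_db, arrivals, noise_dbm=-114.0):
    sinr_db, served = [], 0
    for tti_arrivals in arrivals:
        for ue in np.flatnonzero(tti_arrivals):
            rx_dbm = cfg["tx_power_dbm"] - pathloss_db[ue]
            sinr_db.append(rx_dbm - noise_dbm)   # interference omitted in this sketch
            served += 1
    return np.array(sinr_db), served

def post_process(sinr_db):
    xs = np.sort(sinr_db)
    cdf = np.arange(1, xs.size + 1) / xs.size    # empirical SINR CDF
    return xs, cdf

cfg = initialize()
pl, arr = pre_generate(cfg)
sinr, served = main_loop(cfg, pl, arr)
xs, cdf = post_process(sinr)
print("served packets:", served, "| median SINR [dB]:", round(float(np.median(sinr)), 1))
```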

Unmanned Aerial Vehicles (UAVs) provide agile and safe solutions for communication relay networks, offering improved throughput. However, their modeling and control present challenges, and real-world deployment is hindered by the gap between simulation and reality. Moreover, enhancing situational awareness is critical. Several works in the literature have proposed integrating UAV operation with immersive digital technologies, such as Digital Twin (DT) and Extended Reality (XR), to address these challenges. This paper provides a comprehensive overview of current research and developments involving immersive digital technologies for UAVs, including the latest advancements and emerging trends. We also explore the integration of DT and XR with Artificial Intelligence (AI) algorithms to create more intelligent, adaptive, and responsive UAV systems. Finally, we provide discussions, identify gaps in current research, and suggest future directions for studying the application of immersive technologies in UAVs, fostering further innovation and development in this field. We envision that the fusion of DTs with XR will transform how UAVs operate, offering tools that enhance visualization, improve decision-making, and enable effective collaboration.

Automatic Sign Language (SL) recognition is an important task in the computer vision community. To build a robust SL recognition system, a considerable amount of data is needed, which is particularly lacking for Indian Sign Language (ISL). In this paper, we propose a large-scale isolated ISL dataset and a novel SL recognition model based on a skeleton graph structure. The dataset covers 2,002 common words used daily in the deaf community, recorded by 20 deaf adult signers (10 male and 10 female) and containing 40,033 videos. We propose an SL recognition model, the Hierarchical Windowed Graph Attention Network (HWGAT), that utilizes the human upper-body skeleton graph structure. HWGAT tries to capture distinctive motions by attending to different body parts induced by the skeleton graph structure. The utility of the proposed dataset and the usefulness of our model are evaluated through extensive experiments. We pre-trained the proposed model on the proposed dataset and fine-tuned it on different sign language datasets, boosting performance by 1.10, 0.46, 0.78, and 6.84 percentage points on INCLUDE, LSA64, AUTSL, and WLASL, respectively, compared to existing state-of-the-art skeleton-based models.
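
For readers unfamiliar with skeleton-graph attention, the sketch below shows a single attention head restricted to a toy upper-body skeleton graph; it illustrates the general mechanism only and is not the HWGAT architecture or its hierarchical windowing, and the joint connections and dimensions are invented for the example.

```python
# Minimal single-head graph attention over skeleton keypoints (illustration only).
import numpy as np

def skeleton_attention(x, adj, w, a, eps=1e-9):
    """x: (N, F) keypoint features, adj: (N, N) 0/1 skeleton adjacency with self-loops,
    w: (F, H) projection, a: (2H,) attention vector. Returns (N, H) updated features."""
    h = x @ w                                              # project features
    # Pairwise attention logits e_ij = LeakyReLU(a^T [h_i || h_j]).
    e = np.add.outer(h @ a[: h.shape[1]], h @ a[h.shape[1]:])
    e = np.where(e > 0, e, 0.2 * e)                        # LeakyReLU
    e = np.where(adj > 0, e, -1e9)                         # restrict attention to the skeleton graph
    alpha = np.exp(e - e.max(axis=1, keepdims=True))
    alpha = alpha / (alpha.sum(axis=1, keepdims=True) + eps)
    return alpha @ h                                       # neighborhood-weighted update

rng = np.random.default_rng(0)
n_joints, feat, hid = 8, 4, 16                 # toy upper-body skeleton
adj = np.eye(n_joints)
for i, j in [(0, 1), (1, 2), (2, 3), (1, 4), (4, 5), (1, 6), (6, 7)]:
    adj[i, j] = adj[j, i] = 1                  # illustrative spine/arm connections
x = rng.normal(size=(n_joints, feat))          # per-frame keypoint features
out = skeleton_attention(x, adj, rng.normal(size=(feat, hid)) * 0.1,
                         rng.normal(size=2 * hid) * 0.1)
print(out.shape)                               # (8, 16)
```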

Graph neural networks (GNNs) have demonstrated a significant boost in prediction performance on graph data. At the same time, the predictions made by these models are often hard to interpret. In that regard, many efforts have been made to explain the prediction mechanisms of these models through approaches such as GNNExplainer, XGNN, and PGExplainer. Although such works present systematic frameworks for interpreting GNNs, a holistic review of explainable GNNs is unavailable. In this survey, we present a comprehensive review of explainability techniques developed for GNNs. We focus on explainable graph neural networks and categorize them based on the explainable methods they use. We further provide the common performance metrics for GNN explanations and point out several future research directions.
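
As a condensed example of the perturbation-based family that includes GNNExplainer and PGExplainer, the sketch below learns a sparse edge mask that preserves a toy GCN's predictions; the model, data, and hyperparameters are arbitrary stand-ins rather than any published configuration.

```python
# Learn a soft edge mask that keeps a frozen GCN's predictions while staying sparse.
import torch

torch.manual_seed(0)
n, f, c = 10, 5, 3
adj = (torch.rand(n, n) < 0.3).float()
adj = (((adj + adj.T) > 0).float() + torch.eye(n)).clamp(max=1.0)  # symmetric + self-loops
x = torch.randn(n, f)
w1, w2 = torch.randn(f, 8), torch.randn(8, c)      # frozen "trained" GCN weights

def gcn(adj_used):
    d = adj_used.sum(1, keepdim=True).clamp(min=1e-6)
    h = torch.relu((adj_used / d) @ x @ w1)
    return (adj_used / d) @ h @ w2                 # node logits

target = gcn(adj).argmax(1)                        # predictions to be preserved
mask_logits = torch.zeros(n, n, requires_grad=True)
opt = torch.optim.Adam([mask_logits], lr=0.05)

for _ in range(200):
    mask = torch.sigmoid(mask_logits)
    loss = (torch.nn.functional.cross_entropy(gcn(adj * mask), target)
            + 0.02 * mask.mean())                  # fidelity + sparsity penalty
    opt.zero_grad(); loss.backward(); opt.step()

important = (torch.sigmoid(mask_logits) * adj) > 0.7   # surviving edges form the explanation
print("edges kept:", int(important.sum().item()), "of", int(adj.sum().item()))
```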

Deployment of Internet of Things (IoT) devices and data fusion techniques have gained popularity in public and government domains. This usually requires capturing and consolidating data from multiple sources. Because datasets do not necessarily originate from identical sensors, fused data typically results in a complex data problem. Since the military is investigating how heterogeneous IoT devices can aid its processes and tasks, we investigate a multi-sensor approach. Moreover, we propose a signal-to-image encoding approach that fuses data from IoT wearable devices into an image that is invertible and easier to visualize, supporting decision making. Furthermore, we investigate the challenge of enabling intelligent identification and detection operations, and we demonstrate the feasibility of the proposed Deep Learning and Anomaly Detection models, which can support future applications that utilize hand gesture data from wearable devices.
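
A deliberately simple, invertible signal-to-image encoding in this spirit is sketched below: windows of a multi-channel wearable signal are tiled into a 2-D array that can be rendered as an image and decoded back exactly. The layout is an illustrative assumption, not the authors' encoding.

```python
# Tile multi-channel signal windows into a 2-D array, with an exact inverse.
import numpy as np

def encode(signal, window):
    """signal: (channels, samples) with samples divisible by `window`.
    Returns an image of shape (channels * samples // window, window)."""
    c, t = signal.shape
    return signal.reshape(c, t // window, window).reshape(-1, window)

def decode(image, channels):
    """Exact inverse of `encode`."""
    rows, window = image.shape
    return image.reshape(channels, rows // channels, window).reshape(channels, -1)

rng = np.random.default_rng(0)
gesture = rng.normal(size=(6, 128))            # e.g. 3-axis accelerometer + gyroscope
img = encode(gesture, window=16)               # (48, 16) image-like array
restored = decode(img, channels=6)
print(img.shape, bool(np.allclose(restored, gesture)))   # (48, 16) True
```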

Deep neural networks (DNNs) are successful in many computer vision tasks. However, the most accurate DNNs require millions of parameters and operations, making them energy-, computation-, and memory-intensive. This impedes the deployment of large DNNs in low-power devices with limited compute resources. Recent research improves DNN models by reducing the memory requirements, energy consumption, and number of operations without significantly decreasing accuracy. This paper surveys the progress of low-power deep learning and computer vision, specifically with regard to inference, and discusses methods for compacting and accelerating DNN models. The techniques can be divided into four major categories: (1) parameter quantization and pruning, (2) compressed convolutional filters and matrix factorization, (3) network architecture search, and (4) knowledge distillation. We analyze the accuracy, advantages, disadvantages, and potential solutions to the problems associated with the techniques in each category. We also discuss new evaluation metrics as a guideline for future research.
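
Two of the surveyed techniques, magnitude pruning and uniform 8-bit post-training quantization, are small enough to sketch directly; the sparsity level and bit-width below are arbitrary demo choices applied to a random weight matrix.

```python
# Magnitude pruning and symmetric int8 quantization of a weight matrix.
import numpy as np

def magnitude_prune(w, sparsity=0.5):
    """Zero out the smallest-magnitude fraction of weights."""
    thresh = np.quantile(np.abs(w), sparsity)
    return np.where(np.abs(w) < thresh, 0.0, w)

def quantize_int8(w):
    """Symmetric uniform quantization to int8 plus a scale for dequantization."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

rng = np.random.default_rng(0)
w = rng.normal(scale=0.1, size=(256, 256))
wp = magnitude_prune(w, sparsity=0.7)
q, s = quantize_int8(wp)
recon = q.astype(np.float32) * s
print("sparsity:", round(float((wp == 0).mean()), 2),
      "| max quantization error:", round(float(np.abs(recon - wp).max()), 5))
```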

Deep convolutional neural networks (CNNs) have recently achieved great success in many visual recognition tasks. However, existing deep neural network models are computationally expensive and memory-intensive, hindering their deployment in devices with low memory resources or in applications with strict latency requirements. A natural approach, therefore, is to perform model compression and acceleration in deep networks without significantly decreasing model performance. Tremendous progress has been made in this area during the past few years. In this paper, we survey the recently developed advanced techniques for compacting and accelerating CNN models. These techniques are roughly categorized into four schemes: parameter pruning and sharing, low-rank factorization, transferred/compact convolutional filters, and knowledge distillation. Methods of parameter pruning and sharing are described first, after which the other techniques are introduced. For each scheme, we provide insightful analysis regarding performance, related applications, advantages, and drawbacks. We then go through a few very recent successful methods, for example, dynamic capacity networks and stochastic depth networks. After that, we survey the evaluation metrics and the main datasets used for evaluating model performance, as well as recent benchmarking efforts. Finally, we conclude the paper and discuss remaining challenges and possible directions on this topic.
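
Knowledge distillation, the last of the four schemes, reduces to a short loss function in its classic Hinton-style form, sketched below with random tensors standing in for a CNN teacher/student pair; the temperature and weighting are arbitrary choices for the demo.

```python
# Temperature-softened KL divergence against the teacher plus cross-entropy on labels.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    F.softmax(teacher_logits / T, dim=1),
                    reduction="batchmean") * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

# Toy usage with random tensors in place of real teacher/student networks.
torch.manual_seed(0)
student = torch.randn(8, 10, requires_grad=True)
teacher = torch.randn(8, 10)
labels = torch.randint(0, 10, (8,))
loss = distillation_loss(student, teacher, labels)
loss.backward()
print(float(loss))
```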
