
Immersive technologies such as virtual reality (VR), augmented reality (AR), and holograms will transform users' digital experience. These immersive technologies have a multitude of applications, including telesurgery, teleconferencing, Internet shopping, and computer games. Holographic-type communication (HTC) is a form of augmented reality media that provides an immersive experience to Internet users. However, HTC has distinctive characteristics and network requirements, and the existing network architecture and transport protocols may not be able to cope with its stringent network requirements. Therefore, in this paper, we provide an in-depth and critical study of transport protocols for HTC. We also discuss the characteristics and network requirements of HTC. Based on a performance evaluation of the existing transport protocols, we propose a roadmap for designing new high-performance transport protocols for immersive applications.

Related content

For vehicular metaverses, one of the ultimate user-centric goals is to optimize the immersive experience and Quality of Service (QoS) for users on board. To achieve this goal, Semantic Communication (SemCom) has been introduced as a revolutionary paradigm that significantly eases communication resource pressure for vehicular metaverse applications. SemCom enables high-quality and ultra-efficient vehicular communication, even with explosively increasing data traffic among vehicles. In this article, we propose a hierarchical SemCom-enabled vehicular metaverse framework consisting of the global metaverse, local metaverses, a SemCom module, and a resource pool. The global and local metaverses are brand-new concepts from the standpoint of the metaverse's distribution. Considering the QoS of users, this article explores the potential security vulnerabilities of the proposed framework. To that end, this study highlights a specific security risk to the framework's SemCom module and offers a viable defense solution, thereby encouraging community researchers to focus more on vehicular metaverse security. Finally, we provide an overview of the open issues of secure SemCom in vehicular metaverses, notably pointing out potential future research directions.

Emerging optical and virtualization technologies enable the design of more flexible and demand-aware networked systems, in which resources can be optimized toward the actual workload they serve. For example, in a demand-aware datacenter network, frequently communicating nodes (e.g., two virtual machines or a pair of racks in a datacenter) can be placed topologically closer, reducing communication costs and hence improving the overall network performance. This paper revisits the bounded-degree network design problem underlying such demand-aware networks. Namely, given a distribution over communicating server pairs, we want to design a network with bounded maximum degree that minimizes expected communication distance. In addition to this known problem, we introduce and study a variant where we allow Steiner nodes (i.e., additional routers) to be added to augment the network. We improve the understanding of this problem domain in several ways. First, we shed light on the complexity and hardness of the aforementioned problems, and study a connection between them and the virtual network embedding problem. We then provide a constant-factor approximation algorithm for the Steiner node version of the problem, and use it to improve over prior state-of-the-art algorithms for the original version of the problem with sparse communication distributions. Finally, we investigate various heuristic approaches to the bounded-degree network design problem, in particular providing a reliable heuristic algorithm with good experimental performance. We report on an extensive empirical evaluation, using several real-world traffic traces from datacenters, and find that our approach results in improved demand-aware network designs.
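To make the problem statement concrete, the following is a minimal sketch, not the paper's algorithm, of a greedy heuristic for demand-aware, bounded-degree network design: the heaviest communicating pairs receive direct links first, subject to the degree bound, and the resulting topology is scored by its expected hop distance under the demand distribution. All function names and the tie-breaking order are illustrative assumptions.

```python
from collections import deque

def greedy_bounded_degree_network(demands, max_degree):
    """Greedily add a direct edge for the heaviest demand pairs first,
    respecting the maximum degree bound at every node.

    demands: dict mapping node pairs (u, v) to their communication probability.
    """
    nodes = set()
    for (u, v) in demands:
        nodes.update((u, v))
    degree = {n: 0 for n in nodes}
    edges = set()
    for (u, v), p in sorted(demands.items(), key=lambda kv: -kv[1]):
        if degree[u] < max_degree and degree[v] < max_degree:
            edges.add(frozenset((u, v)))
            degree[u] += 1
            degree[v] += 1
    return nodes, edges

def expected_distance(nodes, edges, demands):
    """Expected hop distance over the demand distribution (BFS per source)."""
    adj = {n: [] for n in nodes}
    for e in edges:
        u, v = tuple(e)
        adj[u].append(v)
        adj[v].append(u)

    def bfs(src):
        dist = {src: 0}
        q = deque([src])
        while q:
            x = q.popleft()
            for y in adj[x]:
                if y not in dist:
                    dist[y] = dist[x] + 1
                    q.append(y)
        return dist

    return sum(p * bfs(u).get(v, float("inf")) for (u, v), p in demands.items())
```

On a three-node example with degree bound 2, the greedy step can afford a direct link for every pair, so the expected distance is exactly one hop; tighter bounds force multi-hop routes, which is where the approximation algorithms in the paper come in.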

The capability of R to do symbolic mathematics is enhanced by the caracas package. This package uses the Python computer algebra library SymPy as a back-end, but caracas is tightly integrated into the R environment. This gives the R user access to symbolic mathematics within R at a high abstraction level, rather than working with text strings and text-string manipulation, as would be the case when using SymPy from R directly. We demonstrate how mathematics and statistics can benefit from bridging computer algebra and data via R. This is done through a number of examples, and we propose some topics for small student projects. The caracas package integrates well with, e.g., Rmarkdown, and as such supports the creation of scientific reports and teaching material.

As software becomes increasingly pervasive in critical domains like autonomous driving, new challenges arise, necessitating a rethinking of systems engineering approaches. The gradual takeover of all critical driving functions by autonomous driving adds to the complexity of certifying these systems. Namely, certification procedures do not fully keep pace with the dynamism and unpredictability of future autonomous systems, and they may not fully guarantee compliance with the requirements imposed on these systems. In this paper, we identify several issues with current certification strategies that could pose serious safety risks. As an example, we highlight the inadequate reflection of software changes in constantly evolving systems and the lack of support for the inter-system cooperation necessary for managing coordinated movements. Other shortcomings include the narrow focus of awarded certifications, which neglects aspects such as the ethical behavior of autonomous software systems. The contribution of this paper is threefold. First, we analyze the existing international standards used in certification processes against the requirements derived from dynamic software ecosystems and from autonomous systems themselves, and identify their shortcomings. Second, we outline six suggestions for rethinking certification to foster comprehensive solutions to the identified problems. Third, we introduce a conceptual Multi-Layer Trust Governance Framework to establish a robust governance structure for autonomous ecosystems and associated processes, including envisioned future certification schemes. The framework comprises three layers, which together support the safe and ethical operation of autonomous systems.

Segmenting humans in 3D indoor scenes has become increasingly important with the rise of human-centered robotics and AR/VR applications. To this end, we propose the task of joint 3D human semantic segmentation, instance segmentation and multi-human body-part segmentation. Few works have attempted to directly segment humans in cluttered 3D scenes, which is largely due to the lack of annotated training data of humans interacting with 3D scenes. We address this challenge and propose a framework for generating training data of synthetic humans interacting with real 3D scenes. Furthermore, we propose a novel transformer-based model, Human3D, which is the first end-to-end model for segmenting multiple human instances and their body-parts in a unified manner. The key advantage of our synthetic data generation framework is its ability to generate diverse and realistic human-scene interactions, with highly accurate ground truth. Our experiments show that pre-training on synthetic data improves performance on a wide variety of 3D human segmentation tasks. Finally, we demonstrate that Human3D outperforms even task-specific state-of-the-art 3D segmentation methods.

The sixth-generation (6G) wireless technology recognizes the potential of reconfigurable intelligent surfaces (RIS) as an effective technique for intelligently manipulating channel paths through reflection to serve desired users. Full-duplex (FD) systems, which enable simultaneous transmission and reception at a base station (BS), offer the theoretical advantage of doubled spectral efficiency. However, the strong self-interference (SI) present in FD systems significantly degrades performance; it can be mitigated by leveraging the capabilities of RIS. Moreover, accurately obtaining channel state information (CSI) for RIS poses a critical challenge. Our objective is to maximize downlink (DL) user data rates while ensuring quality-of-service (QoS) for uplink (UL) users under imperfect CSI of the reflected channels. To address this, we introduce the robust active BS and passive RIS beamforming (RAPB) scheme for RIS-FD, accounting for both SI and imperfect CSI. RAPB incorporates distributionally robust design, conditional value-at-risk (CVaR), and penalty convex-concave programming (PCCP) techniques. Additionally, RAPB extends to active and passive beamforming (APB) with perfect channel estimation. Simulation results demonstrate the UL/DL rate improvements achieved under various levels of imperfect CSI. The proposed RAPB/APB schemes demonstrate their effectiveness across different RIS deployments and RIS/BS configurations. Benefiting from robust beamforming, RAPB outperforms existing baselines, including non-robust designs, deployment without RIS, conventional successive convex approximation, and half-duplex systems.
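As background for the risk-aware design, conditional value-at-risk (CVaR) at confidence level alpha is the expected loss in the worst (1 - alpha) tail of the loss distribution. The paper applies CVaR inside a beamforming optimization; the following is only a minimal empirical sketch of the quantity itself, with the function name and the sorting-based estimator chosen here for illustration.

```python
def cvar(losses, alpha):
    """Empirical conditional value-at-risk: the mean of the worst
    (1 - alpha) fraction of the observed losses."""
    s = sorted(losses)
    # Index of the empirical VaR quantile; clamp so the tail is non-empty.
    k = min(int(len(s) * alpha), len(s) - 1)
    tail = s[k:]
    return sum(tail) / len(tail)
```

For example, with losses 1 through 10 and alpha = 0.8, the worst 20% tail is {9, 10}, so the CVaR is 9.5, whereas the plain expectation is 5.5; optimizing CVaR thus hedges against unfavorable channel realizations rather than the average case.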

This report surveys advances in deep learning-based modeling techniques that address four different 3D indoor scene analysis tasks, as well as the synthesis of 3D indoor scenes. We describe different kinds of representations for indoor scenes and various indoor scene datasets available for research in the aforementioned areas, and discuss notable works employing machine learning models for such scene modeling tasks based on these representations. Specifically, we focus on the analysis and synthesis of 3D indoor scenes. With respect to analysis, we focus on four basic scene understanding tasks -- 3D object detection, 3D scene segmentation, 3D scene reconstruction, and 3D scene similarity. For synthesis, we mainly discuss neural scene synthesis works, while also highlighting model-driven methods that allow for human-centric, progressive scene synthesis. We identify the challenges involved in modeling scenes for these tasks and the kind of machinery that needs to be developed to adapt to the data representation and the task setting in general. For each of these tasks, we provide a comprehensive summary of the state-of-the-art works across different axes such as the choice of data representation, backbone, evaluation metric, input, output, etc., providing an organized review of the literature. Toward the end, we discuss some interesting research directions that have the potential to make a direct impact on the way users interact and engage with these virtual scene models, making them an integral part of the metaverse.

Deployment of Internet of Things (IoT) devices and data fusion techniques have gained popularity in public and government domains. This usually requires capturing and consolidating data from multiple sources. Because datasets do not necessarily originate from identical sensors, fused data typically presents a complex data problem. Because the military is investigating how heterogeneous IoT devices can aid its processes and tasks, we investigate a multi-sensor approach. Moreover, we propose a signal-to-image encoding approach that fuses signals from IoT wearable devices into an image that is invertible and easier to visualize, thereby supporting decision making. Furthermore, we investigate the challenge of enabling intelligent identification and detection, and demonstrate the feasibility of the proposed deep learning and anomaly detection models to support future applications that utilize hand-gesture data from wearable devices.
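The abstract does not specify the encoding itself. Purely as a hypothetical illustration of what an invertible signal-to-image transform can look like, one can pack a 1-D signal row-by-row into a 2-D grid, zero-padding the last row, and retain the original length so that decoding recovers the signal exactly; the function names and padding scheme below are assumptions.

```python
import math

def signal_to_image(signal, width):
    """Pack a 1-D signal row-by-row into a 2-D grid (zero-padded).
    Returns the grid plus the original length, so the mapping is invertible."""
    n = len(signal)
    height = math.ceil(n / width)
    padded = list(signal) + [0.0] * (height * width - n)
    image = [padded[r * width:(r + 1) * width] for r in range(height)]
    return image, n

def image_to_signal(image, n):
    """Inverse transform: flatten the grid and drop the padding."""
    flat = [x for row in image for x in row]
    return flat[:n]
```

A fused multi-sensor version would interleave several such grids as channels; the key property preserved here is that the image round-trips to the exact original samples, so no information is lost for downstream detection models.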

Deep neural networks (DNNs) are successful in many computer vision tasks. However, the most accurate DNNs require millions of parameters and operations, making them energy-, computation-, and memory-intensive. This impedes the deployment of large DNNs on low-power devices with limited compute resources. Recent research improves DNN models by reducing the memory requirement, energy consumption, and number of operations without significantly decreasing accuracy. This paper surveys the progress of low-power deep learning and computer vision, specifically with regard to inference, and discusses methods for compacting and accelerating DNN models. The techniques can be divided into four major categories: (1) parameter quantization and pruning, (2) compressed convolutional filters and matrix factorization, (3) network architecture search, and (4) knowledge distillation. We analyze the accuracy, advantages, disadvantages, and potential solutions to the problems with the techniques in each category. We also discuss new evaluation metrics as a guideline for future research.
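As an illustration of the first category, parameter quantization, here is a minimal sketch of uniform affine quantization, which maps floating-point weights to low-bit integer codes plus a scale and offset; the function names and the 8-bit default are assumptions for illustration, not a specific method from the survey.

```python
def quantize_uniform(weights, num_bits=8):
    """Uniform affine quantization: map each float weight to an integer
    code in [0, 2^num_bits - 1] via a shared scale and zero offset."""
    lo, hi = min(weights), max(weights)
    levels = (1 << num_bits) - 1
    scale = (hi - lo) / levels if hi > lo else 1.0
    codes = [round((w - lo) / scale) for w in weights]
    return codes, scale, lo

def dequantize(codes, scale, lo):
    """Recover approximate float weights from the integer codes."""
    return [c * scale + lo for c in codes]
```

Each weight is then stored in num_bits instead of 32 bits, trading a bounded rounding error (at most about one quantization step) for a 4x or larger reduction in memory and cheaper integer arithmetic at inference time.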

Within the rapidly developing Internet of Things (IoT), numerous and diverse physical devices, edge devices, cloud infrastructure, and their Quality of Service (QoS) requirements need to be represented within a unified specification in order to enable rapid IoT application development, monitoring, and dynamic reconfiguration. However, heterogeneity among different configuration knowledge representation models limits the acquisition, discovery, and curation of configuration knowledge for coordinated IoT applications. This paper proposes a unified data model to represent IoT resource configuration knowledge artifacts. It also proposes IoT-CANE (Context-Aware recommendatioN systEm) to facilitate incremental knowledge acquisition and declarative, context-driven knowledge recommendation.
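The abstract does not detail the unified data model. Purely as a hypothetical sketch, such a model might represent every resource, whether a physical device, an edge node, or cloud infrastructure, as one record carrying its layer, capabilities, and QoS attributes, over which a recommender can then match application requirements; all names below are invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class IoTResource:
    """Hypothetical unified record for one IoT resource's configuration knowledge."""
    resource_id: str
    layer: str                                   # e.g. "device", "edge", or "cloud"
    capabilities: dict = field(default_factory=dict)
    qos: dict = field(default_factory=dict)      # e.g. {"latency_ms": 50}

def find_matching(resources, **qos_bounds):
    """Return resources whose QoS attributes meet every requested upper bound;
    a resource missing an attribute is treated as failing that bound."""
    return [r for r in resources
            if all(r.qos.get(k, float("inf")) <= v for k, v in qos_bounds.items())]
```

Representing all three layers with one schema is what would let a recommendation system like the proposed IoT-CANE query device, edge, and cloud configurations uniformly instead of per-vendor.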
