
The increasing demand for video streaming services is the key driver of modern wireless and mobile communications. For robust, high-quality delivery of video content over wireless and mobile networks, the main challenge is sending image and video signals to single and multiple users over unstable and diverse channel environments. To this end, many studies have designed digital-based video delivery schemes, which mainly consist of a sequence of digital coding and transmission operations. Although digital-based schemes perform well when the channel characteristics are known in advance, significant quality degradation, known as the cliff and leveling effects, often occurs owing to fluctuating channel characteristics. To prevent cliff and leveling effects irrespective of the channel characteristics of each user, a new paradigm for wireless and mobile video streaming has been proposed. Soft delivery schemes skip the digital operations of quantization, entropy coding, and channel coding, and instead directly map power-assigned frequency-domain coefficients onto transmission symbols. This design exploits the fact that the pixel distortion due to communication noise is proportional to the magnitude of the noise, yielding graceful quality improvement: quality improves gradually with the wireless channel quality, without any cliff or leveling effects. Herein, we present a comprehensive summary of soft delivery schemes.
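As a rough illustration of the analog mapping described above, the following sketch power-scales DCT coefficients and decodes with a linear estimator, in the spirit of SoftCast-style soft delivery. The per-coefficient variance proxy, the power budget, and all constants are illustrative assumptions, not a specific scheme from the surveyed literature.

```python
# Illustrative soft-delivery (SoftCast-style) pipeline: no quantization or
# entropy coding; DCT coefficients are power-scaled and sent as analog symbols.
import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(0)
image = rng.normal(size=(8, 8)) * np.linspace(10, 1, 8)  # toy "image" block

coeffs = dctn(image, norm="ortho")    # frequency-domain coefficients
lam = coeffs**2 + 1e-12               # per-coefficient power (proxy for chunk variance)

# SoftCast-style power allocation: g_i proportional to lam_i^(-1/4),
# scaled to a total power budget P.
P = coeffs.size
g = lam**-0.25
g *= np.sqrt(P / np.sum(g**2 * lam))
symbols = g * coeffs                  # analog transmission symbols

noise_var = 0.1
received = symbols + rng.normal(scale=np.sqrt(noise_var), size=symbols.shape)

# Linear least-squares estimator (LLSE) at the receiver.
decoded_coeffs = (g * lam) / (g**2 * lam + noise_var) * received
recovered = idctn(decoded_coeffs, norm="ortho")

print("MSE:", np.mean((recovered - image) ** 2))
```

Because the decoder is linear, the reconstruction error shrinks smoothly as the noise variance decreases, which is exactly the graceful degradation the abstract contrasts with cliff and leveling effects.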

Related Content

The extreme or maximum age of information (AoI) is analytically studied for wireless communication systems. In particular, a wireless-powered single-antenna source node and a receiver (connected to the power grid) equipped with multiple antennas are considered, operating over independent Rayleigh-faded channels. Via extreme value theory and its corresponding statistical features, we demonstrate that the extreme AoI converges to the Gumbel distribution, with its parameters obtained in straightforward closed-form expressions. Capitalizing on this result, the risk of extreme AoI realizations is analytically evaluated according to some relevant performance metrics, and some useful engineering insights are drawn.
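To make the extreme-value claim concrete, the toy simulation below draws block maxima from a simple sawtooth AoI model and compares their tail against a method-of-moments Gumbel fit. The AoI model and all parameters are illustrative assumptions; the paper's closed-form parameters are not reproduced here.

```python
# Toy illustration: block maxima of a simulated AoI process compared
# against a Gumbel fit. All parameters are illustrative.
import numpy as np

rng = np.random.default_rng(1)

def peak_aoi_samples(n, rate=1.0):
    # Sawtooth AoI with exponential inter-update times: each peak equals the
    # time since the previous successful update (toy model, not the paper's).
    return rng.exponential(1.0 / rate, size=n)

block, n_blocks = 200, 5000
maxima = peak_aoi_samples(block * n_blocks).reshape(n_blocks, block).max(axis=1)

# Method-of-moments Gumbel fit: var = pi^2 beta^2 / 6, mean = mu + 0.5772 beta.
beta = np.sqrt(6 * maxima.var()) / np.pi
mu = maxima.mean() - 0.5772 * beta

# Compare empirical and fitted tail probabilities at a few thresholds.
for t in np.quantile(maxima, [0.9, 0.99]):
    emp = np.mean(maxima > t)
    fit = 1 - np.exp(-np.exp(-(t - mu) / beta))
    print(f"threshold {t:.2f}: empirical {emp:.4f}, Gumbel fit {fit:.4f}")
```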

User dissatisfaction due to buffering pauses during streaming is a significant cost to the system, which we model as a non-decreasing function of the frequency of buffering pauses. Minimization of total user dissatisfaction in a multi-channel cellular network leads to a non-convex problem. Utilizing a combinatorial structure in this problem, we first propose a polynomial-time joint admission control and channel allocation algorithm which is provably (almost) optimal. This scheme assumes that the base station (BS) knows the frame statistics of the streams. In a more practical setting, where these statistics are not available a priori at the BS, a learning-based scheme with provable guarantees is developed. This learning-based scheme is related to regret minimization in multi-armed bandits with non-i.i.d. and delayed rewards (costs). All these algorithms require no or minimal feedback from the user equipment to the base station regarding the state of the media player buffer at the application layer, and hence are of practical interest.
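Since the learning-based scheme is related to regret minimization in multi-armed bandits, the following sketch shows a generic UCB1 channel selector as a point of reference. It deliberately omits the non-i.i.d. and delayed costs the paper handles, and the channel success probabilities are hypothetical.

```python
# Generic UCB1 sketch for per-slot channel selection; the paper's scheme
# additionally handles non-i.i.d. and delayed costs, which this toy omits.
import numpy as np

rng = np.random.default_rng(2)
true_success = np.array([0.3, 0.6, 0.8])  # hypothetical per-channel delivery probabilities
n_channels = len(true_success)
counts = np.zeros(n_channels)
rewards = np.zeros(n_channels)

for t in range(1, 5001):
    if t <= n_channels:
        arm = t - 1                        # play each channel once
    else:
        ucb = rewards / counts + np.sqrt(2 * np.log(t) / counts)
        arm = int(np.argmax(ucb))
    r = float(rng.random() < true_success[arm])  # 1 if the frame arrives in time
    counts[arm] += 1
    rewards[arm] += r

print("empirical means:", np.round(rewards / counts, 3))
print("plays per channel:", counts.astype(int))
```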

This paper investigates the performance of streaming codes in low-latency applications over a multi-link three-node relayed network. The source wishes to transmit a sequence of messages to the destination through a relay. Each message must be reconstructed after a fixed decoding delay. The special case with one link connecting each node has been studied by Fong et al. [1], and a multi-hop multi-link setting has been studied by Domanovitz et al. [2]. The topology with three nodes and multiple links is studied in this paper. Each link is subject to a different number of erasures due to different channel conditions. An information-theoretic upper bound is derived, and an achievable scheme is presented. The proposed scheme judiciously allocates rates to each link based on the concept of the delay spectrum. The achievable scheme is compared to two baseline schemes and the scheme proposed in [2]. Experimental results show that the proposed scheme achieves higher rates than the other schemes and can achieve the upper bound even in non-trivial scenarios. The scheme is further extended to handle different propagation delays on each link, something not previously considered in the literature. Simulations over statistical channels show that the proposed scheme can outperform the simpler baselines under practical models.
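As a simplified illustration of rate allocation over a relayed path, the sketch below splits the end-to-end delay between the two hops and applies the commonly cited single-link streaming capacity (T - N + 1)/(T + 1) for decoding delay T and up to N erasures. This is only a caricature of the paper's delay-spectrum allocation; the one-slot relay processing delay and the exhaustive split search are assumptions for illustration.

```python
# Toy delay-split search for a three-node relayed path: the relay re-encodes,
# so the total delay T is split as T1 + T2 + 1 between the two hops (the +1
# models a one-slot relay processing delay; this is a simplification, not the
# paper's delay-spectrum scheme).
def point_to_point_rate(T, N):
    # Commonly cited streaming capacity for delay T and up to N erasures.
    return max(T - N + 1, 0) / (T + 1)

def best_split(T, N1, N2):
    best = (0.0, None)
    for T1 in range(1, T - 1):
        T2 = T - 1 - T1
        rate = min(point_to_point_rate(T1, N1), point_to_point_rate(T2, N2))
        if rate > best[0]:
            best = (rate, (T1, T2))
    return best

rate, split = best_split(T=12, N1=2, N2=3)
print(f"best achievable rate {rate:.3f} with delay split {split}")
```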

This paper introduces a new theoretical framework for optimizing second-order behaviors of wireless networks. Unlike existing techniques for network utility maximization, which only consider first-order statistics, this framework models every random process by its mean and temporal variance. The inclusion of temporal variance makes this framework well suited for modeling stateful fading wireless channels and emerging network performance metrics such as age of information (AoI). Using this framework, we sharply characterize the second-order capacity region of wireless access networks. We also propose a simple scheduling policy and prove that it can achieve every interior point in the second-order capacity region. To demonstrate the utility of this framework, we apply it to an important open problem: the optimization of AoI over Gilbert-Elliott channels. We show that this framework provides a very accurate characterization of AoI. Moreover, it leads to a tractable scheduling policy that outperforms other existing work.
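The sketch below simulates AoI over a two-state Gilbert-Elliott channel and reports the two statistics the framework models, the mean and the temporal variance. Transition and delivery probabilities are illustrative assumptions, not values from the paper.

```python
# Simulate AoI over a two-state Gilbert-Elliott channel and report the first-
# and second-order statistics (mean, temporal variance) the framework models.
import numpy as np

rng = np.random.default_rng(3)
p_gb, p_bg = 0.1, 0.3        # good->bad and bad->good transition probabilities
succ = {0: 0.9, 1: 0.2}      # delivery probability in good (0) / bad (1) state

state, aoi, trace = 0, 1, []
for _ in range(100_000):
    if rng.random() < succ[state]:
        aoi = 1               # fresh update delivered this slot
    else:
        aoi += 1
    trace.append(aoi)
    flip = p_gb if state == 0 else p_bg
    state = 1 - state if rng.random() < flip else state

trace = np.array(trace)
print(f"mean AoI {trace.mean():.3f}, temporal variance {trace.var():.3f}")
```

The channel's memory is what makes the temporal variance informative: long bad-state sojourns inflate the variance of the AoI process even when its mean looks benign.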

Reconfigurable intelligent surface (RIS) has attracted attention from academia and industry since its emergence because it can flexibly manipulate the electromagnetic characteristics of the wireless channel. Especially in the past one or two years, RIS has developed rapidly in academic research and industry promotion and is one of the key candidate technologies for 5G-Advanced and 6G networks. RIS can build a smart radio environment through its ability to regulate radio-wave propagation in a flexible way. The introduction of RIS may create a new network paradigm, which brings new possibilities to future networks but also leads to many new challenges in technological and engineering applications. This paper first introduces the main aspects of RIS-enabled wireless communication networks from a new perspective, and then focuses on the key challenges raised by the introduction of RIS. It briefly summarizes the main engineering application challenges faced by RIS networks, further analyzes and discusses several key technical challenges among them in depth, such as channel degradation, network coexistence, and network deployment, and proposes possible solutions.
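For context, a standard narrowband model of an RIS-aided link from the broader literature is shown below; the notation is generic and not necessarily this paper's. The cascaded path loss it implies, scaling with the product of the two hop distances, is one root of the channel-degradation challenge mentioned above.

```latex
% Standard narrowband RIS-aided link (generic notation, not this paper's):
% the received signal combines the RIS-reflected path and the direct path.
y = \left( \mathbf{h}_r^{\mathsf{H}} \, \boldsymbol{\Theta} \, \mathbf{h}_t + h_d \right) x + n,
\qquad
\boldsymbol{\Theta} = \operatorname{diag}\!\left( e^{j\theta_1}, \ldots, e^{j\theta_N} \right)
% The reflected path's large-scale gain scales as (d_1 d_2)^{-\alpha},
% i.e., multiplicatively in the two hop distances, rather than (d_1 + d_2)^{-\alpha}.
```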

Mobile cloud gaming enables high-end games on constrained devices by streaming the game content from powerful servers through mobile networks. Mobile networks suffer from highly variable bandwidth, latency, and losses that affect the gaming experience. This paper introduces Nebula, an end-to-end cloud gaming framework that minimizes the impact of network conditions on the user experience. Nebula relies on an end-to-end distortion model that adapts the video source rate and the amount of frame-level redundancy based on the measured network conditions. As a result, it minimizes the motion-to-photon (MTP) latency while protecting the frames from losses. We fully implement Nebula and evaluate its performance against state-of-the-art techniques and the latest research in real-time mobile cloud gaming transmission on a physical testbed over emulated and real wireless networks. Nebula consistently balances MTP latency (<140 ms) and visual quality (>31 dB) even in highly variable environments. A user experiment confirms that Nebula maximizes the user experience with high perceived video quality, playability, and low user load.
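The toy function below captures the flavor of adapting the source rate and frame-level redundancy to measured conditions: it minimizes a crude expected-distortion proxy under a bandwidth budget. The distortion proxy, candidate rates, and idealized MDS-style FEC are all illustrative assumptions, not Nebula's actual end-to-end distortion model.

```python
# Toy rate/redundancy adaptation in the spirit of Nebula: choose a source rate
# and frame-level FEC ratio minimizing an expected-distortion proxy under a
# measured bandwidth and loss rate. The model and constants are illustrative.
from math import comb

def frame_loss_prob(loss, k, r):
    # A frame of k data packets plus r repair packets survives if at most r
    # of the k + r packets are lost (idealized MDS-style FEC).
    n = k + r
    return 1 - sum(comb(n, i) * loss**i * (1 - loss)**(n - i) for i in range(r + 1))

def choose(bandwidth_kbps, loss, k=10):
    best = None
    for rate in (1000, 2000, 4000, 8000):      # candidate source rates (kbps)
        for r in range(0, 6):                   # repair packets per frame
            total = rate * (k + r) / k          # source rate plus FEC overhead
            if total > bandwidth_kbps:
                continue
            distortion = 1.0 / rate + frame_loss_prob(loss, k, r)
            if best is None or distortion < best[0]:
                best = (distortion, rate, r)
    return best

print(choose(bandwidth_kbps=6000, loss=0.05))
```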

Phobia is a widespread mental illness, and severe phobias can seriously impact patients' daily lives. One-session exposure treatment (OST) has long been used to treat phobias, but it has many disadvantages. As a new way to treat phobias, virtual reality exposure therapy (VRET) based on serious games has been introduced. There has been much research in the field of serious games for phobia therapy (SGPT), so this paper presents a detailed review of SGPT from three perspectives. First, SGPT has taken different forms at different stages as technology has been updated and iterated, so we review the development history of SGPT from the perspective of equipment. Second, there is no unified classification framework for the large body of SGPT work, so we classify and organize SGPT according to different types of phobias. Finally, most articles on SGPT have studied the therapeutic effects of serious games from a medical perspective, and few have studied serious games from a technical perspective. We therefore conduct in-depth research on SGPT from a technical perspective in order to provide technical guidance for its development. Accordingly, the challenges facing the existing technology are explored and listed.

Deep neural models in recent years have been successful in almost every field, including extremely complex problem statements. However, these models are huge in size, with millions (and even billions) of parameters, thus demanding heavy computational power and failing to be deployed on edge devices. Moreover, the performance boost is highly dependent on abundant labeled data. To achieve faster speeds and to handle the problems caused by the lack of data, knowledge distillation (KD) has been proposed to transfer information learned from one model to another. KD is often characterized by the so-called `Student-Teacher' (S-T) learning framework and has been broadly applied in model compression and knowledge transfer. This paper is about KD and S-T learning, which have been actively studied in recent years. First, we aim to provide explanations of what KD is and how/why it works. Then, we provide a comprehensive survey on the recent progress of KD methods together with S-T frameworks, typically for vision tasks. In general, we consider some fundamental questions that have been driving this research area and thoroughly generalize the research progress and technical details. Additionally, we systematically analyze the research status of KD in vision applications. Finally, we discuss the potentials and open challenges of existing methods and prospect the future directions of KD and S-T learning.
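The canonical soft-target distillation loss (in the style of Hinton et al.) that underlies the S-T framework can be sketched as follows. This is the standard formulation rather than any particular surveyed method, and the temperature and mixing weight are illustrative.

```python
# Canonical soft-target knowledge-distillation loss: a temperature-softened
# KL term between teacher and student logits plus the usual cross-entropy
# on ground-truth labels.
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)                 # T^2 keeps soft/hard gradient scales comparable
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

# Toy usage with random logits for a 10-class problem.
s = torch.randn(8, 10, requires_grad=True)
t = torch.randn(8, 10)
y = torch.randint(0, 10, (8,))
print(kd_loss(s, t, y))
```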

Transformers have achieved great success in many artificial intelligence fields, such as natural language processing, computer vision, and audio processing, and have therefore attracted great interest from academic and industrial researchers. Up to the present, a great variety of Transformer variants (a.k.a. X-formers) have been proposed; however, a systematic and comprehensive literature review of these variants is still missing. In this survey, we provide a comprehensive review of various X-formers. We first briefly introduce the vanilla Transformer and then propose a new taxonomy of X-formers. Next, we introduce the various X-formers from three perspectives: architectural modification, pre-training, and applications. Finally, we outline some potential directions for future research.
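For reference, the scaled dot-product attention of the vanilla Transformer, the primitive most X-formers modify, can be sketched as follows; this is a minimal NumPy version, and the dimensions are illustrative.

```python
# Scaled dot-product attention from the vanilla Transformer:
# Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V.
import numpy as np

def attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # (n_q, n_k) similarity logits
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # numerically stable row softmax
    return weights @ V

rng = np.random.default_rng(4)
Q, K, V = (rng.normal(size=(5, 16)) for _ in range(3))
print(attention(Q, K, V).shape)  # (5, 16)
```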

Driven by the visions of the Internet of Things and 5G communications, edge computing systems integrate computing, storage, and network resources at the edge of the network to provide computing infrastructure, enabling developers to quickly develop and deploy edge applications. Edge computing systems have now received widespread attention in both industry and academia. To explore new research opportunities and assist users in selecting suitable edge computing systems for specific applications, this survey paper provides a comprehensive overview of the existing edge computing systems and introduces representative projects. A comparison of open-source tools is presented according to their applicability. Finally, we highlight energy efficiency and deep learning optimization of edge computing systems. Open issues in analyzing and designing an edge computing system are also studied in this survey.
