
Contention-based wireless channel access methods like CSMA and ALOHA paved the way for the rise of the Internet of Things in industrial applications (IIoT). However, to cope with increasing demands for reliability and throughput, several mostly TDMA-based protocols like IEEE 802.15.4 and its extensions were proposed. Nonetheless, many of these IIoT protocols still require contention-based communication, e.g., for slot allocation and broadcast transmission. In many cases, subtle but hidden patterns characterize this secondary traffic. Present contention-based protocols are unaware of these hidden patterns and therefore cannot exploit this information. Especially in dense networks, they often do not provide sufficient reliability for primary traffic, e.g., they are unable to allocate transmission slots in time. In this paper, we propose QMA, a contention-based multiple access scheme based on Q-learning, which dynamically adapts transmission times to avoid collisions by learning patterns in the contention-based traffic. QMA is designed to be resource-efficient and targets small embedded devices. We show that QMA solves the hidden node problem without the additional overhead of RTS/CTS messages and verify the behaviour of QMA in the FIT IoT-LAB testbed. Finally, QMA's scalability is studied by simulation, where it is used for GTS allocation in IEEE 802.15.4 DSME. Results show that QMA considerably increases reliability and throughput in comparison to CSMA/CA, especially in networks with a high load.
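To make the core mechanism concrete, here is a minimal, hypothetical sketch of Q-learning-based slot selection in the spirit of QMA (not the authors' implementation): a node keeps one value per contention slot, picks slots epsilon-greedily, and rewards itself from ACK feedback. The slot count, reward values, and learning parameters are assumptions.

```python
import random

# Minimal sketch (assumed parameters): learn which contention slot to use,
# rewarded by ACK feedback. Treated here as a stateless, bandit-style
# simplification of the Q-learning update.
NUM_SLOTS = 8            # assumed number of contention slots
ALPHA, EPS = 0.1, 0.1    # learning rate, exploration probability

q = [0.0] * NUM_SLOTS    # one value per candidate slot

def choose_slot():
    """Epsilon-greedy slot selection."""
    if random.random() < EPS:
        return random.randrange(NUM_SLOTS)
    return max(range(NUM_SLOTS), key=lambda s: q[s])

def update(slot, ack_received):
    """Reward +1 for an acknowledged transmission, -1 on a presumed collision."""
    reward = 1.0 if ack_received else -1.0
    q[slot] += ALPHA * (reward - q[slot])

# usage: slot = choose_slot(); transmit in that slot; then update(slot, ack)
```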

Related content

iOS 8 provides features for functional interaction between apps, and between apps and the system:
  • Today (iOS and OS X): widgets for the Today view of Notification Center
  • Share (iOS and OS X): post content to web services or share content with others
  • Actions (iOS and OS X): app extensions to view or manipulate content inside another app
  • Photo Editing (iOS): edit a photo or video in Apple's Photos app with extensions from third-party apps
  • Finder Sync (OS X): remote file storage in the Finder with support for Finder content annotation
  • Storage Provider (iOS): an interface between files inside an app and other apps on a user's device
  • Custom Keyboard (iOS): system-wide alternative keyboards

We study broadcasting on multiple-access channels under adversarial packet injection, where packet injection is modeled by leaky-bucket adversaries. There is a fixed set of stations attached to a channel. Additional constraints on the model include bounds on the number of stations activated in a round, individual injection rates, and randomness in generating and injecting packets. The broadcast algorithms we concentrate on are deterministic and distributed. We demonstrate that some broadcast algorithms designed for ad-hoc channels have bounded latency for wider ranges of injection rates when executed on channels with a fixed number of stations against adversaries that can activate at most one station per round. Individual injection rates are shown to impact latency, as compared to the model of general leaky-bucket adversaries. We also report outcomes of experiments that compare the performance of broadcast algorithms, including randomized backoff algorithms, against randomized adversaries.
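As a loose illustration of this experimental setting only (entirely hypothetical parameters, not the paper's algorithms or adversary model), the sketch below simulates stations running randomized binary exponential backoff on a slotted channel while an injection process adds at most one packet per round at a fixed rate.

```python
import random

# Sketch (assumptions: slot-synchronous channel, binary exponential backoff,
# Bernoulli injection approximating a rate-RHO adversary activating at most
# one station per round). Counts deliveries and the remaining backlog.
NUM_STATIONS, RHO, ROUNDS = 8, 0.1, 10_000

queues = [0] * NUM_STATIONS     # packets pending per station
windows = [1] * NUM_STATIONS    # current backoff window per station
delivered = 0

for _ in range(ROUNDS):
    # injection: at most one packet per round, with probability RHO
    if random.random() < RHO:
        queues[random.randrange(NUM_STATIONS)] += 1
    # each backlogged station transmits with probability 1/window
    transmitters = [i for i in range(NUM_STATIONS)
                    if queues[i] > 0 and random.random() < 1.0 / windows[i]]
    if len(transmitters) == 1:          # success: exactly one transmission
        i = transmitters[0]
        queues[i] -= 1
        windows[i] = 1
        delivered += 1
    elif len(transmitters) > 1:         # collision: double the windows
        for i in transmitters:
            windows[i] = min(windows[i] * 2, 1024)

print(f"delivered {delivered} packets, backlog {sum(queues)}")
```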

Grant-free non-coherent index modulation (NC-IM) has recently been considered as an efficient massive access scheme for enabling cost- and energy-limited Internet-of-Things (IoT) devices with small data packets. This paper investigates grant-free NC-IM combined with orthogonal frequency division multiplexing for unmanned aerial vehicle (UAV)-based massive IoT access. Specifically, each device is assigned a unique non-orthogonal signature sequence codebook. Each active device transmits one of its signature sequences in the given time-frequency resources, modulating its information in the index of the transmitted signature sequence. When the UAV-based aerial base station (BS) is equipped with small-scale MIMO, by jointly exploiting the space-time-frequency domain device activity, we propose a computationally efficient space-time-frequency joint activity and blind information detection (JABID) algorithm with significantly improved detection performance. Furthermore, when the aerial BS is equipped with large-scale MIMO, by leveraging the sparsity of the virtual angular-domain channels, we propose an angular-domain based JABID algorithm for improving the system performance with reduced access latency. In addition, for the case of high-mobility IoT devices and/or UAVs, we introduce a time-frequency spread transmission (TFST) strategy for the proposed JABID algorithms to combat doubly-selective fading channels. Finally, extensive simulation results are presented to verify the superiority of our proposed algorithms and the TFST strategy over known state-of-the-art algorithms.
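For intuition only, here is a toy sketch of the index-modulation principle described above, under assumed codebook sizes and a flat-fading channel (it is not the proposed JABID detector): information bits select one of a device's signature sequences, and the receiver recovers the index non-coherently from correlation energies.

```python
import numpy as np

# Toy sketch (assumed codebook sizes): each active device conveys log2(M)
# bits by choosing one of its M signature sequences; the receiver detects the
# index from correlation energies, without channel state information.
rng = np.random.default_rng(0)
M, L = 4, 16                          # sequences per device, sequence length
codebook = rng.standard_normal((M, L)) + 1j * rng.standard_normal((M, L))
codebook /= np.linalg.norm(codebook, axis=1, keepdims=True)

def transmit(bits):
    index = int("".join(map(str, bits)), 2)     # bits -> sequence index
    return codebook[index], index

def detect(y):
    # non-coherent detection: pick the index with the largest |<c_m, y>|^2
    energies = np.abs(codebook.conj() @ y) ** 2
    return int(np.argmax(energies))

x, idx = transmit([1, 0])
h = rng.standard_normal() + 1j * rng.standard_normal()          # flat fading
y = h * x + 0.05 * (rng.standard_normal(L) + 1j * rng.standard_normal(L))
print("sent index:", idx, "detected index:", detect(y))
```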

With the growing deployment of Internet of Things (IoT) and machine learning (ML) applications, which need to leverage computation on edge and cloud resources, it is important to develop algorithms and tools to place these distributed computations so as to optimize their performance. We address the problem of optimally placing computations (described as directed acyclic graphs (DAGs)) on a set of machines to maximize the steady-state throughput for pipelined inputs. Traditionally, such optimization has focused on a different metric, minimizing single-shot makespan, and a well-known algorithm is the Heterogeneous Earliest Finish Time (HEFT) algorithm. Maximizing throughput, however, is more suitable for many real-time, edge, cloud, and IoT applications, so we present a different scheduling algorithm, namely Throughput HEFT (TPHEFT). Further, we present two throughput-oriented enhancements which can be applied to any baseline schedule, referred to as "node splitting" (SPLIT) and "task duplication" (DUP). In order to implement and evaluate these algorithms, we built new subsystems and plugins for an open-source dispersed computing framework called Jupiter. Experiments with varying DAG structures indicate that: 1) TPHEFT can significantly improve throughput performance compared to HEFT (up to 2.3 times in our experiments), with greater gains when the DAG has a lower degree of parallelism; 2) node splitting can potentially improve performance over a baseline schedule, with greater gains when there is an imbalanced allocation of computation or inter-task communication; and 3) task duplication generally gives improvements only when running upon a baseline that places communication over slow links. To our knowledge, this is the first study to present a systematic experimental implementation and exploration of throughput-enhancing techniques for dispersed computing on real testbeds.
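To ground the throughput metric, here is a small, hypothetical example (made-up tasks, machines, and bandwidth; not the Jupiter or TPHEFT code) that estimates the steady-state throughput of a pipelined DAG placement as the reciprocal of the bottleneck resource's per-input work.

```python
# Sketch (hypothetical DAG and placement): for pipelined inputs, throughput is
# limited by the busiest resource, so we approximate it as
# 1 / max(per-input work on any machine, per-input traffic on the shared link).
compute_time = {            # task -> {machine: seconds per input}
    "a": {"m1": 2.0, "m2": 3.0},
    "b": {"m1": 4.0, "m2": 2.5},
    "c": {"m1": 1.0, "m2": 1.5},
}
edges = {("a", "b"): 1.0, ("b", "c"): 0.5}   # data transferred per input (MB)
bandwidth = 2.0                               # MB/s between different machines
placement = {"a": "m1", "b": "m2", "c": "m2"}

def pipeline_throughput(placement):
    machine_load = {}
    for task, machine in placement.items():
        machine_load[machine] = machine_load.get(machine, 0.0) + compute_time[task][machine]
    # only edges crossing machines consume link time
    link_load = sum(size / bandwidth for (u, v), size in edges.items()
                    if placement[u] != placement[v])
    bottleneck = max(max(machine_load.values()), link_load)
    return 1.0 / bottleneck

print(f"throughput: {pipeline_throughput(placement):.3f} inputs/s")
```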

Due to the scarcity of wireless spectrum and limited energy resources, especially in mobile applications, efficient resource allocation strategies are critical in wireless networks. Motivated by recent advances in deep reinforcement learning (DRL), we address multi-agent DRL-based joint dynamic channel access and power control in a wireless interference network. We first propose a multi-agent DRL algorithm with centralized training (DRL-CT) to tackle the joint resource allocation problem. In this case, training is performed at the central unit (CU) and, after training, the users make autonomous decisions on their transmission strategies with only local information. We demonstrate that with limited information exchange and faster convergence, the DRL-CT algorithm can achieve 90% of the performance achieved by the combination of the weighted minimum mean square error (WMMSE) algorithm for power control and exhaustive search for dynamic channel access. In the second part of this paper, we consider a distributed multi-agent DRL scenario in which each user conducts its own training and makes its decisions individually, acting as a DRL agent. Finally, as a compromise between the centralized and fully distributed scenarios, we consider federated DRL (FDRL) to approach the performance of DRL-CT with the use of a central unit in training while limiting the information exchange and preserving the privacy of the users in the wireless system. Via simulation results, we show that the proposed learning frameworks lead to efficient adaptive channel access and power control policies in dynamic environments.
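As a rough sketch of how a per-user agent could represent this joint decision (assumed state features, layer sizes, and action counts; not the paper's network), the example below encodes the channel choice and power level as a single discrete action of a small Q-network and selects it epsilon-greedily.

```python
import torch
import torch.nn as nn

# Sketch (assumed sizes): a per-user DQN whose discrete action jointly encodes
# a channel choice and a power level, in the spirit of joint dynamic channel
# access and power control; state features and dimensions are placeholders.
NUM_CHANNELS, NUM_POWER_LEVELS, STATE_DIM = 4, 5, 10
NUM_ACTIONS = NUM_CHANNELS * NUM_POWER_LEVELS

q_net = nn.Sequential(
    nn.Linear(STATE_DIM, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, NUM_ACTIONS),
)

def act(state, eps=0.1):
    """Epsilon-greedy selection of the joint (channel, power) action."""
    if torch.rand(1).item() < eps:
        a = torch.randint(NUM_ACTIONS, (1,)).item()
    else:
        with torch.no_grad():
            a = q_net(state).argmax().item()
    return a // NUM_POWER_LEVELS, a % NUM_POWER_LEVELS  # (channel, power idx)

channel, power = act(torch.zeros(STATE_DIM))
```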

Over the past few years, we have seen fundamental breakthroughs in core problems in machine learning, largely driven by advances in deep neural networks. At the same time, the amount of data collected in a wide array of scientific domains is dramatically increasing in both size and complexity. Taken together, this suggests many exciting opportunities for deep learning applications in scientific settings. But a significant challenge to this is simply knowing where to start. The sheer breadth and diversity of different deep learning techniques makes it difficult to determine what scientific problems might be most amenable to these methods, or which specific combination of methods might offer the most promising first approach. In this survey, we focus on addressing this central issue, providing an overview of many widely used deep learning models, spanning visual, sequential and graph structured data, associated tasks and different training methods, along with techniques to use deep learning with less data and better interpret these complex models --- two central considerations for many scientific use cases. We also include overviews of the full design process, implementation tips, and links to a plethora of tutorials, research summaries and open-sourced deep learning pipelines and pretrained models, developed by the community. We hope that this survey will help accelerate the use of deep learning across different scientific domains.

Convolutional neural networks (CNNs) have shown dramatic improvements in single image super-resolution (SISR) by using large-scale external samples. Despite their remarkable performance on external datasets, they cannot exploit internal information within a specific image. Another problem is that they are applicable only to the specific data conditions under which they were supervised. For instance, the low-resolution (LR) image should be a "bicubic" downsampled noise-free image from a high-resolution (HR) one. To address both issues, zero-shot super-resolution (ZSSR) has been proposed for flexible internal learning. However, it requires thousands of gradient updates, i.e., a long inference time. In this paper, we present Meta-Transfer Learning for Zero-Shot Super-Resolution (MZSR), which leverages ZSSR. Precisely, it is based on finding a generic initial parameter that is suitable for internal learning. Thus, we can exploit both external and internal information, where a single gradient update can yield quite considerable results (see Figure 1). With our method, the network can quickly adapt to a given image condition. In this respect, our method can be applied to a large spectrum of image conditions within a fast adaptation process.
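The following is a simplified sketch of the test-time idea only (stand-in network, placeholder image, and a single SGD step; not the authors' MZSR code): starting from a meta-learned initialization, adapt on an internal pair built from the LR image itself, then super-resolve that image.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Simplified sketch of zero-shot adaptation at test time (assumed sizes):
# one gradient step on the internal pair (downscaled LR -> LR), then apply
# the adapted network to the bicubically upscaled LR image.
net = nn.Sequential(                      # stand-in for a meta-learned model
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1),
)
lr_image = torch.rand(1, 1, 64, 64)       # placeholder low-resolution input

# build the internal training pair: further-downscaled "son" -> LR image
son = F.interpolate(lr_image, scale_factor=0.5, mode="bicubic", align_corners=False)
son_up = F.interpolate(son, size=lr_image.shape[-2:], mode="bicubic", align_corners=False)

opt = torch.optim.SGD(net.parameters(), lr=0.01)
loss = F.l1_loss(net(son_up), lr_image)   # single inner-loop adaptation step
opt.zero_grad(); loss.backward(); opt.step()

# super-resolve: upscale the LR image and refine it with the adapted network
sr = net(F.interpolate(lr_image, scale_factor=2, mode="bicubic", align_corners=False))
```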

Time Series Classification (TSC) is an important and challenging problem in data mining. With the increasing availability of time series data, hundreds of TSC algorithms have been proposed. Among these methods, only a few have considered Deep Neural Networks (DNNs) to perform this task. This is surprising as deep learning has seen very successful applications in recent years. DNNs have indeed revolutionized the field of computer vision, especially with the advent of novel deeper architectures such as Residual and Convolutional Neural Networks. Apart from images, sequential data such as text and audio can also be processed with DNNs to reach state-of-the-art performance for document classification and speech recognition. In this article, we study the current state-of-the-art performance of deep learning algorithms for TSC by presenting an empirical study of the most recent DNN architectures for TSC. We give an overview of the most successful deep learning applications in various time series domains under a unified taxonomy of DNNs for TSC. We also provide an open source deep learning framework to the TSC community where we implemented each of the compared approaches and evaluated them on a univariate TSC benchmark (the UCR/UEA archive) and 12 multivariate time series datasets. By training 8,730 deep learning models on 97 time series datasets, we propose the most exhaustive study of DNNs for TSC to date.
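As one concrete example of the kind of architecture such benchmarks compare (the layer sizes here are assumptions, not a specific model from the study), below is a small fully convolutional network for univariate time series classification with global average pooling over time.

```python
import torch
import torch.nn as nn

# Sketch of a fully convolutional network for univariate TSC (assumed sizes):
# stacked 1D convolutions, global average pooling over time, linear classifier.
class FCN(nn.Module):
    def __init__(self, num_classes):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 128, kernel_size=8, padding=4), nn.BatchNorm1d(128), nn.ReLU(),
            nn.Conv1d(128, 256, kernel_size=5, padding=2), nn.BatchNorm1d(256), nn.ReLU(),
            nn.Conv1d(256, 128, kernel_size=3, padding=1), nn.BatchNorm1d(128), nn.ReLU(),
        )
        self.head = nn.Linear(128, num_classes)

    def forward(self, x):                  # x: (batch, 1, series_length)
        z = self.features(x).mean(dim=-1)  # global average pooling over time
        return self.head(z)

logits = FCN(num_classes=5)(torch.randn(8, 1, 96))  # dummy batch of series
```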

We present Residual Policy Learning (RPL): a simple method for improving nondifferentiable policies using model-free deep reinforcement learning. RPL thrives in complex robotic manipulation tasks where good but imperfect controllers are available. In these tasks, reinforcement learning from scratch remains data-inefficient or intractable, but learning a residual on top of the initial controller can yield substantial improvement. We study RPL in five challenging MuJoCo tasks involving partial observability, sensor noise, model misspecification, and controller miscalibration. By combining learning with control algorithms, RPL can perform long-horizon, sparse-reward tasks for which reinforcement learning alone fails. Moreover, we find that RPL consistently and substantially improves on the initial controllers. We argue that RPL is a promising approach for combining the complementary strengths of deep reinforcement learning and robotic control, pushing the boundaries of what either can achieve independently.
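The core idea is easy to state in code: the executed action is the initial controller's action plus a learned residual correction. The sketch below uses a hypothetical controller and made-up dimensions; only the residual network would be trained with model-free RL.

```python
import torch
import torch.nn as nn

# Sketch of the residual-policy idea (hypothetical controller and sizes):
# the learner only has to improve on the hand-designed initial controller.
OBS_DIM, ACT_DIM = 12, 4

def initial_controller(obs):
    """Stand-in for a good-but-imperfect hand-designed controller."""
    return torch.tanh(obs[..., :ACT_DIM])

residual = nn.Sequential(nn.Linear(OBS_DIM, 64), nn.ReLU(), nn.Linear(64, ACT_DIM))

def policy(obs):
    # executed action = controller action + learned residual correction
    return initial_controller(obs) + residual(obs)

action = policy(torch.randn(OBS_DIM))
```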

This paper presents a new multi-objective deep reinforcement learning (MODRL) framework based on deep Q-networks. We propose the use of linear and non-linear methods to develop the MODRL framework, which includes both single-policy and multi-policy strategies. The experimental results on two benchmark problems, the two-objective deep sea treasure environment and the three-objective mountain car problem, indicate that the proposed framework is able to converge to the optimal Pareto solutions effectively. The proposed framework is generic, allowing the implementation of different deep reinforcement learning algorithms in different complex environments. It therefore overcomes many difficulties of standard multi-objective reinforcement learning (MORL) methods in the current literature. The framework provides a testbed environment for developing methods to solve various problems associated with current MORL. Details of the framework implementation can be found at //www.deakin.edu.au/~thanhthi/drl.htm.
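To illustrate the single-policy, linear-scalarization idea mentioned above (toy Q-values and preference weights; not the DQN-based framework itself), the sketch keeps a vector of per-objective action values and acts greedily on their weighted sum.

```python
import numpy as np

# Sketch of the linear single-policy approach to multi-objective RL (assumed
# sizes): keep one value per objective per action, act greedily on a weighted sum.
NUM_ACTIONS, NUM_OBJECTIVES = 3, 2
weights = np.array([0.7, 0.3])                  # preference over objectives

# Q[a] is a vector of per-objective values for action a (toy numbers)
Q = np.array([[1.0, 0.2],
              [0.4, 0.9],
              [0.6, 0.6]])

def scalarized_greedy_action(Q, weights):
    return int(np.argmax(Q @ weights))          # linear scalarization

print(scalarized_greedy_action(Q, weights))     # action maximizing 0.7*o1 + 0.3*o2
```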

Existing multi-agent reinforcement learning methods are typically limited to a small number of agents. When the number of agents grows large, learning becomes intractable due to the curse of dimensionality and the exponential growth of agent interactions. In this paper, we present Mean Field Reinforcement Learning, where the interactions within the population of agents are approximated by those between a single agent and the average effect of the overall population or neighboring agents; the interplay between the two entities is mutually reinforcing: the learning of the individual agent's optimal policy depends on the dynamics of the population, while the dynamics of the population change according to the collective patterns of the individual policies. We develop practical mean field Q-learning and mean field Actor-Critic algorithms and analyze the convergence of the solution to a Nash equilibrium. Experiments on Gaussian squeeze, the Ising model, and battle games justify the learning effectiveness of our mean field approaches. In addition, we report the first result of solving the Ising model via model-free reinforcement learning methods.
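A toy, tabular rendering of the mean-field idea follows (assumed state/action sizes, and a simplified greedy target rather than the paper's Boltzmann policy): each agent's Q-function is indexed by its own action and a discretized mean of its neighbors' actions.

```python
import numpy as np

# Toy sketch of a mean-field Q update for one agent: the joint effect of the
# neighbors is summarized by their mean action, so the table is indexed by
# (state, own action, mean-action bin). Sizes and target are simplifications.
NUM_STATES, NUM_ACTIONS, NUM_MEAN_BINS = 4, 3, 5
ALPHA, GAMMA = 0.1, 0.95
Q = np.zeros((NUM_STATES, NUM_ACTIONS, NUM_MEAN_BINS))

def mean_action_bin(neighbor_actions):
    """Discretize the neighbors' mean action into a bin index."""
    return int(round(np.mean(neighbor_actions) / (NUM_ACTIONS - 1) * (NUM_MEAN_BINS - 1)))

def update(s, a, neighbor_actions, r, s_next, next_neighbor_actions):
    m = mean_action_bin(neighbor_actions)
    m_next = mean_action_bin(next_neighbor_actions)
    target = r + GAMMA * Q[s_next, :, m_next].max()
    Q[s, a, m] += ALPHA * (target - Q[s, a, m])

update(0, 1, [0, 2, 1], 1.0, 2, [1, 1, 2])
```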
