
Multi-access edge computing (MEC) is an emerging paradigm that pushes resources for sensing, communications, computing, storage and intelligence (SCCSI) to premises closer to the end users, i.e., the edge, so that users can leverage the nearby rich resources to improve their quality of experience (QoE). Due to the growing number of emerging applications aimed at making life-sustaining cyber-physical systems intelligent, this paradigm has become a hot research topic, particularly when MEC is used to provide edge intelligence and real-time processing and control. This article elaborates on the research issues along this line, including basic concepts and performance metrics, killer applications, architectural design, modeling approaches and solutions, and future research directions. We hope this article provides a quick introduction to this fruitful research area, particularly for beginning researchers.

Related content

Efficient sampling and remote estimation are critical for a plethora of wireless-empowered applications in the Internet of Things and cyber-physical systems. Motivated by such applications, this work proposes decentralized policies for the real-time monitoring and estimation of autoregressive processes over random access channels. Two classes of policies are investigated: (i) oblivious schemes, in which sampling and transmission policies are independent of the processes being monitored, and (ii) non-oblivious schemes, in which transmitters causally observe their corresponding processes for decision making. In the class of oblivious policies, we show that minimizing the expected time-average estimation error is equivalent to minimizing the expected age of information. Consequently, we prove lower and upper bounds on the minimum achievable estimation error in this class. Next, we consider non-oblivious policies and design a threshold policy, called error-based thinning, in which each source node becomes active if its instantaneous error has crossed a fixed threshold (which we optimize). Active nodes then transmit stochastically following a slotted ALOHA policy. A closed-form, approximately optimal solution is found for the threshold as well as the resulting estimation error. It is shown that non-oblivious policies offer a multiplicative gain close to $3$ compared to oblivious policies. Moreover, it is shown that oblivious policies that use the age of information for decision making improve the state of the art by a multiplicative factor of at least $2$. The performance of all discussed policies is compared using simulations. The numerical comparison shows that the performance of the proposed decentralized policy is very close to that of centralized greedy scheduling.
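
As a concrete illustration of the error-based thinning idea in this abstract, the minimal Python sketch below simulates sources tracking AR(1) processes that become active only when their instantaneous error crosses a threshold and then transmit via slotted ALOHA. All numerical values (number of nodes, AR coefficient, threshold, transmission probability, horizon) are illustrative assumptions rather than the paper's optimized choices.

```python
import numpy as np

# Minimal simulation sketch of an "error-based thinning" policy.
# Parameter values below are illustrative assumptions, not taken from the paper.

rng = np.random.default_rng(0)

M = 50            # number of source nodes
rho = 0.95        # AR(1) coefficient of each monitored process (assumed)
threshold = 2.0   # activation threshold on the instantaneous error (assumed)
tx_prob = 0.1     # slotted-ALOHA transmission probability for active nodes
horizon = 10_000  # number of time slots

x = np.zeros(M)       # true process values at the sources
x_hat = np.zeros(M)   # receiver-side estimates (last delivered samples)
total_err = 0.0

for t in range(horizon):
    # AR(1) evolution of every monitored process
    x = rho * x + rng.normal(size=M)

    # Error-based thinning: only nodes whose error crosses the threshold are active
    err = x - x_hat
    active = np.abs(err) > threshold

    # Active nodes transmit stochastically (slotted ALOHA); success iff exactly one transmits
    transmitting = active & (rng.random(M) < tx_prob)
    if transmitting.sum() == 1:
        i = int(np.flatnonzero(transmitting)[0])
        x_hat[i] = x[i]          # successful delivery updates the estimate

    total_err += np.mean((x - x_hat) ** 2)

print("time-average estimation error:", total_err / horizon)
```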

We present a historical overview of the connections between the analysis of risk and the control of autonomous systems. We offer two main contributions. Our first contribution is to propose three overlapping paradigms to classify the vast body of literature: the worst-case, risk-neutral, and risk-averse paradigms. We argue that an appropriate assessment of the risk of an autonomous system depends on the application at hand, whereas it is typical to assess risk using an expectation, variance, or probability alone. Our second contribution is to unify the concepts of risk and autonomous systems. We achieve this by connecting approaches for quantifying and optimizing the risk that arises from a system's behaviour across academic fields. The survey is highly multidisciplinary. We include research from the communities of reinforcement learning, stochastic and robust control theory, operations research, and formal verification. We describe both model-based and model-free methods, with emphasis on the former. Lastly, we highlight fruitful areas for further research. A key direction is to blend risk-averse model-based and model-free methods to enhance the real-time adaptive capabilities of systems and thereby improve human and environmental welfare.
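
To make the three paradigms concrete, the short sketch below evaluates a toy cost distribution under a worst-case criterion, the risk-neutral expectation, and a risk-averse measure (here, conditional value-at-risk). The lognormal cost model and the level $\alpha$ are illustrative assumptions and are not taken from the survey.

```python
import numpy as np

# Toy illustration of the worst-case, risk-neutral, and risk-averse views of a
# cost distribution. The cost samples and the CVaR level alpha are assumptions.

rng = np.random.default_rng(1)
costs = rng.lognormal(mean=0.0, sigma=1.0, size=100_000)  # hypothetical cost samples

alpha = 0.95
var_alpha = np.quantile(costs, alpha)            # value-at-risk at level alpha
cvar_alpha = costs[costs >= var_alpha].mean()    # conditional value-at-risk (tail mean)

print("worst-case cost   :", costs.max())
print("risk-neutral cost :", costs.mean())
print("CVaR_0.95 cost    :", cvar_alpha)
```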

Feature extraction is an essential task in graph analytics. These feature vectors, called graph descriptors, are used in downstream vector-space-based graph analysis models. This idea has proved fruitful in the past, with spectral-based graph descriptors providing state-of-the-art classification accuracy. However, known algorithms to compute meaningful descriptors do not scale to large graphs since (1) they require storing the entire graph in memory, and (2) the end user has no control over the algorithm's runtime. In this paper, we present streaming algorithms to approximately compute three different graph descriptors capturing the essential structure of graphs. Operating on edge streams allows us to avoid storing the entire graph in memory, and controlling the sample size enables us to keep the runtime of our algorithms within desired bounds. We demonstrate the efficacy of the proposed descriptors by analyzing the approximation error and classification accuracy. Our scalable algorithms compute descriptors of graphs with millions of edges within minutes. Moreover, these descriptors yield predictive accuracy comparable to state-of-the-art methods while using only 25% as much memory.
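
The sketch below illustrates only the streaming mechanism that makes such descriptors scalable: a fixed-size reservoir sample of edges bounds memory and runtime, after which a sample-based descriptor (a simple degree histogram is used here as a stand-in) can be computed. The descriptor, the synthetic edge stream, and all sizes are illustrative assumptions; the paper's actual descriptors differ.

```python
import random
from collections import Counter

# Streaming sketch: keep a fixed-size reservoir sample of edges so memory and
# runtime stay bounded, then compute a simple descriptor from the sample.
# This only illustrates the sampling mechanism, not the paper's descriptors.

def reservoir_sample_edges(edge_stream, sample_size, seed=0):
    rng = random.Random(seed)
    sample = []
    for i, edge in enumerate(edge_stream):
        if i < sample_size:
            sample.append(edge)
        else:
            j = rng.randint(0, i)
            if j < sample_size:
                sample[j] = edge   # replace with decreasing probability
    return sample

def degree_histogram(edges):
    deg = Counter()
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return Counter(deg.values())   # descriptor: counts of node degrees in the sample

# Hypothetical usage with an iterable of (u, v) pairs, e.g. read edge-by-edge from disk:
edges = [(i % 100, (i * 7) % 100) for i in range(10_000)]
sample = reservoir_sample_edges(edges, sample_size=500)
print(degree_histogram(sample))
```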

Many real-world settings involve costs for performing actions; transaction costs in financial systems and fuel costs are common examples. In these settings, performing actions at every time step quickly accumulates costs, leading to vastly suboptimal outcomes. Additionally, acting repeatedly produces wear and tear and, ultimately, damage. Determining when to act is crucial for achieving successful outcomes, and yet the challenge of efficiently learning to behave optimally when actions incur minimally bounded costs remains unresolved. In this paper, we introduce a reinforcement learning (RL) framework named Learnable Impulse Control Reinforcement Algorithm (LICRA) for learning to optimally select both when to act and which actions to take when actions incur costs. At the core of LICRA is a nested structure that combines RL with a form of policy known as impulse control, which learns to maximise objectives when actions incur costs. We prove that LICRA, which seamlessly adopts any RL method, converges to policies that optimally select when to perform actions and their optimal magnitudes. We then augment LICRA to handle problems in which the agent can perform at most $k<\infty$ actions and, more generally, faces a budget constraint. We show that LICRA learns the optimal value function and ensures the budget constraints are satisfied almost surely. We empirically demonstrate LICRA's superior performance against benchmark RL methods in OpenAI Gym's Lunar Lander, in Highway environments, and in a variant of the Merton portfolio problem from finance.
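
The following toy sketch illustrates the when-to-act versus which-action decomposition behind impulse-control-style policies: an outer gate decides whether intervening is worth a fixed cost, and an inner rule chooses the action magnitude. The 1-D tracking environment, the cost value, and the threshold heuristic are illustrative assumptions and are not the LICRA algorithm itself.

```python
import numpy as np

# Toy "when to act vs. which action" sketch: an outer gate decides whether to
# pay the fixed action cost, an inner rule picks the impulse magnitude.
# Environment, cost, and threshold are assumptions, not LICRA.

rng = np.random.default_rng(2)

action_cost = 0.5    # fixed cost charged whenever an action is taken (assumed)
act_threshold = 1.0  # gate: act only if |state| exceeds this (assumed heuristic)

state, total_return = 0.0, 0.0
for t in range(1_000):
    state += rng.normal(scale=0.3)          # drift of the uncontrolled system
    if abs(state) > act_threshold:          # outer decision: intervene or not
        action = -state                     # inner decision: magnitude of the impulse
        state += action
        total_return -= action_cost         # acting incurs a cost
    total_return -= state ** 2              # running penalty on deviation

print("return with gated (impulse-style) control:", total_return)
```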

With the development of mobile edge computing (MEC) and blockchain-based federated learning (BCFL), a number of studies suggest deploying BCFL on edge servers. In this case, resource-limited edge servers need to serve both mobile devices, for their offloading tasks, and the BCFL system, for model training and blockchain consensus, in a cost-efficient manner without sacrificing the service quality of either side. To address this challenge, this paper proposes a resource allocation scheme for edge servers that aims to provide the optimal services at the minimum cost. Specifically, we first analyze the energy consumed by the MEC and BCFL tasks and use the completion time of each task as the service quality constraint. We then model the resource allocation challenge as a multivariate, multi-constraint convex optimization problem. To solve the problem in a progressive manner, we design two algorithms based on the alternating direction method of multipliers (ADMM) for the homogeneous and heterogeneous settings, with equal and on-demand resource distribution strategies, respectively. The validity of the proposed algorithms is proved via rigorous theoretical analysis. Through extensive experiments, the convergence and efficiency of the proposed resource allocation schemes are evaluated. To the best of our knowledge, this is the first work to investigate the resource allocation dilemma of edge servers for BCFL in MEC.
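
As a minimal illustration of the ADMM-based allocation idea, the sketch below splits a fixed compute budget across tasks to minimize a sum of per-task convex costs under a budget constraint, using standard scaled-ADMM updates. The quadratic cost model, penalty parameter, and problem sizes are illustrative assumptions; the paper's objective and constraints are richer.

```python
import numpy as np

# Scaled-ADMM sketch for a convex resource-sharing problem: allocate a budget C
# across N tasks to minimize sum_i a_i * x_i**2 subject to sum(x) = C.
# All coefficients and parameters below are assumptions.

rng = np.random.default_rng(3)

N = 8
a = rng.uniform(0.5, 2.0, size=N)   # per-task cost: f_i(x_i) = a_i * x_i**2 (assumed)
C = 10.0                            # total resource budget
rho = 1.0                           # ADMM penalty parameter

x = np.zeros(N)
z = np.full(N, C / N)
u = np.zeros(N)

for _ in range(200):
    # x-update: argmin_x a_i*x^2 + (rho/2)*(x - z_i + u_i)^2  (closed form)
    x = rho * (z - u) / (2 * a + rho)
    # z-update: project x + u onto the hyperplane sum(z) = C
    v = x + u
    z = v + (C - v.sum()) / N
    # dual update
    u = u + x - z

print("ADMM allocation    :", np.round(z, 3))
print("budget used        :", round(z.sum(), 3))
print("closed-form optimum:", np.round(C * (1 / a) / (1 / a).sum(), 3))
```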

In applications of remote sensing, estimation, and control, timely communication is not always ensured by high-rate communication. This work proposes distributed age-efficient transmission policies for random access channels with $M$ transmitters. In the first part of this work, we analyze the age performance of stationary randomized policies by relating the problem of finding age to the absorption time of a related Markov chain. In the second part of this work, we propose the notion of \emph{age-gain} of a packet to quantify how much the packet will reduce the instantaneous age of information at the receiver side upon successful delivery. We then utilize this notion to propose a transmission policy in which transmitters act in a distributed manner based on the age-gain of their available packets. In particular, each transmitter sends its latest packet only if its corresponding age-gain is beyond a certain threshold, which can be computed adaptively using the collision feedback or found analytically in advance as a fixed value. Both methods improve the age of information significantly compared to the state of the art. In the limit of large $M$, we prove that when the arrival rate is small (below $\frac{1}{eM}$), slotted ALOHA-type algorithms are asymptotically optimal. As the arrival rate increases beyond $\frac{1}{eM}$, while age increases under slotted ALOHA, it decreases significantly under the proposed age-based policies. For arrival rates $\theta$, $\theta=\frac{1}{o(M)}$, the proposed algorithms provide a multiplicative improvement of at least two over the minimum age achievable under slotted ALOHA (where the minimum is taken over all arrival rates). We conclude that, as opposed to the common practice, it is beneficial to increase the sampling rate (and hence the arrival rate) and transmit packets selectively based on their age-gain.
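
To illustrate the age-gain threshold rule described in this abstract, the sketch below simulates transmitters that send their freshest packet only when its age-gain exceeds a fixed threshold, thinned by a slotted-ALOHA transmission probability. The arrival model, threshold, and probabilities are illustrative assumptions rather than the analytically optimized values from the paper.

```python
import numpy as np

# Sketch of an age-gain threshold policy over a collision (random access) channel.
# All parameter values are illustrative assumptions.

rng = np.random.default_rng(4)

M = 100              # transmitters
arrival_rate = 0.02  # per-node Bernoulli packet arrivals per slot (assumed)
threshold = 20       # minimum age-gain required to attempt transmission (assumed)
tx_prob = 0.05       # ALOHA transmission probability once over the threshold
horizon = 50_000

age_rx = np.ones(M)           # receiver-side age of information per source
age_pkt = np.full(M, np.inf)  # age of the freshest undelivered packet (inf = none)
age_sum = 0.0

for t in range(horizon):
    # new packet arrivals reset the freshest packet age to zero
    arrivals = rng.random(M) < arrival_rate
    age_pkt[arrivals] = 0.0

    # age-gain: how much delivery of the freshest packet would reduce the receiver's age
    gain = age_rx - age_pkt

    # transmit only if the age-gain is large enough, then ALOHA-thin; success iff one sender
    transmitting = (gain > threshold) & (rng.random(M) < tx_prob)
    if transmitting.sum() == 1:
        i = int(np.flatnonzero(transmitting)[0])
        age_rx[i] = age_pkt[i]   # successful delivery: receiver age drops to packet age
        age_pkt[i] = np.inf      # nothing fresh left to send

    age_sum += age_rx.mean()
    age_rx += 1.0
    age_pkt += 1.0

print("time-average age of information:", age_sum / horizon)
```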

Deep neural networks have revolutionized many machine learning tasks in power systems, ranging from pattern recognition to signal processing. The data in these tasks is typically represented in Euclidean domains. Nevertheless, a growing number of applications in power systems collect data from non-Euclidean domains and represent it as graph-structured data with high-dimensional features and interdependencies among nodes. The complexity of graph-structured data poses significant challenges to existing deep neural networks defined in Euclidean domains. Recently, many studies on extending deep neural networks to graph-structured data in power systems have emerged. In this paper, a comprehensive overview of graph neural networks (GNNs) in power systems is presented. Specifically, several classical GNN architectures (e.g., graph convolutional networks, graph recurrent neural networks, graph attention networks, graph generative networks, spatial-temporal graph convolutional networks, and hybrid forms of GNNs) are summarized, and key applications in power systems such as fault diagnosis, power prediction, power flow calculation, and data generation are reviewed in detail. Furthermore, the main open issues and research trends concerning the application of GNNs in power systems are discussed.
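
For readers new to GNNs, the snippet below implements a single graph convolutional layer in the Kipf and Welling style, the basic building block behind several of the architectures surveyed here, applied to a hypothetical four-node grid graph. The adjacency matrix, feature matrix, and weight shapes are illustrative assumptions.

```python
import numpy as np

# Minimal numpy sketch of one graph convolutional (GCN) layer.
# Toy graph, features, and weights are illustrative assumptions.

def gcn_layer(A, H, W):
    """One GCN layer: relu( D^{-1/2} (A + I) D^{-1/2} H W )."""
    A_hat = A + np.eye(A.shape[0])               # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))       # symmetric normalization
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)

# Hypothetical 4-bus example: adjacency of the grid graph and 2-dim node features
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
H = np.random.default_rng(5).normal(size=(4, 2))
W = np.random.default_rng(6).normal(size=(2, 3))

print(gcn_layer(A, H, W))   # new node embeddings, shape (4, 3)
```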

Image-to-image translation (I2I) aims to transfer images from a source domain to a target domain while preserving the content representations. I2I has drawn increasing attention and made tremendous progress in recent years because of its wide range of applications in computer vision and image processing problems, such as image synthesis, segmentation, style transfer, restoration, and pose estimation. In this paper, we provide an overview of the I2I works developed in recent years. We analyze the key techniques of existing I2I works and clarify the main progress the community has made. Additionally, we elaborate on the effect of I2I on the research and industry communities and point out remaining challenges in related fields.

Edge intelligence refers to a set of connected systems and devices that use artificial intelligence to collect, cache, process, and analyse data in locations close to where the data is captured. The aim of edge intelligence is to enhance the quality and speed of data processing and to protect the privacy and security of the data. Although it emerged only recently, with literature spanning the period from 2011 to the present, this field of research has shown explosive growth over the past five years. In this paper, we present a thorough and comprehensive survey of the literature surrounding edge intelligence. We first identify four fundamental components of edge intelligence, namely edge caching, edge training, edge inference, and edge offloading, based on theoretical and practical results pertaining to proposed and deployed systems. We then aim for a systematic classification of existing solutions by examining research results and observations for each of the four components, and present a taxonomy that includes practical problems, adopted techniques, and application goals. For each category, we elaborate on, compare, and analyse the literature from the perspectives of adopted techniques, objectives, performance, advantages and drawbacks, and so on. This survey article provides a comprehensive introduction to edge intelligence and its application areas. In addition, we summarise the development of this emerging research field and the current state of the art, and discuss important open issues and possible theoretical and technical solutions.

Driven by the visions of the Internet of Things and 5G communications, edge computing systems integrate computing, storage, and network resources at the edge of the network to provide a computing infrastructure that enables developers to quickly develop and deploy edge applications. Edge computing systems have received widespread attention in both industry and academia. To explore new research opportunities and assist users in selecting suitable edge computing systems for specific applications, this survey provides a comprehensive overview of existing edge computing systems and introduces representative projects. A comparison of open-source tools is presented according to their applicability. Finally, we highlight energy efficiency and deep learning optimization in edge computing systems. Open issues in analyzing and designing an edge computing system are also discussed in this survey.
