The metaverse is regarded as a new wave of technological transformation that provides a virtual space for people to interact with each other through digital avatars. To achieve immersive user experiences in the metaverse, real-time rendering is the key technology. However, the computation-intensive tasks of real-time graphic and audio rendering from metaverse service providers cannot be processed efficiently on a single resource-limited mobile or Internet of Things (IoT) device. Alternatively, such devices can adopt a collaborative computing paradigm based on Coded Distributed Computing (CDC) to support metaverse services. Therefore, this paper introduces a reliable collaborative CDC framework for the metaverse. In the framework, idle resources from mobile devices, acting as CDC workers, are aggregated to handle intensive computation tasks in the metaverse. A coalition is formed among reliable workers based on a reputation metric that is maintained in a double-blockchain database. The framework also includes an incentive mechanism to attract reliable workers to participate in and process the computation tasks of metaverse services. Moreover, the framework is designed with a hierarchical structure composed of coalition formation and Stackelberg games at the lower and upper levels to determine stable coalitions and rewards for reliable workers, respectively. The simulation results illustrate that the proposed framework is resistant to malicious workers. Compared with a random worker selection scheme, the proposed coalition formation and Stackelberg game improve the utilities of both metaverse service providers and CDC workers.
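To make the upper-level game concrete, here is a minimal sketch of a leader-follower (Stackelberg) interaction between a service provider that posts a reward rate and CDC workers that best-respond with their computing contributions. The quadratic worker costs, the logarithmic provider benefit, and all parameter values are illustrative assumptions, not the utility functions used in the paper.

```python
# Hypothetical Stackelberg sketch: provider (leader) picks a reward rate,
# workers (followers) best-respond; the leader searches for the rate that
# maximizes its own utility given those best responses.
import numpy as np

costs = np.array([0.8, 1.0, 1.3, 2.0])   # assumed unit computing costs of workers

def best_responses(reward):
    # Each worker maximizes reward*x - c*x^2, giving x* = reward / (2c).
    return reward / (2.0 * costs)

def leader_utility(reward, alpha=5.0):
    x = best_responses(reward)
    # Assumed concave benefit from the aggregated computation minus total payments.
    return alpha * np.log(1.0 + x.sum()) - reward * x.sum()

rewards = np.linspace(0.01, 5.0, 500)
best = max(rewards, key=leader_utility)
print(f"leader's reward rate: {best:.2f}, workers' contributions: {best_responses(best)}")
```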
To efficiently provide demand side management (DSM) in the smart grid, pricing based on real-time energy usage is considered the most vital tool because it is directly linked with the finances associated with smart meters. Hence, every smart meter user wants to pay the minimum possible amount while getting the maximum benefit. In this context, usage-based dynamic pricing strategies for DSM play their role and provide users with specific incentives that help shape their load curve according to the forecasted load. However, these reported real-time values can leak the privacy of smart meter users, which can lead to serious consequences such as spying. Moreover, most dynamic pricing algorithms charge all users equally, irrespective of their contribution to the peak factor. Therefore, in this paper, we propose a modified usage-based dynamic pricing mechanism that only charges the users responsible for causing the peak factor. We further integrate the concept of differential privacy to protect the privacy of real-time smart metering data. To enable accurate billing, we also propose a noise adjustment method. Finally, we propose the Demand Response enhancing Differential Pricing (DRDP) strategy, which effectively enhances demand response while providing dynamic pricing to smart meter users. We also carry out theoretical analyses of the differential privacy guarantees and of the cooperative state probability to analyze the behavior of cooperative smart meters. The performance evaluation of the DRDP strategy at various privacy parameters shows that the proposed strategy outperforms previous mechanisms in terms of dynamic pricing and privacy preservation.
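As a minimal sketch of the privacy step assumed here, each meter perturbs its reported real-time reading with the standard Laplace mechanism calibrated to sensitivity/epsilon; the billing-side noise adjustment shown below is a simple illustrative variant (the meter cancels the noise it injected once per billing period), not the paper's exact DRDP method.

```python
# Illustrative Laplace-mechanism reporting and a toy noise-adjusted bill.
import numpy as np

rng = np.random.default_rng(0)

def report(reading_kwh, epsilon, sensitivity=1.0):
    """Return an epsilon-DP noisy reading (Laplace mechanism) and the injected noise."""
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon)
    return reading_kwh + noise, noise

true_readings = [1.2, 0.9, 1.5, 2.1]          # one billing period, in kWh
noisy, noises = zip(*(report(r, epsilon=0.5) for r in true_readings))

# Toy adjustment: subtract the accumulated noise so the bill stays accurate.
price_per_kwh = 0.2
bill = price_per_kwh * (sum(noisy) - sum(noises))
print(f"noisy reports: {[round(x, 2) for x in noisy]}, adjusted bill: {bill:.2f}")
```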
This work elevates coded caching networks from their purely information-theoretic framework to a stochastic setting, by exploring the effect of random user activity and by exploiting correlations in the activity patterns of different users. In particular, the work studies the $K$-user cache-aided broadcast channel with a limited number of cache states, and explores the effect of cache state association strategies in the presence of arbitrary user activity levels; a combination that strikes at the very core of the coded caching problem and its crippling subpacketization bottleneck. We first present a statistical analysis of the average worst-case delay performance of such subpacketization-constrained (state-constrained) coded caching networks, and provide computationally efficient performance bounds as well as scaling laws for any arbitrary probability distribution of the user-activity levels. The achieved performance is the result of a novel user-to-cache state association algorithm that leverages the knowledge of probabilistic user-activity levels. We then follow a data-driven approach that exploits the prior history of user-activity levels and correlations in order to predict interference patterns and thus better design the caching algorithm. This optimized strategy is based on the principle that users that overlap more interfere more, and thus have higher priority to secure complementary cache states. This strategy is proven here to be within a small constant factor of the optimal. Finally, the above analysis is validated numerically using synthetic data following the Pareto principle. To the best of our knowledge, this is the first work that seeks to exploit user-activity levels and correlations in order to map future interference and design optimized coded caching algorithms that better handle this interference.
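The following is a plausible greedy sketch of probability-aware user-to-cache-state association, shown only to illustrate the idea that highly active (and therefore frequently interfering) users should receive complementary cache states; it is not the paper's algorithm, and the activity values are made up.

```python
# Greedy association: assign users, in decreasing order of activity level,
# to the cache state with the smallest accumulated expected load.
import heapq

def associate(activity, num_states):
    """activity: dict user -> probability of being active."""
    states = [(0.0, s) for s in range(num_states)]   # min-heap of (expected load, state)
    heapq.heapify(states)
    assignment = {}
    for user in sorted(activity, key=activity.get, reverse=True):
        load, s = heapq.heappop(states)
        assignment[user] = s
        heapq.heappush(states, (load + activity[user], s))
    return assignment

activity = {"u1": 0.9, "u2": 0.8, "u3": 0.4, "u4": 0.3, "u5": 0.1}
print(associate(activity, num_states=2))   # the two most active users get different states
```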
Multi-access edge computing (MEC) is viewed as an integral part of future wireless networks to support new applications with stringent service reliability and latency requirements. However, guaranteeing ultra-reliable and low-latency MEC (URLL MEC) is very challenging due to the uncertainty of wireless links, limited communication and computing resources, as well as dynamic network traffic. Enabling URLL MEC mandates taking into account the statistics of the end-to-end (E2E) latency and reliability across the wireless and edge computing systems. In this paper, a novel framework is proposed to optimize the reliability of MEC networks by considering the distribution of the E2E service delay, encompassing over-the-air transmission and edge computing latency. The proposed framework builds on correlated variational autoencoders (VAEs) to estimate the full distribution of the E2E service delay. Using this result, a new optimization problem based on risk theory is formulated to maximize the network reliability by minimizing the Conditional Value at Risk (CVaR) as a risk measure of the E2E service delay. To solve this problem, a new algorithm is developed to efficiently allocate users' processing tasks to edge computing servers across the MEC network, while considering the statistics of the E2E service delay learned by the VAEs. The simulation results show that the proposed scheme outperforms several baselines that do not account for the risk analysis or the statistics of the E2E service delay.
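For readers unfamiliar with the risk measure, the short sketch below computes the empirical CVaR of E2E delay samples, i.e., the mean delay beyond the alpha-quantile (VaR). The gamma-distributed samples are synthetic stand-ins; in the paper, the delay distribution would instead come from the VAE-based estimator.

```python
# Empirical VaR/CVaR of end-to-end delay samples.
import numpy as np

def cvar(delays, alpha=0.95):
    delays = np.sort(np.asarray(delays))
    var = np.quantile(delays, alpha)      # Value at Risk at level alpha
    tail = delays[delays >= var]
    return var, tail.mean()               # (VaR, CVaR)

rng = np.random.default_rng(1)
samples = rng.gamma(shape=2.0, scale=3.0, size=10_000)   # synthetic E2E delays (ms)
var, cvar_val = cvar(samples, alpha=0.95)
print(f"VaR_95 = {var:.1f} ms, CVaR_95 = {cvar_val:.1f} ms")
```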
We consider in this work Edge Computing (EC) in a multi-tenant environment: the resource owner, i.e., the Network Operator (NO), virtualizes the resources and lets third-party Service Providers (SPs - tenants) run their services, which can be diverse and have heterogeneous requirements. Due to confidentiality guarantees, the NO cannot observe the nature of the traffic of the SPs, which is encrypted. This makes resource allocation decisions challenging, since they must be taken based solely on observed monitoring information. We focus on one specific resource, i.e., cache space, deployed in some edge node, e.g., a base station. We study the decision of the NO about how to partition the cache among several SPs in order to minimize the upstream traffic. Our goal is to optimize cache allocation using purely data-driven, model-free Reinforcement Learning (RL). Differently from most applications of RL, in which the decision policy is learned offline on a simulator, we assume no previous knowledge is available to build such a simulator. We thus apply RL in an \emph{online} fashion, i.e., the policy is learned by directly perturbing the actual system and monitoring how its performance changes. Since perturbations generate spurious traffic, we also limit them. We show in simulation that our method rapidly converges toward the theoretical optimum; we also study its fairness and its sensitivity to several scenario characteristics, and compare it with a state-of-the-art method.
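The loop below is an illustrative online, model-free allocation scheme in the spirit described above: the NO perturbs the current cache partition, observes the resulting upstream (miss) traffic, and keeps only improving perturbations. The traffic function is a stand-in for the real, unobservable system, and the whole sketch is an assumption, not the paper's RL algorithm.

```python
# Online perturb-and-observe cache partitioning among three SPs.
import random

random.seed(0)
TOTAL_SLOTS = 100

def observed_upstream_traffic(alloc):
    # Stand-in for monitoring output; its internal form is unknown to the algorithm.
    demand = [0.6, 0.3, 0.1]
    return sum(d / (1.0 + a) for d, a in zip(demand, alloc))

alloc = [TOTAL_SLOTS // 3] * 3
alloc[0] += TOTAL_SLOTS - sum(alloc)
best = observed_upstream_traffic(alloc)

for step in range(200):
    i, j = random.sample(range(3), 2)          # move one slot from SP j to SP i
    if alloc[j] == 0:
        continue
    trial = list(alloc)
    trial[i] += 1
    trial[j] -= 1
    traffic = observed_upstream_traffic(trial)
    if traffic <= best:                        # keep only improving perturbations
        alloc, best = trial, traffic

print(f"final partition: {alloc}, upstream traffic: {best:.3f}")
```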
Fog computing is a new computational paradigm that emerged from the need to reduce network usage and latency in the Internet of Things (IoT). Fog can be considered a continuum between the cloud layer and IoT users that allows the execution of applications or the storage/processing of data in network infrastructure devices. The heterogeneity and wider distribution of fog devices are the key differences between cloud and fog infrastructures. Genetic-based optimization is commonly used in distributed systems; however, the differentiating features of fog computing require new designs, studies, and experimentation. The growing research in the field of genetic-based fog resource optimization and the lack of previous analyses in this field have encouraged us to present a comprehensive, exhaustive, and systematic review of the most recent research works. Resource optimization techniques in fog were examined and analyzed, with special emphasis on genetic-based solutions and their characteristics and design alternatives. We defined a classification of the optimization scope in fog infrastructures and used this optimization taxonomy to classify the 70 papers in this survey. Subsequently, the papers were assessed in terms of genetic optimization design. Finally, the benefits and limitations of each surveyed work are outlined in this paper. Based on these analyses of the relevant literature, future research directions were identified. We concluded that more research efforts are needed to address the current challenges in data management, workflow scheduling, and service placement. Additionally, there is still room for improved designs and deployments of parallel and hybrid genetic algorithms that leverage, and adapt to, the heterogeneity and distributed features of fog domains.
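As a reference point for the design choices surveyed (encoding, selection, crossover, mutation, fitness), here is a minimal genetic-algorithm sketch for fog service placement. The node latencies, capacities, and service demands are invented for illustration and do not correspond to any surveyed work.

```python
# Minimal GA: a chromosome maps each service to a fog/cloud node.
import random

random.seed(42)
LATENCY = [1, 3, 5, 10]          # per-node latency to IoT users (last node = cloud)
CAPACITY = [2, 2, 3, 99]         # how many services each node can host
DEMAND = [1] * 6                 # six services, one capacity unit each
POP, GENS = 30, 100

def fitness(placement):
    # Total latency plus a penalty for violating node capacities.
    used = [0] * len(CAPACITY)
    for svc, node in enumerate(placement):
        used[node] += DEMAND[svc]
    penalty = sum(max(0, u - c) for u, c in zip(used, CAPACITY))
    return sum(LATENCY[n] for n in placement) + 100 * penalty

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(p, rate=0.1):
    return [random.randrange(len(CAPACITY)) if random.random() < rate else g for g in p]

population = [[random.randrange(len(CAPACITY)) for _ in DEMAND] for _ in range(POP)]
for _ in range(GENS):
    population.sort(key=fitness)
    parents = population[: POP // 2]                       # elitist selection
    children = [mutate(crossover(*random.sample(parents, 2))) for _ in range(POP // 2)]
    population = parents + children

best = min(population, key=fitness)
print("best placement:", best, "fitness:", fitness(best))
```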
Federated learning makes it possible for parties with isolated data to train a model collaboratively and efficiently while satisfying privacy protection. To obtain a high-quality model, an incentive mechanism is necessary to motivate more high-quality workers with data and computing power to participate. Existing incentive mechanisms are applied in offline scenarios, where the task publisher collects all bids and selects workers before the task. In practice, however, different workers arrive online in different orders before or during the task. Therefore, we propose a reverse auction-based online incentive mechanism for horizontal federated learning with a budget constraint. Workers submit bids when they arrive online. The task publisher, with a limited budget, leverages the information of the workers that have already arrived to decide whether to select each newly arrived worker. Theoretical analysis proves that our mechanism satisfies budget feasibility, computational efficiency, individual rationality, consumer sovereignty, time truthfulness, and cost truthfulness with a sufficient budget. The experimental results show that our online mechanism is efficient and can obtain high-quality models.
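To illustrate the flavor of online, budget-feasible worker selection, the sketch below uses a generic sample-then-threshold rule: early arrivals calibrate a quality-per-cost threshold, and later workers are selected and paid a threshold-based price while the budget lasts. This mirrors the spirit of the mechanism described above but is not its exact selection rule or payment scheme; all bids and qualities are random placeholders.

```python
# Generic sample-then-threshold online selection under a budget.
import random

random.seed(7)
BUDGET = 50.0
workers = [{"id": i, "bid": random.uniform(1, 10), "quality": random.uniform(1, 10)}
           for i in range(40)]                      # listed in arrival order

sample, rest = workers[: len(workers) // 4], workers[len(workers) // 4:]
threshold = max(w["quality"] / w["bid"] for w in sample)   # calibrated on early arrivals

spent, selected = 0.0, []
for w in rest:
    density = w["quality"] / w["bid"]
    if density >= threshold:
        payment = w["quality"] / threshold          # threshold payment, always >= the bid
        if spent + payment <= BUDGET:
            selected.append(w["id"])
            spent += payment

print(f"selected workers: {selected}, budget spent: {spent:.1f}/{BUDGET}")
```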
This letter studies a vertical federated edge learning (FEEL) system for collaborative object/human motion recognition by exploiting distributed integrated sensing and communication (ISAC). In this system, distributed edge devices first send wireless signals to sense the targeted objects/humans, and then exchange intermediate computed vectors (instead of raw sensing data) for collaborative recognition while preserving data privacy. To boost the spectrum and hardware utilization efficiency for FEEL, we exploit ISAC for both target sensing and data exchange by employing dedicated frequency-modulated continuous-wave (FMCW) signals at each edge device. Under this setup, we propose a vertical FEEL framework for realizing the recognition based on the collected multi-view wireless sensing data. In this framework, each edge device owns an individual local L-model that transforms its sensing data into an intermediate vector with relatively low dimensions, which is then transmitted to a coordinating edge device for the final output via a common downstream S-model. Considering a human motion recognition task, experimental results show that our vertical FEEL based approach achieves a recognition accuracy of up to 98\% with an improvement of up to 8\% compared to the benchmarks, including on-device training and horizontal FEEL.
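The sketch below captures the vertical split conceptually: each device applies its local L-model to its own sensing view and ships only a low-dimensional intermediate vector, which the coordinating device concatenates and feeds into the common downstream S-model. The linear models, dimensions, and random weights are placeholders, not the architecture used in the letter.

```python
# Conceptual forward pass of a vertical FEEL inference.
import numpy as np

rng = np.random.default_rng(0)
NUM_DEVICES, FEAT_DIM, EMB_DIM, NUM_CLASSES = 3, 128, 8, 5

# Local L-models: one linear map per device (placeholder for a real network).
L_models = [rng.standard_normal((FEAT_DIM, EMB_DIM)) * 0.1 for _ in range(NUM_DEVICES)]
# Common downstream S-model at the coordinating edge device.
S_model = rng.standard_normal((NUM_DEVICES * EMB_DIM, NUM_CLASSES)) * 0.1

def device_forward(device_id, sensing_view):
    """Computed on the device: only this low-dimensional vector leaves the device."""
    return np.tanh(sensing_view @ L_models[device_id])

views = [rng.standard_normal(FEAT_DIM) for _ in range(NUM_DEVICES)]   # FMCW-derived features
intermediate = [device_forward(i, v) for i, v in enumerate(views)]

logits = np.concatenate(intermediate) @ S_model                       # coordinator side
print("predicted motion class:", int(np.argmax(logits)))
```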
Memory disaggregation has attracted great attention recently because of its benefits in efficient memory utilization and ease of management. So far, memory disaggregation research has taken one of two approaches: building/emulating memory nodes using regular servers or building them using raw memory devices with no processing power. The former incurs higher monetary cost and faces tail latency and scalability limitations, while the latter introduces performance, security, and management problems. Server-based memory nodes and memory nodes with no processing power are two extreme approaches. We seek a sweet spot in the middle by proposing a hardware-based memory disaggregation solution that has the right amount of processing power at the memory nodes. Furthermore, we take a clean-slate approach by starting from the requirements of memory disaggregation and designing a memory-disaggregation-native system. We built Clio, a disaggregated memory system that virtualizes, protects, and manages disaggregated memory at hardware-based memory nodes. The Clio hardware includes a new virtual memory system, a customized network system, and a framework for computation offloading. In building Clio, we not only co-design OS functionalities, hardware architecture, and the network system, but also co-design the compute nodes and memory nodes. Our FPGA prototype of Clio demonstrates that each memory node can achieve 100 Gbps throughput and an end-to-end latency of 2.5 us at the median and 3.2 us at the 99th percentile. Clio also scales much better and has orders of magnitude lower tail latency than RDMA. It achieves 1.1x to 3.4x energy savings compared to CPU-based and SmartNIC-based disaggregated memory systems and is 2.7x faster than software-based SmartNIC solutions.
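The toy model below is a purely conceptual illustration, in software, of what it means for the memory node itself to virtualize and protect disaggregated memory: the node maps each client's virtual addresses to physical frames and checks ownership on every access. It is not Clio's interface, hardware design, or implementation; every name here is hypothetical.

```python
# Conceptual per-client virtual-to-physical translation at a memory node.
PAGE = 4096

class MemoryNode:
    def __init__(self, num_frames):
        self.free = list(range(num_frames))
        self.tables = {}                      # client id -> {virtual page: frame}
        self.mem = bytearray(num_frames * PAGE)

    def alloc(self, client, vpage):
        frame = self.free.pop()
        self.tables.setdefault(client, {})[vpage] = frame
        return vpage * PAGE                   # clients see only virtual addresses

    def write(self, client, vaddr, data):
        frame = self.tables[client][vaddr // PAGE]   # KeyError if not owned: protection
        off = frame * PAGE + vaddr % PAGE
        self.mem[off: off + len(data)] = data

    def read(self, client, vaddr, length):
        frame = self.tables[client][vaddr // PAGE]
        off = frame * PAGE + vaddr % PAGE
        return bytes(self.mem[off: off + length])

node = MemoryNode(num_frames=16)
addr = node.alloc(client="cn1", vpage=0)
node.write("cn1", addr, b"hello")
print(node.read("cn1", addr, 5))              # b'hello'
```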
In recent years, mobile devices have developed rapidly, gaining stronger computation capabilities and larger storage. Some computation-intensive machine learning and deep learning tasks can now be run on mobile devices. To take advantage of the resources available on mobile devices and preserve users' privacy, the idea of mobile distributed machine learning has been proposed. It uses local hardware resources and local data to solve machine learning sub-problems on mobile devices, and only uploads computation results instead of the original data to contribute to the optimization of the global model. This architecture not only relieves the computation and storage burden on servers, but also protects users' sensitive information. Another benefit is bandwidth reduction, as various kinds of local data can now participate in the training process without being uploaded to the server. In this paper, we provide a comprehensive survey of recent studies on mobile distributed machine learning. We survey a number of widely used mobile distributed machine learning methods and present an in-depth discussion of the challenges and future directions in this area. We believe that this survey provides a clear overview of mobile distributed machine learning and offers guidelines on applying it to real applications.
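The sketch below illustrates the core idea described above with generic federated averaging on a linear regression task: each device fits its local sub-problem on its own (synthetic) data and uploads only the resulting parameters, which the server averages. It is shown for illustration and does not correspond to any specific scheme from the survey.

```python
# Generic federated averaging: local updates on-device, only parameters uploaded.
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])

def local_update(global_w, n=50, lr=0.1, steps=20):
    # Device-side: synthetic private data that never leaves the device.
    X = rng.standard_normal((n, 2))
    y = X @ true_w + 0.1 * rng.standard_normal(n)
    w = global_w.copy()
    for _ in range(steps):
        w -= lr * (2 / n) * X.T @ (X @ w - y)     # gradient step on the local loss
    return w                                      # only this result is uploaded

global_w = np.zeros(2)
for round_ in range(5):
    uploads = [local_update(global_w) for _ in range(10)]   # 10 participating devices
    global_w = np.mean(uploads, axis=0)                     # server-side averaging

print("learned weights:", np.round(global_w, 2))            # close to [2, -1]
```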
Network Virtualization is one of the most promising technologies for future networking and is considered a critical IT resource that connects distributed, virtualized Cloud Computing services and different components such as storage, servers, and applications. Network Virtualization allows multiple virtual networks to coexist simultaneously on the same shared physical infrastructure. One of the crucial problems in Network Virtualization is Virtual Network Embedding, which provides a method to allocate physical substrate resources to virtual network requests. In this paper, we investigate Virtual Network Embedding strategies and related issues for the resource allocation of an Internet Provider (InP) to efficiently embed virtual networks that are requested by Virtual Network Operators (VNOs) who share the same infrastructure provided by the InP. To achieve this goal, we design a heuristic Virtual Network Embedding algorithm that simultaneously embeds the virtual nodes and virtual links of each virtual network request onto the physical infrastructure. Through extensive simulations, we demonstrate that our proposed scheme significantly improves the performance of Virtual Network Embedding by enhancing the long-term average revenue as well as the acceptance ratio and resource utilization of virtual network requests compared to prior algorithms.
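To make the embedding problem concrete, here is a compact greedy sketch of coordinated node-and-link mapping: each virtual node goes to the feasible substrate node with the most remaining CPU, and each virtual link is routed along a shortest substrate path with enough bandwidth. The topology and demands are toy values, and this illustrates the VNE problem rather than the specific heuristic proposed in the paper.

```python
# Toy coordinated virtual network embedding on a three-node substrate.
from collections import deque

cpu = {"A": 10, "B": 8, "C": 6}                          # substrate node CPU
bw = {("A", "B"): 10, ("B", "C"): 10, ("A", "C"): 5}     # substrate link bandwidth
adj = {"A": ["B", "C"], "B": ["A", "C"], "C": ["A", "B"]}

def shortest_path(src, dst):
    prev, queue = {src: None}, deque([src])
    while queue:
        u = queue.popleft()
        if u == dst:
            path = []
            while u is not None:
                path.append(u)
                u = prev[u]
            return path[::-1]
        for v in adj[u]:
            if v not in prev:
                prev[v] = u
                queue.append(v)

def embed(vnodes, vlinks):
    mapping = {}
    for v, demand in vnodes.items():                     # node mapping: greedy by free CPU
        host = max((n for n in cpu if cpu[n] >= demand and n not in mapping.values()),
                   key=lambda n: cpu[n])
        mapping[v] = host
        cpu[host] -= demand
    for (u, v), demand in vlinks.items():                # link mapping: shortest path
        path = shortest_path(mapping[u], mapping[v])
        for e in zip(path, path[1:]):
            key = e if e in bw else (e[1], e[0])
            bw[key] -= demand
    return mapping

print(embed(vnodes={"v1": 4, "v2": 3}, vlinks={("v1", "v2"): 2}))
```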