In the edge-cloud continuum, datacenters provide microservices (MSs) to mobile users, with each MS having specific latency constraints and computational requirements. Deploying such a variety of MSs while matching their requirements with the available computing resources is challenging. In addition, time-critical MSs may have to be migrated as users move, so that their latency constraints keep being met. Unlike previous work relying on a central orchestrator with an always-updated global view of the available resources and of the users' locations, this work envisions a distributed solution to the above issues. In particular, we propose a distributed asynchronous protocol for MS deployment in the cloud-edge continuum that (i) dramatically reduces the system overhead compared to a centralized approach, and (ii) increases system stability by avoiding the single point of failure that a central orchestrator represents. Our solution ensures cost-efficient, feasible placement of MSs while using negligible bandwidth.
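To illustrate the flavor of such a distributed placement decision, the sketch below shows a node that independently checks a request's latency and capacity constraints and otherwise forwards it to a neighbor. This is a hypothetical, simplified rendering (written synchronously for brevity), not the protocol proposed in the paper; the names `Node` and `MicroserviceRequest` and the forwarding rule are assumptions.

```python
# Hypothetical sketch of decentralized microservice placement: each node
# decides locally, so no central orchestrator with a global view is needed.
from dataclasses import dataclass

@dataclass
class MicroserviceRequest:
    ms_id: str
    cpu_demand: float      # required CPU (cores)
    max_latency_ms: float  # latency constraint

@dataclass
class Node:
    node_id: str
    free_cpu: float
    latency_to_user_ms: float

    def handle(self, req: MicroserviceRequest, neighbors: list["Node"]):
        # Accept locally if both the latency and the capacity constraints hold.
        if self.latency_to_user_ms <= req.max_latency_ms and self.free_cpu >= req.cpu_demand:
            self.free_cpu -= req.cpu_demand
            return self.node_id
        # Otherwise forward the request to neighbors, closest-to-user first;
        # only the small request message travels, keeping bandwidth negligible.
        for nb in sorted(neighbors, key=lambda n: n.latency_to_user_ms):
            placed = nb.handle(req, [])
            if placed:
                return placed
        return None  # no feasible placement found along this path
```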
An autonomous mobile robot system is a distributed system consisting of mobile computational entities (called robots) that autonomously and repeatedly perform three operations: Look, Compute, and Move. Various problems related to autonomous mobile robots, such as gathering, pattern formation, and flocking, have been extensively studied to understand the relationship between each robot's capabilities and the solvability of these problems. In this study, we focus on the complete visibility problem, which involves relocating all the robots on an infinite grid plane such that each robot is visible to every other robot. We assume that each robot is luminous (i.e., has a light with a constant number of colors) and opaque (not transparent). In this paper, we propose an algorithm that achieves complete visibility for any given set of robots. Using only two colors, the algorithm guarantees complete visibility even when the robots operate asynchronously and have no knowledge of the total number of robots on the grid plane.
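For readers unfamiliar with the model, the following minimal skeleton shows the Look-Compute-Move cycle of a luminous robot with a two-color light. The decision rule is a placeholder, not the paper's visibility algorithm, and the occlusion check implied by opacity is omitted.

```python
# Skeleton of the Look-Compute-Move cycle for a luminous robot on a grid.
from enum import Enum

class Color(Enum):
    A = 0  # the two light colors assumed in this sketch
    B = 1

class Robot:
    def __init__(self, pos, color=Color.A):
        self.pos = pos        # (x, y) coordinates on the integer grid
        self.color = color

    def look(self, robots):
        # Snapshot of the other robots' positions and lights
        # (the occlusion check for opaque robots is omitted here).
        return [(r.pos, r.color) for r in robots if r is not self]

    def compute(self, snapshot):
        # Placeholder: the paper's rule, which relocates robots until all
        # pairs are mutually visible, would decide the target and color here.
        return self.pos, self.color

    def move(self, target, next_color):
        self.pos, self.color = target, next_color
```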
Distributed training of Deep Learning models has been critical to many recent successes in the field. Current standard methods primarily rely on synchronous centralized algorithms, which induce major communication bottlenecks and limit their usability to High-Performance Computing (HPC) environments with strong connectivity. Decentralized asynchronous algorithms are emerging as a potential alternative, but their practical applicability still lags. In this work, we focus on peer-to-peer asynchronous methods due to their flexibility and parallelization potential. In order to mitigate the increase in bandwidth they require at large scale and in poorly connected contexts, we introduce a principled asynchronous, randomized, gossip-based algorithm built on a continuous momentum term, named $\textbf{A}^2\textbf{CiD}^2$. In addition to inducing a significant communication acceleration at no cost other than doubling the number of parameters, minimal adaptation is required to incorporate $\textbf{A}^2\textbf{CiD}^2$ into other asynchronous approaches. We demonstrate its efficiency theoretically and numerically. Empirically, on the ring graph, adding $\textbf{A}^2\textbf{CiD}^2$ has the same effect as doubling the communication rate. In particular, we show consistent improvements on the ImageNet dataset using up to 64 asynchronous workers (A100 GPUs) and various communication network topologies.
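The sketch below illustrates randomized pairwise gossip averaging with an auxiliary momentum buffer, in the spirit of (but not identical to) $\textbf{A}^2\textbf{CiD}^2$: each worker keeps a second copy of its parameters, which corresponds to the "doubling" mentioned above. The mixing coefficients and update form are placeholders, not the paper's exact dynamics.

```python
# Illustrative randomized gossip with a momentum buffer per worker.
import numpy as np

n_workers, dim, steps = 8, 10, 1000
rng = np.random.default_rng(0)
x = rng.normal(size=(n_workers, dim))   # model parameters, one row per worker
m = x.copy()                            # auxiliary momentum variables (doubles the state)
alpha, eta = 0.9, 0.1                   # mixing coefficients (placeholders)

for _ in range(steps):
    # Pick a random communicating pair (e.g., an edge of a ring graph).
    i, j = rng.choice(n_workers, size=2, replace=False)
    # Pairwise gossip: the two endpoints average their parameters.
    avg = 0.5 * (x[i] + x[j])
    x[i] = x[j] = avg
    # Momentum correction: nudge parameters toward the auxiliary variable,
    # which smooths consecutive gossip updates and accelerates consensus.
    for k in (i, j):
        x[k] = alpha * x[k] + (1 - alpha) * m[k]
        m[k] = m[k] + eta * (x[k] - m[k])

print("consensus distance:", np.linalg.norm(x - x.mean(axis=0)))
```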
Computational simulations have the potential to assist in liver resection surgeries by facilitating surgical planning, optimizing resection strategies, and predicting postoperative outcomes. The modeling of liver tissue across multiple length scales constitutes a significant challenge, primarily due to the multiphysics coupling of mechanical response and perfusion within the complex multiscale vascularization of the organ. In this paper, we present a modeling framework that connects continuum poroelasticity and discrete vascular tree structures to model liver tissue across disparate levels of the perfusion hierarchy. The connection is achieved through a series of modeling decisions, which include source terms in the pressure equation to model inflow from the supplying tree, pressure boundary conditions to model outflow into the draining tree, and contact conditions to model the surrounding tissue. We investigate the numerical behaviour of our framework and apply it to a patient-specific, full-scale liver problem, demonstrating its potential to help assess surgical liver resection procedures.
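To make the tree-continuum coupling concrete, a generic Biot-type fluid mass balance is shown below, with a source term $\theta$ indicating where inflow from the supplying vascular tree can enter the continuum equations. The notation (pressure $p$, displacement $\boldsymbol{u}$, Biot modulus $M$, Biot coefficient $\alpha$, permeability $k$, viscosity $\mu$) is standard and is given for illustration only, not as the paper's exact formulation.

```latex
% Generic single-compartment poroelastic mass balance; the source term
% \theta is an illustrative placement for inflow from the supplying tree.
\frac{1}{M}\,\frac{\partial p}{\partial t}
  + \alpha\,\frac{\partial}{\partial t}\left(\nabla\cdot\boldsymbol{u}\right)
  - \nabla\cdot\!\left(\frac{k}{\mu}\,\nabla p\right) = \theta
```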
The paper uses machine learning and mathematical modeling to predict future vaccine distribution and to solve the problem of allocating vaccines to different types of hospitals. The authors collected and analyzed data, finding that factors such as the number of nearby residents, transportation, and medical personnel impact distribution. They used these results to build a model and, based on it, allocated vaccines to central hospitals, community hospitals, and health centers in Hangzhou Gongshu District and Harbin Daoli District. Finally, they explain the resulting vaccine distribution in light of their model and conclusions.
Evidence-based or data-driven dynamic treatment regimes are essential for personalized medicine, which can benefit from offline reinforcement learning (RL). Although massive healthcare data are available across medical institutions, sharing them is prohibited due to privacy constraints. Moreover, heterogeneity exists across sites. As a result, federated offline RL algorithms are necessary and promising for dealing with these problems. In this paper, we propose a multi-site Markov decision process model that allows both homogeneous and heterogeneous effects across sites. The proposed model makes the analysis of site-level features possible. We design the first federated policy optimization algorithm for offline RL with sample complexity guarantees. The proposed algorithm is communication-efficient and privacy-preserving, requiring only a single round of communication in which summary statistics are exchanged. We give a theoretical guarantee for the proposed algorithm without the assumption of sufficient action coverage: the suboptimality of the learned policies is comparable to the rate achievable if the data were not distributed. Extensive simulations demonstrate the effectiveness of the proposed algorithm. The method is applied to a sepsis dataset from multiple sites to illustrate its use in clinical settings.
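The single-round, summary-statistics pattern can be sketched as follows: each site compresses its offline data into sufficient statistics for a linear least-squares value model, and only those statistics (never raw trajectories) are sent to the server. The linear model and function names are assumptions for illustration, not the paper's estimator.

```python
# Hypothetical one-round federated aggregation via summary statistics.
import numpy as np

def site_summary(features, targets, lam=1.0):
    # Local sufficient statistics: ridge-regularized Gram matrix and moment vector.
    d = features.shape[1]
    A = features.T @ features + lam * np.eye(d)
    b = features.T @ targets
    return A, b

def server_aggregate(summaries):
    # Single communication round: sum the site summaries and solve once.
    A = sum(s[0] for s in summaries)
    b = sum(s[1] for s in summaries)
    return np.linalg.solve(A, b)

rng = np.random.default_rng(1)
sites = [site_summary(rng.normal(size=(100, 5)), rng.normal(size=100))
         for _ in range(3)]             # three sites, privacy-preserving summaries
theta = server_aggregate(sites)         # shared weights for the linear value model
```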
The In-Network Computing (COIN) paradigm is a promising solution that leverages unused network resources to perform tasks and thus meet the needs of computation-demanding applications such as the metaverse. In this vein, we consider the metaverse partial computation offloading problem for multiple subtasks in a COIN environment, aiming to minimise energy consumption and delay while dynamically adjusting the offloading policy based on the changing status of the computation resources. We prove that the problem is NP-hard and thus decompose it into two subproblems: the task splitting problem (TSP) on the user side and the task offloading problem (TOP) on the COIN side. We model the TSP as an ordinal potential game (OPG) and propose a decentralised algorithm to obtain its Nash equilibrium (NE). Then, we model the TOP as a Markov decision process (MDP) and propose a double deep Q-network (DDQN) to solve for the optimal offloading policy. Unlike the conventional DDQN algorithm, in which agents sample offloading decisions randomly with a certain probability, our COIN agent explores using both the NE of the TSP and the deep neural network. Finally, simulation results show that our proposed approach allows the COIN agent to update its policies and make more informed decisions, leading to improved performance over time compared to the traditional baseline.
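The double-Q target and the NE-guided exploration can be sketched in simplified tabular form, as below. Instead of exploring uniformly at random, the agent falls back to the action suggested by the TSP's Nash equilibrium; `ne_action` is a placeholder for that game solution, and the tabular setting stands in for the deep network.

```python
# Simplified tabular sketch of a double-Q update with NE-guided exploration.
import numpy as np

n_states, n_actions = 10, 4
rng = np.random.default_rng(2)
Q_online = np.zeros((n_states, n_actions))
Q_target = np.zeros((n_states, n_actions))  # periodically synced to Q_online
gamma, lr, eps = 0.95, 0.1, 0.2

def ne_action(s):
    # Placeholder for the offloading action given by the TSP's Nash equilibrium.
    return s % n_actions

def select_action(s):
    # Explore toward the NE (instead of uniformly at random), else exploit.
    return ne_action(s) if rng.random() < eps else int(Q_online[s].argmax())

def ddqn_update(s, a, r, s_next):
    # Double-DQN decoupling: the online table picks the next action,
    # the target table scores it.
    a_star = int(Q_online[s_next].argmax())
    td_target = r + gamma * Q_target[s_next, a_star]
    Q_online[s, a] += lr * (td_target - Q_online[s, a])
```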
Edge computing has been widely adopted as a prominent approach for alleviating task processing delays and reducing energy consumption. However, the dynamic nature of network conditions and the varying computation capacities of edge servers (ESs) can introduce disparities between computation loads and available computing resources in edge computing networks, potentially leading to inadequate service quality. To address this challenge, this paper investigates a practical scenario characterized by dynamic task offloading. We first examine traditional Multi-Armed Bandit (MAB) algorithms, namely the $\varepsilon$-greedy algorithm and the UCB1-based algorithm; however, both exhibit weaknesses in handling tidal data traffic patterns. Consequently, building on the MAB framework, we propose an adaptive task offloading algorithm (ATOA) that overcomes these limitations. Through extensive simulations, we demonstrate the superiority of ATOA in reducing task processing latency compared to conventional MAB methods, substantiating the effectiveness of our approach in enhancing the performance of edge computing networks and improving overall service quality.
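For context, the UCB1 baseline referenced above can be written as follows, with each "arm" an edge server and the reward, for instance, the negative task latency. This is the textbook algorithm used here for comparison; ATOA itself is not reproduced.

```python
# Textbook UCB1 policy for choosing among edge servers (arms).
import math

class UCB1:
    def __init__(self, n_arms):
        self.counts = [0] * n_arms
        self.values = [0.0] * n_arms  # running mean reward per arm
        self.t = 0

    def select(self):
        self.t += 1
        for arm, count in enumerate(self.counts):
            if count == 0:            # play each arm once before using the index
                return arm
        # UCB1 index: empirical mean plus an exploration bonus.
        return max(range(len(self.counts)),
                   key=lambda a: self.values[a]
                   + math.sqrt(2 * math.log(self.t) / self.counts[a]))

    def update(self, arm, reward):
        self.counts[arm] += 1
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]
```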
We study a decentralized multi-agent multi-armed bandit problem in which multiple clients are connected by time-dependent random graphs provided by an environment. The reward distributions of each arm vary across clients, and rewards are generated independently over time by the environment from distributions that may be sub-Gaussian or sub-exponential. Each client pulls an arm and communicates with its neighbors based on the graph provided by the environment. The goal is to minimize the overall regret of the entire system through collaboration. To this end, we introduce a novel algorithmic framework, which first provides robust simulation methods for generating random graphs, using rapidly mixing Markov chains or the random graph model, and then combines an averaging-based consensus approach with a newly proposed weighting technique and upper confidence bounds to deliver a UCB-type solution. Our algorithms account for the randomness in the graphs, removing the conventional doubly-stochasticity assumption, and only require knowledge of the number of clients at initialization. We derive optimal instance-dependent regret upper bounds of order $\log{T}$ in both sub-Gaussian and sub-exponential environments, and a nearly optimal mean-gap-independent regret upper bound of order $\sqrt{T}\log T$, up to a $\log T$ factor. Importantly, our regret bounds hold with high probability and capture graph randomness, whereas prior works consider expected regret and require more stringent assumptions on the reward distributions.
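A generic form of the averaging-based consensus step combined with a UCB index is sketched below: each client mixes its per-arm mean estimates with those of its current neighbors before applying the confidence bound. The uniform neighbor weighting and Erdős-Rényi-style round graphs are placeholders, not the paper's proposed weighting technique.

```python
# Generic decentralized UCB with a consensus-averaging step.
import numpy as np

n_clients, n_arms, T = 4, 3, 500
rng = np.random.default_rng(3)
means = np.zeros((n_clients, n_arms))
counts = np.ones((n_clients, n_arms))   # start at 1 to avoid division by zero
true_mu = np.array([0.2, 0.5, 0.8])     # per-arm means (shared here for simplicity)

for t in range(1, T + 1):
    # A fresh random communication graph each round (time-dependent).
    adj = rng.random((n_clients, n_clients)) < 0.5
    adj = np.triu(adj, 1); adj = adj | adj.T
    for i in range(n_clients):
        ucb = means[i] + np.sqrt(2 * np.log(t) / counts[i])
        arm = int(ucb.argmax())
        reward = true_mu[arm] + rng.normal(scale=0.1)
        counts[i, arm] += 1
        means[i, arm] += (reward - means[i, arm]) / counts[i, arm]
    # Consensus: average estimates with this round's neighbors.
    new_means = means.copy()
    for i in range(n_clients):
        nbrs = np.flatnonzero(adj[i])
        if nbrs.size:
            new_means[i] = (means[i] + means[nbrs].sum(axis=0)) / (1 + nbrs.size)
    means = new_means
```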
Federated Learning (FL) is a decentralized machine-learning paradigm in which a global server iteratively averages the model parameters of local users without accessing their data. User heterogeneity has imposed significant challenges on FL, as it can incur drifted global models that are slow to converge. Knowledge distillation has recently emerged to tackle this issue by refining the server model using aggregated knowledge from heterogeneous users, rather than directly averaging their model parameters. This approach, however, depends on a proxy dataset, making it impractical unless such a prerequisite is satisfied. Moreover, the ensemble knowledge is not fully utilized to guide local model learning, which may in turn affect the quality of the aggregated model. Inspired by prior art, we propose a data-free knowledge distillation approach to address heterogeneous FL, in which the server learns a lightweight generator to ensemble user information in a data-free manner; the generator is then broadcast to users, regulating local training by using the learned knowledge as an inductive bias. Empirical studies, supported by theoretical implications, show that our approach facilitates FL with better generalization performance using fewer communication rounds, compared with the state of the art.
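A conceptual sketch of the data-free distillation loop is shown below: the server's generator maps a noise vector and a label to a synthetic feature, and is fit so that the ensemble of user models is confident on its own conditioning label, with no proxy data involved. The linear "user heads", shapes, and the single-step loss are placeholders for illustration only.

```python
# Conceptual sketch of server-side data-free knowledge distillation in FL.
import numpy as np

rng = np.random.default_rng(4)
n_users, n_classes, latent_dim, feat_dim = 5, 3, 8, 16
user_heads = [rng.normal(size=(feat_dim, n_classes)) for _ in range(n_users)]
G = rng.normal(size=(latent_dim + n_classes, feat_dim))  # generator weights

def generate(y, z):
    # Map (noise, one-hot label) to a synthetic feature vector.
    onehot = np.eye(n_classes)[y]
    return np.concatenate([z, onehot]) @ G

def ensemble_logits(feat):
    # Aggregate the users' knowledge without touching any real data.
    return np.mean([feat @ W for W in user_heads], axis=0)

# Generator objective: make the ensemble confident on the conditioning label.
# In practice G is trained by gradient steps on this loss, then broadcast to
# users, whose local training samples from G as an inductive bias.
y, z = 1, rng.normal(size=latent_dim)
loss = -ensemble_logits(generate(y, z))[y]
```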
In recent years, mobile devices have developed rapidly, gaining stronger computation capability and larger storage. Some computation-intensive machine learning and deep learning tasks can now be run on mobile devices. To take advantage of the resources available on mobile devices and to preserve users' privacy, the idea of mobile distributed machine learning has been proposed. It uses local hardware resources and local data to solve machine learning sub-problems on mobile devices, and only uploads computation results, rather than original data, to contribute to the optimization of the global model. This architecture can not only relieve the computation and storage burden on servers but also protect users' sensitive information. Another benefit is bandwidth reduction, as various kinds of local data can now participate in the training process without being uploaded to the server. In this paper, we provide a comprehensive survey of recent studies on mobile distributed machine learning. We review a number of widely used mobile distributed machine learning methods and present an in-depth discussion of the challenges and future directions in this area. We believe that this survey gives a clear overview of mobile distributed machine learning and provides guidelines for applying it to real applications.