We present a federated, asynchronous, memory-limited algorithm for online task scheduling across large-scale networks of hundreds of workers. This is achieved through recent advancements in federated edge computing that unlock the ability to incrementally compute local model updates within each node separately. The local model is then used, together with incoming data, to generate a rejection signal that reflects the node's overall responsiveness and indicates whether it can accept an incoming task without degrading performance. Through this innovation, each node can independently decide whether to accept an incoming job based on the workload it has observed thus far. Further, the aggregate of the local iterates can be used, as needed, to construct a holistic view of the system. We complement our findings with an empirical evaluation on a large-scale, real-world dataset of traces from a virtualized production data center, which shows that our algorithm exhibits state-of-the-art performance while using limited memory. Concretely, it can predict changes in system responsiveness ahead of time based on the industry-standard CPU-Ready metric, which in turn leads to better scheduling decisions and improved utilization of the available resources. Finally, in the absence of communication latency, it exhibits attractive horizontal scalability.
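The accept/reject mechanism can be illustrated with a minimal sketch. It assumes a hypothetical node that keeps an incrementally updated, bounded-memory estimate of its responsiveness (a stand-in for the CPU-Ready signal) and accepts a task only if the predicted load stays under a threshold; the class name `LocalNode`, the exponential smoother, and the threshold are illustrative assumptions, not the paper's actual estimator.

```python
from collections import deque

class LocalNode:
    """Toy per-node scheduler: memory-limited, updated online (illustrative only)."""

    def __init__(self, cpu_ready_threshold=0.05, alpha=0.3, window=32):
        self.cpu_ready_threshold = cpu_ready_threshold  # accept jobs only below this level
        self.alpha = alpha                              # smoothing factor for the local model
        self.estimate = 0.0                             # running responsiveness estimate
        self.recent = deque(maxlen=window)              # bounded memory of recent observations

    def observe(self, cpu_ready_sample):
        """Incrementally update the local model with a new responsiveness sample."""
        self.recent.append(cpu_ready_sample)
        self.estimate = self.alpha * cpu_ready_sample + (1 - self.alpha) * self.estimate

    def accept(self, job_cost_estimate):
        """Rejection signal: accept only if the predicted load stays under the threshold."""
        predicted = self.estimate + job_cost_estimate
        return predicted < self.cpu_ready_threshold


node = LocalNode()
for sample in [0.01, 0.02, 0.015, 0.03]:
    node.observe(sample)
print(node.accept(job_cost_estimate=0.01))   # independent per-node scheduling decision
```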
Federated learning (FL) has become the de facto framework for collaborative learning among edge devices with privacy concerns. The core of the FL strategy is the use of stochastic gradient descent (SGD) in a distributed manner. Large-scale implementation of FL brings new challenges, such as incorporating acceleration techniques designed for SGD into the distributed setting and mitigating the drift problem caused by the non-homogeneous distribution of local datasets. These two problems have been studied separately in the literature; in this paper, we show that it is possible to address both with a single strategy, without any major alteration to the FL framework or any additional computation or communication load. To achieve this goal, we propose FedADC, an accelerated FL algorithm with drift control. We empirically illustrate the advantages of FedADC.
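The two ingredients the abstract combines can be sketched in a single local update. The snippet below is a generic illustration, not FedADC's published rule: a momentum term plays the role of the accelerated direction and a `drift` term stands in for drift control; all names and hyperparameters are assumptions.

```python
import numpy as np

def local_step(w, grad_fn, momentum, drift_correction, lr=0.1, beta=0.9):
    """One illustrative local update mixing acceleration and drift control.

    `momentum` is the accelerated direction, `drift_correction` nudges the client
    back toward the global objective. Both are placeholders, not FedADC's exact rule.
    """
    g = grad_fn(w)
    momentum = beta * momentum + (1 - beta) * g   # accelerated (momentum) direction
    w = w - lr * (momentum + drift_correction)    # drift-controlled step
    return w, momentum

# toy quadratic client objective: f(w) = 0.5 * ||w - target||^2
target = np.array([1.0, -2.0])
grad_fn = lambda w: w - target

w, momentum, drift = np.zeros(2), np.zeros(2), np.zeros(2)
for _ in range(100):
    w, momentum = local_step(w, grad_fn, momentum, drift)
print(w)  # approaches the client optimum [1.0, -2.0]
```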
Decentralized training of deep learning models is a key element for enabling data privacy and on-device learning over networks. In realistic learning scenarios, the presence of heterogeneity across different clients' local datasets poses an optimization challenge and may severely deteriorate the generalization performance. In this paper, we investigate and identify the limitations of several decentralized optimization algorithms under different degrees of data heterogeneity. We propose a novel momentum-based method to mitigate this decentralized training difficulty. Extensive empirical experiments on various CV/NLP datasets (CIFAR-10, ImageNet, and AG News) and several network topologies (Ring and Social Network) show that our method is much more robust to the heterogeneity of clients' data than existing methods, yielding a significant improvement in test performance ($1\% \!-\! 20\%$). Our code is publicly available.
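For context, the setting the abstract targets is decentralized SGD with momentum over a gossip topology. The sketch below is a generic baseline of that setting (it is not the proposed method): each node takes a momentum step on its own non-IID objective and then averages with its ring neighbors.

```python
import numpy as np

def ring_mixing_matrix(n):
    """Doubly stochastic gossip weights for a ring: self plus two neighbors."""
    W = np.zeros((n, n))
    for i in range(n):
        W[i, i] = W[i, (i - 1) % n] = W[i, (i + 1) % n] = 1 / 3
    return W

def decentralized_momentum_sgd(grads, steps=100, lr=0.05, beta=0.9):
    """Generic decentralized SGD with local momentum and gossip averaging (illustrative)."""
    n = len(grads)
    W = ring_mixing_matrix(n)
    x = np.zeros(n)          # one scalar parameter per node, for simplicity
    m = np.zeros(n)          # per-node momentum buffers
    for _ in range(steps):
        g = np.array([grads[i](x[i]) for i in range(n)])
        m = beta * m + g                  # local momentum update
        x = W @ (x - lr * m)              # local step, then gossip with ring neighbors
    return x

# heterogeneous clients: each node pulls toward a different target (non-IID toy data)
targets = [0.0, 1.0, 2.0, 3.0]
grads = [lambda x, t=t: x - t for t in targets]
print(decentralized_momentum_sgd(grads))  # nodes converge near the mean of the targets
```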
Scheduling a sports tournament is a complex optimization problem that requires satisfying a large number of hard constraints. Despite the availability of several such constraints in the literature, a gap remains, since most new sports events pose their own unique set of requirements and demand novel constraints. For strictly time-bound events in particular, ensuring fairness between the different teams in terms of rest days, travel, and the number of successive games they play becomes a difficult task and demands attention. In this work, we present such a situation with a recently played sports event, where a suboptimal schedule favored some of the sides more than others. We introduce various competitive parameters to draw a fairness comparison between the sides and propose a weighting criterion to identify the sides that benefited from the schedule more than others. Furthermore, we use the root mean squared error between an ideal schedule and the actual one for each side to quantify unfairness in the distribution of rest days across their entire schedules. The latter is crucial, since playing a large number of games in succession may lead to player burnout, which must be prevented.
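The rest-day RMSE comparison can be made concrete with a small sketch. The numbers below are hypothetical rest-day sequences, not data from the event discussed in the paper; the computation simply contrasts each side's actual rest days with an evenly spread ideal.

```python
import math

def rest_day_rmse(actual_rest_days, ideal_rest_days):
    """Root mean squared error between a side's actual and ideal rest-day schedule."""
    assert len(actual_rest_days) == len(ideal_rest_days)
    se = sum((a - i) ** 2 for a, i in zip(actual_rest_days, ideal_rest_days))
    return math.sqrt(se / len(actual_rest_days))

# hypothetical example: rest days before each of a side's games
ideal  = [2, 2, 2, 2, 2, 2]      # an evenly spread schedule
side_a = [1, 3, 2, 2, 1, 3]      # mild deviation from the ideal
side_b = [0, 4, 1, 4, 0, 3]      # back-to-back games followed by long gaps

print(rest_day_rmse(side_a, ideal))  # lower value -> fairer schedule for side A
print(rest_day_rmse(side_b, ideal))  # higher value -> side B is treated less fairly
```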
Federated learning is a distributed learning paradigm where multiple agents, each with access only to local data, jointly learn a global model. There has recently been an explosion of research aiming not only to improve the accuracy of federated learning but also to provide certain guarantees around social-good properties such as total error. One branch of this research has taken a game-theoretic approach; in particular, prior work has viewed federated learning as a hedonic game, where error-minimizing players arrange themselves into federating coalitions. This past work proves the existence of stable coalition partitions but leaves open a wide range of questions, including how far from optimal these stable solutions are. In this work, we motivate and define a notion of optimality given by the average error rates among federating agents (players). First, we provide, and prove the correctness of, an efficient algorithm to calculate an optimal (error-minimizing) arrangement of players. Next, we analyze the relationship between the stability and optimality of an arrangement. We show that for some regions of parameter space, all stable arrangements are optimal (Price of Anarchy equal to 1). However, this is not true for all settings: there exist examples of stable arrangements with higher cost than optimal (Price of Anarchy greater than 1). Finally, we give the first constant-factor bound on the performance gap between stability and optimality, proving that the total error of the worst stable solution can be no higher than 9 times the total error of an optimal solution (Price of Anarchy bound of 9).
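The quantities compared in the abstract can be written explicitly; the notation below is a standard way to state them and only an assumption about the paper's exact symbols. With $\pi$ a partition of the $n$ players into federating coalitions, $\mathrm{err}_i(\pi)$ player $i$'s error under $\pi$, and $\mathcal{S}$ the set of stable arrangements,
\[
\mathrm{cost}(\pi) \;=\; \frac{1}{n}\sum_{i=1}^{n} \mathrm{err}_i(\pi),
\qquad
\mathrm{PoA} \;=\; \frac{\max_{\pi \in \mathcal{S}} \mathrm{cost}(\pi)}{\min_{\pi'} \mathrm{cost}(\pi')} \;\le\; 9 .
\]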
Federated learning (FL) has been recognized as a viable distributed learning paradigm that trains a machine learning model collaboratively across massive mobile devices at the wireless edge while protecting user privacy. Although various communication schemes have been proposed to expedite the FL process, most of them assume ideal wireless channels that provide reliable and lossless communication links between the server and mobile clients. Unfortunately, in practical systems with limited radio resources, such as constraints on training latency, transmission power, and bandwidth, the transmission of a large number of model parameters inevitably suffers from quantization errors (QE) and transmission outage (TO). In this paper, we consider such non-ideal wireless channels and carry out the first analysis showing that FL convergence can be severely jeopardized by TO and QE, but, intriguingly, can be alleviated if the clients have uniform outage probabilities. These insights motivate us to propose a robust FL scheme, named FedTOE, which performs joint allocation of wireless resources and quantization bits across the clients to minimize the QE while making the clients have the same TO probability. Extensive experimental results are presented to show the superior performance of FedTOE for a deep learning-based classification task with transmission latency constraints.
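To make the two error sources tangible, the sketch below simulates a standard unbiased stochastic quantizer with a given bit budget and a random transmission outage that drops an update entirely. It illustrates how QE shrinks with more bits and how TO wipes out an update; it is not FedTOE's resource allocation scheme, and all parameter values are illustrative.

```python
import numpy as np

def stochastic_quantize(update, num_bits):
    """Uniform stochastic quantization of a model update to 2**num_bits levels (illustrative)."""
    levels = 2 ** num_bits - 1
    lo, hi = update.min(), update.max()
    scale = (hi - lo) / levels if hi > lo else 1.0
    normalized = (update - lo) / scale
    floor = np.floor(normalized)
    # round up with probability equal to the fractional part -> unbiased quantizer
    quantized = floor + (np.random.rand(*update.shape) < (normalized - floor))
    return lo + quantized * scale

rng = np.random.default_rng(0)
true_update = rng.normal(size=1000)

for bits, outage_prob in [(2, 0.3), (4, 0.3), (4, 0.0)]:
    received = stochastic_quantize(true_update, bits)
    if rng.random() < outage_prob:          # transmission outage: the whole update is lost
        received = np.zeros_like(true_update)
    qe = np.mean((received - true_update) ** 2)
    print(f"bits={bits}, outage_prob={outage_prob}: mean squared error={qe:.4f}")
```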
Fairness has emerged as a critical problem in federated learning (FL). In this work, we identify a cause of unfairness in FL: conflicting gradients with large differences in magnitude. To address this issue, we propose the federated fair averaging (FedFV) algorithm to mitigate potential conflicts among clients before averaging their gradients. We first use cosine similarity to detect gradient conflicts and then iteratively eliminate such conflicts by modifying both the direction and the magnitude of the gradients. We further establish the theoretical foundation of FedFV, showing how it mitigates conflicting gradients and converges to Pareto stationary solutions. Extensive experiments on a suite of federated datasets confirm that FedFV compares favorably against state-of-the-art methods in terms of fairness, accuracy, and efficiency. The source code is available at //github.com/WwZzz/easyFL.
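The conflict-detection-and-elimination step described above can be sketched as follows. This is a projection-based illustration of the idea (detect negative cosine similarity, then remove the conflicting component); FedFV's exact ordering and magnitude handling may differ, and the released code in the repository above is authoritative.

```python
import numpy as np

def project_conflicts(grads):
    """Detect pairwise gradient conflicts via cosine similarity and project them away.

    A projection-style sketch of the idea in the abstract, not FedFV's exact procedure.
    """
    adjusted = [g.copy() for g in grads]
    for i, gi in enumerate(adjusted):
        for j, gj in enumerate(grads):
            if i == j:
                continue
            cos = gi @ gj / (np.linalg.norm(gi) * np.linalg.norm(gj) + 1e-12)
            if cos < 0:  # conflicting directions
                gi -= (gi @ gj) / (gj @ gj + 1e-12) * gj  # drop the conflicting component
    return adjusted

client_grads = [np.array([1.0, 0.0]), np.array([-1.0, 0.5]), np.array([0.2, 1.0])]
fair_avg = np.mean(project_conflicts(client_grads), axis=0)  # average after de-conflicting
print(fair_avg)
```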
In type theory, we can express many practical ideas by attributing additional data to the expressions we operate on during compilation. For instance, some substructural type theories augment variables' typing judgments with information about their usage. That is, they allow one to explicitly state how many times - 0, 1, or many - a variable can be used. This solves the problem of resource usage control and allows us to treat variables as resources. Moreover, it often happens that this attributed information is interpreted (used) during the same compilation and erased before we run a program. A case in point: in those same substructural type theories, the type checkers use these annotations (0, 1, or many) to ensure that all variables are used as many times as their attributions prescribe. Yet, there has been no programming language concept whose concern is to allow a programmer to express these attributions in the language itself - that is, to let the programmer express which data they want to attribute to which expressions and, most importantly, the meaning of the attributed data in their program. As it turns out, the presence of such a concept allows us to express many practical ideas in the language itself. For instance, with appropriate means for assigning meaning to these attributions, this concept would allow one to express linear types as functionality in a separate program module, without the need to modify the whole type system to add them. In this paper, we present such a concept: type properties. It allows a programmer to express these attributions while remaining fully at the static level; that is, it allows one to express how these attributions are interpreted during compilation, and they are erased before a program is passed to the runtime.
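The kind of static interpretation that such attributions enable can be illustrated with a toy, stand-alone checker: each variable carries a usage attribution (0, 1, or many), the checker walks the program's variable references and verifies the counts, and none of this information survives to runtime. This is only an analogy in an ordinary language, not the type-properties mechanism the paper proposes; all names and the data layout are assumptions.

```python
from collections import Counter

# usage attributions a programmer might attach to variables: 0, 1, or "many"
attributions = {"token": 1, "logger": "many", "debug_handle": 0}

# a toy "program body": the sequence of variable references the checker walks over
program_uses = ["logger", "token", "logger"]

def check_usages(attributions, uses):
    """Statically verify that each variable is used as its attribution prescribes."""
    counts = Counter(uses)
    errors = []
    for var, allowed in attributions.items():
        n = counts.get(var, 0)
        if allowed == "many":
            continue                       # any number of uses is fine
        if n != allowed:
            errors.append(f"{var}: used {n} time(s), attribution allows {allowed}")
    return errors

print(check_usages(attributions, program_uses))  # [] means the attributions are respected
```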
Fairness and robustness are two important concerns for federated learning systems. In this work, we identify that robustness to data and model poisoning attacks and fairness, measured as the uniformity of performance across devices, are competing constraints in statistically heterogeneous networks. To address these constraints, we propose employing a simple, general framework for personalized federated learning, Ditto, that can inherently provide fairness and robustness benefits, and develop a scalable solver for it. Theoretically, we analyze the ability of Ditto to achieve fairness and robustness simultaneously on a class of linear problems. Empirically, across a suite of federated datasets, we show that Ditto not only achieves competitive performance relative to recent personalization methods, but also enables more accurate, robust, and fair models relative to state-of-the-art fair or robust baselines.
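Ditto's personalization is commonly presented as a local objective regularized toward the global model, $\min_{v_k} F_k(v_k) + \frac{\lambda}{2}\|v_k - w^*\|^2$; the sketch below implements that regularized local update under the assumption that this is the form used. The solver, step sizes, and toy objective are illustrative choices, with $\lambda$ controlling how strongly the personalized model is pulled toward the global one.

```python
import numpy as np

def ditto_local_update(v_k, w_global, grad_fk, lam=0.1, lr=0.05, steps=20):
    """Personalized update: minimize F_k(v) + (lam/2) * ||v - w_global||^2 (illustrative solver)."""
    v = v_k.copy()
    for _ in range(steps):
        g = grad_fk(v) + lam * (v - w_global)   # local gradient plus pull toward the global model
        v -= lr * g
    return v

# toy client objective F_k(v) = 0.5 * ||v - local_target||^2
local_target = np.array([2.0, -1.0])
grad_fk = lambda v: v - local_target

w_global = np.zeros(2)
v_personal = ditto_local_update(np.zeros(2), w_global, grad_fk, lam=1.0)
print(v_personal)  # lies between the client optimum and the global model, controlled by lam
```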
Federated learning (FL) is an emerging, privacy-preserving machine learning paradigm, drawing tremendous attention in both academia and industry. A unique characteristic of FL is heterogeneity, which resides in the varying hardware specifications and dynamic states across the participating devices. Theoretically, heterogeneity can exert a huge influence on the FL training process, e.g., causing a device to become unavailable for training or unable to upload its model updates. Unfortunately, these impacts have never been systematically studied and quantified in the existing FL literature. In this paper, we carry out the first empirical study to characterize the impacts of heterogeneity in FL. We collect large-scale data from 136k smartphones that faithfully reflect heterogeneity in real-world settings. We also build a heterogeneity-aware FL platform that complies with the standard FL protocol but takes heterogeneity into consideration. Based on the data and the platform, we conduct extensive experiments to compare the performance of state-of-the-art FL algorithms under heterogeneity-aware and heterogeneity-unaware settings. Results show that heterogeneity causes non-trivial performance degradation in FL, including up to a 9.2% accuracy drop, 2.32x longer training time, and undermined fairness. Furthermore, we analyze potential impact factors and find that device failure and participant bias are two potential causes of performance degradation. Our study provides insightful implications for FL practitioners. On the one hand, our findings suggest that FL algorithm designers should account for heterogeneity during evaluation. On the other hand, our findings urge system providers to design specific mechanisms to mitigate the impacts of heterogeneity.
Cold-start problems are long-standing challenges for practical recommendation. Most existing recommendation algorithms rely on extensive observed data and are brittle in recommendation scenarios with few interactions. This paper addresses such problems using few-shot learning and meta learning. Our approach is based on the insight that good generalization from a few examples relies on both a generic model initialization and an effective strategy for adapting this model to newly arising tasks. To accomplish this, we combine scenario-specific learning with model-agnostic sequential meta-learning and unify them into an integrated end-to-end framework, namely the Scenario-specific Sequential Meta learner (or s^2 meta). By doing so, our meta-learner produces a generic initial model by aggregating contextual information from a variety of prediction tasks, while effectively adapting to specific tasks by leveraging learning-to-learn knowledge. Extensive experiments on various real-world datasets demonstrate that our proposed model achieves significant gains over state-of-the-art methods for cold-start problems in online recommendation. The model is deployed in the Guess You Like section on the front page of Mobile Taobao.
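The "generic initialization plus fast adaptation" recipe the abstract relies on can be sketched with a generic first-order meta-learning loop. This is not the s^2 meta architecture (its scenario-specific and sequential components are not modeled); the scalar parameter, task construction, and hyperparameters are all illustrative assumptions.

```python
import numpy as np

def meta_train_initialization(tasks, meta_steps=100, inner_lr=0.1, meta_lr=0.05):
    """Learn an initialization that adapts to a new task from a few examples (illustrative)."""
    theta = np.zeros(1)  # a single scalar parameter, for illustration
    for _ in range(meta_steps):
        meta_grad = 0.0
        for grad_task in tasks:
            adapted = theta - inner_lr * grad_task(theta)   # inner loop: few-shot adaptation
            meta_grad += grad_task(adapted)                  # first-order meta-gradient
        theta -= meta_lr * meta_grad / len(tasks)            # outer loop: update initialization
    return theta

# toy tasks: each recommendation scenario pulls the parameter toward a different optimum
task_targets = [1.0, 2.0, 3.0]
tasks = [lambda th, t=t: th - t for t in task_targets]

theta_init = meta_train_initialization(tasks)
new_task = lambda th: th - 2.5                               # a cold-start scenario
adapted = theta_init - 0.1 * new_task(theta_init)            # one gradient step of adaptation
print(theta_init, adapted)  # the initialization sits near the tasks, then adapts quickly
```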