During the Covid-19 pandemic, the number of users connecting to the Internet through mobile devices increased, and people now perform many of their everyday tasks on mobile phones [16]. These devices are battery-powered and have limited computational capabilities, which can be enhanced by computation offloading, whereby the required computation is performed on a third-party server in the cloud instead of on the device itself. The cloud offers virtually infinite computation and storage. We propose that by exploiting parallelism within an application's call hierarchy, we can decrease the execution time of offloadable parts and minimize the data that must be resent if a VM crashes. We determine function call paths within an application that are independent of each other and schedule each of them on a separate VM in a distributed manner. Wherever such independent paths merge, we collapse to a single VM, and whenever the paths diverge again, we schedule multiple VMs. If any single VM fails, another copy is created; however, only the code and data associated with the crashed VM need to be re-transmitted from the client device. For a face recognition application and a montage application, we reduce execution time to 27.5% and 43.43%, respectively. If either VM crashes, the data to be resent is limited to the portion of the application that had been offloaded to that VM, which depends on the level of parallelism and saves mobile battery in the resend case. We also discuss the energy consumption of using multiple VMs for a job versus a single VM for the same job.
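To make the scheduling idea concrete, the following is a minimal sketch of partitioning a call hierarchy onto VMs: branches that diverge from a node are placed on fresh VMs, and nodes where independent paths merge collapse back to a single VM. The call graph, function names, and assignment policy are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only: partition an application's call DAG onto VMs so
# that diverging branches run on separate VMs and merge points collapse back
# to one VM. The graph and names below are hypothetical.
from collections import defaultdict

CALL_GRAPH = {
    "main": ["preprocess"],
    "preprocess": ["detect_faces", "build_montage"],  # paths diverge here
    "detect_faces": ["merge_results"],
    "build_montage": ["merge_results"],               # paths merge here
    "merge_results": [],
}

def assign_vms(graph, root):
    """Map each function to a VM id: extra branches get fresh VMs, and merge
    points (nodes with several callers) collapse to the lowest caller VM."""
    parents = defaultdict(list)
    for caller, callees in graph.items():
        for callee in callees:
            parents[callee].append(caller)

    order, seen = [], set()
    def dfs(n):                        # post-order traversal of the DAG
        if n in seen:
            return
        seen.add(n)
        for c in graph[n]:
            dfs(c)
        order.append(n)
    dfs(root)

    vm_of, next_vm = {root: 0}, 1
    for node in reversed(order):       # callers before callees
        for i, child in enumerate(graph[node]):
            if len(parents[child]) > 1:          # merge: collapse to one VM
                vm_of[child] = min(vm_of.get(child, vm_of[node]), vm_of[node])
            elif i == 0:                         # first branch inherits the VM
                vm_of[child] = vm_of[node]
            else:                                # divergence: schedule a new VM
                vm_of[child], next_vm = next_vm, next_vm + 1
    return vm_of

print(assign_vms(CALL_GRAPH, "main"))
# {'main': 0, 'preprocess': 0, 'detect_faces': 0, 'build_montage': 1, 'merge_results': 0}
```

On a crash, only the sub-graph mapped to the failed VM id would need to be re-shipped from the client, which is the data-resend saving the abstract refers to.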
Online federated learning (FL) enables geographically distributed devices to learn a global shared model from locally available streaming data. Most of the online FL literature considers a best-case scenario regarding the participating clients and the communication channels. However, these assumptions are often not met in real-world applications. Asynchronous settings reflect a more realistic environment, capturing heterogeneous client participation due to available computational power and battery constraints, as well as delays caused by communication channels or straggler devices. Further, in most applications, energy efficiency must be taken into consideration. Using the principles of partial-sharing-based communications, we propose a communication-efficient asynchronous online federated learning (PAO-Fed) strategy. By reducing the communication overhead of the participants, the proposed method renders participation in the learning task more accessible and efficient. In addition, the proposed aggregation mechanism accounts for random participation, handles delayed updates, and mitigates their effect on accuracy. We prove the first- and second-order convergence of the proposed PAO-Fed method and obtain an expression for its steady-state mean square deviation. Finally, we conduct comprehensive simulations to study the performance of the proposed method on both synthetic and real-life datasets. The simulations reveal that in asynchronous settings, the proposed PAO-Fed achieves the same convergence properties as the online federated stochastic gradient method while reducing the communication overhead by 98 percent.
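As background on the partial-sharing principle, here is a minimal sketch assuming a simple rotating-block schedule: each round a client transmits only one block of its parameter vector, and the server averages whatever coordinates arrive. The block schedule and function names are illustrative assumptions, not the PAO-Fed specification, which additionally handles asynchrony and delayed updates.

```python
# Sketch of partial-sharing communication: each round a client sends only one
# rotating block of coordinates; the server averages what it receives and
# keeps stale coordinates unchanged. Illustrative only, not PAO-Fed itself.
import numpy as np

def partial_share(w, round_idx, num_blocks):
    """Indices and values of the coordinate block sent this round."""
    idx = np.arange(w.size)[round_idx % num_blocks::num_blocks]
    return idx, w[idx]

def aggregate(w_global, updates):
    """Coordinate-wise average of the received partial updates."""
    acc = np.zeros_like(w_global)
    cnt = np.zeros_like(w_global)
    for idx, vals in updates:
        acc[idx] += vals
        cnt[idx] += 1
    out = w_global.copy()
    got = cnt > 0
    out[got] = acc[got] / cnt[got]
    return out

# Each client sends 1/4 of its model per round: a 75% communication saving.
w_global = np.zeros(8)
clients = [np.full(8, c, dtype=float) for c in (1.0, 2.0, 3.0)]
updates = [partial_share(w, round_idx=0, num_blocks=4) for w in clients]
print(aggregate(w_global, updates))  # coords 0 and 4 become 2.0, rest stay 0
```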
Recent years have seen an increase in Man-at-the-End (MATE) attacks against software applications, both in number and severity. However, software protection, which aims at mitigating MATE attacks, is dominated by fuzzy concepts and security-through-obscurity. This paper presents a rationale for adopting and standardizing the protection of software as a risk management process following the NIST SP800-39 approach. We examine the relevant constructs, models, and methods needed for formalizing and automating the activities in this process in the context of MATE software protection. We highlight the open issues that the research community still has to address. We discuss the benefits that such an approach can bring to all stakeholders. In addition, we present a Proof of Concept (PoC) decision support system that instantiates many of the discussed constructs, models, and methods and automates many activities in the risk analysis methodology for the protection of software. Despite being a prototype, the PoC's validation with industry experts indicated that several aspects of the proposed risk management process can already be formalized and automated with our existing toolbox and that it can assist decision-making in industrially relevant settings.
In recent years, a new class of models for multi-agent epistemic logic has emerged, based on simplicial complexes. Since then, many variants of these simplicial models have been investigated, giving rise to different logics and axiomatizations. In this paper, we present a further generalization, where a group of agents may distinguish two worlds, even though each individual agent in the group is unable to distinguish them. For that purpose, we generalize beyond simplicial complexes and consider instead simplicial sets. By doing so, we define a new semantics for epistemic logic with distributed knowledge. As it turns out, these models are the geometric counterpart of a generalization of Kripke models, called "pseudo-models". We identify various interesting sub-classes of these models, encompassing all previously studied variants of simplicial models, and give a sound and complete axiomatization for each of them.
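For readers unfamiliar with distributed knowledge, the standard Kripke semantics below is the baseline being generalized. Classically, the group relation is exactly the intersection of the individual relations; the generalization described in the abstract corresponds to allowing a strictly finer group relation, so a group may distinguish worlds that no single member can. The second displayed clause is our reading of the abstract, not a formula taken from the paper.

```latex
% Standard semantics of distributed knowledge D_B (background only).
\[
  M, w \models D_B \varphi
  \quad\Longleftrightarrow\quad
  M, v \models \varphi \ \text{ for every } v \text{ with } (w, v) \in R_B,
\]
\[
  \text{classically } R_B = \bigcap_{a \in B} R_a,
  \qquad
  \text{generalized: } R_B \subseteq \bigcap_{a \in B} R_a .
\]
```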
Secure aggregation promises a heightened level of privacy in federated learning, ensuring that the server only has access to the decrypted aggregate update. Within this setting, linear layer leakage methods are the only data reconstruction attacks able to scale and achieve a high leakage rate regardless of the number of clients or batch size. This is achieved by increasing the size of an injected fully-connected (FC) layer. However, it results in a resource overhead that grows with the number of clients. We show that this resource overhead is caused by an incorrect perspective in all prior work, which treats an attack on an aggregate update in the same way as an attack on an individual update with a larger batch size. Instead, attacking the update from the perspective that aggregation combines multiple individual updates allows sparsity to be applied to alleviate the resource overhead. We show that the use of sparsity can decrease the model size overhead by over 327$\times$ and the computation time by 3.34$\times$ compared to SOTA while maintaining an equivalent total leakage rate of 77%, even with $1000$ clients in aggregation.
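For context on why FC layers leak data at all, here is a toy derivation-by-code assuming a layer $y = Wx + b$: the weight gradient of row $i$ is the bias gradient of neuron $i$ times the input, so dividing the two recovers $x$. This is the basic linear layer leakage mechanism, not the paper's sparsity-based aggregate attack; the loss and all names are illustrative.

```python
# Toy illustration of why linear (FC) layer gradients leak inputs. For a
# layer y = W x + b we have dL/dW[i] = dL/dy[i] * x and dL/db[i] = dL/dy[i],
# so a weight-gradient row divided by the matching bias gradient recovers x.
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out = 8, 4
x = rng.normal(size=d_in)             # private client input
W = rng.normal(size=(d_out, d_in))
b = rng.normal(size=d_out)

y = W @ x + b
g_y = 2 * y                           # gradient of a toy loss L = ||y||^2
g_W = np.outer(g_y, x)                # dL/dW, what the server observes
g_b = g_y                             # dL/db

i = int(np.argmax(np.abs(g_b)))       # any neuron with nonzero bias gradient
x_recovered = g_W[i] / g_b[i]
assert np.allclose(x_recovered, x)    # input reconstructed from gradients
```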
In this paper, we propose IMA-GNN as an In-Memory Accelerator for centralized and decentralized Graph Neural Network inference, explore its potential in both settings and provide a guideline for the community targeting flexible and efficient edge computation. Leveraging IMA-GNN, we first model the computation and communication latencies of edge devices. We then present practical case studies on GNN-based taxi demand and supply prediction and also adopt four large graph datasets to quantitatively compare and analyze centralized and decentralized settings. Our cross-layer simulation results demonstrate that on average, IMA-GNN in the centralized setting can obtain ~790x communication speed-up compared to the decentralized GNN setting. However, the decentralized setting performs computation ~1400x faster while reducing the power consumption per device. This further underlines the need for a hybrid semi-decentralized GNN approach.
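A rough latency model in the spirit of this comparison: centralized inference pays to ship every node's features to one hub but computes the whole graph there, while decentralized inference exchanges data only with neighbors over slower peer-to-peer links and computes in parallel on each device. The functional form is a plausible simplification and every constant is a placeholder, not IMA-GNN's cross-layer model or its measurements.

```python
# Back-of-the-envelope latency model contrasting the two GNN settings.
# All inputs are hypothetical placeholders, not IMA-GNN numbers.
def centralized_latency(n_nodes, feat_bytes, link_bw, hub_flops, node_flops):
    comm = n_nodes * feat_bytes / link_bw        # all features go to the hub
    comp = n_nodes * node_flops / hub_flops      # hub computes the full graph
    return comm + comp

def decentralized_latency(avg_degree, feat_bytes, p2p_bw, dev_flops, node_flops):
    comm = avg_degree * feat_bytes / p2p_bw      # exchange with neighbors only
    comp = node_flops / dev_flops                # devices compute in parallel
    return comm + comp

# Placeholder numbers chosen only to echo the direction of the trade-off:
# the hub has a fast link but serial whole-graph compute, devices have slow
# peer-to-peer links but fully parallel per-node compute.
print(centralized_latency(1e4, 1e3, 1e9, 1e12, 1e8))   # comm 0.01 s, comp 1 s
print(decentralized_latency(10, 1e3, 1e5, 1e11, 1e8))  # comm 0.1 s, comp 0.001 s
```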
Federated Learning (FL) has gained widespread popularity in recent years due to the rapid growth of advanced machine learning and artificial intelligence, along with emerging security and privacy threats. FL enables efficient model generation from the local data storage of edge devices without revealing sensitive data to any entity. While this paradigm partly mitigates the privacy issues around users' sensitive data, the performance of the FL process can be threatened and can reach a bottleneck due to growing cyber threats and privacy-violation techniques. To expedite the proliferation of the FL process, the integration of blockchain into FL environments has drawn prolific attention from academia and industry. Blockchain has the potential to prevent security and privacy threats through its decentralization, immutability, consensus, and transparency characteristics. However, if the blockchain mechanism requires costly computational resources, then resource-constrained FL clients cannot be involved in the training. Considering that, this survey focuses on reviewing the challenges, solutions, and future directions for the successful deployment of blockchain in resource-constrained FL environments. We comprehensively review the blockchain mechanisms suitable for the FL process and discuss their trade-offs under a limited resource budget. Further, we extensively analyze the cyber threats that can arise in a resource-constrained FL environment and how blockchain can play a key role in blocking those cyber attacks. To this end, we highlight potential solutions for coupling blockchain and federated learning that can offer high levels of reliability, data privacy, and distributed computing performance.
This paper presents Poplar, a new system for solving the private heavy-hitters problem. In this problem, there are many clients and a small set of data-collection servers. Each client holds a private bitstring. The servers want to recover the set of all popular strings without learning anything else about any client's string. A web-browser vendor, for instance, can use Poplar to figure out which homepages are popular without learning any user's homepage. We also consider the simpler private subset-histogram problem, in which the servers want to count how many clients hold strings in a particular set without revealing this set to the clients. Poplar uses two data-collection servers and, in a protocol run, each client sends only a single message to the servers. Poplar protects client privacy against arbitrary misbehavior by one of the servers, and our approach requires no public-key cryptography (except for secure channels) and no general-purpose multiparty computation. Instead, we rely on incremental distributed point functions, a new cryptographic tool that allows a client to succinctly secret-share the labels on the nodes of an exponentially large binary tree, provided that the tree has a single non-zero path. Along the way, we develop new general tools for providing malicious security in applications of distributed point functions.
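To give intuition for the tree-based aggregation that incremental DPFs make private, here is a toy and deliberately non-cryptographic sketch: each client additively secret-shares the one-hot indicator of its string's prefix between the two servers, and summing shares reveals only aggregate prefix counts, from which popular prefixes can be extended level by level. A real incremental DPF makes each share succinct and pseudorandom; nothing below is Poplar's actual protocol.

```python
# Toy, NON-cryptographic illustration of prefix counting with additive
# secret sharing between two servers. Real incremental DPFs compress the
# per-level shares; this sketch is for intuition only.
import random

P = 2**61 - 1  # arithmetic modulo a prime

def share_prefix(bits, depth):
    """Additively share the one-hot vector over all 2^depth prefixes."""
    hot = int(bits[:depth], 2)
    share0 = [random.randrange(P) for _ in range(2 ** depth)]
    share1 = [(-s) % P for s in share0]
    share1[hot] = (share1[hot] + 1) % P          # shares sum to the one-hot
    return share0, share1

clients = ["0110", "0111", "1010", "0110"]        # private bitstrings
depth = 2
sums = [[0] * (2 ** depth) for _ in range(2)]     # one accumulator per server
for c in clients:
    s0, s1 = share_prefix(c, depth)
    for j in range(2 ** depth):
        sums[0][j] = (sums[0][j] + s0[j]) % P
        sums[1][j] = (sums[1][j] + s1[j]) % P

counts = [(a + b) % P for a, b in zip(*sums)]
print(counts)  # [0, 3, 1, 0]: three clients share prefix '01', one has '10'
```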
With the maturity of web services, containers, and cloud computing technologies, large services in traditional systems (e.g., the computation services of machine learning and artificial intelligence) are gradually being broken down into many microservices to increase service reusability and flexibility. This study therefore proposes an efficiency analysis framework based on queuing models to analyze the efficiency difference between a traditional large service and its decomposition into n microservices. For generality, this study considers different service time distributions (e.g., exponentially distributed service times and fixed service times) and explores system efficiency in the worst-case and best-case scenarios through queuing models (i.e., the M/M/1 and M/D/1 queuing models). In each experiment, the total time required for the original large service was higher than that required after breaking it down into multiple microservices, so the decomposition improves system efficiency. It can also be observed that in the best-case scenario, the improvement becomes more significant as the arrival rate increases, whereas in the worst-case scenario only a slight improvement is achieved. This study found that decomposition into multiple microservices can effectively improve system efficiency and proved that the best improvement is achieved when the computation time of the large service is evenly distributed among the microservices. These findings can therefore serve as a reference guide for the future development of microservice architectures.
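The comparison can be checked with the textbook sojourn-time formulas for the two queues, assuming the even split described above: a large service of rate $\mu$ becomes $n$ pipelined microservices of rate $n\mu$ each. The pipelining model is our reading of the abstract; the formulas themselves are standard results.

```python
# Worked check using standard M/M/1 and M/D/1 sojourn-time formulas, with a
# large service of rate mu split into n pipelined stages of rate n*mu each.
def mm1_sojourn(lam, mu):
    return 1.0 / (mu - lam)                        # W = 1/(mu - lambda)

def md1_sojourn(lam, mu):
    return 1.0 / mu + lam / (2 * mu * (mu - lam))  # deterministic service time

lam, mu = 0.5, 1.0  # arrival rate; service rate of the original large service
for n in (1, 2, 4, 8):
    t_mm1 = n * mm1_sojourn(lam, n * mu)   # n stages, each n times faster
    t_md1 = n * md1_sojourn(lam, n * mu)
    print(f"n={n}: M/M/1 total={t_mm1:.3f}, M/D/1 total={t_md1:.3f}")
# n=1 gives 2.000 / 1.500; n=4 gives 1.143 / 1.036: total time falls as n
# grows, consistent with the study's observation.
```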
Deep neural networks (DNNs) have succeeded in many different perception tasks, e.g., computer vision, natural language processing, and reinforcement learning. High-performing DNNs, however, rely heavily on intensive resource consumption. For example, training a DNN requires high dynamic memory, a large-scale dataset, and a large number of computations (a long training time); even inference with a DNN demands a large amount of static storage, computations (a long inference time), and energy. Therefore, state-of-the-art DNNs are often deployed on cloud servers with a large number of super-computers, a high-bandwidth communication bus, a shared storage infrastructure, and a high-power supply. Recently, new emerging intelligent applications, e.g., AR/VR, mobile assistants, and the Internet of Things, require us to deploy DNNs on resource-constrained edge devices. Compared to a cloud server, edge devices often have a rather small amount of resources. To deploy DNNs on edge devices, we need to reduce their size, i.e., we target a better trade-off between resource consumption and model accuracy. In this dissertation, we studied four edge intelligence scenarios, i.e., Inference on Edge Devices, Adaptation on Edge Devices, Learning on Edge Devices, and Edge-Server Systems, and developed different methodologies to enable deep learning in each scenario. Since current DNNs are often over-parameterized, our goal is to find and reduce the redundancy of the DNNs in each scenario.
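As one concrete instance of removing redundancy, the sketch below applies global magnitude pruning, a standard compression technique; it is a generic example and not necessarily one of the dissertation's methods.

```python
# Generic example of reducing DNN redundancy via global magnitude pruning:
# zero out the fraction `sparsity` of weights with the smallest magnitudes.
# A standard technique, not claimed to be the dissertation's method.
import numpy as np

def magnitude_prune(weights, sparsity=0.9):
    """Return pruned copies of the weight arrays and the achieved sparsity."""
    flat = np.concatenate([w.ravel() for w in weights])
    k = int(sparsity * flat.size)
    thresh = np.partition(np.abs(flat), k)[k]      # global magnitude threshold
    pruned = [np.where(np.abs(w) < thresh, 0.0, w) for w in weights]
    zeros = sum(int((p == 0).sum()) for p in pruned)
    return pruned, zeros / flat.size

rng = np.random.default_rng(0)
layers = [rng.normal(size=(64, 32)), rng.normal(size=(32, 10))]
_, achieved = magnitude_prune(layers, sparsity=0.9)
print(f"achieved sparsity: {achieved:.2%}")  # roughly 90% of weights removed
```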
In recent years, mobile devices have developed rapidly, gaining stronger computational capability and larger storage. Some computation-intensive machine learning and deep learning tasks can now be run on mobile devices. To take advantage of the resources available on mobile devices and to preserve users' privacy, the idea of mobile distributed machine learning has been proposed. It uses local hardware resources and local data to solve machine learning sub-problems on mobile devices, and only uploads computation results, rather than the original data, to contribute to the optimization of the global model. This architecture can not only relieve the computation and storage burden on servers, but also protect users' sensitive information. Another benefit is bandwidth reduction, as various kinds of local data can now participate in the training process without being uploaded to the server. In this paper, we provide a comprehensive survey of recent studies on mobile distributed machine learning. We survey a number of widely used mobile distributed machine learning methods and present an in-depth discussion of the challenges and future directions in this area. We believe that this survey can provide a clear overview of mobile distributed machine learning and offer guidelines on applying it to real applications.