Serverless applications can be particularly difficult to troubleshoot, as they are often composed of various managed and partly managed services. Faults are often unpredictable and can occur at multiple points, even in simple compositions. Each additional function or service in a serverless composition introduces a new possible fault source and a new layer that can obfuscate faults. Currently, serverless platforms offer only limited support for identifying runtime faults; developers looking to observe their serverless compositions often have to rely on scattered logs and ambiguous error messages to pinpoint root causes. In this paper, we investigate the use of distributed tracing for improving the observability of faults in serverless applications. To this end, we first introduce a model for characterizing fault observability, and then present two prototypical tracing implementations: a developer-driven and a platform-supported approach. We evaluate both approaches against our model, measure the associated trade-offs (execution latency, resource utilization), and contribute new insights for troubleshooting serverless compositions.
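To make the developer-driven style concrete, the following minimal sketch shows how a trace ID can be threaded through a serverless handler so that faults surface in structured span records. Handler and field names are hypothetical, and a real deployment would more likely use a library such as OpenTelemetry or AWS X-Ray:

```python
import json
import time
import uuid

def traced(handler):
    """Wrap a serverless handler so each invocation emits a span record.

    Developer-driven tracing sketch: the trace ID arrives in the event
    (or is created at the composition's entry point) and must be
    forwarded manually in every downstream call.
    """
    def wrapper(event, context=None):
        trace_id = event.setdefault("trace_id", uuid.uuid4().hex)
        span_id = uuid.uuid4().hex
        start = time.time()
        status = "ok"
        try:
            return handler(event, context)
        except Exception:
            status = "error"  # the fault becomes observable in the span
            raise
        finally:
            print(json.dumps({
                "trace_id": trace_id, "span_id": span_id,
                "handler": handler.__name__, "status": status,
                "duration_ms": round((time.time() - start) * 1000, 2),
            }))
    return wrapper

@traced
def resize_image(event, context=None):  # hypothetical composition step
    return {"trace_id": event["trace_id"], "resized": True}

print(resize_image({"bucket": "uploads", "key": "cat.jpg"}))
```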
Data centers have become a popular computing platform for a wide range of applications, and they account for nearly 2% of total US energy consumption. It has therefore become important to optimize data center power consumption and reduce the associated energy footprint. Most existing work optimizes power in servers and networks independently, rather than addressing them together in a holistic fashion that has the potential to achieve greater savings. In this article, we present PopcornsPro, a cooperative server-network framework for energy optimization. We present a comprehensive power model for heterogeneous data center switches, along with low-power mode designs, in combination with the server power model. We design job scheduling algorithms that place tasks onto servers in a power-aware manner, such that both servers and network switches can take effective advantage of low-power states and available network link capacities. Our experimental results show that we achieve significantly higher savings, up to 80%, compared with well-known server and network power optimization policies.
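As a rough illustration of power-aware placement, a first-fit-decreasing consolidation heuristic packs load onto as few servers as possible so idle servers (and switches serving only them) can sleep. This is an illustrative sketch, not PopcornsPro's actual scheduling algorithm:

```python
def place_tasks(tasks, capacity=1.0):
    """First-fit-decreasing consolidation sketch: pack tasks onto the
    fewest active servers so that idle servers, and the network switches
    that serve only them, can be put into a low-power state.
    """
    active = []      # residual capacity of each powered-on server
    placement = []   # (task index, server index) pairs
    for i, demand in sorted(enumerate(tasks), key=lambda p: -p[1]):
        for s, free in enumerate(active):
            if free >= demand:
                active[s] -= demand
                placement.append((i, s))
                break
        else:
            active.append(capacity - demand)  # power on a new server
            placement.append((i, len(active) - 1))
    return placement, len(active)

tasks = [0.6, 0.3, 0.5, 0.2, 0.4]
placement, servers_on = place_tasks(tasks)
print(f"{servers_on} servers powered on for {len(tasks)} tasks")  # 2
```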
IoT application development usually involves separate programming on the device side and the server side. While this separate programming style is sufficient for many simple applications, it is ill-suited to complex applications that involve intricate interactions and intensive data processing. Motivated by the increasing popularity of edge computing, we propose EdgeProg, an edge-centric programming approach that simplifies IoT application programming. With EdgeProg, users write application logic in a centralized manner using an augmented If-This-Then-That (IFTTT) syntax and a virtual sensor mechanism. The program is processed at the edge server, which automatically generates the actual application code and intelligently partitions it into device code and server code to achieve optimal latency. EdgeProg employs dynamic linking and loading to deploy the device code on a variety of IoT devices, which initially run no application-specific code. Results show that EdgeProg achieves average reductions of 20.96%, 27.8%, and 79.41% in execution latency, energy consumption, and lines of code, respectively, compared with state-of-the-art approaches.
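The sketch below imitates the centralized IFTTT-style programming model in Python; the VirtualSensor class, device names, and rule shape are illustrative assumptions, not EdgeProg's actual syntax:

```python
class Fan:  # hypothetical actuator stub
    def turn_on(self):
        print("fan on")

class VirtualSensor:
    """Fuses raw readings into a higher-level value (here, a moving
    average); EdgeProg-style partitioning could place such logic on
    either the device or the edge server."""
    def __init__(self, window):
        self.window, self.values = window, []

    def update(self, reading):
        self.values = (self.values + [reading])[-self.window:]
        return sum(self.values) / len(self.values)

avg_temp = VirtualSensor(window=5)
fan = Fan()

def on_reading(reading):
    # IF the smoothed temperature exceeds 30 C THEN turn the fan on.
    if avg_temp.update(reading) > 30.0:
        fan.turn_on()

for r in [28.0, 29.5, 31.0, 33.0, 34.5]:
    on_reading(r)
```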
Large linear systems of saddle-point type arise in a wide variety of applications throughout computational science and engineering; in particular, discretizations of distributed control problems have a saddle-point structure, and the numerical solution of such problems has attracted considerable interest in recent years. In this work, we propose a novel Braess-Sarazin multigrid relaxation scheme for finite element discretizations of distributed control problems, in which the stiffness matrix obtained from the five-point finite difference method for the Laplacian is used to approximate the inverse of the mass matrix arising in the saddle-point system. We apply local Fourier analysis to examine the smoothing properties of the Braess-Sarazin relaxation and derive the optimal smoothing factor. Numerical experiments validate our theoretical results. The proposed relaxation scheme proves highly efficient and robust with respect to both the regularization parameter and the grid size.
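In generic notation (the paper's specific matrices differ), a Braess-Sarazin sweep for a saddle-point system takes the following standard form; the contribution above lies in the particular approximation used inside the relaxation:

```latex
% Generic saddle-point system and Braess--Sarazin relaxation
% (illustrative notation; not the paper's exact matrices).
\[
\mathcal{A}\,x =
\begin{pmatrix} A & B^{T} \\ B & 0 \end{pmatrix}
\begin{pmatrix} u \\ p \end{pmatrix}
=
\begin{pmatrix} f \\ g \end{pmatrix}
\]
% One sweep updates the iterate (u^k, p^k) by solving a perturbed system:
\[
\begin{pmatrix} \alpha C & B^{T} \\ B & 0 \end{pmatrix}
\begin{pmatrix} \delta u \\ \delta p \end{pmatrix}
=
\begin{pmatrix} f \\ g \end{pmatrix}
- \mathcal{A}
\begin{pmatrix} u^{k} \\ p^{k} \end{pmatrix},
\qquad
\begin{pmatrix} u^{k+1} \\ p^{k+1} \end{pmatrix}
=
\begin{pmatrix} u^{k} \\ p^{k} \end{pmatrix}
+ \omega
\begin{pmatrix} \delta u \\ \delta p \end{pmatrix},
\]
% where C \approx A is cheap to invert and the damping parameters
% \alpha, \omega can be chosen via local Fourier analysis.
```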
Cell-free massive MIMO is one of the core technologies for future wireless networks. It is expected to bring enormous benefits, including ultra-high reliability, high data throughput, energy efficiency, and uniform coverage. As a radically distributed system, cell-free massive MIMO relies critically on efficient distributed processing algorithms. In this paper, we propose a distributed expectation propagation (EP) detector for cell-free massive MIMO, which consists of two modules: a nonlinear module at the central processing unit (CPU) and a linear module at each access point (AP). The turbo principle from iterative channel decoding is utilized to compute and pass extrinsic information between the two modules. An analytical framework is provided to characterize the asymptotic performance of the proposed EP detector with a large number of antennas. Furthermore, a distributed joint channel estimation and data detection (JCD) algorithm is developed to handle the practical setting of imperfect channel state information (CSI). Simulation results show that the proposed method outperforms existing detectors for cell-free massive MIMO systems in terms of bit-error rate, and demonstrate that the developed theoretical analysis accurately predicts system performance. Finally, we show that with imperfect CSI, the proposed JCD algorithm significantly improves system performance and enables non-orthogonal pilots to reduce the pilot overhead.
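The extrinsic-information exchange between the two modules follows the standard turbo/EP recipe of Gaussian division; a sketch in generic scalar notation (not the paper's exact symbols):

```latex
% Standard EP extrinsic computation via Gaussian division
% (generic notation; the paper's module-specific quantities differ).
% Given a module's posterior approximation N(x; mu_post, v_post) and its
% cavity/prior N(x; mu_pri, v_pri), the extrinsic message is
\[
v_{\mathrm{ext}} = \left( v_{\mathrm{post}}^{-1} - v_{\mathrm{pri}}^{-1} \right)^{-1},
\qquad
\mu_{\mathrm{ext}} = v_{\mathrm{ext}}
\left( \frac{\mu_{\mathrm{post}}}{v_{\mathrm{post}}}
     - \frac{\mu_{\mathrm{pri}}}{v_{\mathrm{pri}}} \right),
\]
% so each module forwards only the information the other has not already
% used, exactly as in iterative (turbo) channel decoding.
```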
In distributed databases, data is replicated and stored redundantly across multiple servers for availability. We focus on databases with frequent reads and writes, where both read and write latencies matter, in contrast to databases designed primarily for either read-heavy or write-heavy applications. Redundancy has contrasting effects on the two: read latency can be reduced through parallel access to multiple servers, whereas write latency increases because a larger number of replicas must be updated. We quantify this tradeoff between read and write latency as a function of redundancy, and provide a closed-form approximation when request arrivals are Poisson and service is memoryless. We show empirically that this approximation is tight across the full range of system parameters, and thereby provide guidelines for redundancy selection in distributed databases.
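A small Monte Carlo sketch makes the tradeoff visible. It uses memoryless (exponential) service as above but ignores queueing delays, so it illustrates the shape of the tradeoff rather than the closed-form approximation itself; the parameters are illustrative:

```python
import random

def simulate(n, r, jobs=100_000, mu=1.0):
    """Sketch of the read/write latency tradeoff under (n, r) replication
    with exponential service: a read completes when the fastest of r
    replicas responds, a write when all n replicas are updated.
    """
    read = write = 0.0
    for _ in range(jobs):
        samples = [random.expovariate(mu) for _ in range(n)]
        read += min(samples[:r])   # fastest of r parallel reads
        write += max(samples)      # slowest of n replica updates
    return read / jobs, write / jobs

for n in (1, 3, 5):
    rd, wr = simulate(n=n, r=min(2, n))
    print(f"n={n}: mean read {rd:.3f}, mean write {wr:.3f}")
```

As redundancy n grows, the mean read latency falls while the mean write latency rises, which is exactly the tension the closed-form analysis quantifies.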
The advent of Bitcoin, and consequently Blockchain, has ushered in a new era of decentralization, enabling mutually distrusting entities to work collaboratively toward a common objective. However, current Blockchain technologies lack the scalability needed for Internet of Things (IoT) applications. Many devices on the Internet have the computational and communication capabilities to facilitate decision-making, and these devices will soon form a 50-billion-node network. Furthermore, new IoT business models such as Sensor-as-a-Service (SaaS) require a robust Trust and Reputation System (TRS). In this paper, we introduce a distributed ledger that combines Tangle and Blockchain as a TRS framework for IoT, providing the maintainability of the former and the scalability of the latter. The proposed ledger can handle large numbers of IoT device transactions and allows low-power nodes to join and contribute. Employing a distributed ledger mitigates many threats, such as whitewashing attacks, and by combining payment and rating protocols, the proposed approach provides cleaner data to the upper-layer reputation algorithm.
Traffic flows in a distributed computing network require both transmission and processing, and can be interdicted by removing either communication or computation resources. We study the robustness of a distributed computing network under failures of communication links and computation nodes. We define cut metrics that measure connectivity, and show that a non-zero gap can exist between the maximum flow and the minimum cut. Moreover, we study a network flow interdiction problem that minimizes the maximum flow by removing communication and computation resources within a given budget. We develop mathematical programs to compute the optimal interdiction, as well as polynomial-time approximation algorithms that achieve near-optimal interdiction in simulation.
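Node-capacitated flow of this kind is conveniently modeled by splitting each computation node into an in/out pair; a small sketch using networkx, with a graph and capacities that are illustrative rather than taken from the paper:

```python
import networkx as nx

# Model a computing network where links (communication) and nodes
# (computation) both have capacities, by splitting each computation node
# v into v_in -> v_out with an internal edge carrying the node capacity.
G = nx.DiGraph()

def add_compute_node(g, v, capacity):
    g.add_edge(f"{v}_in", f"{v}_out", capacity=capacity)

def add_link(g, u, v, capacity):
    g.add_edge(f"{u}_out", f"{v}_in", capacity=capacity)

for node, cap in [("s", 10), ("a", 3), ("b", 4), ("t", 10)]:
    add_compute_node(G, node, cap)
add_link(G, "s", "a", 5)
add_link(G, "s", "b", 5)
add_link(G, "a", "t", 5)
add_link(G, "b", "t", 5)

flow_value, _ = nx.maximum_flow(G, "s_in", "t_out")
print(flow_value)  # 7: limited by the computation capacities of a and b
```

In this representation, interdicting a computation node corresponds to deleting its internal edge, and interdicting a link to deleting the corresponding inter-node edge.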
Since their development, deep neural networks have made substantial contributions to everyday life, and machine learning now provides advice in many aspects of daily life that rivals human judgment. Despite these achievements, however, the design and training of neural networks remain challenging and unpredictable procedures. To lower the technical barrier for common users, automated hyper-parameter optimization (HPO) has become a popular topic in both academia and industry. This paper provides a review of the most essential topics in HPO. The first section introduces the key hyper-parameters related to model training and structure, discusses their importance, and describes methods for defining their value ranges. The review then focuses on major optimization algorithms and their applicability, covering their efficiency and accuracy, especially for deep learning networks. Next, it surveys major services and toolkits for HPO, comparing their support for state-of-the-art search algorithms, their compatibility with major deep learning frameworks, and their extensibility for user-designed modules. The paper concludes with open problems in applying HPO to deep learning, a comparison of optimization algorithms, and prominent approaches to model evaluation under limited computational resources.
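As a minimal example of the kind of search such toolkits automate, a random-search HPO loop fits in a few lines; the objective below is a stand-in for real model training, and the search space is illustrative:

```python
import math
import random

def random_search(train_and_eval, space, trials=20, seed=0):
    """Minimal random-search HPO sketch: sample each hyper-parameter
    from its range and keep the best-scoring configuration."""
    rng = random.Random(seed)
    best_cfg, best_score = None, float("-inf")
    for _ in range(trials):
        cfg = {name: sample(rng) for name, sample in space.items()}
        score = train_and_eval(cfg)
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score

space = {
    # log-uniform learning rate, a common way to define its value range
    "lr": lambda rng: 10 ** rng.uniform(-5, -1),
    "batch_size": lambda rng: rng.choice([32, 64, 128, 256]),
}

def fake_objective(cfg):  # placeholder for actual training + validation
    return -abs(math.log10(cfg["lr"]) + 3) - cfg["batch_size"] / 1024

print(random_search(fake_objective, space))
```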
Most Deep Reinforcement Learning (Deep RL) algorithms require a prohibitively large number of training samples to learn complex tasks. Many recent works on speeding up Deep RL have focused on distributed training and simulation; while distributed training is often done on the GPU, simulation typically is not. In this work, we propose GPU-accelerated RL simulation as an alternative to CPU-based simulation. Using NVIDIA Flex, a GPU-based physics engine, we show promising speed-ups when learning various continuous-control locomotion tasks. With one GPU and one CPU core, we are able to train the Humanoid running task in less than 20 minutes, using 10-1000x fewer CPU cores than previous works. We also demonstrate the scalability of our simulator to multi-GPU settings for training more challenging locomotion tasks.
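The key idea, keeping simulation state resident on the GPU and advancing thousands of environments in one batched tensor operation, can be sketched as follows. This is a toy point-mass integrator in PyTorch, not NVIDIA Flex:

```python
import torch

# Thousands of environments advance in a single batched tensor op, and
# observations never leave the device, avoiding CPU<->GPU transfers.
device = "cuda" if torch.cuda.is_available() else "cpu"
num_envs, dt = 4096, 1.0 / 60.0

pos = torch.zeros(num_envs, 3, device=device)
vel = torch.zeros(num_envs, 3, device=device)

def step(actions):
    """Advance all environments in parallel; actions: [num_envs, 3]."""
    global pos, vel
    vel = vel + actions * dt   # apply forces (unit mass)
    pos = pos + vel * dt       # integrate positions
    return pos                 # observations stay on the GPU

obs = step(torch.randn(num_envs, 3, device=device))
print(obs.shape)  # torch.Size([4096, 3])
```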
In this paper we describe a new mobile architecture, MobileNetV2, that improves the state-of-the-art performance of mobile models on multiple tasks and benchmarks, as well as across a spectrum of model sizes. We also describe efficient ways of applying these mobile models to object detection in a novel framework we call SSDLite. Additionally, we demonstrate how to build mobile semantic segmentation models through a reduced form of DeepLabv3, which we call Mobile DeepLabv3. The MobileNetV2 architecture is based on an inverted residual structure, in which the input and output of the residual block are thin bottleneck layers, in contrast to traditional residual models, which use expanded representations at the input; MobileNetV2 uses lightweight depthwise convolutions to filter features in the intermediate expansion layer. Additionally, we find that it is important to remove non-linearities in the narrow layers in order to maintain representational power. We demonstrate that this improves performance and provide the intuition that led to this design. Finally, our approach allows decoupling of the input/output domains from the expressiveness of the transformation, which provides a convenient framework for further analysis. We measure our performance on ImageNet classification, COCO object detection, and VOC image segmentation, and evaluate the trade-offs between accuracy, the number of operations measured by multiply-adds (MAdds), and the number of parameters.
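A compact PyTorch rendering of the inverted residual block described above, following the paper's layer pattern (1x1 expansion with ReLU6, 3x3 depthwise convolution, linear 1x1 projection) as a sketch rather than the reference implementation:

```python
import torch
from torch import nn

class InvertedResidual(nn.Module):
    """Inverted residual block: expand to a wide intermediate layer,
    filter with a depthwise convolution, then project linearly (no
    ReLU6) back to a thin bottleneck, with a shortcut when shapes match.
    """
    def __init__(self, c_in, c_out, stride=1, expand=6):
        super().__init__()
        hidden = c_in * expand
        self.use_res = stride == 1 and c_in == c_out
        self.block = nn.Sequential(
            nn.Conv2d(c_in, hidden, 1, bias=False),          # expand
            nn.BatchNorm2d(hidden), nn.ReLU6(inplace=True),
            nn.Conv2d(hidden, hidden, 3, stride, padding=1,  # depthwise
                      groups=hidden, bias=False),
            nn.BatchNorm2d(hidden), nn.ReLU6(inplace=True),
            nn.Conv2d(hidden, c_out, 1, bias=False),         # linear proj
            nn.BatchNorm2d(c_out),                           # no ReLU6
        )

    def forward(self, x):
        out = self.block(x)
        return x + out if self.use_res else out

x = torch.randn(1, 32, 56, 56)
print(InvertedResidual(32, 32)(x).shape)  # torch.Size([1, 32, 56, 56])
```

The absence of a non-linearity after the projection convolution is the "linear bottleneck" the abstract refers to: applying ReLU6 to the narrow output would destroy information that the thin layer cannot afford to lose.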