Optical computing has recently been proposed as a new compute paradigm to meet the demands of future AI/ML workloads in datacenters and supercomputers. However, implementations proposed so far suffer from a lack of scalability, large footprints and high power consumption, and incomplete system-level architectures that prevent integration into existing datacenter infrastructure for real-world applications. In this work, we present a truly scalable optical AI accelerator based on a crossbar architecture. We identify all major roadblocks and address them in this design. Weights are stored on chip in phase change material (PCM) that can be monolithically integrated in silicon photonic processes. All electro-optical components and circuit blocks are modeled on performance metrics measured in a 45nm monolithic silicon photonic process, which can be co-packaged with advanced CPUs/GPUs and HBM memory. We also present system-level modeling and analysis of our chip's performance on ResNet-50 v1.5, considering all critical parameters, including memory size, array size, photonic losses, and the energy consumption of peripheral electronics. Both on-chip SRAM and off-chip DRAM energy overheads are included in this modeling. We additionally show how a dual-core crossbar design can eliminate programming time overhead at practical SRAM block sizes and batch sizes. Our results show that a 128 x 128 instance of the proposed architecture can achieve inferences per second (IPS) similar to the Nvidia A100 GPU at 15.4 times lower power and 7.24 times lower area.
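To make the crossbar operation concrete, here is a minimal NumPy sketch of the functional (digital) behavior of one analog crossbar matrix-vector multiply. It assumes ideal devices; the function name, bit width, and quantization scheme are illustrative assumptions, not details from the paper.

```python
import numpy as np

def crossbar_mvm(weights, x, bits=8):
    """Functional model of one analog crossbar matrix-vector multiply.

    weights : (rows, cols) array, the values programmed into the PCM cells
    x       : (rows,) input vector encoded onto the optical row inputs
    bits    : assumed resolution of the DAC/ADC periphery
    """
    scale = 2 ** (bits - 1) - 1
    # Quantize weights and inputs to the resolution of the periphery.
    w_q = np.round(weights * scale) / scale
    x_q = np.round(x * scale) / scale
    # Each output column accumulates the products along its rows, which is
    # what the analog summation in a crossbar computes in a single step.
    return w_q.T @ x_q

# A 128 x 128 crossbar computes a full matrix-vector product per cycle.
w = np.random.uniform(-1, 1, (128, 128))
x = np.random.uniform(0, 1, 128)
y = crossbar_mvm(w, x)
```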
The Metaverse is an emerging virtual universe in which humans can interact in real time and form social links as strong as those in the physical world, opening a new era of the Internet and of human interaction. In the Metaverse, an immersive and photorealistic environment supports the social activities of digital avatars, including education, meetings, and shopping, built on critical technologies including 3D rendering, extended reality, digital twins, artificial intelligence, and Blockchain. However, limited computation, storage, and energy resources restrict the development of the Metaverse, and a series of system issues (e.g., latency, security, and battery life) continue to arise; finding measures to mitigate these adverse effects has therefore become a central concern. Mobile edge computing (MEC), a distributed computing paradigm, offloads computation-intensive tasks to the edge of the network, bringing resources as close as possible to end devices and addressing the shortcomings mentioned above. In this paper, we present a comprehensive survey of the MEC-based Metaverse. Particular emphasis is given to the convergence of technologies, architectures, and application scenarios, e.g., BoundlessXR and CloudXR. Finally, we introduce potential future directions for developing Metaverse systems.
3D reconstruction from videos has become increasingly popular for various applications, including navigation for autonomous driving of robots and drones, augmented reality (AR), and 3D modeling. The task often combines traditional image/video processing algorithms with deep neural networks (DNNs). Although recent developments in deep learning have improved accuracy, the large number of computations involved results in low speed and high power consumption. Although various domain-specific hardware accelerators exist for DNNs, it is not easy to accelerate the entire process of applications that alternate between traditional image/video processing algorithms and DNNs. Thus, FPGA-based end-to-end acceleration is required for such complicated applications in low-power embedded environments. This paper proposes a novel FPGA-based accelerator for DeepVideoMVS, a DNN-based depth estimation method for 3D reconstruction. We employ HW/SW co-design to appropriately utilize the heterogeneous components in modern SoC FPGAs, such as the programmable logic (PL) and CPU, according to the inherent characteristics of the method. As some operations are unsuitable for hardware implementation, we determine which operations to implement in software by analyzing how many times each operation is performed and its memory access pattern, and then weighing the ease of hardware implementation against the expected acceleration from hardware. The hardware and software implementations are executed in parallel on the PL and CPU to hide their execution latencies. The proposed accelerator was developed on a Xilinx ZCU104 board using NNgen, an open-source high-level synthesis (HLS) tool. Experiments showed that the proposed accelerator operates 60.2 times faster than a software-only implementation on the same FPGA board, with minimal accuracy degradation.
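As a rough illustration of the partitioning criterion described above (operation frequency, memory access pattern, ease of hardware implementation), here is a toy cost model in Python. The operation list, fields, and thresholds are invented for illustration; this is not the authors' tool.

```python
# Hypothetical sketch: score operations and assign each to the PL or the CPU.
from dataclasses import dataclass

@dataclass
class Op:
    name: str
    calls: int          # how many times the operation runs per frame
    flops: int          # arithmetic work per call
    streaming: bool     # True if memory access is sequential/streamable
    hw_friendly: bool   # True if the op maps well to a pipelined design

def assign(op: Op) -> str:
    # Ops that dominate runtime and stream through memory benefit most from
    # hardware; rare or irregular ops are cheaper to leave in software.
    if op.hw_friendly and op.streaming and op.calls * op.flops > 1e8:
        return "PL"
    return "CPU"

ops = [
    Op("conv_encoder", calls=1, flops=2_000_000_000, streaming=True, hw_friendly=True),
    Op("cost_volume", calls=1, flops=500_000_000, streaming=True, hw_friendly=True),
    Op("pose_update", calls=1, flops=50_000, streaming=False, hw_friendly=False),
]
for op in ops:
    print(op.name, "->", assign(op))
```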
Emerging modular vehicle (MV) technology allows vehicles to physically connect to and disconnect from one another and thus travel in platoons for lower energy consumption. Moreover, a platoon of MVs can be regarded as a new bus-like platform with expanded on-board carrying capacity, providing service throughput that scales with demand density. This concept may resolve the mismatch between fixed vehicle capacity and the temporal-spatial variation of demand in current transportation systems. To obtain optimal assignments and routes for MV operations, we formulate a mixed integer linear programming (MILP) model that minimizes the weighted total of vehicle travel cost and passenger service time. The temporal and spatial synchronization of vehicle platoons and passenger en-route transfers is determined and optimized by the MILP model while constructing the paths. Heuristic algorithms based on large neighborhood search are developed to solve the modular dial-a-ride problem (MDARP) in practical scenarios. A set of small-scale synthetic numerical experiments is used to evaluate the optimality gap and computation time of the proposed MILP model and heuristic algorithms. Large-scale experiments are conducted on the Anaheim network with 378 candidate join/split nodes to further explore the potential of MVs and identify their ideal operation scenarios. The results show that MV technology can save up to 52.0% in vehicle travel cost, 35.6% in passenger service time, and 29.4% in total cost compared with existing on-demand mobility services, and suggest that MVs benefit most from platooning when serving enclave pairs as a hub-and-spoke service.
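To make the objective concrete, the sketch below shows a toy assignment MILP in PuLP that minimizes the weighted total of vehicle travel cost and passenger service time. It omits the platoon-synchronization, routing, and en-route transfer constraints of the full MDARP model, and all names, weights, and data are made up.

```python
import pulp

vehicles = ["v1", "v2"]
requests = ["r1", "r2", "r3"]
travel_cost = {("v1", "r1"): 4, ("v1", "r2"): 6, ("v1", "r3"): 9,
               ("v2", "r1"): 7, ("v2", "r2"): 3, ("v2", "r3"): 5}
service_time = {("v1", "r1"): 10, ("v1", "r2"): 12, ("v1", "r3"): 20,
                ("v2", "r1"): 15, ("v2", "r2"): 8,  ("v2", "r3"): 11}
alpha, beta = 1.0, 0.5  # weights on vehicle cost vs. passenger time

prob = pulp.LpProblem("toy_mdarp", pulp.LpMinimize)
x = {k: pulp.LpVariable(f"assign_{k[0]}_{k[1]}", cat="Binary")
     for k in travel_cost}

# Objective: weighted total of vehicle travel cost and passenger service time.
prob += pulp.lpSum(x[k] * (alpha * travel_cost[k] + beta * service_time[k])
                   for k in travel_cost)
# Every request is served exactly once.
for r in requests:
    prob += pulp.lpSum(x[(v, r)] for v in vehicles) == 1

prob.solve()
for k, var in x.items():
    if var.value() == 1:
        print(k)
```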
Key distillation is an essential component of every quantum key distribution (QKD) system because it compensates for the inherent transmission errors of the quantum channel. However, the throughput and interoperability aspects of post-processing engine design are often neglected, and existing solutions provide no guarantees on either. In this paper, we propose a high-throughput key distillation framework with support for multiple protocols, implemented on a Field Programmable Gate Array (FPGA) using High-Level Synthesis (HLS). The proposed design uses a Hadoop framework with a map-reduce programming model to efficiently process large chunks of raw data across the limited computing resources of an FPGA. We present a novel hardware-efficient integrated post-processing architecture that offers dynamic error correction, a side-channel-resistant authentication scheme, and a built-in high-speed encryption application that uses the distilled key for secure communication. We also develop a semi-automated HLS framework capable of handling different QKD protocols with promising speedup. Overall, the experimental results show a significant improvement in performance, and the design is compatible with any discrete-variable QKD system.
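The map-reduce pattern can be illustrated with a toy software model: raw key chunks are processed independently (the "map" step discloses a random sample of bits to count errors) and the per-chunk counts are aggregated (the "reduce" step) into a global quantum bit error rate (QBER) estimate. The Python sketch below is purely illustrative and is not the paper's FPGA implementation.

```python
import random

def map_chunk(alice_bits, bob_bits, sample_frac=0.1):
    """'Map': estimate errors in one chunk by disclosing a random sample."""
    n = len(alice_bits)
    idx = random.sample(range(n), max(1, int(n * sample_frac)))
    errors = sum(alice_bits[i] != bob_bits[i] for i in idx)
    return errors, len(idx)

def reduce_results(results):
    """'Reduce': aggregate per-chunk counts into a global QBER estimate."""
    errors = sum(e for e, _ in results)
    total = sum(t for _, t in results)
    return errors / total

# Simulate sifted key chunks with roughly 2% channel errors.
chunks = []
for _ in range(8):
    a = [random.randint(0, 1) for _ in range(1024)]
    b = [bit ^ (random.random() < 0.02) for bit in a]
    chunks.append((a, b))

qber = reduce_results([map_chunk(a, b) for a, b in chunks])
print(f"estimated QBER: {qber:.3f}")
```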
Optical coherence tomography (OCT) is a technique used to scan the retina of the eye and check for tears. In this paper, we develop a convolutional neural network architecture for OCT scan classification. The model is trained to detect retinal tears in an OCT scan and classify the type of tear. We designed a block-based approach that extends a pre-trained VGG-19 using transfer learning, writing customised layers in blocks for better feature extraction. The approach achieved substantially better results than the baseline we started with.
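A minimal PyTorch sketch of the general recipe, freezing a pre-trained VGG-19 backbone and attaching a custom block as the classifier, is shown below. The layer sizes and the four-class head are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn
from torchvision import models

backbone = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1)
for p in backbone.features.parameters():
    p.requires_grad = False  # freeze the pre-trained convolutional features

# Replace the stock classifier with a custom block for OCT classes
# (e.g., normal plus several tear types; 4 is a placeholder).
backbone.classifier = nn.Sequential(
    nn.Linear(512 * 7 * 7, 1024),
    nn.ReLU(inplace=True),
    nn.Dropout(0.5),
    nn.Linear(1024, 4),
)

x = torch.randn(1, 3, 224, 224)  # one OCT scan resized to VGG input size
logits = backbone(x)
```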
The time and effort involved in hand-designing deep neural networks is immense. This has prompted the development of Neural Architecture Search (NAS) techniques to automate this design. However, NAS algorithms tend to be slow and expensive; they need to train vast numbers of candidate networks to inform the search process. This could be alleviated if we could partially predict a network's trained accuracy from its initial state. In this work, we examine the overlap of activations between datapoints in untrained networks and motivate how this can give a measure which is usefully indicative of a network's trained performance. We incorporate this measure into a simple algorithm that allows us to search for powerful networks without any training in a matter of seconds on a single GPU, and verify its effectiveness on NAS-Bench-101, NAS-Bench-201, NATS-Bench, and Network Design Spaces. Our approach can be readily combined with more expensive search methods; we examine a simple adaptation of regularised evolutionary search. Code for reproducing our experiments is available at //github.com/BayesWatch/nas-without-training.
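The following PyTorch sketch shows one plausible instantiation of the idea: hook the ReLU units of an untrained network, record each datapoint's binary activation pattern, and score the network by the log-determinant of the pattern-agreement kernel (higher means the network distinguishes inputs more finely). It follows the abstract's description rather than the released code.

```python
import torch
import torch.nn as nn

def activation_score(net, x):
    codes = []
    def hook(_module, _inp, out):
        # 1 where the ReLU is active, 0 where it is not.
        codes.append((out > 0).flatten(1).float())
    hooks = [m.register_forward_hook(hook)
             for m in net.modules() if isinstance(m, nn.ReLU)]
    with torch.no_grad():
        net(x)
    for h in hooks:
        h.remove()
    c = torch.cat(codes, dim=1)  # one binary code per datapoint
    # Kernel entry: number of ReLU units on which two datapoints agree.
    k = c @ c.t() + (1 - c) @ (1 - c).t()
    # Higher log-determinant => more distinct activation patterns.
    return torch.slogdet(k)[1].item()

net = nn.Sequential(nn.Linear(32, 64), nn.ReLU(),
                    nn.Linear(64, 64), nn.ReLU())
print(activation_score(net, torch.randn(16, 32)))
```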
Computer architecture and systems have long been optimized to enable the efficient execution of machine learning (ML) algorithms and models. Now it is time to reconsider the relationship between ML and systems and let ML transform the way computer architecture and systems are designed. This carries a twofold meaning: improving designers' productivity and completing the virtuous cycle. In this paper, we present a comprehensive review of work that applies ML to system design, which can be grouped into two major categories: ML-based modelling, which involves predicting performance metrics or some other criteria of interest, and ML-based design methodology, which directly leverages ML as the design tool. For ML-based modelling, we discuss existing studies according to their target level of the system, ranging from the circuit level to the architecture/system level. For ML-based design methodology, we follow a bottom-up path to review current work, covering (micro-)architecture design (memory, branch prediction, NoC), coordination between architecture/system and workload (resource allocation and management, data center management, and security), compilers, and design automation. We further provide a vision of future opportunities and potential directions, and envision that applying ML to computer architecture and systems will thrive in the community.
The growing energy and performance costs of deep learning have driven the community to reduce the size of neural networks by selectively pruning components. Like their biological counterparts, sparse networks generalize just as well as, if not better than, the original dense networks. Sparsity can reduce the memory footprint of regular networks to fit mobile devices, as well as shorten training time for ever-growing networks. In this paper, we survey prior work on sparsity in deep learning and provide an extensive tutorial on sparsification for both inference and training. We describe approaches for removing and adding elements of neural networks, different training strategies for achieving model sparsity, and mechanisms for exploiting sparsity in practice. Our work distills ideas from more than 300 research papers and provides guidance to practitioners who wish to utilize sparsity today, as well as to researchers whose goal is to push the frontier forward. We include the necessary background on mathematical methods in sparsification, describe phenomena such as early structure adaptation and the intricate relations between sparsity and the training process, and show techniques for achieving acceleration on real hardware. We also define a metric of pruned parameter efficiency that could serve as a baseline for comparing different sparse networks. We close by speculating on how sparsity can improve future workloads and outlining major open problems in the field.
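As one concrete example of the sparsification techniques such a survey covers, here is a minimal PyTorch sketch of global magnitude pruning, which removes the smallest-magnitude weights and keeps a binary mask. The model and sparsity level are illustrative, not from the survey.

```python
import torch
import torch.nn as nn

def magnitude_prune(model: nn.Module, sparsity: float):
    # Pool all weight magnitudes to pick a single global threshold.
    weights = torch.cat([p.detach().abs().flatten()
                         for p in model.parameters() if p.dim() > 1])
    threshold = torch.quantile(weights, sparsity)
    masks = {}
    for name, p in model.named_parameters():
        if p.dim() > 1:
            mask = (p.detach().abs() > threshold).float()
            p.data.mul_(mask)   # zero out pruned weights
            masks[name] = mask  # keep the mask to preserve sparsity later
    return masks

model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))
masks = magnitude_prune(model, sparsity=0.9)  # remove 90% of the weights
```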
Since hardware resources are limited, the objective of training deep learning models is typically to maximize accuracy subject to the time and memory constraints of training and inference. We study the impact of model size in this setting, focusing on Transformer models for NLP tasks that are limited by compute: self-supervised pretraining and high-resource machine translation. We first show that even though smaller Transformer models execute faster per iteration, wider and deeper models converge in significantly fewer steps. Moreover, this acceleration in convergence typically outpaces the additional computational overhead of using larger models. Therefore, the most compute-efficient training strategy is, counterintuitively, to train extremely large models but stop after a small number of iterations. This suggests an apparent trade-off between the training efficiency of large Transformer models and the inference efficiency of small Transformer models. However, we show that large models are more robust to compression techniques such as quantization and pruning than small models. Consequently, one can get the best of both worlds: heavily compressed, large models achieve higher accuracy than lightly compressed, small models.
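A minimal sketch of the resulting recipe, train a large model and then compress it for inference, using PyTorch's post-training dynamic quantization. The toy architecture and sizes below are placeholders, not the paper's models.

```python
import torch
import torch.nn as nn

# Stand-in for a "large" trained model (in the paper's setting, a wide/deep
# Transformer trained for a small number of steps).
large = nn.Sequential(
    nn.Linear(1024, 4096), nn.ReLU(),
    nn.Linear(4096, 4096), nn.ReLU(),
    nn.Linear(4096, 1024),
)

# Compress: quantize the weights of all nn.Linear modules to int8,
# shrinking the model and speeding up CPU inference.
quantized = torch.ao.quantization.quantize_dynamic(
    large, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(16, 1024)
with torch.no_grad():
    out = quantized(x)
```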