Simultaneous Localization and Mapping (SLAM) estimates an agent's trajectory and constructs a map of its environment, and localization is a fundamental kernel in autonomous machines at all computing scales, from drones, AR, and VR to self-driving cars. In this work, we present an energy-efficient and runtime-reconfigurable FPGA-based accelerator for robotic localization. We exploit SLAM-specific data locality, sparsity, reuse, and parallelism, and achieve more than 5x performance improvement over the state-of-the-art. In particular, our design can be reconfigured at runtime according to the environment to save power while sustaining accuracy and performance.
This paper investigates the problem of regret minimization in linear time-varying (LTV) dynamical systems. Due to the simultaneous presence of uncertainty and non-stationarity, designing online control algorithms for unknown LTV systems remains a challenging task. Prior works have introduced online convex optimization algorithms that rely on NP-hard offline planning and suffer from nonparametric regret rates. In this paper, we propose the first computationally tractable online algorithm with regret guarantees that avoids offline planning over state linear feedback policies. Our algorithm is based on the optimism in the face of uncertainty (OFU) principle, in which we optimistically select the best model within a high-confidence region; our algorithm is therefore more explorative than previous approaches. To overcome non-stationarity, we propose either a restarting strategy (R-OFU) or a sliding-window strategy (SW-OFU); both use only data from the current phase to track variations in the system dynamics. With proper configuration, our algorithm attains sublinear regret $O(T^{2/3})$. We corroborate our theoretical findings with numerical experiments, which highlight the effectiveness of our methods. To the best of our knowledge, our study establishes the first model-based online algorithm with regret guarantees for LTV dynamical systems.
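To make the role of the window concrete, here is a minimal numerical sketch of the sliding-window least-squares identification step that SW-OFU-style methods build on; the confidence region, optimistic model selection, and controller synthesis are omitted, and the toy LTV system, exploratory input, and window length are illustrative assumptions rather than the paper's construction.

```python
import numpy as np

def sliding_window_estimate(xs, us, W, lam=1.0):
    """Regularized least-squares estimate of [A, B] from the last W transitions.

    xs: states x_0..x_T (each shape (n,)), us: inputs u_0..u_{T-1}.
    Returns Theta_hat with x_{t+1} ~= Theta_hat @ [x_t; u_t].
    """
    n, m = xs[0].shape[0], us[0].shape[0]
    lo = max(0, len(us) - W)                     # keep only the most recent W samples
    Z = np.array([np.concatenate([xs[t], us[t]]) for t in range(lo, len(us))])
    Y = np.array([xs[t + 1] for t in range(lo, len(us))])
    G = Z.T @ Z + lam * np.eye(n + m)            # regularized Gram matrix
    return np.linalg.solve(G, Z.T @ Y).T         # shape (n, n + m)

# Toy LTV system whose dynamics drift slowly over time (illustrative only).
rng = np.random.default_rng(0)
n, m, T, W = 2, 1, 400, 80
xs, us = [np.zeros(n)], []
for t in range(T):
    A_t = np.array([[0.9, 0.1 * np.sin(2 * np.pi * t / T)], [0.0, 0.8]])
    B_t = np.array([[0.0], [1.0]])
    u_t = 0.1 * rng.standard_normal(m)           # simple exploratory input for the sketch
    us.append(u_t)
    xs.append(A_t @ xs[-1] + B_t @ u_t + 0.01 * rng.standard_normal(n))

Theta_hat = sliding_window_estimate(xs, us, W)
print("estimated [A | B] from the last", W, "samples:\n", Theta_hat)
```

Restricting the fit to the most recent W samples is what lets the estimate track the drifting dynamics; a restart strategy would instead discard all data at fixed epochs.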
It is known that fiber nonlinearities induce crosstalk in a wavelength division multiplexed (WDM) system, which limits the capacity of such systems as the transmitted signal power is increased. A network user in a WDM system is an entity that operates around a given optical wavelength. Traditionally, the channel capacity of a WDM system has been analyzed under different assumptions for the transmitted signals of the other users, while treating the interference arising from these users as noise. In this paper, we instead take a multiuser information-theoretic view and treat the optical WDM system impaired by cross-phase modulation and dispersion as an interference channel. Using genie-aided techniques, we characterize an outer bound on the capacity region of simultaneously achievable rate pairs under a simplified K-user perturbative channel model. Furthermore, an achievable rate region is obtained by time-sharing between certain single-user strategies. It is shown that such time-sharing can achieve better rate tuples than treating nonlinear interference as noise. For the single-polarization, single-span system under consideration and a power 4.4 dB above the optimum launch power, treating nonlinear interference as noise results in a rate of 1.67 bit/sym, while time-sharing gives a rate of 6.33 bit/sym.
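As a rough illustration of the time-sharing baseline (not the paper's perturbative channel model or its simulation setup), the snippet below enumerates the rate pairs obtained when two users alternate exclusive use of the channel; the single-user rates are placeholders.

```python
import numpy as np

def time_sharing_region(R1_solo, R2_solo, num_points=5):
    """Rate pairs achievable by time-sharing between two single-user strategies.

    During a fraction alpha of the time only user 1 transmits (at rate R1_solo,
    free of nonlinear crosstalk from user 2); the rest of the time only user 2
    transmits. R1_solo / R2_solo are placeholder single-user rates in bit/sym.
    """
    alphas = np.linspace(0.0, 1.0, num_points)
    return [(a * R1_solo, (1.0 - a) * R2_solo) for a in alphas]

# Illustrative numbers only (not taken from the paper's results).
for R1, R2 in time_sharing_region(R1_solo=12.7, R2_solo=12.7):
    print(f"user 1: {R1:5.2f} bit/sym   user 2: {R2:5.2f} bit/sym")
```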
Wireless communication systems are impacted by multi-path fading and Doppler shift in dynamic environments, where the channel becomes doubly-dispersive and its estimation becomes an arduous task. Conventional approaches use only a few pilots for channel estimation in order to preserve a high data rate; consequently, such estimators experience significant performance degradation in high-mobility scenarios. Recently, deep learning has been employed for doubly-dispersive channel estimation due to its low complexity, robustness, and good generalization ability. Against this backdrop, this paper presents a comprehensive survey of deep learning-based channel estimation techniques, investigating the different methods in depth. The study also provides extensive experimental simulations followed by a computational complexity analysis. Considering different parameters such as modulation order, mobility, frame length, and deep learning architecture, the performance of the studied estimators is evaluated in several mobility scenarios. In addition, the source code is made available online to make the results reproducible.
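For context on the conventional baseline that the surveyed estimators improve upon, below is a minimal sketch of pilot-based least-squares channel estimation with interpolation over a single multicarrier symbol; the frame size, pilot spacing, channel, and noise level are arbitrary choices, and the time-varying (doubly-dispersive) dimension handled by the surveyed schemes is not modeled here.

```python
import numpy as np

def ls_pilot_estimate(rx_pilots, tx_pilots, pilot_idx, num_subcarriers):
    """Conventional least-squares channel estimate with interpolation.

    rx_pilots / tx_pilots: received and known transmitted pilot symbols.
    pilot_idx: subcarrier indices carrying pilots (few, to preserve data rate).
    Returns a per-subcarrier channel estimate obtained by interpolating the
    LS estimates at the pilot positions (real and imaginary parts separately).
    """
    h_ls = rx_pilots / tx_pilots                     # LS estimate at pilot tones
    k = np.arange(num_subcarriers)
    h_re = np.interp(k, pilot_idx, h_ls.real)
    h_im = np.interp(k, pilot_idx, h_ls.imag)
    return h_re + 1j * h_im

# Toy example: 64 subcarriers, 8 equally spaced pilots, random frequency-selective channel.
rng = np.random.default_rng(1)
N, pilot_idx = 64, np.arange(0, 64, 8)
h_true = np.fft.fft(rng.standard_normal(4) + 1j * rng.standard_normal(4), N) / np.sqrt(4)
tx_pilots = np.ones(pilot_idx.size, dtype=complex)   # known pilot symbols
rx_pilots = h_true[pilot_idx] * tx_pilots + 0.05 * (rng.standard_normal(pilot_idx.size)
                                                    + 1j * rng.standard_normal(pilot_idx.size))
h_hat = ls_pilot_estimate(rx_pilots, tx_pilots, pilot_idx, N)
print("NMSE:", np.sum(np.abs(h_hat - h_true) ** 2) / np.sum(np.abs(h_true) ** 2))
```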
The framework of mixed observable Markov decision processes (MOMDP) models many robotic domains in which some state variables are fully observable while others are not. In this work, we identify a significant subclass of MOMDPs defined by how actions influence the fully observable components of the state and how those, in turn, influence the partially observable components and the rewards. This unique property allows for a two-level hierarchical approach we call HIerarchical Reinforcement Learning under Mixed Observability (HILMO), which restricts partial observability to the top level while the bottom level remains fully observable, enabling higher learning efficiency. The top level produces desired goals to be reached by the bottom level until the task is solved. We further develop theoretical guarantees to show that our approach can achieve optimal and quasi-optimal behavior under mild assumptions. Empirical results on long-horizon continuous control tasks demonstrate the efficacy and efficiency of our approach in terms of improved success rate, sample efficiency, and wall-clock training time. We also deploy policies learned in simulation on a real robot.
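The two-level structure can be illustrated with a toy interaction loop: the top level reasons only about the partially observable component and emits goals, while the bottom level acts on the fully observable state to reach them. Both policies below are hand-coded stand-ins rather than learned HILMO policies, and the environment is invented purely for illustration.

```python
import numpy as np

class ToyMixedObsEnv:
    """Toy mixed-observability task: the position is fully observable, while the
    target side (+5 or -5) is hidden and only revealed through noisy hints."""
    def __init__(self, seed=0):
        self.rng = np.random.default_rng(seed)

    def reset(self):
        self.pos, self.t = 0.0, 0
        self.target = self.rng.choice([-5.0, 5.0])           # hidden state component
        return self._obs(), self.pos

    def _obs(self):
        return np.sign(self.target) + self.rng.normal(scale=2.0)   # noisy hint

    def step(self, action):
        self.pos += float(np.clip(action, -1.0, 1.0))
        self.t += 1
        done = abs(self.pos - self.target) < 0.5 or self.t >= 100
        reward = 10.0 if abs(self.pos - self.target) < 0.5 else -0.1
        return (self._obs(), self.pos), reward, done

def top_policy(history):
    """Top level reasons only about the hidden part: average the hints, pick a goal."""
    return 5.0 if np.mean(history) >= 0 else -5.0

def bottom_policy(pos, goal):
    """Bottom level sees the fully observable position and moves toward the goal."""
    return np.clip(goal - pos, -1.0, 1.0)

env = ToyMixedObsEnv()
obs, pos = env.reset()
history, done, ret = [obs], False, 0.0
while not done:
    goal = top_policy(history)                   # top level emits a goal
    for _ in range(10):                          # bottom level pursues it
        (obs, pos), r, done = env.step(bottom_policy(pos, goal))
        ret += r
        if done:
            break
    history.append(obs)
print("episode return:", round(ret, 2))
```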
The deep Gaussian process (DGP), as a model prior in Bayesian learning, intuitively exploits the expressive power of function composition. DGPs also offer diverse modeling capabilities, but inference becomes their Achilles' heel, as marginalization in the latent function space is intractable. By Bochner's theorem, a DGP with a squared exponential kernel can be viewed as a deep trigonometric network consisting of random feature layers, sine and cosine activation units, and random weight layers. In the wide limit with a bottleneck, we show that the weight-space view yields the same effective covariance functions that were previously obtained in function space. As such, DGPs can be translated into deep trigonometric networks, which are flexible and expressive since one can freely adopt different prior distributions over the parameters. Interestingly, the network representation enables the study of the DGP's neural tangent kernel, which may reveal the mean of the otherwise intractable predictive distribution. Statistically, unlike shallow networks, deep networks of finite width have a covariance that deviates from the limiting kernel, and the inner and outer widths may play different roles in learning.
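A minimal numerical illustration of this random-feature view: cosine and sine features with Gaussian spectral frequencies approximate the squared-exponential kernel, and stacking a random weight layer followed by another feature layer yields a toy deep trigonometric network. The widths, lengthscale, and Gaussian priors below are illustrative assumptions, not the construction analyzed in the paper.

```python
import numpy as np

def trig_layer(X, num_features, lengthscale, rng):
    """Random trigonometric features approximating the squared-exponential kernel.

    By Bochner's theorem, k(x, x') = exp(-||x - x'||^2 / (2 l^2)) equals
    E_w[cos(w^T (x - x'))] with w ~ N(0, I / l^2), so paired cos/sin features
    give an unbiased Monte Carlo approximation of the kernel.
    """
    d = X.shape[1]
    W = rng.standard_normal((d, num_features)) / lengthscale   # spectral frequencies
    Z = X @ W
    return np.hstack([np.cos(Z), np.sin(Z)]) / np.sqrt(num_features)

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 3))
l, D = 1.5, 4096

# One layer: inner products of the features approximate the SE kernel matrix.
Phi = trig_layer(X, D, l, rng)
K_approx = Phi @ Phi.T
K_exact = np.exp(-((X[:, None, :] - X[None, :, :]) ** 2).sum(-1) / (2 * l ** 2))
print("max kernel error:", np.abs(K_approx - K_exact).max())

# A toy deep trigonometric network: a random Gaussian weight layer maps the
# features back to a bottleneck, which feeds the next trigonometric layer.
H = Phi @ (rng.standard_normal((2 * D, 3)) / np.sqrt(2 * D))   # random weight layer
Phi2 = trig_layer(H, D, l, rng)
print("second-layer feature shape:", Phi2.shape)
```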
The reconfigurable intelligent surface (RIS) can effectively control the wavefront of impinging signals and has emerged as a cost-effective and promising solution to improve the spectrum and energy efficiency of wireless systems. Most existing research on RIS assumes that the hardware operations are perfect. However, both the physical transceiver and the RIS suffer from inevitable hardware impairments in practice, which can lead to severe system performance degradation and increase the complexity of beamforming optimization. Consequently, existing results on RIS, including channel estimation, beamforming optimization, and spectrum and energy efficiency analysis, cannot be applied directly in the presence of hardware impairments. In this paper, taking hardware impairments into consideration, we jointly optimize the transmit and reflect beamforming and reevaluate the system performance. First, we characterize closed-form estimators of the direct and cascaded channels in both single-user and multi-user cases, and analyze the impact of hardware impairments on channel estimation accuracy. Then, the optimal transmit beamforming solution is derived, and a gradient descent-based algorithm is proposed to optimize the reflect beamforming. Moreover, we analyze three types of asymptotic channel capacities, with respect to the transmit power, the number of antennas, and the number of reflecting elements. Finally, in terms of system energy consumption, we analyze the power scaling law and the energy efficiency. Our experimental results also reveal an encouraging phenomenon: an RIS-assisted wireless system with massive reflecting elements can achieve both high spectrum and high energy efficiency without the need for massive antennas and without allocating excessive resources to reflect beamforming optimization.
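To illustrate the flavor of gradient-based reflect beamforming, the sketch below runs plain gradient ascent on the RIS phase shifts for an idealized single-antenna, single-user model without the hardware impairments considered in the paper; the channel statistics, step size, and element count are placeholders.

```python
import numpy as np

def optimize_ris_phases(h_d, g_t, g_r, steps=5000, lr=0.002):
    """Gradient ascent on RIS phase shifts to maximize the effective channel gain.

    Simplified ideal-hardware model: y = (h_d + sum_n g_r[n] g_t[n] e^{j phi_n}) s.
    The paper additionally models transceiver/RIS hardware impairments, which are
    omitted here; this sketch only shows the gradient-based phase update.
    """
    c = g_r * g_t                                  # per-element cascaded channel
    phi = np.zeros(c.size)                         # start from zero phase shifts
    for _ in range(steps):
        h_eff = h_d + np.sum(c * np.exp(1j * phi))
        grad = -2.0 * np.imag(np.conj(h_eff) * c * np.exp(1j * phi))
        phi += lr * grad                           # ascent step; phases need no projection
    return phi, np.abs(h_d + np.sum(c * np.exp(1j * phi))) ** 2

rng = np.random.default_rng(0)
N = 32                                             # number of reflecting elements
h_d = (rng.standard_normal() + 1j * rng.standard_normal()) / np.sqrt(2)
g_t = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
g_r = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)

phi, gain = optimize_ris_phases(h_d, g_t, g_r)
# Closed-form phase alignment for this idealized model, as a sanity check.
gain_aligned = (np.abs(h_d) + np.sum(np.abs(g_r * g_t))) ** 2
print(f"gradient ascent gain: {gain:.2f}   aligned upper bound: {gain_aligned:.2f}")
```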
Visible light communication (VLC) is envisioned as a core component of future wireless communication networks due to, among other advantages, the huge unlicensed bandwidth it offers and the fact that it does not interfere with existing radio frequency (RF) communication systems. Most research on RF and VLC coexistence has focused on hybrid designs in which data transmission to any user may originate from either an RF or a VLC access point (AP). However, hybrid RF/VLC systems fail to exploit the distinct transmission characteristics of RF and VLC systems and thus cannot fully reap the benefits they offer. Aggregated RF/VLC systems, in which any user can be served simultaneously by both RF and VLC APs, have recently emerged as a more promising and robust design for the coexistence of RF and VLC systems. To this end, this paper, for the first time, investigates AP assignment, subchannel allocation (SA), and transmit power allocation (PA) to optimize the energy efficiency (EE) of aggregated RF/VLC systems while accounting for interference and VLC line-of-sight link blockages. A novel and challenging EE optimization problem is formulated, for which an efficient joint solution based on alternating optimization is developed. In particular, an energy-efficient AP assignment algorithm based on matching theory is proposed. Then, a low-complexity SA scheme that allocates subchannels to users based on their channel conditions is developed. Finally, an effective PA algorithm is presented by utilizing the quadratic transform approach and a multi-objective optimization framework. Extensive simulation results reveal that: 1) the proposed joint AP assignment, SA, and PA solution achieves significant EE, sum-rate, and outage performance gains with low complexity, and 2) the aggregated RF/VLC system provides considerable performance improvement compared to hybrid RF/VLC systems.
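The power allocation step builds on the quadratic transform from fractional programming; as a rough single-link illustration (not the multi-user aggregated RF/VLC formulation), the sketch below maximizes a single energy-efficiency ratio with that transform, using placeholder constants and a grid search in place of a closed-form or convex-solver inner update.

```python
import numpy as np

def ee_quadratic_transform(g, sigma2, P_c, P_max, iters=20):
    """Maximize EE(p) = log2(1 + g p / sigma2) / (p + P_c) via the quadratic transform.

    Fractional-programming iteration: introduce an auxiliary variable y and alternate
      y <- sqrt(R(p)) / (p + P_c)
      p <- argmax_p  2 y sqrt(R(p)) - y^2 (p + P_c),
    where R(p) = log2(1 + g p / sigma2). The inner maximization is concave in p;
    a dense grid stands in for a closed-form or convex-solver update in this sketch.
    """
    p_grid = np.linspace(1e-6, P_max, 2000)
    rate = lambda p: np.log2(1.0 + g * p / sigma2)
    p = P_max / 2.0
    for _ in range(iters):
        y = np.sqrt(rate(p)) / (p + P_c)                      # update auxiliary variable
        obj = 2.0 * y * np.sqrt(rate(p_grid)) - y ** 2 * (p_grid + P_c)
        p = p_grid[np.argmax(obj)]                            # update transmit power
    return p, rate(p) / (p + P_c)

# Illustrative single-link numbers (channel gain, noise power, circuit power, power budget).
p_star, ee_star = ee_quadratic_transform(g=2.0, sigma2=0.1, P_c=1.0, P_max=10.0)
print(f"EE-optimal power: {p_star:.3f} W, energy efficiency: {ee_star:.3f} (bits/Hz per Joule)")
```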
Alpa automates model-parallel training of large deep learning (DL) models by generating execution plans that unify data, operator, and pipeline parallelism. Existing model-parallel training systems either require users to manually create a parallelization plan or automatically generate one from a limited space of model parallelism configurations; neither approach suffices to scale out complex DL models on distributed compute devices. Alpa distributes the training of large DL models by viewing parallelism at two hierarchical levels: inter-operator and intra-operator parallelism. Based on this view, Alpa constructs a new hierarchical space of massive model-parallel execution plans. Alpa designs a number of compilation passes to automatically derive efficient parallel execution plans at each parallelism level, and implements an efficient runtime to orchestrate the two-level parallel execution on distributed compute devices. Our evaluation shows that Alpa generates parallelization plans that match or outperform hand-tuned model-parallel training systems even on the models they are designed for. Unlike specialized systems, Alpa also generalizes to models with heterogeneous architectures and to models without manually designed plans. Alpa's source code is publicly available at //github.com/alpa-projects/alpa
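The two-level view can be pictured with the purely conceptual sketch below, which splits a layer sequence into pipeline stages (inter-operator) and shards each layer's weights across the devices of a stage (intra-operator); it does not use or reflect Alpa's actual compilation passes or API, and all sizes are hypothetical.

```python
import numpy as np

def two_level_plan(layer_sizes, num_device_groups, devices_per_group):
    """Conceptual illustration of the two-level decomposition (not Alpa's real API).

    Inter-operator level: partition the layer sequence into pipeline stages, one
    per device group. Intra-operator level: shard each layer's weight matrix
    column-wise across the devices inside that group.
    """
    stages = np.array_split(np.arange(len(layer_sizes)), num_device_groups)
    plan = []
    for group_id, layer_ids in enumerate(stages):
        for lid in layer_ids:
            d_in, d_out = layer_sizes[lid]
            shard_cols = int(np.ceil(d_out / devices_per_group))
            plan.append({
                "layer": int(lid),
                "stage": group_id,                         # inter-operator placement
                "weight_shard": (d_in, shard_cols),        # intra-operator sharding per device
            })
    return plan

# Hypothetical 6-layer model split over 2 device groups of 4 devices each.
for entry in two_level_plan([(1024, 4096)] * 6, num_device_groups=2, devices_per_group=4):
    print(entry)
```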
Over the past several years, new machine learning accelerators have been announced and released every month for a variety of applications, from speech recognition, video object detection, and assisted driving to many data center workloads. This paper updates the survey of AI accelerators and processors from the past two years. It collects and summarizes the commercial accelerators that have been publicly announced with peak performance and power consumption numbers. The performance and power values are plotted on a scatter graph, and a number of dimensions and observations from the trends on this plot are again discussed and analyzed. This year, we also compile a list of benchmarking performance results and compute the computational efficiency with respect to peak performance.
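The computational-efficiency metric is simply peak performance normalized by power; a minimal sketch of that calculation follows, with hypothetical accelerator entries rather than figures from the survey.

```python
# Minimal sketch of the efficiency calculation described above; the entries and
# numbers below are placeholders, not figures collected in the survey.
accelerators = [
    # (name, peak performance in TOPS, power in W) -- hypothetical values
    ("chip_A", 100.0, 75.0),
    ("chip_B", 4.0, 2.0),
    ("chip_C", 1000.0, 400.0),
]
for name, peak_tops, power_w in accelerators:
    efficiency = peak_tops / power_w               # tera-ops per second per watt
    print(f"{name}: {efficiency:.2f} TOPS/W")
```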
Graph convolutional networks (GCN) have been successfully applied to many graph-based applications; however, training a large-scale GCN remains challenging. Current SGD-based algorithms suffer either from a high computational cost that grows exponentially with the number of GCN layers, or from a large space requirement for keeping the entire graph and the embedding of every node in memory. In this paper, we propose Cluster-GCN, a novel GCN algorithm that is suitable for SGD-based training by exploiting the graph clustering structure. Cluster-GCN works as follows: at each step, it samples a block of nodes associated with a dense subgraph identified by a graph clustering algorithm, and restricts the neighborhood search to within this subgraph. This simple but effective strategy leads to significantly improved memory and computational efficiency while achieving test accuracy comparable to previous algorithms. To test the scalability of our algorithm, we create a new Amazon2M dataset with 2 million nodes and 61 million edges, more than 5 times larger than the previous largest publicly available dataset (Reddit). For training a 3-layer GCN on this dataset, Cluster-GCN is faster than the previous state-of-the-art VR-GCN (1523 seconds vs. 1961 seconds) while using much less memory (2.2GB vs. 11.2GB). Furthermore, for training a 4-layer GCN on this dataset, our algorithm finishes in around 36 minutes, whereas all existing GCN training algorithms fail due to out-of-memory issues. Finally, Cluster-GCN allows us to train much deeper GCNs without much time and memory overhead, which leads to improved prediction accuracy: using a 5-layer Cluster-GCN, we achieve a state-of-the-art test F1 score of 99.36 on the PPI dataset, while the previous best result was 98.71 by [16]. Our code is publicly available at //github.com/google-research/google-research/tree/master/cluster_gcn.
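The core sampling idea can be sketched in a few lines: partition the nodes into clusters, sample one block per SGD step, and restrict GCN propagation to the induced subgraph. The random partition, toy graph, and single untrained layer below stand in for the paper's graph-clustering partitioner and full training loop.

```python
import numpy as np

def gcn_layer(A_sub, H, W):
    """One GCN propagation step restricted to a sampled subgraph.

    A_sub: adjacency of the induced subgraph (self-loops added here),
    H: node features of the sampled block, W: layer weights.
    """
    A_hat = A_sub + np.eye(A_sub.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    A_norm = d_inv_sqrt[:, None] * A_hat * d_inv_sqrt[None, :]   # symmetric normalization
    return np.maximum(A_norm @ H @ W, 0.0)                       # ReLU activation

rng = np.random.default_rng(0)
N, F, F_out, num_clusters = 1000, 16, 8, 10
A = (rng.random((N, N)) < 0.01).astype(float)
A = np.maximum(A, A.T)                                # symmetric toy graph
X = rng.standard_normal((N, F))
W = 0.1 * rng.standard_normal((F, F_out))

# Placeholder partition: the paper clusters with a graph partitioner (e.g. METIS);
# a random split is used here only to keep the sketch self-contained.
clusters = np.array_split(rng.permutation(N), num_clusters)

block = clusters[0]                                   # sample one dense block per SGD step
A_sub = A[np.ix_(block, block)]                       # restrict neighborhood to the block
H_out = gcn_layer(A_sub, X[block], W)
print("block size:", block.size, "embedding shape:", H_out.shape)
```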