Global Navigation Satellite Systems (GNSS) provide positioning services for connected and autonomous vehicles. Differential GNSS (DGNSS) has been demonstrated to provide reliable, high-quality range correction information enabling real-time navigation with sub-meter or centimeter accuracy. However, DGNSS requires a local reference station near each user, so a continental- or global-scale implementation would require a dense network of reference stations whose construction and maintenance would be prohibitively expensive. Precise Point Positioning (PPP) affords more flexibility as a public service for GNSS receivers, but its State Space Representation (SSR) format is not currently supported by most receivers. This article proposes a novel Virtual Network DGNSS (VN-DGNSS) design that capitalizes on the PPP infrastructure to provide global coverage for real-time navigation without building physical reference stations. Correction information is computed from public GNSS SSR data services and transmitted to users as Radio Technical Commission for Maritime Services (RTCM) Observation Space Representation (OSR) messages, which are accepted by most receivers. Real-time stationary and moving-platform testing with u-blox M8P and ZED-F9P receivers surpasses the Society of Automotive Engineers (SAE) specification (68% of horizontal errors $\leqslant$ 1.5 m and vertical errors $\leqslant$ 3 m) and shows significantly better horizontal performance than GNSS Open Service (OS). The moving tests also show better horizontal performance than the ZED-F9P receiver with Satellite Based Augmentation Systems (SBAS) enabled and achieve lane-level accuracy, which requires 95% of horizontal errors to be less than 1 m.
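To make the data flow concrete, the sketch below (Python, heavily simplified) forms a synthetic pseudorange for a chosen virtual reference station from SSR-style orbit and clock corrections; packing such observations into RTCM OSR messages is omitted. The function names, the error model, and all numbers are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch (not the paper's implementation): form a synthetic pseudorange
# for a virtual reference station from SSR-style orbit/clock corrections.
# All variable names and the simplified error model are illustrative assumptions.
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def synthetic_pseudorange(sat_pos_broadcast, ssr_orbit_corr, ssr_clock_corr_s,
                          virtual_station_pos, iono_delay_m=0.0, tropo_delay_m=0.0):
    """Geometric range from a chosen virtual station to the SSR-corrected
    satellite position, plus clock and atmospheric terms (all in meters)."""
    sat_pos = np.asarray(sat_pos_broadcast) + np.asarray(ssr_orbit_corr)
    geometric_range = np.linalg.norm(sat_pos - np.asarray(virtual_station_pos))
    return geometric_range - C * ssr_clock_corr_s + iono_delay_m + tropo_delay_m

# Example: one satellite, one virtual station near the user (ECEF coordinates in meters)
rho = synthetic_pseudorange(
    sat_pos_broadcast=[15600e3, 7540e3, 20140e3],
    ssr_orbit_corr=[0.8, -0.3, 0.5],       # assumed already mapped from radial/along/cross to ECEF
    ssr_clock_corr_s=2.1e-9,
    virtual_station_pos=[-2694685.5, -4293642.0, 3857878.9],
)
print(f"synthetic pseudorange: {rho:.3f} m")
```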
Corporate Virtual Private Networks (VPNs) enable users to work from home or while traveling. At the same time, VPNs are tied to a company's network infrastructure, forcing users to install proprietary clients for network compatibility reasons. VPN clients run with high privileges to encrypt and reroute network traffic. Thus, bugs in VPN clients pose a substantial risk to their users and, in turn, to the corporate network. Cisco, the dominant vendor of enterprise network hardware, offers VPN connectivity with its AnyConnect client for desktop and mobile devices. While past security research has focused primarily on the AnyConnect Windows client, we show that the Linux and iOS clients are based on different architectures and have distinct security issues. Our reverse engineering, together with the follow-up design analysis and fuzzing, reveals 13 new vulnerabilities. Seven of these are located in the Linux client. The root cause of the privilege escalations on Linux is anchored so deeply in the client's architecture that it was only patched with a partial workaround. A similar analysis on iOS uncovers three AnyConnect-specific bugs as well as three general issues in iOS network extensions, which apply to all kinds of VPNs and are not restricted to AnyConnect.
Consider the joint beamforming and quantization problem in a cooperative cellular network, where multiple relay-like base stations (BSs), connected to a central processor (CP) via rate-limited fronthaul links, cooperatively serve the users. This problem can be formulated as the minimization of the total transmit power subject to all users' signal-to-interference-plus-noise-ratio (SINR) constraints and all relay-like BSs' fronthaul rate constraints. In this paper, we first show that there is no duality gap between the considered problem and its Lagrangian dual by proving the tightness of the semidefinite relaxation (SDR) of the considered problem. We then propose an efficient algorithm based on Lagrangian duality for solving the problem. The proposed algorithm judiciously exploits the special structure of the Karush-Kuhn-Tucker (KKT) conditions of the problem and finds a solution that satisfies the KKT conditions via two fixed-point iterations. The algorithm is highly efficient (since evaluating the functions in both fixed-point iterations is computationally cheap) and is guaranteed to find the global solution of the problem. Simulation results demonstrate the efficiency and correctness of the proposed algorithm.
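As a flavor of the kind of fixed-point machinery involved, the sketch below runs the classic interference-function iteration that meets SINR targets with equality at convergence. It is a generic power-control illustration under assumed channel gains, not the paper's actual joint beamforming and fronthaul-quantization update.

```python
# Illustrative only: a classic fixed-point iteration that satisfies SINR constraints
# with equality at convergence (the standard interference-function update).
# The paper's algorithm uses two fixed-point iterations on its KKT system; this
# sketch just shows the mechanic on a simple power-control problem with assumed gains.
import numpy as np

def sinr_fixed_point(G, sigma2, gamma, iters=200):
    """G[k, j]: channel gain from transmitter j to receiver k; gamma: SINR targets."""
    K = len(gamma)
    p = np.ones(K)
    for _ in range(iters):
        interference = G @ p - np.diag(G) * p + sigma2   # per-user interference + noise
        p = gamma * interference / np.diag(G)            # meet each SINR target with equality
    return p

G = np.array([[1.0, 0.1, 0.05],
              [0.08, 0.9, 0.07],
              [0.06, 0.09, 1.1]])
p_opt = sinr_fixed_point(G, sigma2=0.01, gamma=np.array([2.0, 1.5, 2.5]))
print("transmit powers:", np.round(p_opt, 4))
```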
In applications of remote sensing, estimation, and control, timely communication is not always ensured by high-rate communication. This work proposes distributed age-efficient transmission policies for random access channels with $M$ transmitters. In the first part of this work, we analyze the age performance of stationary randomized policies by relating the problem of finding age to the absorption time of a related Markov chain. In the second part, we propose the notion of \emph{age-gain} of a packet to quantify how much the packet will reduce the instantaneous age of information at the receiver upon successful delivery. We then use this notion to propose a transmission policy in which transmitters act in a distributed manner based on the age-gain of their available packets. In particular, each transmitter sends its latest packet only if its corresponding age-gain exceeds a certain threshold, which can be computed adaptively from the collision feedback or fixed analytically in advance. Both methods improve the age of information significantly compared to the state of the art. In the limit of large $M$, we prove that when the arrival rate is small (below $\frac{1}{eM}$), slotted ALOHA-type algorithms are asymptotically optimal. As the arrival rate increases beyond $\frac{1}{eM}$, the age increases under slotted ALOHA but decreases significantly under the proposed age-based policies. For arrival rates $\theta$ with $\theta=\frac{1}{o(M)}$, the proposed algorithms improve upon the minimum age achievable under slotted ALOHA (minimized over all arrival rates) by a multiplicative factor of at least two. We conclude that, contrary to common practice, it is beneficial to increase the sampling rate (and hence the arrival rate) and to transmit packets selectively based on their age-gain.
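A toy simulation of the threshold idea is sketched below: each transmitter sends only when the age-gain of its freshest packet exceeds a fixed threshold, and a slot succeeds only without collision. The parameter values are arbitrary assumptions, and the adaptive threshold computation from collision feedback is not reproduced.

```python
# Minimal sketch of a threshold-based age-gain policy on a collision channel.
# The threshold here is a fixed input; the paper also derives/adapts it, which this
# toy simulation does not reproduce. All parameter values are illustrative assumptions.
import random

def simulate(M=50, slots=20_000, arrival_prob=0.02, threshold=40, seed=0):
    rng = random.Random(seed)
    last_gen = [None] * M    # generation time of each transmitter's freshest packet
    last_recv = [None] * M   # generation time of the latest packet the receiver holds
    age_sum = 0

    for t in range(slots):
        for m in range(M):
            if rng.random() < arrival_prob:
                last_gen[m] = t                      # a fresh sample replaces the old one
        senders = []
        for m in range(M):
            age_m = t - last_recv[m] if last_recv[m] is not None else t
            # age-gain = current age minus the age the receiver would have after delivery
            if last_gen[m] is not None and age_m - (t - last_gen[m]) >= threshold:
                senders.append(m)
        if len(senders) == 1:                        # success only without collision
            last_recv[senders[0]] = last_gen[senders[0]]
        for m in range(M):
            age_sum += (t - last_recv[m]) if last_recv[m] is not None else t
    return age_sum / (slots * M)

print("average age:", round(simulate(), 1))
```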
Event cameras are ideal for object tracking applications due to their ability to capture fast-moving objects while mitigating latency and data redundancy. Existing event-based clustering and feature tracking approaches for surveillance and object detection work well in the majority of cases, but fall short in a maritime environment. Our application of maritime vessel detection and tracking requires a process that can identify features and output a confidence score representing the likelihood that the feature was produced by a vessel, which may trigger a subsequent alert or activate a classification system. However, the maritime environment presents unique challenges: waves tend to produce the majority of events, consuming most of the computational processing and generating false-positive detections. By filtering redundant events and analyzing the movement of each event cluster, we can identify and track vessels while ignoring shorter-lived and erratic features such as those produced by waves.
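The sketch below illustrates one plausible way to score a tracked event cluster: long-lived clusters that move with a consistent heading (vessel-like) score high, while short-lived or erratic clusters (wave-like) score low. The features and weights are assumptions made for illustration, not the paper's confidence model.

```python
# Illustrative scoring of an event cluster's "vessel likelihood": persistent,
# consistently moving clusters score high; short-lived, erratic ones (e.g. waves)
# score low. The features and weights are assumptions for this sketch only.
import numpy as np

def vessel_confidence(times, xs, ys, min_lifetime=0.5):
    """times in seconds, xs/ys in pixels; all arrays aligned per event."""
    times, xs, ys = map(np.asarray, (times, xs, ys))
    lifetime = times.max() - times.min()
    if lifetime < min_lifetime or len(times) < 4:
        return 0.0                                       # too short-lived to be a vessel
    order = np.argsort(times)
    dx, dy = np.diff(xs[order]), np.diff(ys[order])      # step-wise displacement
    steps = np.stack([dx, dy], axis=1)
    headings = steps / (np.linalg.norm(steps, axis=1, keepdims=True) + 1e-9)
    consistency = np.linalg.norm(headings.mean(axis=0))  # 1.0 = perfectly straight track
    return float(min(1.0, lifetime / 5.0) * consistency)

# A smooth, long-lived track (vessel-like) vs. a jittery, short one (wave-like)
t = np.linspace(0, 6, 60)
print(vessel_confidence(t, 10 * t, 2 * t))                                  # high score
print(vessel_confidence(t[:10], np.random.rand(10), np.random.rand(10)))   # low score
```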
Constructing highly informative network embeddings is an important tool for network analysis. Such embeddings encode network topology, along with other useful side information, into low-dimensional node-based feature representations that can be exploited by statistical modeling. This work focuses on learning context-aware network embeddings augmented with text data. We reformulate the network-embedding problem and present two novel strategies to improve over traditional attention mechanisms: ($i$) a content-aware sparse attention module based on optimal transport, and ($ii$) a high-level attention parsing module. Our approach yields naturally sparse and self-normalized relational inference. It can capture long-term interactions between sequences, thus addressing the challenges faced by existing textual network embedding schemes. Extensive experiments demonstrate that our model consistently outperforms alternative state-of-the-art methods.
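For intuition, the sketch below computes a content-aware attention matrix as an entropic optimal-transport plan via Sinkhorn iterations between two embedding sequences; the plan is row- and column-normalized and tends toward sparsity for small regularization. This is a generic illustration of the mechanism with assumed dimensions and regularization, not the paper's exact attention module.

```python
# Generic entropic optimal-transport (Sinkhorn) sketch of content-aware attention
# between two token-embedding sequences. The resulting transport plan is naturally
# normalized and concentrates mass for small epsilon; all sizes are assumptions.
import numpy as np

def sinkhorn_attention(X, Y, eps=0.1, iters=200):
    """X: (n, d), Y: (m, d) embeddings; returns an (n, m) transport plan."""
    cost = 1.0 - (X @ Y.T) / (np.linalg.norm(X, axis=1, keepdims=True)
                              * np.linalg.norm(Y, axis=1, keepdims=True).T + 1e-9)
    K = np.exp(-cost / eps)                          # Gibbs kernel from cosine cost
    a, b = np.full(len(X), 1 / len(X)), np.full(len(Y), 1 / len(Y))
    u, v = np.ones_like(a), np.ones_like(b)
    for _ in range(iters):                           # alternating marginal scaling
        u = a / (K @ v)
        v = b / (K.T @ u)
    return u[:, None] * K * v[None, :]

rng = np.random.default_rng(0)
plan = sinkhorn_attention(rng.normal(size=(5, 16)), rng.normal(size=(8, 16)))
print(plan.shape, plan.sum())                        # (5, 8), total mass 1
```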
Over the past decades, state-of-the-art medical image segmentation has rested heavily on signal processing paradigms, most notably registration-based label propagation and pair-wise patch comparison, which are generally slow despite high segmentation accuracy. In recent years, deep learning has revolutionized computer vision, with many approaches outperforming prior art, in particular the convolutional neural network (CNN) studies on image classification. Deep CNNs have recently been applied to medical image segmentation as well, but generally involve long training times and demanding memory requirements, achieving limited success. We propose a patch-based deep learning framework that revisits the classic neural network model with substantial modernization, including the use of Rectified Linear Unit (ReLU) activations, dropout layers, and a 2.5D tri-planar multi-pathway patch setting. In a test application to hippocampus segmentation using 100 brain MR images from the ADNI database, our approach significantly outperformed prior art in terms of both segmentation accuracy and speed, achieving a median Dice score of up to 90.98% with near real-time performance (<1 s).
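A minimal PyTorch sketch of the 2.5D tri-planar idea follows: three 2D pathways, one per orthogonal plane through the voxel, with ReLU activations and dropout, fused into a classifier for the center voxel. The layer sizes and the 32x32 patch size are illustrative assumptions rather than the exact configuration used in the paper.

```python
# Minimal 2.5D tri-planar patch classifier sketch: three 2D pathways (axial,
# coronal, sagittal), ReLU, dropout, and a fused head deciding the centre voxel.
# Layer sizes and the 32x32 patch size are illustrative assumptions.
import torch
import torch.nn as nn

def pathway():
    return nn.Sequential(
        nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Flatten(),
    )

class TriPlanarNet(nn.Module):
    def __init__(self, patch=32, n_classes=2):
        super().__init__()
        self.axial, self.coronal, self.sagittal = pathway(), pathway(), pathway()
        feat = 32 * (patch // 4) ** 2                 # per-pathway feature length
        self.head = nn.Sequential(
            nn.Linear(3 * feat, 256), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(256, n_classes),
        )

    def forward(self, ax, co, sa):                    # each: (B, 1, patch, patch)
        f = torch.cat([self.axial(ax), self.coronal(co), self.sagittal(sa)], dim=1)
        return self.head(f)

net = TriPlanarNet()
logits = net(*[torch.randn(4, 1, 32, 32) for _ in range(3)])
print(logits.shape)                                   # torch.Size([4, 2])
```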
Adding node attributes to network embedding helps improve the ability of the learned joint representation to capture features from topology and attributes simultaneously. Recent research on joint embedding has exhibited promising performance on a variety of tasks by jointly embedding the two spaces. However, because they rely on global information, existing approaches scale poorly. Here we propose \emph{SANE}, a scalable attribute-aware network embedding algorithm with locality, to learn the joint representation from topology and attributes. By enforcing the alignment of a local linear relationship between each node and its K-nearest neighbors in topology and attribute space, the joint embedding representations are more informative than a single representation from topology or attributes alone. We argue that this locality is the key to learning the joint representation at scale. Using several real-world networks from diverse domains, we demonstrate the efficacy of \emph{SANE} in terms of both performance and scalability. For label classification, \emph{SANE} achieves the highest F1-score on most datasets among state-of-the-art joint representation algorithms, and comes close to the baseline method that requires label information as extra input. Moreover, \emph{SANE} achieves up to a 71.4\% performance gain over the topology-only algorithm. For scalability, we demonstrate the linear time complexity of \emph{SANE}. In addition, we observe that when the network scales to 100,000 nodes, the "learning joint embedding" step of \emph{SANE} takes only $\approx 10$ seconds.
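The local-linear ingredient can be illustrated as follows: for each node, reconstruction weights over its K nearest neighbors are fitted LLE-style, and a joint embedding could then be trained to preserve these weights in both topology and attribute space. The names, the value of K, and the regularizer are assumptions for this sketch, not SANE's exact formulation.

```python
# Sketch of local linear reconstruction weights over each node's K nearest
# neighbours (LLE-style); a joint embedding could be trained to preserve them.
# K and the regularizer are assumptions, not SANE's exact formulation.
import numpy as np

def local_linear_weights(X, k=5, reg=1e-3):
    """X: (n, d) node features. Returns an (n, n) weight matrix W whose rows sum
    to 1 and whose support is restricted to each node's K nearest neighbours."""
    n = len(X)
    W = np.zeros((n, n))
    dists = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
    for i in range(n):
        nbrs = np.argsort(dists[i])[1:k + 1]          # skip the node itself
        Z = X[nbrs] - X[i]                            # centre neighbours on node i
        C = Z @ Z.T + reg * np.eye(k)                 # regularized local Gram matrix
        w = np.linalg.solve(C, np.ones(k))
        W[i, nbrs] = w / w.sum()                      # affine (sum-to-one) weights
    return W

rng = np.random.default_rng(1)
X_attr = rng.normal(size=(50, 8))
W = local_linear_weights(X_attr)
recon_err = np.linalg.norm(X_attr - W @ X_attr) / np.linalg.norm(X_attr)
print(f"relative reconstruction error: {recon_err:.3f}")
```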
The mobile network that millions of people use every day is one of the most complex real-world systems. Optimizing the mobile network to meet exploding customer demand and to reduce CAPEX/OPEX poses greater challenges than those addressed in prior work. Learning to solve complex real-world problems, to benefit everyone and make the world better, has long been an ultimate goal of AI. However, applying deep reinforcement learning (DRL) to such problems remains unsolved, due to imperfect information, data scarcity, complex real-world rules, potential negative impact on the real world, and so on. To bridge this reality gap, we propose a sim-to-real framework that transfers learning directly from simulation to the real world without any real-world training. First, we distill the temporal-spatial relationships between cells and mobile users into a scalable 3D image-like tensor that characterizes the partially observed mobile network. Second, inspired by AlphaGo, we introduce a novel self-play mechanism to empower DRL agents to gradually improve their intelligence by competing for the best record on multiple tasks, just as athletes compete for world records in the decathlon. Third, a decentralized DRL method is proposed to coordinate multiple agents to compete and cooperate as a team in order to maximize the global reward and minimize potential negative impact. Using 7693 unseen test tasks over 160 unseen mobile networks in another simulator, as well as 6 field trials on 4 commercial mobile networks, we demonstrate the capability of this sim-to-real framework to transfer learning directly not only from one simulator to another, but also from simulation to the real world. This is the first time that a DRL agent has successfully transferred its learning directly from simulation to highly complex real-world problems with imperfect information, complex rules, huge state/action spaces, and multi-agent interactions.
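A rough sketch of the tensorization step is given below: cells and users are rasterized onto a coarse spatial grid with one channel per quantity (cell transmit power, user count, mean signal strength). The channels, grid size, and field names are assumptions made for illustration; the actual representation in the paper is richer than this.

```python
# Rough sketch of distilling a partially observed network into a 3D image-like
# tensor: a coarse spatial grid with one channel per quantity. The channels,
# grid size, and field names are assumptions used for illustration only.
import numpy as np

def network_to_tensor(cells, users, grid=(32, 32), area=(1000.0, 1000.0)):
    """cells: list of (x, y, tx_power); users: list of (x, y, rsrp). Returns (3, H, W)."""
    H, W = grid
    T = np.zeros((3, H, W))
    counts = np.zeros((H, W))
    def cell_index(x, y):
        return min(int(y / area[1] * H), H - 1), min(int(x / area[0] * W), W - 1)
    for x, y, p in cells:
        i, j = cell_index(x, y)
        T[0, i, j] = max(T[0, i, j], p)               # strongest cell power in this pixel
    for x, y, rsrp in users:
        i, j = cell_index(x, y)
        T[1, i, j] += 1.0                             # user count
        T[2, i, j] += rsrp
        counts[i, j] += 1
    T[2] = np.divide(T[2], counts, out=np.zeros_like(T[2]), where=counts > 0)  # mean RSRP
    return T

tensor = network_to_tensor(cells=[(120, 340, 43.0), (800, 650, 40.0)],
                           users=[(130, 350, -85.0), (790, 640, -92.0), (200, 100, -100.0)])
print(tensor.shape)   # (3, 32, 32)
```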
Collecting training data from the physical world is usually time-consuming and even dangerous for fragile robots; recent advances in robot learning therefore advocate the use of simulators as the training platform. Unfortunately, the reality gap between synthetic and real visual data prohibits direct migration of models trained in virtual worlds to the real world. This paper proposes a modular architecture for tackling the virtual-to-real problem. The proposed architecture separates the learning model into a perception module and a control policy module, and uses semantic image segmentation as the meta representation relating the two modules. The perception module translates the perceived RGB image into a semantic segmentation map. The control policy module is implemented as a deep reinforcement learning agent that performs actions based on the translated segmentation. Our architecture is evaluated on an obstacle avoidance task and a target following task. Experimental results show that it significantly outperforms all of the baseline methods in both virtual and real environments, and exhibits a faster learning curve than the baselines. We also present a detailed analysis of a variety of configuration variants and validate the transferability of our modular architecture.
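The modular split can be sketched in PyTorch as below: a tiny perception network maps RGB to a segmentation label map, and a separate policy network maps that segmentation to action logits. Both networks are deliberately small and untrained; the architectures, class count, and action count are illustrative assumptions.

```python
# Minimal sketch of the modular idea: perception (RGB -> segmentation) feeding a
# separate policy (segmentation -> action logits). Architectures, class count,
# and action count are illustrative assumptions, not the paper's exact models.
import torch
import torch.nn as nn

class Perception(nn.Module):                     # RGB -> per-pixel class labels
    def __init__(self, n_classes=6):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, n_classes, 1),
        )
    def forward(self, rgb):
        return self.net(rgb).argmax(dim=1)       # (B, H, W) label map

class Policy(nn.Module):                         # segmentation -> action logits
    def __init__(self, n_classes=6, n_actions=5):
        super().__init__()
        self.n_classes = n_classes
        self.net = nn.Sequential(
            nn.Conv2d(n_classes, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, n_actions),
        )
    def forward(self, seg):
        onehot = nn.functional.one_hot(seg, self.n_classes).permute(0, 3, 1, 2).float()
        return self.net(onehot)

rgb = torch.rand(2, 3, 64, 64)
seg = Perception()(rgb)                          # the segmentation "meta representation"
print(Policy()(seg).shape)                       # torch.Size([2, 5])
```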