Millimeter-wave and terahertz systems rely on beamforming/combining codebooks to find the best beam directions during the initial access procedure. Existing approaches suffer from large codebook sizes and high beam-search overhead in the presence of mobile devices. To alleviate this problem, we propose exploiting the similarity of the channel at adjacent locations to divide the UE trajectory into a set of disjoint regions and to maintain a set of candidate paths for each region in a database. In this paper, we characterize the tradeoff between the number of regions and the signalling overhead: a larger number of regions yields a higher signal-to-noise ratio (SNR) but also a higher signalling overhead for maintaining the database. We then propose an optimization framework to find the minimum number of regions based on the trajectory of a mobile device. Using realistic ray-tracing datasets, we demonstrate that the proposed method reduces beam-search complexity and latency while providing high SNR.
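The per-region database described above can be pictured as a simple interval lookup along the trajectory; the region boundaries and beam indices below are illustrative stand-ins, not values from the paper:

```python
def lookup_candidate_beams(beam_db, regions, position):
    """Return the candidate beam indices stored for the region containing
    the UE's current position along the trajectory, or None if outside."""
    for i, (start, end) in enumerate(regions):
        if start <= position < end:
            return beam_db[i]
    return None  # position outside the recorded trajectory

# illustrative region boundaries (metres along the trajectory) and beams;
# adjacent regions share beams because nearby channels are similar
regions = [(0, 10), (10, 25), (25, 40)]
beam_db = [[3, 7], [7, 12], [12, 14]]
beams = lookup_candidate_beams(beam_db, regions, 12.0)
```

Searching only the few beams returned for the current region, rather than the full codebook, is what reduces the beam-search overhead.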
In this paper we present the results of an empirical power comparison of 40 goodness-of-fit tests for the univariate Laplace distribution, carried out using Monte Carlo simulations with sample sizes $n = 20, 50, 100, 200$, significance levels $\alpha = 0.01, 0.05, 0.10$, and 400 alternatives consisting of asymmetric and symmetric light- and heavy-tailed distributions taken as special cases from 11 models. In addition to the unmatched scope of our study, a further contribution is a novel design for the selection of alternatives: the 400 alternatives consist of 20 specific cases of 20 submodels drawn from the main 11 models, and for each submodel the 20 specific cases correspond to parameter values chosen to cover the full power range. An analysis of the results leads to a recommendation of the best tests for five different groupings of the alternative distributions. A real-data example is also presented, in which an appropriate goodness-of-fit test for the univariate Laplace distribution is applied to weekly log-returns of Amazon stock over a recent four-year period.
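The Monte Carlo procedure behind such a power study can be sketched as follows, here with a Lilliefors-type Kolmogorov-Smirnov statistic for the Laplace null (one of many possible test statistics, chosen for illustration) and a standard normal alternative:

```python
import numpy as np

def ks_stat_laplace(x):
    """Lilliefors-type KS distance to a Laplace law fitted by MLE
    (median for location, mean absolute deviation for scale)."""
    mu = np.median(x)
    b = np.mean(np.abs(x - mu))
    z = np.sort(x)
    cdf = np.where(z < mu,
                   0.5 * np.exp((z - mu) / b),
                   1.0 - 0.5 * np.exp(-(z - mu) / b))
    n = len(x)
    d_plus = np.max(np.arange(1, n + 1) / n - cdf)
    d_minus = np.max(cdf - np.arange(0, n) / n)
    return max(d_plus, d_minus)

rng = np.random.default_rng(0)
n, reps, alpha = 50, 2000, 0.05
# null distribution of the statistic (parameters re-fitted on every draw)
null_stats = np.array([ks_stat_laplace(rng.laplace(size=n)) for _ in range(reps)])
crit = np.quantile(null_stats, 1 - alpha)
# empirical power against a standard normal alternative
alt_stats = np.array([ks_stat_laplace(rng.normal(size=n)) for _ in range(reps)])
power = np.mean(alt_stats > crit)
```

Repeating this over all sample sizes, levels, tests, and the 400 alternatives yields a power table of the kind the study summarizes.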
Harnessing parity-time (PT) symmetry with balanced gain and loss profiles has created a variety of opportunities in electronics, from wireless energy transfer to telemetry sensing and topological defect engineering. However, existing implementations often employ ad hoc approaches at low operating frequencies and cannot accommodate large-scale integration. Here, we report a fully integrated realization of PT-symmetry in a standard complementary metal-oxide-semiconductor technology. Our work demonstrates salient PT-symmetry features such as phase transition, as well as the ability to manipulate broadband microwave generation and propagation beyond the limitations of existing schemes. The system achieves 2.1 times the bandwidth and a 30 percent noise reduction relative to conventional microwave generation in oscillatory mode, and displays large non-reciprocal microwave transport from 2.75 to 3.10 gigahertz in non-oscillatory mode due to enhanced nonlinearities. This approach could enrich integrated circuit (IC) design methodology beyond well-established performance limits and enable the use of scalable IC technology to study topological effects in high-dimensional non-Hermitian systems.
We propose application-layer coding schemes to recover lost data in delay-sensitive uplink (sensor-to-gateway) communications in the Internet of Things. Built on an approach that combines retransmissions and forward erasure correction, the proposed schemes' salient features include low computational complexity and the ability to exploit sporadic receiver feedback for efficient data recovery. Reduced complexity is achieved by keeping the number of coded transmissions as low as possible and by devising a mechanism to compute the optimal degree of a coded packet in O(1). Our major contributions are: (a) An enhancement to an existing scheme called windowed coding, whose complexity is greatly reduced and data recovery performance is improved by our proposed approach. (b) A technique that combines elements of windowed coding with a new feedback structure to further reduce the coding complexity and improve data recovery. (c) A coded forwarding scheme in which a relay node provides further resilience against packet loss by overhearing source-to-destination communications and making forwarding decisions based on overheard information.
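As a minimal illustration of forward erasure correction over a window, the sketch below XORs a window of equal-length packets into a single parity packet and recovers one lost packet from it. The real schemes compute an optimal packet degree in O(1) and exploit sporadic feedback, which this toy deliberately omits:

```python
def xor_bytes(a, b):
    """Byte-wise XOR of two equal-length packets."""
    return bytes(x ^ y for x, y in zip(a, b))

def encode_parity(window):
    """Single coded packet: XOR of every packet in the window."""
    parity = window[0]
    for pkt in window[1:]:
        parity = xor_bytes(parity, pkt)
    return parity

def recover_lost(received, parity):
    """Recover the single lost packet (marked None) by XOR-ing the
    parity with every packet that did arrive."""
    lost = parity
    for pkt in received:
        if pkt is not None:
            lost = xor_bytes(lost, pkt)
    return lost

window = [b"pkt0", b"pkt1", b"pkt2", b"pkt3"]
parity = encode_parity(window)
received = [window[0], None, window[2], window[3]]   # packet 1 lost
recovered = recover_lost(received, parity)
```

A degree-1 parity like this recovers at most one loss per window; the windowed-coding schemes in the paper vary the degree and window position to handle burstier loss patterns.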
Traditionally, extracting patterns from eye movement data relies on statistics of different macro-events such as fixations and saccades. This requires an additional preprocessing step to separate the eye movement subtypes, often with a number of parameters on which the classification results depend. Moreover, definitions of such macro-events vary between researchers. We propose applying a new class of features to the quantitative analysis of the structure of personal eye movement trajectories. This class of features, based on algebraic topology, allows extracting patterns from different modalities of gaze, such as time series of coordinates and amplitudes, heatmaps, and point clouds, in a unified way at all scales from micro to macro. We experimentally demonstrate that the new features are competitive with traditional ones and show significant synergy when used together with them for the person authentication task on a recently published eye movement trajectory dataset.
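As a concrete example of a topological feature that operates directly on a gaze point cloud, the sketch below computes 0-dimensional persistence: component death times equal the edge lengths of the Euclidean minimum spanning tree (computed here with Prim's algorithm). The paper's feature class also covers higher homology dimensions and other gaze modalities, which this sketch does not:

```python
import numpy as np

def h0_persistence(points):
    """0-dimensional persistence of a point cloud: component death times
    equal the Euclidean minimum-spanning-tree edge lengths (Prim)."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    n = len(points)
    in_tree = np.zeros(n, dtype=bool)
    in_tree[0] = True
    best = d[0].copy()      # cheapest connection of each point to the tree
    deaths = []
    for _ in range(n - 1):
        best[in_tree] = np.inf
        j = int(np.argmin(best))
        deaths.append(best[j])
        in_tree[j] = True
        best = np.minimum(best, d[j])
    return np.sort(np.array(deaths))

# two well-separated gaze clusters -> one long-lived connected component
gaze = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1],
                 [10.0, 0.0], [10.1, 0.0], [10.0, 0.1]])
deaths = h0_persistence(gaze)
```

Summary statistics of the death times (total persistence, longest lifetime, etc.) then serve as scale-free features for tasks such as person authentication.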
We consider a cell-free massive multiple-input multiple-output (MIMO) system with multi-antenna access points and user equipments (UEs) over Weichselberger Rician fading channels with random phase-shifts. More specifically, we investigate the uplink spectral efficiency (SE) for two pragmatic processing schemes: 1) the fully centralized processing scheme with global minimum mean square error (MMSE) or maximum ratio (MR) combining; 2) the large-scale fading decoding (LSFD) scheme with local MMSE or MR combining. To improve the system SE performance, we propose a practical uplink precoding scheme based only on the eigenbasis of the UE-side correlation matrices. Moreover, we derive novel closed-form SE expressions characterizing the LSFD scheme with MR combining. Numerical results validate the accuracy of our derived expressions and show that the proposed precoding scheme significantly improves the SE performance compared with the scenario without any precoding.
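A minimal sketch of building a precoder from the eigenbasis of a UE-side correlation matrix, assuming a Hermitian correlation matrix R and an illustrative rank choice k (the rank selection rule is our assumption, not the paper's):

```python
import numpy as np

def eigen_precoder(R, k):
    """Precoding matrix built from the k dominant eigenvectors of the
    Hermitian UE-side spatial correlation matrix R."""
    vals, vecs = np.linalg.eigh(R)        # eigenvalues in ascending order
    order = np.argsort(vals)[::-1][:k]    # indices of the k largest
    return vecs[:, order]

R = np.diag([5.0, 1.0, 0.1])              # toy 3-antenna correlation matrix
F = eigen_precoder(R, 1)                  # steer along the dominant eigenmode
```

Because the precoder depends only on the correlation eigenbasis, it requires no instantaneous channel state information at the UE, which is what makes the scheme practical.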
Space robotics applications, such as Active Space Debris Removal (ASDR), require representative testing before launch. A commonly used approach to emulating the microgravity environment in space is air-bearing platforms on flat floors, such as the European Space Agency's Orbital Robotics and GNC Lab (ORGL). This work proposes a control architecture for a floating platform at the ORGL equipped with eight solenoid-valve-based thrusters and one reaction wheel. The control architecture consists of two main components: a trajectory planner that finds optimal trajectories connecting two states, and a trajectory follower that tracks any physically feasible trajectory. The controller is first evaluated in simulation, achieving a 100% success rate in finding and following trajectories to the origin in a Monte Carlo test. Individual trajectories are also successfully followed by the physical system, and we showcase the controller's ability to reject disturbances and follow a straight-line trajectory to within tens of centimeters.
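The trajectory-follower component can be illustrated with a simple PD law on a double-integrator model of one platform axis; the gains, horizon, and model are illustrative stand-ins, not the paper's tuned controller or thruster allocation:

```python
def pd_follower(state, ref, kp=2.0, kd=1.0):
    """PD trajectory-following law for one axis of a double-integrator
    model of the floating platform; gains are illustrative."""
    pos, vel = state
    ref_pos, ref_vel = ref
    return kp * (ref_pos - pos) + kd * (ref_vel - vel)

# drive the platform from x = 1 m back to the origin
x, v, dt = 1.0, 0.0, 0.05
for _ in range(400):                        # 20 s of simulated time
    a = pd_follower((x, v), (0.0, 0.0))     # commanded acceleration
    v += a * dt
    x += v * dt
```

On the real platform the commanded acceleration would still have to be mapped to on/off solenoid-valve thruster firings and the reaction wheel, which is the harder part this sketch omits.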
When analyzing human motion videos, the output jitters from existing pose estimators are highly unbalanced, with estimation errors that vary across frames. Most frames in a video are relatively easy to estimate and suffer only from slight jitter. In contrast, for rarely seen or occluded actions, the estimated positions of multiple joints deviate largely from the ground-truth values over a consecutive sequence of frames, producing significant jitter. To tackle this problem, we propose attaching a dedicated temporal-only refinement network, named SmoothNet, to existing pose estimators for jitter mitigation. Unlike existing learning-based solutions that employ spatio-temporal models to co-optimize per-frame precision and temporal smoothness over all joints, SmoothNet models the natural smoothness of body movements by learning the long-range temporal relations of every joint without considering the noisy correlations among joints. With a simple yet effective motion-aware fully-connected network, SmoothNet significantly improves the temporal smoothness of existing pose estimators and, as a side effect, enhances the estimation accuracy on those challenging frames. Moreover, as a temporal-only model, SmoothNet has the unique advantage of strong transferability across various types of estimators and datasets. Comprehensive experiments on five datasets with eleven popular backbone networks across 2D and 3D pose estimation and body recovery tasks demonstrate the efficacy of the proposed solution. Code is available at //github.com/cure-lab/SmoothNet.
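A temporal-only refinement layer of this kind can be sketched as a linear map over sliding windows of a single joint's trajectory; the uniform weights below stand in for SmoothNet's learned weights and motion-aware architecture, which this sketch does not reproduce:

```python
import numpy as np

def temporal_refine(traj, weights, win=8):
    """Temporal-only refinement of one joint's 1-D trajectory: a linear
    map over each sliding window replaces the centre frame. `weights`
    stands in for a trained layer; uniform weights reduce it to a
    moving average."""
    out = traj.copy()
    for t in range(len(traj) - win + 1):
        out[t + win // 2] = traj[t:t + win] @ weights
    return out

rng = np.random.default_rng(0)
t = np.linspace(0, 2 * np.pi, 200)
noisy = np.sin(t) + 0.1 * rng.standard_normal(200)   # jittery joint track
smooth = temporal_refine(noisy, np.full(8, 1 / 8))
```

Because the layer sees only one joint's time series, the same weights can be reused across joints, estimators, and datasets, which is the transferability property the paper highlights.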
Spiking Neural Networks (SNNs) have recently emerged as a new generation of low-power deep neural networks suitable for implementation on low-power mobile/edge devices. Because such devices have limited memory, neural pruning for SNNs has been widely explored in recent years. Most existing SNN pruning works focus on shallow SNNs (2-6 layers); however, state-of-the-art SNN works propose deeper SNNs (>16 layers), which are difficult to handle with current SNN pruning methods. To scale pruning towards deep SNNs, we investigate the Lottery Ticket Hypothesis (LTH), which states that dense networks contain smaller subnetworks (i.e., winning tickets) that achieve performance comparable to the dense networks. Our studies of LTH reveal that winning tickets consistently exist in deep SNNs across various datasets and architectures, providing up to 97% sparsity without severe performance degradation. However, the iterative search process of LTH incurs a huge training cost when combined with the multiple timesteps of SNNs. To alleviate this search cost, we propose the Early-Time (ET) ticket, where the important weight connectivity is found from a smaller number of timesteps. The proposed ET ticket can be seamlessly combined with common pruning techniques for finding winning tickets, such as Iterative Magnitude Pruning (IMP) and Early-Bird (EB) tickets. Our experimental results show that the proposed ET ticket reduces search time by up to 38% compared with IMP or EB methods. Code is available on GitHub.
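One round of Iterative Magnitude Pruning with weight rewinding, the baseline search that the ET ticket accelerates, might look as follows for a single weight tensor (the pruning fraction and the rewind-to-initialization choice are illustrative):

```python
import numpy as np

def imp_step(weights, init_weights, mask, prune_frac=0.2):
    """One Iterative Magnitude Pruning round: drop the prune_frac
    smallest surviving weights by magnitude, then rewind the survivors
    to their initial values (the 'winning ticket' rewind)."""
    alive = np.abs(weights[mask])
    k = int(alive.size * prune_frac)
    thresh = np.sort(alive)[k]              # (k+1)-th smallest magnitude
    new_mask = mask & (np.abs(weights) >= thresh)
    return init_weights * new_mask, new_mask

rng = np.random.default_rng(0)
init = rng.standard_normal(100)
trained = init + 0.1 * rng.standard_normal(100)  # stand-in for trained weights
rewound, mask = imp_step(trained, init, np.ones(100, dtype=bool))
```

The expensive part is that each round requires retraining the rewound network; in an SNN every training pass is multiplied by the number of timesteps, which is the cost the ET ticket cuts by searching over fewer timesteps.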
With the rapid growth in the number, sophistication, and diversity of intrusions, traditional belt barrier systems now face the major challenge of providing high, reliable coverage quality to expand the guarding service market. Recent efforts aim at constructing a belt barrier by deploying bistatic radars on a specific line, regardless of limitations on deployment locations, so that the width of the barrier never falls below a specific threshold while the total bistatic radar placement cost is minimized; this is referred to as the Minimum Cost Linear Placement (MCLP) problem. The existing solutions are heuristic, and their validity is tightly bound to the barrier width parameter: they only work for a fixed barrier-width value. In this work, we propose an optimal solution, referred to as Opt_MCLP, for the "open MCLP problem" that works for the full range of barrier widths. Through rigorous theoretical analysis and experimentation, we demonstrate that the proposed algorithms perform well in terms of placement-cost reduction and barrier coverage guarantees.
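Under the simplifying assumption that keeping consecutive radar spacing below some threshold keeps the barrier width above its threshold (the real MCLP uses bistatic transmitter-receiver geometry, which this sketch ignores), the minimum-cost selection can be written as a quadratic-time dynamic program over sorted candidate positions:

```python
import math

def min_cost_placement(positions, costs, dmax):
    """Minimum total cost to pick sites from sorted candidate positions
    (first and last mandatory) so that consecutive spacing never
    exceeds dmax; returns inf if no feasible selection exists."""
    n = len(positions)
    best = [math.inf] * n
    best[0] = costs[0]
    for j in range(1, n):
        for i in range(j):
            if positions[j] - positions[i] <= dmax:
                best[j] = min(best[j], best[i] + costs[j])
    return best[-1]

# candidates at 0, 1, 2, 4 with costs 1, 5, 1, 1 and spacing limit 2:
# the cheapest feasible chain is 0 -> 2 -> 4
cost = min_cost_placement([0, 1, 2, 4], [1, 5, 1, 1], dmax=2)
```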
Networked sensors and actuators, now prevalent in real-world systems such as smart buildings, factories, power plants, and data centers, generate substantial amounts of multivariate time series data. This rich sensor data can be continuously monitored for intrusion events through anomaly detection. However, conventional threshold-based anomaly detection methods are inadequate due to the dynamic complexities of these systems, while supervised machine learning methods cannot exploit the large volumes of data because labeled data are lacking. On the other hand, current unsupervised machine learning approaches have not fully exploited the spatial-temporal correlations and other dependencies among the multiple variables (sensors/actuators) in the system for detecting anomalies. In this work, we propose an unsupervised multivariate anomaly detection method based on Generative Adversarial Networks (GANs). Instead of treating each data stream independently, our proposed MAD-GAN framework considers the entire variable set concurrently to capture the latent interactions among the variables. We also fully exploit both the generator and discriminator produced by the GAN, using a novel anomaly score called the DR-score to detect anomalies through discrimination and reconstruction. We have tested MAD-GAN on two recent datasets collected from real-world CPS: the Secure Water Treatment (SWaT) and the Water Distribution (WADI) datasets. Our experimental results show that MAD-GAN is effective in reporting anomalies caused by various cyber-intrusions in these complex real-world systems.
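The idea of scoring by both discrimination and reconstruction can be sketched as a convex combination of the two signals; the stand-in generator and discriminator below are toys, and the weighting `lam` is our assumption rather than the paper's exact DR-score definition:

```python
import numpy as np

def dr_score(x, reconstruct, discriminate, lam=0.5):
    """DR-score-style anomaly score: convex combination of reconstruction
    error and discriminator-based loss; higher means more anomalous."""
    rec_err = np.abs(x - reconstruct(x)).mean(axis=-1)
    disc_loss = 1.0 - discriminate(x)   # low "normal" confidence -> anomalous
    return lam * rec_err + (1.0 - lam) * disc_loss

# toy stand-ins for the trained GAN: normal sensor readings live near zero
reconstruct = lambda x: np.zeros_like(x)                    # best normal guess
discriminate = lambda x: np.exp(-np.abs(x).mean(axis=-1))   # "normal" confidence

normal = np.array([[0.1, -0.05, 0.02]])   # typical multivariate reading
attack = np.array([[3.0, 2.5, 4.0]])      # reading during an intrusion
```

In the actual framework, reconstruction uses the generator (finding the latent code that best reproduces the input window) and discrimination uses the trained discriminator's output, but the combination step is the same shape.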