Sixth generation (6G) cellular systems are expected to extend their operational range to sub-terahertz (THz) frequencies between 100 and 300 GHz due to the broad unexploited spectrum therein. A proper channel model is needed to accurately describe spatial and temporal channel characteristics and to faithfully generate channel impulse responses at sub-THz frequencies. This paper studies channel spatial statistics, such as the number of spatial clusters and the cluster power distribution, based on recent radio propagation measurements conducted at 142 GHz in an urban microcell (UMi) scenario. We observe one to four spatial clusters at most of the 28 measured locations. A detailed spatial statistical multiple-input multiple-output (MIMO) channel generation procedure is introduced based on the derived empirical channel statistics. We find that beamforming provides better spectral efficiency than spatial multiplexing in the line-of-sight (LOS) scenario due to the boresight path, and that two spatial streams usually offer the highest spectral efficiency at most non-line-of-sight (NLOS) locations due to the limited number of spatial clusters.
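As a rough, self-contained illustration of the beamforming-versus-multiplexing comparison above (not the paper's channel generation procedure; the cluster counts, array sizes, and SNR below are hypothetical), the following sketch builds a few-cluster MIMO channel and compares the spectral efficiency of one, two, and four equal-power eigenmode streams:

```python
# Minimal sketch: clustered narrowband MIMO channel, then spectral efficiency of
# single-stream beamforming vs. k-stream spatial multiplexing on its eigenmodes.
import numpy as np

rng = np.random.default_rng(0)

def clustered_channel(n_rx=8, n_tx=8, n_clusters=2, rays_per_cluster=10):
    """Sum-of-rays channel with a small number of spatial clusters (ULA arrays)."""
    H = np.zeros((n_rx, n_tx), dtype=complex)
    powers = rng.dirichlet(np.ones(n_clusters))            # random cluster power split
    for p in powers:
        aod, aoa = rng.uniform(-np.pi, np.pi, 2)            # cluster center angles
        for _ in range(rays_per_cluster):
            phi_t = aod + 0.05 * rng.standard_normal()      # small intra-cluster spread
            phi_r = aoa + 0.05 * rng.standard_normal()
            at = np.exp(1j * np.pi * np.arange(n_tx) * np.sin(phi_t)) / np.sqrt(n_tx)
            ar = np.exp(1j * np.pi * np.arange(n_rx) * np.sin(phi_r)) / np.sqrt(n_rx)
            alpha = np.sqrt(p / rays_per_cluster) * (rng.standard_normal() + 1j * rng.standard_normal()) / np.sqrt(2)
            H += alpha * np.outer(ar, at.conj())
    return H * np.sqrt(n_rx * n_tx)                         # include array gain scaling

def spectral_efficiency(H, snr_db, n_streams):
    """Equal power over the n_streams strongest eigenmodes (n_streams=1 -> beamforming)."""
    snr = 10 ** (snr_db / 10)
    s = np.linalg.svd(H, compute_uv=False)[:n_streams]
    return float(np.sum(np.log2(1 + snr / n_streams * s**2)))

H = clustered_channel(n_clusters=2)
for k in (1, 2, 4):
    print(k, "stream(s):", round(spectral_efficiency(H, snr_db=10, n_streams=k), 2), "bit/s/Hz")
```

With only one or two strong clusters, adding streams beyond the cluster count mostly splits power across weak eigenmodes, which is the effect the measurement results point to.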
Estimation of the spatial heterogeneity in crime incidence across an entire city is an important step towards reducing crime and increasing our understanding of the physical and social functioning of urban environments. This is a difficult modeling endeavor since crime incidence can vary smoothly across space and time but there also exist physical and social barriers that result in discontinuities in crime rates between different regions within a city. A further difficulty is that there are different levels of resolution that can be used for defining regions of a city in order to analyze crime. To address these challenges, we develop a Bayesian non-parametric approach for the clustering of urban areal units at different levels of resolution simultaneously. Our approach is evaluated with an extensive synthetic data study and then applied to the estimation of crime incidence at various levels of resolution in the city of Philadelphia.
Recent research has investigated decode-and-forward (DF) relaying for mixed radio frequency (RF) and terahertz (THz) wireless links with zero-boresight pointing errors. In this letter, we analyze the performance of fixed-gain amplify-and-forward (AF) relaying for the RF-THz link to interface an access network operating on RF technology with wireless THz transmissions. We derive the probability density function (PDF) and cumulative distribution function (CDF) of the end-to-end signal-to-noise ratio (SNR) for the relay-assisted system in terms of the bivariate Fox's H function, considering $\alpha$-$\mu$ fading with non-zero boresight pointing errors for the THz link and the $\alpha$-$\kappa$-$\mu$ shadowed ($\alpha$-KMS) fading model for the RF link. Using the derived PDF and CDF, we present exact analytical expressions for the outage probability, average bit-error rate (BER), and ergodic capacity of the considered system. We also analyze the outage probability and average BER asymptotically for better insight into the system behavior at high SNR. We use simulations to compare the performance of AF relaying with a semi-blind gain factor against the recently proposed DF relaying for THz-RF transmissions.
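For orientation, the end-to-end SNR of a fixed-gain (semi-blind) AF relay is commonly written in the following baseline form, from which statistics such as those above are typically derived; the letter's exact notation and constants may differ:

```latex
% Standard fixed-gain AF relaying baseline: \gamma_1, \gamma_2 are the per-hop SNRs,
% G is the fixed relay gain, P_2 the relay transmit power, and N_1 the relay noise power.
\gamma_{\mathrm{e2e}} \;=\; \frac{\gamma_1\,\gamma_2}{\gamma_2 + C},
\qquad
C \;=\; \frac{P_2}{G^{2} N_1}.
```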
Agile quadrotor flight in challenging environments has the potential to revolutionize shipping, transportation, and search-and-rescue applications. Nonlinear model predictive control (NMPC) has recently shown promising results for agile quadrotor control, but it relies on highly accurate models for maximum performance. Hence, model uncertainties in the form of unmodeled complex aerodynamic effects, varying payloads, and parameter mismatch will degrade overall system performance. In this paper, we propose L1-NMPC, a novel hybrid adaptive NMPC that learns model uncertainties online and immediately compensates for them, drastically improving performance over the non-adaptive baseline with minimal computational overhead. The proposed architecture generalizes to many different environments, in which we evaluate wind, unknown payloads, and highly agile flight conditions. The proposed method demonstrates immense flexibility and robustness, with more than 90% tracking-error reduction over non-adaptive NMPC under large unknown disturbances and without any gain tuning. In addition, the same controller with identical gains can accurately fly highly agile racing trajectories with top speeds of 70 km/h, offering tracking performance improvements of around 50% relative to the non-adaptive NMPC baseline.
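As a toy illustration of the L1-adaptation idea (state predictor, fast adaptation law, and low-pass-filtered compensation wrapped around a baseline controller), the sketch below uses a scalar velocity model with a hypothetical disturbance; it is not the quadrotor NMPC itself, and the plant, gains, and disturbance profile are illustrative assumptions:

```python
# Scalar toy example of an L1-style adaptive augmentation around a baseline controller.
import numpy as np

dt, T = 0.002, 5.0
steps = int(T / dt)
a_m, gamma, tau_f = -10.0, 2000.0, 0.05   # predictor pole, adaptation gain, filter time constant

def simulate(adaptive):
    v, v_hat, d_hat, d_filt = 0.0, 0.0, 0.0, 0.0
    v_ref, err = 1.0, []
    for k in range(steps):
        d = 2.0 + 1.0 * np.sin(2 * np.pi * 0.5 * k * dt)       # unknown disturbance
        u_base = 5.0 * (v_ref - v)                              # baseline tracking controller
        u = u_base - (d_filt if adaptive else 0.0)              # compensate with filtered estimate
        v += dt * (u + d)                                       # true plant: v' = u + d
        v_tilde = v_hat - v                                     # prediction error
        v_hat += dt * (a_m * v_tilde + u + d_hat)               # state predictor
        d_hat += dt * (-gamma * v_tilde)                        # fast adaptation law
        d_filt += dt / tau_f * (d_hat - d_filt)                 # low-pass filter on the estimate
        err.append(abs(v_ref - v))
    return np.mean(err)

print("mean tracking error, baseline :", round(simulate(False), 4))
print("mean tracking error, with L1  :", round(simulate(True), 4))
```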
Integrated Sensing And Communication (ISAC) forms a symbiosis between the human need for communication and the need for increased productivity by extracting environmental information while leveraging the communication network. As multiple sensor modalities already create a perception of the environment, an investigation into the advantages of ISAC compared with such modalities is required. Therefore, we introduce MaxRay, an ISAC framework that allows communication, sensing, and additional sensor modalities to be simulated jointly. Emphasizing the challenges of creating such sensing networks, we introduce the propagation properties required for sensing and how they are leveraged. To compare the performance of different sensing techniques, we analyze four metrics commonly used in different fields and evaluate their advantages and disadvantages for sensing. We show that a metric based on prominence is suitable for covering most algorithms. Furthermore, we highlight the need for clutter removal algorithms by using two standard clutter removal techniques to detect a target in a typical industrial scenario. Overall, we demonstrate a versatile framework that allows automatically labeled datasets to be created for investigating a large variety of tasks.
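To make the prominence-based detection and clutter-removal steps concrete, here is a small illustrative sketch (not MaxRay itself; the range profile, clutter positions, and threshold are hypothetical) that removes a static background by mean subtraction and then detects a target with SciPy's prominence-based peak finder:

```python
# Detect a weak target in a range profile after simple mean-background clutter removal,
# using a prominence criterion on the residual profile.
import numpy as np
from scipy.signal import find_peaks

rng = np.random.default_rng(1)
n_bins, n_frames = 256, 50

# Static clutter (e.g., machinery reflections) plus noise, identical across frames.
clutter = np.zeros(n_bins)
clutter[[40, 90, 180]] = [5.0, 3.0, 4.0]
frames = clutter + 0.1 * rng.standard_normal((n_frames, n_bins))

# A weak target appears in the last frame at range bin 120.
frames[-1, 120] += 1.5

# Clutter removal: subtract the mean of the earlier (target-free) frames,
# which cancels the static clutter peaks that would otherwise dominate detection.
background = frames[:-1].mean(axis=0)
residual = frames[-1] - background

# Prominence-based detection: keep peaks that stand out from their local surroundings.
peaks, props = find_peaks(residual, prominence=1.0)
print("detected bins:", peaks, "prominences:", np.round(props["prominences"], 2))
```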
Demonstrating quantum advantage requires experimental implementation of a computational task that is hard to achieve using state-of-the-art classical systems. One approach is to perform sampling from a probability distribution associated with a class of highly entangled many-body wavefunctions. It has been suggested that this approach can be certified with the Linear Cross-Entropy Benchmark (XEB). We critically examine this notion. First, in a "benign" setting where an honest implementation of noisy quantum circuits is assumed, we characterize the conditions under which the XEB approximates the fidelity. Second, in an "adversarial" setting where all possible classical algorithms are considered for comparison, we show that achieving relatively high XEB values does not imply faithful simulation of quantum dynamics. We present an efficient classical algorithm that, with 1 GPU within 2s, yields high XEB values, namely 2-12% of those obtained in experiments. By identifying and exploiting several vulnerabilities of the XEB, we achieve high XEB values without full simulation of quantum circuits. Remarkably, our algorithm features better scaling with the system size than noisy quantum devices for commonly studied random circuit ensembles. To quantitatively explain the success of our algorithm and the limitations of the XEB, we use a theoretical framework in which the average XEB and fidelity are mapped to statistical models. We illustrate the relation between the XEB and the fidelity for quantum circuits in various architectures, with different gate choices, and in the presence of noise. Our results show that XEB's utility as a proxy for fidelity hinges on several conditions, which must be checked in the benign setting but cannot be assumed in the adversarial setting. Thus, the XEB alone has limited utility as a benchmark for quantum advantage. We discuss ways to overcome these limitations.
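For reference, the linear XEB estimator used in this line of work is F_XEB = 2^n ⟨p_ideal(x_i)⟩ − 1, averaged over the sampled bitstrings x_i. The short sketch below evaluates it on a toy Porter-Thomas-like distribution (hypothetical; not the paper's circuit ensembles) to show the two reference points: roughly 1 for an ideal sampler and roughly 0 for a fully depolarized one.

```python
# Linear cross-entropy benchmark (XEB) estimator and a toy sanity check.
import numpy as np

def linear_xeb(sampled_bitstrings, p_ideal, n_qubits):
    """sampled_bitstrings: integer-encoded samples; p_ideal: length-2**n array of ideal probabilities."""
    return (2**n_qubits) * np.mean(p_ideal[sampled_bitstrings]) - 1.0

rng = np.random.default_rng(0)
n = 10
p = rng.exponential(size=2**n)          # Porter-Thomas-like weights (toy stand-in for a random circuit)
p /= p.sum()

ideal_samples = rng.choice(2**n, size=50_000, p=p)      # perfect sampler from the ideal distribution
uniform_samples = rng.integers(0, 2**n, size=50_000)    # fully depolarized (uniform) sampler
print(linear_xeb(ideal_samples, p, n))    # ~2^n * sum(p^2) - 1, close to 1 for Porter-Thomas
print(linear_xeb(uniform_samples, p, n))  # close to 0
```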
Due to the redundant nature of DNA synthesis and sequencing technologies, a basic model for a DNA storage system is a multi-draw "shuffling-sampling" channel. In this model, a random number of noisy copies of each sequence is observed at the channel output. Recent works have characterized the capacity of such a DNA storage channel under different noise and sequencing models, relying on sophisticated typicality-based approaches for the achievability. Here, we consider a multi-draw DNA storage channel in the setting of noise corruption by a binary erasure channel. We show that, in this setting, the capacity is achieved by linear coding schemes. This leads to a considerably simpler derivation of the capacity expression of a multi-draw DNA storage channel than existing results in the literature.
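The following toy simulation sketches the multi-draw shuffling-sampling channel with erasures described above (illustrative only; the Poisson draw model and the parameters are assumptions, not necessarily the paper's exact setting):

```python
# Toy multi-draw "shuffling-sampling" channel with a binary erasure channel per read.
import numpy as np

rng = np.random.default_rng(2)

def shuffling_sampling_bec(sequences, mean_draws=2.0, erasure_prob=0.1):
    """Each stored 0/1 sequence is read a random number of times; every read passes
    through a BEC (erased bits marked -1); reads are returned unordered, without
    any information about which stored sequence they came from."""
    reads = []
    for seq in sequences:
        for _ in range(rng.poisson(mean_draws)):
            read = np.where(rng.random(seq.size) < erasure_prob, -1, seq)
            reads.append(read)
    rng.shuffle(reads)
    return reads

msgs = [rng.integers(0, 2, size=20) for _ in range(5)]   # 5 stored length-20 sequences
out = shuffling_sampling_bec(msgs)
print(len(out), "unordered noisy reads; first read:", out[0])
```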
This paper proposes a macroscopic model to describe the equilibrium distribution of passenger arrivals for the morning commute problem in a congested urban rail transit system. We use a macroscopic train operation sub-model developed by Seo et al. (2017a,b) to express the interaction between the dynamics of passengers and trains in a simplified manner while maintaining their essential physical relations. The equilibrium conditions of the proposed model are derived and a solution method is provided. The characteristics of the equilibrium are then examined through analytical discussion and numerical examples. As an application of the proposed model, we analyze a simple time-dependent timetable optimization problem with equilibrium constraints and reveal that a "capacity increasing paradox" exists such that a higher dispatch frequency can increase the equilibrium cost. Furthermore, insights into the design of the timetable are obtained, and the influence of the timetable on passengers' equilibrium travel costs is evaluated.
In this work, we propose a generally applicable transformation unit for visual recognition with deep convolutional neural networks. This transformation explicitly models channel relationships with explainable control variables. These variables determine the neuron behaviors of competition or cooperation, and they are jointly optimized with the convolutional weights towards more accurate recognition. In Squeeze-and-Excitation (SE) networks, the channel relationships are implicitly learned by fully connected layers, and the SE block is integrated at the block level. We instead introduce a channel normalization layer to reduce the number of parameters and the computational complexity. This lightweight layer incorporates a simple l2 normalization, making our transformation unit applicable at the operator level with little increase in additional parameters. Extensive experiments demonstrate the effectiveness of our unit with clear margins on many vision tasks, e.g., image classification on ImageNet, object detection and instance segmentation on COCO, and video classification on Kinetics.
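A minimal sketch of such a lightweight channel-normalization unit is given below, assuming per-channel learnable scale/gate/bias variables and an l2 normalization across the channel dimension; this is an illustrative interpretation, not necessarily the paper's exact formulation:

```python
# Lightweight channel-normalization unit: per-channel embeddings are l2-normalized
# across channels and turned into a gating signal with a handful of learnable scalars.
import torch
import torch.nn as nn

class ChannelNorm(nn.Module):
    def __init__(self, channels, eps=1e-5):
        super().__init__()
        # One scalar per channel: far fewer parameters than SE's fully connected layers.
        self.alpha = nn.Parameter(torch.ones(1, channels, 1, 1))   # embedding weight
        self.gamma = nn.Parameter(torch.zeros(1, channels, 1, 1))  # gating weight
        self.beta = nn.Parameter(torch.zeros(1, channels, 1, 1))   # gating bias
        self.eps = eps

    def forward(self, x):                                          # x: (N, C, H, W)
        embedding = self.alpha * x.pow(2).mean(dim=(2, 3), keepdim=True).sqrt()
        # l2 normalization across the channel dimension (competition/cooperation between channels).
        norm = embedding / (embedding.pow(2).mean(dim=1, keepdim=True) + self.eps).sqrt()
        gate = 1.0 + torch.tanh(self.gamma * norm + self.beta)
        return x * gate

x = torch.randn(2, 64, 32, 32)
print(ChannelNorm(64)(x).shape)   # torch.Size([2, 64, 32, 32])
```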
A major challenge for video captioning is to combine audio and visual cues. Existing multi-modal fusion methods have shown encouraging results in video understanding. However, the temporal structures of multiple modalities at different granularities are rarely explored, and how to selectively fuse the multi-modal representations at different levels of details remains uncharted. In this paper, we propose a novel hierarchically aligned cross-modal attention (HACA) framework to learn and selectively fuse both global and local temporal dynamics of different modalities. Furthermore, for the first time, we validate the superior performance of the deep audio features on the video captioning task. Finally, our HACA model significantly outperforms the previous best systems and achieves new state-of-the-art results on the widely used MSR-VTT dataset.
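As a generic point of reference (not the HACA architecture itself), the following sketch shows a basic cross-modal attention block in which features of one modality query those of another and the attended context is fused back; the feature dimensions and sequence lengths are hypothetical:

```python
# Generic cross-modal attention: one modality's features attend over another's,
# and the attended context is concatenated and projected back to the query stream.
import torch
import torch.nn as nn

class CrossModalAttention(nn.Module):
    def __init__(self, dim, n_heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.fuse = nn.Linear(2 * dim, dim)

    def forward(self, query_feats, context_feats):
        # query_feats: (B, Tq, D), e.g. visual; context_feats: (B, Tc, D), e.g. audio
        attended, _ = self.attn(query_feats, context_feats, context_feats)
        return self.fuse(torch.cat([query_feats, attended], dim=-1))

visual = torch.randn(2, 30, 256)    # hypothetical visual feature sequence
audio = torch.randn(2, 100, 256)    # hypothetical audio feature sequence
print(CrossModalAttention(256)(visual, audio).shape)   # torch.Size([2, 30, 256])
```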
Esports tournaments, such as Dota 2's The International (TI), attract millions of spectators who watch broadcasts on online streaming platforms, communicate, and share their experience and emotions. Unlike traditional streams, tournament broadcasts lack a streamer figure to whom spectators can appeal directly. Using topic modelling and cross-correlation analysis of more than three million messages from 86 games of TI7, we uncover the main topical and temporal patterns of communication. First, we disentangle the contextual meanings of emotes and memes, which play a salient role in communication, and present a semantic map of meta-topics in streaming slang. Second, our analysis shows the prevalence of event-driven game communication during tournament broadcasts and identifies particular topics associated with event peaks. Third, we show that "copypasta" cascades and other related practices, while occupying a significant share of messages, are strongly associated with periods of lower in-game activity. Based on this analysis, we propose design ideas to support different modes of spectator communication.
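Below is a minimal sketch of the kind of lagged cross-correlation analysis mentioned above, run on synthetic series rather than the TI7 chat data (the event and chat-response models here are hypothetical):

```python
# Lagged cross-correlation between an in-game event series and chat-message rate.
import numpy as np

rng = np.random.default_rng(3)
n = 600                                                        # 10 minutes at 1-second bins
events = (rng.random(n) < 0.02).astype(float)                  # sparse in-game events
kernel = np.exp(-((np.arange(20) - 5) ** 2) / 8.0)             # chat reacts ~5 s after an event
chat = np.convolve(events, kernel)[:n] + 0.1 * rng.standard_normal(n)

def lagged_corr(x, y, max_lag=30):
    """Correlation of x at time t with y at time t+lag, for lag = 0..max_lag."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    return {lag: float(np.mean(x[:len(x) - lag] * y[lag:])) for lag in range(max_lag + 1)}

corr = lagged_corr(events, chat)
print("peak correlation at lag (s):", max(corr, key=corr.get))
```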