Spectrally efficient communication is studied for short-reach fiber-optic links with chromatic dispersion (CD) and receivers that employ direct detection and oversampling. Achievable rates and symbol error probabilities are computed using auxiliary channels that account for memory in the sampled symbol strings. Real-alphabet bipolar and complex-alphabet symmetric modulations are shown to achieve significant energy gains over classic intensity modulation. Moreover, frequency-domain raised-cosine (FD-RC) pulses outperform time-domain RC (TD-RC) pulses in terms of spectral efficiency in two scenarios. First, if the spectrum is shared with other users, then inter-channel interference significantly reduces the TD-RC rates. Second, if a transmit filter is used to avoid interference, then the detection complexity of FD-RC and TD-RC pulses is similar but FD-RC achieves higher rates.
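As an illustration of the two pulse families, the following Python sketch (a toy with illustrative T = 1 and roll-off beta = 0.5, not the paper's setup) builds the classic FD-RC impulse response and a time-domain raised-cosine window, then compares their out-of-band energy; the strictly band-limited FD-RC spectrum is what avoids inter-channel interference when the spectrum is shared.

```python
import numpy as np

def fd_rc(t, T=1.0, beta=0.5):
    # Impulse response whose *spectrum* is a raised cosine (FD-RC).
    t = np.asarray(t, dtype=float)
    denom = 1.0 - (2.0 * beta * t / T) ** 2
    core = np.sinc(t / T) * np.cos(np.pi * beta * t / T)
    lim = (np.pi / 4.0) * np.sinc(1.0 / (2.0 * beta))  # limit at t = +/- T/(2 beta)
    safe = np.where(np.isclose(denom, 0.0), 1.0, denom)
    return np.where(np.isclose(denom, 0.0), lim, core / safe)

def td_rc(t, T=1.0):
    # Pulse that is a raised cosine *in time* (TD-RC), supported on [-T, T].
    t = np.asarray(t, dtype=float)
    return np.where(np.abs(t) <= T, 0.5 * (1.0 + np.cos(np.pi * t / T)), 0.0)

t = np.linspace(-8, 8, 4001)
f = np.fft.fftshift(np.fft.fftfreq(t.size, d=t[1] - t[0]))
for name, p in [("FD-RC", fd_rc(t)), ("TD-RC", td_rc(t))]:
    P = np.abs(np.fft.fftshift(np.fft.fft(p))) ** 2
    oob = P[np.abs(f) > 0.75].sum() / P.sum()  # energy beyond (1+beta)/(2T) = 0.75/T
    print(f"{name}: fraction of out-of-band energy = {oob:.2e}")
```

The TD-RC pulse is time-limited, so its spectrum decays only polynomially and leaks into neighboring channels, whereas the FD-RC spectrum is confined to its nominal band.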
THz communication is regarded as one of the potential key enablers for next-generation wireless systems. While THz frequency bands provide abundant bandwidth and extremely high data rates, operation at THz bands is constrained by short communication ranges and narrow pencil beams, which are highly susceptible to user mobility, beam misalignment, and channel blockage. This raises the need for novel beam tracking methods that account for the tradeoff between enhancing the received signal strength by increasing beam directivity and increasing the coverage probability by widening the beam. To address these challenges, a multi-objective optimization problem is formulated with the goal of jointly maximizing the ergodic rate and minimizing the outage probability subject to transmit power and average overhead constraints. Then, a novel parameterized beamformer with dynamic beamwidth adaptation is proposed. In addition to the precoder, an event-based beam tracking approach is introduced that reacts to outages caused by beam misalignment and dynamic blockage while maintaining a low pilot overhead. Simulation results show that the proposed beamforming scheme improves the average rate and reduces the number of communication outages caused by beam misalignment. Moreover, the proposed event-triggered channel estimation approach enables low-overhead yet reliable communication.
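The directivity-coverage tradeoff can be illustrated with a toy Monte-Carlo sketch (all numbers are assumed, and the Gaussian beam model is a stand-in for the paper's beamformer): narrower beams raise the peak gain but are more easily missed by a random pointing error.

```python
import numpy as np

rng = np.random.default_rng(0)

def beam_gain(theta, w):
    # Toy Gaussian beam: peak gain scales like 1/w (directivity),
    # rolled off by the misalignment angle theta.
    return (1.0 / w) * np.exp(-(theta / w) ** 2)

sigma = 0.05          # std of pointing error in rad (assumed)
snr0 = 10.0           # reference SNR at unit gain (assumed)
rate_floor = 0.5      # outage threshold in bit/s/Hz (assumed)
theta = rng.normal(0.0, sigma, 100_000)

for w in [0.02, 0.05, 0.1, 0.2]:
    rate = np.log2(1.0 + snr0 * beam_gain(theta, w))
    print(f"w={w:.2f}  ergodic rate={rate.mean():.2f}  "
          f"outage={np.mean(rate < rate_floor):.3f}")
```

Sweeping the beamwidth w exposes exactly the tension the multi-objective formulation addresses: ergodic rate and outage probability cannot be optimized by the same fixed beamwidth.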
In this paper, we propose a novel class of symmetric key distribution protocols that leverage basic security primitives offered by low-cost hardware chipsets containing millions of synchronized self-powered timers. The keys are derived from the temporal dynamics of a physical, micro-scale time-keeping device, which makes them immune to potential side-channel attacks, malicious tampering, and snooping. Using the behavioral model of the self-powered timers, we first show that the derived key-strings pass the randomness tests defined by the National Institute of Standards and Technology (NIST) suite. The key-strings are then used in two SPoTKD (Self-Powered Timer Key Distribution) protocols that exploit the timer's dynamics as one-way functions: (a) protocol 1 facilitates secure communication between a user and a remote server, and (b) protocol 2 facilitates secure communication between two users. We investigate the security of these protocols under the standard model and against different adversarial attacks. Using Monte-Carlo simulations, we also investigate their robustness under real-world operating conditions and propose error-correcting SPoTKD protocols to mitigate the resulting noise-related artifacts.
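A minimal sketch of the kind of randomness check involved: the NIST SP 800-22 frequency (monobit) test, here applied to a PRNG stand-in for the timer-derived key bits.

```python
import math
import numpy as np

def monobit_pvalue(bits):
    # NIST SP 800-22 frequency (monobit) test: the +/-1 partial sum of an
    # ideal random string is approximately Gaussian, so a large normalized
    # sum indicates bias.
    n = len(bits)
    s = np.sum(2 * np.asarray(bits) - 1)
    s_obs = abs(s) / math.sqrt(n)
    return math.erfc(s_obs / math.sqrt(2.0))

# Hypothetical key bits; a PRNG stands in for the timer-derived string.
rng = np.random.default_rng(1)
bits = rng.integers(0, 2, 10_000)
p = monobit_pvalue(bits)
print(f"p-value = {p:.4f} -> {'pass' if p >= 0.01 else 'fail'} at the 1% level")
```

The full NIST suite runs many such tests (runs, block frequency, entropy, and so on); the monobit test above is only the first and simplest of them.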
The IEEE 802.11p standard defines wireless technology protocols that enable vehicular communication and efficient traffic management. A major challenge in the development of this technology is ensuring communication reliability in highly dynamic vehicular environments, where the wireless channels are doubly selective, making channel estimation and tracking a relevant problem to investigate. In this paper, a novel deep learning (DL)-based weighted interpolation estimator is proposed to accurately estimate vehicular channels, especially in high-mobility scenarios. The proposed estimator is based on modifying the pilot allocation of the IEEE 802.11p standard so that higher transmission data rates are achieved. Extensive numerical experiments demonstrate that the developed estimator significantly outperforms recently proposed DL-based frame-by-frame estimators in different vehicular scenarios, while substantially reducing the overall computational complexity.
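A toy numpy sketch of the weighted-interpolation idea (the channel model, noise levels, and the weight sweep are all assumptions; in the proposed estimator the weights come from a trained DL model): least-squares estimates at two pilot-carrying symbols are combined to track the channel at a data symbol in between.

```python
import numpy as np

rng = np.random.default_rng(2)
K = 48                       # data subcarriers in an 802.11p-like OFDM symbol (illustrative)

def cgauss(scale=1.0):
    # Circularly symmetric complex Gaussian vector of length K.
    return scale * (rng.normal(size=K) + 1j * rng.normal(size=K)) / np.sqrt(2)

h_a = cgauss()                                   # channel at a pilot-carrying symbol
h_b = 0.9 * h_a + cgauss(np.sqrt(1 - 0.9**2))    # drifted channel at the next one
h_mid = 0.5 * (h_a + h_b)                        # "true" channel at the data symbol between them

# LS estimates at the two pilot symbols, modeled as truth plus estimation noise.
h_hat_a, h_hat_b = h_a + cgauss(0.3), h_b + cgauss(0.3)

# Weighted interpolation across time; the DL model would output the weight.
for w in [0.0, 0.25, 0.5, 0.75, 1.0]:
    mse = np.mean(np.abs(w * h_hat_a + (1 - w) * h_hat_b - h_mid) ** 2)
    print(f"w={w:.2f}  MSE={mse:.4f}")
```

Even in this toy setting the best weight depends on where the data symbol sits relative to the pilots and on how fast the channel drifts, which is the quantity the DL model learns to predict in high-mobility scenarios.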
We propose a joint channel estimation and signal detection approach for the uplink of non-orthogonal multiple access systems using unsupervised machine learning. We apply a Gaussian mixture model to cluster the received signals and accordingly optimize the decision regions to improve the symbol error rate (SER). We show that, when the received powers of the users are sufficiently different, the proposed clustering-based approach achieves an SER performance on a par with that of the conventional maximum-likelihood detector with full channel state information. However, unlike the proposed approach, the maximum-likelihood detector requires the transmission of a large number of pilot symbols to accurately estimate the channel. The accuracy of the clustering algorithm depends on the number of data points available at the receiver, so there exists a tradeoff between accuracy and block length. We provide a comprehensive performance analysis of the proposed approach and derive a theoretical bound on its SER as a function of the block length. Our simulation results corroborate the effectiveness of the proposed approach and verify that the derived theoretical bound predicts its SER performance well.
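A minimal sketch of the clustering-based detector for a two-user uplink with BPSK and real-valued noise (the powers, noise level, and block length are illustrative): a Gaussian mixture fitted to the superimposed constellation recovers the decision regions without pilots.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)
N = 2000                      # block length (number of received samples)
p1, p2 = 1.0, 0.25            # sufficiently different received powers (assumed)
s1 = rng.choice([-1, 1], N)   # user 1 BPSK symbols
s2 = rng.choice([-1, 1], N)   # user 2 BPSK symbols
y = np.sqrt(p1) * s1 + np.sqrt(p2) * s2 + 0.1 * rng.normal(size=N)

# Fit 4 Gaussian clusters to the superimposed constellation; each cluster maps
# to a (s1, s2) pair via the sorted cluster means, so no pilots are needed.
gm = GaussianMixture(n_components=4, random_state=0).fit(y.reshape(-1, 1))
order = np.argsort(gm.means_.ravel())            # means approx -1.5, -0.5, 0.5, 1.5
pairs = [(-1, -1), (-1, 1), (1, -1), (1, 1)]     # symbol pairs in ascending mean order
rank = np.empty(4, dtype=int)
rank[order] = np.arange(4)
labels = gm.predict(y.reshape(-1, 1))
s1_hat = np.array([pairs[rank[l]][0] for l in labels])
print(f"user-1 SER: {np.mean(s1_hat != s1):.4f}")
```

The block length N drives the accuracy of the fitted mixture, which is precisely the accuracy-versus-block-length tradeoff the theoretical bound quantifies.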
Several information-theoretic studies on channels with output quantization have identified the capacity-achieving input distributions for different fading channels with 1-bit in-phase and quadrature (I/Q) output quantization. However, an exact characterization of the capacity-achieving input distribution for channels with multi-bit phase quantization has not been provided. In this paper, we consider four different channel models with multi-bit phase quantization at the output and identify the optimal input distribution for each. We first consider a complex Gaussian channel with $b$-bit phase-quantized output and prove that the capacity-achieving distribution is a rotated $2^b$-phase shift keying (PSK). The analysis is then extended to multiple fading scenarios. We show that the optimality of rotated $2^b$-PSK continues to hold for noncoherent fast-fading Rician channels with $b$-bit phase quantization when a line-of-sight (LoS) component is present. When channel state information (CSI) is available at the receiver, we identify $\frac{2\pi}{2^b}$-symmetry and constant amplitude as the necessary and sufficient conditions for the ergodic-capacity-achieving input distribution, conditions that a $2^b$-PSK satisfies. Finally, an optimal power control scheme is presented that achieves the ergodic capacity when CSI is also available at the transmitter.
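A Monte-Carlo sketch (illustrative SNR and b = 2; the rotation by pi/2^b that centers the constellation in the quantizer sectors is a stand-in for the paper's optimal rotation) showing how rotating a $2^b$-PSK affects the achievable rate through a $b$-bit phase quantizer:

```python
import numpy as np

rng = np.random.default_rng(4)
b = 2                               # quantizer resolution: 2^b phase sectors
M = 2 ** b

def phase_quantize(y, b):
    # b-bit phase quantizer: index of the sector containing angle(y).
    return np.floor(np.angle(y) / (2 * np.pi / 2**b)).astype(int) % 2**b

def mutual_info_psk(rotation, snr_db, n=200_000):
    # Monte-Carlo estimate of I(X; Q) for 2^b-PSK through AWGN + phase quantizer.
    snr = 10 ** (snr_db / 10)
    x_idx = rng.integers(0, M, n)
    x = np.exp(1j * (2 * np.pi * x_idx / M + rotation))
    noise = (rng.normal(size=n) + 1j * rng.normal(size=n)) / np.sqrt(2 * snr)
    q = phase_quantize(x + noise, b)
    joint = np.zeros((M, M))
    np.add.at(joint, (x_idx, q), 1.0)
    joint /= n
    px, pq = joint.sum(1, keepdims=True), joint.sum(0, keepdims=True)
    mask = joint > 0
    return np.sum(joint[mask] * np.log2(joint[mask] / (px @ pq)[mask]))

for rot in [0.0, np.pi / M]:        # pi/2^b rotation centers the PSK in the sectors
    print(f"rotation={rot:.3f} rad  I(X;Q) = {mutual_info_psk(rot, 10):.3f} bits")
```

With the constellation sitting on the sector boundaries (rotation 0), each symbol splits its mass between two quantizer outputs and roughly a bit of information is lost; rotating the points into the sector interiors recovers it.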
Lattice Boltzmann schemes rely on enlarging the size of the target problem in order to solve PDEs in a highly parallelizable, efficient, kinetic-like fashion, split into a collision phase and a stream phase. This structure, despite its well-known computational advantages, is not suitable for constructing a rigorous notion of consistency with respect to the target equations or for providing a precise notion of stability. To alleviate these shortcomings and introduce a rigorous framework, we demonstrate that any lattice Boltzmann scheme can be rewritten as a corresponding multi-step Finite Difference scheme on the conserved variables. This is achieved by devising a suitable formalism based on operators, commutative algebra, and polynomials. The notion of consistency of the corresponding Finite Difference scheme then allows one to invoke the Lax-Richtmyer theorem in the case of linear lattice Boltzmann schemes. Moreover, we show that the frequently used von Neumann-like stability analysis for lattice Boltzmann schemes corresponds exactly to the von Neumann stability analysis of their Finite Difference counterparts. More generally, the usual tools for the analysis of Finite Difference schemes are now readily available for the study of lattice Boltzmann schemes. Their relevance is verified by means of numerical illustrations.
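The rewriting can be checked numerically on the classic D1Q2 scheme for linear advection (parameters below are illustrative): eliminating the non-conserved moment by hand yields a two-step Finite Difference scheme on the conserved variable alone, and the sketch verifies that both evolutions coincide to machine precision.

```python
import numpy as np

# D1Q2 lattice Boltzmann for u_t + c u_x = 0, lattice velocity 1 (dx = dt = 1).
N, c, s, steps = 200, 0.5, 1.7, 100         # grid size, speed, relaxation, steps
x = np.arange(N)
u0 = np.exp(-0.01 * (x - N / 2) ** 2)       # smooth initial datum

# --- lattice Boltzmann run (f+ streams right, f- streams left) ---
u, v = u0.copy(), c * u0.copy()             # start at equilibrium v = c*u
U = [u0.copy()]
for _ in range(steps):
    vstar = (1 - s) * v + s * c * u             # BGK collision on the flux moment
    fp, fm = (u + vstar) / 2, (u - vstar) / 2   # back to distributions
    fp, fm = np.roll(fp, 1), np.roll(fm, -1)    # stream phase (periodic)
    u, v = fp + fm, fp - fm
    U.append(u.copy())

# --- equivalent two-step Finite Difference scheme on u alone ---
# u^{n+1}_j = (2-s)/2 (u^n_{j-1}+u^n_{j+1}) + s*c/2 (u^n_{j-1}-u^n_{j+1}) - (1-s) u^{n-1}_j
w_prev, w = U[0], U[1]
for n in range(1, steps):
    wl, wr = np.roll(w, 1), np.roll(w, -1)
    w_next = (2 - s) / 2 * (wl + wr) + s * c / 2 * (wl - wr) - (1 - s) * w_prev
    w_prev, w = w, w_next
    assert np.allclose(w, U[n + 1]), "schemes diverged"

print("LBM and its Finite Difference counterpart agree to machine precision")
```

The special cases are instructive: s = 1 collapses the two-step scheme to Lax-Friedrichs, while s = 2 gives the leapfrog scheme, so the familiar Finite Difference stability theory transfers directly.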
Massive MIMO systems are highly efficient but rely critically on accurate channel state information (CSI) at the base station in order to determine appropriate precoders. CSI acquisition requires sending pilot symbols, which induce a significant overhead. In this paper, a method is proposed whose objective is to determine an appropriate precoder from knowledge of the user's location only. Determining precoders in this way is known as location-based beamforming. It makes it possible to reduce or even eliminate the need for pilot symbols, depending on how the location is obtained. The proposed method learns a direct mapping from location to precoder in a supervised way. It involves a neural network with a specific structure based on random Fourier features, allowing it to learn functions containing high spatial frequencies. The method is assessed empirically and yields promising results on realistic synthetic channels. In contrast to previously proposed methods, it can handle both line-of-sight (LOS) and non-line-of-sight (NLOS) channels.
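A minimal sketch of the random-Fourier-features ingredient (the toy target function, feature count, and frequency scale are assumptions, and a linear read-out stands in for the paper's network): lifting the location through random cosines lets a simple model fit maps with high spatial frequencies that defeat a linear model on raw coordinates.

```python
import numpy as np

rng = np.random.default_rng(5)

def rff(x, W, b):
    # Random Fourier features: z(x) = sqrt(2/D) * cos(W x + b).
    return np.sqrt(2.0 / W.shape[0]) * np.cos(x @ W.T + b)

# Toy stand-in for the location -> precoder map: a rapidly oscillating
# function of 2-D position (real channels oscillate on the wavelength scale).
X = rng.uniform(0, 1, (2000, 2))
y = np.sin(40 * X[:, 0]) * np.cos(40 * X[:, 1])

D, sigma = 512, 40.0                        # feature count, frequency scale (assumed)
W = sigma * rng.normal(size=(D, 2))
b = rng.uniform(0, 2 * np.pi, D)

Phi = rff(X, W, b)
coef = np.linalg.lstsq(Phi, y, rcond=None)[0]   # linear read-out on the features
print("train RMSE with RFF:", np.sqrt(np.mean((Phi @ coef - y) ** 2)))

# Baseline: a linear model on raw coordinates cannot capture the oscillations.
Xb = np.hstack([X, np.ones((len(X), 1))])
coef_raw = np.linalg.lstsq(Xb, y, rcond=None)[0]
print("train RMSE raw coords:", np.sqrt(np.mean((Xb @ coef_raw - y) ** 2)))
```

The frequency scale sigma plays the role of a prior on how fast the precoder map varies with position, which is why it matters for channels whose phase turns over on the scale of a wavelength.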
One of the key steps in Neural Architecture Search (NAS) is to estimate the performance of candidate architectures. Existing methods either directly use the validation performance or learn a predictor to estimate it. However, these methods can be either computationally expensive or very inaccurate, which may severely affect search efficiency and performance. Moreover, since it is very difficult to annotate architectures with accurate performance on specific tasks, learning a promising performance predictor is often non-trivial due to the lack of labeled data. In this paper, we argue that it may not be necessary to estimate the absolute performance for NAS. Instead, we may only need to know whether an architecture is better than a baseline one. However, how to exploit this comparison information as the reward and how to make good use of the limited labeled data remain two great challenges. To this end, we propose a novel Contrastive Neural Architecture Search (CTNAS) method that performs architecture search by taking the comparison results between architectures as the reward. Specifically, we design and learn a Neural Architecture Comparator (NAC) to compute the probability of a candidate architecture being better than a baseline one. Moreover, we present a baseline updating scheme that improves the baseline iteratively in a curriculum learning manner. More critically, we theoretically show that learning NAC is equivalent to optimizing the ranking over architectures. Extensive experiments in three search spaces demonstrate the superiority of our CTNAS over existing methods.
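A toy sketch of the comparator idea (a logistic model on feature differences stands in for the paper's NAC, and the architecture encodings and accuracy function are made up): trained only on pairwise "is a better than b" labels, it still induces a ranking that correlates with the true performance.

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy stand-in: each architecture is a feature vector whose "true accuracy"
# is an unknown function of the features. The comparator never sees
# accuracies, only pairwise comparison labels, mirroring the NAC signal.
def true_acc(f):
    return f @ np.array([0.5, -0.2, 0.8]) + 0.1 * np.sin(3 * f[..., 0])

F = rng.normal(size=(500, 3))                      # candidate architecture encodings
i, j = rng.integers(0, 500, (2, 4000))             # sampled comparison pairs
labels = (true_acc(F[i]) > true_acc(F[j])).astype(float)

# Logistic comparator on feature differences: p(a > b) = sigmoid(w . (f_a - f_b)).
w = np.zeros(3)
d = F[i] - F[j]
for _ in range(500):                               # plain gradient ascent on the BCE
    p = 1.0 / (1.0 + np.exp(-(d @ w)))
    w += 0.1 * d.T @ (labels - p) / len(labels)

# The learned comparator induces scores, hence a ranking over architectures.
scores = F @ w
rho = np.corrcoef(np.argsort(np.argsort(scores)),
                  np.argsort(np.argsort(true_acc(F))))[0, 1]
print(f"rank correlation between comparator scores and true accuracy: {rho:.3f}")
```

This mirrors the theoretical point in the abstract: a comparator trained on binary comparison outcomes is implicitly optimizing a ranking, which is all the search needs.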
Recent CNN-based object detectors, whether one-stage methods like YOLO, SSD, and RetinaNet or two-stage detectors like Faster R-CNN, R-FCN, and FPN, typically fine-tune from ImageNet pre-trained models designed for image classification. There has been little work discussing backbone feature extractors specifically designed for object detection. More importantly, there are several differences between the tasks of image classification and object detection. 1. Recent object detectors like FPN and RetinaNet usually involve extra stages, compared with image classification networks, to handle objects at various scales. 2. Object detection not only needs to recognize the category of each object instance but also to spatially locate its position. A large downsampling factor brings a large valid receptive field, which is good for image classification but compromises object localization. Motivated by this gap between image classification and object detection, we propose DetNet in this paper, a novel backbone network specifically designed for object detection. DetNet includes extra stages compared with traditional backbone networks for image classification, while maintaining high spatial resolution in deeper layers. Without any bells and whistles, state-of-the-art results have been obtained for both object detection and instance segmentation on the MSCOCO benchmark with our DetNet (4.8G FLOPs) backbone. The code will be released to enable reproduction.
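A sketch of the kind of building block this implies, in PyTorch (channel counts and the single block variant are illustrative; the paper's DetNet has specific stage and block configurations): a dilated bottleneck that enlarges the receptive field while keeping stride 1, so spatial resolution is preserved in deep stages.

```python
import torch
import torch.nn as nn

class DilatedBottleneck(nn.Module):
    """DetNet-style bottleneck: dilation enlarges the receptive field while
    the stride stays 1, so deeper stages keep high spatial resolution."""
    def __init__(self, channels, dilation=2):
        super().__init__()
        mid = channels // 4
        self.body = nn.Sequential(
            nn.Conv2d(channels, mid, 1, bias=False),
            nn.BatchNorm2d(mid), nn.ReLU(inplace=True),
            nn.Conv2d(mid, mid, 3, padding=dilation, dilation=dilation, bias=False),
            nn.BatchNorm2d(mid), nn.ReLU(inplace=True),
            nn.Conv2d(mid, channels, 1, bias=False),
            nn.BatchNorm2d(channels),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(x + self.body(x))   # identity shortcut, no downsampling

x = torch.randn(1, 256, 32, 32)
y = DilatedBottleneck(256)(x)
print(y.shape)    # torch.Size([1, 256, 32, 32]): spatial size preserved
```

Stacking such blocks in the extra stages grows the receptive field like a downsampling stage would, but without the loss of localization accuracy that a larger downsampling factor incurs.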
Image segmentation is a fundamental problem in medical image analysis. In recent years, deep neural networks have achieved impressive performance on many medical image segmentation tasks via supervised learning on large amounts of manually annotated data. However, expert annotations on big medical datasets are tedious, expensive, or sometimes unavailable. Weakly supervised learning can reduce the annotation effort but still requires a certain amount of expertise. Recently, deep learning has shown the potential to produce predictions that are more accurate than the original erroneous labels. Inspired by this, we introduce a very weakly supervised learning method for cystic lesion detection and segmentation in lung CT images without any manual annotation. Our method works in a self-learning manner, where segmentations generated in previous steps (first by unsupervised segmentation, then by neural networks) are used as ground truth for the next level of network learning. Experiments on a cystic lung lesion dataset show that deep learning can perform better than the initial unsupervised annotation and progressively improves through self-learning.
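A toy sketch of the self-learning loop (the features, noise rate, and logistic model are all stand-ins for CT images and the segmentation network): a model trained on erroneous labels can out-predict them, and its predictions become the labels for the next round.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)

# Toy stand-in: per-pixel features, a true mask, and an initial "annotation"
# that is the truth corrupted by 20% random label noise (standing in for the
# erroneous unsupervised segmentation).
n = 20_000
feat = rng.normal(size=(n, 2))
truth = feat[:, 0] + 0.5 * feat[:, 1] > 0.0
labels = truth ^ (rng.uniform(size=n) < 0.2)           # erroneous initial labels
print(f"initial label accuracy: {np.mean(labels == truth):.3f}")

for it in range(3):
    clf = LogisticRegression().fit(feat, labels)       # train on current labels
    labels = clf.predict(feat).astype(bool)            # predictions -> next labels
    print(f"iter {it}: label accuracy = {np.mean(labels == truth):.3f}")
```

Because the model fits the dominant structure rather than the individual label flips, its predictions are cleaner than its training labels, which is the property the self-learning loop relies on.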