Cooperative driving systems, such as platooning, rely on communication and information exchange to create situational awareness for each agent. The design and performance of control components are therefore tightly coupled with the performance of the communication components. The information flow between vehicles can significantly affect the dynamics of a platoon, so both the performance and the stability of a platoon depend not only on the vehicles' controllers but also on the Information Flow Topology (IFT). The IFT can limit certain platoon properties, such as stability and scalability. Cellular Vehicle-to-Everything (C-V2X) has emerged as one of the main communication technologies supporting connected and automated vehicle applications. Due to packet loss, wireless channels cause random link interruptions and changes in the network topology. In this paper, we model the communication links between vehicles with a first-order Markov model to capture the time correlations prevalent in each link. These models enable performance evaluation through better approximation of the communication links during the system design stages. Our approach is to use experimental data to model the Inter-Packet Gap (IPG) using Markov chains and to derive transition probability matrices for consecutive IPG states. Training data is collected from high-fidelity simulations using models derived from empirical data for a variety of vehicle densities and communication rates. Utilizing the IPG models, we analyze the mean-square stability of a platoon of vehicles with a standard consensus protocol tuned for ideal communication and compare the degradation in performance across scenarios.
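As a minimal illustration of the modeling step described above, the sketch below fits a first-order Markov chain to a sequence of discretized IPG states by counting state transitions. The state binning and the short trace are assumptions for illustration, not the paper's empirical data.

```python
# Minimal sketch: estimate a first-order Markov transition matrix
# from a discretized IPG state sequence (synthetic, assumed data).
import numpy as np

def transition_matrix(states, n_states):
    """Estimate a row-stochastic transition matrix from a state sequence."""
    counts = np.zeros((n_states, n_states))
    for s, s_next in zip(states[:-1], states[1:]):
        counts[s, s_next] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, row_sums, out=np.zeros_like(counts),
                     where=row_sums > 0)

# Hypothetical trace: IPG samples expressed in multiples of the nominal
# transmission period, mapped to states 0, 1, 2.
ipg_samples = np.array([1, 1, 2, 1, 3, 1, 1, 2, 2, 1])
P = transition_matrix(ipg_samples - 1, n_states=3)
print(P)
```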
In image denoising problems, the growing volume of available data makes exhaustive visual inspection impossible, so automated, machine-learning-based methods must be deployed for this purpose. This is particularly the case in seismic signal processing, where engineers and geophysicists have to deal with millions of seismic time series. Deriving the sub-surface properties useful to the oil industry may take up to a year and is very costly in terms of computing and human resources. In particular, the data must go through several steps of noise attenuation, and each denoising step is ideally followed by a quality control (QC) stage performed by a human expert. To learn a quality control classifier in a supervised manner, labeled training data must be available, but collecting labels from human experts is extremely time-consuming. We therefore propose a novel active learning methodology to sequentially select the most relevant data, which are then submitted to a human expert for labeling. Beyond its application in geophysics, the technique we promote in this paper, based on estimates of the local error and its uncertainty, is generic. Its performance is supported by strong empirical evidence, as illustrated by the numerical experiments presented in this article, where it is compared to alternative active learning strategies on both synthetic and real seismic datasets.
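The selection rule above can be sketched as follows: rank unlabeled samples by an estimate of the local error plus its uncertainty and query the top ones. The error and uncertainty estimators below are illustrative stand-ins, not the paper's exact construction.

```python
# Minimal sketch of an error-plus-uncertainty acquisition rule
# (assumed estimators, not the paper's exact method).
import numpy as np

rng = np.random.default_rng(0)
n_unlabeled, batch_size = 1000, 10

# Assumed inputs: a probabilistic classifier's predictions on the pool.
proba = rng.uniform(size=n_unlabeled)            # P(class 1 | x)
error_estimate = np.minimum(proba, 1 - proba)    # local error proxy
uncertainty = rng.uniform(0, 0.1, n_unlabeled)   # e.g., bootstrap spread

scores = error_estimate + uncertainty
query_idx = np.argsort(scores)[-batch_size:]     # samples to label next
print(query_idx)
```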
Online advertising is a major source of income for many online companies. One common approach is to sell online advertisements via waterfall auctions, in which a publisher makes sequential price offers to ad networks. The publisher controls the order and prices of the waterfall and thereby aims to maximize its revenue. In this work, we propose a methodology to learn a waterfall strategy from historical data by judiciously searching the space of possible waterfalls and selecting the one yielding the highest revenue. The contribution of this work is twofold: first, we propose a novel method to estimate the valuation distribution of each user with respect to each ad network; second, we utilize the resulting valuation matrix to score candidate waterfalls as part of a procedure that iteratively searches local neighborhoods. Our framework guarantees that the waterfall revenue improves between iterations until converging to a local optimum. We demonstrate on real-world waterfalls that the proposed method improves total revenue compared to manual expert optimization. Finally, the code and the data are available here.
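The scoring-and-local-search loop can be illustrated as below: each user accepts the first offer whose price does not exceed their valuation for that network, and a hill climb over slot swaps keeps any change that raises revenue, so revenue improves monotonically until a local optimum. The swap neighborhood, valuation matrix, and price points are assumptions for illustration.

```python
# Minimal sketch of local search over waterfalls, assuming a valuation
# matrix V[user, network] from the first stage (illustrative values).
import itertools
import numpy as np

def revenue(waterfall, V):
    """Total revenue: each user takes the first affordable offer."""
    total = 0.0
    for vals in V:                      # one row of valuations per user
        for network, price in waterfall:
            if vals[network] >= price:  # the ad network buys at this price
                total += price
                break
    return total

def local_search(waterfall, V):
    """Hill-climb over slot swaps until no swap improves revenue."""
    best = revenue(waterfall, V)
    improved = True
    while improved:
        improved = False
        for i, j in itertools.combinations(range(len(waterfall)), 2):
            cand = waterfall.copy()
            cand[i], cand[j] = cand[j], cand[i]
            r = revenue(cand, V)
            if r > best:
                waterfall, best, improved = cand, r, True
    return waterfall, best

V = np.array([[0.8, 0.3], [0.2, 0.9], [0.5, 0.5]])   # assumed valuations
wf = [(0, 0.7), (1, 0.6), (0, 0.2)]                  # (network, price) slots
print(local_search(wf, V))
```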
One of the most critical aspects of enabling next-generation wireless technologies is the development of an accurate and consistent channel model that can be validated effectively against real-world measurements. Accordingly, considerable research has recently been conducted on modeling propagation channels in which the wireless propagation environment is modified through the inclusion of reconfigurable intelligent surfaces (RISs). This study presents a vision of channel modeling strategies for RIS-empowered communication systems in light of the state-of-the-art channel and propagation modeling efforts in the literature. Moreover, we draw attention to open-source and standard-compliant physical channel modeling efforts to provide comprehensive insights into the practical use cases of RISs in future wireless networks.
In this letter, we analyze the performance of covert communications under faster-than-Nyquist (FTN) signaling in the Rayleigh block-fading channel. Both Bayesian criterion-based and Kullback-Leibler (KL) divergence-based covertness constraints are considered. In particular, for the KL divergence-based constraint, we prove that both the maximum transmit power and the covert rate under FTN signaling are higher than those under Nyquist signaling. Numerical results agree with our analysis and validate the advantages of FTN signaling for realizing covert data transmission.
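For context, a common form of the KL divergence-based covertness constraint in this line of work is sketched below; the notation ($P_0$, $P_1$ for the warden's observation distributions without and with a transmission, $\epsilon$ for the covertness level) is an assumption here rather than taken verbatim from the letter:
\[
D(P_1 \,\|\, P_0) \le 2\epsilon^2 ,
\]
which, by Pinsker's inequality, guarantees that the warden's false-alarm and missed-detection probabilities satisfy $P_{\mathrm{FA}} + P_{\mathrm{MD}} \ge 1 - \epsilon$.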
This paper reviews and summarizes how the close integration of computing and network communication has driven the rapid development of information technology, and discusses the important role a series of technical achievements has played in the movement and application of information. In connection with the recently prominent concept of the metaverse, this paper studies the relationship between the real world, information space, and information systems, and puts forward an integrated framework of the real world and information systems. Based on recent research and practice, the basic mathematical theories of information models, properties, and measurement are comprehensively revised and supplemented. On this basis, taking eleven kinds of information measurement as a guide, this paper puts forward eleven corresponding measurement effects of information systems and their distribution across system links, and then analyzes eight typical dynamic configurations of information systems. Together these constitute a basic theoretical system of information-system dynamics with universal significance, intended to support the analysis, design, R&D, and evaluation of information systems.
Connected and automated vehicles have shown great potential for improving traffic mobility and reducing emissions, especially at unsignalized intersections. Previous research has shown that the vehicle passing order is the key factor influencing intersection traffic mobility. In this paper, we propose a graph-based cooperation method to formalize the conflict-free scheduling problem at an unsignalized intersection. Based on graphical analysis, the trajectory conflict relationships among vehicles are modeled as a conflict directed graph and a coexisting undirected graph. Two graph-based methods are then proposed to find the vehicle passing order. The first is an improved depth-first spanning tree algorithm, which finds a locally optimal passing order vehicle by vehicle. The other is a novel minimum clique cover algorithm, which identifies the globally optimal solution. Finally, a distributed control framework and communication topology are presented to realize the conflict-free cooperation of vehicles. Extensive numerical simulations are conducted for various numbers of vehicles and traffic volumes, and the results demonstrate the effectiveness of the proposed algorithms.
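The clique-cover idea can be illustrated via a textbook equivalence: a minimum clique cover of the coexisting (conflict-free) undirected graph is a proper coloring of its complement, so each clique is a group of vehicles that can pass simultaneously. The sketch below uses networkx's greedy coloring, which only approximates the optimum and is not the paper's exact algorithm; the example graph is assumed.

```python
# Minimal sketch: clique cover of the coexisting graph via greedy
# coloring of its complement (assumed example, approximate solver).
import networkx as nx

coexist = nx.Graph()
coexist.add_nodes_from(range(5))
# Edge (u, v) means vehicles u and v have no trajectory conflict.
coexist.add_edges_from([(0, 1), (0, 2), (1, 2), (3, 4)])

coloring = nx.coloring.greedy_color(nx.complement(coexist),
                                    strategy="largest_first")
groups = {}
for vehicle, group in coloring.items():
    groups.setdefault(group, []).append(vehicle)
print(groups)  # each group is a clique of mutually compatible vehicles
```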
High-rate product codes (PCs) and staircase codes (SCs) are ubiquitous in high-speed optical communication, achieving near-capacity performance on the binary symmetric channel. Their success is largely due to iterative decoding algorithms of very low complexity. In this paper, we extend the density evolution (DE) analysis of PCs and SCs to a channel with ternary output and ternary message passing, where the third symbol marks an erasure. Using DE, we investigate the performance of a standard error-and-erasure decoder and of a simplified variant. The proposed analysis can be used to find component code configurations and quantizer levels for the channel output. We also show how using even-weight BCH subcodes as component codes can improve decoding performance at high rates. The DE results are verified by Monte Carlo simulations, which show that ternary decoding yields additional coding gains of up to 0.6 dB at only a small increase in complexity compared to traditional binary message passing.
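To make the ternary-output channel concrete, the sketch below quantizes a BPSK-over-AWGN observation into a ternary symbol, marking outputs inside a guard band as erasures. The threshold and noise level are illustrative assumptions; in the paper it is the DE analysis that actually selects the quantizer levels.

```python
# Minimal sketch of a ternary channel quantizer: outputs with |y| < T
# become erasures (assumed parameters, for illustration only).
import numpy as np

rng = np.random.default_rng(0)
n, sigma, T = 100_000, 0.6, 0.3

bits = rng.integers(0, 2, n)
y = (1 - 2 * bits) + sigma * rng.standard_normal(n)  # BPSK: 0 -> +1, 1 -> -1

hard = np.where(y > 0, 0, 1)         # binary decision
erased = np.abs(y) < T               # third symbol: erasure
error_rate = np.mean((hard != bits) & ~erased)
erasure_rate = np.mean(erased)
print(f"error {error_rate:.4f}, erasure {erasure_rate:.4f}")
```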
We consider the problem of in-order packet transmission over a cascade of packet-erasure links with acknowledgment (ACK) signals, interconnected by relays. We first treat the case of transmitting a single packet, in which ACKs are unnecessary, over links with independent identically distributed erasures. For this case, we derive tight upper and lower bounds on the arrive-failure probability within an allowed end-to-end communication delay over a given number of links. When the number of links is commensurate with the allowed delay, we determine the maximal ratio between the two -- coined the information velocity -- for which the arrive-failure probability decays to zero; we further derive bounds on the arrive-failure probability when the ratio is below the information velocity, determine the exponential decay rate of the arrive-failure probability, and extend the treatment to links with different erasure probabilities. We then extend all these results to a stream of packets with independent geometrically distributed interarrival times, and prove that the information velocity and the exponential decay rate remain the same for any stationary ergodic arrival process and for deterministic interarrival times. We demonstrate the significance of the derived fundamental limits -- the information velocity and the exponential decay rate of the arrive-failure probability -- by comparing them to simulation results.
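The single-packet setting can be simulated directly: the packet hops across L iid erasure links, retransmitting on each erasure, so the time to cross each link is geometric, and an arrive failure occurs when the total delay exceeds the deadline d. The parameter values below are assumptions for illustration.

```python
# Monte Carlo sketch: single packet over a cascade of L iid erasure
# links with retransmissions, estimating the arrive-failure probability
# for an end-to-end deadline d (assumed parameters).
import numpy as np

rng = np.random.default_rng(0)
L, p, d, trials = 10, 0.2, 25, 100_000

# Slots needed to cross one link = geometric with success prob 1 - p.
hops = rng.geometric(1 - p, size=(trials, L))
arrival_time = hops.sum(axis=1)
print("arrive-failure prob ~", np.mean(arrival_time > d))
```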
Efficient contact tracing and isolation is an effective strategy for controlling epidemics. It was used effectively during the Ebola epidemic and has been successfully implemented in several parts of the world during the ongoing COVID-19 pandemic. An important consideration in contact tracing is the budget on the number of individuals asked to quarantine, which is limited for socioeconomic reasons. In this paper, we present a Markov Decision Process (MDP) framework to formulate the problem of using contact tracing to reduce the size of an outbreak while asking only a limited number of people to quarantine. We formulate each step of the MDP as a combinatorial problem, MinExposed, which we show is NP-hard; as a result, we develop an LP-based approximation algorithm. Though this algorithm directly solves MinExposed, it is often impractical in the real world due to information constraints. We therefore develop a greedy approach based on insights from the analysis of the LP-based algorithm, which we show is more interpretable. A key feature of the greedy algorithm is that it does not need complete information about the underlying social contact network, which makes it implementable in practice, an important consideration. Finally, we carry out experiments on simulations of the MDP run on real-world networks and show how the algorithms can help bend the epidemic curve while limiting the number of isolated individuals. Our experimental results demonstrate that the greedy algorithm and its variants are especially effective, robust, and practical in a variety of realistic scenarios, such as when the contact graph and specific transmission probabilities are not known. All code can be found in our GitHub repository: https://github.com/gzli929/ContactTracing.
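A budgeted greedy quarantine rule in the spirit of the heuristic above can be sketched as follows: with a budget of k quarantines, repeatedly isolate the contact of known-infected nodes that covers the most not-yet-covered exposure edges. The graph and infected set are assumed; this is not the paper's exact MinExposed algorithm.

```python
# Minimal sketch of a budgeted greedy quarantine rule on an assumed
# contact network (illustrative, not the paper's exact algorithm).
import networkx as nx

G = nx.erdos_renyi_graph(50, 0.1, seed=1)   # assumed contact network
infected = {0, 1, 2}
budget = 5

candidates = set().union(*(G.neighbors(i) for i in infected)) - infected
uncovered = {(i, v) for i in infected
             for v in G.neighbors(i) if v not in infected}
quarantined = set()
for _ in range(budget):
    if not uncovered:
        break
    # Pick the contact touching the most still-uncovered exposure edges.
    best = max(candidates - quarantined,
               key=lambda v: sum(1 for e in uncovered if e[1] == v))
    quarantined.add(best)
    uncovered = {e for e in uncovered if e[1] != best}
print(quarantined)
```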
The Protein Contact Network (PCN) is a powerful tool for analysing the structure and function of proteins. In particular, PCNs have been used to disclose the molecular features of allosteric regulation through PCN clustering. Such analysis is relevant in many applications, such as the recent study of the SARS-CoV-2 Spike protein. Despite its relevance, methods for PCN analysis are spread across a set of different libraries and tools, so a single tool incorporating all these functions may help researchers. We present PCN-Miner, a software tool implemented in the Python programming language that can import proteins in the Protein Data Bank format and generate the corresponding protein contact network. It then offers a set of algorithms for the analysis of PCNs covering a large set of applications, from clustering to embedding and subsequent analysis. The software is available at \url{https://github.com/hguzzi/ProteinContactNetworks}.
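The PCN construction step can be sketched as follows: nodes are residues and edges connect residue pairs whose C-alpha distance falls in a typical PCN window (commonly 4-8 Angstrom). Real input would come from a PDB parser; the coordinates below are synthetic, and the cutoff window is a common convention assumed here rather than PCN-Miner's exact setting.

```python
# Minimal sketch: build a protein contact network from (synthetic)
# C-alpha coordinates using a 4-8 Angstrom distance window.
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
coords = rng.uniform(0, 30, size=(60, 3))   # assumed C-alpha positions

pcn = nx.Graph()
pcn.add_nodes_from(range(len(coords)))
dists = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
for i in range(len(coords)):
    for j in range(i + 1, len(coords)):
        if 4.0 <= dists[i, j] <= 8.0:       # common PCN cutoff window
            pcn.add_edge(i, j)
print(pcn)
```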