We investigate the problem of message transmission over time-varying single-user multiple-input multiple-output (MIMO) Rayleigh fading channels with an average power constraint and with complete channel state information available at the receiver (CSIR). To describe the channel variations over time, we consider a first-order Gauss-Markov model. We solve the problem completely by giving a closed-form, single-letter characterization of the channel capacity, together with a rigorous proof.
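For concreteness, a common form of the first-order Gauss-Markov fading model (the notation here is illustrative and not necessarily that of the paper) is
\[
\mathbf{H}_k \;=\; \alpha\,\mathbf{H}_{k-1} \;+\; \sqrt{1-\alpha^2}\,\mathbf{W}_k, \qquad 0 \le \alpha < 1,
\]
where the entries of the innovation matrix $\mathbf{W}_k$ are i.i.d. circularly symmetric complex Gaussian $\mathcal{CN}(0,1)$ and independent of $\mathbf{H}_{k-1}$, so each channel entry keeps a Rayleigh-faded marginal distribution while $\alpha$ controls the temporal correlation.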
The approximate uniform sampling of graph realizations with a given degree sequence is a routine task in many projects across the social sciences, computer science, and engineering. One common approach uses Markov chains. The best currently available result on the well-studied switch Markov chain is that it is rapidly mixing on P-stable degree sequences (see DOI:10.1016/j.ejc.2021.103421). The switch Markov chain preserves the degree sequence of every realization it visits. However, there are cases where degree intervals are specified rather than a single degree sequence. (A natural scenario where this problem arises is hypothesis testing on social networks that are only partially observed.) Rechner, Strowick, and M\"uller-Hannemann introduced in 2018 the notion of the degree interval Markov chain, which uses three (separately well-studied) local operations (switch, hinge-flip, and toggle) and operates on realizations whose degree sequences stay within a small coordinate-wise distance of one another. Recently, Amanatidis and Kleer published a beautiful paper (arXiv:2110.09068) showing that the degree interval Markov chain is rapidly mixing if the sequences come from a system of very thin intervals centered not far from a regular degree sequence. In this paper we substantially extend their result, showing that the degree interval Markov chain is rapidly mixing if the intervals are centered at P-stable degree sequences.
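As an illustration of the local operations involved, the following minimal Python sketch implements a single switch move on a simple graph represented as a set of edges; the hinge-flip and toggle moves, which change degrees by one, are analogous. The function names are ours, not taken from the cited papers.

```python
import random

def switch_move(edges, rng=random):
    """Attempt one switch move: replace edges {a,b},{c,d} by {a,d},{c,b}.
    Every vertex degree is preserved; the move is rejected (graph returned
    unchanged) if it would create a loop or a multi-edge."""
    e1, e2 = rng.sample(list(edges), 2)
    a, b = e1
    c, d = e2
    if len({a, b, c, d}) < 4:
        return edges                       # shared endpoint: reject
    f1, f2 = frozenset((a, d)), frozenset((c, b))
    if f1 in edges or f2 in edges:
        return edges                       # would duplicate an edge: reject
    return (edges - {e1, e2}) | {f1, f2}

# Toy usage on a 6-cycle; its degree sequence (2, 2, 2, 2, 2, 2) is preserved.
E = {frozenset((i, (i + 1) % 6)) for i in range(6)}
for _ in range(10):
    E = switch_move(E)
print(sorted(tuple(sorted(e)) for e in E))
```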
In this paper we present a proof system that operates on graphs instead of formulas. Starting from the well-known relationship between formulas and cographs, we drop the cograph conditions and consider arbitrary (undirected) graphs. This means that we lose the tree structure of the formulas corresponding to the cographs, and we can no longer use standard proof-theoretical methods that depend on that tree structure. To overcome this difficulty, we use the modular decomposition of graphs and techniques from deep inference, where inference rules do not rely on the main connective of a formula. For our proof system we show the admissibility of cut and a generalization of the splitting property. Finally, we show that our system is a conservative extension of multiplicative linear logic with mix, and we argue that our graphs form a notion of generalized connective.
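To recall the formula-to-cograph correspondence that this starts from (in one common convention, which may differ from the paper's in notation and orientation), the sketch below maps a formula tree built from atoms, `par` (disjoint union) and `tensor` (complete join) to its graph.

```python
import itertools

def graph_of(formula):
    """Return (vertices, edges) of the cograph of a formula.
    Formulas are atoms (strings) or tuples ('par', l, r) / ('tensor', l, r).
    In this convention 'par' is disjoint union and 'tensor' is the join,
    i.e. every vertex of one side is connected to every vertex of the other."""
    if isinstance(formula, str):
        return {formula}, set()
    op, left, right = formula
    vl, el = graph_of(left)
    vr, er = graph_of(right)
    edges = el | er
    if op == 'tensor':
        edges |= {frozenset((u, v)) for u, v in itertools.product(vl, vr)}
    return vl | vr, edges

# (a tensor b) par (c tensor d) -> two disjoint edges (a perfect matching),
# which is a cograph; dropping the cograph condition admits e.g. the path P4.
V, E = graph_of(('par', ('tensor', 'a', 'b'), ('tensor', 'c', 'd')))
print(sorted(V), sorted(tuple(sorted(e)) for e in E))
```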
Non-probability sampling is prevalent in survey sampling, but ignoring its selection bias leads to erroneous inferences. We offer a unified nonparametric calibration method that estimates the sampling weights of a non-probability sample by calibrating functions of auxiliary variables in a reproducing kernel Hilbert space. The consistency and the limiting distribution of the proposed estimator are established, and the corresponding variance estimator is also investigated. Compared with existing work, the proposed method is more robust since no parametric assumption is made on the selection mechanism of the non-probability sample. Numerical results demonstrate that the proposed method outperforms its competitors, especially when the model is misspecified. The proposed method is applied to analyze the average total cholesterol of Korean citizens based on a non-probability sample from the National Health Insurance Sharing Service and a reference probability sample from the Korea National Health and Nutrition Examination Survey.
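As a rough, purely illustrative sketch of calibration in a reproducing kernel Hilbert space (a generic kernel-mean-matching style computation of our own, not the paper's estimator), one can choose weights for the non-probability sample so that the weighted kernel mean of its auxiliary variables matches that of the reference probability sample:

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    """Gaussian (RBF) kernel matrix between the rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def calibration_weights(X_np, X_ref, d_ref, gamma=1.0, ridge=1e-6):
    """Weights w for the non-probability sample X_np such that
    sum_i w_i k(X_np[i], .) approximates sum_j d_ref[j] k(X_ref[j], .),
    where d_ref are the design weights of the reference sample.
    Obtained by minimizing the RKHS distance with a ridge penalty."""
    K_nn = rbf_kernel(X_np, X_np, gamma)
    K_nr = rbf_kernel(X_np, X_ref, gamma)
    n = len(X_np)
    return np.linalg.solve(K_nn + ridge * np.eye(n), K_nr @ d_ref)

rng = np.random.default_rng(0)
X_np = rng.normal(0.5, 1.0, size=(200, 2))    # selection-biased sample
X_ref = rng.normal(0.0, 1.0, size=(100, 2))   # reference probability sample
w = calibration_weights(X_np, X_ref, np.full(100, 1.0))
print(w[:5])
```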
We study a heterogeneous Rayleigh fading wireless sensor network (WSN) in which densely deployed sensor nodes monitor an environment and transmit their sensed information to base stations (BSs), using access points (APs) as relays to facilitate the data transfer. We consider both large-scale and small-scale propagation effects in our system model and formulate the node deployment problem as an optimization problem aimed at minimizing the wireless communication network's power consumption. By imposing a desired outage probability constraint on all communication channels, we derive the necessary conditions for an optimal deployment that not only minimizes the power consumption but also guarantees that all wireless links have an outage probability below the given threshold. In addition, we study the necessary conditions for an optimal deployment given ergodic capacity constraints. We compare our node deployment algorithms with similar algorithms in the literature and demonstrate their efficacy and superiority.
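For reference, the outage constraint takes the standard Rayleigh-fading form (illustrative notation): with exponentially distributed received SNR of mean $\bar{\gamma}$ and threshold $\gamma_{\mathrm{th}}$,
\[
P_{\mathrm{out}} \;=\; \Pr\{\gamma < \gamma_{\mathrm{th}}\} \;=\; 1 - \exp\!\left(-\frac{\gamma_{\mathrm{th}}}{\bar{\gamma}}\right) \;\le\; \epsilon
\quad\Longleftrightarrow\quad
\bar{\gamma} \;\ge\; \frac{\gamma_{\mathrm{th}}}{\ln\!\big(1/(1-\epsilon)\big)},
\]
so an outage constraint translates into a minimum average SNR, and hence a minimum transmit power, for each link as a function of its distance-dependent path loss.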
Continuous-time measurements are instrumental for a multitude of tasks in quantum engineering and quantum control, including the estimation of dynamical parameters of open quantum systems monitored through the environment. However, such measurements do not extract the maximum amount of information available in the output state, so finding alternative optimal measurement strategies is a major open problem. In this paper we solve this problem in the setting of discrete-time input-output quantum Markov chains. We present an efficient algorithm for optimal estimation of one-dimensional dynamical parameters which consists of an iterative procedure for updating a `measurement filter' operator and determining successive measurement bases for the output units. A key ingredient of the scheme is the use of a coherent quantum absorber as a way to post-process the output after the interaction with the system. This is designed adaptively such that the joint system and absorber stationary state is pure at a reference parameter value. The scheme offers an exciting prospect for optimal continuous-time adaptive measurements, but more work is needed to find realistic practical implementations.
In the upcoming 6G era, existing terrestrial networks are evolving toward space-air-ground integrated networks (SAGIN), providing ultra-high data rates, seamless network coverage, and ubiquitous intelligence for applications and services. However, conventional communications in SAGIN still face data confidentiality issues. Fortunately, Quantum Key Distribution (QKD) over SAGIN can provide information-theoretic security for communications in SAGIN via quantum cryptography. Therefore, in this paper, we propose a quantum-secured SAGIN in which quantum mechanics is used to protect data channels between space, air, and ground nodes, achieving provably secure communications. Moreover, we propose a universal QKD service provisioning framework that minimizes the cost of QKD services under the uncertainty and dynamics of communications in the quantum-secured SAGIN. In this framework, fiber-based QKD services are deployed in passive optical networks, taking advantage of their low loss and high stability, while satellite- and UAV-based QKD services, with their wide coverage and flexibility, are provisioned as a supplement during the real-time data transmission phase. Finally, to examine the effectiveness of the proposed concept and framework, a case study of quantum-secured SAGIN in the Metaverse is conducted, in which the uncertain and dynamic factors of secure communications in Metaverse applications are effectively handled by the proposed framework.
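As a toy illustration of the kind of provisioning problem such a framework addresses (a two-stage stochastic program of our own construction with made-up costs and demand scenarios, not the paper's exact model), fiber-based QKD capacity is reserved in a deployment stage and satellite/UAV-based capacity is purchased per demand scenario in the real-time stage:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical data: cost per unit of reserved fiber-QKD key rate, cost per
# unit of on-demand satellite/UAV key rate, and demand scenarios with
# probabilities (all numbers are invented for illustration).
c_fiber, c_ondemand = 1.0, 3.0
demands = np.array([10.0, 25.0, 40.0])
probs = np.array([0.5, 0.3, 0.2])

# Variables: x (reserved fiber capacity) and y_s (on-demand capacity per scenario).
# Minimize c_fiber * x + sum_s probs[s] * c_ondemand * y_s
# subject to x + y_s >= demands[s] for every scenario s, and x, y_s >= 0.
S = len(demands)
c = np.concatenate(([c_fiber], probs * c_ondemand))
A_ub = np.hstack((-np.ones((S, 1)), -np.eye(S)))   # -(x + y_s) <= -d_s
b_ub = -demands
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (S + 1))
print("reserved fiber capacity:", res.x[0], "on-demand per scenario:", res.x[1:])
```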
In this work, we introduce a novel approach to formulating an artificial viscosity for shock capturing in nonlinear hyperbolic systems, exploiting the property that solutions of hyperbolic conservation laws are not reversible in time in the vicinity of shocks. The proposed approach does not require any additional governing equations or a priori knowledge of the hyperbolic system in question, is independent of the mesh and approximation order, and requires only one tunable parameter. The primary novelty is that the resulting artificial viscosity is computed separately for each component of the conservation law, which is advantageous for systems in which some components exhibit discontinuities while others do not. The efficacy of the method is shown in numerical experiments on multi-dimensional hyperbolic conservation laws such as nonlinear transport, the Euler equations, and ideal magnetohydrodynamics, using a high-order discontinuous spectral element method on unstructured grids.
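A minimal 1D sketch of the reversibility idea (our own toy construction for the inviscid Burgers equation, not the paper's formulation): advance one step forward and one step backward with the same scheme, and use the pointwise discrepancy as a shock indicator that could scale an artificial viscosity.

```python
import numpy as np

def rusanov_step(u, dt, dx):
    """One forward-Euler step for u_t + (u^2/2)_x = 0 with the Rusanov flux,
    periodic boundary conditions."""
    f = 0.5 * u ** 2
    up = np.roll(u, -1)                       # u_{i+1}
    a = np.maximum(np.abs(u), np.abs(up))     # local wave speed
    flux = 0.5 * (f + 0.5 * up ** 2) - 0.5 * a * (up - u)   # F_{i+1/2}
    return u - dt / dx * (flux - np.roll(flux, 1))

# Riemann-type initial data that steepens into a shock.
N = 200
x = np.linspace(0.0, 1.0, N, endpoint=False)
dx = 1.0 / N
u = np.where((x > 0.25) & (x < 0.75), 1.0, 0.0)
dt = 0.4 * dx

for _ in range(100):
    u = rusanov_step(u, dt, dx)

# Reversibility indicator: step forward, then step backward, and compare.
u_fwd = rusanov_step(u, dt, dx)
u_rev = rusanov_step(u_fwd, -dt, dx)
indicator = np.abs(u_rev - u)                 # largest near the shock
print("max discrepancy:", indicator.max(), "at x =", x[indicator.argmax()])
```

In smooth regions the forward and backward steps nearly cancel, while near a discontinuity the scheme's dissipation does not, so the discrepancy localizes the shock without any knowledge of the underlying system.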
We consider M-estimation problems in which the target value is determined as a minimizer of an expected functional of a Lévy process. Given discrete observations of the Lévy process, we can produce a "quasi-path" by shuffling the increments of the process; we call the resulting object a quasi-process. Under a suitable sampling scheme, a quasi-process converges weakly to the true process thanks to the stationarity and independence of the increments. Using this resampling technique, we can estimate objective functionals in a manner similar to Monte Carlo simulation, and the resulting estimate serves as a contrast function. The M-estimator based on these quasi-processes is shown to be consistent and asymptotically normal.
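The construction of a quasi-path is easy to sketch (illustrative code; the toy Lévy process, a Brownian motion with compound Poisson jumps, and all names are ours): take the observed increments over a grid, permute them uniformly at random, and take cumulative sums.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy Lévy path observed on a grid: Brownian motion plus compound Poisson jumps.
n, dt = 1000, 0.01
increments = (rng.normal(0.0, np.sqrt(dt), n)
              + rng.poisson(0.5 * dt, n) * rng.normal(0.0, 1.0, n))

def quasi_path(increments, rng):
    """Shuffle the observed increments and re-accumulate them.  Because Lévy
    increments are i.i.d. on an equispaced grid, the shuffled path has the
    same distribution (at the grid points) as the original one."""
    return np.concatenate(([0.0], np.cumsum(rng.permutation(increments))))

# Resampled copies can stand in for Monte Carlo draws of the process when
# estimating an expected functional, e.g. E[max_t X_t] over the horizon.
estimates = [quasi_path(increments, rng).max() for _ in range(200)]
print("quasi-process estimate of E[max X]:", np.mean(estimates))
```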
Multihop relaying is a promising technique to mitigate channel impairments in optical wireless communications (OWC). In this paper, multiple fixed-gain amplify-and-forward (AF) relays are employed to enhance the OWC performance under the combined effect of atmospheric turbulence, pointing errors, and fog. We consider a long-range OWC link by modeling the atmospheric turbulence by the Fisher-Snedecor ${\cal{F}}$ distribution, pointing errors by the generalized non-zero boresight model, and random path loss due to fog. We also consider a short-range OWC system by ignoring the impact of atmospheric turbulence. We derive novel upper bounds on the probability density function (PDF) and cumulative distribution function (CDF) of the end-to-end signal-to-noise ratio (SNR) for both short- and long-range multihop OWC systems by developing exact statistical results for a single-hop OWC system under the combined effect of ${\cal{F}}$-turbulence channels, non-zero boresight pointing errors, and fog-induced fading. Based on these expressions, we present analytical expressions for the outage probability (OP) and average bit-error-rate (ABER) performance of the considered OWC systems in terms of single-variate Fox's H and Meijer's G functions. Moreover, asymptotic expressions for the outage probability in the high-SNR regime are developed using simpler gamma functions to provide insight into the effect of channel and system parameters. The derived analytical expressions are validated through Monte-Carlo simulations, and the scaling of the OWC performance with the number of relay nodes is demonstrated in comparison with single-hop transmission.
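For a quick sanity check of such expressions one typically runs a Monte Carlo simulation. The sketch below is a simplification of our own: it approximates the end-to-end SNR of the multihop link by the minimum per-hop SNR and models only ${\cal F}$-distributed turbulence with arbitrary toy parameters, ignoring pointing errors and fog.

```python
import numpy as np

rng = np.random.default_rng(1)

def outage_prob_min_bound(n_hops, avg_snr_db, thr_db, dfn=4.0, dfd=6.0, n_mc=200_000):
    """Monte Carlo outage probability of an n-hop link, approximating the
    end-to-end SNR by the minimum over hops.  The per-hop channel power gain
    is drawn from a Fisher-Snedecor F distribution (turbulence only)."""
    avg_snr = 10 ** (avg_snr_db / 10)
    thr = 10 ** (thr_db / 10)
    gains = rng.f(dfn, dfd, size=(n_mc, n_hops))   # F-distributed power gains
    gains /= dfd / (dfd - 2)                        # normalize the mean gain to 1
    snr_e2e = avg_snr * gains.min(axis=1)
    return (snr_e2e < thr).mean()

for hops in (1, 2, 3):
    print(hops, "hop(s):", outage_prob_min_bound(hops, avg_snr_db=15, thr_db=5))
```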
Traditional methods for link prediction can be categorized into three main types: graph-structure-feature-based, latent-feature-based, and explicit-feature-based. Graph structure feature methods leverage handcrafted node proximity scores, e.g., common neighbors, to estimate the likelihood of links. Latent feature methods rely on factorizing a network's matrix representation to learn an embedding for each node. Explicit feature methods train a machine learning model on two nodes' explicit attributes. Each of the three types of methods has its unique merits. In this paper, we propose SEAL (learning from Subgraphs, Embeddings, and Attributes for Link prediction), a new framework for link prediction that combines the power of all three types within a single graph neural network (GNN). A GNN is a type of neural network that directly accepts graphs as input and outputs their labels. In SEAL, the input to the GNN is a local subgraph around each target link. We prove theoretically that our local subgraphs also preserve a great deal of high-order graph structure features related to link existence. Another key feature is that our GNN can naturally incorporate latent features and explicit features. This is achieved by concatenating node embeddings (latent features) and node attributes (explicit features) in the node information matrix for each subgraph, thus combining the three types of features to enhance GNN learning. Through extensive experiments, SEAL shows unprecedentedly strong performance against a wide range of baseline methods, including various link prediction heuristics and network embedding methods.
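A minimal sketch of the input construction described above (illustrative only: it uses simple hop-distance-to-target labels rather than SEAL's exact structural labeling, and random placeholders for embeddings and attributes): extract the enclosing subgraph around a candidate link and build its node information matrix by concatenating structural labels, latent embeddings, and explicit attributes.

```python
import networkx as nx
import numpy as np

def node_information_matrix(G, u, v, embeddings, attributes, num_hops=1):
    """Extract the num_hops-hop enclosing subgraph around candidate link (u, v)
    and build a per-node feature matrix [distance labels | embedding | attributes].
    The distance-based labels stand in for SEAL's structural node labeling."""
    nodes = set(nx.single_source_shortest_path_length(G, u, cutoff=num_hops))
    nodes |= set(nx.single_source_shortest_path_length(G, v, cutoff=num_hops))
    sub = G.subgraph(nodes)
    du = nx.single_source_shortest_path_length(sub, u)
    dv = nx.single_source_shortest_path_length(sub, v)
    rows = []
    for n in sub.nodes():
        struct = [du.get(n, num_hops + 1), dv.get(n, num_hops + 1)]
        rows.append(np.concatenate((struct, embeddings[n], attributes[n])))
    return sub, np.array(rows)

# Toy usage: a random graph with random 8-d embeddings and 3-d attributes.
G = nx.erdos_renyi_graph(30, 0.15, seed=0)
rng = np.random.default_rng(0)
emb = {n: rng.normal(size=8) for n in G.nodes()}
attr = {n: rng.normal(size=3) for n in G.nodes()}
sub, X = node_information_matrix(G, 0, 1, emb, attr)
print(sub.number_of_nodes(), "nodes; feature matrix shape:", X.shape)
```

The resulting per-subgraph matrix is what a GNN would consume alongside the subgraph's adjacency structure.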