Covert communication over an additive white Gaussian noise (AWGN) channel with finite blocklength is investigated in this paper. We focus on the covertness criterion, which has not previously been studied in the finite-blocklength setting. As an accurate metric of discrimination, the total variation distance for a given finite blocklength $n$ and signal-to-noise ratio (SNR) is obtained. We give both its analytic solution and expansions that can be easily evaluated. It is shown that the Kullback-Leibler (KL) divergence, which is frequently adopted as the adversary's discrimination metric in the asymptotic regime, is less convincing than the total variation distance in the finite-blocklength regime. Moreover, the convergence rate of the total variation distance for different SNRs is analyzed as the blocklength tends to infinity. These results are helpful for understanding the behavior of the total variation distance and for practical covert communication.
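To make the metric concrete, here is a minimal numerical sketch under the standard assumption that the adversary observes $n$ i.i.d. Gaussian channel outputs with variance $1$ under $H_0$ (noise only) and $1+\mathrm{snr}$ under $H_1$ (Gaussian signaling plus noise); the function name is illustrative. Because the likelihood ratio depends on the observation only through its squared norm, the total variation distance reduces to a difference of two chi-square CDFs evaluated at the point where the densities of $\|y\|^2$ cross.

```python
import numpy as np
from scipy.stats import chi2

def total_variation(n, snr):
    """TV distance between N(0, I_n) and N(0, (1 + snr) I_n).
    The densities of ||y||^2 cross at t_star; the TV distance equals the
    difference of the two resulting chi-square CDFs at that point."""
    t_star = n * (1.0 + snr) * np.log1p(snr) / snr
    return chi2.cdf(t_star, df=n) - chi2.cdf(t_star / (1.0 + snr), df=n)

for n in (100, 1000, 10000):
    print(n, total_variation(n, snr=2.0 / np.sqrt(n)))
```

Running this with $\mathrm{snr} \propto n^{-1/2}$, as above, shows the distance stabilizing as $n$ grows, consistent with the square-root law of covert communication.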
We propose an improved convergence analysis technique that characterizes the distributed learning paradigm of federated learning (FL) with imperfect/noisy uplink and downlink communications. Such imperfect communication scenarios arise in the practical deployment of FL in emerging communication systems and protocols. The analysis developed in this paper demonstrates, for the first time, that there is an asymmetry in the detrimental effects of uplink and downlink communications in FL: the adverse effect of the downlink noise on the convergence of FL algorithms is more severe. Using this insight, we propose improved signal-to-noise ratio (SNR) control strategies that, discarding the negligible higher-order terms, lead to a convergence rate for FL similar to that of a perfect, noise-free communication channel while requiring significantly less power than existing solutions. In particular, we establish that to maintain the $O(\frac{1}{\sqrt{K}})$ rate of convergence of noise-free FL, the uplink and downlink noise must be scaled down by $\Omega({\sqrt{k}})$ and $\Omega({k})$ respectively, where $k$ denotes the communication round, $k=1,\dots, K$. Our theoretical result has two major benefits: first, it does not rely on the somewhat unrealistic assumption of bounded client dissimilarity, and second, it only requires smooth non-convex loss functions, a function class better suited to modern machine learning and deep learning models. We also perform extensive empirical analysis to verify the validity of our theoretical findings.
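As a toy illustration of the asymmetric noise scaling (this is not the paper's algorithm; the function and parameter names are hypothetical, and we scale the noise standard deviations by one possible reading of the $\Omega(\sqrt{k})$ and $\Omega(k)$ factors), a single simplified FL round might look as follows.

```python
import numpy as np

def noisy_fl_round(w, client_grads, k, lr=0.1, sigma_up=0.05, sigma_down=0.05):
    """One simplified FL round with noisy downlink and uplink channels.
    Downlink noise (more harmful) is scaled down by k, uplink noise by
    sqrt(k), mirroring the Omega(k) / Omega(sqrt(k)) requirements."""
    d = w.shape[0]
    # Downlink: each client receives a noisy copy of the global model.
    received = [w + (sigma_down / k) * np.random.randn(d) for _ in client_grads]
    # Local computation: one gradient step per client (simplification).
    updates = [-lr * g(wi) for g, wi in zip(client_grads, received)]
    # Uplink: local updates travel back over a noisy channel.
    noisy = [u + (sigma_up / np.sqrt(k)) * np.random.randn(d) for u in updates]
    return w + np.mean(noisy, axis=0)

# Toy run: two clients with quadratic losses ||w - c_i||^2.
centers = [np.array([1.0, -1.0]), np.array([3.0, 1.0])]
grads = [lambda w, c=c: 2.0 * (w - c) for c in centers]
w = np.zeros(2)
for k in range(1, 201):
    w = noisy_fl_round(w, grads, k)
print(w)  # approaches the average of the centers, [2.0, 0.0]
```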
Forecasting water content dynamics in heterogeneous porous media is of significant interest in hydrological applications; in particular, infiltration in the presence of cracks and fractures can be treated using peridynamic theory, which properly models nonlocality in space. In this framework, we apply the Chebyshev transform to the diffusive component of the equation and then integrate forward in time using an explicit method. We prove that the proposed spectral numerical scheme provides a solution converging to the unique solution in an appropriate Sobolev space. We conclude with examples on several different soils, also considering a sink term representing root water uptake.
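As a rough illustration of the Chebyshev-in-space, explicit-in-time construction, consider the following sketch for a plain 1D diffusion equation (not the full nonlocal peridynamic operator), with the differentiation matrix built by Trefethen's classical recipe; the diffusivity and initial profile are illustrative.

```python
import numpy as np

def cheb(N):
    """Chebyshev differentiation matrix and grid on [-1, 1] (Trefethen)."""
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    dX = x[:, None] - x[None, :]
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))       # diagonal via negative row sums
    return D, x

N, dt, nsteps, nu = 32, 1e-5, 5000, 1.0   # nu: diffusivity (illustrative)
D, x = cheb(N)
D2 = D @ D                                # spectral second derivative
u = np.exp(-10.0 * x ** 2)                # initial moisture profile
for _ in range(nsteps):                   # explicit Euler in time
    u = u + dt * nu * (D2 @ u)
    u[0] = u[-1] = 0.0                    # homogeneous Dirichlet boundaries
```

The small time step reflects the stiffness of Chebyshev collocation (spectral radius growing like $N^4$), which is the usual price of explicit time integration in this setting.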
This paper studies online convex optimization with stochastic constraints. We propose a variant of the drift-plus-penalty algorithm that guarantees $O(\sqrt{T})$ expected regret and zero constraint violation after a fixed number of iterations, improving on the vanilla drift-plus-penalty method, which incurs $O(\sqrt{T})$ constraint violation. Our algorithm is oblivious to the length of the time horizon $T$, in contrast to the vanilla drift-plus-penalty method. These guarantees rest on a novel drift lemma that provides time-varying bounds on the virtual queue drift and, as a result, leads to time-varying bounds on the expected virtual queue length. Moreover, we extend our framework to stochastic-constrained online convex optimization under two-point bandit feedback. We show that by adapting our algorithmic framework to the bandit feedback setting, we can still achieve $O(\sqrt{T})$ expected regret and zero constraint violation, improving upon previous work for the case of identical constraint functions. Numerical experiments corroborate our theoretical results.
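For reference, here is a sketch of the vanilla drift-plus-penalty skeleton that our variant builds on (the proposed algorithm replaces the fixed parameters with time-varying ones; the parameter choices and problem instance below are illustrative):

```python
import numpy as np

def project_ball(x, radius=1.0):
    """Euclidean projection onto the ball of the given radius."""
    nrm = np.linalg.norm(x)
    return x if nrm <= radius else (radius / nrm) * x

def drift_plus_penalty(loss_grads, cons, cons_grads, T, d=2):
    """Vanilla drift-plus-penalty for online convex optimization with a
    stochastic constraint g_t(x) <= 0 (simplified, single constraint)."""
    V, alpha = np.sqrt(T), float(T)   # typical O(sqrt(T))-regret choices
    x, Q = np.zeros(d), 0.0           # decision variable, virtual queue
    history = []
    for t in range(T):
        # Penalized gradient step: trade off the loss (weight V) against
        # the virtual queue backlog Q, then project onto the feasible set.
        g = V * loss_grads[t](x) + Q * cons_grads[t](x)
        x = project_ball(x - g / (2.0 * alpha))
        Q = max(Q + cons[t](x), 0.0)  # virtual queue update
        history.append(x)
    return history

# Toy usage: losses ||x - a_t||^2 with random targets, constraint x[0] <= 0.5.
rng = np.random.default_rng(0)
T = 1000
targets = rng.normal(0.5, 1.0, size=(T, 2))
loss_grads = [lambda x, a=a: 2.0 * (x - a) for a in targets]
cons = [lambda x: x[0] - 0.5] * T
cons_grads = [lambda x: np.array([1.0, 0.0])] * T
xs = drift_plus_penalty(loss_grads, cons, cons_grads, T)
```

Note that the skeleton needs $T$ up front to set $V$ and $\alpha$; removing this dependence is precisely what the proposed horizon-oblivious variant achieves.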
Mixtures of factor analysers (MFA) models are a popular tool for finding structure in data, particularly high-dimensional data. While in most applications the number of clusters, and especially the number of latent factors within clusters, is fixed in advance, models with automatic inference on both the number of clusters and latent factors have recently been introduced. The automatic inference is usually achieved by assigning a nonparametric prior and allowing the number of clusters and factors to potentially go to infinity. MCMC estimation is performed via an adaptive algorithm, in which the parameters associated with the redundant factors are discarded as the chain moves. While this approach has clear advantages, it also bears significant drawbacks. Running a separate factor-analytical model for each cluster involves matrices of changing dimensions, which can make the model and programming somewhat cumbersome. In addition, discarding the parameters associated with the redundant factors could lead to bias in estimating the cluster covariance matrices. Finally, identification remains problematic for infinite factor models. The current work contributes to the MFA literature by enabling automatic inference on the number of clusters and the number of cluster-specific factors while keeping both cluster and factor dimensions finite, thereby avoiding many of the aforementioned drawbacks of infinite models. For automatic inference on the cluster structure, we employ the dynamic mixture of finite mixtures (MFM) model. Automatic inference on the cluster-specific factors is performed by assigning an exchangeable shrinkage process (ESP) prior to the columns of the factor loading matrices. The performance of the model is demonstrated on several benchmark data sets as well as real data applications.
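For concreteness, this minimal sketch draws data from the MFA generative model that the MFM and ESP priors act on (the priors themselves concern inference and are not shown; all arguments are illustrative):

```python
import numpy as np

def simulate_mfa(n, weights, mus, lambdas, psi, seed=0):
    """Draw n observations from a mixture of factor analysers:
    y | cluster c  ~  N(mu_c, Lambda_c Lambda_c' + diag(psi))."""
    rng = np.random.default_rng(seed)
    z = rng.choice(len(weights), size=n, p=weights)   # cluster labels
    ys = []
    for c in z:
        f = rng.standard_normal(lambdas[c].shape[1])  # latent factors
        eps = rng.normal(0.0, np.sqrt(psi))           # idiosyncratic noise
        ys.append(mus[c] + lambdas[c] @ f + eps)
    return np.array(ys), z

# Two clusters in 5 dimensions with 2 and 1 latent factors respectively,
# illustrating the cluster-specific factor dimensions the model infers.
p = 5
rng = np.random.default_rng(1)
y, z = simulate_mfa(n=500, weights=[0.6, 0.4],
                    mus=[np.zeros(p), 3.0 * np.ones(p)],
                    lambdas=[rng.standard_normal((p, 2)),
                             rng.standard_normal((p, 1))],
                    psi=0.1 * np.ones(p))
```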
A promising approach to deal with the high hardware cost and energy consumption of massive MIMO transmitters is to use low-resolution digital-to-analog converters (DACs) at each antenna element. This leads to a transmission scheme where the transmitted signals are restricted to a finite set of voltage levels. This paper is concerned with the analysis and optimization of a low-cost quantized precoding strategy, referred to as linear-quantized precoding, for a downlink massive MIMO system under Rayleigh fading. In linear-quantized precoding, the signals are first processed by a linear precoding matrix and subsequently quantized component-wise by the DAC. In this paper, we analyze both the signal-to-interference-plus-noise ratio (SINR) and the symbol error probability (SEP) of such linear-quantized precoding schemes in an asymptotic framework where the number of transmit antennas and the number of users grow large with a fixed ratio. Our results provide a rigorous justification for the heuristic arguments based on the Bussgang decomposition that are commonly used in prior works. Based on the asymptotic analysis, we further derive the optimal precoder within a class of linear-quantized precoders that includes several popular precoders as special cases. Our numerical results demonstrate the excellent accuracy of the asymptotic analysis for finite systems and the optimality of the derived precoder.
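The two-stage structure of linear-quantized precoding is easy to state in code. The following sketch uses zero-forcing followed by 1-bit DACs, with illustrative dimensions (the paper's analysis covers a broader class of linear precoders and DAC resolutions):

```python
import numpy as np

rng = np.random.default_rng(1)
B, U = 128, 16                      # transmit antennas, single-antenna users
H = (rng.standard_normal((U, B)) + 1j * rng.standard_normal((U, B))) / np.sqrt(2)

# Stage 1: linear precoding (zero-forcing shown; RZF and MF are other
# members of the same class of linear precoders).
P = H.conj().T @ np.linalg.inv(H @ H.conj().T)
s = (rng.choice([-1, 1], U) + 1j * rng.choice([-1, 1], U)) / np.sqrt(2)  # QPSK
z = P @ s

# Stage 2: component-wise 1-bit DAC quantization of the real and
# imaginary parts, with a per-antenna power normalization.
gain = 1.0 / np.sqrt(2 * B)
x = gain * (np.sign(z.real) + 1j * np.sign(z.imag))

# Noiseless received signal; the Bussgang decomposition writes x as a
# linear function of z plus uncorrelated distortion, which the paper's
# asymptotic analysis makes rigorous.
y = H @ x
```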
In this paper we develop a new well-balanced discontinuous Galerkin (DG) finite element scheme with subcell finite volume (FV) limiter for the numerical solution of the Einstein--Euler equations of general relativity based on a first order hyperbolic reformulation of the Z4 formalism. The first order Z4 system, which is composed of 59 equations, is analyzed and proven to be strongly hyperbolic for a general metric. The well-balancing is achieved for arbitrary but a priori known equilibria by subtracting a discrete version of the equilibrium solution from the discretized time-dependent PDE system. Special care has also been taken in the design of the numerical viscosity so that the well-balancing property is achieved. For the treatment of low-density matter, e.g. when simulating massive compact objects like neutron stars surrounded by vacuum, we have introduced a new filter in the conversion from the conserved to the primitive variables, preventing superluminal velocities when the density drops below a certain threshold, which is potentially also very useful for the numerical investigation of highly rarefied relativistic astrophysical flows. Thanks to these improvements, all standard tests of numerical relativity are successfully reproduced, leading to three main achievements: (i) we are able to obtain stable long term simulations of stationary black holes, including Kerr black holes with extreme spin, which after an initial perturbation return perfectly back to the equilibrium solution up to machine precision; (ii) a (standard) TOV star under perturbation is evolved in pure vacuum ($\rho=p=0$) up to $t=1000$ with no need to introduce any artificial atmosphere around the star; and (iii) we solve the head-on collision of two puncture black holes, which was previously considered intractable within the Z4 formalism.
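The well-balancing mechanism itself can be illustrated on a toy scalar balance law (the sketch below is nowhere near the Einstein--Euler system, and all choices are illustrative): subtracting the discrete residual evaluated at the known equilibrium makes the equilibrium an exact fixed point of the scheme by construction.

```python
import numpy as np

def residual(u, dx, flux, source, x):
    """First-order FV residual with upwind flux for u_t + f(u)_x = s(x)."""
    f = flux(u)
    dfdx = (f - np.roll(f, 1)) / dx      # upwind difference (positive speed)
    return -dfdx + source(x)

# Toy balance law: u_t + u_x = cos(x) on a periodic domain, with exact
# equilibrium u_eq(x) = sin(x).
N = 200
x = np.linspace(0.0, 2 * np.pi, N, endpoint=False)
dx = x[1] - x[0]
flux = lambda u: u
source = lambda x: np.cos(x)
u_eq = np.sin(x)

u = u_eq.copy()
dt = 0.4 * dx
for _ in range(1000):
    # Well-balanced update: subtracting the discrete equilibrium residual
    # cancels the truncation error at the equilibrium, so u_eq is preserved
    # to machine precision even though the scheme is only first order.
    u = u + dt * (residual(u, dx, flux, source, x)
                  - residual(u_eq, dx, flux, source, x))
print(np.max(np.abs(u - u_eq)))   # ~1e-16
```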
I consider a class of statistical decision problems in which the policy maker must decide between two alternative policies to maximize social welfare based on a finite sample. The central assumption is that the underlying, possibly infinite-dimensional, parameter lies in a known convex set, potentially leading to partial identification of the welfare effect. An example of such a restriction is smoothness of counterfactual outcome functions. As the main theoretical result, I derive a finite-sample, exact minimax regret decision rule within the class of all decision rules under normal errors with known variance. When the error distribution is unknown, I obtain a feasible decision rule that is asymptotically minimax regret. I apply my results to the problem of whether to change a policy eligibility cutoff in a regression discontinuity setup, and illustrate them in an empirical application to a school construction program in Burkina Faso.
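As a stylized illustration of minimax regret in a two-action problem (this is not the paper's construction: it assumes point identification, restricts attention to threshold rules, and uses illustrative values throughout):

```python
import numpy as np
from scipy.stats import norm

# Threshold rules: "adopt the new policy iff theta_hat > c", where
# theta_hat ~ N(theta, sigma^2) and the welfare effect theta lies in [-b, b].
sigma, b = 1.0, 2.0
thetas = np.linspace(-b, b, 401)

def max_regret(c):
    p_adopt = 1.0 - norm.cdf((c - thetas) / sigma)   # P(adopt | theta)
    # Regret: forgo theta by not adopting when theta > 0; lose -theta by
    # adopting when theta < 0.
    regret = np.where(thetas > 0, thetas * (1.0 - p_adopt), -thetas * p_adopt)
    return regret.max()

cs = np.linspace(-b, b, 401)
best_c = cs[np.argmin([max_regret(c) for c in cs])]
print(best_c)   # symmetry makes c = 0 minimax regret within this class
```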
Efficiently selecting an appropriate spike-stream length for extracting precise information is key to spike-based vision tasks. To address this issue, we propose a dynamic timing representation for spike streams. Built on a multi-layer architecture, it applies dilated convolutions along the temporal dimension to extract features at multiple temporal scales with few parameters, and we design a layer-attention module to dynamically fuse these features. Moreover, we propose an unsupervised learning method for optical flow estimation in a spike-based manner to remove the dependence on labeled data. In addition, to verify robustness, we build a spike-based synthetic validation dataset for extreme scenarios in autonomous driving, denoted the SSES dataset, which consists of various corner cases. Experiments show that our method can predict optical flow from spike streams in different high-speed scenes, including real scenes. For instance, our method achieves $15\%$ and $19\%$ error reduction relative to the best prior spike-based method, SCFlow, at $\Delta t=10$ and $\Delta t=20$ respectively, under the same settings as previous works.
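A minimal sketch of the two ingredients, dilated temporal convolutions and layer attention, is given below in PyTorch; the channel sizes, depth, and module names are illustrative rather than the paper's architecture.

```python
import torch
import torch.nn as nn

class DynamicTimingRepresentation(nn.Module):
    """Sketch: dilated temporal convolutions extract features at multiple
    temporal scales; a learned layer attention fuses them dynamically."""

    def __init__(self, channels=32, num_layers=4):
        super().__init__()
        self.layers = nn.ModuleList([
            nn.Conv1d(channels, channels, kernel_size=3,
                      dilation=2 ** i, padding=2 ** i)  # length-preserving
            for i in range(num_layers)
        ])
        self.attn = nn.Linear(channels, 1)  # one score per layer

    def forward(self, x):                   # x: (batch, channels, time)
        feats, h = [], x
        for conv in self.layers:
            h = torch.relu(conv(h))
            feats.append(h)
        stack = torch.stack(feats, dim=1)   # (batch, layers, channels, time)
        scores = self.attn(stack.mean(dim=-1))          # (batch, layers, 1)
        w = torch.softmax(scores, dim=1).unsqueeze(-1)  # layer weights
        return (w * stack).sum(dim=1)       # attention-weighted fusion
```

Doubling the dilation at each layer gives an exponentially growing temporal receptive field at linear parameter cost, which is what allows a single representation to cover multiple spike-stream lengths.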
The recently introduced independent fluctuating two-ray (IFTR) fading model, consisting of two specular components fluctuating independently plus a diffuse component, has proven to provide an excellent fit to different wireless environments, including the millimeter-wave band. However, the original formulations of the probability density function (PDF) and cumulative distribution function (CDF) of this model are not applicable to all possible values of its defining parameters, and are given in terms of multifold generalized hypergeometric functions, which prevents their widespread use in deriving performance metric expressions. In this paper we present a new formulation of the IFTR model as a countable mixture of Gamma distributions, which greatly facilitates performance evaluation for this model in terms of the metrics already known for the much simpler and widely used Nakagami-m fading. Additionally, a closed-form expression is presented for the generalized moment generating function (GMGF), which permits all the moments of the distribution, as well as several relevant performance metrics, to be readily obtained. Based on these new derivations, the IFTR model is evaluated in terms of the average channel capacity, the outage probability with and without co-channel interference, and the bit error rate (BER), and the results are verified by Monte Carlo simulations.
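Once the mixture coefficients are available, any metric known for Gamma-distributed (equivalently, Nakagami-m power) SNR follows by weighted summation over the truncated series. A sketch, with placeholder weights, shapes, and scales standing in for the coefficients derived in the paper:

```python
import numpy as np
from scipy.stats import gamma

def mixture_cdf(x, weights, shapes, scales):
    """CDF of an SNR whose pdf is a (truncated) countable mixture of Gamma
    densities: f(x) = sum_j w_j Gamma(x; k_j, theta_j). Evaluating it at
    the SNR threshold gives the outage probability."""
    return sum(w * gamma.cdf(x, a=k, scale=s)
               for w, k, s in zip(weights, shapes, scales))

# Placeholder coefficients: the paper derives the true weights from the
# IFTR parameters and shows how the series can be truncated.
print(mixture_cdf(0.5, weights=[0.5, 0.3, 0.2],
                  shapes=[1.0, 2.0, 3.0], scales=[1.0, 0.8, 0.5]))
```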
This work addresses the block-diagonal semidefinite program (SDP) relaxations for the clique number of the Paley graphs. Computing the clique number (the size of a maximum clique) of a graph is a classic NP-hard problem; a Paley graph is a deterministic graph in which two vertices are connected if their difference is a quadratic residue modulo a suitable prime power. Improving the upper bound on the Paley graph clique number for odd prime powers is an open problem in combinatorics. Moreover, since quadratic residues exhibit pseudorandom properties, Paley graphs are related to the construction of deterministic restricted isometries, an open problem in compressed sensing and sparse recovery. Recent work provides numerical evidence that the current upper bounds can be improved by sum-of-squares (SOS) relaxations. In particular, the bounds given by the SOS relaxations of degree 4 (SOS-4) have been empirically observed to grow at an order smaller than the square root of the prime. However, computing SOS-4 appears to be intractable for large graphs. Gvozdenović et al. introduced a more computationally efficient block-diagonal hierarchy of SDPs that refines the SOS hierarchy. They computed the values of these SDPs of degrees 2 and 3 (L2 and L3 respectively) for the Paley graph clique numbers associated with primes $p \le 809$. These values bound from above the values of the corresponding SOS-4 and SOS-6 relaxations respectively. We revisit these computations and compute the values of the L2 relaxations for larger values of $p$. Our results provide additional numerical evidence that the L2 relaxations, and therefore also the SOS-4 relaxations, grow asymptotically at an order smaller than the square root of $p$.
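For context, the following sketch builds a Paley graph and computes the Lovász theta number with cvxpy. For these self-complementary graphs, theta recovers the classical $\sqrt{p}$ upper bound on the clique number that the L2 and SOS-4 relaxations improve upon; this is a much simpler SDP than the block-diagonal L2 hierarchy itself.

```python
import numpy as np
import cvxpy as cp

def paley_graph(p):
    """Adjacency matrix of the Paley graph on Z_p, p prime, p % 4 == 1."""
    residues = {(x * x) % p for x in range(1, p)}
    A = np.zeros((p, p), dtype=int)
    for i in range(p):
        for j in range(i + 1, p):
            if (i - j) % p in residues:
                A[i, j] = A[j, i] = 1
    return A

def lovasz_theta(A):
    """Lovász theta via its standard SDP:
    maximize <J, X>  s.t.  tr(X) = 1,  X_ij = 0 for edges ij,  X PSD."""
    n = A.shape[0]
    X = cp.Variable((n, n), symmetric=True)
    cons = [X >> 0, cp.trace(X) == 1]
    cons += [X[i, j] == 0 for i in range(n) for j in range(i + 1, n)
             if A[i, j] == 1]
    cp.Problem(cp.Maximize(cp.sum(X)), cons).solve()
    return cp.sum(X).value

p = 13
A = paley_graph(p)
# Paley graphs are self-complementary, so theta(G) = theta(complement) and
# it upper-bounds the clique number; the value here is sqrt(13) ~ 3.606.
print(lovasz_theta(A), np.sqrt(p))
```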