Efficient sampling and remote estimation are critical for a plethora of wireless-empowered applications in the Internet of Things and cyber-physical systems. Motivated by such applications, this work proposes decentralized policies for the real-time monitoring and estimation of autoregressive processes over random access channels. Two classes of policies are investigated: (i) oblivious schemes in which sampling and transmission policies are independent of the processes that are monitored, and (ii) non-oblivious schemes in which transmitters causally observe their corresponding processes for decision making. In the class of oblivious policies, we show that minimizing the expected time-average estimation error is equivalent to minimizing the expected age of information. Consequently, we prove lower and upper bounds on the minimum achievable estimation error in this class. Next, we consider non-oblivious policies and design a threshold policy, called error-based thinning, in which each source node becomes active if its instantaneous error has crossed a fixed threshold (which we optimize). Active nodes then transmit stochastically following a slotted ALOHA policy. A closed-form, approximately optimal, solution is found for the threshold as well as the resulting estimation error. It is shown that non-oblivious policies offer a multiplicative gain close to $3$ compared to oblivious policies. Moreover, it is shown that oblivious policies that use the age of information for decision making improve on the state of the art by a multiplicative factor of at least $2$. The performance of all discussed policies is compared using simulations. The numerical comparison shows that the performance of the proposed decentralized policy is very close to that of centralized greedy scheduling.
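To make the error-based thinning policy concrete, the following minimal simulation sketch (all parameter names and values are illustrative, not taken from the paper) monitors Gauss-Markov AR(1) sources over a collision channel: a node contends only when its instantaneous error magnitude exceeds the threshold, and contending nodes transmit with a fixed slotted-ALOHA probability.

```python
import numpy as np

def simulate_error_based_thinning(n_nodes=50, horizon=100_000, rho=0.95,
                                  threshold=3.0, tx_prob=0.1, seed=0):
    """Monte-Carlo sketch of error-based thinning over a collision channel.

    Each source i follows an AR(1) process X_i(t+1) = rho*X_i(t) + W_i(t).
    The monitor predicts forward from the last received sample; a node
    becomes active when its instantaneous error magnitude exceeds
    `threshold`, and active nodes transmit with probability `tx_prob`
    (slotted ALOHA).  A slot succeeds only if exactly one node transmits.
    """
    rng = np.random.default_rng(seed)
    x = np.zeros(n_nodes)          # true source states
    x_hat = np.zeros(n_nodes)      # monitor-side estimates
    total_sq_error = 0.0
    for _ in range(horizon):
        x = rho * x + rng.standard_normal(n_nodes)   # sources evolve
        x_hat = rho * x_hat                          # monitor predicts forward
        err = x - x_hat
        # error-based thinning: only large-error nodes contend for the channel
        active = np.abs(err) > threshold
        transmitting = active & (rng.random(n_nodes) < tx_prob)
        if transmitting.sum() == 1:                  # success iff no collision
            i = np.flatnonzero(transmitting)[0]
            x_hat[i] = x[i]                          # fresh sample delivered
            err[i] = 0.0
        total_sq_error += np.mean(err ** 2)
    return total_sq_error / horizon

if __name__ == "__main__":
    for thr in (1.0, 2.0, 3.0, 5.0):
        print(thr, simulate_error_based_thinning(threshold=thr))
```

Sweeping the threshold in the `__main__` block illustrates the trade-off the paper optimizes: a small threshold floods the channel with collisions, while a large threshold leaves errors uncorrected for too long.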
We propose a joint channel estimation and data detection (JED) algorithm for densely populated cell-free massive multiuser (MU) multiple-input multiple-output (MIMO) systems, which reduces the channel training overhead caused by the presence of hundreds of simultaneously transmitting user equipments (UEs). Our algorithm iteratively solves a relaxed version of a maximum a posteriori JED problem and simultaneously exploits the sparsity of cell-free massive MU-MIMO channels as well as the boundedness of QAM constellations. In order to improve the performance and convergence of the algorithm, we propose methods that permute the access point and UE indices to form so-called virtual cells, which leads to better initial solutions. We assess the performance of our algorithm in terms of root-mean-square symbol error, bit error rate, and mutual information, and we demonstrate that JED significantly reduces the pilot overhead compared to orthogonal training, which enables reliable communication with short packets to a large number of UEs.
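As a rough illustration of how sparsity and QAM boundedness can be exploited jointly, the sketch below alternates proximal-gradient steps on a box-constrained, $\ell_1$-regularized least-squares relaxation of a JED problem under the model $Y = HS + N$. It is only a generic biconvex surrogate, not the paper's algorithm (which additionally forms virtual cells via index permutations); all names and parameters are hypothetical.

```python
import numpy as np

def soft_threshold(z, tau):
    """Complex soft-thresholding (proximal operator of the l1 norm)."""
    mag = np.abs(z)
    return np.where(mag > tau, (1.0 - tau / np.maximum(mag, 1e-12)) * z, 0.0)

def clip_box(s, r):
    """Project real and imaginary parts onto [-r, r] (QAM boundedness)."""
    return np.clip(s.real, -r, r) + 1j * np.clip(s.imag, -r, r)

def jed_box_sparse(Y, S_pilot, n_ue, lam=0.1, box=1.0, iters=100):
    """Sketch of box-constrained, sparsity-promoting joint channel
    estimation and data detection via alternating proximal-gradient steps.

    Y        : (n_ap, T) received block, T = T_pilot + T_data
    S_pilot  : (n_ue, T_pilot) known pilot symbols
    """
    n_ap, T = Y.shape
    T_pilot = S_pilot.shape[1]
    H = np.zeros((n_ap, n_ue), dtype=complex)
    S = np.zeros((n_ue, T), dtype=complex)
    S[:, :T_pilot] = S_pilot
    for _ in range(iters):
        # channel step: gradient of ||Y - H S||_F^2, then soft-threshold
        step_h = 1.0 / max(np.linalg.norm(S, 2) ** 2, 1e-12)
        H = soft_threshold(H - step_h * (H @ S - Y) @ S.conj().T, step_h * lam)
        # data step: gradient step, then clip to the QAM box; pilots stay fixed
        step_s = 1.0 / max(np.linalg.norm(H, 2) ** 2, 1e-12)
        S_new = clip_box(S - step_s * H.conj().T @ (H @ S - Y), box)
        S_new[:, :T_pilot] = S_pilot
        S = S_new
    return H, S[:, T_pilot:]
```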
We present an analytical framework for channel estimation and data detection in massive multiple-input multiple-output uplink systems with 1-bit analog-to-digital converters (ADCs) and i.i.d. Rayleigh fading. First, we provide closed-form expressions for the mean squared error (MSE) of channel estimation under the state-of-the-art linear minimum MSE estimator and the class of scaled least-squares estimators. For data detection, we provide closed-form expressions for the expected value and the variance of the estimated symbols when maximum ratio combining is adopted, which can be exploited to efficiently implement minimum distance detection and, potentially, to design the set of transmit symbols. Our analytical findings explicitly depend on key system parameters such as the signal-to-noise ratio (SNR), the number of user equipments, and the pilot length, thus enabling a precise characterization of the performance of channel estimation and data detection with 1-bit ADCs. The proposed analysis highlights a fundamental SNR trade-off, according to which operating at the right noise level significantly enhances the system performance.
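The closed-form analysis is the paper's contribution; as a purely numerical companion, the following Monte-Carlo sketch measures channel-estimation MSE and symbol errors for a 1-bit ADC front end with maximum ratio combining. The setup is assumed (orthogonal pilots, QPSK data), and the scaled least-squares estimator's scale is fitted empirically here rather than taken from the paper's expressions.

```python
import numpy as np

def one_bit(z):
    """1-bit ADC: keep only the sign of the real and imaginary parts."""
    return np.sign(z.real) + 1j * np.sign(z.imag)

def run_mc(n_ant=64, n_ue=4, pilot_len=16, snr_db=0.0, n_trials=200, seed=0):
    """Monte-Carlo sketch: scaled-LS channel estimation and MRC detection
    from 1-bit quantized observations over an i.i.d. Rayleigh channel."""
    rng = np.random.default_rng(seed)
    snr = 10 ** (snr_db / 10)
    # orthogonal pilots (rows of a DFT-like matrix), unit power per UE
    P = np.exp(-2j * np.pi * np.outer(np.arange(n_ue), np.arange(pilot_len)) / pilot_len)
    qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
    mse, n_err, n_sym = 0.0, 0, 0
    for _ in range(n_trials):
        H = (rng.standard_normal((n_ant, n_ue)) + 1j * rng.standard_normal((n_ant, n_ue))) / np.sqrt(2)
        noise = lambda shape: (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) / np.sqrt(2 * snr)
        # pilot phase: 1-bit observations, unscaled LS, then the best scaling
        R = one_bit(H @ P + noise((n_ant, pilot_len)))
        H_ls = R @ P.conj().T / pilot_len
        alpha = np.real(np.vdot(H_ls, H)) / max(np.vdot(H_ls, H_ls).real, 1e-12)
        H_hat = alpha * H_ls             # scaled LS (scale chosen empirically here)
        mse += np.mean(np.abs(H_hat - H) ** 2) / n_trials
        # data phase: one QPSK symbol per UE, MRC on the 1-bit samples
        s = qpsk[rng.integers(0, 4, n_ue)]
        r = one_bit(H @ s + noise(n_ant))
        z = (H_hat.conj().T @ r) / np.sum(np.abs(H_hat) ** 2, axis=0)
        s_hat = qpsk[np.argmin(np.abs(z[:, None] - qpsk[None, :]) ** 2, axis=1)]
        n_err += np.sum(s_hat != s)
        n_sym += n_ue
    return mse, n_err / n_sym

if __name__ == "__main__":
    print(run_mc(snr_db=0.0))
```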
Estimating causal effects from observational data tells us which factors are important in an autonomous system and enables us to make better decisions. Such estimates have applications in selecting medical treatments, shaping business strategies, and informing government and societal policies. The unavailability of complete data, coupled with its high cardinality, makes this estimation task computationally intractable. Recently, a regression-based weighted estimator was introduced that can produce a solution from a bounded number of samples of a given problem. However, as the data dimension increases, the solution produced by the regression-based method degrades. Against this background, we introduce a neural-network-based estimator that improves the solution quality when the relationships are non-linear and the samples are finite. Our empirical evaluation shows a significant improvement in solution quality, up to around $55\%$, compared to state-of-the-art estimators.
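For context, the sketch below illustrates a generic outcome-regression (back-door adjustment) estimate of an average treatment effect on synthetic non-linear data, contrasting a linear regressor with a small neural network. It illustrates the general idea only, not the specific weighted estimator or architecture studied in the paper, and all names and numbers are hypothetical.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor

def ate_by_regression(model, X, t, y):
    """Outcome-regression (back-door adjustment) estimate of the average
    treatment effect: fit E[Y | T, X], then contrast T=1 vs T=0 for every
    unit and average the difference."""
    model.fit(np.column_stack([t, X]), y)
    y1 = model.predict(np.column_stack([np.ones_like(t), X]))
    y0 = model.predict(np.column_stack([np.zeros_like(t), X]))
    return np.mean(y1 - y0)

# Synthetic observational data with a non-linear outcome mechanism.
rng = np.random.default_rng(0)
n = 5000
X = rng.normal(size=(n, 3))
propensity = 1 / (1 + np.exp(-X[:, 0]))            # confounded treatment assignment
t = (rng.random(n) < propensity).astype(float)
y = 2.0 * t + np.sin(3 * X[:, 0]) + X[:, 1] ** 2 + rng.normal(size=n)  # true ATE = 2

print("linear regression :", ate_by_regression(LinearRegression(), X, t, y))
print("neural network    :", ate_by_regression(
    MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0), X, t, y))
```

On this toy data the linear regressor is biased by the unmodeled non-linear confounding, while the neural network recovers an estimate much closer to the true effect of $2$.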
We consider an information update system consisting of $N$ sources sending status packets at random instants according to a Poisson process to a remote monitor through a single server. We assume a heterogeneous server with exponentially distributed service times that is equipped with a waiting room holding the freshest packet from each source, an arrangement referred to as Single Buffer Per-Source Queueing (SBPSQ). The sources are assumed to be equally important, i.e., the non-weighted average Age of Information (AoI) is used as the information freshness metric, and two symmetric scheduling policies are subsequently studied in this paper, namely the First Source First Serve (FSFS) and Earliest Served First Serve (ESFS) policies, the latter of which is, to the best of our knowledge, proposed for the first time in this paper. By employing the theory of Markov Fluid Queues (MFQ), an analytical model is proposed to obtain the exact distribution of the AoI for each source when the FSFS and ESFS policies are employed at the server. Subsequently, a benchmark scheduling-free scheme named Single Buffer with Replacement (SBR), which uses a single one-packet buffer shared by all sources, is also studied with a similar but less complex analytical model. We comparatively study the performance of the three schemes through numerical examples and show that the proposed ESFS policy outperforms the other two schemes in terms of the average AoI and the age violation probability averaged across all sources, in a scenario where the sources have different traffic intensities but share a common service time.
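The analytical models in the paper are exact; as a sanity-check companion, the following event-driven simulation sketch estimates the per-source time-average AoI under SBPSQ. Parameters are illustrative, and the policy semantics are our interpretation: FSFS serves the buffered packet that was generated earliest, while ESFS serves the source whose previous service started earliest.

```python
import heapq
import numpy as np

def simulate_aoi(policy="ESFS", rates=(0.2, 0.4, 0.6, 0.8), mu=2.0,
                 horizon=1e5, seed=0):
    """Event-driven sketch of a single-buffer-per-source status-update queue.

    Sources generate packets as Poisson processes; the waiting room keeps
    only the freshest packet of each source; service is exponential(mu).
    Returns the per-source time-average AoI."""
    rng = np.random.default_rng(seed)
    n = len(rates)
    ev = [(rng.exponential(1 / r), "arr", i, 0.0) for i, r in enumerate(rates)]
    heapq.heapify(ev)
    buf = [None] * n              # generation time of the buffered packet, if any
    last_start = [-1.0] * n       # when each source last started service
    last_gen = [0.0] * n          # generation time of freshest delivered packet
    area = [0.0] * n
    prev_t, busy = 0.0, False
    while ev:
        t, kind, i, gen = heapq.heappop(ev)
        if t > horizon:
            break
        for j in range(n):        # integrate AoI_j(s) = s - last_gen[j] on [prev_t, t]
            area[j] += (t - prev_t) * ((prev_t + t) / 2 - last_gen[j])
        prev_t = t
        if kind == "arr":
            buf[i] = t            # replace any stale packet of source i
            heapq.heappush(ev, (t + rng.exponential(1 / rates[i]), "arr", i, 0.0))
        else:                     # service completion of source i
            busy = False
            last_gen[i] = gen
        if not busy:
            waiting = [j for j in range(n) if buf[j] is not None]
            if waiting:
                if policy == "FSFS":
                    j = min(waiting, key=lambda k: buf[k])        # oldest packet
                else:                                             # ESFS
                    j = min(waiting, key=lambda k: last_start[k])  # least recently served
                busy, last_start[j] = True, t
                heapq.heappush(ev, (t + rng.exponential(1 / mu), "dep", j, buf[j]))
                buf[j] = None
    return [a / prev_t for a in area]

if __name__ == "__main__":
    for pol in ("FSFS", "ESFS"):
        print(pol, np.round(simulate_aoi(policy=pol), 3))
```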
We develop a new primitive for stochastic optimization: a low-bias, low-cost estimator of the minimizer $x_\star$ of any Lipschitz strongly-convex function. In particular, we use a multilevel Monte Carlo approach due to Blanchet and Glynn to turn any optimal stochastic gradient method into an estimator of $x_\star$ with bias $\delta$, variance $O(\log(1/\delta))$, and an expected sampling cost of $O(\log(1/\delta))$ stochastic gradient evaluations. As an immediate consequence, we obtain cheap and nearly unbiased gradient estimators for the Moreau-Yosida envelope of any Lipschitz convex function, allowing us to perform dimension-free randomized smoothing. We demonstrate the potential of our estimator through four applications. First, we develop a method for minimizing the maximum of $N$ functions, improving on recent results and matching a lower bound up to logarithmic factors. Second and third, we recover state-of-the-art rates for projection-efficient and gradient-efficient optimization using simple algorithms with a transparent analysis. Finally, we show that an improved version of our estimator would yield a nearly linear-time, optimal-utility, differentially-private non-smooth stochastic optimization method.
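A minimal sketch of the multilevel Monte Carlo construction, assuming averaged SGD as the base method and a toy quadratic objective; step sizes, budgets, and the truncation level are illustrative rather than the paper's choices.

```python
import numpy as np

def sgd_minimizer(grad_oracle, x0, n_steps, lr0=1.0, rng=None):
    """Averaged SGD with a 1/t step size: returns an approximation of x_star
    whose error shrinks with n_steps (strong convexity assumed)."""
    x = np.array(x0, dtype=float)
    x_bar = np.zeros_like(x)
    for t in range(1, n_steps + 1):
        x = x - (lr0 / t) * grad_oracle(x, rng)
        x_bar += (x - x_bar) / t          # running average of the iterates
    return x_bar

def mlmc_minimizer_estimate(grad_oracle, x0, j_max=10, base_steps=8, rng=None):
    """Multilevel Monte Carlo sketch of a low-bias estimator of x_star.

    Draw a random level J with P(J = j) proportional to 2^{-j}; run the base
    method with geometrically increasing budgets and return
        x_hat = X_0 + (X_J - X_{J-1}) / P(J = J),
    whose expectation telescopes to the output of the most expensive run
    while the expected cost stays logarithmic in the truncation level."""
    rng = rng or np.random.default_rng()
    j = min(rng.geometric(0.5), j_max)                 # J = j w.p. 2^{-j}, truncated
    p_j = 2.0 ** -j if j < j_max else 2.0 ** -(j - 1)  # truncated geometric pmf
    seed = rng.integers(2 ** 32)                       # couple the two runs below
    x_lo = sgd_minimizer(grad_oracle, x0, base_steps * 2 ** (j - 1),
                         rng=np.random.default_rng(seed))
    x_hi = sgd_minimizer(grad_oracle, x0, base_steps * 2 ** j,
                         rng=np.random.default_rng(seed))
    x_0 = sgd_minimizer(grad_oracle, x0, base_steps, rng=rng)
    return x_0 + (x_hi - x_lo) / p_j

# Toy strongly convex objective f(x) = ||x - a||^2 / 2 with noisy gradients.
a = np.array([1.0, -2.0, 0.5])
noisy_grad = lambda x, rng: (x - a) + 0.1 * rng.standard_normal(x.shape)
rng = np.random.default_rng(0)
samples = [mlmc_minimizer_estimate(noisy_grad, np.zeros(3), rng=rng) for _ in range(2000)]
print("mean estimate:", np.round(np.mean(samples, axis=0), 3), " target:", a)
```

Each call costs only a geometrically distributed number of base runs, and sharing the random seed between the two coupled runs keeps the variance of the level differences small.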
In this paper, we investigate the problem of pilot optimization and channel estimation in a two-way relaying network (TWRN) aided by an intelligent reflecting surface (IRS) with finite discrete phase shifters. A key challenge in a TWRN is that the two cascaded channels, source-to-IRS-to-relay and destination-to-IRS-to-relay, interfere with each other. By designing the initial phase shifts of the IRS and the pilot pattern, the two cascaded channels are separated using simple arithmetic operations such as addition and subtraction. Then, the least-squares estimator is adopted to estimate the two cascaded channels as well as the two direct channels from the source and the destination to the relay. The corresponding mean squared errors (MSEs) of the channel estimators are derived. By minimizing the MSE, the structure of the optimal IRS phase-shift matrix is derived, and the Hadamard and discrete Fourier transform (DFT) matrices are shown to be two optimal training matrices for the IRS. Furthermore, an IRS with finite discrete phase shifters is taken into account. Using theoretical derivations and numerical simulations, we find that 3-4 bit phase shifters are sufficient for the IRS to achieve a negligible MSE performance loss. More importantly, the Hadamard matrix requires only one-bit phase shifters to achieve the optimal MSE performance, while the DFT matrix requires at least three or four bits to achieve the same performance. Thus, the Hadamard matrix is the preferred choice for channel estimation with a low-resolution phase-shifting IRS.
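The following numerical sketch (sizes, SNR, and the simplified model $y = Vh + n$, with one IRS phase configuration per pilot slot, are illustrative assumptions) reproduces the flavor of the comparison: least-squares channel estimation with DFT versus Hadamard training matrices whose entries are quantized to $b$-bit phase shifters.

```python
import numpy as np
from scipy.linalg import hadamard

def quantize_phase(V, bits):
    """Map each entry of a unit-modulus training matrix to the nearest point
    of a 2**bits-point uniform phase grid (discrete phase shifters)."""
    step = 2 * np.pi / (2 ** bits)
    return np.exp(1j * np.round(np.angle(V) / step) * step)

def ls_mse(V, snr_db=10.0, n_trials=2000, seed=0):
    """Empirical MSE of least-squares channel estimation with training
    matrix V under the model y = V h + n, h_hat = pinv(V) y."""
    rng = np.random.default_rng(seed)
    M = V.shape[0]
    sigma2 = 10 ** (-snr_db / 10)
    V_pinv = np.linalg.pinv(V)          # pinv tolerates a quantization-degraded V
    mse = 0.0
    for _ in range(n_trials):
        h = (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2)
        n = np.sqrt(sigma2 / 2) * (rng.standard_normal(M) + 1j * rng.standard_normal(M))
        h_hat = V_pinv @ (V @ h + n)
        mse += np.mean(np.abs(h_hat - h) ** 2) / n_trials
    return mse

M = 16
dft = np.exp(-2j * np.pi * np.outer(np.arange(M), np.arange(M)) / M)
had = hadamard(M).astype(complex)
for bits in (1, 2, 3, 4):
    print(f"{bits}-bit phases | DFT MSE: {ls_mse(quantize_phase(dft, bits)):.4f}"
          f" | Hadamard MSE: {ls_mse(quantize_phase(had, bits)):.4f}")
```

Because the Hadamard entries are $\pm 1$, a one-bit phase grid represents them exactly, whereas the DFT matrix loses orthogonality under coarse quantization, which is the effect the abstract describes.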
Topic model evaluation, like evaluation of other unsupervised methods, can be contentious. However, the field has coalesced around automated estimates of topic coherence, which rely on the frequency of word co-occurrences in a reference corpus. Contemporary neural topic models surpass classical ones according to these metrics. At the same time, topic model evaluation suffers from a validation gap: automated coherence, developed for classical models, has not been validated using human experimentation for neural models. In addition, a meta-analysis of topic modeling literature reveals a substantial standardization gap in automated topic modeling benchmarks. To address the validation gap, we compare automated coherence with the two most widely accepted human judgment tasks: topic rating and word intrusion. To address the standardization gap, we systematically evaluate a dominant classical model and two state-of-the-art neural models on two commonly used datasets. Automated evaluations declare a winning model when corresponding human evaluations do not, calling into question the validity of fully automatic evaluations independent of human judgments.
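For reference, automated coherence scores of this kind are typically computed as an average normalized pointwise mutual information (NPMI) over pairs of a topic's top words, with co-occurrence statistics taken from a reference corpus. A minimal sketch follows (toy corpus and document-level co-occurrence are illustrative choices).

```python
import itertools
import numpy as np

def npmi_coherence(topic_words, documents, eps=1e-12):
    """Average NPMI over all pairs of a topic's top words.

    NPMI(w_i, w_j) = log( p(w_i, w_j) / (p(w_i) p(w_j)) ) / (-log p(w_i, w_j)),
    with probabilities estimated from document-level co-occurrence counts."""
    docs = [set(d) for d in documents]
    n_docs = len(docs)

    def p(*words):
        return sum(all(w in d for w in words) for d in docs) / n_docs

    scores = []
    for wi, wj in itertools.combinations(topic_words, 2):
        p_ij = p(wi, wj)
        if p_ij == 0:
            scores.append(-1.0)          # common convention: never co-occur -> -1
            continue
        pmi = np.log(p_ij / (p(wi) * p(wj) + eps))
        scores.append(pmi / (-np.log(p_ij) + eps))
    return float(np.mean(scores))

corpus = [["cell", "protein", "dna"], ["dna", "gene", "protein"],
          ["stock", "market", "price"], ["market", "price", "trade"]]
print(npmi_coherence(["dna", "protein", "gene"], corpus))   # coherent topic
print(npmi_coherence(["dna", "market", "gene"], corpus))    # incoherent mix
```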
A decision maker is choosing between an active action (e.g., purchasing a house or investing in a certain stock) and a passive action. The payoff of the active action depends on the buyer's private type as well as an unknown state of nature. An information seller can design experiments to reveal information about the realized state to the decision maker, and would like to maximize profit from selling such information. We characterize, in closed form, the revenue-optimal information selling mechanism for the seller. After eliciting the buyer's type, the optimal mechanism charges the buyer an upfront payment and then simply reveals whether the realized state exceeds a certain threshold. The optimal mechanism features both price discrimination and information discrimination. The special buyer type who is a priori indifferent between the active and passive actions benefits the most from participating in the mechanism.
Annealed importance sampling (AIS) and related algorithms are highly effective tools for marginal likelihood estimation, but are not fully differentiable due to the use of Metropolis-Hastings correction steps. Differentiability is a desirable property as it would admit the possibility of optimizing marginal likelihood as an objective using gradient-based methods. To this end, we propose Differentiable AIS (DAIS), a variant of AIS which ensures differentiability by abandoning the Metropolis-Hastings corrections. As a further advantage, DAIS allows for mini-batch gradients. We provide a detailed convergence analysis for Bayesian linear regression which goes beyond previous analyses by explicitly accounting for the sampler not having reached equilibrium. Using this analysis, we prove that DAIS is consistent in the full-batch setting and provide a sublinear convergence rate. Furthermore, motivated by the problem of learning from large-scale datasets, we study a stochastic variant of DAIS that uses mini-batch gradients. Surprisingly, stochastic DAIS can be arbitrarily bad due to a fundamental incompatibility between the goals of last-iterate convergence to the posterior and elimination of the accumulated stochastic error. This is in stark contrast with other settings such as gradient-based optimization and Langevin dynamics, where the effect of gradient noise can be washed out by taking smaller steps. This indicates that annealing-based marginal likelihood estimation with stochastic gradients may require new ideas.
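A minimal numpy sketch of the idea, assuming a standard normal prior on the regression weights, a geometric annealing path, and uncorrected Langevin transitions; the step size, schedule, and chain count are illustrative, and a practical implementation would be written in an automatic-differentiation framework so the estimate can be differentiated with respect to hyperparameters.

```python
import numpy as np
from scipy.special import logsumexp

def dais_logml(X, y, sigma2=0.5, n_steps=200, n_chains=256, step=1e-3, seed=0):
    """Sketch of AIS without Metropolis-Hastings corrections (uncorrected
    Langevin transitions) for the marginal likelihood of Bayesian linear
    regression with a standard normal prior on the weights.

    Intermediate targets follow the geometric path
        log pi_k(w) = log prior(w) + beta_k * log likelihood(w),
    and the importance weights accumulate the usual AIS increments."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    betas = np.linspace(0.0, 1.0, n_steps + 1)
    w = rng.standard_normal((n_chains, d))         # samples from the prior
    log_w = np.zeros(n_chains)

    def log_lik(w):
        r = y[None, :] - w @ X.T
        return -0.5 * np.sum(r ** 2, axis=1) / sigma2 - 0.5 * n * np.log(2 * np.pi * sigma2)

    def grad_log_target(w, beta):                  # grad of log prior + beta * log lik
        return -w + beta * (y[None, :] - w @ X.T) @ X / sigma2

    for k in range(1, n_steps + 1):
        log_w += (betas[k] - betas[k - 1]) * log_lik(w)      # AIS weight update
        # one uncorrected (hence differentiable) Langevin step targeting pi_k
        w = w + step * grad_log_target(w, betas[k]) \
              + np.sqrt(2 * step) * rng.standard_normal(w.shape)
    return logsumexp(log_w) - np.log(n_chains)

# Toy data; for this Gaussian model the exact log marginal likelihood is
# available in closed form, which makes it a convenient test bed.
rng = np.random.default_rng(1)
X = rng.standard_normal((50, 3))
y = X @ np.array([1.0, -1.0, 0.5]) + np.sqrt(0.5) * rng.standard_normal(50)
print("DAIS-style estimate of log p(y):", dais_logml(X, y))
```

The residual bias of such an estimate comes precisely from the sampler not reaching equilibrium at each annealing step, which is the effect the paper's convergence analysis quantifies.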
In this paper, we study the optimal convergence rate for distributed convex optimization problems in networks. We model the communication restrictions imposed by the network as a set of affine constraints and provide optimal complexity bounds for four different setups, namely when the function $F(\mathbf{x}) \triangleq \sum_{i=1}^{m} f_i(\mathbf{x})$ is (i) strongly convex and smooth, (ii) strongly convex, (iii) smooth, or (iv) just convex. Our results show that Nesterov's accelerated gradient descent on the dual problem can be executed in a distributed manner and attains the same optimal rates as in the centralized version of the problem (up to constant or logarithmic factors), with an additional cost related to the spectral gap of the interaction matrix. Finally, we discuss some extensions of the proposed setup, such as proximal-friendly functions, time-varying graphs, and improvement of the condition numbers.
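As a small illustration of the dual approach, the sketch below runs Nesterov's accelerated gradient ascent on the dual of a consensus-constrained quadratic problem over a ring graph: every dual gradient evaluation only requires multiplications by the graph Laplacian, i.e., neighbor-to-neighbor communication. The objective, graph, and step sizes are illustrative assumptions, not the paper's general setting.

```python
import numpy as np

def decentralized_dual_nesterov(a, W, n_iters=300):
    """Sketch: accelerated gradient ascent on the dual of the consensus problem
        min_X  sum_i (x_i - a_i)^2 / 2   s.t.   W X = 0,
    with W the graph Laplacian (null space = consensus vectors).

    The dual gradient is W x*(lambda), where each x_i*(lambda) is computed
    locally and multiplication by W needs only neighbor communication."""
    m = len(a)
    L = np.linalg.eigvalsh(W).max() ** 2      # smoothness of the dual (each f_i is 1-strongly convex)
    lam, lam_prev = np.zeros(m), np.zeros(m)
    for t in range(1, n_iters + 1):
        mom = (t - 1) / (t + 2)
        z = lam + mom * (lam - lam_prev)      # Nesterov extrapolation
        x = a - W @ z                         # local primal updates x_i*(z)
        lam_prev, lam = lam, z + (1.0 / L) * (W @ x)   # dual ascent step
    return a - W @ lam                        # primal iterates (near consensus)

# Ring graph over m = 6 agents, each holding a private value a_i.
m = 6
W = 2 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)
W[0, -1] = W[-1, 0] = -1                      # Laplacian of the cycle graph
a = np.arange(1.0, m + 1.0)
x = decentralized_dual_nesterov(a, W)
print("agents' estimates:", np.round(x, 4), " true average:", a.mean())
```

For this separable quadratic objective the consensus optimum is the network-wide average of the $a_i$, so the printed estimates converging to that average is the expected behavior.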