In this paper, we focus on the throughput of random access with power-domain non-orthogonal multiple access (NOMA) and derive bounds on it. In particular, we demonstrate that the throughput expression derived in [1] is an upper bound, and we derive a new lower bound in closed form. This closed-form expression makes it possible to find the traffic intensity that maximizes the lower bound, which is shown to be the square root of the number of power levels in NOMA. Furthermore, for a large number of power levels, the expression yields the asymptotic maximum throughput, which grows at a rate proportional to the square root of the number of power levels.
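To make the scaling concrete, suppose the lower bound takes the form $T_L(G) = G e^{-G/\sqrt{K}}$, where $G$ is the traffic intensity and $K$ is the number of power levels. This functional form is assumed here purely for illustration and need not match the paper's exact expression, but maximizing it reproduces both claims:

```latex
% Illustrative maximization, assuming T_L(G) = G e^{-G/\sqrt{K}}.
\[
  \frac{\mathrm{d}}{\mathrm{d}G}\, G e^{-G/\sqrt{K}}
  = e^{-G/\sqrt{K}} \left( 1 - \frac{G}{\sqrt{K}} \right) = 0
  \quad \Longrightarrow \quad
  G^\ast = \sqrt{K},
  \qquad
  T_L(G^\ast) = \frac{\sqrt{K}}{e} = \Theta(\sqrt{K}).
\]
```

Under this assumed form, the optimal traffic intensity is the square root of the number of power levels, and the maximum of the bound grows at the rate $\sqrt{K}$.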
We consider networked sources that generate update messages at a defined rate and investigate the age of that information at the receiver. Typical applications arise in cyber-physical systems that depend on timely sensor updates. We phrase the age of information in the min-plus algebra of the network calculus. This facilitates a variety of models, including wireless channels and schedulers with random cross-traffic, as well as sources with periodic or random updates. We show how the age of information depends on the network service, where, e.g., outages of a wireless channel cause delays. Further, our analytical expressions reveal two regimes depending on the update rate, in which the age of information is dominated either by congestive delays or by idle waiting. We find that the optimal update rate strikes a balance between these two effects.
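As a minimal numerical illustration of the two regimes, the sketch below evaluates the classical average age-of-information formula for an M/M/1 queue, a standard textbook model assumed here in place of the network-calculus bounds derived in the paper, and locates the update rate that balances congestive delay against idle waiting.

```python
import numpy as np

mu = 1.0                                # channel service rate (normalized)
lam = np.linspace(0.01, 0.99, 99)       # candidate update rates, lam < mu
rho = lam / mu

# Classical M/M/1 average age of information (Kaul et al., 2012):
# the 1/rho term dominates for rare updates (idle waiting), while the
# rho^2/(1 - rho) term dominates near saturation (congestive delay).
age = (1.0 / mu) * (1.0 + 1.0 / rho + rho**2 / (1.0 - rho))

i = np.argmin(age)
print(f"optimal update rate ~ {lam[i]:.2f}, minimum average age ~ {age[i]:.2f}")
```

For this model the optimum sits near $\lambda^\ast \approx 0.53\,\mu$, well below saturation, matching the qualitative conclusion above.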
A consistent omnibus goodness-of-fit test for count distributions is proposed. The test is of wide applicability, since any count distribution indexed by a $k$-variate parameter with finite moment of order $2k$ can be considered under the null hypothesis. The test statistic is based on the probability generating function and, in addition to having a rather simple form, is asymptotically normally distributed, allowing a straightforward implementation of the test. The finite-sample properties of the test are investigated by means of an extensive simulation study, where the empirical power is evaluated against some common alternative distributions and against contiguous alternatives. The empirical significance level of the test is very close to the nominal one even for moderate sample sizes, and the empirical power is rather satisfactory, also in comparison with that of the chi-squared test.
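The following sketch conveys the flavor of a PGF-based test: it contrasts the empirical probability generating function with the PGF of the fitted null model through an integrated squared distance. Both the statistic and the bootstrap calibration are simplifications assumed here; the paper's statistic has a different form and is calibrated through its asymptotic normal distribution.

```python
import numpy as np

rng = np.random.default_rng(0)

def pgf_stat(x, t=np.linspace(0.0, 1.0, 101)):
    """Integrated squared distance between the empirical PGF and the
    fitted Poisson PGF (a simplified PGF-based statistic)."""
    lam = x.mean()                                 # Poisson MLE under the null
    emp = (t[None, :] ** x[:, None]).mean(axis=0)  # empirical PGF on the grid t
    fit = np.exp(lam * (t - 1.0))                  # Poisson PGF on the grid t
    return len(x) * np.sum((emp - fit) ** 2) * (t[1] - t[0])

x = rng.poisson(3.0, size=200)                     # data drawn under the null
s_obs = pgf_stat(x)

# Parametric bootstrap p-value (the paper instead uses asymptotic normality).
boot = [pgf_stat(rng.poisson(x.mean(), size=len(x))) for _ in range(500)]
print(f"statistic = {s_obs:.4f}, p-value ~ {np.mean(np.array(boot) >= s_obs):.3f}")
```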
For an ill-posed inverse problem, particularly one with incomplete and limited measurement data, regularization is an essential tool for stabilizing the inversion. Among the various forms of regularization, the $\ell_p$ penalty term provides a family of regularizers whose characteristics depend on the value of $p$. When there are no explicit features from which to determine $p$, a spatially varying, inhomogeneous $p$ can be incorporated to apply different regularization characteristics that change over the domain. This study proposes a strategy for designing the exponent $p$ when the first and second derivatives of the true signal are not available, as in the case of indirect and limited measurement data. The proposed method extracts statistical, patch-wise information using multiple reconstructions from a single measurement, which assists in classifying each patch into predefined features with corresponding $p$ values. We validate the robustness and effectiveness of the proposed approach through a suite of numerical tests in 1D and 2D, including the recovery of a sea-ice image from partial Fourier measurement data. The numerical tests show that the resulting exponent distribution is insensitive to the choice of the multiple reconstructions.
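A minimal sketch of the patch-wise design idea follows. The classification rule is an assumption made here for illustration (inter-reconstruction variability as the feature, with a hypothetical threshold `thresh`); the paper's actual feature extraction and classifier may differ.

```python
import numpy as np

def design_p(recons, patch=8, thresh=0.05):
    """Assign an exponent p to each patch from the variability across
    multiple reconstructions of the same measurement: high variability is
    treated as an edge/sparse feature (p = 1), low variability as a
    smooth feature (p = 2). Illustrative rule only."""
    std = recons.std(axis=0)           # pixelwise spread of the reconstructions
    H, W = std.shape
    p = np.empty((H, W))
    for i in range(0, H, patch):
        for j in range(0, W, patch):
            blk = std[i:i + patch, j:j + patch]
            p[i:i + patch, j:j + patch] = 1.0 if blk.mean() > thresh else 2.0
    return p

# Toy usage: five reconstructions of a 32x32 image from one measurement.
recons = np.random.default_rng(1).normal(0.0, 0.1, size=(5, 32, 32))
print(design_p(recons).mean())
```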
We consider the achievable rate maximization problem for intelligent reflecting surface (IRS) assisted multiple-input multiple-output systems in an underlay spectrum sharing scenario, subject to interference power constraints at the primary users. The formulated optimization problem is challenging to solve due to its non-convexity and the coupling of the design variables in the constraints. Different from existing works that are mostly based on alternating optimization (AO), we propose a penalty dual decomposition based gradient projection (PDDGP) algorithm to solve this problem. We also provide a convergence proof and a complexity analysis for the proposed algorithm. We benchmark the proposed algorithm against two known solutions, namely a minimum mean-square error based AO algorithm and an inner approximation method with block coordinate descent. Specifically, the complexity of the proposed algorithm is $O(N_I^2)$, while that of the two benchmark methods is $O(N_I^3)$, where $N_I$ is the number of IRS elements. Moreover, numerical results show that the proposed PDDGP algorithm yields a considerably higher achievable rate than the benchmark solutions.
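The computational core of a gradient projection method in this setting can be sketched as a gradient ascent step followed by an $O(N_I)$ element-wise projection onto the unit-modulus constraint. The toy single-antenna objective below stands in for the paper's MIMO achievable rate and is assumed only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64                                                 # number of IRS elements
h_r = rng.normal(size=N) + 1j * rng.normal(size=N)     # transmitter-to-IRS link
g = rng.normal(size=N) + 1j * rng.normal(size=N)       # IRS-to-receiver link
h_d = rng.normal() + 1j * rng.normal()                 # direct link
c = g * h_r                                            # cascaded channel gains

theta = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, N))  # initial phase shifts
step = 0.01
for _ in range(200):
    u = h_d + c @ theta               # effective channel for the current phases
    grad = np.conj(c) * u             # Wirtinger gradient of |u|^2 w.r.t. theta
    theta = theta + step * grad       # ascent step
    theta = theta / np.abs(theta)     # O(N) projection onto unit modulus

print(f"effective channel gain: {np.abs(h_d + c @ theta)**2:.2f}")
```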
Multirate time integration methods apply different step sizes to resolve different components of the system based on their local activity levels. This local selection of step sizes allows increased computational efficiency while achieving the desired solution accuracy. While the multirate idea is elegant and has been around for decades, multirate methods are not yet widely used in applications. This is due, in part, to the difficulties raised by the construction of high-order multirate schemes. Seeking to overcome these challenges, this work focuses on the design of practical high-order multirate methods using the theoretical framework of multirate generalized additive Runge-Kutta (MrGARK) methods, which provides generic order conditions as well as linear and nonlinear stability analyses. A set of design criteria for practical multirate methods is defined herein: method coefficients should be generic in the step-size ratio, yet should not depend strongly on this ratio; unnecessary coupling between the fast and the slow components should be avoided; and the step-size controllers should adjust both the micro- and the macro-steps. Using these criteria, we develop MrGARK schemes of up to order four that are explicit-explicit (both the fast and slow components are treated explicitly), implicit-explicit (implicit in the fast component and explicit in the slow one), and explicit-implicit (explicit in the fast component and implicit in the slow one). Numerical experiments illustrate the performance of these new schemes.
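The multirate idea itself can be conveyed with a first-order sketch: freeze the slow tendency over a macro-step $H$ while the fast component takes $m$ micro-steps $h = H/m$. This is far simpler than a fourth-order MrGARK scheme and is shown only to fix ideas.

```python
import numpy as np

def multirate_euler(f_fast, f_slow, y0, t_end, H, m):
    """First-order multirate forward Euler: the slow tendency is evaluated
    once per macro-step H and held frozen while the fast tendency is
    advanced with m micro-steps of size h = H / m."""
    y, t, h = np.array(y0, dtype=float), 0.0, H / m
    while t < t_end - 1e-12:
        slow = f_slow(t, y)            # one slow evaluation per macro-step
        for k in range(m):
            y = y + h * (f_fast(t + k * h, y) + slow)
        t += H
    return y

# Toy fast-slow system: rapid relaxation of y[0] toward cos(y[1]), slow drift in y[1].
f_fast = lambda t, y: np.array([-50.0 * (y[0] - np.cos(y[1])), 0.0])
f_slow = lambda t, y: np.array([0.0, 1.0])
print(multirate_euler(f_fast, f_slow, [1.0, 0.0], 1.0, H=0.02, m=20))
```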
The event-driven and sparse nature of communication between spiking neurons in the brain holds great promise for flexible and energy-efficient AI. Recent advances in learning algorithms have demonstrated that recurrent networks of spiking neurons can be effectively trained to achieve competitive performance compared to standard recurrent neural networks. Still, as these learning algorithms use error backpropagation through time (BPTT), they suffer from high memory requirements, are slow to train, and are incompatible with online learning. This limits their application to relatively small networks and to limited temporal sequence lengths. Online approximations to BPTT with lower computational and memory complexity have been proposed (e-prop, OSTL), but in practice they also suffer from memory limitations and, as approximations, do not outperform standard BPTT training. Here, we show how a recently developed alternative to BPTT, Forward Propagation Through Time (FPTT), can be applied to spiking neural networks (SNNs). Unlike BPTT, FPTT attempts to minimize an ongoing, dynamically regularized risk based on the loss. As a result, FPTT can be computed in an online fashion and has fixed complexity with respect to the sequence length. When combined with a novel dynamic spiking neuron model, the Liquid Time-Constant neuron, we show that SNNs trained with FPTT outperform online BPTT approximations and approach or exceed offline BPTT accuracy on temporal classification tasks. This approach thus makes it feasible to train SNNs in a memory-friendly online fashion on long sequences and to scale SNNs up to novel and complex neural architectures.
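A simplified FPTT-style update can be sketched as follows: at every time step, descend the instantaneous loss plus a dynamic proximal penalty toward a running average of the parameters, so that no unrolling over the full sequence is required. The sketch uses a generic loss rather than a spiking network and omits the second-order correction to the running average that appears in the original formulation.

```python
import numpy as np

def fptt_train(grad_loss, w, steps, lr=0.05, alpha=1.0):
    """Simplified FPTT-style online training loop: the instantaneous
    gradient is augmented with a proximal pull toward a running average
    w_bar (the dynamic regularizer), and w_bar tracks the iterates."""
    w_bar = w.copy()
    for t in range(steps):
        g = grad_loss(t, w)                     # gradient of the loss at step t
        w = w - lr * (g + alpha * (w - w_bar))  # dynamically regularized step
        w_bar = 0.5 * (w_bar + w)               # running-average update
    return w

# Toy usage: track a slowly drifting target from one noisy observation per step.
rng = np.random.default_rng(0)
grad = lambda t, w: 2.0 * (w - (np.sin(0.1 * t) + 0.1 * rng.normal()))
print(fptt_train(grad, np.zeros(1), steps=500))
```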
Cell-Free Massive multiple-input multiple-output (MIMO) and reconfigurable intelligent surfaces (RISs) are two promising technologies for beyond-5G networks. This paper considers Cell-Free Massive MIMO systems assisted by an RIS for enhancing the system performance in the presence of spatial correlation among the engineered scattering elements of the RIS. Distributed maximum-ratio processing is considered at the access points (APs). We introduce an aggregated channel estimation approach that provides sufficient information for data processing, with the main benefit of reducing the overhead required for channel estimation. The considered system is studied by means of an asymptotic analysis that lets the number of APs and/or the number of RIS elements grow large. A lower bound on the channel capacity is obtained for a finite number of APs and engineered scattering elements of the RIS, and closed-form expressions for the uplink and downlink ergodic net throughput are formulated in terms of the channel statistics only. Based on the obtained analytical frameworks, we unveil the impact of the channel correlation, the number of RIS elements, and pilot contamination on the net throughput of each user. In addition, a simple control scheme for optimizing the configuration of the engineered scattering elements of the RIS is proposed, which is shown to increase the channel estimation quality and, hence, the system performance. Numerical results demonstrate the effectiveness of the proposed system design and performance analysis. In particular, the performance benefits of using RISs in Cell-Free Massive MIMO systems are confirmed, especially if the direct links between the APs and the users are of insufficient quality with high probability.
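The aggregated-channel idea can be illustrated in the simplest single-antenna case: the AP estimates one scalar per user, namely the direct link plus the cascaded RIS link under the current configuration, rather than the $N$ individual RIS links. The pilot length and noise level below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64                                                # RIS elements
h_d = 0.1 * (rng.normal() + 1j * rng.normal())        # direct user-to-AP link
h_1 = rng.normal(size=N) + 1j * rng.normal(size=N)    # user-to-RIS link
h_2 = rng.normal(size=N) + 1j * rng.normal(size=N)    # RIS-to-AP link
phi = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, N))   # RIS configuration

u = h_d + (h_2 * h_1) @ phi     # aggregated channel seen by the AP

tau = 10                        # pilot length (illustrative)
noise = 0.1 * (rng.normal(size=tau) + 1j * rng.normal(size=tau)) / np.sqrt(2.0)
y = u * np.ones(tau) + noise    # received pilot observations
u_hat = y.mean()                # least-squares estimate of the aggregated channel
print(f"estimation error: {abs(u - u_hat):.4f}")
```

Estimating $u$ directly keeps the pilot overhead independent of $N$, which is the main benefit highlighted above.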
This paper studies an intelligent reflecting surface (IRS)-aided multiple-input multiple-output (MIMO) full-duplex (FD) wireless-powered communication network (WPCN), where a hybrid access point (HAP) operating in FD broadcasts energy signals to multiple devices for their energy harvesting (EH) in the downlink (DL) and meanwhile receives information signals from the devices in the uplink (UL) with the help of an IRS. Taking into account the practical finite self-interference (SI) and the non-linear EH model, we formulate a weighted sum-throughput maximization problem by jointly optimizing the DL/UL time allocation, the precoding matrices at the devices, the transmit covariance matrices at the HAP, and the phase shifts at the IRS. Since the resulting optimization problem is non-convex, there are no standard methods to solve it optimally in general. To tackle this challenge, we first propose an element-wise (EW) based algorithm, where each IRS phase shift is alternately optimized in an iterative manner. To reduce the computational complexity, a minimum mean-square error (MMSE) based algorithm is then proposed, in which we transform the original problem into an equivalent form based on the MMSE method, which facilitates the design of an efficient iterative algorithm. In particular, the IRS phase shift optimization problem is recast as a second-order cone program (SOCP), where all the IRS phase shifts are optimized simultaneously. For comparison, we also study two suboptimal IRS beamforming configurations in simulations, namely partially dynamic IRS beamforming (PDBF) and static IRS beamforming (SBF), which strike a balance between system performance and practical complexity.
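The element-wise (EW) idea behind the first algorithm can be sketched as a coordinate update over a discrete phase grid: each IRS element is re-optimized with all the others held fixed. The toy single-stream objective below is assumed in place of the weighted sum throughput.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 32
grid = np.exp(1j * 2.0 * np.pi * np.arange(16) / 16)  # 16 candidate phases
c = rng.normal(size=N) + 1j * rng.normal(size=N)      # cascaded channel gains
h_d = rng.normal() + 1j * rng.normal()                # direct link

theta = np.ones(N, dtype=complex)
for sweep in range(5):                                # alternating EW sweeps
    for n in range(N):
        rest = h_d + c @ theta - c[n] * theta[n]      # contribution of the others
        # choose the grid phase maximizing the toy objective |rest + c_n * t|^2
        theta[n] = grid[np.argmax(np.abs(rest + c[n] * grid) ** 2)]

print(f"objective after EW sweeps: {np.abs(h_d + c @ theta)**2:.2f}")
```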
We present a learning-based approach for removing unwanted obstructions, such as window reflections, fence occlusions, or raindrops, from a short sequence of images captured by a moving camera. Our method leverages the motion differences between the background and the obstructing elements to recover both layers. Specifically, we alternate between estimating dense optical flow fields of the two layers and reconstructing each layer from the flow-warped images via a deep convolutional neural network. The learning-based layer reconstruction allows us to accommodate potential errors in the flow estimation and brittle assumptions such as brightness consistency. We show that training on synthetically generated data transfers well to real images. Our results on numerous challenging scenarios of reflection and fence removal demonstrate the effectiveness of the proposed method.
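The alternation can be caricatured with classical stand-ins: an exhaustive integer-shift search replaces the dense optical flow module, and a temporal median replaces the learned layer reconstruction. This sketches the structure of the method only, not the paper's networks.

```python
import numpy as np

def align_shift(ref, img, max_s=5):
    """Estimate an integer translation of img relative to ref by exhaustive
    correlation search, a crude stand-in for dense optical flow."""
    best, best_s = -np.inf, (0, 0)
    for dy in range(-max_s, max_s + 1):
        for dx in range(-max_s, max_s + 1):
            score = (ref * np.roll(img, (dy, dx), axis=(0, 1))).sum()
            if score > best:
                best, best_s = score, (dy, dx)
    return best_s

def recover_background(frames):
    """Warp every frame toward the middle one with the estimated background
    motion, then fuse by a temporal median (stand-in for the learned
    reconstruction); the differently moving obstruction is averaged out."""
    ref = frames[len(frames) // 2]
    warped = [np.roll(f, align_shift(ref, f), axis=(0, 1)) for f in frames]
    return np.median(warped, axis=0)

frames = np.random.default_rng(0).random((5, 32, 32))
print(recover_background(frames).shape)
```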
We consider the exploration-exploitation trade-off in reinforcement learning and show that an agent imbued with a risk-seeking utility function is able to explore efficiently, as measured by regret. The parameter that controls how risk-seeking the agent is can be optimized exactly, or annealed according to a schedule. We call the resulting algorithm K-learning and show that the corresponding K-values are optimistic for the expected Q-values at each state-action pair. The K-values induce a natural Boltzmann exploration policy for which the `temperature' parameter is equal to the risk-seeking parameter. This policy achieves an expected regret bound of $\tilde O(L^{3/2} \sqrt{S A T})$, where $L$ is the time horizon, $S$ is the number of states, $A$ is the number of actions, and $T$ is the total number of elapsed time-steps. This bound is only a factor of $L$ larger than the established lower bound. K-learning can be interpreted as mirror descent in the policy space, and it is similar to other well-known methods in the literature, including Q-learning, soft Q-learning, and maximum entropy policy gradient; it is also closely related to optimism and count-based exploration methods. K-learning is simple to implement, as it only requires adding a bonus to the reward at each state-action pair and then solving a Bellman equation. We conclude with a numerical example demonstrating that K-learning is competitive with other state-of-the-art algorithms in practice.
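A tabular, known-model sketch of the construction: add an exploration bonus to the reward and solve a soft Bellman equation whose log-sum-exp temperature is the risk-seeking parameter; the resulting Boltzmann policy is the exploration policy. The constant bonus and the discount factor are simplifications assumed here, whereas the paper works in a finite-horizon setting with a derived, state-action dependent bonus.

```python
import numpy as np

def k_learning_values(R, P, tau=1.0, bonus=0.1, gamma=0.9, iters=200):
    """Solve a soft (log-sum-exp) Bellman equation for the K-values with a
    fixed exploration bonus added to the reward; tau plays the role of the
    risk-seeking / Boltzmann temperature parameter."""
    S, A = R.shape
    K = np.zeros((S, A))
    for _ in range(iters):
        V = tau * np.log(np.exp(K / tau).sum(axis=1))  # soft state values
        K = R + bonus + gamma * (P @ V)                # soft Bellman backup
    policy = np.exp(K / tau)
    return K, policy / policy.sum(axis=1, keepdims=True)

S, A = 3, 2
rng = np.random.default_rng(0)
R = rng.random((S, A))                       # rewards
P = rng.dirichlet(np.ones(S), size=(S, A))   # P[s, a] is a distribution over s'
K, pi = k_learning_values(R, P)
print(pi)                                    # Boltzmann exploration policy
```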