Reconfigurable intelligent surfaces (RISs) can play a crucial role in next-generation communication systems. However, designing the RIS phases according to the instantaneous channel state information (CSI) can be challenging in practice due to the short coherence time of the channel. In this regard, we propose a novel algorithm based on the channel statistics of massive multiple-input multiple-output (MIMO) systems rather than the instantaneous CSI. The beamforming at the base station (BS), the power allocation of the users, and the phase shifts at the RIS elements are optimized to maximize the minimum signal-to-interference-plus-noise ratio (SINR), guaranteeing fair operation among the users. In particular, we design the RIS phases by leveraging the asymptotic deterministic equivalent of the minimum SINR, which depends only on the channel statistics. This significantly reduces both the computational complexity and the amount of control data exchanged between the BS and the RIS for updating the phases. The setup is also useful for electromagnetic field (EMF)-aware systems with constraints on the users' maximum exposure to EMF. Numerical results show that the proposed algorithms achieve more than a 100% gain in minimum SINR compared with a system using random RIS phase shifts, for a setup with 40 RIS elements, 20 BS antennas, and 10 users.
We analyze the influence of a reconfigurable intelligent surface (RIS) on the channel eigenvalues in a high signal-to-noise ratio (SNR) scenario. This allows us to connect specific channel properties with the rank-improvement capabilities of the RIS. In particular, fundamental limits due to a possible line-of-sight (LOS) setup between the base station (BS) and the RIS are derived. Furthermore, dirty paper coding (DPC) based schemes are compared with linear precoding in this scenario, and it is shown that under certain channel conditions, the performance gap between DPC and linear precoding can be made arbitrarily small by the RIS.
The relevance vector machine (RVM) is a supervised learning algorithm that extends the support vector machine (SVM) with a Bayesian sparsity model. In contrast to the regression setting, RVM classification is difficult because there is no closed-form expression for the posterior of the weight parameters. The original RVM classification algorithm uses Newton's method to find the mode of the weight posterior and then approximates the posterior by a Gaussian distribution centred at that mode (Laplace's method). This works, but it merely applies frequentist optimization within a Bayesian framework. This paper proposes a generic Bayesian approach for RVM classification. We conjecture that our algorithm achieves convergent estimates of the quantities of interest, in contrast to the non-convergent estimates of the original RVM classification algorithm. Furthermore, a fully Bayesian approach with a hierarchical hyperprior structure for RVM classification is proposed, which improves classification performance, especially on imbalanced data. In numerical studies, our proposed algorithms obtain high classification accuracy, and the fully Bayesian hierarchical hyperprior method outperforms the generic one on imbalanced data.
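The Laplace step that the original RVM classifier relies on can be sketched for plain Bayesian logistic regression (a simplification: the actual RVM places per-weight hyperpriors rather than the single Gaussian prior assumed here; `alpha` and the helper name `laplace_logistic` are illustrative):

```python
import numpy as np

def laplace_logistic(X, y, alpha=1.0, iters=50):
    """Laplace approximation for Bayesian logistic regression: find the
    posterior mode of the weights by Newton's method, then approximate the
    posterior by a Gaussian centred at the mode whose covariance is the
    inverse negative Hessian of the log posterior."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ w))                     # sigmoid probabilities
        grad = X.T @ (y - p) - alpha * w                     # gradient of log posterior
        H = -(X.T * (p * (1 - p))) @ X - alpha * np.eye(d)   # Hessian (negative definite)
        w = w - np.linalg.solve(H, grad)                     # Newton update
    cov = np.linalg.inv(-H)                                  # Gaussian approximation covariance
    return w, cov
```

A generic Bayesian alternative, as the abstract suggests, would instead sample or integrate over this posterior rather than collapsing it to a single Gaussian at the mode.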
Quantized constant envelope (QCE) precoding, a new transmission scheme in which only discrete QCE transmit signals are allowed at each antenna, has attracted growing research interest due to its ability to reduce the hardware cost and energy consumption of massive multiple-input multiple-output (MIMO) systems. However, the discrete nature of the QCE transmit signals greatly complicates the precoding design. In this paper, we consider the QCE precoding problem for a massive MIMO system with phase shift keying (PSK) modulation and develop an efficient approach for solving the constructive interference (CI) based problem formulation. Our approach is based on a custom-designed (continuous) penalty model that is equivalent to the original discrete problem. Specifically, the penalty model relaxes the discrete QCE constraint and penalizes it in the objective with a negative $\ell_2$-norm term, which leads to a non-smooth non-convex optimization problem. To tackle it, we resort to our recently proposed alternating optimization (AO) algorithm. We show that, when applied to our problem, the AO algorithm admits closed-form updates at each iteration and thus can be implemented efficiently. Simulation results demonstrate the superiority of the proposed approach over existing algorithms.
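The effect of the negative $\ell_2$-norm term can be illustrated with a hedged one-dimensional toy (the loss `f` and the penalty weights `lam` below are assumptions for illustration, not the paper's CI objective):

```python
import numpy as np

# Toy illustration of the negative-l2 penalty idea: minimize a smooth loss
# f(x) minus lam * x**2 over the box [-1, 1]. Once lam is large enough, the
# minimizer is pushed onto the boundary {-1, +1}, mimicking a discrete
# constant-envelope constraint without enumerating it.
f = lambda x: (x - 0.3) ** 2          # toy smooth loss with interior minimum
grid = np.linspace(-1.0, 1.0, 2001)
minimizers = [round(float(grid[np.argmin(f(grid) - lam * grid ** 2)]), 3)
              for lam in (0.0, 0.5, 2.0)]
print(minimizers)  # the minimizer drifts toward, then onto, the boundary
```

In the actual scheme the penalized variables are complex transmit signals and the boundary points are the discrete QCE phase states, but the mechanism is the same.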
The lasso is among the most widely used sparse regression and feature selection methods. One reason for its popularity is the speed at which the underlying optimization problem can be solved. Sorted L-One Penalized Estimation (SLOPE) is a generalization of the lasso with appealing statistical properties. In spite of this, the method has not yet reached widespread use. A major reason is that current software packages for fitting SLOPE rely on algorithms that perform poorly in high dimensions. To tackle this issue, we propose a new, fast algorithm for solving the SLOPE optimization problem, which combines proximal gradient descent and proximal coordinate descent steps. We provide new results on the directional derivative of the SLOPE penalty and its related SLOPE thresholding operator, and we provide convergence guarantees for the proposed solver. In extensive benchmarks on simulated and real data, we show that our method outperforms a long list of competing algorithms.
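The building block any proximal SLOPE solver evaluates repeatedly is the proximal operator of the sorted-L1 penalty. A generic sketch using the standard stack-based pool-adjacent-violators routine (not the authors' solver, which combines this with coordinate descent steps):

```python
import numpy as np

def prox_sorted_l1(v, lam):
    """Prox of the sorted-L1 (SLOPE) penalty:
    argmin_x 0.5 * ||x - v||^2 + sum_i lam_i * |x|_(i),
    where lam is non-increasing and |x|_(i) are sorted absolute values."""
    sign = np.sign(v)
    order = np.argsort(-np.abs(v))          # indices of |v| in decreasing order
    z = np.abs(v)[order] - lam              # shifted magnitudes in sorted order
    # Pool adjacent violators: enforce non-increasing block averages.
    sums, counts = [], []
    for zi in z:
        sums.append(zi); counts.append(1)
        while len(sums) > 1 and sums[-2] / counts[-2] <= sums[-1] / counts[-1]:
            sums[-2] += sums[-1]; counts[-2] += counts[-1]
            sums.pop(); counts.pop()
    x_sorted = np.concatenate(
        [np.full(c, max(s / c, 0.0)) for s, c in zip(sums, counts)])
    x = np.empty_like(x_sorted)
    x[order] = x_sorted                     # undo the sort
    return sign * x
```

With all `lam_i` equal, this reduces to ordinary soft-thresholding, recovering the lasso prox as a special case.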
The high dimensionality of hyperspectral images, which consist of numerous spectral bands, often poses a significant computational challenge for image processing. Spectral band selection is therefore an essential step for removing irrelevant, noisy, and redundant bands, and consequently for increasing classification accuracy. However, identifying the useful bands among hundreds or even thousands of correlated bands is a nontrivial task. This paper aims at identifying a small set of highly discriminative bands to improve computational speed and prediction accuracy. We propose a new strategy based on joint mutual information that measures the statistical dependence and correlation between the selected bands and evaluates the relative utility of each band for classification. The proposed filter approach is compared with effective reproduced filters based on mutual information. Simulation results on the AVIRIS 92AV3C hyperspectral image (HSI) using an SVM classifier show that the proposed algorithm outperforms the reproduced filter strategies. Keywords: hyperspectral images, classification, band selection, joint mutual information, dimensionality reduction, correlation, SVM.
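Greedy band selection with a joint-mutual-information score can be sketched for discretized bands (the pairing trick `* 100` assumes band values below 100, and `mutual_info`/`jmi_select` are illustrative helper names, not the authors' code):

```python
import numpy as np
from itertools import product

def mutual_info(x, y):
    """Discrete mutual information I(X; Y) from empirical joint frequencies."""
    mi = 0.0
    for xv, yv in product(np.unique(x), np.unique(y)):
        pxy = np.mean((x == xv) & (y == yv))
        if pxy > 0:
            mi += pxy * np.log(pxy / (np.mean(x == xv) * np.mean(y == yv)))
    return mi

def jmi_select(bands, labels, k):
    """Greedy selection with a joint-mutual-information criterion:
    score(f) = sum over already-selected s of I((f, s); Y)."""
    # Seed with the single most informative band.
    selected = [max(range(bands.shape[1]),
                    key=lambda j: mutual_info(bands[:, j], labels))]
    while len(selected) < k:
        rest = [j for j in range(bands.shape[1]) if j not in selected]
        def score(j):
            # Encode the pair (band j, band s) as one discrete variable.
            return sum(mutual_info(bands[:, j] * 100 + bands[:, s], labels)
                       for s in selected)
        selected.append(max(rest, key=score))
    return selected
```

Because each candidate is scored jointly with every band already chosen, a band that is individually informative but redundant with the current set receives little additional credit.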
The long-range and low-energy-consumption requirements of Internet of Things (IoT) applications have led to a new class of wireless communication technology known as Low Power Wide Area Networks (LPWANs). In recent years, the Long Range (LoRa) protocol has gained considerable attention as one of the most promising LPWAN technologies. Choosing the right combination of transmission parameters is a major challenge in LoRa networks. In LoRa, an Adaptive Data Rate (ADR) mechanism configures each End Device's (ED) transmission parameters with the goal of improving performance metrics. In this paper, we propose a link-based ADR approach that configures the transmission parameters of EDs without taking into account the history of the last received packets, resulting in an approach with relatively low space complexity. We present four different scenarios for assessing performance, including one with mobile EDs. Our simulation results show that in a mobile scenario with high channel noise, the Packet Delivery Ratio (PDR) of our proposed algorithm is 2.8 times that of the original ADR and 1.35 times that of other relevant algorithms.
Especially when facing reliability data with limited information (e.g., a small number of failures), there are strong motivations for using Bayesian inference methods. These include the option to use information from physics-of-failure or previous experience with a failure mode in a particular material to specify an informative prior distribution. Another advantage is the ability to make statistical inferences without having to rely on specious (when the number of failures is small) asymptotic theory needed to justify non-Bayesian methods. Users of non-Bayesian methods are faced with multiple methods of constructing uncertainty intervals (Wald, likelihood, and various bootstrap methods) that can give substantially different answers when there is little information in the data. For Bayesian inference, there is only one method of constructing equal-tail credible intervals, but it is necessary to provide a prior distribution to fully specify the model. Much work has been done to find default prior distributions that will provide inference methods with good (and in some cases exact) frequentist coverage properties. This paper reviews some of this work and provides, evaluates, and illustrates principled extensions and adaptations of these methods to the practical realities of reliability data (e.g., non-trivial censoring).
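The single equal-tail construction the passage refers to amounts to taking posterior quantiles. A minimal sketch, assuming lognormal draws stand in for an MCMC sample of, say, a Weibull scale parameter:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical posterior draws for a positive reliability parameter.
draws = rng.lognormal(mean=4.0, sigma=0.3, size=100_000)
# 95% equal-tail credible interval: the 2.5% and 97.5% posterior quantiles.
lower, upper = np.quantile(draws, [0.025, 0.975])
```

Unlike the Wald, likelihood, and bootstrap constructions mentioned above, this recipe is the same regardless of the model, which is the uniqueness the abstract highlights.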
Adults with mild-to-moderate hearing loss can use over-the-counter hearing aids to treat their hearing loss at a fraction of traditional hearing care costs. These products incorporate self-fitting methods that allow end-users to configure their hearing aids without the help of an audiologist. A self-fitting method helps users configure the gain-frequency responses that control the amplification for each frequency band of the incoming sound. This paper considers how to design effective self-fitting methods and whether we may evaluate certain aspects of their design without resorting to expensive user studies. Most existing fitting methods provide various user interfaces to allow users to select a configuration from a predetermined set of presets. We propose a novel metric for evaluating the performance of preset-based approaches by computing their population coverage. The population coverage estimates the fraction of users for which it is possible to find a configuration they prefer. A unique aspect of our approach is a probabilistic model that captures how a user's unique preferences differ from other users with similar hearing loss. Next, we develop methods for determining presets to maximize population coverage. Exploratory results demonstrate that the proposed algorithms can effectively select a small number of presets that provide higher population coverage than clustering-based approaches. Moreover, we may use our algorithms to configure the number of increments for slider-based methods.
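A preset coverage estimate of this flavour can be sketched as follows (the within-tolerance rule, the per-band dB representation, and the `population_coverage` helper are assumptions for illustration, not the paper's probabilistic preference model):

```python
import numpy as np

def population_coverage(presets, user_targets, tol=5.0):
    """Hypothetical coverage estimate: the fraction of users whose preferred
    gain-frequency response (dB per band) is within `tol` dB of some preset
    in every band. presets: (m, d) array; user_targets: (n, d) array."""
    # Worst-band deviation of every user from every preset: shape (n, m).
    dists = np.max(np.abs(user_targets[:, None, :] - presets[None, :, :]), axis=2)
    # A user is covered if at least one preset is acceptable in all bands.
    return float(np.mean(dists.min(axis=1) <= tol))
```

Maximizing this quantity over the choice of presets is then a set-selection problem, which is what the proposed algorithms address.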
Feedforward neural networks (FNNs) can be viewed as non-linear regression models, where covariates enter the model through a combination of weighted summations and non-linear functions. Although these models have some similarities to the models typically used in statistical modelling, the majority of neural network research has been conducted outside of the field of statistics. This has resulted in a lack of statistically-based methodology, and, in particular, there has been little emphasis on model parsimony. Determining the input layer structure is analogous to variable selection, while the structure of the hidden layer relates to model complexity. In practice, neural network model selection is often carried out by comparing models using out-of-sample performance. In contrast, the construction of an associated likelihood function opens the door to information-criteria-based variable and architecture selection. A novel model selection method, which performs both input- and hidden-node selection, is proposed using the Bayesian information criterion (BIC) for FNNs. The choice of BIC over out-of-sample performance as the model selection objective function leads to an increased probability of recovering the true model, while parsimoniously achieving favourable out-of-sample performance. Simulation studies are used to evaluate and justify the proposed method, and applications to real data are investigated.
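The BIC comparison underlying such a selection follows directly from its definition (the two architectures and their likelihood values below are hypothetical):

```python
import numpy as np

def bic(log_likelihood, n_params, n_obs):
    """Bayesian information criterion: k * ln(n) - 2 * ln(L); lower is better."""
    return n_params * np.log(n_obs) - 2.0 * log_likelihood

# Hypothetical comparison of two fitted FNN architectures on n = 1000 points:
# a larger network with a slightly better fit versus a smaller one. The
# ln(n)-scaled penalty on parameter count favours the parsimonious model here.
small = bic(log_likelihood=-520.0, n_params=25, n_obs=1000)
large = bic(log_likelihood=-515.0, n_params=200, n_obs=1000)
print(small < large)  # prints True: the smaller network wins under BIC
```

Selecting by out-of-sample loss instead would ignore the parameter-count penalty entirely, which is the trade-off the abstract argues favours BIC for recovering the true model.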
Class Incremental Learning (CIL) aims at learning a multi-class classifier in a phase-by-phase manner, in which only data from a subset of the classes are provided at each phase. Previous works mainly focus on mitigating forgetting in phases after the initial one. However, we find that improving CIL at its initial phase is also a promising direction. Specifically, we experimentally show that directly encouraging the CIL learner at the initial phase to output representations similar to those of a model jointly trained on all classes can greatly boost CIL performance. Motivated by this, we study the difference between a na\"ively-trained initial-phase model and the oracle model. Specifically, since one major difference between these two models is the number of training classes, we investigate how this difference affects the model representations. We find that, with fewer training classes, the data representations of each class lie in a long and narrow region; with more training classes, the representations of each class scatter more uniformly. Inspired by this observation, we propose Class-wise Decorrelation (CwD), which effectively regularizes the representations of each class to scatter more uniformly, thus mimicking the model jointly trained on all classes (i.e., the oracle model). Our CwD is simple to implement and easy to plug into existing methods. Extensive experiments on various benchmark datasets show that CwD consistently and significantly improves the performance of existing state-of-the-art methods by around 1\% to 3\%. Code will be released.
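A rough sketch of a class-wise decorrelation penalty of the kind described (the exact normalization is an assumption, and the paper's CwD may differ in detail; NumPy is used here in place of a deep learning framework):

```python
import numpy as np

def cwd_penalty(features, labels):
    """Sketch of a class-wise decorrelation style regularizer: for each class,
    penalize the squared Frobenius norm of the correlation matrix of its
    feature vectors (normalized by dimension). Lower values mean the class's
    representations scatter more uniformly across feature dimensions."""
    penalty = 0.0
    classes = np.unique(labels)
    for c in classes:
        z = features[labels == c]
        z = z - z.mean(axis=0)              # center each feature dimension
        z = z / (z.std(axis=0) + 1e-8)      # scale to roughly unit variance
        corr = (z.T @ z) / z.shape[0]       # d x d empirical correlation matrix
        penalty += (corr ** 2).sum() / z.shape[1] ** 2
    return penalty / len(classes)
```

Representations confined to a long, narrow region produce strongly correlated features and a large penalty, so adding this term to the initial-phase training loss pushes each class toward the more uniform scatter observed in the oracle model.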