A simultaneous transmission and reflection reconfigurable intelligent surface (STAR-RIS) can provide expanded coverage compared with the conventional reflection-only RIS. This paper exploits the energy-efficiency potential of STAR-RIS in a multiple-input multiple-output (MIMO) enabled non-orthogonal multiple access (NOMA) system. Specifically, we focus on energy-efficient resource allocation with MIMO technology in the STAR-RIS-assisted NOMA network. To maximize the system energy efficiency, we propose an algorithm that optimizes the transmit beamforming and the phases of the low-cost passive elements on the STAR-RIS alternately until convergence. We first decompose the formulated energy efficiency problem into beamforming and phase shift optimization subproblems. To efficiently address the non-convex beamforming optimization problem, we exploit signal alignment and zero-forcing precoding in each user pair to decompose the MIMO-NOMA channels into single-antenna NOMA channels. Then, the Dinkelbach approach and dual decomposition are utilized to optimize the beamforming vectors. To solve the non-convex phase shift optimization problem, we propose a successive convex approximation (SCA) based method that efficiently obtains the optimized phase shifts of the STAR-RIS. Simulation results demonstrate that the proposed algorithm with NOMA yields superior energy efficiency over both the orthogonal multiple access (OMA) scheme and the random phase shift scheme.
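The Dinkelbach approach named above turns the fractional energy-efficiency objective into a sequence of parametric subproblems. Below is a minimal, generic sketch of that reduction on a toy scalar rate/power ratio; the grid-search inner solver and the rate and power models are illustrative assumptions, not the paper's beamforming subproblem.

```python
import numpy as np

def dinkelbach(solve_parametric, f, g, tol=1e-6, max_iter=50):
    """Maximize f(x)/g(x) by repeatedly solving max_x f(x) - lam * g(x)."""
    lam = 0.0
    for _ in range(max_iter):
        x = solve_parametric(lam)       # parametric subproblem (assumed solvable)
        if f(x) - lam * g(x) < tol:     # residual ~ 0 => lam equals the optimal ratio
            break
        lam = f(x) / g(x)               # Dinkelbach ratio update
    return x, lam

# Toy scalar stand-in for energy efficiency: rate(p) / power(p), p in [0, 10].
f = lambda p: np.log1p(p)               # hypothetical achievable-rate model
g = lambda p: p + 0.1                   # hypothetical power model (incl. circuit power)
grid = np.linspace(0.0, 10.0, 10001)
solve = lambda lam: grid[np.argmax(f(grid) - lam * g(grid))]
p_star, ee = dinkelbach(solve, f, g)
print(f"optimal power ~ {p_star:.3f}, energy efficiency ~ {ee:.3f}")
```

At convergence the parametric optimum satisfies $f(x) - \lambda g(x) \approx 0$, at which point $\lambda$ equals the maximal ratio; this is the standard optimality condition of the Dinkelbach method.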
We propose a novel Learned Alternating Minimization Algorithm (LAMA) for dual-domain sparse-view CT image reconstruction. LAMA is naturally induced by a variational model for CT reconstruction with learnable nonsmooth nonconvex regularizers, which are parameterized as composite functions of deep networks in both the image and sinogram domains. To minimize the objective of the model, we incorporate a smoothing technique and a residual learning architecture into the design of LAMA. We show that LAMA substantially reduces network complexity, improves memory efficiency and reconstruction accuracy, and is provably convergent, enabling reliable reconstructions. Extensive numerical experiments demonstrate that LAMA outperforms existing methods by a wide margin on multiple benchmark CT datasets.
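To make the alternating dual-domain structure concrete, here is a schematic sketch that alternates gradient steps on an image variable x and a sinogram variable z, with a Huber-type smoothing of a nonsmooth $\ell_1$-like regularizer. The forward operator A, the step sizes, and the smoothing are assumptions for illustration; they stand in for, but are not, LAMA's learned networks.

```python
import numpy as np

def huber_grad(v, mu=0.05):
    """Gradient of a smoothed |v| (Huber-type smoothing, parameter mu)."""
    return np.clip(v / mu, -1.0, 1.0)

def alt_min_step(x, z, y, A, alpha=0.1, beta=0.1, lam=0.01):
    """One x-then-z pass on 0.5||Ax - z||^2 + 0.5||z - y||^2 + lam*(R(x) + R(z))."""
    # image-domain step: data consistency against the current sinogram estimate z
    x = x - alpha * (A.T @ (A @ x - z) + lam * huber_grad(x))
    # sinogram-domain step: stay close to both A @ x and the measurements y
    z = z - beta * ((z - A @ x) + (z - y) + lam * huber_grad(z))
    return x, z

# toy sizes: a 64-pixel "image" and a 32-bin "sinogram"
rng = np.random.default_rng(0)
A = rng.standard_normal((32, 64)) / 8.0
x_true = rng.standard_normal(64)
y = A @ x_true + 0.01 * rng.standard_normal(32)
x, z = np.zeros(64), y.copy()
for _ in range(200):
    x, z = alt_min_step(x, z, y, A)
print("consistency residual:", np.linalg.norm(A @ x - z))
```

In LAMA the two update maps are parameterized by deep networks rather than fixed gradient steps, and the smoothing is what makes the nonsmooth nonconvex objective amenable to a convergence analysis.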
In this paper, we propose an efficient decoding algorithm for short low-density parity-check (LDPC) codes by carefully combining belief propagation (BP) decoding and ordered statistics decoding (OSD). Specifically, a modified BP (mBP) algorithm is applied for a certain number of iterations prior to OSD to enhance the reliability of the received message, where an offset parameter is utilized in mBP to control the weight of the extrinsic information in message passing. By carefully selecting the offset parameter and the number of mBP iterations, the number of errors in the most reliable positions (MRPs) in OSD can be reduced, thereby significantly improving the overall decoding performance in terms of both error rate and complexity. Simulation results show that the proposed algorithm can approach maximum-likelihood decoding (MLD) performance for short LDPC codes with only a slight increase in complexity compared to BP and a significant decrease compared to OSD. In particular, the order-$(m-1)$ decoding of the proposed algorithm can achieve the performance of the order-$m$ OSD.
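As one standard instance of weighting extrinsic information with an offset, the sketch below implements an offset min-sum check-node update; the offset value and this specific update rule are illustrative, and the paper's mBP weighting may differ in detail.

```python
import numpy as np

def check_node_update(L_in, offset=0.15):
    """Offset min-sum: extrinsic check-to-variable LLRs for one check node.

    L_in: incoming variable-to-check LLRs on this check's edges. The offset
    shrinks message magnitudes, damping overconfident extrinsic information.
    """
    L_in = np.asarray(L_in, dtype=float)
    out = np.empty(len(L_in))
    for i in range(len(L_in)):
        others = np.delete(L_in, i)                       # extrinsic: exclude edge i
        sign = np.prod(np.sign(others))
        out[i] = sign * max(np.min(np.abs(others)) - offset, 0.0)
    return out

print(check_node_update([1.2, -0.4, 2.0, 0.7]))
```

Running a few such damped iterations before OSD improves the reliability ordering, so that fewer errors fall in the most reliable positions.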
In federated learning (FL), distributed clients can collaboratively train a shared global model while retaining their own training data locally. Nevertheless, the performance of FL is often limited by slow convergence due to poor communication links when FL is deployed over wireless networks. Given the scarcity of radio resources, it is crucial to select clients judiciously and allocate communication resources precisely to enhance FL performance. To address these challenges, in this paper, a joint client selection and resource allocation problem is formulated, aiming to minimize the total time consumption of each round of FL over a non-orthogonal multiple access (NOMA) enabled wireless network. Specifically, considering the staleness of local FL models, we propose a novel age-of-update (AoU) based client selection scheme. Subsequently, closed-form expressions for the resource allocation are derived via monotonicity analysis and the dual decomposition method. In addition, a server-side artificial neural network (ANN) is proposed to predict the FL models of clients that are not selected in each round, further improving FL performance. Finally, extensive simulation results demonstrate the superiority of the proposed schemes in terms of FL performance, average AoU, and total time consumption.
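As a hedged sketch of an AoU-driven rule, the snippet below selects the k stalest clients each round and resets their age; the tie-breaking by channel gain and the fixed budget k are illustrative assumptions, whereas the paper's scheme couples selection with the NOMA resource allocation.

```python
import numpy as np

def select_clients(aou, channel_gain, k):
    """Pick k clients by descending AoU, breaking ties by channel gain."""
    # np.lexsort sorts by its last key first: primary -aou, secondary -gain
    order = np.lexsort((-np.asarray(channel_gain), -np.asarray(aou)))
    return order[:k]

def update_aou(aou, selected):
    """Selected clients reset their age to 1; unselected clients grow staler."""
    aou = np.asarray(aou) + 1
    aou[np.asarray(selected)] = 1
    return aou

aou = np.array([3, 1, 5, 2, 4])             # rounds since each client's last update
gain = np.array([0.9, 0.2, 0.4, 0.8, 0.6])  # hypothetical channel gains
chosen = select_clients(aou, gain, k=2)
print("selected:", chosen, "-> new AoU:", update_aou(aou, chosen))
```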
This paper addresses the scheduling problem of coflows in identical parallel networks, a well-known $NP$-hard problem. Coflow is a relatively new network abstraction that characterizes communication patterns in data centers. We consider both flow-level and coflow-level scheduling problems. In the flow-level scheduling problem, flows within a coflow can be transmitted through different network cores, whereas in the coflow-level scheduling problem, flows within a coflow must be transmitted through the same network core. The key difference between these two problems lies in their scheduling granularity. Previous approaches relied on linear programming to compute the scheduling order; in this paper, we improve solution efficiency by utilizing the primal-dual method. For the flow-level scheduling problem, we propose a $(6-\frac{2}{m})$-approximation algorithm with arbitrary release times and a $(5-\frac{2}{m})$-approximation algorithm without release times, where $m$ represents the number of network cores. Additionally, for the coflow-level scheduling problem, we introduce a $(4m+1)$-approximation algorithm with arbitrary release times and a $4m$-approximation algorithm without release times. To validate the effectiveness of our proposed algorithms, we conduct simulations using both synthetic and real traffic traces. The results demonstrate the superior performance of our algorithms compared to the previous approach, emphasizing their practical utility.
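To make the granularity distinction concrete, the toy sketch below schedules the same coflows on m identical cores under both rules with a greedy least-loaded assignment; this greedy rule is purely illustrative and is not the paper's primal-dual algorithm, which additionally derives a scheduling order.

```python
def flow_level(coflows, m):
    """Flows of a coflow may land on different cores. Returns the makespan."""
    loads = [0.0] * m
    for coflow in coflows:
        for size in coflow:
            i = min(range(m), key=lambda j: loads[j])   # least-loaded core
            loads[i] += size
    return max(loads)

def coflow_level(coflows, m):
    """All flows of a coflow must share one core."""
    loads = [0.0] * m
    for coflow in coflows:
        i = min(range(m), key=lambda j: loads[j])
        loads[i] += sum(coflow)
    return max(loads)

coflows = [[4, 4, 4], [6], [2, 2], [5, 1]]              # flow sizes per coflow
print("flow-level makespan:  ", flow_level(coflows, m=2))
print("coflow-level makespan:", coflow_level(coflows, m=2))
```

The finer flow-level granularity never hurts and often helps, which is mirrored in the approximation factors: constants for flow-level versus $O(m)$ for coflow-level.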
Large-scale multiple-input multiple-output (MIMO) code-domain non-orthogonal multiple access (CD-NOMA) techniques are potential candidates for addressing next-generation wireless needs such as massive connectivity and high reliability. This work focuses on two primary CD-NOMA techniques: sparse-code multiple access (SCMA) and dense-code multiple access (DCMA). One of the primary challenges in implementing MIMO-CD-NOMA systems is designing an optimal detector with affordable computational cost and complexity. This paper proposes an iterative linear detector based on the alternating direction method of multipliers (ADMM). First, the maximum likelihood (ML) detection problem is converted into a sharing optimization problem. The discrete set constraint in the ML detection problem is relaxed into a box constraint in the sharing problem, and an auxiliary variable is introduced via a penalty term to compensate for the loss incurred by the constraint relaxation. The system models, i.e., the relations between the input signal and the received signal, are reformulated so that the proposed sharing optimization problem can be readily applied. ADMM is a robust algorithm for solving the sharing problem in a distributed manner, and the proposed detector leverages this distributive nature to reduce per-iteration cost and time. ADMM-based linear detectors are designed for three MIMO-CD-NOMA systems: single-input multiple-output CD-NOMA (SIMO-CD-NOMA), spatial multiplexing CD-NOMA (SMX-CD-NOMA), and spatial modulation CD-NOMA (SM-CD-NOMA). The impact of various system and ADMM parameters on computational complexity and symbol error rate (SER) is thoroughly examined through extensive Monte Carlo simulations.
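As a simplified illustration of the relaxation step, the sketch below runs ADMM on a box-relaxed detection problem for a real-valued toy model. The penalty parameter, the BPSK-like slicing, and the plain least-squares form are assumptions; the paper instead reformulates the CD-NOMA system models into a distributed sharing problem, which this sketch does not reproduce.

```python
import numpy as np

def admm_box_detector(H, y, rho=1.0, iters=100, lo=-1.0, hi=1.0):
    """ADMM for min 0.5||y - Hx||^2 s.t. x in [lo, hi]^n (relaxed ML detection)."""
    n = H.shape[1]
    z = np.zeros(n); u = np.zeros(n)              # split variable and scaled dual
    Q = np.linalg.inv(H.T @ H + rho * np.eye(n))  # factor the x-update once
    for _ in range(iters):
        x = Q @ (H.T @ y + rho * (z - u))         # unconstrained quadratic step
        z = np.clip(x + u, lo, hi)                # projection onto the box
        u = u + x - z                             # dual update on the consensus gap
    return np.sign(z)                             # slice back to BPSK-like symbols

rng = np.random.default_rng(1)
H = rng.standard_normal((8, 4))                   # toy channel matrix
s = rng.choice([-1.0, 1.0], size=4)               # transmitted symbols
y = H @ s + 0.05 * rng.standard_normal(8)
print("true:", s, " detected:", admm_box_detector(H, y))
```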
This paper proposes an energy-efficient scheme for multicell multiple-input multiple-output (MIMO) simultaneous transmit and reflect (STAR) reconfigurable intelligent surface (RIS)-assisted broadcast channels, employing rate splitting (RS) and improper Gaussian signaling (IGS). Regular RISs can only reflect signals; thus, a regular RIS can assist only when the transmitter and receiver are both in its reflection space. A STAR-RIS, by contrast, can simultaneously transmit and reflect, providing 360-degree coverage. In this paper, we assume that transceivers may suffer from I/Q imbalance (IQI), which we compensate for by employing IGS. Moreover, we employ RS to manage intracell interference. We show that the RIS can significantly improve the energy efficiency (EE) of the system when its components are carefully optimized. Additionally, we show that a STAR-RIS can significantly outperform a regular RIS when the regular RIS cannot cover all the users. We also show that RS can substantially increase the EE compared to treating interference as noise.
The number of satellites, especially those operating in low-earth orbit (LEO), has exploded in recent years. Additionally, the adoption of commercial off-the-shelf (COTS) hardware in these satellites enables a new paradigm of computing: orbital edge computing (OEC). OEC entails more technically advanced steps than single-satellite computing, opening a vast design space with multiple parameters and rendering several novel approaches feasible. The mobility of LEO satellites in the network and the limited communication, computation, and storage resources make it more challenging to design appropriate task scheduling algorithms than in traditional ground-based edge computing. This article provides the first comprehensive survey of OEC, covering its significant areas of focus: protocol optimization, mobility management, and resource allocation. Previous surveys have concentrated only on ground-based edge computing or the integration of space and ground technologies. This article reviews research on orbital edge computing from 2000 to 2023, spanning network design, computation offloading, resource allocation, performance analysis, and optimization. Finally, building on the discussed works, technological challenges and future directions in the field are highlighted.
Reconfigurable intelligent surfaces (RISs) have become one of the key technologies in 6G wireless communications. By configuring its reflection beamforming codebook, an RIS focuses signals on target receivers. In this paper, we investigate codebook configuration for 1-bit RIS-aided systems. We propose a novel learning-based method built upon the advanced methodology of implicit neural representations. The proposed model learns a continuous and differentiable coordinate-to-codebook representation from samples. Our method requires only the user's coordinate and avoids any assumption of channel models. Moreover, we propose an encoding-decoding strategy to reduce the dimension of codebooks and thus improve the learning efficiency of the proposed method. Experimental results on simulated and measured data demonstrate the remarkable advantages of the proposed method.
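A minimal sketch of an implicit-neural-representation style coordinate-to-codebook model is given below; the Fourier-feature encoding, the layer sizes, and the straight-through binarization are common INR choices assumed for illustration, and the paper's exact architecture and its encoding-decoding strategy are not reproduced here.

```python
import torch
import torch.nn as nn

class CoordToCodebook(nn.Module):
    """Map a user coordinate to a 1-bit RIS codebook, differentiably."""
    def __init__(self, coord_dim=2, n_elements=64, n_freqs=8, hidden=128):
        super().__init__()
        # random Fourier features let a small MLP represent high-frequency detail
        self.register_buffer("B", torch.randn(coord_dim, n_freqs) * 4.0)
        self.net = nn.Sequential(
            nn.Linear(2 * n_freqs, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_elements),
        )

    def forward(self, coords):
        proj = 2 * torch.pi * coords @ self.B
        feats = torch.cat([torch.sin(proj), torch.cos(proj)], dim=-1)
        soft = torch.sigmoid(self.net(feats))    # continuous relaxation in (0, 1)
        hard = (soft > 0.5).float()              # 1-bit codebook in {0, 1}
        return hard + (soft - soft.detach())     # straight-through gradient estimator

model = CoordToCodebook()
codebook = model(torch.tensor([[0.3, 0.7]]))     # one 2-D user coordinate
print(codebook.shape)                            # torch.Size([1, 64])
```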
We provide an analytical characterization of the coverage region of simultaneously transmitting and reflecting reconfigurable intelligent surface (STAR-RIS)-aided two-user downlink communication systems. The cases of orthogonal multiple access (OMA) and non-orthogonal multiple access (NOMA) are considered under the energy-splitting (ES) protocol. Results confirm that the use of STAR-RISs is beneficial for extending the coverage region, and that NOMA provides better performance than its OMA counterpart.
As soon as abstract mathematical computations were adapted to computation on digital computers, the problem of efficient representation, manipulation, and communication of the numerical values in those computations arose. Strongly related to the problem of numerical representation is the problem of quantization: in what manner should a set of continuous real-valued numbers be distributed over a fixed discrete set of numbers to minimize the number of bits required and also to maximize the accuracy of the attendant computations? This perennial problem of quantization is particularly relevant whenever memory and/or computational resources are severely restricted, and it has come to the forefront in recent years due to the remarkable performance of Neural Network models in computer vision, natural language processing, and related areas. Moving from floating-point representations to low-precision fixed integer values represented in four bits or less holds the potential to reduce the memory footprint and latency by a factor of 16x; and, in fact, reductions of 4x to 8x are often realized in practice in these applications. Thus, it is not surprising that quantization has emerged recently as an important and very active sub-area of research in the efficient implementation of computations associated with Neural Networks. In this article, we survey approaches to the problem of quantizing the numerical values in deep Neural Network computations, covering the advantages/disadvantages of current methods. With this survey and its organization, we hope to have presented a useful snapshot of the current research in quantization for Neural Networks and to have given an intelligent organization to ease the evaluation of future research in this area.
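As a worked example of the basic scheme that quantization research builds on, the snippet below performs uniform (affine) quantization: real values are mapped to b-bit integers through a scale and zero-point and then dequantized; the bit-width and clipping-range choices are illustrative.

```python
import numpy as np

def quantize(x, bits=4):
    qmin, qmax = 0, 2**bits - 1
    scale = (x.max() - x.min()) / (qmax - qmin)     # real units per integer step
    zero_point = round(-x.min() / scale)            # integer that represents 0.0
    q = np.clip(np.round(x / scale + zero_point), qmin, qmax).astype(np.int32)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    return scale * (q.astype(np.float32) - zero_point)

x = np.random.default_rng(0).standard_normal(1000).astype(np.float32)
q, s, z = quantize(x, bits=4)
err = np.abs(x - dequantize(q, s, z)).mean()
print(f"4-bit storage is 8x smaller than float32; mean abs error = {err:.4f}")
```

Choices such as the bit-width, symmetric versus affine mapping, and how the clipping range is calibrated are exactly the design axes that the surveyed methods trade off.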