In contrast to conventional reflection-only reconfigurable intelligent surfaces (RISs), simultaneously transmitting and reflecting RISs (STAR-RISs) are a novel technology that extends half-space coverage to full-space coverage by simultaneously transmitting and reflecting incident signals, thereby providing new degrees of freedom (DoF) for manipulating signal propagation. Motivated by this, a novel STAR-RIS assisted non-orthogonal multiple access (NOMA) (STAR-RIS-NOMA) system is proposed in this paper. Our objective is to maximize the achievable sum rate by jointly optimizing the decoding order, power allocation coefficients, active beamforming, and transmission and reflection beamforming. However, the formulated problem is non-convex with intricately coupled variables. To tackle this challenge, a suboptimal two-layer iterative algorithm is proposed. Specifically, in the inner-layer iteration, for a given decoding order, the power allocation coefficients, active beamforming, and transmission and reflection beamforming are optimized in an alternating manner. In the outer-layer iteration, the decoding order of the NOMA users in each cluster is updated using the solutions obtained from the inner-layer iteration. Moreover, an efficient decoding order determination scheme is proposed based on the equivalent-combined channel gains. Simulation results demonstrate that the proposed STAR-RIS-NOMA system, aided by the proposed algorithm, outperforms conventional RIS-NOMA and RIS assisted orthogonal multiple access (RIS-OMA) systems.
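As a rough illustration of the outer-layer step, the following Python sketch orders the NOMA users of one cluster by their equivalent-combined channel gains (direct link plus the cascaded STAR-RIS link); the channel model, dimensions, and the specific combining rule are illustrative assumptions rather than the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

N_t, M, K = 4, 16, 3          # BS antennas, STAR-RIS elements, users in one cluster (assumed sizes)
h_d = rng.standard_normal((K, N_t)) + 1j * rng.standard_normal((K, N_t))  # direct BS-user channels
G = rng.standard_normal((M, N_t)) + 1j * rng.standard_normal((M, N_t))    # BS-to-RIS channel
r = rng.standard_normal((K, M)) + 1j * rng.standard_normal((K, M))        # RIS-to-user channels
theta = np.exp(1j * rng.uniform(0, 2 * np.pi, M))   # current transmission/reflection phase shifts
w = rng.standard_normal(N_t) + 1j * rng.standard_normal(N_t)              # current active beamformer
w /= np.linalg.norm(w)

def equivalent_combined_gain(k):
    """|(h_d,k + r_k diag(theta) G) w|^2 -- one illustrative notion of combined channel gain."""
    h_eq = h_d[k] + r[k] @ np.diag(theta) @ G
    return np.abs(h_eq @ w) ** 2

gains = np.array([equivalent_combined_gain(k) for k in range(K)])
decoding_order = np.argsort(gains)    # weakest user decoded first under this assumed SIC convention
print("combined channel gains:", np.round(gains, 3))
print("decoding order:", decoding_order)
```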
Mobile Edge Caching (MEC) is a revolutionary technology for the Sixth Generation (6G) of wireless networks, promising to significantly reduce users' latency by offering storage capacities at the edge of the network. The efficiency of an MEC network, however, critically depends on its ability to dynamically predict/update the storage of caching nodes with the top-K popular contents. Conventional statistical caching schemes are not robust to the time-variant nature of the underlying pattern of content requests, which has led to a surge of interest in using Deep Neural Networks (DNNs) for time-series popularity prediction in MEC networks. However, existing DNN models within the context of MEC fail to simultaneously capture both the temporal correlations of historical request patterns and the dependencies between multiple contents, necessitating a new popularity prediction architecture that addresses this critical challenge. This paper addresses the gap by proposing a novel hybrid caching framework based on the attention mechanism. Referred to as the parallel Vision Transformers with Cross Attention (ViT-CAT) Fusion, the proposed architecture consists of two parallel ViT networks, one for capturing temporal correlations and the other for capturing dependencies between different contents. The two branches are followed by a Cross Attention (CA) module acting as the Fusion Center (FC), which allows ViT-CAT to also learn the mutual information between temporal and spatial correlations, improving the classification accuracy while reducing the model's complexity by roughly a factor of eight. Based on the simulation results, the proposed ViT-CAT architecture outperforms its counterparts in terms of classification accuracy, complexity, and cache-hit ratio.
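To make the fusion step concrete, the sketch below shows a generic single-head cross-attention operation in which tokens from one branch attend to tokens from the other; the token shapes, random projections, and single-head setup are assumptions and do not reproduce the actual ViT-CAT layers.

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(q_tokens, kv_tokens, d_k=32):
    """Single-head cross attention: queries from one branch, keys/values from the other."""
    W_q = rng.standard_normal((q_tokens.shape[-1], d_k)) / np.sqrt(q_tokens.shape[-1])
    W_k = rng.standard_normal((kv_tokens.shape[-1], d_k)) / np.sqrt(kv_tokens.shape[-1])
    W_v = rng.standard_normal((kv_tokens.shape[-1], d_k)) / np.sqrt(kv_tokens.shape[-1])
    Q, K, V = q_tokens @ W_q, kv_tokens @ W_k, kv_tokens @ W_v
    A = softmax(Q @ K.T / np.sqrt(d_k))     # attention of temporal tokens over content tokens
    return A @ V

temporal_tokens = rng.standard_normal((10, 64))  # output tokens of the "temporal" branch (assumed shape)
content_tokens = rng.standard_normal((20, 64))   # output tokens of the "content-dependency" branch
fused = cross_attention(temporal_tokens, content_tokens)
print(fused.shape)   # (10, 32): temporal tokens enriched with content-dependency information
```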
Quantized constant envelope (QCE) precoding, a new transmission scheme in which only discrete QCE transmit signals are allowed at each antenna, has attracted growing research interest due to its ability to reduce the hardware cost and energy consumption of massive multiple-input multiple-output (MIMO) systems. However, the discrete nature of QCE transmit signals greatly complicates the precoding design. In this paper, we consider the QCE precoding problem for a massive MIMO system with phase shift keying (PSK) modulation and develop an efficient approach for solving the constructive interference (CI) based problem formulation. Our approach is based on a custom-designed (continuous) penalty model that is equivalent to the original discrete problem. Specifically, the penalty model relaxes the discrete QCE constraint and penalizes it in the objective with a negative $\ell_2$-norm term, which leads to a non-smooth non-convex optimization problem. To tackle it, we resort to our recently proposed alternating optimization (AO) algorithm. We show that, when applied to our problem, the AO algorithm admits closed-form updates at each iteration and can thus be implemented efficiently. Simulation results demonstrate the superiority of the proposed approach over existing algorithms.
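The following sketch illustrates the general idea of the penalty model on a toy example: a smooth surrogate objective is minimized with a negative $\ell_2$-norm penalty over the relaxed per-antenna constraint $|x_i| \le 1$, and the result is rounded to the discrete QCE alphabet. The surrogate objective, the simple projected-gradient update, and all parameter values are assumptions; the paper's CI-based metric and closed-form AO updates are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(2)

N, K, L = 8, 4, 8                     # transmit antennas, users, QCE phase levels (assumed sizes)
H = rng.standard_normal((K, N)) + 1j * rng.standard_normal((K, N))   # downlink channels
s = np.exp(1j * 2 * np.pi * rng.integers(0, 4, K) / 4)               # desired QPSK symbols at the users
alphabet = np.exp(1j * (2 * np.pi * np.arange(L) / L + np.pi / L))   # discrete QCE alphabet (unit modulus)
lam, step = 0.1, 0.01

def f(x):
    """Toy smooth surrogate objective (NOT the paper's CI metric)."""
    return np.linalg.norm(H @ x - s) ** 2

x = np.exp(1j * rng.uniform(0, 2 * np.pi, N))          # start on the unit circle
for _ in range(300):
    grad = 2 * H.conj().T @ (H @ x - s) - 2 * lam * x  # gradient of f(x) - lam * ||x||^2
    x = x - step * grad
    x = np.where(np.abs(x) > 1, x / np.abs(x), x)      # project back into the relaxed set |x_i| <= 1

x_qce = alphabet[np.argmin(np.abs(x[:, None] - alphabet[None, :]), axis=1)]  # round to the QCE alphabet
print("surrogate objective after rounding:", round(f(x_qce), 3))
```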
Intelligent reflecting surface (IRS) technology has recently attracted significant interest in non-line-of-sight radar remote sensing. Prior works have largely focused on designing single-IRS beamformers for this problem. For the first time in the literature, this paper considers multi-IRS-aided multiple-input multiple-output (MIMO) radar and jointly designs the transmit unimodular waveforms and optimal IRS beamformers. To this end, we derive the Cramer-Rao lower bound (CRLB) of the target direction-of-arrival (DoA) as a performance metric; unimodular transmit sequences are adopted because they are the preferred waveforms from a hardware perspective. We show that, through suitable transformations, the joint design problem can be reformulated as two unimodular quadratic programs (UQPs). To deal with the NP-hard nature of both UQPs, we propose the unimodular waveform and beamforming design for multi-IRS radar (UBeR) algorithm, which takes advantage of low-cost power method-like iterations. Numerical experiments illustrate that the MIMO waveforms and phase shifts obtained from our UBeR algorithm are effective in improving the CRLB of DoA estimation.
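As a flavor of the power method-like iterations mentioned above, the sketch below applies the standard update $s \leftarrow e^{j\arg(Rs)}$ to a toy unimodular quadratic program $\max_{|s_i|=1} s^H R s$ with a positive semidefinite $R$; the matrix and problem size are arbitrary, and the full UBeR algorithm is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(3)

N = 16
A = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
R = A @ A.conj().T                      # Hermitian PSD matrix of the toy UQP: max s^H R s, |s_i| = 1

s = np.exp(1j * rng.uniform(0, 2 * np.pi, N))
obj = [np.real(s.conj() @ R @ s)]
for _ in range(50):
    s = np.exp(1j * np.angle(R @ s))    # power-method-like update: keep only the phases of R s
    obj.append(np.real(s.conj() @ R @ s))

print("final objective: %.1f" % obj[-1])
print("monotone non-decreasing:", all(b >= a - 1e-6 for a, b in zip(obj, obj[1:])))
```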
5th Generation Mobile Communication Technology (5G) uses the Access Traffic Steering, Switching, and Splitting (ATSSS) rule, which is currently being standardized, to enable multi-path data transmission. Recently, the 3rd Generation Partnership Project (3GPP) SA1 and SA2 working groups have been studying multi-path solutions for possible improvements from different perspectives. However, the existing 3GPP multi-path solution has limitations for ultra-reliable low-latency communication (URLLC) traffic in terms of reliability and latency requirements. To capture the potential gains of the multi-path architecture in the context of URLLC services, this paper proposes a new traffic splitting technique that more efficiently exploits the multi-path architecture to reduce users' uplink (UL) end-to-end (E2E) latency. In particular, we formulate an optimization framework that minimizes the UL E2E latency of users by optimizing the ratio of traffic assigned to each path and the corresponding transmit power. The performance of the proposed scheme is evaluated via carefully designed simulations.
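A minimal sketch of the splitting idea, assuming a toy two-path latency model (rate-dependent transmission delay plus a fixed per-path delay) that is not the paper's formulation: the traffic share and transmit power of one path are tuned with a generic solver so that the slower path's E2E latency is minimized.

```python
import numpy as np
from scipy.optimize import minimize

# Toy two-path model (illustrative numbers only, not the paper's system model).
B = np.array([10e6, 5e6])          # per-path bandwidth [Hz]
g = np.array([1e-7, 3e-7])         # per-path channel gains
d_fix = np.array([2e-3, 5e-3])     # per-path fixed core/backhaul delay [s]
noise, P_max, D = 1e-13, 0.2, 1e5  # noise power [W], UL power budget [W], burst size [bits]

def e2e_latency(z):
    x1, p1 = z                                   # traffic share and transmit power on path 1
    x, p = np.array([x1, 1.0 - x1]), np.array([p1, P_max - p1])
    rate = B * np.log2(1.0 + g * p / noise)      # per-path UL rate [bit/s]
    return np.max(x * D / rate + d_fix)          # parallel paths: E2E delay set by the slower path

res = minimize(e2e_latency, x0=[0.5, P_max / 2], method="Nelder-Mead",
               bounds=[(1e-3, 1 - 1e-3), (1e-3, P_max - 1e-3)])
print("traffic share on path 1: %.2f, power on path 1: %.3f W, E2E latency: %.2f ms"
      % (res.x[0], res.x[1], 1e3 * res.fun))
```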
This paper is concerned with low-rank matrix optimization, which has found a wide range of applications in machine learning. In the special case of matrix sensing, this problem has been studied extensively through the notion of the Restricted Isometry Property (RIP), leading to a wealth of results on the geometric landscape of the problem and the convergence rate of common algorithms. However, the existing results can handle the problem with a general objective function and noisy data only when the RIP constant is close to 0. In this paper, we develop a new mathematical framework to solve the above-mentioned problem with a far less restrictive RIP constant. We prove that as long as the RIP constant of the noiseless objective is less than $1/3$, any spurious local solution of the noisy optimization problem must be close to the ground truth solution. By leveraging the strict saddle property, we also show that an approximate solution can be found in polynomial time. We further characterize the geometry of the spurious local minima of the problem in a local region around the ground truth when the RIP constant is greater than $1/3$. Compared to the existing results in the literature, this paper offers the strongest RIP bound and provides a complete theoretical analysis of the global and local optimization landscapes of general low-rank optimization problems under random corruptions from any finite-variance family.
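A small worked example of the underlying noisy matrix-sensing problem, with Gaussian sensing matrices (which satisfy the RIP with high probability) and plain gradient descent on the Burer-Monteiro factorization; the dimensions, noise level, and step size are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

n, r, m = 20, 2, 400                              # matrix size, rank, number of measurements (assumed)
U_true = rng.standard_normal((n, r))
M_true = U_true @ U_true.T
A = rng.standard_normal((m, n, n)) / np.sqrt(m)   # Gaussian sensing matrices (satisfy RIP w.h.p.)
y = np.einsum("kij,ij->k", A, M_true) + 0.01 * rng.standard_normal(m)   # noisy measurements

U = 0.1 * rng.standard_normal((n, r))             # Burer-Monteiro factor, small random initialization
for _ in range(1500):
    res = np.einsum("kij,ij->k", A, U @ U.T) - y  # residuals <A_k, U U^T> - y_k
    G = np.einsum("k,kij->ij", res, A)
    U -= 0.002 * 2 * (G + G.T) @ U                # gradient step on sum_k (<A_k, U U^T> - y_k)^2

print("relative recovery error: %.3f" % (np.linalg.norm(U @ U.T - M_true) / np.linalg.norm(M_true)))
```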
Intelligent reflecting surfaces (IRSs) are a promising technology for enhancing coverage and spectral efficiency in both the sub-6 GHz and the millimeter wave (mmWave) bands. Existing approaches to leveraging the benefits of an IRS involve a resource-intensive channel estimation step followed by a computationally expensive algorithm to optimize the reflection coefficients at the IRS. In this work, focusing on the sub-6 GHz band, we present and analyze several alternative schemes in which the phase configuration of the IRS is randomized and multi-user diversity is exploited to opportunistically select the best user at each point in time for data transmission. We show that the throughput of an IRS-assisted opportunistic communication (OC) system asymptotically converges to the optimal beamforming-based throughput under fair allocation of resources, as the number of users gets large. We also introduce schemes that accelerate the convergence of the OC rate to the beamforming rate with the number of users. For all the proposed schemes, we derive the scaling law of the throughput in terms of the system parameters as the number of users gets large. Finally, we extend the setup to wideband channels via an orthogonal frequency division multiplexing (OFDM) system and discuss two OC schemes in an IRS-assisted setting that clearly elucidate the superior performance that IRS-aided OC systems can offer over conventional systems, at very low implementation cost and complexity.
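A minimal simulation of the opportunistic idea, assuming a cascaded BS-IRS-user channel with unit transmit power and no direct link: in each slot the IRS phases are drawn at random and the instantaneously best user is scheduled, and the resulting rate is compared with coherent beamforming towards a single user. All sizes and channel statistics are assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

M, K, T = 64, 20, 1000                 # IRS elements, users, time slots (assumed sizes)
h = rng.standard_normal((K, M)) + 1j * rng.standard_normal((K, M))   # IRS-to-user channels
g = rng.standard_normal(M) + 1j * rng.standard_normal(M)             # BS-to-IRS channel

best_rates = []
for _ in range(T):
    phi = np.exp(1j * rng.uniform(0, 2 * np.pi, M))   # random IRS phase configuration for this slot
    snr = np.abs(h @ (phi * g)) ** 2                  # effective SNR of each user (unit power/noise)
    best_rates.append(np.log2(1 + snr.max()))         # schedule the instantaneously best user

snr_bf = (np.abs(h[0]) * np.abs(g)).sum() ** 2        # coherent IRS beamforming towards user 0
print("mean opportunistic rate: %.2f bits/s/Hz" % np.mean(best_rates))
print("beamforming rate to a single user: %.2f bits/s/Hz" % np.log2(1 + snr_bf))
```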
In next-generation wireless networks, reconfigurable intelligent surface (RIS)-assisted multiple-input multiple-output (MIMO) systems are expected to support a large number of antennas at the transceiver as well as a large number of reflecting elements at the RIS. To fully unleash the potential of the RIS, the phase shifts of the RIS elements must be carefully designed, resulting in a high-dimensional non-convex optimization problem that is hard to solve. In this paper, we address this scalability issue by partitioning the RIS into sub-surfaces, so that the phase shifts are optimized at the sub-surface level to reduce complexity. Specifically, each sub-surface employs a linear phase variation structure to anomalously reflect the incident signal towards a desired direction, and the sizes of the sub-surfaces can be adaptively adjusted according to the channel conditions. We formulate the achievable rate maximization problem by jointly optimizing the transmit covariance matrix and the RIS phase shifts. Under the RIS partitioning framework, the RIS phase shift optimization reduces to the manipulation of the sub-surface sizes, the phase gradients of the sub-surfaces, and the common phase shifts of the sub-surfaces. Then, we characterize the asymptotic behavior of the system with an infinitely large number of transceiver antennas and RIS elements. The asymptotic analysis provides useful insights into the fundamental performance-complexity tradeoff in RIS partitioning design. We show that, in the asymptotic regime, the achievable rate maximization problem takes a rather simple form, and we develop an efficient algorithm to find an approximately optimal solution via a one-dimensional (1D) grid search. By applying the asymptotic result to a finite-size system with the necessary modifications, we show by numerical results that the proposed design achieves a favorable tradeoff between system performance and computational complexity.
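The sketch below illustrates the sub-surface structure on a toy one-dimensional RIS: each sub-surface applies a linear phase variation plus a common phase, and a single shared phase gradient is found by a 1D grid search. Sharing one gradient across sub-surfaces and the toy channel model are simplifying assumptions, not the paper's adaptive design.

```python
import numpy as np

rng = np.random.default_rng(6)

N, S = 128, 4                                    # RIS elements, number of sub-surfaces (assumed)
n = np.arange(N)
g = np.exp(-1j * np.pi * n * np.sin(0.3))        # toy BS-to-RIS steering vector (incident angle 0.3 rad)
h = np.exp(-1j * np.pi * n * np.sin(-0.5)) \
    + 0.1 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))   # toy RIS-to-user channel
sub = np.repeat(np.arange(S), N // S)            # sub-surface index of each element

def channel_gain(beta):
    """Linear phase variation beta*n within each sub-surface plus a per-sub-surface common phase."""
    theta = np.exp(1j * beta * n)
    for s in range(S):                           # align each sub-surface's contribution in phase
        mask = sub == s
        theta[mask] *= np.exp(-1j * np.angle(np.sum(h[mask] * theta[mask] * g[mask])))
    return np.abs(np.sum(h * theta * g)) ** 2

grid = np.linspace(-np.pi, np.pi, 721)           # 1D grid search over the shared phase gradient
best_beta = grid[np.argmax([channel_gain(b) for b in grid])]
print("best phase gradient: %.3f rad/element, cascaded gain: %.1f" % (best_beta, channel_gain(best_beta)))
```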
This paper explores the feasibility of spectrum sharing between communication and radar systems. We investigate a full-duplex (FD) joint radar and communication (JRC) multi-antenna system in which a node labeled ComRad, with dual communication and radar capability, communicates with a downlink user and an uplink user while simultaneously detecting a target of interest. Considering a full interference scenario and imperfect channel state information (CSI), we analyze the fundamental performance limits of the FD JRC system. In particular, we first obtain the downlink rate when the radar signals act as interference. Then, viewing the uplink channel and the radar return channel as a multiple access channel, we propose an alternative successive interference cancellation (SIC) scheme, based on which the achievable uplink communication rate is obtained. For the radar operation, we first derive a general expression for the estimation rate, which quantifies how much information is obtained about the target in terms of direction, range, and velocity. Considering a uniform linear antenna array and linear frequency modulated radar signals, we further obtain an exact closed-form expression for the estimation rate. Numerical simulations reveal that operating the communication and radar functions jointly achieves larger rate regions than operating them independently.
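A toy numerical illustration of the two SIC orderings over the uplink/radar-return multiple access channel; the powers are arbitrary, perfect cancellation is assumed, and the "radar" quantity shown is a simple mutual-information-style rate rather than the paper's estimation rate.

```python
import numpy as np

# Toy received powers at the ComRad node (illustrative values, perfect cancellation assumed).
P_ul, P_radar, noise = 1.0, 4.0, 0.1   # received uplink power, radar-return power, noise power

def rate(signal, interference):
    return np.log2(1 + signal / (noise + interference))

# SIC order A: decode the uplink user first (radar return acts as interference), then remove it.
R_ul_A = rate(P_ul, P_radar)
R_radar_A = rate(P_radar, 0.0)         # radar return processed free of uplink interference

# SIC order B: cancel the (known-waveform) radar return first, then decode the uplink user.
R_radar_B = rate(P_radar, P_ul)
R_ul_B = rate(P_ul, 0.0)

print("order A: uplink %.2f, radar %.2f bits/s/Hz" % (R_ul_A, R_radar_A))
print("order B: uplink %.2f, radar %.2f bits/s/Hz" % (R_ul_B, R_radar_B))
```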
We present two designs for an analog circuit that can learn to detect a temporal sequence of two inputs. The training phase is done by feeding the circuit with the desired sequence; after training is completed, each time the trained sequence is encountered again, the circuit emits a signal of correct recognition. Sequences are on the order of tens of nanoseconds. The first design allows the trained sequence to be reset at runtime but assumes very strict timing of the inputs; the second design can only be trained once but is lenient about the inputs' timing.
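For intuition only, here is a behavioral software model of the intended functionality (the designs themselves are analog circuits); the interval-matching rule, tolerance value, and nanosecond event format are assumptions.

```python
class SequenceDetector:
    """Behavioral model: learn the delay between two inputs, then recognize it again."""

    def __init__(self, tolerance_ns=5.0):
        self.trained_gap = None
        self.tolerance = tolerance_ns

    def train(self, t_first_ns, t_second_ns):
        """Store the inter-input delay of the desired two-input sequence."""
        self.trained_gap = t_second_ns - t_first_ns

    def detect(self, t_first_ns, t_second_ns):
        """Return True when the observed delay matches the trained one within tolerance."""
        if self.trained_gap is None:
            return False
        return abs((t_second_ns - t_first_ns) - self.trained_gap) <= self.tolerance

det = SequenceDetector()
det.train(0.0, 40.0)             # desired sequence: second input 40 ns after the first
print(det.detect(100.0, 142.0))  # True: 42 ns gap is within the 5 ns tolerance
print(det.detect(200.0, 270.0))  # False: 70 ns gap does not match
```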
Deployment of Internet of Things (IoT) devices and data fusion techniques has gained popularity in public and government domains. This usually requires capturing and consolidating data from multiple sources. Because datasets do not necessarily originate from identical sensors, fused data typically results in a complex data problem. Since the military is investigating how heterogeneous IoT devices can aid its processes and tasks, we investigate a multi-sensor approach. Moreover, we propose a signal-to-image encoding approach that transforms and fuses signals from IoT wearable devices into an image that is invertible and easier to visualize, thereby supporting decision making. Furthermore, we investigate the challenge of enabling intelligent identification and detection operations and demonstrate the feasibility of the proposed Deep Learning and Anomaly Detection models, which can support future applications that utilize hand gesture data from wearable devices.
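Since the abstract does not specify the encoding, the sketch below shows one simple invertible signal-to-image mapping for illustration: each sensor channel is min-max scaled to an 8-bit pixel row, and the stored scaling parameters allow reconstruction up to quantization. The channel count, sample length, and scaling scheme are assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy wearable recording: 6 sensor channels (e.g., 3-axis accelerometer + 3-axis gyro), 128 samples.
signals = rng.standard_normal((6, 128))

def encode(x):
    """Signal-to-image encoding, invertible up to 8-bit quantization: one pixel row per channel."""
    lo, hi = x.min(axis=1, keepdims=True), x.max(axis=1, keepdims=True)
    img = np.round(255 * (x - lo) / (hi - lo)).astype(np.uint8)
    return img, (lo, hi)                # the scaling parameters are kept so decoding is possible

def decode(img, params):
    lo, hi = params
    return img.astype(float) / 255 * (hi - lo) + lo

image, params = encode(signals)
reconstructed = decode(image, params)
print("image shape:", image.shape, "max round-trip error:", np.abs(reconstructed - signals).max())
```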