Integrated sensing and communication (ISAC) is a promising paradigm for providing both sensing and communication (S&C) services in vehicular networks. However, the power of echo signals reflected from vehicles may be too weak for precise positioning, due to the practically small radar cross section of vehicles and their random reflection/scattering coefficients. To tackle this issue, we propose a novel mutual assistance scheme for intelligent-surface-mounted vehicles, where S&C are innovatively designed to assist each other to achieve an efficient win-win integration, i.e., sensing-assisted phase shift design and communication-assisted high-precision sensing. Specifically, we first derive closed-form expressions for the echo power and achievable rate under uncertain angle information. Then, the communication rate is maximized while satisfying sensing requirements, which is proved to be a monotonic optimization problem in the time allocation. Furthermore, we unveil the feasibility condition of the problem and propose a polyblock-based optimal algorithm. Simulation results validate that the performance trade-off bound of S&C is significantly enlarged by the novel design exploiting mutual assistance in intelligent-surface-aided vehicular networks.
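To illustrate why monotonicity matters here, consider a toy one-dimensional instance (not the paper's actual polyblock algorithm, and with made-up `rate`/`sensing` functions): if the rate decreases and the sensing metric increases in the sensing-time fraction, the rate-optimal feasible point is the smallest time fraction meeting the sensing requirement, which bisection finds directly.

```python
def optimal_time_split(rate, sensing, s_min, tol=1e-9):
    """Toy 1-D monotonic optimization: rate(t) decreasing, sensing(t)
    increasing in the sensing-time fraction t in [0, 1]. The optimum sits
    on the feasibility boundary sensing(t) = s_min, found by bisection."""
    if sensing(1.0) < s_min:
        return None  # infeasible: even full-time sensing misses the target
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if sensing(mid) >= s_min:
            hi = mid
        else:
            lo = mid
    return hi, rate(hi)  # boundary point and the rate achieved there
```

The early return mirrors the feasibility condition mentioned above: a solution exists only if the sensing requirement is attainable at the extreme time allocation.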
This paper investigates broadband channel estimation (CE) for intelligent reflecting surface (IRS)-aided millimeter-wave (mmWave) massive MIMO systems. CE for such systems is challenging due to the large dimension of both the active massive MIMO array at the base station (BS) and the passive IRS. To address this problem, this paper proposes a compressive sensing (CS)-based CE solution for IRS-aided mmWave massive MIMO systems, whereby the angular channel sparsity of large-scale arrays at mmWave frequencies is exploited for improved CE with reduced pilot overhead. Specifically, we first propose a downlink pilot transmission framework. By designing the pilot signals based on the prior knowledge that the line-of-sight-dominated BS-to-IRS channel is known, the high-dimensional BS-to-user and IRS-to-user channels can be jointly estimated based on CS theory. Moreover, to efficiently estimate broadband channels, a distributed orthogonal matching pursuit algorithm is employed, which utilizes the common sparsity shared by the channels at different subcarriers. Additionally, a redundant dictionary is designed to combat power leakage and further enhance CE performance. Simulation results demonstrate the effectiveness of the proposed scheme.
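The common-sparsity idea can be sketched as a simultaneous-OMP variant (an illustrative stand-in for the paper's distributed OMP, with a plain DFT dictionary rather than the redundant one): atom selection pools the residual correlation energy over all subcarriers, since the angular support is assumed shared.

```python
import numpy as np

def somp(Y, A, sparsity):
    """Simultaneous-OMP sketch: columns of Y are measurements at different
    subcarriers; the support (angular bins) is assumed common across
    subcarriers, so atom selection pools residual correlations over all
    of them before each greedy pick."""
    R, support = Y.copy(), []
    for _ in range(sparsity):
        corr = np.linalg.norm(A.conj().T @ R, axis=1)  # pooled energy per atom
        support.append(int(np.argmax(corr)))
        As = A[:, support]
        X = np.linalg.lstsq(As, Y, rcond=None)[0]      # per-subcarrier gains
        R = Y - As @ X                                  # update residuals
    return support, X
```

For a half-wavelength ULA, the normalized DFT matrix is the standard angular dictionary; the redundant (oversampled) dictionary mentioned above would replace it to mitigate power leakage off the DFT grid.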
The convergence of generative large language models (LLMs), edge networks, and multi-agent systems represents a groundbreaking synergy that holds immense promise for future wireless generations, harnessing the power of collective intelligence and paving the way for self-governed networks where intelligent decision-making happens right at the edge. This article lays the groundwork for incorporating multi-agent generative artificial intelligence (AI) in wireless networks, and sets the scene for realizing on-device LLMs, where multi-agent LLMs collaboratively plan and solve tasks to achieve a number of network goals. We further investigate the profound limitations of cloud-based LLMs, and explore multi-agent LLMs from a game-theoretic perspective, where agents collaboratively solve tasks in competitive environments. Moreover, we establish the underpinnings for the architecture design of wireless multi-agent generative AI systems at the network level and the agent level, and we identify the wireless technologies that are envisioned to play a key role in enabling on-device LLMs. To demonstrate the promising potential of wireless multi-agent generative AI networks, we highlight the benefits that can be achieved when implementing wireless generative agents in intent-based networking, and we provide a case study to showcase how on-device LLMs can contribute to solving network intents in a collaborative fashion. We finally shed light on potential challenges and sketch a research roadmap towards realizing the vision of wireless collective intelligence.
This paper studies a multi-intelligent-reflecting-surface (IRS)-enabled integrated sensing and communications (ISAC) system, in which multiple IRSs are installed to help the base station (BS) provide ISAC services in separate line-of-sight (LoS) blocked areas. We focus on the scenario with semi-passive uniform linear array (ULA) IRSs for sensing, in which each IRS is integrated with dedicated sensors for processing echo signals, and each IRS simultaneously serves one sensing target and one communication user (CU) in its coverage area. In particular, we suppose that the BS sends combined information and dedicated sensing signals for ISAC, and we consider two cases with point and extended targets, in which each IRS aims to estimate the direction-of-arrival (DoA) of the corresponding target and the complete target response matrix, respectively. Under this setup, we first derive closed-form Cramér-Rao bounds (CRBs) for parameter estimation under the two target models. For the point target case, the CRB for DoA estimation is shown to be inversely proportional to the cube of the number of sensors at each IRS, while for the extended target case, the CRB for target response matrix estimation is proportional to the number of IRS sensors. Next, we consider two different types of CU receivers that either can or cannot cancel the interference from the dedicated sensing signals prior to information decoding. To achieve fair and optimized sensing performance, we minimize the maximum CRB over all IRSs for the two target cases, by jointly optimizing the transmit beamformers at the BS and the reflective beamformers at the multiple IRSs, subject to minimum signal-to-interference-plus-noise ratio (SINR) constraints at individual CUs, a maximum transmit power constraint at the BS, and unit-modulus constraints at the multiple IRSs.
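The cubic scaling of the point-target DoA CRB can be checked numerically with a standard single-parameter model (a sketch under assumed conditions: single snapshot, deterministic signal, half-wavelength ULA; this is generic estimation theory, not the paper's exact setup): the CRB is inversely proportional to the squared norm of the steering-vector derivative projected orthogonally to the steering vector, which grows as N(N^2 - 1)/12.

```python
import numpy as np

def doa_fisher_term(N, theta=0.3):
    """||P_a_perp d a/d theta||^2 for an N-sensor half-wavelength ULA.
    The single-parameter DoA CRB is inversely proportional to this
    quantity, so it should scale as N^3 for large N."""
    n = np.arange(N)
    a = np.exp(1j * np.pi * n * np.sin(theta))       # steering vector
    d = 1j * np.pi * n * np.cos(theta) * a           # derivative w.r.t. theta
    proj = d - a * (a.conj() @ d) / N                # remove a-direction
    return np.linalg.norm(proj) ** 2

# closed form: (pi*cos(theta))^2 * N*(N^2 - 1)/12, i.e. O(N^3);
# doubling N should therefore shrink the CRB by roughly a factor of 8
ratio = doa_fisher_term(128) / doa_fisher_term(64)
```

Doubling the sensor count multiplies this Fisher term by about 8, i.e., the DoA CRB shrinks eightfold, consistent with the inverse-cubic law stated above.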
In commercial unmanned aerial vehicle (UAV) applications, one of the main restrictions is UAVs' limited battery endurance when executing persistent tasks. With the maturation of wireless power transfer (WPT) technologies, and by leveraging ground vehicles mounted with WPT facilities on their roofs, we propose a mobile and collaborative recharging scheme for UAVs in an on-demand manner. Specifically, we first present a novel air-ground cooperative UAV recharging framework, where ground vehicles cooperatively share their idle wireless chargers with UAVs and a swarm of UAVs in the task area compete for recharging services. Considering the mobility dynamics and energy competition, we formulate an energy scheduling problem for UAVs and vehicles under practical constraints. A fair online auction-based solution with low complexity is also devised to allocate and price idle wireless chargers on vehicle roofs in real time. We rigorously prove that the proposed scheme is strategy-proof, envy-free, and produces stable allocation outcomes. The first property ensures that truthful bidding is the dominant strategy for participants, the second ensures that no user is better off exchanging his allocation with another user when the auction ends, while the third guarantees matching stability between UAVs and ground vehicles. Extensive simulations validate that the proposed scheme outperforms benchmarks in terms of energy allocation efficiency and UAV utility.
The hardware computing landscape is changing. What used to be distributed systems can now be found on a chip with highly configurable, diverse, specialized and general-purpose units. Such Systems-on-a-Chip (SoCs) are used to control today's cyber-physical systems, serving as the building blocks of critical infrastructures. They are deployed in harsh environments and are connected to the cyberspace, which exposes them to both accidental faults and targeted cyberattacks. This is in addition to the changing fault landscape that continued technology scaling, emerging devices, and novel application scenarios will bring. In this paper, we discuss how the very features (distributed, parallelized, reconfigurable, heterogeneous) that cause many of the imminent and emerging security and resilience challenges also open avenues for their cure through SoC replication, diversity, rejuvenation, adaptation, and hybridization. We show how to leverage these techniques at different levels across the entire SoC hardware/software stack, calling for more research on the topic.
In this work, we propose a waveform based on Modulation on Conjugate-reciprocal Zeros (MOCZ), originally proposed for short-packet communications in [1], as a new Integrated Sensing and Communication (ISAC) waveform. Having previously established the key advantages of MOCZ for noncoherent and sporadic communication, here we leverage the optimal auto-correlation property of Binary MOCZ (BMOCZ) for sensing applications. Because this property eliminates the need for separate communication- and radar-centric waveforms, we propose a new frame structure for ISAC in which pilot sequences and preambles become obsolete and are completely removed from the frame. As a result, the data rate can be significantly improved. Targeting (hardware-)cost-effective radar-sensing applications, we consider a Hybrid Digital-Analog (HDA) beamforming architecture for data transmission and radar sensing. We demonstrate via extensive simulations that a communication data rate significantly higher than in existing standards can be achieved, while simultaneously attaining sensing performance comparable to that of state-of-the-art sensing systems.
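The zero-based encoding behind BMOCZ can be illustrated with a noiseless toy round trip (a sketch only: the radius `alpha` and packet length are assumed values, and real BMOCZ receivers use noise-robust weighted zero testing rather than this bare comparison). Each bit places one polynomial zero on a conjugate-reciprocal pair of radii; the transmitted samples are the polynomial's coefficients, and decoding tests at which candidate location the polynomial vanishes.

```python
import numpy as np

# Toy noiseless BMOCZ round trip. Bit k places a polynomial zero at
# radius alpha (bit 1) or 1/alpha (bit 0) on the angle 2*pi*k/K.
K, alpha = 8, 1.2                      # assumed packet size / design radius
bits = np.array([1, 0, 1, 1, 0, 0, 1, 0])
angles = np.exp(2j * np.pi * np.arange(K) / K)
zeros = np.where(bits == 1, alpha, 1 / alpha) * angles
x = np.poly(zeros)                     # transmitted samples = coefficients

# Direct zero-testing decoder: the polynomial (nearly) vanishes at the
# zero actually placed, so compare magnitudes at the two candidates.
mag_outer = np.abs(np.polyval(x, alpha * angles))   # bit-1 candidates
mag_inner = np.abs(np.polyval(x, angles / alpha))   # bit-0 candidates
decoded = (mag_outer < mag_inner).astype(int)
```

Because the information lives entirely in the zero pattern, no pilot or preamble is needed to demodulate, which is the property the frame structure above exploits.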
Triple Modular Redundancy (TMR) is one of the most common techniques in fault-tolerant systems, in which the output is determined by a majority voter. However, the design diversity of replicated modules and/or soft errors, which are more likely in the nanoscale era, may defeat the majority voting scheme. Besides, the significant overheads of the TMR scheme may limit its use in energy- and area-constrained critical systems. However, for most inherently error-resilient applications, such as image processing and vision deployed in critical systems (like autonomous vehicles and robotics), achieving a given level of reliability has higher priority than precise results. Therefore, these applications can benefit from the approximate computing paradigm to achieve higher energy efficiency and lower area. This paper proposes an energy-efficient approximate reliability (X-Rel) framework to overcome the aforementioned challenges of TMR systems and realize the full potential of approximate computing without sacrificing the desired reliability constraint and output quality. The X-Rel framework relies on relaxing the precision of the voter based on a systematic error-bounding method that leverages user-defined quality and reliability constraints. Afterward, the size of the resulting voter is used to approximate the TMR modules such that the overall area and energy consumption are minimized. The effectiveness of employing the proposed X-Rel technique in a TMR structure is evaluated for different quality constraints and various reliability bounds in a 15-nm FinFET technology. The results show that the X-Rel voter reduces delay, area, and energy consumption by up to 86%, 87%, and 98%, respectively, compared to state-of-the-art approximate TMR voters.
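The precision-relaxation idea can be sketched behaviorally (an illustrative model only, not the paper's hardware design; the voted width would be derived from the quality/reliability constraints rather than fixed as here): vote bitwise on the most-significant bits and pass the remaining LSBs through from one module, so the voter shrinks at the cost of a bounded output error.

```python
def xrel_voter(a, b, c, width=8, exact_msbs=5):
    """Behavioral model of a precision-relaxed TMR majority voter:
    only the exact_msbs most-significant bits are majority-voted; the
    remaining LSBs are passed through from module a. The output error
    versus exact voting is bounded by 2**(width - exact_msbs) - 1."""
    mask = ((1 << exact_msbs) - 1) << (width - exact_msbs)
    voted = ((a & b) | (a & c) | (b & c)) & mask  # bitwise majority on MSBs
    return voted | (a & ~mask)                     # approximate LSB passthrough
```

Setting `exact_msbs = width` recovers an exact bitwise majority voter, so the relaxation degrades gracefully toward conventional TMR.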
Over-the-air computation (AirComp), a data aggregation method that improves network efficiency by exploiting the superposition property of wireless channels, has received much attention recently. Meanwhile, orthogonal time frequency space (OTFS) modulation provides strong Doppler resilience and facilitates reliable transmission for high-mobility communications. Hence, in this work, we investigate an OTFS-based AirComp system in the presence of time-frequency dual-selective channels. In particular, we first develop a novel transmission framework for the considered system, where the pilot signal is sent together with the data and channel estimation is implemented based on the echo from the access point to the sensor, thereby reducing the overhead of channel state information (CSI) feedback. Then, based on the CSI estimated in the previous frame, a robust precoding matrix that minimizes the mean square error in the current frame is designed, taking into account the estimation error arising from receiver noise and outdated CSI. Simulation results demonstrate the effectiveness of the proposed robust precoding scheme in comparison with non-robust precoding. The performance gain is more pronounced in the high signal-to-noise-ratio regime with large channel estimation errors.
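The robust-versus-non-robust distinction can be shown in a scalar toy model (a heavily simplified stand-in for the paper's matrix precoder; the channel model h = h_hat + e with known error variance is an assumption): the robust choice minimizes the MSE averaged over the CSI error, while the naive choice simply inverts the outdated estimate.

```python
import numpy as np

def mmse_scalings(h_hat, sigma_e2):
    """Scalar sketch of robust vs non-robust design under CSI error
    h = h_hat + e with E|e|^2 = sigma_e2: the robust gain minimizes
    E_e |h*b - 1|^2, giving b = conj(h_hat) / (|h_hat|^2 + sigma_e2)."""
    b_robust = np.conj(h_hat) / (abs(h_hat) ** 2 + sigma_e2)
    b_naive = 1.0 / h_hat  # inverts the outdated estimate, ignoring the error
    return b_robust, b_naive

# Monte Carlo comparison with an assumed estimate and error variance
rng = np.random.default_rng(0)
h_hat, sigma_e2, n = 1.0 + 0.5j, 0.3, 10 ** 5
e = np.sqrt(sigma_e2 / 2) * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
h = h_hat + e
b_r, b_n = mmse_scalings(h_hat, sigma_e2)
mse_robust = np.mean(np.abs(h * b_r - 1) ** 2)   # analytic: sigma_e2/(|h_hat|^2+sigma_e2)
mse_naive = np.mean(np.abs(h * b_n - 1) ** 2)    # analytic: sigma_e2/|h_hat|^2
```

As in the simulations described above, the gap between the two widens as the estimation-error variance grows, since the naive inversion amplifies the mismatch.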
The container relocation problem is a combinatorial optimisation problem aimed at finding a sequence of container relocations that retrieves all containers in a predetermined order while minimising a given objective. Relocation rules (RRs), which consist of a priority function and a relocation scheme, are heuristics commonly used for solving this problem due to their flexibility and efficiency. Recently, it has become increasingly important to consider energy consumption in many real-world problems. However, no RRs exist for this variant, and they would need to be designed manually. One possibility to circumvent this issue is to apply hyperheuristics to automatically design new RRs. In this study we use genetic programming to obtain priority functions for RRs whose goal is to minimise energy consumption. We compare the proposed approach with a genetic algorithm from the literature used to design the priority function. The obtained results demonstrate that the RRs designed by genetic programming achieve the best performance.
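The priority-function/relocation-scheme split can be sketched as follows (an illustrative skeleton, not the paper's implementation: the priority function below is hand-written, whereas in the study it is an expression tree evolved by genetic programming, and the objective here counts relocations rather than energy).

```python
def retrieve_all(stacks, order, priority, max_height=4):
    """Relocation-scheme skeleton: retrieve containers in `order`
    (stacks are lists, bottom to top); each blocking container is moved
    to the feasible stack maximizing the priority function. Returns the
    number of relocations performed."""
    moves = 0
    for target in order:
        s = next(i for i, st in enumerate(stacks) if target in st)
        while stacks[s][-1] != target:            # relocate the blockers
            c = stacks[s].pop()
            dests = [i for i in range(len(stacks))
                     if i != s and len(stacks[i]) < max_height]
            best = max(dests, key=lambda i: priority(c, stacks[i]))
            stacks[best].append(c)
            moves += 1
        stacks[s].pop()                            # retrieve the target
    return moves

# example hand-written priority: prefer stacks whose earliest-due
# container is due later than c (so c creates no new blockage), then
# prefer lower stacks; a GP hyperheuristic would evolve this function
prio = lambda c, st: (min(st, default=float("inf")) > c, -len(st))
```

Only the priority function changes between objectives, which is exactly why hyperheuristics can retarget RRs to energy consumption without redesigning the scheme.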
Games and simulators can be a valuable platform to execute complex multi-agent, multiplayer, imperfect-information scenarios with significant parallels to military applications: multiple participants manage resources and make decisions that command assets to secure specific areas of a map or neutralize opposing forces. These characteristics have attracted the artificial intelligence (AI) community by supporting development of algorithms with complex benchmarks and the capability to rapidly iterate over new ideas. The success of artificial intelligence algorithms in real-time strategy games such as StarCraft II has also attracted the attention of the military research community aiming to explore similar techniques in military counterpart scenarios. Aiming to bridge the connection between games and military applications, this work discusses past and current efforts on how games and simulators, together with artificial intelligence algorithms, have been adapted to simulate certain aspects of military missions and how they might impact the future battlefield. This paper also investigates how advances in virtual reality and visual augmentation systems open new possibilities in human interfaces with gaming platforms and their military parallels.