Beyond-diagonal reconfigurable intelligent surface (BD-RIS) has recently been proposed as a novel and generalized RIS architecture that offers enhanced wave-manipulation flexibility and large coverage expansion. However, the beyond-diagonal mathematical model in BD-RIS inevitably introduces additional optimization challenges in beamforming design. In this letter, we derive a closed-form solution for the BD-RIS passive beamforming matrix that maximizes the sum of the effective channel gains among users. We further propose a computationally efficient two-stage beamforming framework to jointly design the active beamforming at the base station and the passive beamforming at the BD-RIS to enhance the sum-rate of a BD-RIS aided multi-user multi-antenna network. Numerical results show that our proposed algorithm achieves a higher sum-rate while requiring less computation time than state-of-the-art algorithms. The proposed algorithm paves the way for practical beamforming design in BD-RIS aided wireless networks.
This paper introduces a method for computing eigenvalues and eigenvectors of a generalized Hermitian matrix eigenvalue problem. The work focuses on large-scale eigenvalue problems, where the application of a direct inverse is out of reach. Instead, an explicit time-domain integrator for the corresponding wave problem is combined with proper filtering and a Krylov iteration in order to solve for eigenvalues within a given region of interest. We report results on small-scale model problems to confirm the reliability of the method, as well as the computation of acoustic resonances in a three-dimensional model of a hunting horn to demonstrate its efficiency.
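As a hedged illustration of the projected solve that typically follows such a filtering step, the sketch below applies a Rayleigh-Ritz step to a block of already-filtered trial vectors for a generalized Hermitian eigenproblem K x = λ M x. The time-domain filter itself is abstracted away (the block V is a placeholder), and all names and sizes are illustrative assumptions rather than details taken from the paper.

```python
import numpy as np
from scipy.linalg import eigh, qr

def rayleigh_ritz(K, M, V):
    """Project the pencil (K, M) onto span(V) and solve the small problem.

    K, M : (n, n) Hermitian matrices (M positive definite).
    V    : (n, m) block of filtered trial vectors, m << n.
    Returns Ritz values and Ritz vectors in the original space.
    """
    Q, _ = qr(V, mode='economic')          # orthonormalize the trial block
    Kh = Q.conj().T @ K @ Q                # m x m projected stiffness
    Mh = Q.conj().T @ M @ Q                # m x m projected mass
    theta, Y = eigh(Kh, Mh)                # small dense generalized eigensolve
    return theta, Q @ Y                    # Ritz values / Ritz vectors

# Toy usage with a random SPD pencil; V stands in for time-domain-filtered vectors.
rng = np.random.default_rng(0)
n, m = 200, 12
A = rng.standard_normal((n, n)); K = A @ A.T + n * np.eye(n)
B = rng.standard_normal((n, n)); M = B @ B.T + n * np.eye(n)
V = rng.standard_normal((n, m))
theta, X = rayleigh_ritz(K, M, V)
print(theta[:3])
```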
Digital twin (DT) platforms are increasingly regarded as a promising technology for controlling, optimizing, and monitoring complex engineering systems such as next-generation wireless networks. An important challenge in adopting DT solutions is their reliance on data collected offline, without direct access to the physical environment. This limitation is particularly severe in multi-agent systems, for which conventional multi-agent reinforcement learning (MARL) requires online interactions with the environment. A direct application of online MARL schemes to an offline setting would generally fail due to the epistemic uncertainty entailed by the limited availability of data. In this work, we propose an offline MARL scheme for DT-based wireless networks that integrates distributional RL and conservative Q-learning to address the environment's inherent aleatoric uncertainty and the epistemic uncertainty arising from limited data. To further exploit the offline data, we adapt the proposed scheme to the centralized training decentralized execution framework, allowing joint training of the agents' policies. The proposed MARL scheme, referred to as multi-agent conservative quantile regression (MA-CQR), addresses general risk-sensitive design criteria and is applied to the trajectory planning problem in drone networks, showcasing its advantages.
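For intuition on how distributional quantile regression and a conservative penalty can be combined in a single critic loss, here is a minimal PyTorch-style sketch. The exact MA-CQR objective, network shapes, and multi-agent machinery are not specified in the abstract, so everything below (shapes, weighting, the hypothetical `q_all_actions`/`q_dataset_action` inputs) is an illustrative assumption.

```python
import torch
import torch.nn.functional as F

def quantile_huber_loss(pred_quantiles, target_quantiles, taus, kappa=1.0):
    """Distributional critic loss: pred (B, N), target (B, N'), taus (N,)."""
    td = target_quantiles.unsqueeze(1) - pred_quantiles.unsqueeze(2)      # (B, N, N')
    huber = F.huber_loss(pred_quantiles.unsqueeze(2).expand_as(td),
                         target_quantiles.unsqueeze(1).expand_as(td),
                         reduction='none', delta=kappa)
    weight = torch.abs(taus.view(1, -1, 1) - (td.detach() < 0).float())   # |tau - 1{td<0}|
    return (weight * huber).mean()

def conservative_penalty(q_all_actions, q_dataset_action):
    """CQL-style term: push down Q-values of out-of-distribution actions,
    push up Q-values of actions actually present in the offline dataset."""
    return (torch.logsumexp(q_all_actions, dim=1) - q_dataset_action).mean()

# Schematic total critic loss (alpha is a conservatism coefficient):
# loss = quantile_huber_loss(pred, target, taus) + alpha * conservative_penalty(q_all, q_data)
```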
This study employs a uniform rectangular array (URA) sub-connected hybrid beamforming (SC-HBF) architecture to provide a novel self-interference (SI) suppression scheme in a full-duplex (FD) massive multiple-input multiple-output (mMIMO) system. Our primary objective is to mitigate the strong SI by designing the RF beamforming stages for uplink and downlink transmissions to exploit the spatial degrees of freedom afforded by the large array structure. We propose a non-constant modulus RF beamforming (NCM-BF-SIS) scheme that incorporates gain controllers for both the transmit (Tx) and receive (Rx) RF beamforming stages and optimizes the uplink and downlink beam directions jointly with the gain controller coefficients. To solve this challenging non-convex optimization problem, we propose a swarm intelligence-based algorithmic solution that finds the optimal beam perturbations while also adjusting the Tx/Rx gain controllers to alleviate SI, subject to directivity degradation constraints on the beams. A data-driven analysis based on the SI channel measured in an anechoic chamber shows that the proposed NCM-BF-SIS scheme can suppress SI by around 80 dB in FD mMIMO systems.
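To illustrate the flavor of a swarm-based search over beam perturbations and gain-controller coefficients with a penalized directivity constraint, here is a bare-bones particle swarm optimizer over a generic real-valued design vector. The true NCM-BF-SIS objective, SI channel model, and constraint handling are far more detailed than this; the `si_power` and `directivity_loss` callables in the usage comment are hypothetical placeholders for the system model.

```python
import numpy as np

def pso(objective, dim, n_particles=30, iters=200, bounds=(-1.0, 1.0), seed=0):
    """Minimal particle swarm optimizer for a penalized, non-convex objective."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))      # particle positions
    v = np.zeros_like(x)                             # particle velocities
    pbest, pbest_f = x.copy(), np.array([objective(p) for p in x])
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([objective(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest

# Hypothetical usage: minimize residual SI power plus a penalty whenever the
# beam directivity loss exceeds an allowed budget; both functions would come
# from the measured SI channel and array model.
# best = pso(lambda z: si_power(z) + 1e3 * max(0.0, directivity_loss(z) - budget), dim=16)
```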
In this paper, a novel projection-based, time-divided reduced-order model (ROM) is proposed for dynamic fluid-structure interaction (FSI) problems, where the spatial and temporal dimensions are partitioned as follows: spatially, each kind of variable is separated from the others in terms of its attribution (fluid/structure), its category (velocity/pressure), and its component (horizontal/vertical); temporally, basis functions are adopted in small time windows tailored through extensive numerical trials. By combining the space and time decompositions, the proposed ROM enables prolonged simulations under prescribed accuracy thresholds. Numerical experiments compare the proposed ROM with the corresponding full-order model (FOM) on a benchmark FSI problem with a vibrating elastic beam in a fluid flow, where the dependence of the basis function sets on perturbation parameters is also investigated. Extensive numerical results demonstrate the accuracy and efficiency of the proposed ROM. The developed numerical techniques are dimension-independent and can be seamlessly extended to high-dimensional FSI problems.
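As a rough, assumption-laden illustration of per-window, per-variable basis construction (the paper's exact windowing and variable-separation strategy is more elaborate), the snippet below extracts a POD basis from the snapshots of a single field inside one time window via a truncated SVD.

```python
import numpy as np

def pod_basis(snapshots, energy=0.9999):
    """Compute a POD basis for one variable within one time window.

    snapshots : (n_dof, n_snap) array, columns are solution snapshots of a
                single field (e.g., fluid horizontal velocity) collected
                inside the window.
    energy    : fraction of snapshot energy to retain.
    """
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    cumulative = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(cumulative, energy)) + 1
    return U[:, :r]                      # reduced basis for this window

# In the window, reduced coordinates follow a(t) = Phi.T @ u(t) and the ROM
# reconstruction is u_ROM(t) = Phi @ a(t), with Phi = pod_basis(snapshots).
```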
Large language models (LLMs) have recently demonstrated a remarkable ability to generate code from natural language (NL) prompts. However, in the real world, NL is often too ambiguous to capture the true intent behind programming problems, requiring additional input-output (I/O) specifications. Unfortunately, LLMs can have difficulty aligning their outputs with both the NL prompt and the I/O specification. In this paper, we give a way to mitigate this issue in the context of data science programming, where tasks require explicit I/O specifications for clarity. Specifically, we propose GIFT4Code, a novel approach for the instruction fine-tuning of LLMs with respect to I/O specifications. Our method leverages synthetic data produced by the LLM itself and utilizes execution-derived feedback as a key learning signal. This feedback, in the form of program I/O specifications, is provided to the LLM to facilitate instruction fine-tuning. We evaluated our approach on two challenging data science benchmarks, Arcade and DS-1000. The results demonstrate a significant improvement in the LLM's ability to generate code that is not only executable but also accurately aligned with user specifications, substantially improving the quality of code generation for complex data science tasks.
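To make the idea of execution-derived I/O feedback concrete, here is a small, hypothetical helper that runs a generated pandas snippet and summarizes the result's type, shape, and schema as a textual specification. GIFT4Code's actual specification format, execution sandbox, and fine-tuning pipeline are not described in the abstract, so this is only an assumed shape of the mechanism.

```python
import pandas as pd

def execution_feedback(code: str, env: dict) -> str:
    """Execute generated code and describe its output as an I/O specification.

    code : a generated snippet expected to assign its answer to `result`.
    env  : execution namespace containing the input dataframes.
    """
    local_env = dict(env)
    try:
        exec(code, {"pd": pd}, local_env)          # proper sandboxing omitted for brevity
        out = local_env.get("result")
    except Exception as exc:
        return f"execution error: {exc!r}"
    if isinstance(out, pd.DataFrame):
        cols = ", ".join(f"{c}:{dt}" for c, dt in out.dtypes.items())
        return f"DataFrame with shape {out.shape} and columns [{cols}]"
    return f"{type(out).__name__}: {out!r}"

# Example: feedback for a toy task
df = pd.DataFrame({"city": ["a", "b"], "sales": [3, 5]})
print(execution_feedback("result = df.groupby('city')['sales'].sum()", {"df": df}))
```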
We propose a novel deterministic method for preparing arbitrary quantum states. When our protocol is compiled into CNOT and arbitrary single-qubit gates, it prepares an $N$-dimensional state in depth $O(\log(N))$ and spacetime allocation (a metric that accounts for the fact that oftentimes some ancilla qubits need not be active for the entire circuit) $O(N)$, which are both optimal. When compiled into the $\{\mathrm{H,S,T,CNOT}\}$ gate set, we show that it requires asymptotically fewer quantum resources than previous methods. Specifically, it prepares an arbitrary state up to error $\epsilon$ with optimal depth of $O(\log(N) + \log (1/\epsilon))$ and spacetime allocation $O(N\log(\log(N)/\epsilon))$, improving over $O(\log(N)\log(\log (N)/\epsilon))$ and $O(N\log(N/\epsilon))$, respectively. We illustrate how the reduced spacetime allocation of our protocol enables rapid preparation of many disjoint states with only constant-factor ancilla overhead -- $O(N)$ ancilla qubits are reused efficiently to prepare a product state of $w$ $N$-dimensional states in depth $O(w + \log(N))$ rather than $O(w\log(N))$, achieving effectively constant depth per state. We highlight several applications where this ability would be useful, including quantum machine learning, Hamiltonian simulation, and solving linear systems of equations. We provide quantum circuit descriptions of our protocol, detailed pseudocode, and gate-level implementation examples using Braket.
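Since the abstract mentions gate-level examples in Braket, here is a minimal, generic two-qubit amplitude-encoding circuit written with the Braket SDK. It uses the textbook Ry/CNOT multiplexed-rotation construction rather than the paper's low-depth protocol, and serves only to show the circuit-building API; the target amplitudes are an arbitrary illustrative choice.

```python
import numpy as np
from braket.circuits import Circuit
from braket.devices import LocalSimulator

# Target 2-qubit state with normalized, non-negative amplitudes; qubit 0 is
# taken as the most significant bit of the amplitude index.
amps = np.array([0.5, 0.5, 0.5, 0.5])

# Textbook construction: one Ry splits the |0x>/|1x> branches, then a
# uniformly controlled Ry (decomposed into Ry + CNOT + Ry + CNOT) sets the rest.
theta0 = 2 * np.arccos(np.sqrt(amps[0]**2 + amps[1]**2))
theta_low = 2 * np.arctan2(amps[1], amps[0])    # rotation when qubit 0 is |0>
theta_high = 2 * np.arctan2(amps[3], amps[2])   # rotation when qubit 0 is |1>

circ = Circuit()
circ.ry(0, theta0)
circ.ry(1, (theta_low + theta_high) / 2)
circ.cnot(0, 1)
circ.ry(1, (theta_low - theta_high) / 2)
circ.cnot(0, 1)

counts = LocalSimulator().run(circ, shots=1000).result().measurement_counts
print(circ)
print(counts)   # roughly uniform over 00, 01, 10, 11 for this target state
```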
Learning a universal policy across different robot morphologies can significantly improve learning efficiency and enable zero-shot generalization to unseen morphologies. However, learning a highly performant universal policy requires sophisticated architectures like transformers (TF) that have larger memory and computational cost than simpler multi-layer perceptrons (MLP). To achieve both good performance like TF and high efficiency like MLP at inference time, we propose HyperDistill, which consists of: (1) A morphology-conditioned hypernetwork (HN) that generates robot-wise MLP policies, and (2) A policy distillation approach that is essential for successful training. We show that on UNIMAL, a benchmark with hundreds of diverse morphologies, HyperDistill performs as well as a universal TF teacher policy on both training and unseen test robots, but reduces model size by 6-14 times, and computational cost by 67-160 times in different environments. Our analysis attributes the efficiency advantage of HyperDistill at inference time to knowledge decoupling, i.e., the ability to decouple inter-task and intra-task knowledge, a general principle that could also be applied to improve inference efficiency in other domains.
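As a loose illustration of the hypernetwork idea (the real HyperDistill architecture, morphology encoding, embedding sizes, and distillation loss are not given in the abstract), the sketch below maps a morphology embedding to the weights of a small per-robot MLP policy; all dimensions are invented for the example.

```python
import torch
import torch.nn as nn

class HyperMLPPolicy(nn.Module):
    """Hypernetwork that emits a one-hidden-layer MLP policy per morphology."""

    def __init__(self, morph_dim, obs_dim, act_dim, hidden=64):
        super().__init__()
        self.obs_dim, self.act_dim, self.hidden = obs_dim, act_dim, hidden
        n_params = hidden * obs_dim + hidden + act_dim * hidden + act_dim
        self.hypernet = nn.Sequential(
            nn.Linear(morph_dim, 256), nn.ReLU(), nn.Linear(256, n_params)
        )

    def forward(self, morph_embedding, obs):
        # morph_embedding: (morph_dim,), one morphology at a time in this sketch.
        p = self.hypernet(morph_embedding)                       # flat parameter vector
        i = 0
        W1 = p[i:i + self.hidden * self.obs_dim].view(self.hidden, self.obs_dim)
        i += self.hidden * self.obs_dim
        b1 = p[i:i + self.hidden]; i += self.hidden
        W2 = p[i:i + self.act_dim * self.hidden].view(self.act_dim, self.hidden)
        i += self.act_dim * self.hidden
        b2 = p[i:i + self.act_dim]
        h = torch.tanh(obs @ W1.T + b1)                          # per-robot MLP forward
        return h @ W2.T + b2                                     # action means / logits

# Distillation (schematic): minimize the MSE between this policy's actions and a
# pretrained transformer teacher's actions on the same (morphology, obs) batch.
```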
Non-IID data present a tough challenge for federated learning. In this paper, we explore the novel idea of facilitating pairwise collaborations between clients with similar data. We propose FedAMP, a new method that employs federated attentive message passing to encourage similar clients to collaborate more strongly. We establish the convergence of FedAMP for both convex and non-convex models, and propose a heuristic method to further improve its performance when clients adopt deep neural networks as personalized models. Our extensive experiments on benchmark datasets demonstrate the superior performance of the proposed methods.
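Below is a rough sketch of attention-style aggregation between clients with similar models. The actual FedAMP update rule and its convergence-related constants are in the paper, not the abstract, so the RBF similarity and mixing coefficient here are purely illustrative assumptions.

```python
import numpy as np

def attentive_aggregate(client_params, temperature=1.0, self_weight=0.5):
    """Build a personalized aggregate for each client from similar clients.

    client_params : list of 1-D numpy arrays (flattened model parameters).
    Similarity is an RBF of the parameter distance, so clients with closer
    models exchange more "message" mass with each other.
    """
    P = np.stack(client_params)                      # (n_clients, n_params)
    d2 = ((P[:, None, :] - P[None, :, :]) ** 2).sum(-1)
    sim = np.exp(-d2 / temperature)
    np.fill_diagonal(sim, 0.0)
    weights = sim / sim.sum(axis=1, keepdims=True)   # attention over other clients
    messages = weights @ P                           # weighted neighbors' parameters
    return self_weight * P + (1 - self_weight) * messages

# Each client would then continue local training from its personalized aggregate.
```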
We introduce a generic framework that reduces the computational cost of object detection while retaining accuracy in scenarios where objects of varied sizes appear in high-resolution images. Detection progresses in a coarse-to-fine manner, first on a down-sampled version of the image and then on a sequence of higher-resolution regions identified as likely to improve detection accuracy. Built upon reinforcement learning, our approach consists of a model (R-net) that uses coarse detection results to predict the potential accuracy gain of analyzing a region at a higher resolution, and another model (Q-net) that sequentially selects regions to zoom in on. Experiments on the Caltech Pedestrians dataset show that our approach reduces the number of processed pixels by over 50% without a drop in detection accuracy. The merits of our approach become more significant on a high-resolution test set collected from the YFCC100M dataset, where our approach maintains high detection performance while reducing the number of processed pixels by about 70% and the detection time by over 50%.
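A highly simplified sketch of the coarse-to-fine loop follows. In the actual framework the R-net and Q-net are learned with reinforcement learning; here both are stand-in callables, the candidate region grid is fixed, and the stopping rule is a simple pixel budget, all of which are assumptions for illustration.

```python
def coarse_to_fine_detect(image, detector, downsample, accuracy_gain, region_value,
                          candidate_regions, pixel_budget):
    """Run coarse detection, then zoom into the most promising regions.

    detector      : callable(image_patch) -> list of detections
    accuracy_gain : stand-in for the R-net; scores each region's expected
                    accuracy gain given the coarse detections.
    region_value  : stand-in for the Q-net; values the next region to zoom
                    into given the gains and the pixels already spent.
    """
    coarse = detector(downsample(image))
    detections = list(coarse)
    gains = {r: accuracy_gain(coarse, r) for r in candidate_regions}
    spent, remaining = 0, set(candidate_regions)
    while remaining:
        region = max(remaining, key=lambda r: region_value(gains, r, spent))
        if region_value(gains, region, spent) <= 0 or spent + region.area > pixel_budget:
            break
        detections += detector(image.crop(region.box))   # high-resolution pass
        spent += region.area
        remaining.remove(region)
    return detections
```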
Multi-relation Question Answering is a challenging task, owing to the need for elaborate analysis of questions and reasoning over multiple fact triples in a knowledge base. In this paper, we present a novel model called the Interpretable Reasoning Network that employs an interpretable, hop-by-hop reasoning process for question answering. The model dynamically decides which part of an input question should be analyzed at each hop; predicts a relation that corresponds to the current parsed results; utilizes the predicted relation to update the question representation and the state of the reasoning process; and then drives the next-hop reasoning. Experiments show that our model yields state-of-the-art results on two datasets. More interestingly, the model offers traceable and observable intermediate predictions for reasoning analysis and failure diagnosis.
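As a schematic of the hop-by-hop control flow described above (the modules and update rules are simplified and purely illustrative, not the paper's exact architecture), each hop attends to part of the question, predicts a relation, and updates both the question representation and the reasoning state.

```python
import torch
import torch.nn as nn

class HopByHopReasoner(nn.Module):
    """Illustrative multi-hop reasoner over a fixed relation vocabulary."""

    def __init__(self, dim, n_relations, n_hops=3):
        super().__init__()
        self.n_hops = n_hops
        self.attend = nn.Linear(2 * dim, 1)          # scores question tokens vs. state
        self.rel_clf = nn.Linear(dim, n_relations)   # predicts the relation for this hop
        self.rel_emb = nn.Embedding(n_relations, dim)
        self.state_upd = nn.GRUCell(dim, dim)

    def forward(self, q_tokens, state):
        # q_tokens: (T, dim) question token vectors; state: (1, dim) reasoning state.
        relations = []
        for _ in range(self.n_hops):
            scores = self.attend(torch.cat(
                [q_tokens, state.expand(q_tokens.size(0), -1)], dim=-1)).squeeze(-1)
            attn = torch.softmax(scores, 0).unsqueeze(-1)       # which part to analyze
            focus = (attn * q_tokens).sum(0, keepdim=True)
            rel = self.rel_clf(focus).argmax(-1)                # hop's predicted relation
            relations.append(rel.item())
            q_tokens = q_tokens - attn * self.rel_emb(rel)      # "consume" the analyzed part
            state = self.state_upd(self.rel_emb(rel), state)    # advance the reasoning state
        return relations                                        # traceable per-hop output
```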