In this paper, we study multi-input multi-output (MIMO) beamforming design in an integrated sensing and communication (ISAC) system, in which an ISAC base station (BS) communicates with multiple downlink users while the communication signals are simultaneously reused for sensing multiple targets. The sensing parameters of interest are the angles and delays of the targets, which can be used to localize them. Under this setup, we first derive the Cram\'{e}r-Rao bound (CRB) for angle and delay estimation. We then optimize the transmit beamforming at the BS to minimize the CRB, subject to communication rate and power constraints. In particular, we obtain the optimal solution in closed form in the single-target, single-user case; in the multi-target, multi-user case, we prove the sparsity of the optimal solution, which reduces the computational complexity of the optimization. Numerical results demonstrate that the optimized beamforming yields excellent positioning performance and effectively reduces the number of antennas required at the BS.
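As a hedged illustration of the optimization structure (not the paper's exact model), consider the single-target, single-user case: minimizing the CRB over the transmit covariance R = W W^H amounts to maximizing a Fisher-information term that is linear in R, subject to a power budget and a rate constraint, where the rate constraint log2(1 + h^H R h / sigma^2) >= r can be rewritten as the linear constraint h^H R h >= sigma^2 (2^r - 1). In the sketch below, G and h are hypothetical placeholders for the target-dependent Fisher-information weighting matrix and the user channel.

import numpy as np
import cvxpy as cp

rng = np.random.default_rng(1)
Nt, P, sigma2, r_min = 8, 1.0, 1e-2, 2.0                    # antennas, power budget, noise, rate target
a = np.exp(1j * np.pi * np.arange(Nt) * np.sin(0.3))        # ULA steering vector toward the target
G = np.outer(a, a.conj())                                   # stand-in Fisher-information weighting matrix
h = rng.standard_normal(Nt) + 1j * rng.standard_normal(Nt)  # hypothetical user channel

R = cp.Variable((Nt, Nt), hermitian=True)                   # transmit covariance R = W W^H
fisher = cp.real(cp.trace(G @ R))                           # CRB is proportional to 1 / fisher
user_power = cp.real(h.conj()[None, :] @ R @ h[:, None])    # h^H R h, affine in R
constraints = [R >> 0,
               cp.real(cp.trace(R)) <= P,                   # total transmit power constraint
               user_power >= sigma2 * (2 ** r_min - 1)]     # rate constraint, rewritten linearly
cp.Problem(cp.Maximize(fisher), constraints).solve()
print("minimized CRB proxy:", 1.0 / fisher.value)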
In this paper, we present electromagnetic signal and information theory (ESIT). ESIT is an interdisciplinary scientific discipline that amalgamates electromagnetic theory, signal processing theory, and information theory. ESIT aims to study and design physically consistent communication schemes for the transmission and processing of information in communication networks. In simple terms, ESIT can be defined as physics-aware information theory and signal processing for communications. We consider three relevant problems in contemporary communication theory and show how they can be tackled through the lens of ESIT. Specifically, we focus on (i) the theoretical and practical motivations behind antenna designs based on subwavelength radiating elements and inter-element distances; (ii) the modeling and role of electromagnetic mutual coupling, and the appropriateness of multiport network theory for modeling it; and (iii) the analytical tools for unveiling the performance limits of, and realizing spatial multiplexing in, near-field line-of-sight channels. To exemplify the role played by ESIT and the need for electromagnetic consistency, we consider case studies related to reconfigurable intelligent surfaces and holographic surfaces, and we highlight the inconsistencies of widely used communication models, as opposed to communication models that originate from first electromagnetic principles.
In this paper, we reflect on the educational challenges and research opportunities in running data visualization design activities in the context of large courses. With the increasing number and size of data visualization courses, we need to better understand approaches to scaling our teaching efforts. We draw on experiences organizing and facilitating activities, primarily based on one instance of a master's course given to about 130 students. We provide a detailed account of the course, with particular focus on the purpose, structure, and outcome of six two-hour design activities. Based on this, we reflect on three aspects of the course: first, how the scale of the course led us to thoroughly plan, evaluate, and revise communication between students, teaching assistants, and lecturers; second, how we designed learning scaffolds through the design activities, and the reflections we received from students on this matter; and finally, the diversity of the students who followed the course, the visualization exercises we used, the projects they worked on, and when to key in on simple, boring problems and data sets. Our paper thus contributes discussions about balancing topical diversity, scaling courses to many students, and problem-based learning.
In this paper, we propose a novel and general framework to construct tight framelet systems on graphs with localized supports based on hierarchical partitions. Our construction provides parametrized graph framelet systems with great generality based on partition trees, by which we are able to find the size of a low-dimensional subspace that best fits the low-rank structure of a family of signals. The orthogonal decomposition of subspaces provides a key ingredient for the definition of "generalized vanishing moments" for graph framelets. In a data-adaptive setting, the graph framelet systems can be learned by solving an optimization problem on Stiefel manifolds with respect to our parametrization. Moreover, such graph framelet systems can be further improved by solving a subsequent optimization problem on Stiefel manifolds, aiming to provide the utmost sparsity for a given family of graph signals. Experimental results on nonlinear approximation and denoising tasks demonstrate the superior performance of our learned graph framelet systems.
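As a minimal sketch of the kind of Stiefel-manifold optimization this relies on (not the paper's framelet construction), the following finds an orthonormal basis V, i.e., a point on the Stiefel manifold, that best fits a family of approximately low-rank signals X via Riemannian gradient ascent with a QR retraction; all dimensions and step sizes are illustrative.

import numpy as np

rng = np.random.default_rng(0)
n, k, m = 60, 4, 200                                    # signal length, subspace size, number of signals
X = (rng.standard_normal((n, k)) @ rng.standard_normal((k, m))
     + 0.01 * rng.standard_normal((n, m)))              # approximately low-rank signal family

V = np.linalg.qr(rng.standard_normal((n, k)))[0]        # initial point with V^T V = I
for _ in range(300):
    G = 2.0 * X @ (X.T @ V)                             # Euclidean gradient of ||X^T V||_F^2
    rgrad = G - V @ (V.T @ G + G.T @ V) / 2.0           # projection onto the tangent space at V
    V = np.linalg.qr(V + 0.1 * rgrad / np.linalg.norm(rgrad))[0]   # QR retraction back to the manifold
print("captured energy ratio:", np.linalg.norm(X.T @ V) ** 2 / np.linalg.norm(X) ** 2)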
There has been recent interest in first-order methods for linear programming (LP). In this paper, we propose a stochastic algorithm that uses variance reduction and restarts to solve sharp primal-dual problems such as LP. We show that the proposed stochastic method exhibits a linear convergence rate for solving sharp instances with high probability. In addition, we propose an efficient coordinate-based stochastic oracle for unconstrained bilinear problems, which has $\mathcal O(1)$ per-iteration cost and improves the complexity of existing deterministic and stochastic algorithms. Finally, we show that the obtained linear convergence rate is nearly optimal (up to $\log$ terms) for a wide class of stochastic primal-dual methods.
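The following is a simplified sketch of the restart mechanism on the deterministic primal-dual (PDHG) skeleton, applied to a small LP written in saddle-point form min_{x >= 0} max_y c^T x + y^T (b - A x); the proposed method additionally replaces the exact matrix-vector products with a variance-reduced, coordinate-based stochastic oracle, which this toy version omits.

import numpy as np

rng = np.random.default_rng(0)
m, n = 20, 50
A = rng.standard_normal((m, n))
b = A @ rng.uniform(0.0, 1.0, n)           # constructed so the LP is primal feasible
c = rng.uniform(0.0, 1.0, n)               # nonnegative cost keeps the LP bounded below

tau = sigma = 0.9 / np.linalg.norm(A, 2)   # step sizes with tau * sigma * ||A||^2 < 1
x, y = np.zeros(n), np.zeros(m)
for epoch in range(40):                    # restart after every block of inner iterations
    xs, ys, inner = [], [], 100
    for _ in range(inner):
        x_new = np.maximum(0.0, x - tau * (c - A.T @ y))   # projected primal step
        y = y + sigma * (b - A @ (2 * x_new - x))          # dual step with extrapolation
        x = x_new
        xs.append(x)
        ys.append(y)
    x, y = np.mean(xs, axis=0), np.mean(ys, axis=0)        # restart from the epoch average
print("primal infeasibility:", np.linalg.norm(A @ x - b))
print("objective value:", c @ x)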
In this paper, we investigate the capacity of a multiple-input multiple-output (MIMO) optical intensity channel (OIC) under per-antenna peak- and average-intensity constraints. We first consider the case where the average intensities of the inputs are required to equal preassigned constants, due to requirements on illumination quality and color temperature. When the channel graph of the MIMO OIC is strongly connected, we prove that the strongest eigen-subchannel must have positive channel gains, which simplifies the capacity analysis. We then derive various capacity bounds by utilizing linear precoding, a generalized entropy power inequality, and QR decomposition. These bounds are numerically verified to approach the capacity in the low and high signal-to-noise ratio regimes. Specifically, when the channel rank is one less than the number of transmit antennas, we derive an equivalent capacity expression from the perspective of convex geometry, and new lower bounds are derived based on this equivalent expression. Finally, the developed results are extended to the more general case where the average intensities of the inputs are required to be no larger than preassigned constants.
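The structural claim about the strongest eigen-subchannel can be illustrated numerically: for a nonnegative channel matrix whose channel graph is strongly connected, the Perron-Frobenius theorem applied to H^T H and H H^T yields entrywise positive top singular vectors. The random matrix below is a stand-in, not a measured OIC channel.

import numpy as np

rng = np.random.default_rng(0)
H = rng.uniform(0.1, 1.0, size=(4, 6))                   # strictly positive intensity channel gains
U, s, Vt = np.linalg.svd(H)
u1, v1 = U[:, 0], Vt[0, :]                                # strongest eigen-subchannel
u1, v1 = np.sign(u1.sum()) * u1, np.sign(v1.sum()) * v1   # resolve the SVD sign ambiguity
print("strongest subchannel has all-positive gains:", bool((u1 > 0).all() and (v1 > 0).all()))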
The paper describes a system that uses large language model (LLM) technology to support the automatic learning of new entries in an intelligent agent's semantic lexicon. The process is bootstrapped by an existing non-toy lexicon and a natural language generator that converts formal, ontologically grounded representations of meaning into natural language sentences. The learning method involves a sequence of LLM requests and includes an automatic quality control step. To date, this learning method has been applied to learning multiword expressions whose meanings are equivalent to those of transitive verbs in the agent's lexicon. The experiment demonstrates the benefits of a hybrid learning architecture that integrates knowledge-based methods and resources with both traditional data analytics and LLMs.
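A schematic sketch of such a learning loop is shown below; ask_llm and generate_sentence are hypothetical placeholders for the LLM endpoint and the agent's natural language generator, and verb_entry stands in for an existing lexicon entry, none of which are taken from the paper's system.

def learn_mwe_entries(verb_entry, generate_sentence, ask_llm, num_candidates=10):
    # Bootstrap: render the verb's ontologically grounded meaning representation
    # as a plain-English sentence the LLM can work with.
    example = generate_sentence(verb_entry.meaning_representation)

    # Sequence of LLM requests: elicit multiword expressions whose meanings are
    # equivalent to the transitive verb in that sentence.
    prompt = (f"List {num_candidates} multiword expressions that could replace the verb "
              f"'{verb_entry.headword}' in: \"{example}\"")
    candidates = [c.strip("-* ") for c in ask_llm(prompt).splitlines() if c.strip()]

    # Automatic quality control: keep a candidate only if a follow-up request
    # judges the substituted sentence to preserve the original meaning.
    accepted = []
    for mwe in candidates:
        verdict = ask_llm(f"Does replacing '{verb_entry.headword}' with '{mwe}' in "
                          f"\"{example}\" preserve the meaning? Answer yes or no.")
        if verdict.strip().lower().startswith("yes"):
            accepted.append(mwe)
    return accepted   # each accepted MWE would inherit the verb's meaning representation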
In this paper, we propose a novel Feature Decomposition and Reconstruction Learning (FDRL) method for effective facial expression recognition. We view expression information as the combination of shared information (expression similarities) across different expressions and unique information (expression-specific variations) for each expression. More specifically, FDRL mainly consists of two crucial networks: a Feature Decomposition Network (FDN) and a Feature Reconstruction Network (FRN). In particular, FDN first decomposes the basic features extracted from a backbone network into a set of facial action-aware latent features to model expression similarities. Then, FRN captures the intra-feature and inter-feature relationships of the latent features to characterize expression-specific variations and reconstructs the expression feature. To this end, two modules, an intra-feature relation modeling module and an inter-feature relation modeling module, are developed in FRN. Experimental results on both in-the-lab databases (including CK+, MMI, and Oulu-CASIA) and in-the-wild databases (including RAF-DB and SFEW) show that the proposed FDRL method consistently achieves higher recognition accuracy than several state-of-the-art methods. This clearly highlights the benefit of feature decomposition and reconstruction for expression classification.
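A schematic sketch of the decompose-then-reconstruct idea is given below; it is not the paper's exact FDN/FRN architecture, and the layer sizes and relation-modeling operations are illustrative assumptions: backbone features are projected into several latent features (FDN), the latent features are re-weighted and mixed by learned intra- and inter-feature relations (FRN), and the result is pooled into an expression feature for classification.

import torch
import torch.nn as nn

class FDRLSketch(nn.Module):
    def __init__(self, feat_dim=512, latent_dim=64, num_latent=8, num_classes=7):
        super().__init__()
        # FDN stand-in: one projection per latent (shared-information) feature
        self.decompose = nn.ModuleList(
            [nn.Linear(feat_dim, latent_dim) for _ in range(num_latent)])
        # FRN stand-in: intra-feature importance and inter-feature relation mixing
        self.intra = nn.Linear(latent_dim, 1)
        self.inter = nn.Linear(num_latent, num_latent)
        self.classifier = nn.Linear(latent_dim, num_classes)

    def forward(self, x):                                  # x: (batch, feat_dim) backbone features
        lat = torch.stack([p(x) for p in self.decompose], dim=1)        # (B, L, D) latent features
        w = torch.sigmoid(self.intra(lat))                              # per-latent importance weights
        mixed = self.inter((lat * w).transpose(1, 2)).transpose(1, 2)   # mix latent features
        expr = mixed.mean(dim=1)                                        # reconstructed expression feature
        return self.classifier(expr)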
Non-IID data pose a tough challenge for federated learning. In this paper, we explore the novel idea of facilitating pairwise collaborations between clients with similar data. We propose FedAMP, a new method that employs federated attentive message passing to encourage similar clients to collaborate more. We establish the convergence of FedAMP for both convex and non-convex models, and propose a heuristic method to further improve its performance when clients adopt deep neural networks as personalized models. Our extensive experiments on benchmark data sets demonstrate the superior performance of the proposed methods.
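A minimal sketch of the attentive message-passing idea (not the exact FedAMP update rule): each client's personalized aggregate is a convex combination of all client models, with weights that grow as two clients' parameters become more similar, so clients with similar data implicitly collaborate more. The temperature and self-weight alpha below are illustrative choices.

import numpy as np

def attentive_aggregate(models, temperature=1.0, alpha=0.5):
    # models: list of flattened parameter vectors, one per client
    W = np.stack(models)                                      # (clients, params)
    d2 = ((W[:, None, :] - W[None, :, :]) ** 2).sum(-1)       # pairwise squared distances
    logits = -d2 / temperature                                # closer models get larger weights
    np.fill_diagonal(logits, -np.inf)                         # the self-weight is handled by alpha
    xi = np.exp(logits - logits.max(axis=1, keepdims=True))
    xi = xi / xi.sum(axis=1, keepdims=True)                   # attention weights over other clients
    return alpha * W + (1 - alpha) * (xi @ W)                 # one attentive message-passing step

rng = np.random.default_rng(0)
clients = [rng.standard_normal(100) for _ in range(5)]        # toy flattened client models
personalized = attentive_aggregate(clients)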
In this paper, we propose applying a meta-learning approach to low-resource automatic speech recognition (ASR). We formulate ASR for different languages as different tasks and meta-learn the initialization parameters from many pretraining languages to achieve fast adaptation to an unseen target language, via the recently proposed model-agnostic meta-learning (MAML) algorithm. We evaluate the proposed approach using six languages as pretraining tasks and four languages as target tasks. Preliminary results show that the proposed method, MetaASR, significantly outperforms the state-of-the-art multitask pretraining approach on all target languages with different combinations of pretraining languages. In addition, thanks to MAML's model-agnostic property, this paper also opens a new research direction of applying meta-learning to more speech-related applications.
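A toy first-order sketch of the meta-learning loop is given below; the paper applies MAML to ASR models, whereas the linear-regression "tasks" here are illustrative stand-ins for per-language ASR training, and the first-order approximation replaces MAML's exact second-order meta-gradient.

import torch

def inner_adapt(theta, xs, ys, lr=0.1, steps=3):
    # Adapt a copy of the shared initialization on a task's support data.
    w = theta.clone().requires_grad_(True)
    for _ in range(steps):
        loss = ((xs @ w - ys) ** 2).mean()
        g, = torch.autograd.grad(loss, w)
        w = (w - lr * g).detach().requires_grad_(True)
    return w

torch.manual_seed(0)
dim, meta_lr = 5, 0.05
theta = torch.zeros(dim)                                          # shared initialization (meta-parameters)
tasks = [(torch.randn(20, dim), torch.randn(20)) for _ in range(6)]   # toy stand-ins for languages

for _ in range(100):
    meta_grad = torch.zeros(dim)
    for xs, ys in tasks:
        w = inner_adapt(theta, xs[:10], ys[:10])                  # fast adaptation on the support split
        query_loss = ((xs[10:] @ w - ys[10:]) ** 2).mean()        # evaluate on the query split
        g, = torch.autograd.grad(query_loss, w)                   # first-order meta-gradient
        meta_grad += g / len(tasks)
    theta = theta - meta_lr * meta_grad                           # move the initialization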
In this paper, we propose jointly learning attention and recurrent neural network (RNN) models for multi-label classification. While approaches based on either model exist (e.g., for the task of image captioning), training such existing network architectures typically requires pre-defined label sequences. For multi-label classification, it is desirable to have a robust inference process so that prediction errors do not propagate and degrade performance. Our proposed model uniquely integrates attention and Long Short-Term Memory (LSTM) models, which not only addresses the above problem but also allows one to identify visual objects of interest with varying sizes without prior knowledge of a particular label ordering. More importantly, label co-occurrence information can be jointly exploited by our LSTM model. Finally, by advancing the technique of beam search, prediction of multiple labels can be efficiently achieved by our proposed network model.
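A minimal beam-search sketch for sequential multi-label prediction is given below; step_logprobs is a hypothetical stand-in for the attention+LSTM decoder's per-step log-probabilities over labels, and the fixed toy distribution at the end only exercises the search procedure.

import numpy as np

def beam_search(step_logprobs, num_labels, beam_size=3, max_labels=5, stop_id=0):
    beams = [([], 0.0)]                                    # (label sequence, cumulative log-prob)
    for _ in range(max_labels):
        candidates = []
        for seq, score in beams:
            if seq and seq[-1] == stop_id:
                candidates.append((seq, score))            # keep finished hypotheses as they are
                continue
            logp = step_logprobs(seq)                      # shape (num_labels,): next-label log-probs
            for lab in range(num_labels):
                if lab in seq:                             # each label is predicted at most once
                    continue
                candidates.append((seq + [lab], score + float(logp[lab])))
        beams = sorted(candidates, key=lambda b: b[1], reverse=True)[:beam_size]
    return [lab for lab in beams[0][0] if lab != stop_id]  # best sequence, stop symbol removed

# Toy usage: a fixed distribution standing in for the decoder output; label 0 is the stop symbol.
probs = np.array([0.05, 0.4, 0.3, 0.15, 0.1])
print(beam_search(lambda seq: np.log(probs), num_labels=5))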