Developing novel research questions (RQs) often requires extensive literature reviews, especially in interdisciplinary fields. Leveraging Large Language Models (LLMs), we built an LLM-based agent system, called CoQuest, that supports RQ development through human-AI co-creation. We conducted an experiment with 20 participants to examine the effects of two interaction designs: breadth-first and depth-first RQ generation. The results showed that participants found the breadth-first approach more creative and trustworthy upon task completion. However, during the task, they rated the RQs generated through the depth-first approach as more creative. We also discovered that AI processing delays allowed users to contemplate multiple RQs simultaneously, resulting in more generated RQs and an increased sense of perceived control. Our work makes both theoretical and practical contributions by proposing and assessing a mental model for human-AI co-creation of RQs.
Generalized spatial modulation (GSM) is a novel multiple-antenna technique offering flexibility among spectral efficiency, energy efficiency, and the cost of RF chains. In this paper, a novel class of sequence sets, called enhanced cross Z-complementary sets (E-CZCSs), is proposed for efficient training sequence design in broadband GSM systems. Specifically, an E-CZCS consists of multiple CZCSs possessing front-end and tail-end zero-correlation zones (ZCZs), whereby any two distinct CZCSs have a tail-end ZCZ when a novel type of cross-channel aperiodic correlation sum is considered. The theoretical upper bound on the ZCZ width is first derived, upon which optimal E-CZCSs with flexible parameters are constructed. For optimal channel estimation over frequency-selective channels, we introduce and evaluate a novel GSM training framework employing the proposed E-CZCSs.
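To make the complementary property concrete, here is a minimal numeric sketch using a classic length-4 Golay complementary pair rather than an actual E-CZCS construction; it verifies that the sum of aperiodic autocorrelations vanishes at every nonzero shift, which is the property that ZCZ designs generalize.

```python
import numpy as np

def aperiodic_corr(a, b, shift):
    """Aperiodic cross-correlation of sequences a and b at a non-negative shift."""
    n = len(a)
    return sum(a[i + shift] * np.conj(b[i]) for i in range(n - shift))

# A length-4 Golay complementary pair: the *sum* of their aperiodic
# autocorrelations is zero at every nonzero shift.
a = np.array([1, 1, 1, -1])
b = np.array([1, 1, -1, 1])

for shift in range(1, 4):
    total = aperiodic_corr(a, a, shift) + aperiodic_corr(b, b, shift)
    print(f"shift {shift}: sum of autocorrelations = {total}")
# shift 1: 0, shift 2: 0, shift 3: 0
```

E-CZCSs generalize this idea to zero-correlation zones at both the front end and the tail end, and across distinct constituent sets under the cross-channel correlation sum.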
Envisioned as one of the most promising technologies, holographic multiple-input multiple-output (H-MIMO) has recently attracted notable research interest for its great potential in expanding wireless possibilities and approaching fundamental wireless limits. Empowered by nearly continuous, large, and energy-efficient surfaces with powerful electromagnetic (EM) wave control capabilities, H-MIMO opens up the opportunity for signal processing in the more fundamental EM domain, paving the way for holographic-imaging-level communications that support the extremely high spectral and energy efficiency demanded by future networks. In this article, we develop a generalized EM-domain near-field channel model and study the capacity limit of point-to-point H-MIMO systems equipped with arbitrarily placed surfaces in a line-of-sight (LoS) environment. Two effective and computationally efficient channel models are derived from their integral counterpart: one has a more sophisticated formula but is more accurate, while the other is more concise at a slight cost in precision. Furthermore, we unveil the capacity limit using our channel model and derive a tight upper bound based upon an elaborately built analytical framework. Our result reveals that the capacity limit grows logarithmically with the product of the transmit element area, the receive element area, and the combined effects of $1/d_{mn}^{2}$, $1/d_{mn}^{4}$, and $1/d_{mn}^{6}$ over all transmit and receive antenna elements, where $d_{mn}$ denotes the distance between transmit element $n$ and receive element $m$. Numerical evaluations validate the effectiveness of our channel models and show only a slight disparity between the upper bound and the exact capacity, which is beneficial for predicting practical system performance.
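The reported scaling law can be summarized in one hedged formula, where the proportionality constant $\kappa$ and the per-term coefficients $c_1, c_2, c_3$ are illustrative placeholders rather than the paper's exact bound:

```latex
C \;\lesssim\; \log_{2}\!\Big( 1 + \kappa \, A_t A_r \sum_{m,n}
    \big( c_1\, d_{mn}^{-2} + c_2\, d_{mn}^{-4} + c_3\, d_{mn}^{-6} \big) \Big)
```

Here $A_t$ and $A_r$ denote the transmit and receive element areas, and the sum runs over all transmit-receive element pairs.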
We present a novel technique to discover and exploit weak causal signals directly from images via neural networks for classification purposes. In this way, we model how the presence of a feature in one part of the image affects the appearance of another feature in a different part of the image. Our method consists of a convolutional neural network backbone and a causality-factors extractor module, which computes weights to enhance each feature map according to its causal influence in the scene. We developed different architecture variants and empirically evaluated all of our models on two public datasets of prostate MRI images and breast histopathology slides for cancer diagnosis. To confirm our quantitative results, we conducted ablation studies and investigated the explainability of our models via class activation maps. Our findings show that our lightweight block extracts meaningful information and improves overall classification, while producing more robust predictions that focus on the relevant parts of the image. This is crucial in medical imaging, where accurate and reliable classifications are essential for effective diagnosis and treatment planning.
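A minimal sketch of what such a causality-factors module could look like; the co-activation estimator below is an assumption for illustration, not the paper's exact formulation.

```python
import torch
import torch.nn as nn

class CausalityFactorsExtractor(nn.Module):
    """Illustrative sketch: score each feature map by how strongly it
    co-activates with the others, then use the scores as multiplicative
    attention weights over the maps (an assumption, not the paper's
    exact estimator)."""

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        b, c, h, w = feats.shape
        # One scalar response per feature map (max pooling over space).
        m = feats.flatten(2).amax(dim=2).clamp(min=1e-6)      # (B, C)
        # Pairwise co-activation of maps i and j, normalized by total response.
        joint = m.unsqueeze(2) * m.unsqueeze(1)               # (B, C, C)
        coact = joint / m.sum(dim=1, keepdim=True).unsqueeze(2)
        # Aggregate each map's co-activation into a single attention weight.
        weights = torch.softmax(coact.mean(dim=1), dim=1)     # (B, C)
        # Rescale by C so the average map keeps roughly unit scale.
        return feats * weights.view(b, c, 1, 1) * c

x = torch.randn(2, 64, 16, 16)
out = CausalityFactorsExtractor()(x)
print(out.shape)  # torch.Size([2, 64, 16, 16])
```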
Product reviews often contain a large number of implicit aspects and object-attribute co-existence cases. Unfortunately, many existing studies in Aspect-Based Sentiment Analysis (ABSA) have overlooked this issue, which can make it difficult to extract opinions comprehensively and fairly. In this paper, we propose a new task called Entity-Aspect-Opinion-Sentiment Quadruple Extraction (EASQE), which aims to hierarchically decompose aspect terms into entities and aspects to avoid information loss, non-exclusive annotations, and opinion misunderstandings in ABSA tasks. To facilitate research on this new task, we have constructed four datasets (Res14-EASQE, Res15-EASQE, Res16-EASQE, and Lap14-EASQE) based on the SemEval Restaurant and Laptop datasets. We have also proposed a novel two-stage, sequence-tagging-based Trigger-Opinion framework as the baseline for the EASQE task. Empirical evaluations show that our Trigger-Opinion framework can generate satisfactory EASQE results and can also be applied to other ABSA tasks, significantly outperforming state-of-the-art methods. We have made the four datasets and the source code of Trigger-Opinion publicly available to facilitate further research in this area.
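A hypothetical example of the quadruple structure (the sentence and all labels are invented for illustration, not drawn from the datasets):

```python
# Hypothetical review sentence illustrating an EASQE output.
sentence = "The pizza crust was wonderfully crispy."

# Classic ABSA would treat "pizza crust" as one opaque aspect term; EASQE
# decomposes it into an entity and an aspect before pairing it with the
# opinion span and sentiment polarity.
quadruple = {
    "entity":    "pizza",                # the object being reviewed
    "aspect":    "crust",                # the attribute of that object
    "opinion":   "wonderfully crispy",   # the opinion span
    "sentiment": "positive",             # the polarity
}
```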
This article presents a method for learning well-coordinated Human-Robot Interaction (HRI) from Human-Human Interactions (HHI). We devise a hybrid approach using Hidden Markov Models (HMMs) as the latent space priors for a Variational Autoencoder to model a joint distribution over the interacting agents. We leverage the interaction dynamics learned from HHI to learn HRI and incorporate the conditional generation of robot motions from human observations into the training, thereby predicting more accurate robot trajectories. The generated robot motions are further adapted with Inverse Kinematics to ensure the desired physical proximity with a human, combining the ease of joint-space learning with accurate task-space reachability. For contact-rich interactions, we modulate the robot's stiffness using HMM segmentation for a compliant interaction. We verify the effectiveness of our approach, deployed on a humanoid robot, via a user study. Our method generalizes well to various humans despite being trained on data from just two humans. We find that users perceive our method as more human-like, timely, and accurate, and prefer it over other baselines.
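A heavily simplified sketch of the core idea of an HMM-state-conditioned latent prior for a VAE over joint human-robot observations; all dimensions, networks, and the per-state Gaussian parameterization are illustrative assumptions.

```python
import torch
import torch.nn as nn

class HMMPriorVAE(nn.Module):
    """Minimal sketch: a VAE whose latent prior is a per-HMM-state Gaussian,
    trained on joint human-robot observations so that robot motions can
    later be generated conditionally from the human side."""

    def __init__(self, obs_dim=12, latent_dim=8, n_states=5):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(2 * obs_dim, 64), nn.ReLU(),
                                     nn.Linear(64, 2 * latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(),
                                     nn.Linear(64, 2 * obs_dim))
        # Per-HMM-state Gaussian prior parameters over the latent space.
        self.prior_mu = nn.Parameter(torch.zeros(n_states, latent_dim))
        self.prior_logvar = nn.Parameter(torch.zeros(n_states, latent_dim))

    def forward(self, human, robot, state):
        x = torch.cat([human, robot], dim=-1)
        mu, logvar = self.encoder(x).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
        recon = self.decoder(z)  # reconstructs the joint human+robot vector
        # KL between the posterior and the prior of the active HMM state.
        pm, plv = self.prior_mu[state], self.prior_logvar[state]
        kl = 0.5 * ((logvar.exp() + (mu - pm) ** 2) / plv.exp()
                    + plv - logvar - 1).sum(-1)
        return recon, kl.mean()

model = HMMPriorVAE()
recon, kl = model(torch.randn(4, 12), torch.randn(4, 12),
                  torch.randint(0, 5, (4,)))  # batch of 4, random HMM states
```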
This article presents the affordances that Generative Artificial Intelligence can have in the context of disinformation, one of the major threats to our digitalized society. We present a research framework to generate customized agent-based social networks for disinformation simulations that would enable understanding and evaluation of the phenomenon, and we discuss open challenges.
Link prediction on knowledge graphs (KGs) is a key research topic. Previous work has mainly focused on binary relations, paying less attention to higher-arity relations even though they are ubiquitous in real-world KGs. This paper considers link prediction upon n-ary relational facts and proposes a graph-based approach to this task. The key to our approach is to represent the n-ary structure of a fact as a small heterogeneous graph and model this graph with edge-biased fully-connected attention. The fully-connected attention captures universal inter-vertex interactions, while the edge-aware attentive biases specifically encode the graph structure and its heterogeneity. In this fashion, our approach fully models global and local dependencies in each n-ary fact, and hence can more effectively capture the associations therein. Extensive evaluation verifies the effectiveness and superiority of our approach. It performs substantially and consistently better than the current state of the art across a variety of n-ary relational benchmarks. Our code is publicly available.
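A minimal sketch of edge-biased fully-connected attention as described; the sizes and the scalar-bias-per-edge-type scheme are illustrative assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

class EdgeBiasedAttention(nn.Module):
    """Sketch: every vertex attends to every other (the universal term),
    and a learned bias per edge type is added to the attention logits
    (the edge-aware term encoding structure and heterogeneity)."""

    def __init__(self, dim=64, n_edge_types=8):
        super().__init__()
        self.qkv = nn.Linear(dim, 3 * dim)
        self.edge_bias = nn.Embedding(n_edge_types, 1)  # scalar bias per type
        self.scale = dim ** -0.5

    def forward(self, x, edge_types):
        # x: (n_vertices, dim); edge_types: (n_vertices, n_vertices) int ids
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        logits = (q @ k.transpose(0, 1)) * self.scale             # universal
        logits = logits + self.edge_bias(edge_types).squeeze(-1)  # edge-aware
        return torch.softmax(logits, dim=-1) @ v

n = 5  # e.g., the vertices of one n-ary fact's heterogeneous graph
attn = EdgeBiasedAttention()
out = attn(torch.randn(n, 64), torch.randint(0, 8, (n, n)))
print(out.shape)  # torch.Size([5, 64])
```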
Many tasks in natural language processing can be viewed as multi-label classification problems. However, most existing models are trained with the standard cross-entropy loss function and use a fixed prediction policy (e.g., a threshold of 0.5) for all labels, which completely ignores the complexity of, and dependencies among, different labels. In this paper, we propose a meta-learning method to capture these complex label dependencies. More specifically, our method utilizes a meta-learner to jointly learn the training policies and prediction policies for different labels. The training policies are used to train the classifier with the cross-entropy loss function, and the prediction policies are then applied at inference time. Experimental results on fine-grained entity typing and text classification demonstrate that our proposed method can obtain more accurate multi-label classification results.
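A minimal sketch of the difference between a fixed prediction policy and learned per-label policies; the threshold values are illustrative, and the meta-learner that would produce them is not shown.

```python
import torch

# Logits from a multi-label classifier for a batch of 2 examples, 4 labels.
probs = torch.sigmoid(torch.tensor([[2.1, -0.3, 0.6, -1.2],
                                    [0.2,  1.5, -0.4, 0.9]]))

# Standard policy: one fixed threshold for every label.
fixed = probs > 0.5

# Learned policy (illustrative values): a separate threshold per label,
# reflecting per-label difficulty and dependencies.
per_label_thresholds = torch.tensor([0.5, 0.35, 0.7, 0.45])
learned = probs > per_label_thresholds

print(fixed ^ learned)  # labels whose decision changes under the learned policy
```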
We propose a novel two-layered attention network based on a Bidirectional Long Short-Term Memory (BiLSTM) for sentiment analysis. The two-layered attention network takes advantage of external knowledge bases to improve sentiment prediction, using knowledge graph embeddings generated from WordNet. We build our model by combining the two-layered attention network with a supervised model based on Support Vector Regression (SVR) using a Multilayer Perceptron network. We evaluate our model on the benchmark dataset of SemEval 2017 Task 5. Experimental results show that the proposed model surpasses the top system of SemEval 2017 Task 5, improving on the state of the art by 1.7 and 3.7 points for sub-tracks 1 and 2, respectively.
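One plausible shape for such a model, sketched minimally; the fusion of knowledge graph embeddings via the second attention layer, and all dimensions, are assumptions for illustration.

```python
import torch
import torch.nn as nn

class TwoLayeredAttention(nn.Module):
    """Illustrative sketch: a BiLSTM followed by two stacked attention
    layers, the second of which attends over hidden states concatenated
    with external knowledge-graph embeddings."""

    def __init__(self, emb_dim=100, hid=64, kg_dim=100):
        super().__init__()
        self.bilstm = nn.LSTM(emb_dim, hid, bidirectional=True, batch_first=True)
        self.attn1 = nn.Linear(2 * hid, 1)           # first attention layer
        self.attn2 = nn.Linear(2 * hid + kg_dim, 1)  # knowledge-aware layer

    def forward(self, words, kg_emb):
        h, _ = self.bilstm(words)                    # (B, T, 2*hid)
        a1 = torch.softmax(self.attn1(h), dim=1)
        h = h * a1                                   # word-level weighting
        hk = torch.cat([h, kg_emb.unsqueeze(1).expand(-1, h.size(1), -1)], -1)
        a2 = torch.softmax(self.attn2(hk), dim=1)
        return (h * a2).sum(dim=1)                   # sentence vector

feat = TwoLayeredAttention()(torch.randn(2, 10, 100), torch.randn(2, 100))
print(feat.shape)  # torch.Size([2, 128])
```

The resulting sentence vector would then feed the SVR-based supervised component.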
Deep learning (DL) based semantic segmentation methods have been providing state-of-the-art performance in the last few years. More specifically, these techniques have been successfully applied to medical image classification, segmentation, and detection tasks. One deep learning technique, U-Net, has become one of the most popular for these applications. In this paper, we propose a Recurrent Convolutional Neural Network (RCNN) based on U-Net as well as a Recurrent Residual Convolutional Neural Network (RRCNN) based on U-Net, named RU-Net and R2U-Net respectively. The proposed models utilize the power of U-Net, residual networks, and RCNNs. These architectures offer several advantages for segmentation tasks. First, a residual unit helps when training deep architectures. Second, feature accumulation with recurrent residual convolutional layers ensures better feature representation for segmentation tasks. Third, they allow us to design a better U-Net architecture with the same number of network parameters but better performance for medical image segmentation. The proposed models are tested on three benchmark datasets: blood vessel segmentation in retinal images, skin cancer segmentation, and lung lesion segmentation. The experimental results show superior performance on segmentation tasks compared to equivalent models, including U-Net and residual U-Net (ResU-Net).
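A minimal PyTorch sketch of the recurrent residual unit described above; the recurrence count t, channel sizes, and normalization choices are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class RecurrentConv(nn.Module):
    """Recurrent convolutional layer: the same convolution is applied t
    times, each time adding the original input back in (feature accumulation)."""
    def __init__(self, ch, t=2):
        super().__init__()
        self.t = t
        self.conv = nn.Sequential(nn.Conv2d(ch, ch, 3, padding=1),
                                  nn.BatchNorm2d(ch), nn.ReLU(inplace=True))

    def forward(self, x):
        h = self.conv(x)
        for _ in range(self.t):
            h = self.conv(x + h)   # recurrent feature accumulation
        return h

class RRCNNBlock(nn.Module):
    """Recurrent residual unit usable at each U-Net scale: two stacked
    recurrent conv layers plus a residual connection."""
    def __init__(self, in_ch, out_ch, t=2):
        super().__init__()
        self.proj = nn.Conv2d(in_ch, out_ch, 1)  # match channels for residual
        self.body = nn.Sequential(RecurrentConv(out_ch, t),
                                  RecurrentConv(out_ch, t))

    def forward(self, x):
        x = self.proj(x)
        return x + self.body(x)    # residual unit eases deep training

y = RRCNNBlock(3, 64)(torch.randn(1, 3, 32, 32))
print(y.shape)  # torch.Size([1, 64, 32, 32])
```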