This paper investigates multiple-input multiple-output (MIMO) massive unsourced random access in an asynchronous orthogonal frequency division multiplexing (OFDM) system with both timing and frequency offsets (TFO) and non-negligible user collisions. The proposed coding framework splits the data into two parts encoded by a sparse regression code (SPARC) and a low-density parity-check (LDPC) code. Multistage orthogonal pilots are transmitted in the first part to reduce collision density. Unlike existing schemes that require a large quantization codebook to estimate the TFO, we establish a \textit{graph-based channel reconstruction and collision resolution (GB-CR$^2$)} algorithm that iteratively reconstructs channels, resolves collisions, and compensates for TFO rotations jointly across multiple stages on the formulated graph. We further propose to leverage the geometric characteristics of signal constellations to correct TFO estimates. Extensive simulations demonstrate superior channel estimation and data recovery performance with a substantial complexity reduction compared to state-of-the-art schemes.
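To make the role of the TFO concrete, the sketch below (illustrative only, and not the GB-CR$^2$ algorithm itself; the subcarrier count, pilot design, and single-user flat-fading assumptions are mine) shows how a residual timing offset appears as a linear phase ramp across OFDM subcarriers and how it can be estimated and compensated from known pilots.

```python
import numpy as np

N = 64                                  # number of subcarriers (assumed)
k = np.arange(N)
pilots = np.exp(1j * np.pi * np.random.randint(0, 2, N))      # unit-modulus pilots
tau = 3.2                               # true fractional timing offset (samples)
h = (np.random.randn() + 1j * np.random.randn()) / np.sqrt(2)  # flat channel

# Frequency-domain model: Y[k] = h * X[k] * exp(-j*2*pi*k*tau/N) + noise
y = h * pilots * np.exp(-1j * 2 * np.pi * k * tau / N)
y += 0.01 * (np.random.randn(N) + 1j * np.random.randn(N))

# Remove the pilot modulation, estimate the average phase increment between
# adjacent subcarriers, and derotate the received symbols.
z = y * np.conj(pilots)
slope = np.angle(np.vdot(z[:-1], z[1:]))        # approx. -2*pi*tau/N
tau_hat = -slope * N / (2 * np.pi)
y_compensated = y * np.exp(1j * 2 * np.pi * k * tau_hat / N)
print(f"estimated timing offset: {tau_hat:.2f} samples (true: {tau})")
```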
Conditional independence models associated with directed acyclic graphs (DAGs) may be characterized in at least three different ways: via a factorization, the global Markov property (given by the d-separation criterion), and the local Markov property. Marginals of DAG models also imply equality constraints that are not conditional independences; the well-known ``Verma constraint'' is an example. Constraints of this type are used for testing edges, and in a computationally efficient marginalization scheme via variable elimination. We show that equality constraints like the ``Verma constraint'' can be viewed as conditional independences in kernel objects obtained from joint distributions via a fixing operation that generalizes conditioning and marginalization. We use these constraints to define, via ordered local and global Markov properties, and a factorization, a graphical model associated with acyclic directed mixed graphs (ADMGs). We prove that marginal distributions of DAG models lie in this model, and that a set of these constraints given by Tian provides an alternative definition of the model. Finally, we show that the fixing operation used to define the model leads to a particularly simple characterization of identifiable causal effects in hidden variable causal DAG models.
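As a rough illustration of the graph side of the fixing operation described above (the data structures, method names, and fixability test below are my own assumptions; the accompanying kernel operation, which divides by a conditional density, is omitted), fixing a vertex moves it into the context set and deletes all edges pointing into it:

```python
class ADMG:
    """Acyclic directed mixed graph with a set of fixed ('context') vertices."""
    def __init__(self, vertices, directed, bidirected):
        self.V = set(vertices)                 # random vertices
        self.W = set()                         # fixed (context) vertices
        self.di = set(directed)                # (a, b) means a -> b
        self.bi = {frozenset(e) for e in bidirected}   # {a, b} means a <-> b

    def descendants(self, v):
        seen, stack = {v}, [v]
        while stack:
            x = stack.pop()
            for a, b in self.di:
                if a == x and b not in seen:
                    seen.add(b)
                    stack.append(b)
        return seen

    def district(self, v):
        # Bidirected-connected component of v among the random vertices.
        seen, stack = {v}, [v]
        while stack:
            x = stack.pop()
            for e in self.bi:
                if x in e:
                    (y,) = tuple(e - {x})
                    if y in self.V and y not in seen:
                        seen.add(y)
                        stack.append(y)
        return seen

    def fixable(self, v):
        return v in self.V and (self.district(v) & self.descendants(v)) == {v}

    def fix(self, v):
        assert self.fixable(v), f"{v} is not fixable"
        self.V.remove(v)
        self.W.add(v)
        self.di = {(a, b) for (a, b) in self.di if b != v}  # drop directed edges into v
        self.bi = {e for e in self.bi if v not in e}        # drop v's bidirected edges
```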
We propose a novel class incremental learning approach that incorporates a feature augmentation technique motivated by adversarial attacks. We employ a classifier learned in the past to complement training examples, rather than merely using it as a teacher for knowledge distillation to subsequent models. The proposed approach offers a unique perspective on utilizing previous knowledge in class incremental learning: it augments features of arbitrary target classes using examples from other classes via adversarial attacks on a previously learned classifier. By allowing cross-class feature augmentation, each class from the old tasks can conveniently populate the feature space with samples, which alleviates the collapse of decision boundaries caused by sample deficiency for the previous tasks, especially when the number of stored exemplars is small. This idea can be easily incorporated into existing class incremental learning algorithms without any architectural modification. Extensive experiments on standard benchmarks show that our method consistently outperforms existing class incremental learning methods by significant margins in various scenarios, especially under an extremely limited memory budget.
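A minimal sketch of the core idea, cross-class feature augmentation through a targeted adversarial perturbation on a frozen, previously learned classifier, is given below; the optimizer, step sizes, and function names are illustrative assumptions rather than the paper's exact recipe.

```python
import torch
import torch.nn.functional as F

def adversarial_feature_augment(old_classifier, feats, target_class,
                                step_size=0.01, n_steps=10):
    """Perturb features toward `target_class` by attacking a frozen classifier."""
    old_classifier.eval()
    feats_adv = feats.clone().detach().requires_grad_(True)
    for _ in range(n_steps):
        logits = old_classifier(feats_adv)
        loss = F.cross_entropy(logits, target_class)    # pull toward the target class
        grad, = torch.autograd.grad(loss, feats_adv)
        feats_adv = (feats_adv - step_size * grad.sign()).detach().requires_grad_(True)
    return feats_adv.detach()

# Hypothetical usage: augment old class 3 with examples drawn from other classes.
# old_classifier: frozen classifier from a previous task, e.g. nn.Linear(d, C_old)
# feats: features of source-class examples, shape (B, d)
# target = torch.full((feats.size(0),), 3, dtype=torch.long)
# aug_feats = adversarial_feature_augment(old_classifier, feats, target)
```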
Knowledge distillation (KD) has emerged as a challenging yet promising technique for compressing deep learning models, characterized by the transfer of extensive learning representations from proficient, computationally intensive teacher models to compact student models. However, only a handful of studies have attempted to compress models for single image super-resolution (SISR) through KD, and their effects on student model enhancement remain marginal. In this paper, we put forth an approach from the perspective of efficient data utilization, namely Data Upcycling Knowledge Distillation (DUKD), which transfers the teacher's prior knowledge to the student model via upcycled in-domain data derived from the training inputs. This upcycling process is realized through two efficient image zooming operations and invertible data augmentations, which introduce label consistency regularization to KD for SISR and substantially boost the student model's generalization. Owing to its versatility, DUKD can be applied across a broad spectrum of teacher-student architectures. Comprehensive experiments across diverse benchmarks demonstrate that the proposed DUKD method significantly outperforms prior art, exemplified by PSNR gains of up to 0.5 dB over baseline methods and by an RCAN student model with 67% fewer parameters performing on par with the RCAN teacher model.
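The following sketch illustrates one plausible form of the label-consistency idea using a single invertible augmentation (a horizontal flip); it is a simplification under my own assumptions and not the authors' exact DUKD pipeline, which additionally relies on zoom-based data upcycling.

```python
import torch
import torch.nn.functional as F

def kd_loss_with_flip_consistency(student, teacher, lr, hr, w_kd=1.0, w_cons=1.0):
    """L1 reconstruction + output KD + flip-based label-consistency term."""
    with torch.no_grad():
        sr_teacher = teacher(lr)                          # teacher SR output
    sr_student = student(lr)
    rec_loss = F.l1_loss(sr_student, hr)                  # supervised term
    kd_loss = F.l1_loss(sr_student, sr_teacher)           # plain output distillation

    # Label consistency via an invertible augmentation: the student's output on
    # the horizontally flipped input should match the flipped teacher output.
    lr_flip = torch.flip(lr, dims=[-1])
    cons_loss = F.l1_loss(student(lr_flip), torch.flip(sr_teacher, dims=[-1]))
    return rec_loss + w_kd * kd_loss + w_cons * cons_loss
```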
This paper introduces a new accurate model for periodic fractional optimal control problems (PFOCPs) using Riemann-Liouville (RL) and Caputo fractional derivatives (FDs) with sliding fixed memory lengths. The paper also provides a novel numerical method for solving PFOCPs using Fourier and Gegenbauer pseudospectral methods. By employing Fourier collocation at equally spaced nodes and Fourier and Gegenbauer quadratures, the method transforms the PFOCP into a simple constrained nonlinear programming problem (NLP) that can be treated easily using standard NLP solvers. We propose a new transformation that largely simplifies the problem of calculating the periodic FDs of periodic functions to the problem of evaluating the integral of the first derivatives of their trigonometric Lagrange interpolating polynomials, which can be treated accurately and efficiently using Gegenbauer quadratures. We introduce the notion of the $\alpha$th-order fractional integration matrix with index L based on Fourier and Gegenbauer pseudospectral approximations, which proves to be very effective in computing periodic FDs. We also provide a rigorous a priori error analysis to predict the quality of the Fourier-Gegenbauer-based approximations to FDs. The numerical results for the benchmark PFOCP demonstrate the performance of the proposed pseudospectral method.
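For concreteness, with a sliding fixed memory length $L$ and $0<\alpha<1$, the RL and Caputo derivatives with lower terminal $t-L$ take the standard forms below (the precise variant adopted in the paper may differ in details):
\[
{}^{RL}D_{t-L,\,t}^{\alpha} f(t) \;=\; \frac{1}{\Gamma(1-\alpha)}\,\frac{d}{dt}\int_{t-L}^{t} (t-\tau)^{-\alpha} f(\tau)\,d\tau,
\qquad
{}^{C}D_{t-L,\,t}^{\alpha} f(t) \;=\; \frac{1}{\Gamma(1-\alpha)}\int_{t-L}^{t} (t-\tau)^{-\alpha} f'(\tau)\,d\tau,
\]
so the memory window $[t-L,\,t]$ slides with $t$ instead of extending back to a fixed initial time.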
We introduce an efficient stochastic interacting particle-field (SIPF) algorithm, with no history dependence, for computing aggregation patterns and near-singular solutions of the parabolic-parabolic Keller-Segel (KS) chemotaxis system in three space dimensions (3D). The KS solutions are approximated as empirical measures of particles coupled with a smoother field variable (the concentration of chemo-attractant) computed by a spectral method. Instead of using heat kernels, which cause history dependence and high memory cost, we leverage the implicit Euler discretization to derive a one-step recursion in time for the stochastic particle positions and the field variable, based on the explicit Green's function of an elliptic operator of the form Laplacian minus a positive constant. In numerical experiments, we observe that the resulting SIPF algorithm is convergent and self-adaptive to the high-gradient part of solutions. Despite the lack of analytical knowledge (e.g., a self-similar ansatz) of the blowup, the SIPF algorithm provides a low-cost approach to studying the emergence of finite-time blowup in 3D with only dozens of Fourier modes, by varying the amount of initial mass and tracking the evolution of the field variable. Notably, the algorithm can easily handle multi-modal initial data and the subsequent complex evolution involving the merging of particle clusters and the formation of a finite-time singularity.
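The sketch below outlines one time step of a simplified, periodic-box variant of such a particle-field update (grid size, deposition scheme, and model constants are my own assumptions, not the authors' exact SIPF discretization): the chemo-attractant field is advanced by implicit Euler and solved spectrally by inverting the operator $\beta-\Delta$, and the particles then take an Euler-Maruyama step with chemotactic drift.

```python
import numpy as np

def sipf_step(X, c_hat, dt, chi=1.0, eps=1.0, mass=1.0, N=32, L=2*np.pi):
    """One step: particle positions X (Np, 3) and Fourier coefficients c_hat (N, N, N)."""
    k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
    KX, KY, KZ = np.meshgrid(k, k, k, indexing="ij")
    k2 = KX**2 + KY**2 + KZ**2

    # Deposit particles onto the grid (nearest grid point) to form the density rho.
    idx = np.floor(X / L * N).astype(int) % N
    rho = np.zeros((N, N, N))
    np.add.at(rho, (idx[:, 0], idx[:, 1], idx[:, 2]), 1.0)
    rho *= mass / len(X) / (L / N)**3

    # Implicit Euler for eps*dc/dt = Laplacian(c) - c + rho, inverted in Fourier space.
    beta = 1.0 + eps / dt
    c_hat = (eps / dt * c_hat + np.fft.fftn(rho)) / (k2 + beta)

    # Chemotactic drift chi * grad(c), evaluated at the particles' grid cells.
    gx = np.real(np.fft.ifftn(1j * KX * c_hat))
    gy = np.real(np.fft.ifftn(1j * KY * c_hat))
    gz = np.real(np.fft.ifftn(1j * KZ * c_hat))
    drift = np.stack([g[idx[:, 0], idx[:, 1], idx[:, 2]] for g in (gx, gy, gz)], axis=1)

    # Euler-Maruyama step for the stochastic particle positions (periodic box).
    X = (X + chi * drift * dt + np.sqrt(2 * dt) * np.random.randn(*X.shape)) % L
    return X, c_hat

# Hypothetical usage: X ~ initial particle cloud in [0, L)^3,
# c_hat = np.fft.fftn(np.zeros((32, 32, 32))) as the initial field.
```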
Document-level information extraction (IE) is a crucial task in natural language processing (NLP). This paper conducts a systematic review of the recent document-level IE literature. In addition, we conduct a thorough error analysis of current state-of-the-art algorithms and identify their limitations as well as the remaining challenges for the task of document-level IE. According to our findings, labeling noise, entity coreference resolution, and a lack of reasoning severely affect the performance of document-level IE. The objective of this survey is to provide more insights and help NLP researchers further enhance document-level IE performance.
Recent contrastive representation learning methods rely on estimating mutual information (MI) between multiple views of an underlying context. For example, we can derive multiple views of a given image by applying data augmentation, or we can split a sequence into views comprising the past and future of some step in the sequence. Contrastive lower bounds on MI are easy to optimize, but have a strong underestimation bias when estimating large amounts of MI. We propose decomposing the full MI estimation problem into a sum of smaller estimation problems by splitting one of the views into progressively more informed subviews and by applying the chain rule on MI between the decomposed views. This expression contains a sum of unconditional and conditional MI terms, each measuring modest chunks of the total MI, which facilitates approximation via contrastive bounds. To maximize the sum, we formulate a contrastive lower bound on the conditional MI which can be approximated efficiently. We refer to our general approach as Decomposed Estimation of Mutual Information (DEMI). We show that DEMI can capture a larger amount of MI than standard non-decomposed contrastive bounds in a synthetic setting, and learns better representations in a vision domain and for dialogue generation.
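Concretely, if one view $y$ is split into subviews $(y_1, y_2)$, the chain rule for mutual information gives
\[
I(x;\, y_1, y_2) \;=\; I(x;\, y_1) \;+\; I(x;\, y_2 \mid y_1),
\]
so each term on the right measures a modest chunk of the total MI and can be lower-bounded by its own contrastive estimator; bounding the conditional term requires a critic that also conditions on $y_1$ (the specific bound is the paper's contribution and is not reproduced here).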
Social relations are often used to improve recommendation quality when user-item interaction data is sparse in recommender systems. Most existing social recommendation models exploit pairwise relations to mine potential user preferences. However, real-life interactions among users are very complicated, and user relations can be high-order. Hypergraphs provide a natural way to model complex high-order relations, yet their potential for improving social recommendation remains under-explored. In this paper, we fill this gap and propose a multi-channel hypergraph convolutional network to enhance social recommendation by leveraging high-order user relations. Technically, each channel in the network encodes a hypergraph that depicts a common high-order user relation pattern via hypergraph convolution. By aggregating the embeddings learned through multiple channels, we obtain comprehensive user representations to generate recommendation results. However, the aggregation operation might also obscure the inherent characteristics of different types of high-order connectivity information. To compensate for this aggregation-induced loss, we innovatively integrate self-supervised learning into the training of the hypergraph convolutional network to regain the connectivity information with hierarchical mutual information maximization. Experimental results on multiple real-world datasets show that the proposed model outperforms the SOTA methods, and the ablation study verifies the effectiveness of the multi-channel setting and the self-supervised task. The implementation of our model is available via //github.com/Coder-Yu/RecQ.
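As background, a standard hypergraph-convolution layer is sketched below; this is a common formulation with uniform hyperedge weights, not necessarily the exact propagation rule of the proposed multi-channel network.

```python
import numpy as np

def hypergraph_conv(H, X, Theta):
    """One hypergraph-convolution step: X' = Dv^{-1/2} H De^{-1} H^T Dv^{-1/2} X Theta."""
    d_v = H.sum(axis=1)                                    # vertex (user) degrees
    d_e = H.sum(axis=0)                                    # hyperedge degrees
    Dv_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d_v, 1e-12)))
    De_inv = np.diag(1.0 / np.maximum(d_e, 1e-12))
    A = Dv_inv_sqrt @ H @ De_inv @ H.T @ Dv_inv_sqrt       # normalized hypergraph adjacency
    return np.maximum(A @ X @ Theta, 0.0)                  # ReLU activation

# Hypothetical usage: H is an (n_users x n_hyperedges) 0/1 incidence matrix for one
# channel, X the (n_users x d) user embeddings, Theta a (d x d') weight matrix;
# embeddings produced by several such channels are then aggregated.
```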
Non-IID data present a tough challenge for federated learning. In this paper, we explore a novel idea of facilitating pairwise collaborations between clients with similar data. We propose FedAMP, a new method employing federated attentive message passing to facilitate similar clients to collaborate more. We establish the convergence of FedAMP for both convex and non-convex models, and propose a heuristic method to further improve the performance of FedAMP when clients adopt deep neural networks as personalized models. Our extensive experiments on benchmark data sets demonstrate the superior performance of the proposed methods.
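An illustrative sketch of the attentive message passing idea is given below; the similarity function, softmax temperature, and fixed self-weight are my own assumptions rather than FedAMP's exact weighting rule. The server forms a personalized aggregate for each client by weighting other clients' models according to parameter similarity, so that similar clients collaborate more.

```python
import numpy as np

def attentive_message_passing(client_weights, self_weight=0.5, temperature=1.0):
    """Return one personalized aggregate (row) per client from flattened model vectors."""
    W = np.stack(client_weights)                        # (m, d) flattened client models
    # Pairwise similarities (negative squared distances), softmax over other clients.
    d2 = ((W[:, None, :] - W[None, :, :]) ** 2).sum(-1)
    sim = -d2 / temperature
    np.fill_diagonal(sim, -np.inf)                      # exclude self from the softmax
    attn = np.exp(sim - sim.max(axis=1, keepdims=True))
    attn /= attn.sum(axis=1, keepdims=True)
    attn *= (1.0 - self_weight)
    np.fill_diagonal(attn, self_weight)                 # keep a fixed self weight
    return attn @ W                                     # (m, d) personalized aggregates
```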
Multi-relation Question Answering is a challenging task, as it requires elaborate analysis of questions and reasoning over multiple fact triples in the knowledge base. In this paper, we present a novel model, the Interpretable Reasoning Network, which employs an interpretable, hop-by-hop reasoning process for question answering. The model dynamically decides which part of an input question should be analyzed at each hop; predicts a relation that corresponds to the current parsed results; utilizes the predicted relation to update the question representation and the state of the reasoning process; and then drives the next-hop reasoning. Experiments show that our model yields state-of-the-art results on two datasets. More interestingly, the model can offer traceable and observable intermediate predictions for reasoning analysis and failure diagnosis.
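A minimal sketch of one such reasoning hop is given below; the tensor shapes, update rules, and module names are illustrative assumptions rather than the paper's exact architecture.

```python
import torch
import torch.nn as nn

class ReasoningHop(nn.Module):
    """One hop: attend to part of the question, predict a relation, update state."""
    def __init__(self, dim, num_relations):
        super().__init__()
        self.rel_emb = nn.Embedding(num_relations, dim)
        self.state_update = nn.GRUCell(dim, dim)

    def forward(self, q_tokens, q_mask, state):
        # (1) Decide which part of the question to analyze at this hop.
        scores = torch.einsum("bld,bd->bl", q_tokens, state)
        scores = scores.masked_fill(~q_mask, float("-inf"))
        attn = scores.softmax(dim=-1)
        q_part = torch.einsum("bl,bld->bd", attn, q_tokens)
        # (2) Predict a relation for the currently parsed result.
        rel_logits = q_part @ self.rel_emb.weight.t()
        rel_vec = rel_logits.softmax(-1) @ self.rel_emb.weight
        # (3) Update the question representation and the reasoning state.
        q_tokens = q_tokens - attn.unsqueeze(-1) * rel_vec.unsqueeze(1)
        state = self.state_update(rel_vec, state)
        return q_tokens, state, rel_logits
```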