Sensing-assisted communication is critical for enhancing system efficiency in integrated sensing and communication (ISAC) systems. However, most existing literature focuses on large-scale channel sensing without considering the impact of small-scale channel aging. In this paper, we investigate a dual-scale channel estimation framework for sensing-assisted communication, in which both large-scale channel sensing and small-scale channel aging are considered. By modeling the channel aging effect with block fading and incorporating CRB (Cram\'er-Rao bound)-based sensing errors, we optimize both the time duration of large-scale detection and the frequency of small-scale updates within each subframe to maximize the achievable rate while satisfying the sensing requirements. Since the formulated optimization problem is non-convex, we propose a two-dimensional search-based optimization algorithm to obtain the optimal solution. Simulation results demonstrate the superiority of the proposed optimal design over three baseline schemes.
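To make the search concrete, the following is a minimal sketch of a two-dimensional grid search over the large-scale detection duration and the small-scale update frequency; achievable_rate() and sensing_crb() are hypothetical stand-ins for the paper's rate expression and CRB-based sensing-error model, and the grids are illustrative.

def two_dimensional_search(tau_grid, f_grid, crb_max, achievable_rate, sensing_crb):
    # Exhaustive two-dimensional search: tau is a candidate large-scale
    # detection duration, f a candidate small-scale update frequency.
    best_tau, best_f, best_rate = None, None, float("-inf")
    for tau in tau_grid:
        if sensing_crb(tau) > crb_max:      # sensing requirement violated
            continue
        for f in f_grid:
            rate = achievable_rate(tau, f)  # rate under aging and sensing errors
            if rate > best_rate:
                best_tau, best_f, best_rate = tau, f, rate
    return best_tau, best_f, best_rate      # optimal (tau*, f*) and its rate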
The soft-output successive cancellation list (SO-SCL) decoder provides a methodology for estimating the a posteriori probability log-likelihood ratios of polar codes by leveraging only the conventional SCL decoder. However, the sequential nature of SCL decoding leads to a high decoding latency for the SO-SCL decoder. In this paper, we propose a soft-output fast SCL (SO-FSCL) decoder by incorporating node-based fast decoding into the SO-SCL framework. Simulation results demonstrate that the proposed SO-FSCL decoder significantly reduces the decoding latency without loss of performance compared with the SO-SCL decoder.
We propose a highly flexible distributional copula regression model for bivariate time-to-event data in the presence of right-censoring. The joint survival function of the response is constructed using parametric copulas, allowing for a separate specification of the dependence structure between the time-to-event outcome variables and their respective marginal survival distributions. The latter are specified using well-known parametric distributions such as the log-Normal, log-Logistic (proportional odds model), or Weibull (proportional hazards model) distributions. Hence, the marginal univariate event times can be modeled via parametric accelerated failure time (AFT) models. By embedding our model into the class of generalized additive models for location, scale and shape, potentially all distribution parameters of the joint survival function can depend on covariates. We develop a component-wise gradient-based boosting algorithm for estimation, which enables data-driven variable selection. To the best of our knowledge, this is the first implementation of multivariate AFT models via distributional copula regression with automatic variable selection via statistical boosting. A particular merit of our approach is that it works in high-dimensional (p >> n) settings. We illustrate the practical potential of our method on a high-dimensional application involving semi-competing-risks responses in ovarian cancer. All of our methods are implemented in the open-source statistical software R as add-on functions of the package gamboostLSS.
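For concreteness, one instance of such a model (our notation, purely illustrative) combines a Clayton copula with Weibull margins for right-censored pairs $(T_1, T_2)$ given covariates $x$:
$$ S(t_1, t_2 \mid x) = C_{\vartheta(x)}\bigl(S_1(t_1 \mid x),\, S_2(t_2 \mid x)\bigr), \qquad C_{\vartheta}(u, v) = \bigl(u^{-\vartheta} + v^{-\vartheta} - 1\bigr)^{-1/\vartheta}, $$
$$ S_j(t_j \mid x) = \exp\bigl\{-\bigl(t_j / \lambda_j(x)\bigr)^{\alpha_j(x)}\bigr\}, \qquad \log \lambda_j(x) = \eta_{\lambda_j}(x), \quad \log \alpha_j(x) = \eta_{\alpha_j}(x), \quad \log \vartheta(x) = \eta_{\vartheta}(x), $$
where each additive predictor $\eta_{\cdot}(x)$ is estimated by component-wise gradient boosting, so only the covariates selected by the algorithm enter the corresponding distribution parameter.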
Time-evolving graphs arise frequently when modeling complex dynamical systems such as social networks, traffic flow, and biological processes. Developing techniques to identify and analyze communities in these time-varying graph structures is an important challenge. In this work, we generalize existing spectral clustering algorithms from static to dynamic graphs using canonical correlation analysis (CCA) to capture the temporal evolution of clusters. Based on this extended canonical correlation framework, we define the spatio-temporal graph Laplacian and investigate its spectral properties. We connect these concepts to dynamical systems theory via transfer operators, and illustrate the advantages of our method on benchmark graphs by comparison with existing methods. We show that the spatio-temporal graph Laplacian allows for a clear interpretation of cluster structure evolution over time for directed and undirected graphs.
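As a rough illustration of the flavor of this construction (not the paper's exact spatio-temporal graph Laplacian), the sketch below clusters the nodes of one snapshot from the leading singular vectors of the degree-normalized transition matrix between two consecutive snapshots, which is where the CCA viewpoint leads; the function name and normalization details are our assumptions.

import numpy as np
from sklearn.cluster import KMeans

def spatiotemporal_clusters(A, k, eps=1e-12):
    # A[i, j]: weight of the edge from node i at time t to node j at time t+1.
    dx = A.sum(axis=1) + eps                    # out-degrees at time t
    dy = A.sum(axis=0) + eps                    # in-degrees at time t+1
    S = A / np.sqrt(np.outer(dx, dy))           # D_x^{-1/2} A D_y^{-1/2}
    U, _, _ = np.linalg.svd(S)                  # CCA-style singular vectors
    emb = U[:, :k]                              # spectral embedding of time-t nodes
    emb /= np.linalg.norm(emb, axis=1, keepdims=True) + eps
    return KMeans(n_clusters=k, n_init=10).fit_predict(emb)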
To address the challenges of the imbalanced multi-class datasets typically encountered in rare-event detection for critical cyber-physical systems, we propose an optimal, efficient, and adaptable mixed integer programming (MIP) ensemble weighting scheme. Our approach leverages the diverse capabilities of the classifier ensemble on a granular per-class basis, while optimizing the weights of classifier-class pairs with elastic net regularization for improved robustness and generalization. Additionally, it seamlessly and optimally selects a predefined number of classifiers from a given set. We evaluate and compare our MIP-based method against six well-established weighting schemes, using representative datasets and suitable metrics, under various ensemble sizes. The experimental results show that the MIP-based scheme outperforms all six baselines, achieving an improvement in balanced accuracy ranging from 0.99% to 7.31%, with an overall average of 4.53% across all datasets and ensemble sizes. Furthermore, it attains overall average increases of 4.63%, 4.60%, and 4.61% in macro-averaged precision, recall, and F1-score, respectively, while maintaining computational efficiency.
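A stylized version of such a weighting program (our notation; the paper's exact formulation may differ): with classifiers $m = 1, \ldots, M$, classes $c = 1, \ldots, C$, validation samples $i \in I_c$ of true class $c_i = c$, scores $p_{m,c}(x_i)$, per-pair weights $w_{m,c} \ge 0$, selection variables $z_m \in \{0,1\}$, and correctness indicators $a_i \in \{0,1\}$,
$$ \max_{w,\, z,\, a} \ \frac{1}{C} \sum_{c=1}^{C} \frac{1}{|I_c|} \sum_{i \in I_c} a_i \;-\; \lambda_1 \sum_{m,c} w_{m,c} \;-\; \lambda_2 \sum_{m,c} w_{m,c}^2 $$
$$ \text{s.t.} \quad \sum_m w_{m, c_i}\, p_{m, c_i}(x_i) - \sum_m w_{m, c}\, p_{m, c}(x_i) \ge \epsilon - M_{\mathrm{big}} (1 - a_i) \quad \forall i,\ \forall c \ne c_i, $$
$$ 0 \le w_{m,c} \le z_m, \qquad \sum_m z_m = K, $$
where the first term is the balanced accuracy of the weighted vote on validation data, the $\lambda_1$/$\lambda_2$ pair forms the elastic-net penalty, and $K$ is the prescribed number of selected classifiers.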
Affine frequency division multiplexing (AFDM) is a promising new multicarrier technique for high-mobility communications based on the discrete affine Fourier transform (DAFT). By properly tuning two parameters in the DAFT module, the effective channel in the DAFT domain can completely avoid path overlap, thereby constituting a full representation of the delay-Doppler profile. However, AFDM suffers from a high peak-to-average power ratio (PAPR) stemming from the randomness of the modulated symbols. To reduce the PAPR of AFDM, a novel algorithm named grouped pre-chirp selection (GPS) is proposed in this letter. GPS varies the pre-chirp parameter across subcarrier groups in a non-enumerated manner and then selects the signal with the smallest PAPR among all candidate signals. We detail the operational procedures of the GPS algorithm and analyze it from four aspects: PAPR performance, computational complexity, spectral efficiency, and bit error rate (BER) performance. Simulation results confirm the effectiveness of the proposed GPS algorithm in reducing PAPR. At the cost of a slight loss in communication performance, AFDM with GPS achieves better PAPR performance than orthogonal time frequency space (OTFS) modulation while maintaining a lower modulation complexity.
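The sketch below illustrates the selection principle under our reading of GPS; it is an assumption that the varied pre-chirp acts as a quadratic phase on the data symbols before the core transform, and afdm_modulate, the candidate sets, and the group partitioning are ours rather than the letter's exact signal model.

import numpy as np

def afdm_modulate(x, c1, c2_per_symbol):
    # s[n] = (1/sqrt(N)) * sum_m x[m] * exp(j*2*pi*(c2[m]*m^2 + m*n/N + c1*n^2))
    N = len(x)
    m = np.arange(N)
    n = np.arange(N)
    x_pre = x * np.exp(2j * np.pi * c2_per_symbol * m**2)   # per-symbol pre-chirp
    s = np.fft.ifft(x_pre) * np.sqrt(N)                     # unitary inverse DFT
    return s * np.exp(2j * np.pi * c1 * n**2)               # post-chirp

def papr_db(s):
    p = np.abs(s) ** 2
    return 10 * np.log10(p.max() / p.mean())

def grouped_prechirp_selection(x, c1, candidates, n_groups):
    # Each candidate assigns one pre-chirp value per subcarrier group; the
    # candidate signal with the smallest PAPR is selected for transmission.
    groups = np.array_split(np.arange(len(x)), n_groups)
    best_s, best_papr = None, np.inf
    for cand in candidates:                 # cand: one pre-chirp value per group
        c2_map = np.empty(len(x))
        for g, idx in enumerate(groups):
            c2_map[idx] = cand[g]
        s = afdm_modulate(x, c1, c2_map)
        p = papr_db(s)
        if p < best_papr:
            best_s, best_papr = s, p
    return best_s, best_papr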
As artificial intelligence (AI) continues advancing, ensuring positive societal impacts becomes critical, especially as AI systems become increasingly ubiquitous in various aspects of life. However, developing "AI for good" poses substantial challenges around aligning systems with complex human values, and mature methods for addressing these challenges are currently lacking. This article presents and evaluates the Positive AI design method, which aims to address this gap. The method provides a human-centered process for translating wellbeing aspirations into concrete practices. First, we explain the method's four key steps: contextualizing, operationalizing, optimizing, and implementing wellbeing, supported by continuous measurement for feedback cycles. We then present a multiple case study in which novice designers applied the method, revealing strengths and weaknesses related to efficacy and usability. Next, an expert evaluation study assessed the quality of the resulting concepts, rating them moderately high for feasibility, desirability, and plausibility of achieving the intended wellbeing benefits. Together, these studies provide preliminary validation of the method's ability to improve AI design, while surfacing areas that need refinement, such as better support for the more complex steps. Proposed adaptations, such as examples and evaluation heuristics, could address these weaknesses. Further research should examine sustained application over multiple projects. This human-centered approach shows promise for realizing the vision of "AI for Wellbeing" that does not just avoid harm, but actively benefits humanity.
Concurrent computation and communication (C3) is a pervasive paradigm in ML and other domains, making its performance optimization crucial. In this paper, we carefully characterize C3 in ML workloads on GPUs, which are the most widely deployed platform for ML training and inference. We observe that while C3 yields performance uplifts, these uplifts fall far short of the ideal speedup, defined as the ratio of the serial computation-plus-communication time to the maximum of the computation and communication times, with all times measured from isolated executions. On average, C3 achieves only 21% of the ideal speedup, due to the known challenges of compute and memory interference between concurrent GPU kernels (that is, sharing of the GPU's compute units, caches, and HBM). To attain better C3 performance, we first evaluate the dual strategies of schedule prioritization and careful partitioning of GPU compute units, which raise C3 to 42% of the ideal speedup on average, and we provide heuristics that can guide a runtime in employing these strategies. To further enhance C3 performance, we propose mitigating C3 interference by offloading communication tasks to the GPU's DMA engines. To this end, we build Concurrent Communication CoLlectives (ConCCL) proof-of-concepts that harness DMA engines for communication. We show that ConCCL considerably closes the gap between realized and ideal speedup for C3, realizing on average 72% of the ideal speedup and achieving up to a 1.67x speedup. Overall, our work makes a strong case for GPU DMA engine advancements to better support C3 on GPUs.
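For clarity, the notion of ideal speedup used above can be written as a small helper; the realized-fraction normalization below is one plausible reading of the metric, not necessarily the paper's exact definition.

def ideal_speedup(t_comp, t_comm):
    # Serial (back-to-back) time divided by the best possible overlapped time.
    return (t_comp + t_comm) / max(t_comp, t_comm)

def realized_fraction(t_comp, t_comm, t_overlapped):
    # Share of the ideal speedup actually attained by concurrent execution.
    attained = (t_comp + t_comm) / t_overlapped
    return attained / ideal_speedup(t_comp, t_comm)

# Example: 10 ms of compute and 8 ms of communication in isolation give an ideal
# speedup of 18/10 = 1.8x; if the overlapped run takes 14 ms, the attained speedup
# is 18/14 ~= 1.29x, i.e. about 71% of ideal.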
High spectral dimensionality and the shortage of annotations make hyperspectral image (HSI) classification a challenging problem. Recent studies suggest that convolutional neural networks can learn discriminative spatial features, which play a paramount role in HSI interpretation. However, most of these methods ignore the distinctive spectral-spatial characteristics of hyperspectral data. In addition, the large amount of unlabeled data remains an unexploited gold mine for efficient data use. Therefore, we propose an integration of generative adversarial networks (GANs) and probabilistic graphical models for HSI classification. Specifically, we use a spectral-spatial generator and a discriminator to identify the land cover categories of hyperspectral cubes. Moreover, to take advantage of the large amount of unlabeled data, we adopt a conditional random field to refine the preliminary classification results produced by the GAN. Experimental results on two commonly studied datasets demonstrate that the proposed framework achieves encouraging classification accuracy with only a small amount of training data.
Multi-relation question answering is a challenging task, since it requires elaborate analysis of the question and reasoning over multiple fact triples in a knowledge base. In this paper, we present a novel model called the Interpretable Reasoning Network, which employs an interpretable, hop-by-hop reasoning process for question answering. The model dynamically decides which part of the input question should be analyzed at each hop, predicts a relation that corresponds to the currently parsed results, uses the predicted relation to update the question representation and the state of the reasoning process, and then drives the next-hop reasoning. Experiments show that our model yields state-of-the-art results on two datasets. More interestingly, the model offers traceable and observable intermediate predictions for reasoning analysis and failure diagnosis.
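A schematic of the hop-by-hop loop described above; the module names attend, predict_relation, update_question, update_state, should_stop, and answer are placeholders for learned components, not the paper's exact architecture.

def interpretable_reasoning(question_repr, kb, max_hops, modules):
    state = modules.init_state(question_repr)
    trace = []                                              # intermediate predictions
    for hop in range(max_hops):
        focus = modules.attend(question_repr, state)        # which part to analyze
        relation = modules.predict_relation(focus, state)   # relation for this hop
        question_repr = modules.update_question(question_repr, relation)
        state = modules.update_state(state, relation, kb)   # follow the relation in the KB
        trace.append((hop, relation))
        if modules.should_stop(state):
            break
    return modules.answer(state), trace                     # answer plus reasoning trace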
Detecting carried objects is one of the requirements for developing systems that reason about activities involving people and objects. We present a novel method that detects carried objects in a single video frame by incorporating features from multiple scales. Initially, the foreground mask of a video frame is segmented into multi-scale superpixels. Human-like regions within the segmented area are then identified by matching features extracted from the superpixels against learned features in a codebook. A carried-object probability map is generated using the complement of the superpixels' matching probabilities to human-like regions, together with background information. Superpixels with high carried-object probability and strong edge support are then merged to obtain the shape of the carried object. We apply our method to two challenging datasets, and the results show that it is competitive with or better than the state-of-the-art.
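A minimal sketch of the probability-map and selection steps (the thresholds and function names are illustrative, not from the paper):

import numpy as np

def carried_object_map(match_prob, is_foreground):
    # Carried-object probability per superpixel: the complement of its matching
    # probability to human-like regions, restricted to the foreground mask.
    return np.where(is_foreground, 1.0 - match_prob, 0.0)

def select_superpixels(prob, edge_support, p_thresh=0.7, e_thresh=0.5):
    # Keep superpixels with both high carried-object probability and strong
    # edge support; these are subsequently merged into the object shape.
    return (prob >= p_thresh) & (edge_support >= e_thresh)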