Reinforcement Learning (RL) has opened up new opportunities to enhance existing smart systems that generally include a complex decision-making process. However, modern RL algorithms, e.g., Deep Q-Networks (DQN), are based on deep neural networks, resulting in high computational costs. In this paper, we propose QHD, an off-policy, value-based hyperdimensional reinforcement learning algorithm that mimics brain properties to enable robust and real-time learning. QHD relies on a lightweight brain-inspired model to learn an optimal policy in an unknown environment. On both desktop and power-limited embedded platforms, QHD achieves significantly better overall efficiency than DQN while providing higher or comparable rewards, making it well suited for highly efficient online and real-time learning. Our solution supports a small experience replay batch size that provides a 12.3x speedup compared to DQN while ensuring minimal quality loss. Our evaluation shows QHD's capability for real-time learning, providing a 34.6x speedup and significantly better quality of learning than DQN.
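To give a rough flavor of hyperdimensional Q-learning (a sketch of the general technique, not QHD's exact design), the snippet below encodes states into high-dimensional vectors with a random projection and keeps one model hypervector per action, updated with a TD error; the dimensionality, encoder, and learning rate are all assumed for illustration.

```python
# Minimal sketch of hyperdimensional Q-learning (illustrative, not QHD's API).
import numpy as np

D = 4096            # hypervector dimensionality (assumed)
N_STATES = 8        # toy state feature dimension
N_ACTIONS = 4

rng = np.random.default_rng(0)
proj = rng.normal(size=(D, N_STATES))            # random projection encoder
phase = rng.uniform(0, 2 * np.pi, size=D)        # random phases for nonlinearity
models = np.zeros((N_ACTIONS, D))                # one model hypervector per action

def encode(state):
    """Map a real-valued state into a D-dimensional hypervector."""
    return np.cos(proj @ state + phase)

def q_values(state):
    """Predicted Q-values = similarity between state encoding and each model."""
    return models @ encode(state) / D

def td_update(s, a, r, s_next, gamma=0.99, lr=0.05):
    """Bundle the scaled state encoding into the chosen action's model."""
    h = encode(s)
    target = r + gamma * q_values(s_next).max()
    error = target - (models[a] @ h) / D
    models[a] += lr * error * h

td_update(np.ones(N_STATES), 0, 1.0, np.zeros(N_STATES))
print(q_values(np.ones(N_STATES)))
```

The model update is a vector addition rather than backpropagation, which is where the efficiency advantage over DQN-style training comes from.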
A near-field wideband communication system is studied, wherein a base station (BS) employs an extremely large-scale antenna array (ELAA) to serve multiple users situated within its near-field region. To facilitate near-field beamfocusing and mitigate the wideband beam split, true-time delayer (TTD)-based hybrid beamforming architectures are employed at the BS. In addition to the fully-connected TTD-based architecture, a new sub-connected TTD-based architecture is proposed to enhance energy efficiency. Three wideband beamfocusing optimization approaches are proposed to maximize spectral efficiency for both architectures. 1) Fully-digital approximation (FDA) approach: the TTD-based hybrid beamformers are optimized to approximate the optimal fully-digital beamformers using block coordinate descent. 2) Penalty-based FDA approach: the penalty method is leveraged in the FDA approach to guarantee convergence to a stationary point of the spectral efficiency maximization problem. 3) Heuristic two-stage (HTS) approach: the closed-form TTD-based analog beamformers are first designed based on the outcomes of near-field beam training and the piecewise-near-field approximation. Subsequently, the low-dimensional digital beamformer is optimized using knowledge of the low-dimensional equivalent channels, resulting in reduced computational complexity and channel estimation complexity. Our numerical results reveal that 1) the proposed approaches effectively eliminate the near-field beam split effect, and 2) compared to the fully-connected architecture, the proposed sub-connected architecture exhibits higher energy efficiency and imposes fewer hardware limitations on TTDs and system bandwidth.
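To illustrate why TTDs mitigate the wideband beam split, the following toy sketch (with assumed array, carrier, and bandwidth parameters, not the paper's setup) compares a phase-shifter-only beamformer matched at the carrier frequency against ideal true-time delays when focusing on a near-field point:

```python
# Toy demo: phase shifters vs. true-time delays for near-field beamfocusing.
import numpy as np

c = 3e8
fc, B, M = 100e9, 10e9, 8            # carrier, bandwidth, subcarriers (assumed)
N = 256                              # antennas
d = c / fc / 2                       # half-wavelength spacing

# Near-field focal point and exact per-antenna propagation distances.
focus = np.array([5.0, 2.0])                          # (x, y) in meters
ant = np.stack([np.zeros(N), (np.arange(N) - N / 2) * d], axis=1)
dist = np.linalg.norm(ant - focus, axis=1)

freqs = fc + B * (np.arange(M) / M - 0.5)
w_ps = np.exp(1j * 2 * np.pi * fc * dist / c)         # phases matched at fc only
gains_ps, gains_ttd = [], []
for f in freqs:
    a = np.exp(-1j * 2 * np.pi * f * dist / c)        # channel to the focal point
    gains_ps.append(abs(a @ w_ps) / N)                # degrades away from fc
    w_ttd = np.exp(1j * 2 * np.pi * f * dist / c)     # true delays track frequency
    gains_ttd.append(abs(a @ w_ttd) / N)              # unit gain at every subcarrier
print("PS gain across band :", np.round(gains_ps, 2))
print("TTD gain across band:", np.round(gains_ttd, 2))
```

The phase-shifter beamformer only achieves full array gain at the carrier, while the TTD beamformer holds full gain across the whole band, which is the beam split elimination the abstract refers to.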
The SOTIF standard (ISO 21448) requires scenario-based testing to verify and validate Advanced Driver Assistance Systems and Automated Driving Systems but does not suggest any practical way to do so effectively and efficiently. Existing scenario generation approaches focus either on exploring or on exploiting the scenario space. This generally leads either to test suites that cover many known cases but potentially miss edge cases, or to focused test suites that are effective but contain less diverse scenarios. To generate SOTIF-compliant test suites that achieve higher coverage and find more faults, this paper proposes semi-concrete scenarios and combines them with parameter sampling to adequately balance scenario space exploration and exploitation. Semi-concrete scenarios enable combinatorial scenario generation techniques that systematically explore the scenario space, while parameter sampling allows for the exploitation of continuous parameters. Our experimental results show that the proposed concept can generate more effective test suites than state-of-the-art coverage-based sampling. Moreover, our results show that including a feedback mechanism to drive parameter sampling further increases test suites' effectiveness.
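A minimal sketch of the core idea: combinatorial exploration over the discrete choices of a semi-concrete scenario, combined with sampling of its open continuous parameters. The parameter space below is invented for illustration, not taken from a real SOTIF scenario catalogue.

```python
# Sketch: combinatorial scenario exploration + continuous parameter exploitation.
import itertools
import random

discrete = {                       # semi-concrete scenario choices (assumed)
    "ego_maneuver": ["follow", "lane_change"],
    "actor_type":   ["car", "truck", "cyclist"],
    "road":         ["straight", "curve"],
}
continuous = {                     # ranges left open by each semi-concrete scenario
    "actor_speed_mps": (5.0, 30.0),
    "time_gap_s":      (0.5, 4.0),
}

random.seed(0)
test_suite = []
for combo in itertools.product(*discrete.values()):    # explore systematically
    scenario = dict(zip(discrete.keys(), combo))
    for _ in range(3):                                 # exploit: sample parameters
        concrete = dict(scenario)
        for name, (lo, hi) in continuous.items():
            concrete[name] = random.uniform(lo, hi)
        test_suite.append(concrete)
print(len(test_suite), "concrete test scenarios")
```

A feedback mechanism, as evaluated in the paper, would replace the uniform `random.uniform` draws with sampling biased toward parameter regions that previously exposed faults.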
Several novel statistical methods have been developed to estimate large integrated volatility matrices based on high-frequency financial data. Their asymptotic analyses require a sub-Gaussian or finite high-order moment assumption on observed log-returns, which cannot account for the heavy-tail phenomenon of stock returns. Recently, a robust estimator was developed to handle heavy-tailed distributions under a bounded fourth-moment assumption. However, we often observe that log-returns have heavier-tailed distributions than a finite fourth moment allows, and that the degrees of tail heaviness are heterogeneous across assets and over time. In this paper, to deal with heterogeneous heavy-tailed distributions, we develop an adaptive robust integrated volatility estimator that employs pre-averaging and truncation schemes based on jump-diffusion processes. We call this the adaptive robust pre-averaging realized volatility (ARP) estimator. We show that the ARP estimator has sub-Weibull tail concentration with only finite 2$\alpha$-th moments for any $\alpha>1$. In addition, we establish matching upper and lower bounds to show that the ARP estimation procedure is optimal. To estimate large integrated volatility matrices under an approximate factor model, the ARP estimator is further regularized using the principal orthogonal complement thresholding (POET) method. A numerical study is conducted to check the finite-sample performance of the ARP estimator.
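For intuition, here is a hedged sketch of pre-averaging with truncation on simulated noisy log-prices. The window length, weight function, and truncation level are illustrative choices, and the sketch omits the noise-bias correction and the adaptive tuning that the paper develops.

```python
# Sketch: pre-averaged realized volatility with truncation (illustrative constants).
import numpy as np

rng = np.random.default_rng(1)
n = 23400                                   # one trading day sampled each second
sigma = 0.2 / np.sqrt(252)                  # daily vol from 20% annual (toy)
noise_sd = 1e-4                             # microstructure noise level (toy)
x = np.cumsum(sigma / np.sqrt(n) * rng.standard_normal(n))   # efficient log-price
y = x + noise_sd * rng.standard_normal(n)                    # observed log-price

k = int(np.sqrt(n))                         # pre-averaging window ~ n^{1/2}
j = np.arange(1, k)
g = np.minimum(j / k, 1 - j / k)            # triangular weight function g(x)=min(x,1-x)
ret = np.diff(y)
# Pre-averaged returns: weighted sums of raw returns over overlapping windows,
# which average out the microstructure noise.
bars = np.array([g @ ret[i:i + k - 1] for i in range(n - k)])
tau = 4 * bars.std()                        # truncation level (assumed, not adaptive)
kept = bars[np.abs(bars) <= tau]            # truncate jump-like pre-averaged returns
psi = (g ** 2).sum()                        # normalization constant
rv = (kept ** 2).sum() / psi                # pre-averaged, truncated realized variance
print(f"estimate: {rv:.3e}, target: {sigma ** 2:.3e}")
```

The ARP estimator's contribution, beyond this vanilla scheme, is to choose the truncation adaptively so the estimator remains well concentrated when tail heaviness varies across assets and over time.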
This paper studies task-oriented, also known as goal-oriented, communications in a setting where a transmitter communicates with multiple receivers, each with its own task to complete on a dataset (e.g., images) available at the transmitter. A multi-task deep learning approach that involves training a common encoder at the transmitter and individual decoders at the receivers is presented for jointly optimizing the completion of multiple tasks and the communication with multiple receivers. By providing efficient resource allocation at the edge of 6G networks, the proposed approach allows the communications system to adapt to varying channel conditions and achieves task-specific objectives while minimizing transmission overhead. Joint training of the encoder and decoders using multi-task learning captures shared information across tasks and optimizes the communication process accordingly. By leveraging the broadcast nature of wireless communications, multi-receiver task-oriented communications (MTOC) reduces the number of transmissions required to complete tasks at different receivers. Performance evaluation conducted on the MNIST, Fashion MNIST, and CIFAR-10 datasets (with image classification considered for the different tasks) demonstrates the effectiveness of MTOC in terms of classification accuracy and resource utilization compared to single-task-oriented communication systems.
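A hedged PyTorch sketch of this training setup: one shared encoder produces a single noisy broadcast that feeds several per-task decoders trained jointly. The layer sizes, AWGN channel model, and random data below are placeholders rather than the paper's configuration.

```python
# Sketch: shared encoder + per-receiver decoders trained with a joint loss.
import torch
import torch.nn as nn

class Encoder(nn.Module):                  # common transmitter encoder
    def __init__(self, latent=16):
        super().__init__()
        self.net = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 256),
                                 nn.ReLU(), nn.Linear(256, latent))
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):                  # one decoder per receiver/task
    def __init__(self, latent=16, n_classes=10):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(latent, 128), nn.ReLU(),
                                 nn.Linear(128, n_classes))
    def forward(self, z):
        return self.net(z)

def awgn(z, snr_db=10.0):                  # simple broadcast channel (assumed)
    p = z.pow(2).mean()
    return z + torch.randn_like(z) * torch.sqrt(p / 10 ** (snr_db / 10))

enc = Encoder()
decs = nn.ModuleList([Decoder() for _ in range(3)])
opt = torch.optim.Adam(list(enc.parameters()) + list(decs.parameters()))

x = torch.randn(32, 1, 28, 28)                               # toy image batch
ys = [torch.randint(0, 10, (32,)) for _ in range(3)]         # one label set per task
z = awgn(enc(x))                           # ONE broadcast serves all receivers
loss = sum(nn.functional.cross_entropy(d(z), y) for d, y in zip(decs, ys))
opt.zero_grad(); loss.backward(); opt.step()
```

The single call to `awgn(enc(x))` is the point: one transmission is decoded three ways, instead of one transmission per task.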
When labeled data is insufficient, semi-supervised learning with the pseudo-labeling technique can significantly improve the performance of automatic speech recognition. However, pseudo-labels are often noisy, containing numerous incorrect tokens. Taking noisy labels as ground truth in the loss function results in suboptimal performance. Previous works attempted to mitigate this issue by either filtering out the noisiest pseudo-labels or improving the overall quality of pseudo-labels. While these methods are effective to some extent, it is unrealistic to entirely eliminate incorrect tokens in pseudo-labels. In this work, we propose a novel framework named alternative pseudo-labeling to tackle the issue of noisy pseudo-labels from the perspective of the training objective. The framework comprises several components. First, a generalized CTC loss function is introduced to handle noisy pseudo-labels by accepting alternative tokens in the positions of incorrect tokens. Applying this loss function in pseudo-labeling requires detecting incorrect tokens in the predicted pseudo-labels. In this work, we adopt a confidence-based error detection method that identifies incorrect tokens by comparing their confidence scores with a given threshold, which requires the confidence scores to be discriminative. Hence, the second proposed technique is a contrastive CTC loss function that widens the confidence gap between correctly and incorrectly predicted tokens, thereby improving the error detection ability. Additionally, obtaining satisfactory performance with confidence-based error detection typically requires extensive threshold tuning. Instead, we propose an automatic thresholding method that uses labeled data as a proxy for determining the threshold, thus sparing the pain of manual tuning.
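To make the confidence-based detection and automatic thresholding concrete, here is a small sketch that flags low-confidence tokens and picks the threshold on labeled data. Selecting the threshold by error-detection F1 is an assumption for illustration, not necessarily the paper's criterion.

```python
# Sketch: confidence-based token error detection with an auto-tuned threshold.
import numpy as np

def detect_errors(token_confidences, threshold):
    """Flag tokens whose confidence falls below the threshold."""
    return [c < threshold for c in token_confidences]

def auto_threshold(labeled_conf, labeled_is_error, grid=None):
    """Pick the threshold that best separates correct from incorrect tokens
    on labeled data, used as a proxy for the unlabeled pseudo-labels."""
    grid = grid if grid is not None else np.linspace(0.0, 1.0, 101)
    def f1(th):
        pred = labeled_conf < th
        tp = np.sum(pred & labeled_is_error)
        prec = tp / max(pred.sum(), 1)
        rec = tp / max(labeled_is_error.sum(), 1)
        return 2 * prec * rec / max(prec + rec, 1e-9)
    return max(grid, key=f1)

conf = np.array([0.95, 0.40, 0.88, 0.15, 0.70])        # toy token confidences
is_err = np.array([False, True, False, True, False])   # toy ground truth
th = auto_threshold(conf, is_err)
print("threshold:", round(float(th), 2), "errors:", detect_errors(conf, th))
```

The contrastive CTC loss described above would enter upstream of this step, by making the confidence values themselves more separable between correct and incorrect tokens.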
Translation Quality Estimation (QE) is the task of predicting the quality of machine translation (MT) output without any reference. This task has gained increasing attention as an important component in the practical applications of MT. In this paper, we first propose XLMRScore, which is a cross-lingual counterpart of BERTScore computed via the XLM-RoBERTa (XLMR) model. This metric can be used as a simple unsupervised QE method; however, employing it raises two issues: first, untranslated tokens lead to unexpectedly high translation scores, and second, the greedy matching in XLMRScore mismatches source and hypothesis tokens. To mitigate these issues, we suggest replacing untranslated words with the unknown token and cross-lingually aligning the pre-trained model so that aligned words are represented closer to each other. We evaluate the proposed method on four low-resource language pairs of the WMT21 QE shared task, as well as a new English-Farsi test dataset introduced in this paper. Experiments show that our method achieves results comparable to the supervised baseline in two zero-shot scenarios, i.e., with less than 0.01 difference in Pearson correlation, while outperforming unsupervised rivals by more than 8% on average across all the low-resource language pairs.
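A sketch of the greedy-matching score and the untranslated-token masking described above. The token embeddings are assumed to come from an external model (e.g., XLM-RoBERTa via a standard library), which is not called here; the function and variable names are illustrative.

```python
# Sketch: BERTScore-style greedy matching over cross-lingual token embeddings.
import numpy as np

def greedy_score(src_emb, hyp_emb):
    """F1 of greedy cosine-similarity matching between source and hypothesis."""
    s = src_emb / np.linalg.norm(src_emb, axis=1, keepdims=True)
    h = hyp_emb / np.linalg.norm(hyp_emb, axis=1, keepdims=True)
    sim = s @ h.T                        # pairwise cosine similarities
    recall = sim.max(axis=1).mean()      # each source token -> best hypothesis token
    precision = sim.max(axis=0).mean()   # each hypothesis token -> best source token
    return 2 * precision * recall / (precision + recall)

def mask_untranslated(hyp_tokens, src_tokens, unk="<unk>"):
    """Replace tokens copied verbatim from the source (likely untranslated)
    with the unknown token before embedding, per the first proposed fix."""
    src_set = set(src_tokens)
    return [unk if t in src_set else t for t in hyp_tokens]

src = np.random.default_rng(0).normal(size=(5, 768))   # toy embeddings
hyp = np.random.default_rng(1).normal(size=(6, 768))
print(round(float(greedy_score(src, hyp)), 3))
```

The second fix, cross-lingual alignment of the pre-trained model, happens at the model level before these embeddings are extracted, so it does not appear in this sketch.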
Multi-object tracking (MOT) at low frame rates can reduce computational, storage, and power overhead to better meet the constraints of edge devices. Many existing MOT methods suffer from significant performance degradation in low-frame-rate videos due to large location and appearance changes between adjacent frames. To this end, we propose collaborative tracking learning (ColTrack) for frame-rate-insensitive MOT in a query-based, end-to-end manner. Multiple historical queries of the same target jointly track it with richer temporal descriptions. Meanwhile, we insert an information refinement module between every two temporal blocking decoders to better fuse temporal clues and refine features. Moreover, a tracking object consistency loss is proposed to guide the interaction between historical queries. Extensive experimental results demonstrate that on high-frame-rate videos, ColTrack obtains higher performance than state-of-the-art methods on the large-scale datasets DanceTrack and BDD100K, and outperforms existing end-to-end methods on MOT17. More importantly, ColTrack has a significant advantage over state-of-the-art methods on low-frame-rate videos, which allows it to achieve faster processing speeds by reducing the frame-rate requirement while maintaining higher performance. Code will be released at //github.com/yolomax/ColTrack
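As a loose illustration of the idea that several historical queries of one target should agree (not ColTrack's actual modules), the sketch below fuses a target's historical queries and penalizes their disagreement; the mean fusion and squared-error penalty are stand-ins for the paper's learned refinement module and consistency loss.

```python
# Sketch: fusing historical queries of one target with a consistency penalty.
import torch

def fuse_and_consistency(hist_queries):
    """hist_queries: (T, D) embeddings of the same target from past frames."""
    fused = hist_queries.mean(dim=0)                  # toy temporal fusion
    consistency = ((hist_queries - fused) ** 2).mean()  # disagreement penalty
    return fused, consistency

queries = torch.randn(4, 256, requires_grad=True)     # 4 historical queries, dim 256
fused, loss = fuse_and_consistency(queries)
loss.backward()                                       # gradient pulls queries together
print(fused.shape, float(loss))
```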
This paper proposes a novel method for computing bijective density-equalizing quasiconformal (DEQ) flattening maps for multiply-connected open surfaces. In conventional density-equalizing maps, shape deformations are solely driven by prescribed constraints on the density distribution, defined as the population per unit area, while the bijectivity and local geometric distortions of the mappings are uncontrolled. Also, prior methods have primarily focused on simply-connected open surfaces but not surfaces with more complicated topologies. Our proposed method overcomes these issues by formulating the density diffusion process as a quasiconformal flow, which allows us to effectively control the local geometric distortion and guarantee the bijectivity of the mapping by solving an energy minimization problem involving the Beltrami coefficient of the mapping. To achieve an optimal parameterization of multiply-connected surfaces, we develop an iterative scheme that optimizes both the shape of the target planar circular domain and the density-equalizing quasiconformal map onto it. In addition, landmark constraints can be incorporated into our proposed method for consistent feature alignment. The method can also be naturally applied to simply-connected open surfaces. By changing the prescribed population, a large variety of surface flattening maps with different desired properties can be achieved. The method is tested on both synthetic and real examples, demonstrating its efficacy in various applications in computer graphics and medical imaging.
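For readers unfamiliar with the machinery, the key quantity is the Beltrami coefficient $\mu_f = \dfrac{\partial f/\partial \bar{z}}{\partial f/\partial z}$ of a map $f$, which measures its local conformal distortion; the standard fact from quasiconformal theory is that $\|\mu_f\|_\infty < 1$ guarantees that $f$ is orientation-preserving and locally bijective. The energy minimization referred to above can thus be read schematically as minimizing a functional of the form $E(f) = \int_\Omega |\mu_f|^2 \, dA + (\text{density-matching and landmark terms})$ subject to $|\mu_f| < 1$, though the paper's exact terms and weights may differ from this schematic.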
Recent contrastive representation learning methods rely on estimating mutual information (MI) between multiple views of an underlying context. For example, we can derive multiple views of a given image by applying data augmentation, or we can split a sequence into views comprising the past and future of some step in the sequence. Contrastive lower bounds on MI are easy to optimize but have a strong underestimation bias when estimating large amounts of MI. We propose decomposing the full MI estimation problem into a sum of smaller estimation problems by splitting one of the views into progressively more informed subviews and applying the chain rule of MI to the decomposed views. This expression contains a sum of unconditional and conditional MI terms, each measuring a modest chunk of the total MI, which facilitates approximation via contrastive bounds. To maximize the sum, we formulate a contrastive lower bound on the conditional MI which can be approximated efficiently. We refer to our general approach as Decomposed Estimation of Mutual Information (DEMI). We show that DEMI can capture a larger amount of MI than standard non-decomposed contrastive bounds in a synthetic setting, and learns better representations in a vision domain and for dialogue generation.
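The decomposition rests on the standard chain rule of mutual information: splitting one view as $y = (y^1, y^2)$ gives $I(x; y) = I(x; y^1) + I(x; y^2 \mid y^1)$, and splitting into more subviews extends the sum accordingly. Each term on the right is smaller than the total MI, so each can be estimated by a contrastive (InfoNCE-style) lower bound without hitting the bound's saturation regime, which is the underestimation bias the approach is designed to avoid.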
Graph Neural Networks (GNNs) have demonstrated superior performance in many challenging applications, including few-shot learning tasks. Despite their powerful capacity to learn and generalize from few samples, GNNs usually suffer from severe over-fitting and over-smoothing as the model becomes deep, which limits model scalability. In this work, we propose a novel Attentive GNN to tackle these challenges, by incorporating a triple-attention mechanism, i.e., node self-attention, neighborhood attention, and layer memory attention. We explain with theoretical analysis and illustrations why the proposed attentive modules can improve GNNs for few-shot learning. Extensive experiments show that the proposed Attentive GNN outperforms state-of-the-art GNN-based methods for few-shot learning on the mini-ImageNet and Tiered-ImageNet datasets, in both inductive and transductive settings.
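A minimal sketch of one of the three attention types, node self-attention over node features; the projections, scaling, and residual connection below are illustrative rather than the paper's exact design.

```python
# Sketch: node self-attention over the node feature matrix of a graph.
import torch
import torch.nn as nn

class NodeSelfAttention(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)

    def forward(self, x):                      # x: (num_nodes, dim)
        attn = torch.softmax(
            self.q(x) @ self.k(x).T / x.shape[-1] ** 0.5, dim=-1)
        return x + attn @ self.v(x)            # residual keeps features stable

x = torch.randn(10, 64)                        # 10 nodes, 64-dim features
print(NodeSelfAttention(64)(x).shape)          # torch.Size([10, 64])
```

Neighborhood attention would additionally mask `attn` by the adjacency structure, and layer memory attention would attend over the outputs of earlier layers, one plausible way such modules counteract over-smoothing in deeper GNNs.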