Interacting with the actual environment to acquire data is often costly and time-consuming in robotic tasks. Model-based offline reinforcement learning (RL) provides a feasible solution: it eliminates the need to interact with the actual environment, and it learns the transition dynamics and reward function from offline datasets to generate simulated rollouts that accelerate training. Previous model-based offline RL methods adopt probabilistic ensembles of neural networks (NNs) to model aleatoric and epistemic uncertainty. However, this leads to a substantial increase in training time and computing-resource requirements. Furthermore, these methods are easily disturbed by the accumulated errors of the environment dynamics model when simulating long-term rollouts. To solve these problems, we propose an uncertainty-aware sequence modeling architecture called the Environment Transformer. It models the probability distribution of the environment dynamics and reward function to capture aleatoric uncertainty, and treats epistemic uncertainty as a learnable noise parameter. Benefiting from accurate modeling of the transition dynamics and reward function, the Environment Transformer can be combined with arbitrary planning, dynamic programming, or policy optimization algorithms for offline RL. In this work, we apply Conservative Q-Learning (CQL) to learn a conservative Q-function. Through simulation experiments, we demonstrate that our method matches or exceeds state-of-the-art performance on widely studied offline RL benchmarks. Moreover, we show that the Environment Transformer's simulated rollout quality, sample efficiency, and long-term rollout simulation capability are superior to those of previous model-based offline RL methods.
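As a hedged illustration of the uncertainty treatment described above (a minimal PyTorch sketch; all module and dimension names are invented here, not taken from the paper), a probabilistic head can output a Gaussian over the next state and reward to capture aleatoric uncertainty, with a learnable noise parameter standing in for epistemic uncertainty:

```python
# Minimal sketch of an uncertainty-aware dynamics head: a Gaussian over
# (next_state, reward) captures aleatoric uncertainty; a learnable log-noise
# parameter stands in for epistemic uncertainty. Names are illustrative.
import torch
import torch.nn as nn

class DynamicsHead(nn.Module):
    def __init__(self, d_model: int, state_dim: int):
        super().__init__()
        self.mean = nn.Linear(d_model, state_dim + 1)       # next state + reward
        self.log_var = nn.Linear(d_model, state_dim + 1)    # input-dependent (aleatoric)
        self.log_noise = nn.Parameter(torch.zeros(state_dim + 1))  # learned (epistemic)

    def forward(self, h):                                   # h: transformer features
        mu = self.mean(h)
        var = self.log_var(h).exp() + self.log_noise.exp()  # total predictive variance
        return mu, var

def gaussian_nll(mu, var, target):
    # Negative log-likelihood (up to a constant) used to fit the probabilistic model.
    return 0.5 * ((target - mu) ** 2 / var + var.log()).mean()
```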
Mediation analysis is an important statistical tool in many research fields. Its aim is to investigate the mechanism along the causal pathway between an exposure and an outcome. The joint significance test is a widely used statistical approach for examining mediation effects in practical applications. Nevertheless, this test is limited by its conservative Type I error, which reduces its statistical power and constrains its popularity and utility. To address this gap, we propose the adaptive joint significance test for a single mediator, a novel data-adaptive test for the mediation effect that offers significant advancements over the traditional joint significance test. The proposed method is designed to be user-friendly, eliminating the need for complicated procedures. We derive explicit expressions for its size and power, ensuring the theoretical validity of our approach. Furthermore, we extend the proposed adaptive joint significance test to small-scale mediation hypotheses with family-wise error rate (FWER) control. Additionally, we propose a novel adaptive Sobel-type approach for constructing confidence intervals for mediation effects, which achieves markedly better coverage probabilities than conventional Sobel confidence intervals. Our mediation testing and confidence interval procedures are evaluated through comprehensive simulations and compared with numerous existing approaches. Finally, we illustrate the usefulness of our method by analyzing three real-world datasets with continuous, binary, and time-to-event outcomes, respectively.
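For concreteness, the two classical baselines referenced above can be sketched in a few lines (the data-adaptive threshold of the proposed test is not reproduced here, and the numbers in the usage example are arbitrary):

```python
# Classical baselines for testing the mediation effect a*b of a single mediator:
# Sobel's test and the joint significance (MaxP) test.
import numpy as np
from scipy import stats

def sobel_test(a, se_a, b, se_b):
    """Sobel z-test for the mediation effect a*b (first-order delta-method SE)."""
    se_ab = np.sqrt(a**2 * se_b**2 + b**2 * se_a**2)
    z = (a * b) / se_ab
    return 2 * stats.norm.sf(abs(z))      # two-sided p-value

def joint_significance_test(p_a, p_b):
    """MaxP: reject H0 (no mediation) iff both path p-values are below alpha."""
    return max(p_a, p_b)                  # compare this to alpha

# Example: exposure -> mediator path a; mediator -> outcome path b.
print(sobel_test(a=0.30, se_a=0.10, b=0.25, se_b=0.08))
print(joint_significance_test(p_a=0.003, p_b=0.012))
```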
We propose fast and practical quantum-inspired classical algorithms for solving linear systems. Specifically, given sampling and query access to a matrix $A\in\mathbb{R}^{m\times n}$ and a vector $b\in\mathbb{R}^m$, we propose classical algorithms that produce a data structure for the solution $x\in\mathbb{R}^{n}$ of the linear system $Ax=b$ with the ability to sample and query its entries. The resulting $x$ satisfies $\|x-A^{+}b\|\leq\epsilon\|A^{+}b\|$, where $\|\cdot\|$ is the spectral norm and $A^+$ is the Moore-Penrose inverse of $A$. Our algorithm has time complexity $\widetilde{O}(\kappa_F^4/\kappa\epsilon^2)$ in the general case, where $\kappa_{F} =\|A\|_F\|A^+\|$ and $\kappa=\|A\|\|A^+\|$ are condition numbers. Compared to the prior state-of-the-art result [Shao and Montanaro, arXiv:2103.10309v2], our algorithm achieves a polynomial speedup in condition numbers. When $A$ is $s$-sparse, our algorithm has complexity $\widetilde{O}(s \kappa\log(1/\epsilon))$, matching the quantum lower bound for solving linear systems in $\kappa$ and $1/\epsilon$ up to poly-logarithmic factors [Harrow and Kothari]. When $A$ is $s$-sparse and symmetric positive-definite, our algorithm has complexity $\widetilde{O}(s\sqrt{\kappa}\log(1/\epsilon))$. Technically, our main contribution is the application of the heavy ball momentum method to quantum-inspired classical algorithms for solving linear systems, where we propose two new methods with speedups: quantum-inspired Kaczmarz method with momentum and quantum-inspired coordinate descent method with momentum. Their analysis exploits careful decomposition of the momentum transition matrix and the application of novel spectral norm concentration bounds for independent random matrices. Finally, we also conduct numerical experiments for our algorithms on both synthetic and real-world datasets, and the experimental results support our theoretical claims.
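As background for the momentum methods named above, the classical randomized Kaczmarz iteration with heavy-ball momentum (the method that the quantum-inspired, sampling-based variant builds on) can be sketched as follows; parameter values are illustrative:

```python
# Randomized Kaczmarz with heavy-ball momentum: rows are sampled with
# probability proportional to ||a_i||^2, and each step adds a momentum term.
import numpy as np

def kaczmarz_momentum(A, b, beta=0.3, iters=5000, seed=0):
    rng = np.random.default_rng(seed)
    m, n = A.shape
    row_norms = np.einsum("ij,ij->i", A, A)
    probs = row_norms / row_norms.sum()
    x, x_prev = np.zeros(n), np.zeros(n)
    for _ in range(iters):
        i = rng.choice(m, p=probs)
        residual = (b[i] - A[i] @ x) / row_norms[i]
        x_next = x + residual * A[i] + beta * (x - x_prev)  # heavy-ball step
        x_prev, x = x, x_next
    return x

A = np.random.randn(200, 50)
b = A @ np.random.randn(50)                 # consistent system
x = kaczmarz_momentum(A, b)
print(np.linalg.norm(A @ x - b))            # residual should be small
```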
We formally define a novel and valuable information retrieval task: image-to-multi-modal retrieval (IMMR), where the query is an image and each document is an entity with both an image and a textual description. The IMMR task is valuable in various industrial applications. We analyze three key challenges for IMMR: 1) skewed data and noisy labels in metric learning, 2) multi-modality fusion, and 3) effective and efficient training in large-scale industrial scenarios. To tackle these challenges, we propose a novel framework for the IMMR task. Our framework consists of three components: 1) a novel data governance scheme coupled with a large-scale classification-based learning paradigm; 2) a model architecture specially designed for multimodal learning, in which the proposed concept-aware modality fusion module adaptively fuses the image and text modalities; and 3) a hybrid parallel training approach for tackling large-scale training in industrial scenarios. The proposed framework achieves SOTA performance on public datasets and has been deployed in a real-world industrial search system, leading to significant improvements in click-through rate and deal count. Code and data will be made publicly available.
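As a rough illustration of adaptive modality fusion (a generic learned gate, not the paper's concept-aware design; all names are hypothetical):

```python
# A gated fusion module: a learned per-dimension gate adaptively mixes the
# image and text embeddings of an entity.
import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(2 * dim, dim), nn.Sigmoid())

    def forward(self, img_emb, txt_emb):
        g = self.gate(torch.cat([img_emb, txt_emb], dim=-1))  # per-dim weights in (0,1)
        return g * img_emb + (1 - g) * txt_emb                # adaptive mix
```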
With the increasing demand from passengers for data-intensive services, millimeter-wave (mmWave) communication is considered an effective technique to relieve the transmission pressure on high-speed train (HST) networks. However, mmWave signals encounter severe losses when passing through the carriage, which degrades the quality of service on board. In this paper, we investigate an intelligent refracting surface (IRS)-assisted HST communication system. Herein, an IRS is deployed on the train window to dynamically reconfigure the propagation environment, and a hybrid time-division multiple access and non-orthogonal multiple access scheme is leveraged for interference mitigation. We aim to maximize the overall throughput while taking into account the constraints imposed by base station beamforming, discrete IRS phase shifts, and transmit power. To obtain a practical solution, we employ an alternating optimization method and propose a two-stage algorithm. In the first stage, the successive convex approximation method and the branch-and-bound algorithm are leveraged for the IRS phase shift design. In the second stage, the Lagrangian multiplier method is utilized for power allocation. Simulation results demonstrate the benefits of IRS adoption and power allocation for throughput improvement in mmWave HST networks.
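As a hedged sketch of the kind of Lagrangian power allocation used in such a second stage, classical water-filling solves the simplified problem of maximizing sum rate under a total power budget; the TDMA-NOMA structure and HST channel details are abstracted away here:

```python
# Water-filling via bisection on the Lagrange multiplier (water level):
# maximize sum_k log(1 + g_k * p_k)  s.t.  sum_k p_k <= p_total, p_k >= 0.
import numpy as np

def water_filling(gains, p_total, tol=1e-9):
    lo, hi = 0.0, 1.0 / min(gains) + p_total   # bracket for the water level
    while hi - lo > tol:
        mu = 0.5 * (lo + hi)
        p = np.maximum(mu - 1.0 / gains, 0.0)  # KKT solution for a given level
        if p.sum() > p_total:
            hi = mu
        else:
            lo = mu
    return np.maximum(lo - 1.0 / gains, 0.0)

print(water_filling(np.array([2.0, 1.0, 0.5]), p_total=3.0))
```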
The surge in real-time data collection across various industries has underscored the need for advanced anomaly detection in both univariate and multivariate time series data. Traditional methods, while comprehensive, often struggle to capture the complex interdependencies in such data. This paper introduces TransNAS-TSAD, a novel framework that combines the transformer architecture with neural architecture search (NAS), optimized via the NSGA-II algorithm. This approach effectively tackles the complexities of both univariate and multivariate time series, balancing computational efficiency with detection accuracy. Our evaluation shows that TransNAS-TSAD surpasses conventional anomaly detection models, demonstrating marked improvements in diverse data scenarios. We also propose the Efficiency-Accuracy-Complexity Score (EACS) as a new metric for assessing model performance, emphasizing the crucial balance between accuracy and computational resources. TransNAS-TSAD sets a new benchmark in time series anomaly detection, offering a versatile and efficient solution for complex real-world applications. This research paves the way for future developments in the field, highlighting its potential in a wide range of industry applications.
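The selection step at the heart of NSGA-II is non-dominated sorting; below is a minimal self-contained sketch for two minimized objectives (e.g., detection error versus model complexity). The example values are arbitrary, and the full TransNAS-TSAD search loop is not reproduced:

```python
# Fast non-dominated sorting, the core of NSGA-II's selection.
# F: (n, 2) objective matrix, both objectives minimized. Returns fronts.
import numpy as np

def nondominated_fronts(F):
    n = len(F)
    dominated_by = [set() for _ in range(n)]   # solutions that i dominates
    dom_count = np.zeros(n, dtype=int)         # how many solutions dominate i
    for i in range(n):
        for j in range(n):
            if np.all(F[i] <= F[j]) and np.any(F[i] < F[j]):
                dominated_by[i].add(j)
            elif np.all(F[j] <= F[i]) and np.any(F[j] < F[i]):
                dom_count[i] += 1
    fronts, current = [], [i for i in range(n) if dom_count[i] == 0]
    while current:
        fronts.append(current)
        nxt = []
        for i in current:
            for j in dominated_by[i]:
                dom_count[j] -= 1
                if dom_count[j] == 0:
                    nxt.append(j)
        current = nxt
    return fronts

F = np.array([[0.10, 5e6], [0.12, 1e6], [0.08, 9e6], [0.15, 2e6]])
print(nondominated_fronts(F))   # [[0, 1, 2], [3]]
```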
Relaying increases the coverage area and reliability of wireless communication systems by mitigating the fading effect on the received signal. Most technical contributions in the context of these systems assume ideal hardware (ID), neglecting the non-idealities of the transceivers, which include phase noise, in-phase/quadrature mismatch, and high-power amplifier nonlinearities. These non-idealities distort the received signal by causing variations in the phase and attenuating the amplitude, and the resulting performance deterioration is further magnified as the transmission frequency increases. In this paper, we investigate the aggregate impact of hardware impairments (HI) on a general multi-hop relay system using amplify-and-forward (AF) and decode-and-forward (DF) relaying techniques over a general H-fading model, which includes free-space optics, radio frequency, millimeter-wave, Terahertz, and underwater fading models. Closed-form expressions for the outage probability, bit error probability, and ergodic capacity are derived in terms of H-functions. Following an asymptotic analysis at high signal-to-noise ratio (SNR), practical optimization problems are formulated with the objective of finding the optimal HI level of each hop subject to a limit on the total HI level. The analytical solution is derived for the Nakagami-m fading channel, a special case of H-fading, for both AF and DF relaying. The overall instantaneous signal-to-noise-plus-distortion ratio (SNDR) is shown to reach a ceiling at high SNR, in contrast to the ID case, with the ceiling inversely proportional to the HI levels of the transceivers across all hops.
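As a hedged illustration of this saturation effect under the standard per-hop impairment model (e.g., Björnson et al., not this paper's H-fading derivation), the hop-$i$ SNDR with aggregate impairment level $\kappa_i$ is bounded, so the DF end-to-end SNDR saturates:
$$\gamma_i^{\mathrm{SNDR}}=\frac{\gamma_i}{\kappa_i^{2}\,\gamma_i+1}\;\xrightarrow[\gamma_i\to\infty]{}\;\frac{1}{\kappa_i^{2}},\qquad \gamma_{\mathrm{e2e}}^{\mathrm{DF}}=\min_i \gamma_i^{\mathrm{SNDR}}\;\longrightarrow\;\frac{1}{\max_i \kappa_i^{2}}.$$
With ideal hardware ($\kappa_i=0$), no such ceiling exists and the SNDR grows without bound.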
Modern time series forecasting methods, such as the Transformer and its variants, have shown a strong ability to model sequential data. To achieve high performance, they usually rely on redundant or unexplainable structures to model complex relations between variables and tune their parameters on large-scale data. Many real-world data mining tasks, however, lack sufficient variables for relation reasoning, and therefore these methods may not properly handle such forecasting problems. With insufficient data, time series appear to be affected by many exogenous variables, and thus the modeling becomes unstable and unpredictable. To tackle this critical issue, we develop a novel algorithmic framework for inferring the intrinsic latent factors implied by the observable time series. The inferred factors are used to form multiple independent and predictable signal components that enable not only sparse relation reasoning for long-term efficiency but also the reconstruction of future temporal data for accurate prediction. To achieve this, we introduce three characteristics, i.e., predictability, sufficiency, and identifiability, and model them via powerful deep latent dynamics models to infer the predictable signal components. Empirical results on multiple real datasets show the effectiveness of our method for different kinds of time series forecasting. Statistical analysis validates the predictability of the learned latent factors.
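A generic deep latent dynamics sketch of this idea follows (PyTorch; the factor count, the diagonal transition used to keep factors independent, and all names are assumptions, not the paper's model):

```python
# Encode the observed series into a few latent factors, roll each factor
# forward independently, and decode to predict future observations.
import torch
import torch.nn as nn

class LatentDynamicsForecaster(nn.Module):
    def __init__(self, obs_dim: int, n_factors: int = 4):
        super().__init__()
        self.encoder = nn.GRU(obs_dim, n_factors, batch_first=True)
        self.a = nn.Parameter(torch.ones(n_factors))  # per-factor (independent) dynamics
        self.decoder = nn.Linear(n_factors, obs_dim)

    def forward(self, x, horizon: int):
        # x: (batch, time, obs_dim) observed history
        _, z = self.encoder(x)
        z = z.squeeze(0)                    # (batch, n_factors) latent factors
        preds = []
        for _ in range(horizon):
            z = torch.tanh(self.a * z)      # each factor evolves on its own
            preds.append(self.decoder(z))
        return torch.stack(preds, dim=1)    # (batch, horizon, obs_dim)
```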
Object detection typically assumes that training and test data are drawn from an identical distribution, which, however, does not always hold in practice. Such a distribution mismatch leads to a significant performance drop. In this work, we aim to improve the cross-domain robustness of object detection. We tackle the domain shift on two levels: 1) the image-level shift, such as image style, illumination, etc., and 2) the instance-level shift, such as object appearance, size, etc. We build our approach on the recent state-of-the-art Faster R-CNN model and design two domain adaptation components, at the image level and the instance level, to reduce the domain discrepancy. The two domain adaptation components are based on H-divergence theory and are implemented by learning a domain classifier in an adversarial training manner. The domain classifiers at the different levels are further reinforced with a consistency regularization to learn a domain-invariant region proposal network (RPN) in the Faster R-CNN model. We evaluate our newly proposed approach on multiple datasets, including Cityscapes, KITTI, SIM10K, etc. The results demonstrate the effectiveness of our approach for robust object detection in various domain shift scenarios.
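The standard mechanism for training such domain classifiers adversarially is a gradient reversal layer (GRL); here is a minimal sketch, assuming the common GRL formulation rather than this paper's exact implementation:

```python
# Gradient reversal layer: identity on the forward pass, negated (scaled)
# gradient on the backward pass, so the detector's feature extractor is
# pushed toward domain-invariant features while the classifier learns to
# distinguish domains.
import torch
from torch.autograd import Function

class GradReverse(Function):
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

def grad_reverse(x, lambd=1.0):
    # Insert between backbone features and the domain classifier.
    return GradReverse.apply(x, lambd)
```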
Multi-relation question answering is a challenging task due to the need for elaborate analysis of questions and reasoning over multiple fact triples in a knowledge base. In this paper, we present a novel model called the Interpretable Reasoning Network, which employs an interpretable, hop-by-hop reasoning process for question answering. At each hop, the model dynamically decides which part of the input question should be analyzed; predicts a relation that corresponds to the currently parsed results; utilizes the predicted relation to update the question representation and the state of the reasoning process; and then drives the next-hop reasoning. Experiments show that our model yields state-of-the-art results on two datasets. More interestingly, the model offers traceable and observable intermediate predictions for reasoning analysis and failure diagnosis.
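A schematic single reasoning hop in this spirit might look as follows (a generic sketch with invented names, not the Interpretable Reasoning Network's exact equations):

```python
# One hop: attend to part of the question, predict a relation over the KB's
# relation vocabulary, then update the reasoning state for the next hop.
import torch
import torch.nn as nn

class ReasoningHop(nn.Module):
    def __init__(self, dim: int, n_relations: int):
        super().__init__()
        self.attn = nn.Linear(dim, 1)
        self.rel_head = nn.Linear(dim, n_relations)
        self.state_cell = nn.GRUCell(n_relations, dim)

    def forward(self, word_states, state):
        # word_states: (batch, seq, dim); state: (batch, dim)
        scores = self.attn(word_states * state.unsqueeze(1)).squeeze(-1)
        alpha = torch.softmax(scores, dim=-1)              # which words to analyze
        focus = (alpha.unsqueeze(-1) * word_states).sum(1) # attended question part
        rel = torch.softmax(self.rel_head(focus), dim=-1)  # predicted relation
        return rel, self.state_cell(rel, state)            # next-hop state
```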
Image segmentation is considered one of the critical tasks in hyperspectral remote sensing image processing. Recently, the convolutional neural network (CNN) has established itself as a powerful model for segmentation and classification by demonstrating excellent performance. The use of a graphical model such as a conditional random field (CRF) further helps capture contextual information and thus improves segmentation performance. In this paper, we propose a method to segment hyperspectral images by considering both spectral and spatial information within a combined framework consisting of a CNN and a CRF. We use multiple spectral cubes to learn deep features with the CNN, and then formulate a deep CRF with CNN-based unary and pairwise potential functions to effectively extract the semantic correlations between patches consisting of three-dimensional data cubes. Effective piecewise training is applied in order to avoid the computationally expensive iterative CRF inference. Furthermore, we introduce a deep deconvolution network that improves the segmentation masks. We also introduce a new dataset and evaluate our proposed method on it, along with several widely adopted benchmark datasets. By comparing our results with those of several state-of-the-art models, we show the promising potential of our method.
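A minimal sketch of the piecewise-training idea follows (illustrative heads and shapes, not the paper's architecture): each potential is normalized locally and trained as an independent classifier, so no iterative CRF inference is needed during training:

```python
# Piecewise training for a CNN-CRF: the unary and pairwise potential networks
# are fit with independent local losses instead of a joint CRF likelihood.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PiecewiseCRFHeads(nn.Module):
    def __init__(self, feat_dim: int, n_classes: int):
        super().__init__()
        self.n_classes = n_classes
        self.unary = nn.Linear(feat_dim, n_classes)                     # node potential
        self.pairwise = nn.Linear(2 * feat_dim, n_classes * n_classes)  # edge potential

    def piecewise_loss(self, f_i, f_j, y_i, y_j):
        # f_i, f_j: CNN features of neighboring patches; y_i, y_j: their labels.
        unary_loss = F.cross_entropy(self.unary(f_i), y_i)
        pair_logits = self.pairwise(torch.cat([f_i, f_j], dim=-1))
        pair_loss = F.cross_entropy(pair_logits, y_i * self.n_classes + y_j)
        return unary_loss + pair_loss
```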