This study conducts a thorough evaluation of text augmentation techniques across a variety of datasets and natural language processing (NLP) tasks, addressing the lack of reliable, generalized evidence for these methods. It examines how effectively these techniques augment training sets to improve performance on tasks such as topic classification, sentiment analysis, and offensive language detection. The research emphasizes not only the augmentation methods themselves but also the strategic order in which real and augmented instances are introduced during training. A major contribution is the development and evaluation of Modified Cyclical Curriculum Learning (MCCL) for augmented datasets, a novel approach in the field. Results show that specific augmentation methods, especially when integrated with MCCL, significantly outperform traditional training in model performance. These findings underscore the need to select augmentation techniques and sequencing strategies carefully in order to balance training speed against quality gains across NLP tasks. The study concludes that augmentation, particularly in conjunction with MCCL, improves results across a range of classification tasks, providing a foundation for future advances in text augmentation strategies in NLP.
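The abstract does not spell out MCCL's schedule, but the core idea of ordering real and augmented instances across training cycles can be sketched. Below is a minimal, hypothetical Python illustration; the cycle count, the growing augmented share, and the difficulty function are all assumptions, not the paper's actual algorithm:

```python
import random

def mccl_schedule(real, augmented, difficulty, cycles=3):
    """Toy cyclical curriculum over real + augmented data: each cycle
    admits a larger share of augmented instances and alternates between
    an easy-to-hard ordering and a plain shuffled pass."""
    passes = []
    for c in range(1, cycles + 1):
        share = c / cycles                      # augmented share grows per cycle
        k = int(len(augmented) * share)
        pool = real + random.sample(augmented, k)
        if c % 2 == 1:
            pool.sort(key=difficulty)           # odd cycles: curriculum order
        else:
            random.shuffle(pool)                # even cycles: shuffled pass
        passes.append(pool)
    return passes

# Hypothetical usage, with sentence length as a crude difficulty proxy:
# passes = mccl_schedule(real_data, aug_data, lambda ex: len(ex[0].split()))
```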
This paper investigates guesswork over ordered statistics and formulates the complexity of ordered statistics decoding (OSD) in binary additive white Gaussian noise (AWGN) channels. It first develops a new upper bound on guesswork for independent sequences by applying Hölder's inequality to Hamming-shell-based subspaces. This bound is then extended to ordered statistics by constructing conditionally independent sequences within the ordered statistics sequences. We leverage the established bounds to formulate the best achievable decoding complexity of OSD that ensures no loss in error performance, where OSD stops immediately once the correct codeword estimate is found. We show that the average complexity of OSD at maximum decoding order can be accurately approximated by a modified Bessel function, which increases near-exponentially with code dimension. We also identify a complexity saturation threshold: increasing the OSD decoding order beyond this threshold improves error performance without further raising decoding complexity. Finally, the paper offers insights into applying these findings to improve the efficiency of practical decoder implementations.
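For context on why complexity matters here: an order-m OSD re-processes test error patterns of weight up to m over the k most reliable independent positions, so its worst-case candidate count is the standard binomial sum below. The paper's bounds characterize the much smaller average complexity under early stopping; this sketch shows only the baseline worst case:

```python
from math import comb

def osd_max_candidates(k: int, m: int) -> int:
    """Worst-case number of test error patterns evaluated by order-m OSD:
    all flips of weight <= m over the k most reliable independent
    positions, i.e. sum_{i=0}^{m} C(k, i)."""
    return sum(comb(k, i) for i in range(m + 1))

# e.g. a (128, 64) code decoded at order 3 tests up to
print(osd_max_candidates(64, 3))   # 43745 candidates in the worst case
```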
The rapid evolution of programming languages and software systems has created a need for clone detection tools that are both multilingual and scalable. However, these two requirements are difficult to satisfy simultaneously, and most existing tools address only one of them. In this work, we propose TGMM, a tree- and GPU-based tool for multilingual, multi-granularity code clone detection. By generating parse trees from user-provided grammar files, TGMM can extract code blocks at a specified granularity and detect Type-3 clones efficiently. To assess its performance, we compare TGMM with seven state-of-the-art tools in terms of recall, precision, and execution time. TGMM ranks first in execution time and precision, while its recall is comparable to the others. Moreover, we analyze the language extensibility of TGMM across 30 mainstream programming languages: 25 are supported, while the remaining five currently lack the necessary grammar files. Finally, we analyze the clone characteristics of nine popular languages at five common granularities, which we hope will inspire future research. The source code of TGMM is available at: //github.com/TGMM24/TGMM.git.
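TGMM's tree-based matching is not detailed in the abstract, but the general mechanism behind Type-3 (near-miss) clone detection, thresholded similarity between code blocks, can be illustrated with a toy token-level sketch. The Dice measure and the example threshold are illustrative assumptions, not TGMM's actual algorithm:

```python
from collections import Counter

def clone_similarity(tokens_a, tokens_b):
    """Dice similarity over token multisets; block pairs above a chosen
    threshold (e.g. 0.7) are reported as Type-3 clone candidates."""
    a, b = Counter(tokens_a), Counter(tokens_b)
    overlap = sum((a & b).values())
    return 2 * overlap / (sum(a.values()) + sum(b.values()))

f1 = "int sum ( int a , int b ) { return a + b ; }".split()
f2 = "int add ( int x , int y ) { return x + y ; }".split()
print(clone_similarity(f1, f2))   # ~0.69: a near-miss clone candidate
```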
This paper introduces a novel multi-step preview foot placement planning algorithm designed to enhance the robustness of bipedal robotic walking across challenging terrains with restricted footholds. Traditional one-step preview planning struggles to maintain stability when stepping areas are severely limited, such as on random stepping stones. In this work, we develop a discrete-time Model Predictive Control (MPC) formulation based on the step-to-step discrete evolution of the Divergent Component of Motion (DCM) of bipedal locomotion. The approach adaptively changes the step duration for optimal foot placement under constraints, ensuring the robot's viability over multiple future steps and significantly improving its ability to navigate environments with tight constraints on possible footholds. The effectiveness of the planning algorithm is demonstrated through simulations covering a variety of complex stepping-stone configurations and external perturbations. These tests underscore the algorithm's improved performance in navigating foothold-restricted environments, even in the presence of external disturbances.
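The step-to-step DCM evolution the MPC builds on has a well-known closed form under the linear inverted pendulum model: holding foothold p for a step of duration T gives xi' = p + (xi - p) e^(omega T). A minimal sketch, with the CoM height and the one-step placement rule as illustrative assumptions (the paper's MPC optimizes placements and durations jointly over multiple steps):

```python
import numpy as np

g, z0 = 9.81, 0.8               # gravity and an assumed nominal CoM height
omega = np.sqrt(g / z0)         # natural frequency of the linear inverted pendulum

def dcm_step(xi, p, T):
    """Step-to-step DCM evolution with foothold p held for duration T."""
    return p + (xi - p) * np.exp(omega * T)

def foot_placement(xi, xi_des, T):
    """One-step preview: the foothold that drives the end-of-step DCM to
    xi_des, solved from xi_des = p + (xi - p) * e^(omega T)."""
    e = np.exp(omega * T)
    return (xi_des - xi * e) / (1.0 - e)

# Example: DCM currently 0.1 m ahead of the desired value.
p = foot_placement(xi=0.1, xi_des=0.0, T=0.4)
print(p, dcm_step(0.1, p, 0.4))  # end-of-step DCM returns to ~0.0
```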
In this study, the statistical downscaling model (SDSM) is employed for downscaling precipitation (PREC), maximum temperature (Tmax) and minimum temperature (Tmin) over the Krishna River Basin (KRB). Outputs of the Canadian Earth System Model, version 2 (CanESM2) General Circulation Model (GCM) are used as predictor variables. First, the SDSM is calibrated using 30 years (1961-1990) of data and subsequently validated over 15 years (1991-2005). After its satisfactory performance is confirmed, the SDSM is used to project the predictand variables (PREC, Tmax and Tmin) for the 21st century under three Representative Concentration Pathway (RCP) scenarios, viz. RCP2.6, RCP4.5 and RCP8.5. The future period is divided into three 30-year time slices: epoch-1 (2011-2040), epoch-2 (2041-2070) and epoch-3 (2071-2100). The period 1976-2005 is taken as the baseline, and all future results are compared against it. The results are analysed at various temporal scales, i.e., monthly, seasonal and annual. The study reveals that the KRB is projected to become wetter in all seasons; results are discussed for the worst-case scenario, RCP8.5 epoch-3. The average annual maximum and minimum temperatures are expected to increase. An extreme-event analysis is also carried out using the 90th and 95th percentile values; both extremes are projected to increase, and more events are projected to exceed these thresholds. The outcomes of this study can be used in flood modelling for the KRB, as well as in modelling future irrigation demands and planning optimal irrigation in the KRB culturable command area.
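The percentile-based extreme-event analysis described above is straightforward to reproduce for any daily series. A minimal sketch, assuming hypothetical baseline and projected precipitation arrays:

```python
import numpy as np

def extreme_event_stats(series, q=(90, 95)):
    """Percentile thresholds and exceedance counts for a daily series
    (e.g. baseline vs. projected precipitation)."""
    thresholds = np.percentile(series, q)
    counts = [int((series > t).sum()) for t in thresholds]
    return dict(zip(q, zip(thresholds, counts)))

# Hypothetical comparison of baseline vs. RCP8.5 epoch-3 daily precipitation:
# base_stats = extreme_event_stats(baseline_prec)
# fut_stats  = extreme_event_stats(epoch3_prec)
```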
Untargeted metabolomics based on liquid chromatography-mass spectrometry technology is quickly gaining widespread application, given its ability to depict the global metabolic pattern in biological samples. However, the data are noisy, and the features measured from samples lack clear identities: multiple potential matchings exist between data features and known metabolites, whereas the ground truth can only be a one-to-one match. Some existing methods attempt to reduce this matching uncertainty but fall far short of removing it for most features, and the remaining uncertainty causes major difficulty in downstream functional analysis. To address these issues, we develop a novel approach for Bayesian Analysis of Untargeted Metabolomics data (BAUM) that integrates previously separate tasks into a single framework, including matching uncertainty inference, metabolite selection, and functional analysis. By incorporating the knowledge graph between variables and using relatively simple assumptions, BAUM can analyze datasets with small sample sizes. By allowing different confidence levels of feature-metabolite matching, the method is applicable to datasets in which feature identities are only partially known. Simulation studies demonstrate that, compared with existing methods, BAUM achieves better accuracy in selecting important metabolites that tend to be functionally consistent and in assigning confidence scores to feature-metabolite matches. We analyze a COVID-19 metabolomics dataset and a mouse brain metabolomics dataset using BAUM. Even with a very small sample size of 16 mice per group, BAUM is robust and stable. It finds pathways that conform to existing knowledge, as well as novel pathways that are biologically plausible.
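BAUM's full model is not given in the abstract, but the notion of a confidence score over candidate feature-metabolite matches can be illustrated with a deliberately simplified toy. The evidence scores below are hypothetical, and BAUM additionally enforces one-to-one matching and borrows strength through the knowledge graph:

```python
import numpy as np

def match_confidence(candidate_scores):
    """Toy per-feature posterior: normalise nonnegative evidence scores
    (e.g. m/z match quality) over the candidate metabolites of a single
    feature into match probabilities."""
    s = np.asarray(candidate_scores, dtype=float)
    return s / s.sum()

# Hypothetical feature with three candidate metabolites:
print(match_confidence([0.9, 0.3, 0.1]))  # -> [0.692 0.231 0.077]
```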
Knowledge graph embedding (KGE) is an increasingly popular technique that aims to represent the entities and relations of knowledge graphs in low-dimensional semantic spaces for a wide spectrum of applications such as link prediction, knowledge reasoning and knowledge completion. In this paper, we provide a systematic review of existing KGE techniques based on their representation spaces. In particular, we build a fine-grained classification that categorises the models from three mathematical perspectives on representation spaces: (1) the algebraic perspective, (2) the geometric perspective, and (3) the analytical perspective. We introduce rigorous definitions of the fundamental mathematical spaces before diving into KGE models and their mathematical properties. We then discuss different KGE methods across the three categories, and summarise how spatial advantages serve different embedding needs. By collating experimental results from downstream tasks, we also explore the advantages of each mathematical space in different scenarios and the reasons behind them. We conclude with promising research directions from a representation space perspective, with which we hope to inspire researchers to design KGE models and related applications with greater consideration of the properties of their mathematical spaces.
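As a concrete instance of the algebraic (Euclidean) perspective, TransE scores a triple by the translation error between embeddings. A minimal sketch, where the embedding dimension and random vectors are purely illustrative:

```python
import numpy as np

def transe_score(h, r, t):
    """TransE: a triple (h, r, t) is plausible when h + r ~ t in Euclidean
    space, so the score is the negative translation error -||h + r - t||."""
    return -np.linalg.norm(h + r - t)

rng = np.random.default_rng(0)
h, r, t = (rng.normal(size=50) for _ in range(3))
print(transe_score(h, r, t))   # higher (closer to 0) = more plausible
```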
Influenced by the stunning success of deep learning in computer vision and language understanding, research in recommendation has shifted to inventing new recommender models based on neural networks. In recent years, we have witnessed significant progress in developing neural recommender models, which generalize and surpass traditional recommender models owing to the strong representation power of neural networks. In this survey paper, we conduct a systematic review of neural recommender models, aiming to summarize the field and facilitate future progress. Distinct from existing surveys that categorize methods according to a taxonomy of deep learning techniques, we instead summarize the field from the perspective of recommendation modeling, which may be more instructive to researchers and practitioners working on recommender systems. Specifically, we divide the work into three types based on the data used for recommendation modeling: 1) collaborative filtering models, which leverage the key source of user-item interaction data; 2) content-enriched models, which additionally utilize side information associated with users and items, such as user profiles and item knowledge graphs; and 3) context-enriched models, which account for contextual information associated with an interaction, such as time, location, and past interactions. After reviewing representative works of each type, we discuss promising directions in this field, including benchmarking recommender systems, graph-reasoning-based recommendation models, and explainable and fair recommendations for social good.
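As a reference point for the collaborative filtering category, the classic matrix factorization baseline that neural models generalize can be sketched in a few lines; the hyperparameters and SGD update are illustrative assumptions:

```python
import numpy as np

class MFRecommender:
    """Minimal matrix-factorization collaborative filtering: users and
    items get d-dimensional embeddings, and the predicted preference is
    their dot product, trained by SGD on observed interactions."""
    def __init__(self, n_users, n_items, d=16, lr=0.05, reg=0.01):
        rng = np.random.default_rng(0)
        self.U = rng.normal(scale=0.1, size=(n_users, d))
        self.V = rng.normal(scale=0.1, size=(n_items, d))
        self.lr, self.reg = lr, reg

    def fit(self, interactions, epochs=20):
        for _ in range(epochs):
            for u, i, y in interactions:            # (user, item, rating)
                pu, qi = self.U[u].copy(), self.V[i].copy()
                err = y - pu @ qi
                self.U[u] += self.lr * (err * qi - self.reg * pu)
                self.V[i] += self.lr * (err * pu - self.reg * qi)

    def predict(self, u, i):
        return self.U[u] @ self.V[i]

# Hypothetical usage on (user, item, rating) triples:
# model = MFRecommender(n_users=100, n_items=50)
# model.fit([(0, 3, 5.0), (1, 3, 3.0)]); print(model.predict(0, 3))
```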
Many tasks in natural language processing can be viewed as multi-label classification problems. However, most existing models are trained with the standard cross-entropy loss function and use a fixed prediction policy (e.g., a threshold of 0.5) for all labels, which completely ignores the complexity of, and dependencies among, different labels. In this paper, we propose a meta-learning method to capture these complex label dependencies. More specifically, our method utilizes a meta-learner to jointly learn training policies and prediction policies for different labels. The training policies are used to train the classifier with the cross-entropy loss function, and the prediction policies are then applied at inference time. Experimental results on fine-grained entity typing and text classification demonstrate that our proposed method obtains more accurate multi-label classification results.
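The gap between a fixed threshold and learned per-label prediction policies is easy to see in a toy example. The per-label thresholds below are hypothetical stand-ins for what a meta-learner might output; the paper learns them jointly with the training policies:

```python
import numpy as np

def predict_fixed(probs, tau=0.5):
    """Standard policy: one global threshold for every label."""
    return (probs >= tau).astype(int)

def predict_per_label(probs, taus):
    """Learned policy: a separate threshold per label."""
    return (probs >= np.asarray(taus)).astype(int)

probs = np.array([0.45, 0.60, 0.30])                  # label probabilities
print(predict_fixed(probs))                           # [0 1 0]
print(predict_per_label(probs, [0.40, 0.70, 0.25]))   # [1 0 1]
```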
Small data challenges have emerged in many learning problems, since the success of deep neural networks often relies on the availability of huge amounts of labeled data that are expensive to collect. To address this challenge, many efforts have been made to train complex models with small data in unsupervised and semi-supervised fashions. In this paper, we review recent progress on these two major categories of methods. A wide spectrum of small data models is organized into a big picture, showing how they interplay with each other to motivate the exploration of new ideas. We review the criteria for learning transformation-equivariant, disentangled, self-supervised and semi-supervised representations, which underpin the foundations of recent developments. Many instantiations of unsupervised and semi-supervised generative models have been developed on the basis of these criteria, greatly expanding the territory of existing autoencoders, generative adversarial nets (GANs) and other deep networks by exploiting the distribution of unlabeled data for more powerful representations. While we focus on unsupervised and semi-supervised methods, we also provide a broader review of other emerging topics, from unsupervised and semi-supervised domain adaptation to the fundamental roles of transformation equivariance and invariance in training a wide spectrum of deep networks. It is impossible to write an exhaustive encyclopedia covering all related works; instead, we aim to explore the main ideas, principles and methods in this area, to reveal where we are heading on the journey towards addressing small data challenges in this big data era.
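One concrete instantiation of the self-supervised criteria reviewed here is the rotation-prediction pretext task. A sketch, assuming RotNet-style 90-degree rotations as the transformation family:

```python
import numpy as np

def rotation_pretext(images):
    """Self-supervised pretext task: rotate each image by 0/90/180/270
    degrees and label it with the rotation index. A network trained to
    predict the rotation learns transformation-aware features without
    any human labels."""
    xs, ys = [], []
    for img in images:
        for k in range(4):                # k * 90 degrees
            xs.append(np.rot90(img, k))
            ys.append(k)
    return np.stack(xs), np.array(ys)

imgs = np.random.rand(8, 32, 32)          # toy unlabeled batch
x, y = rotation_pretext(imgs)
print(x.shape, y.shape)                   # (32, 32, 32) (32,)
```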
To answer natural language questions over knowledge graphs, most processing pipelines involve entity and relation linking. Traditionally, entity linking and relation linking have been performed either as dependent sequential tasks or as independent parallel tasks. In this paper, we propose a framework called "EARL", which performs entity linking and relation linking as a joint, single task. EARL uses a graph-connection-based solution to the problem. We first model the linking task as an instance of the Generalised Travelling Salesman Problem (GTSP) and solve it with approximate GTSP algorithms. We then develop a second solution based on pairwise graph distances: the system determines the best semantic connection between all keywords of the question by referring to a knowledge graph, exploiting the "connection density" between entity candidates and relation candidates. The connection-density-based solution performs on par with the approximate GTSP solution. We have empirically evaluated the framework on a dataset of 5,000 questions. Our system surpasses the state of the art on the entity linking task, achieving an accuracy of 0.65 compared with 0.40 for the next best entity linker.
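The "connection density" idea can be illustrated with an exhaustive toy: among all ways of picking one candidate per keyword, choose the combination whose nodes lie closest together in the knowledge graph. The tiny graph and candidate sets below are hypothetical, and EARL uses scalable approximations rather than enumeration:

```python
import itertools
import networkx as nx

def densest_assignment(kg, candidate_lists):
    """Pick one candidate per keyword so that the chosen nodes are most
    tightly connected (smallest sum of pairwise KG distances)."""
    best, best_cost = None, float("inf")
    for combo in itertools.product(*candidate_lists):
        cost = sum(nx.shortest_path_length(kg, a, b)
                   for a, b in itertools.combinations(combo, 2))
        if cost < best_cost:
            best, best_cost = combo, cost
    return best

# Hypothetical KG and candidate sets for two question keywords:
kg = nx.Graph([
    ("Einstein", "physicist"), ("physicist", "physics"),
    ("Einstein_Bros", "bagels"), ("bagels", "food"), ("food", "physics"),
])
print(densest_assignment(kg, [["Einstein", "Einstein_Bros"], ["physicist"]]))
# -> ('Einstein', 'physicist'): the closely connected pair wins
```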