This study investigates the translation of circumlocution from Arabic to English in a corpus of short stories by renowned Arabic authors. By analyzing the source and target texts, the study aims to identify and categorize circumlocution instances in Arabic and their corresponding renditions in English. The study employs Nida's (1964) translation theory as a framework to assess the appropriateness of the translation strategies employed. It examines the extent to which translators successfully rendered Arabic circumlocution into English, identifying potential challenges and limitations in the translation process. The findings reveal significant similarities between Arabic circumlocution categories and English metadiscourse categories, particularly in terms of textual and interpersonal functions. However, the study also highlights instances where translators encountered difficulties in accurately conveying the nuances of circumlocution, often resorting to strategies like addition, subtraction, and alteration.
This study explores the role of pretesting, when integrated with conversational AI tools such as ChatGPT, in enhancing learning outcomes. Drawing on existing research demonstrating the benefits of pretesting for memory activation and retention, this experiment extends those insights to digital learning environments. The study used a randomized true-experimental design. Participants were divided into two groups: one engaged in pretesting before using ChatGPT for a problem-solving task involving chi-square analysis, while the control group accessed ChatGPT immediately. The results indicate that the pretest group significantly outperformed the no-pretest group on a subsequent test, suggesting that pretesting enhances the retention of complex material. This study contributes to the field by demonstrating that pretesting can augment the learning process in technology-assisted environments by priming memory and promoting active engagement with the material. The findings also suggest that learning strategies like pretesting remain relevant amid rapidly evolving AI technologies. Further research directions and practical implications are presented.
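To make the experimental task concrete, the following is a minimal sketch of the kind of chi-square computation the problem-solving task might involve. The counts and table shape are purely illustrative assumptions, not the study's actual materials.

```python
# Hypothetical chi-square computation (illustrative counts, not study data).

def chi_square(observed, expected):
    """Pearson's chi-square statistic: sum((O - E)^2 / E) over all cells."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# A 2x2 contingency table flattened into observed vs. expected counts
observed = [30, 20, 10, 40]
expected = [24, 26, 16, 34]
stat = chi_square(observed, expected)
print(round(stat, 3))
```

The resulting statistic would then be compared against a chi-square critical value for the table's degrees of freedom.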
This study investigates whether assessments fostering higher-order thinking skills can reduce plagiarism involving generative AI tools. Participants completed three tasks of varying complexity in four groups: control, e-textbook, Google, and ChatGPT. Findings show that AI plagiarism decreases as task complexity increases, with higher-order tasks resulting in lower similarity scores and AI plagiarism percentages. The study also highlights the distinction between similarity scores and AI plagiarism, recommending both for effective plagiarism detection. Results suggest that assessments promoting higher-order thinking are a viable strategy for minimizing AI-driven plagiarism.
This work introduces a new approach to Epileptic Spasms (ESES) detection from EEG signals using Vision Transformers (ViT). Classic ESES detection has typically relied on manual processing or conventional algorithms, suffering from small sample sizes, single-channel analyses, and poor generalization. In contrast, the proposed ViT model overcomes these limitations by using the attention mechanism to focus on the important features in multi-channel EEG data, contributing to both better accuracy and efficiency. The model processes frequency-domain representations of EEG signals, such as spectrograms, as image data to capture long-range dependencies and complex patterns in the signal. It achieves high performance, with an accuracy of 97%, without requiring intensive data preprocessing, rendering it suitable for large-scale, real-time clinical applications. The method represents a significant advance in the detection and analysis of neurological disorders such as ESES.
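The first step of treating a spectrogram "as image data" can be sketched as ViT-style patch tokenization: the 2-D time-frequency map is cut into fixed-size patches, each flattened into a token the attention mechanism operates over. Shapes and patch size below are illustrative assumptions, not the paper's actual configuration.

```python
# Minimal sketch of spectrogram-to-token patching for a Vision Transformer
# (toy sizes; the real model's patching and embedding are more involved).

def to_patches(spec, patch_h, patch_w):
    """Split a 2-D spectrogram (freq x time) into flattened, non-overlapping
    patches -- the token sequence a ViT attends over."""
    rows, cols = len(spec), len(spec[0])
    patches = []
    for r in range(0, rows - patch_h + 1, patch_h):
        for c in range(0, cols - patch_w + 1, patch_w):
            patch = [spec[r + i][c + j]
                     for i in range(patch_h) for j in range(patch_w)]
            patches.append(patch)
    return patches

# A toy 4x4 "spectrogram" split into four 2x2 patches -> 4 tokens of length 4
spec = [[1, 2, 3, 4],
        [5, 6, 7, 8],
        [9, 10, 11, 12],
        [13, 14, 15, 16]]
tokens = to_patches(spec, 2, 2)
print(len(tokens), tokens[0])
```

Each token would then be linearly projected and fed to the transformer, whose attention can weight informative time-frequency regions across all EEG channels.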
With the increasing popularity of Deep Neural Networks (DNNs), the need for tools that assist developers in implementing, testing, and debugging DNNs also grows. Several approaches have been proposed that automatically analyse and localise potential faults in DNNs under test. In this work, we evaluate and compare existing state-of-the-art fault localisation techniques, which operate on both dynamic and static analysis of the DNN. The evaluation is performed on a benchmark consisting of both real faults obtained from bug reporting platforms and faulty models produced by a mutation tool. Our findings indicate that using a single, specific ground truth (e.g., the human-defined one) to evaluate DNN fault localisation tools results in rather low performance (maximum average recall of 0.31 and precision of 0.23). However, these figures increase when considering alternative, equivalent patches that exist for a given faulty DNN. Results indicate that \dfd is the most effective tool, achieving an average recall of 0.61 and precision of 0.41 on our benchmark.
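The effect of admitting equivalent patches can be illustrated with a toy scoring sketch: a tool's suspect list scored against a single ground truth versus the best match over several equivalent fixes. The fault labels below are made up for illustration and are not the benchmark's data.

```python
# Illustrative sketch: fault-localisation recall against one ground truth
# vs. the best over alternative, equivalent patches (toy labels).

def recall(suspects, ground_truth):
    """Fraction of ground-truth fault locations the tool flagged."""
    hits = len(set(suspects) & set(ground_truth))
    return hits / len(ground_truth)

suspects = ["lr", "batch_size", "activation"]   # tool output
single_gt = ["activation", "loss"]              # one human-defined patch
equivalent = [["activation", "loss"], ["lr"]]   # alternative equivalent fixes

print(recall(suspects, single_gt))              # 0.5
best = max(recall(suspects, gt) for gt in equivalent)
print(best)                                     # 1.0
```

Scoring against the best equivalent patch, rather than a fixed one, is what lifts the reported recall and precision figures.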
The study of biomechanics during locomotion provides valuable insights into the effects of varying conditions on specific movement patterns. This research focuses on examining the influence of different shoe parameters on walking biomechanics, aiming to understand their impact on gait patterns. To achieve this, various methodologies are explored to estimate human body biomechanics, including computer vision techniques and wearable devices equipped with advanced sensors. Given privacy considerations and the need for robust, accurate measurements, this study employs wearable devices with Inertial Measurement Unit (IMU) sensors. These devices offer a non-invasive, precise, and high-resolution approach to capturing biomechanical data during locomotion. Raw sensor data collected from wearable devices is processed using an Extended Kalman Filter to reduce noise and extract meaningful information. This includes calculating joint angles throughout the gait cycle, enabling a detailed analysis of movement dynamics. The analysis identifies correlations between shoe parameters and key gait characteristics, such as stability, mobility, step time, and propulsion forces. The findings provide deeper insights into how footwear design influences walking efficiency and biomechanics. This study paves the way for advancements in footwear technology and contributes to the development of personalized solutions for enhancing gait performance and mobility.
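The noise-reduction step can be sketched with a scalar Kalman filter over a noisy angle signal. The study uses a full Extended Kalman Filter over IMU states, so this is a conceptual simplification with made-up noise parameters.

```python
# Minimal 1-D Kalman-filter sketch (toy stand-in for the study's EKF;
# process noise q and measurement noise r are assumed values).

def kalman_smooth(measurements, q=1e-3, r=0.25):
    """Estimate a slowly varying angle from noisy samples."""
    x, p = measurements[0], 1.0   # initial state estimate and uncertainty
    out = []
    for z in measurements:
        p += q                    # predict: uncertainty grows over time
        k = p / (p + r)           # Kalman gain: trust in the new measurement
        x += k * (z - x)          # update with the measurement residual
        p *= (1 - k)              # uncertainty shrinks after the update
        out.append(x)
    return out

noisy_angle = [10.0, 10.4, 9.7, 10.2, 9.9, 10.1]   # degrees, illustrative
estimate = kalman_smooth(noisy_angle)
print(round(estimate[-1], 2))
```

In the study's setting, the EKF additionally fuses accelerometer and gyroscope channels through a nonlinear state model before joint angles are derived.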
Active imaging systems sample the Transient Light Transport Matrix (TLTM) for a scene by sequentially illuminating various positions in this scene using a controllable light source, and then measuring the resulting spatiotemporal light transport with time of flight (ToF) sensors. Time-resolved Non-line-of-sight (NLOS) imaging employs an active imaging system that measures part of the TLTM of an intermediary relay surface, and uses the indirect reflections of light encoded within this TLTM to "see around corners". Such imaging systems have applications in diverse areas such as disaster response, remote surveillance, and autonomous navigation. While existing NLOS imaging systems usually measure a subset of the full TLTM, development of customized gated Single Photon Avalanche Diode (SPAD) arrays \cite{riccardo_fast-gated_2022} has made it feasible to probe the full measurement space. In this work, we demonstrate that the full TLTM on the relay surface can be processed with efficient algorithms to computationally focus and detect our illumination in different parts of the hidden scene, turning the relay surface into a second-order active imaging system. These algorithms allow us to iterate on the measured, first-order TLTM, and extract a \textbf{second order TLTM for surfaces in the hidden scene}. We showcase three applications of TLTMs in NLOS imaging: (1) Scene Relighting with novel illumination, (2) Separation of direct and indirect components of light transport in the hidden scene, and (3) Dual Photography. Additionally, we empirically demonstrate that SPAD arrays enable parallel acquisition of photons, effectively mitigating long acquisition times.
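Because light transport is linear, "computationally focusing" the illumination amounts to combining columns of the measured TLTM (each the response to one illumination position) with a weight vector, i.e., a matrix-vector product. The matrix and weights below are toy values, not measured data.

```python
# Conceptual sketch of computational focusing via the linearity of light
# transport: response to a synthetic illumination w is T @ w (toy TLTM).

def matvec(T, w):
    """Combine TLTM columns: each row is one sensor sample's response."""
    return [sum(t_ij * w_j for t_ij, w_j in zip(row, w)) for row in T]

# Toy first-order TLTM: 3 sensor samples x 2 illumination positions
T = [[1.0, 0.0],
     [0.5, 0.5],
     [0.0, 1.0]]
w = [0.8, 0.2]   # weights "steering" illumination toward position 0
print(matvec(T, w))
```

The same linear-algebra view underlies the listed applications: relighting is a change of `w`, and dual photography exploits the (reciprocal) transpose of the transport matrix.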
This survey presents an in-depth exploration of knowledge distillation (KD) techniques within the realm of Large Language Models (LLMs), spotlighting the pivotal role of KD in transferring sophisticated capabilities from proprietary giants such as GPT-4 to accessible, open-source models like LLaMA and Mistral. Amidst the evolving AI landscape, this work elucidates the critical disparities between proprietary and open-source LLMs, demonstrating how KD serves as an essential conduit for imbuing the latter with the former's advanced functionalities and nuanced understandings. Our survey is meticulously structured around three foundational pillars: algorithm, skill, and verticalization -- providing a comprehensive examination of KD mechanisms, the enhancement of specific cognitive abilities, and their practical implications across diverse fields. Crucially, the survey navigates the intricate interplay between data augmentation (DA) and KD, illustrating how DA emerges as a powerful paradigm within the KD framework to bolster LLMs' performance. By leveraging DA to generate context-rich, skill-specific training data, KD transcends traditional boundaries, enabling open-source models to approximate the contextual adeptness, ethical alignment, and deep semantic insights characteristic of their proprietary counterparts. This work aims to provide an insightful guide for researchers and practitioners, offering a detailed overview of current methodologies in knowledge distillation and proposing future research directions. By bridging the gap between proprietary and open-source LLMs, this survey underscores the potential for more accessible, efficient, and sustainable AI solutions, fostering a more inclusive and equitable landscape in AI advancements. An associated Github repository is available at //github.com/Tebmer/Awesome-Knowledge-Distillation-of-LLMs.
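At the core of most KD algorithms the survey covers sits the classic soft-label distillation loss: a KL divergence between temperature-softened teacher and student distributions. The sketch below shows only this core mechanism; the surveyed LLM-oriented methods (sequence-level, DA-driven, etc.) are considerably more elaborate.

```python
# Hedged sketch of Hinton-style soft-label distillation loss
# (toy logits; real LLM KD operates over token sequences).

import math

def softmax(logits, t=1.0):
    """Temperature-softened probability distribution over logits."""
    exps = [math.exp(l / t) for l in logits]
    s = sum(exps)
    return [e / s for e in exps]

def distill_loss(teacher_logits, student_logits, t=2.0):
    """KL(teacher || student) on temperature-softened distributions."""
    p = softmax(teacher_logits, t)
    q = softmax(student_logits, t)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

teacher = [3.0, 1.0, 0.2]   # proprietary model's logits (illustrative)
student = [2.5, 1.2, 0.3]   # open-source model's logits (illustrative)
print(round(distill_loss(teacher, student), 4))
```

A higher temperature flattens both distributions, exposing the teacher's "dark knowledge" about relative probabilities of non-argmax outputs.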
In pace with developments in the research field of artificial intelligence, knowledge graphs (KGs) have attracted a surge of interest from both academia and industry. As a representation of semantic relations between entities, KGs have proven to be particularly relevant for natural language processing (NLP), experiencing a rapid spread and wide adoption within recent years. Given the increasing amount of research work in this area, several KG-related approaches have been surveyed in the NLP research community. However, a comprehensive study that categorizes established topics and reviews the maturity of individual research streams remains absent to this day. Contributing to closing this gap, we systematically analyzed 507 papers from the literature on KGs in NLP. Our survey encompasses a multifaceted review of tasks, research types, and contributions. As a result, we present a structured overview of the research landscape, provide a taxonomy of tasks, summarize our findings, and highlight directions for future work.
We study the problem of incorporating prior knowledge into a deep Transformer-based model, i.e., Bidirectional Encoder Representations from Transformers (BERT), to enhance its performance on semantic textual matching tasks. By probing and analyzing what BERT already knows when solving this task, we obtain a better understanding of which task-specific knowledge BERT needs most and where it is most needed. The analysis further motivates us to take a different approach than most existing works. Instead of using prior knowledge to create a new training task for fine-tuning BERT, we directly inject knowledge into BERT's multi-head attention mechanism. This leads to a simple yet effective approach with a fast training stage, as it spares the model from training on additional data or tasks beyond the main task. Extensive experiments demonstrate that the proposed knowledge-enhanced BERT consistently improves semantic textual matching performance over the original BERT model, and the performance benefit is most salient when training data is scarce.
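One way to picture "injecting knowledge into the attention mechanism" is to bias raw attention scores for token pairs a prior lexical resource marks as related, then renormalize. This is an illustrative sketch under that assumption; the paper's exact injection rule and weighting may differ.

```python
# Illustrative sketch: biasing an attention score matrix toward token
# pairs flagged as related by prior knowledge (toy 2-token example).

import math

def softmax(xs):
    m = max(xs)                       # stabilize the exponentials
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def knowledge_biased_attention(scores, related_pairs, bias=2.0):
    """Add a bias to raw attention scores for related token pairs,
    then renormalize each row with softmax."""
    biased = [row[:] for row in scores]
    for i, j in related_pairs:
        biased[i][j] += bias
    return [softmax(row) for row in biased]

scores = [[0.0, 0.0], [0.0, 0.0]]     # uniform raw scores, 2 tokens
attn = knowledge_biased_attention(scores, related_pairs=[(0, 1)])
print([round(a, 3) for a in attn[0]]) # token 0 now attends more to token 1
```

Because the bias is applied inside the forward pass, no auxiliary training task or extra data is needed, which is what keeps the training stage fast.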
Translational distance-based knowledge graph embedding has shown progressive improvements on the link prediction task, from TransE to the latest state of the art, RotatE. However, N-1, 1-N, and N-N predictions remain challenging. In this work, we propose a novel translational distance-based approach for knowledge graph link prediction. The proposed method is two-fold: first, we extend RotatE from the 2D complex domain to a high-dimensional space with orthogonal transforms, to model relations with greater capacity. Second, the graph context is explicitly modeled via two directed context representations, which are used as part of the distance scoring function to measure the plausibility of triples during training and inference. The proposed approach effectively improves prediction accuracy on the difficult N-1, 1-N, and N-N cases of the knowledge graph link prediction task. Experimental results show that it outperforms the baseline RotatE on two benchmark data sets, especially on FB15k-237, which contains many nodes with high in-degree.
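The core idea of the extension can be sketched as follows: RotatE's 2-D complex rotation is an orthogonal transform, here applied block-diagonally over pairs of embedding dimensions, with the score being a negative translational distance. Dimensions, angles, and the exact scoring details below are illustrative assumptions.

```python
# Sketch of relation-as-orthogonal-transform scoring (toy dimensions).

import math

def rotate_blocks(h, angles):
    """Apply a 2x2 rotation (an orthogonal transform) to each consecutive
    pair of embedding dimensions of the head entity h."""
    out = []
    for k, theta in enumerate(angles):
        x, y = h[2 * k], h[2 * k + 1]
        out.append(x * math.cos(theta) - y * math.sin(theta))
        out.append(x * math.sin(theta) + y * math.cos(theta))
    return out

def score(h, angles, t):
    """Translational-distance score: negative L2 distance ||R(h) - t||."""
    rh = rotate_blocks(h, angles)
    return -math.sqrt(sum((a - b) ** 2 for a, b in zip(rh, t)))

h = [1.0, 0.0, 0.0, 1.0]              # head entity embedding
angles = [math.pi / 2, math.pi / 2]   # relation-specific rotation angles
t = [0.0, 1.0, -1.0, 0.0]             # tail equals the rotated head
print(abs(score(h, angles, t)) < 1e-9)
```

Since orthogonal transforms preserve vector norms, this generalizes RotatE's modulus-preserving rotation to higher dimensions; the paper's context representations would enter the same distance function as additional terms.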