In recent years, the rapid advancement of Large Language Models (LLMs) such as the Generative Pre-trained Transformer (GPT) has attracted increasing attention due to their potential in a variety of practical applications. The application of LLMs with Embodied Intelligence has emerged as a significant area of focus. Among the myriad applications of LLMs, navigation tasks are particularly noteworthy because they demand a deep understanding of the environment and quick, accurate decision-making. LLMs can augment embodied intelligence systems with sophisticated environmental perception and decision-making support, leveraging their robust language and image-processing capabilities. This article offers an exhaustive summary of the symbiosis between LLMs and embodied intelligence, with a focus on navigation. It reviews state-of-the-art models and research methodologies, and assesses the advantages and disadvantages of existing embodied navigation models and datasets. Finally, the article elucidates the role of LLMs in embodied intelligence based on current research and forecasts future directions in the field. A comprehensive list of studies in this survey is available at https://github.com/Rongtao-Xu/Awesome-LLM-EN.
NFTs (Non-Fungible Tokens) have seen significant growth since they first captured public attention in 2021. However, the NFT market is plagued by fake transactions and economic bubbles, e.g., NFT wash trading. Wash trading typically refers to a transaction involving the same person or two colluding individuals, and it has become a major threat to the NFT ecosystem. Previous studies detect NFT wash trading only from the financial aspect, while real-world wash trading cases are much more complicated (e.g., not aiming at inflating the market value). There is still a lack of multi-dimensional analysis to better understand NFT wash trading. Therefore, we present the most comprehensive study of NFT wash trading to date, analyzing 8,717,031 transfer events and 3,830,141 sale events from 2,701,883 NFTs. We first optimize the dataset collected via the OpenSea API. Next, we identify three types of NFT wash trading and propose identification algorithms. Our experimental results reveal 824 transfer events and 5,330 sale events (accounting for a total of $8,857,070.41) and 370 address pairs related to NFT wash trading behaviors, causing a minimum loss of $3,965,247.13. Furthermore, we provide insights from six aspects, i.e., marketplace design, profitability, NFT project design, payment token, user behavior, and NFT ecosystem.
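The abstract does not spell out the identification algorithms, so the sketch below illustrates one plausible heuristic: flagging round-trip sales where an NFT moves from A to B and back to A within a short window, assuming sale records shaped as (token_id, seller, buyer, timestamp). The function name and the window size are hypothetical choices for illustration, not the paper's actual method.

```python
from collections import defaultdict

def flag_round_trip_sales(sales, window_seconds=86_400):
    """Flag sales where an NFT returns to a previous owner shortly after
    being sold (A -> B then B -> A): one simple wash-trading heuristic.
    Hypothetical illustration; the paper identifies three distinct types.

    `sales`: iterable of dicts with keys token_id, seller, buyer, ts.
    """
    history = defaultdict(list)  # token_id -> [(seller, buyer, ts), ...]
    flagged = []
    for sale in sorted(sales, key=lambda s: s["ts"]):
        for seller, buyer, ts in history[sale["token_id"]]:
            # A -> B followed by B -> A within the window looks collusive.
            if (sale["seller"], sale["buyer"]) == (buyer, seller) \
                    and sale["ts"] - ts <= window_seconds:
                flagged.append((sale["token_id"], seller, buyer))
        history[sale["token_id"]].append(
            (sale["seller"], sale["buyer"], sale["ts"]))
    return flagged
```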
In recent years, the rapid advancement and impressive capabilities of Large Language Models (LLMs) have been evident across various domains. This paper explores the application, implications, and potential of LLMs in building energy efficiency and decarbonization studies. The wide-ranging capabilities of LLMs are examined in the context of the building energy field, including intelligent control systems, code generation, data infrastructure, knowledge extraction, and education. Despite this promising potential, challenges are also discussed, including complex and expensive computation; data privacy, security, and copyright; the complexity of fine-tuned LLMs; and self-consistency. The paper concludes with a call for future research focused on the enhancement of LLMs for domain-specific tasks, multi-modal LLMs, and collaborative research between AI and energy experts.
The Tigray War was an armed conflict that took place primarily in the Tigray region of northern Ethiopia from November 3, 2020 to November 2, 2022. Given the importance of agriculture in Tigray to livelihoods and food security, determining the impact of the war on cultivated area is critical, but quantifying this impact was difficult because conflict-driven insecurity and blockades restricted movement within and into the region. Using satellite imagery and statistical area estimation techniques, we assessed changes in crop cultivation area in Tigray before and during the war. Our findings show that cultivated area was largely stable between 2020 and 2021 despite the widespread impacts of the war. We estimated 1,132,000 ± 133,000 hectares of cultivation in pre-war 2020 compared to 1,217,000 ± 132,000 hectares in mid-war 2021. Comparing changes inside and outside of a 5 km buffer around conflict events, we found a slightly higher upper confidence limit of cropland loss within the buffer (0-3%) than outside it (0-1%). Our results support other reports that, despite widespread war-related disruptions, Tigrayan farmers were largely able to sustain cultivation. Our study demonstrates the capability of remote sensing combined with machine learning and statistical techniques to provide timely, transparent area estimates for monitoring food security in regions inaccessible due to conflict.
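The abstract reports design-based area estimates with confidence intervals but does not give the sampling design; as a hedged illustration, the sketch below shows the textbook estimator for a simple random reference sample: a cropland proportion, its standard error, and a 95% interval scaled to the region's area. The function name and the example numbers are made up for illustration and are not the study's figures.

```python
import math

def area_estimate(n_crop, n_total, region_area_ha, z=1.96):
    """Design-based cropland area estimate from a simple random reference
    sample (illustrative; the study's actual sampling design may differ).
    Returns the point estimate and the 95% confidence half-width, in ha."""
    p = n_crop / n_total                     # sampled cropland proportion
    se = math.sqrt(p * (1 - p) / n_total)    # standard error of the proportion
    return p * region_area_ha, z * se * region_area_ha

# Made-up sample: 620 of 2,800 reference points are cropland in a 5.1 Mha region.
estimate, margin = area_estimate(620, 2800, 5_100_000)
print(f"{estimate:,.0f} +/- {margin:,.0f} ha")
```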
In recent years, significant advancements have been made in the text generation capabilities of Large Language Models (LLMs), demonstrating exceptional performance in downstream tasks such as abstractive summarization, dialogue generation, and data-to-text conversion. However, their generative abilities also pose risks such as the rapid spread of fake news, infringement of dataset/LLM copyrights, and challenges to academic integrity. Text watermarking technology emerges as a potential solution. By embedding invisible yet detectable patterns in generated texts, it helps in tracking and verifying text origins, thus preventing misuse and piracy. This survey aims to comprehensively summarize current text watermarking technologies, covering three main aspects: (1) an overview and comparison of different text watermarking techniques; (2) evaluation methods for text watermarking algorithms, including their success rate, impact on text quality, robustness, and unforgeability; (3) potential applications of text watermarking technologies. Ultimately, this survey aims to help researchers thoroughly understand text watermarking technologies, thereby fostering further development.
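As a concrete instance of the techniques such a survey covers, below is a minimal sketch of the widely studied "green list" watermark for LLM-generated text: a hash of the previous token seeds a PRNG that partitions the vocabulary, and the logits of the green half are biased before sampling, so a detector that knows the hash can recount green tokens and run a statistical test. This is one illustrative scheme, not the survey's own algorithm, and the parameter choices are assumptions.

```python
import hashlib
import random

def green_list(prev_token_id, vocab_size, fraction=0.5):
    """Reproducibly derive the 'green' vocabulary subset from the previous
    token: the same hash-based seed lets a detector recompute the
    partition without access to the model."""
    seed = int(hashlib.sha256(str(prev_token_id).encode()).hexdigest(), 16)
    rng = random.Random(seed)
    ids = list(range(vocab_size))
    rng.shuffle(ids)
    return set(ids[: int(fraction * vocab_size)])

def watermark_logits(logits, prev_token_id, delta=2.0):
    """Bias green-list logits by `delta` before sampling, gently steering
    generation toward green tokens (the detectable pattern)."""
    green = green_list(prev_token_id, len(logits))
    return [x + delta if i in green else x for i, x in enumerate(logits)]
```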
As coding challenges become more complex, recent advancements in Large Language Models (LLMs) have led to notable successes, such as achieving a 94.6% solve rate on the HumanEval benchmark. Concurrently, there is an increasing commercial push for repository-level inline code completion tools, such as GitHub Copilot and Tabnine, aimed at enhancing developer productivity. This paper delves into the transition from individual coding problems to repository-scale solutions, presenting a thorough review of the current literature on effective LLM prompting for code generation at the repository level. We examine approaches that work with black-box LLMs, so that they remain useful and applicable to commercial use cases, and assess their ability to interpret code at repository scale. We juxtapose the Repository-Level Prompt Generation technique with RepoCoder, an iterative retrieval and generation method, to highlight the trade-offs inherent in each approach and to establish best practices for their application in cutting-edge coding benchmarks. The interplay between iterative refinement of prompts and the development of advanced retrieval systems forms the core of our discussion, offering a pathway to significantly improve LLM performance in code generation tasks. Insights from this study not only guide the application of these methods but also chart a course for future research to integrate such techniques into broader software engineering contexts.
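To make the iterative retrieval-and-generation idea concrete, here is a simplified sketch in the spirit of RepoCoder: the first round retrieves repository context using the unfinished code alone, and later rounds retrieve using the previous draft completion, which tends to surface better context. `retrieve` and `generate` are stand-ins for a similarity-based code retriever and a black-box LLM call, not any specific API.

```python
def iterative_completion(unfinished_code, repo_chunks, retrieve, generate,
                         rounds=2):
    """Iterative retrieval-augmented code completion (simplified sketch).

    retrieve(query, chunks) -> list of relevant code snippets
    generate(prompt)        -> completion text from a black-box LLM
    """
    draft = ""
    for _ in range(rounds):
        # Round 1 retrieves with the unfinished code alone; later rounds
        # append the previous draft so retrieval reflects the likely answer.
        context = retrieve(unfinished_code + draft, repo_chunks)
        prompt = "\n\n".join(context) + "\n\n" + unfinished_code
        draft = generate(prompt)
    return draft
```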
We present the design of a mixed reality (MR) telehealth training system that aims to close the gap between in-person and distance training and re-training for medical procedures. Our system uses real-time volumetric capture as a means for communicating and relating spatial information between the non-colocated trainee and instructor. The system's design is based on a requirements elicitation study performed in situ at a medical school simulation training center. The focus is on lightweight real-time transmission of volumetric data, meaning the use of consumer hardware, easy and quick deployment, and low computational demands. We evaluate the MR system design by analyzing the users' workload during medical training, comparing in-person, video, and MR training workloads. The results indicate that the overall workload for central line placement training with MR does not increase significantly compared to video communication. Our work shows that, when designed strategically together with domain experts, an MR communication system can be used effectively for complex medical procedural training without significantly increasing the overall workload for users. Moreover, MR systems offer new opportunities for teaching due to spatial information, hand tracking, and augmented communication.
Pre-trained Language Models (PLMs) have achieved great success in various Natural Language Processing (NLP) tasks under the pre-training and fine-tuning paradigm. With large quantities of parameters, PLMs are computation-intensive and resource-hungry. Hence, model pruning has been introduced to compress large-scale PLMs. However, most prior approaches only consider task-specific knowledge for downstream tasks but ignore the essential task-agnostic knowledge during pruning, which may cause catastrophic forgetting and lead to poor generalization ability. To maintain both task-agnostic and task-specific knowledge in the pruned model, we propose ContrAstive Pruning (CAP) under the paradigm of pre-training and fine-tuning. It is designed as a general framework, compatible with both structured and unstructured pruning. Unified in contrastive learning, CAP enables the pruned model to learn from the pre-trained model for task-agnostic knowledge and from the fine-tuned model for task-specific knowledge. Besides, to better retain the performance of the pruned model, the snapshots (i.e., the intermediate models at each pruning iteration) also serve as effective supervision for pruning. Our extensive experiments show that adopting CAP consistently yields significant improvements, especially in extremely high sparsity scenarios. With only 3% of model parameters reserved (i.e., 97% sparsity), CAP successfully retains 99.2% and 96.3% of the original BERT performance on the QQP and MNLI tasks. In addition, our probing experiments demonstrate that the model pruned by CAP tends to achieve better generalization ability.
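The contrastive objective at the heart of CAP can be illustrated with an InfoNCE-style alignment loss: each pruned-model representation is pulled toward the matching teacher representation (pre-trained model, fine-tuned model, or a snapshot) and pushed away from other examples in the batch. This is a minimal sketch of the idea, not the paper's exact loss; the temperature and representation shapes are assumptions.

```python
import torch
import torch.nn.functional as F

def contrastive_alignment_loss(pruned_repr, teacher_repr, temperature=0.1):
    """InfoNCE-style loss aligning pruned-model representations with a
    teacher's (pre-trained, fine-tuned, or snapshot model). Illustrative
    sketch only. Both inputs: (batch, hidden)."""
    p = F.normalize(pruned_repr, dim=-1)
    t = F.normalize(teacher_repr, dim=-1)
    logits = p @ t.T / temperature                     # pairwise similarities
    labels = torch.arange(p.size(0), device=p.device)  # positives on diagonal
    return F.cross_entropy(logits, labels)
```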
Connecting Vision and Language plays an essential role in Generative Intelligence. For this reason, in the last few years, a large research effort has been devoted to image captioning, i.e. the task of describing images with syntactically and semantically meaningful sentences. Since 2015, the task has generally been addressed with pipelines composed of a visual encoding step and a language model for text generation. Over these years, both components have evolved considerably through the exploitation of object regions, attributes, and relationships and the introduction of multi-modal connections, fully-attentive approaches, and BERT-like early-fusion strategies. However, despite the impressive results obtained, research in image captioning has not reached a conclusive answer yet. This work aims to provide a comprehensive overview and categorization of image captioning approaches, from visual encoding and text generation to training strategies, used datasets, and evaluation metrics. In this respect, we quantitatively compare many relevant state-of-the-art approaches to identify the most impactful technical innovations in image captioning architectures and training strategies. Moreover, many variants of the problem and its open challenges are analyzed and discussed. The final goal of this work is to serve as a tool for understanding the existing state of the art and highlighting future directions for an area of research where Computer Vision and Natural Language Processing can find an optimal synergy.
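The visual-encoding-plus-language-model pipeline described above can be sketched as follows: precomputed image region features feed a Transformer decoder that cross-attends to them while producing caption tokens. The dimensions, layer counts, and omission of a causal mask are simplifications for illustration, not any specific surveyed model.

```python
import torch
from torch import nn

class CaptionerSketch(nn.Module):
    """Minimal encoder-decoder captioning pipeline: a visual encoding step
    (here, projected region features) followed by a language model (a
    Transformer decoder). Illustrative dimensions; causal masking and
    beam-search decoding are omitted for brevity."""

    def __init__(self, vocab_size, region_dim=2048, d_model=512):
        super().__init__()
        self.visual_proj = nn.Linear(region_dim, d_model)  # encode regions
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerDecoderLayer(d_model, nhead=8, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=3)
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward(self, regions, caption_tokens):
        memory = self.visual_proj(regions)    # (B, num_regions, d_model)
        tgt = self.embed(caption_tokens)      # (B, seq_len, d_model)
        out = self.decoder(tgt, memory)       # cross-attention to regions
        return self.lm_head(out)              # next-token logits
```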
Deep Convolutional Neural Networks (CNNs) are a special type of Neural Network that has shown state-of-the-art results on various competitive benchmarks. The powerful learning ability of deep CNNs is largely achieved with the use of multiple non-linear feature extraction stages that can automatically learn hierarchical representations from the data. The availability of large amounts of data and improvements in hardware processing units have accelerated research in CNNs, and recently very interesting deep CNN architectures have been reported. The recent race in deep CNN architectures for achieving high performance on challenging benchmarks has shown that innovative architectural ideas, as well as parameter optimization, can improve CNN performance on various vision-related tasks. In this regard, different ideas in CNN design have been explored, such as the use of different activation and loss functions, parameter optimization, regularization, and restructuring of processing units. However, the major improvement in representational capacity has been achieved by restructuring the processing units. In particular, the idea of using a block as a structural unit instead of a layer is gaining substantial appreciation. This survey thus focuses on the intrinsic taxonomy present in recently reported CNN architectures and, consequently, classifies the recent innovations in CNN architectures into seven different categories. These seven categories are based on spatial exploitation, depth, multi-path, width, feature map exploitation, channel boosting, and attention. Additionally, it covers the elementary understanding of CNN components and sheds light on the current challenges and applications of CNNs.
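The block-as-structural-unit idea is easiest to see in a residual block, the canonical example: two convolutional layers wrapped with an identity skip connection and repeated as a unit to build very deep networks. The sketch below is a generic PyTorch rendition under assumed channel sizes, not any specific surveyed architecture.

```python
import torch
from torch import nn

class ResidualBlock(nn.Module):
    """A block (rather than a single layer) as the structural unit of a
    CNN: two 3x3 convolutions plus an identity skip connection, which
    eases gradient flow and lets blocks be stacked into deep networks."""

    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
        )

    def forward(self, x):
        return torch.relu(self.body(x) + x)  # skip connection, then nonlinearity
```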
Deep Convolutional Neural Networks have pushed the state of the art for semantic segmentation, provided that a large number of images together with pixel-wise annotations is available. Data collection is expensive, and transfer learning is one way to alleviate this cost. It reduces the amount of annotated data required for network training, but it does not eliminate this heavy annotation step. We propose a method of transfer learning without annotations on the target task for datasets with redundant content and distinct pixel distributions. Our method takes advantage of the approximate content alignment of the images between two datasets when the approximation error prevents the reuse of annotations from one dataset to another. Given the annotations for only one dataset, we train a first network in a supervised manner. This network autonomously learns to generate deep data representations relevant to the semantic segmentation. Then, given the images in the new dataset, we train a new network to generate a deep data representation that matches the one from the first network on the previous dataset. The training consists of a regression between feature maps and does not require any annotations on the new dataset. We show that this method reaches performance similar to a classic transfer learning on the PASCAL VOC dataset with synthetic transformations.
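The annotation-free training step described above amounts to a regression between feature maps; a minimal sketch of one such step follows, assuming the supervised network is frozen and both networks expose same-shaped deep features. The function name, layer choice, and plain MSE objective are assumptions for illustration, not the paper's verified setup.

```python
import torch
import torch.nn.functional as F

def feature_regression_step(new_net, frozen_net, new_img, old_img, optimizer):
    """One annotation-free training step: the new network's deep features
    on a new-dataset image are regressed onto the frozen, supervised
    network's features on the (approximately content-aligned) old image.
    Illustrative sketch; assumes both nets return same-shaped feature maps."""
    with torch.no_grad():
        target = frozen_net(old_img)     # reference representation (no grads)
    pred = new_net(new_img)              # representation to align
    loss = F.mse_loss(pred, target)      # regression between feature maps
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```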