Research suggests that providing specific and timely feedback to human tutors enhances their performance. However, such feedback is difficult to provide because assessing tutor performance with human evaluators is time-consuming. Large language models, such as the AI chatbot ChatGPT, hold potential for offering constructive feedback to tutors in practical settings. Nevertheless, the accuracy of AI-generated feedback remains uncertain, and little research has investigated the ability of models like ChatGPT to deliver effective feedback. In this work-in-progress, we evaluate 30 dialogues generated by GPT-4 in a tutor-student setting. We use two prompting approaches, zero-shot chain-of-thought and few-shot chain-of-thought, to identify specific components of effective praise based on five criteria, and we compare the results against human graders for accuracy. Our goal is to assess the extent to which GPT-4 can accurately identify each praise criterion. We found that the zero-shot and few-shot chain-of-thought approaches yield comparable results. GPT-4 performs moderately well in identifying instances when the tutor offers specific and immediate praise, but it underperforms in identifying the tutor's ability to deliver sincere praise, particularly in the zero-shot setting, where examples of sincere tutor praise statements were not provided. Future work will focus on enhancing prompt engineering, developing a more general tutoring rubric, and evaluating our method on real-life tutoring dialogues.
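To make the two prompting styles concrete, a minimal sketch follows. The criterion wording, the exemplar dialogue, and all strings are hypothetical stand-ins; the abstract does not reproduce the paper's actual rubric or few-shot exemplars.

```python
# Minimal sketch of the two prompting styles compared above. The
# "sincere praise" criterion text and the few-shot exemplar are
# illustrative placeholders, not the paper's rubric.

CRITERION = "The tutor's praise is sincere (earned, truthful, not exaggerated)."

ZERO_SHOT_COT = """You are grading a tutoring dialogue.
Criterion: {criterion}
Dialogue:
{dialogue}
Let's think step by step, then answer "yes" or "no"."""

FEW_SHOT_COT = """You are grading a tutoring dialogue.
Criterion: {criterion}

Example dialogue: Tutor: "Great job sticking with that problem until it worked!"
Reasoning: The praise names a specific, real effort, so it reads as earned.
Answer: yes

Dialogue:
{dialogue}
Let's think step by step, then answer "yes" or "no"."""

def build_prompt(dialogue: str, few_shot: bool = False) -> str:
    """Return a zero-shot or few-shot chain-of-thought grading prompt."""
    template = FEW_SHOT_COT if few_shot else ZERO_SHOT_COT
    return template.format(criterion=CRITERION, dialogue=dialogue)

if __name__ == "__main__":
    demo = 'Tutor: "Wow, you are so smart!" Student: "Thanks."'
    print(build_prompt(demo, few_shot=True))
```

The only difference between the two conditions is the worked exemplar with its reasoning trace, which is exactly the gap the abstract points to for the "sincere praise" criterion in the zero-shot case.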
Change blindness is a phenomenon in which an individual fails to notice alterations in a visual scene when a change occurs during a brief interruption or distraction. Understanding this phenomenon is especially important for techniques that rely on visual stimuli, such as Virtual Reality (VR) and Augmented Reality (AR). Previous research has primarily focused on 2D environments or conducted limited controlled experiments in 3D immersive environments. In this paper, we design and conduct two formal user experiments to investigate the effects of different visual attention-disrupting conditions (Flickering and Head-Turning) and object alteration conditions (Removal, Color Alteration, and Size Alteration) on change blindness detection in VR and AR environments. Our results reveal that participants detected changes more quickly and with a higher detection rate under Flickering than under Head-Turning. Furthermore, they spent less time detecting changes when an object disappeared than when its color or size changed. Additionally, we compare the results between the VR and AR environments.
Modern tourism in the 21st century faces numerous challenges. Among these, one of the biggest is the rapidly growing number of tourists visiting space-limited regions such as historical cities, museums, and bottlenecks like bridges. In this context, accurate prediction of tourism volume and tourism flow within a given area is critical for visitor management tasks such as sustainable treatment of the environment and prevention of overcrowding. Static flow control methods, such as conventional low-level controllers or limiting access to overcrowded venues, have not yet solved the problem. In this paper, we empirically evaluate the performance of state-of-the-art deep-learning methods such as RNNs, GNNs, and Transformers, as well as the classic statistical ARIMA method. The granular but limited data supplied by a tourism region is extended with exogenous data such as the geolocation trajectories of individual tourists, weather, and holidays. In the setting of visitor flow prediction with sparse data, we are thereby able to increase the accuracy of our predictions by incorporating modern input-feature handling and by mapping geolocation data onto discrete POI data.
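As a concrete illustration of the classical baseline, here is a minimal sketch of ARIMA-style forecasting with exogenous regressors using statsmodels' SARIMAX. The synthetic visitor counts, holiday flag, and temperature series are stand-ins for the proprietary tourism-region data the abstract describes, and the model order is arbitrary.

```python
# Sketch: ARIMA baseline with exogenous regressors (holidays, weather).
# All series are synthetic placeholders for the real tourism data.
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(0)
n = 200
holidays = (np.arange(n) % 7 == 6).astype(float)   # weekly "holiday" flag
weather = rng.normal(20, 5, n)                     # daily temperature proxy
visitors = 500 + 80 * holidays + 3 * weather + rng.normal(0, 20, n)

df = pd.DataFrame({"visitors": visitors, "holiday": holidays, "temp": weather})
train, test = df.iloc[:180], df.iloc[180:]

# Fit ARMA(1,1) on the training window with exogenous features.
model = SARIMAX(train["visitors"], exog=train[["holiday", "temp"]],
                order=(1, 0, 1)).fit(disp=False)

# Forecast the held-out window, supplying the future exogenous values.
forecast = model.forecast(steps=len(test), exog=test[["holiday", "temp"]])
print(forecast.head())
```

The deep-learning competitors (RNNs, GNNs, Transformers) replace the linear dynamics here but consume the same exogenous features, which is what makes the comparison in the abstract meaningful.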
Large Language Models (LLMs) have garnered significant attention due to their impressive performance on a variety of tasks. ChatGPT, developed by OpenAI, is a recent addition to the family of language models and has been called a disruptive technology by some, owing to its human-like text-generation capabilities. Although many anecdotal examples across the internet have probed ChatGPT's strengths and weaknesses, only a few systematic research studies exist. To contribute to the body of systematic research on ChatGPT, we evaluate its performance on abstractive summarization by means of automated metrics and blinded human reviewers. We also build automatic text classifiers to detect ChatGPT-generated summaries. We find that while text classification algorithms can distinguish between real and generated summaries, humans are unable to distinguish between real summaries and those produced by ChatGPT.
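The abstract does not specify which text classification algorithms were used, so the following is only a generic sketch of such a detector: TF-IDF features with logistic regression, trained on toy human-written and model-generated summaries.

```python
# Sketch of a "generated summary" detector. The classifier family and
# the four toy training examples are illustrative assumptions; the paper's
# actual classifiers and data are not reproduced here.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

human = ["the study finds a modest effect on recall",
         "results vary by cohort and follow-up length"]
generated = ["the study highlights a significant impact on memory performance",
             "overall, the findings underscore notable differences"]
texts = human + generated
labels = [0, 0, 1, 1]          # 0 = human-written, 1 = model-generated

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(texts, labels)
print(clf.predict(["the findings underscore a significant impact"]))
```

The abstract's finding, that such surface-feature classifiers succeed where blinded human reviewers fail, suggests the distinguishing signal is distributional rather than anything a reader consciously notices.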
In this paper, we develop a geometric framework for generating non-slip quadrupedal two-beat gaits. We consider a four-bar mechanism as a surrogate model for a contact state and develop geometric tools to aid in gait generation: a shape-change basis, the local connection as the matrix equation of motion, and stratified panels to model net locomotion, in line with previous work\cite{prasad2023contactswitch}. Standard two-beat gaits in quadrupedal systems, such as the trot, divide the shape space into two equal, decoupled subspaces. The subgaits generated in each subspace are designed independently and, when combined with appropriate phasing, produce a two-beat gait whose displacements add up due to the geometric nature of the system. By adding ``scaling" and ``sliding" control knobs to subgaits defined as flows over the shape-change basis, we continuously steer an arbitrary, planar quadrupedal system. The system exhibits translational anisotropy when modulated using the scaling inputs. To characterize the steering induced by the sliding inputs, we define an average path curvature function analytically and show that steering gaits can be generated within a geometric non-slip contact modeling framework.
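For readers unfamiliar with the geometric-mechanics machinery referenced here, the local connection enters in its standard role; this is the textbook form, and the paper's notation may differ:

\[
\xi \;=\; g^{-1}\dot{g} \;=\; -\mathbf{A}(r)\,\dot{r},
\]

where \(g\) is the body frame, \(r\) denotes the shape variables, and \(\mathbf{A}(r)\) is the local connection mapping shape velocity to body velocity. To leading order, the net displacement over a closed gait cycle \(\phi\) in shape space is governed by the area integral of the connection's curvature over the region enclosed by \(\phi\), which is why displacements from independently designed subgaits add under appropriate phasing, as the abstract states.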
We present H&E Otsu thresholding, a scheme for rapidly detecting tissue in whole-slide images (WSIs) that eliminates a wide range of undesirable artefacts such as pen marks and scanning artefacts. Our method obtains a bimodal representation of a low-magnification RGB overview image, which enables simple Otsu thresholding to separate tissue from background and artefacts. We demonstrate our method on WSIs prepared by a wide range of institutions and WSI digital scanners, each containing substantial artefacts that cause other methods to fail. The beauty of our approach lies in its simplicity: manipulating the RGB colour space and applying Otsu thresholding allows for the rapid removal of artefacts and segmentation of tissue.
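In practice the pipeline reduces to a few lines. The sketch below is a generic stand-in rather than the paper's exact colour-space manipulation: it uses HSV saturation as one possible near-bimodal channel that separates stained tissue from the bright background.

```python
# Sketch of the tissue-detection idea: derive a (near-)bimodal scalar
# image from a low-magnification RGB overview, then apply Otsu's
# threshold. The saturation channel is an assumed stand-in for the
# paper's RGB manipulation.
import numpy as np
from skimage.color import rgb2hsv
from skimage.filters import threshold_otsu

def tissue_mask(rgb_overview: np.ndarray) -> np.ndarray:
    """rgb_overview: HxWx3 float array in [0, 1] at low magnification."""
    saturation = rgb2hsv(rgb_overview)[..., 1]   # stained tissue is saturated
    t = threshold_otsu(saturation)               # split the bimodal histogram
    return saturation > t                        # True where tissue is likely

# Toy example: a faint pink "tissue" square on a white background.
img = np.ones((64, 64, 3))
img[16:48, 16:48] = [0.9, 0.6, 0.7]
print(tissue_mask(img).sum(), "tissue pixels detected")
```

Because pen marks and scanner artefacts tend to fall on the background side of whichever channel is chosen, a well-designed bimodal representation lets a single global threshold do the work of more elaborate segmentation models.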
The Kadison-Singer Conjecture, as proved by Marcus, Spielman, and Srivastava (MSS) [Ann. Math. 182, 327-350 (2015)], has been informally thought of as a strengthening of Batson, Spielman, and Srivastava's theorem that every undirected graph has a linear-sized spectral sparsifier [SICOMP 41, 1704-1721 (2012)]. We formalize this intuition by using a corollary of the MSS result to derive the existence of spectral sparsifiers with a number of edges linear in the number of vertices for all undirected, weighted graphs. The proof consists of two steps. First, following a suggestion of Srivastava [Asia Pac. Math. Newsl. 3, 15-20 (2013)], we show the result in the special case of graphs with bounded leverage scores by repeatedly applying the MSS corollary to partition the graph, while maintaining an appropriate bound on the leverage scores of each subgraph. Then, we extend to the general case by constructing a recursive algorithm that repeatedly (i) divides edges with high leverage scores into multiple parallel edges and (ii) uses the bounded leverage score case to sparsify the resulting graph.
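For reference, the objects in play can be stated precisely; these are the standard definitions rather than anything specific to this paper's proof. A reweighted subgraph \(H\) of \(G\) on the same \(n\) vertices is an \(\varepsilon\)-spectral sparsifier of \(G\) if

\[
(1-\varepsilon)\, x^{\top} L_G\, x \;\le\; x^{\top} L_H\, x \;\le\; (1+\varepsilon)\, x^{\top} L_G\, x
\qquad \text{for all } x \in \mathbb{R}^{n},
\]

where \(L_G\) is the graph Laplacian; the BSS-type guarantee asks for such an \(H\) with \(O(n/\varepsilon^2)\) edges. The leverage score of an edge \(e\) with weight \(w_e\) and incidence vector \(b_e\) is \(\tau_e = w_e\, b_e^{\top} L_G^{+} b_e\), and splitting an edge into \(k\) parallel copies of weight \(w_e/k\) divides its leverage score by \(k\), which is what makes step (i) of the recursion effective.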
Recently, ChatGPT, along with DALL-E 2 and Codex, has been gaining significant attention from society. As a result, many individuals have become interested in related resources and seek to uncover the background and secrets behind its impressive performance. ChatGPT and other Generative AI (GAI) techniques belong to the category of Artificial Intelligence Generated Content (AIGC), which involves the creation of digital content, such as images, music, and natural language, through AI models. The goal of AIGC is to make the content creation process more efficient and accessible, allowing for the production of high-quality content at a faster pace. AIGC is achieved by extracting and understanding intent information from instructions provided by humans and generating content according to the model's knowledge and the intent information. In recent years, large-scale models have become increasingly important in AIGC as they provide better intent extraction and thus improved generation results. With the growth of data and model size, the distribution that a model can learn becomes more comprehensive and closer to reality, leading to more realistic and higher-quality content generation. This survey provides a comprehensive review of the history of generative models and their basic components, as well as recent advances in AIGC from the perspectives of unimodal and multimodal interaction. From the perspective of unimodality, we introduce the generation tasks and relevant models for text and images. From the perspective of multimodality, we introduce cross-applications between these modalities. Finally, we discuss the existing open problems and future challenges in AIGC.
Deep neural networks (DNNs) are successful in many computer vision tasks. However, the most accurate DNNs require millions of parameters and operations, making them energy-, computation-, and memory-intensive. This impedes the deployment of large DNNs in low-power devices with limited compute resources. Recent research improves DNN models by reducing the memory requirements, energy consumption, and number of operations without significantly decreasing accuracy. This paper surveys the progress of low-power deep learning and computer vision, specifically with regard to inference, and discusses methods for compacting and accelerating DNN models. The techniques can be divided into four major categories: (1) parameter quantization and pruning, (2) compressed convolutional filters and matrix factorization, (3) network architecture search, and (4) knowledge distillation. We analyze the accuracy, advantages, disadvantages, and potential solutions to the problems with the techniques in each category. We also discuss new evaluation metrics as a guideline for future research.
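As a concrete taste of category (1), here is a minimal magnitude-pruning sketch using PyTorch's built-in pruning utilities on a toy layer; it illustrates the general recipe rather than any specific method from the surveyed literature.

```python
# Magnitude-based unstructured pruning: zero out the 50% of weights
# with the smallest L1 magnitude in a toy linear layer.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

layer = nn.Linear(128, 64)
prune.l1_unstructured(layer, name="weight", amount=0.5)

# Half the weights are now exactly zero, shrinking the effective model.
print("sparsity:", float((layer.weight == 0).float().mean()))

# Fold the pruning mask into the weight tensor permanently.
prune.remove(layer, "weight")
```

Quantization follows an analogous recipe (replacing float32 weights with low-bit integers), and both typically require a short fine-tuning pass to recover the small accuracy drop the abstract alludes to.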
Deep convolutional neural networks (CNNs) have recently achieved great success in many visual recognition tasks. However, existing deep neural network models are computationally expensive and memory-intensive, hindering their deployment in devices with low memory resources or in applications with strict latency requirements. A natural approach, therefore, is to perform model compression and acceleration in deep networks without significantly decreasing model performance. Tremendous progress has been made in this area during the past few years. In this paper, we survey recently developed techniques for compacting and accelerating CNN models. These techniques are roughly categorized into four schemes: parameter pruning and sharing, low-rank factorization, transferred/compact convolutional filters, and knowledge distillation. Methods of parameter pruning and sharing are described first, followed by the other techniques. For each scheme, we provide insightful analysis regarding performance, related applications, advantages, and drawbacks. We then cover a few recent additional successful methods, for example, dynamic capacity networks and stochastic depth networks. After that, we survey the evaluation metrics, the main datasets used for evaluating model performance, and recent benchmarking efforts. Finally, we conclude the paper and discuss remaining challenges and possible directions on this topic.
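To illustrate the last of the four schemes, a minimal knowledge-distillation loss is sketched below; the temperature, weighting, and toy tensors are illustrative defaults, not values from any surveyed paper.

```python
# Knowledge distillation: the student matches the teacher's
# temperature-softened output distribution plus the hard labels.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      T=4.0, alpha=0.5):
    # Soft target term: KL between softened student and teacher outputs,
    # scaled by T^2 to keep gradient magnitudes comparable.
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    F.softmax(teacher_logits / T, dim=1),
                    reduction="batchmean") * (T * T)
    # Hard target term: ordinary cross-entropy on the true labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

student = torch.randn(8, 10, requires_grad=True)   # toy student outputs
teacher = torch.randn(8, 10)                       # toy teacher outputs
labels = torch.randint(0, 10, (8,))
print(distillation_loss(student, teacher, labels).item())
```

The compression comes from the student being a much smaller network than the teacher; the softened targets carry inter-class similarity information that hard labels alone do not.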
Machine learning techniques have become deeply rooted in our everyday lives. However, since pursuing good learning performance is knowledge- and labor-intensive, human experts are heavily involved in every aspect of machine learning. To make machine learning techniques easier to apply and to reduce the demand for experienced human experts, automated machine learning (AutoML) has emerged as a hot topic of both industrial and academic interest. In this paper, we provide an up-to-date survey on AutoML. First, we introduce and define the AutoML problem, drawing inspiration from both automation and machine learning. Then, we propose a general AutoML framework that not only covers most existing approaches to date but can also guide the design of new methods. Subsequently, we categorize and review the existing works from two aspects, i.e., the problem setup and the employed techniques. Finally, we provide a detailed analysis of AutoML approaches and explain the reasons behind their successful applications. We hope this survey can serve not only as an insightful guideline for AutoML beginners but also as an inspiration for future research.
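The simplest instance of the AutoML problem such a survey formalizes is automated hyperparameter search; a minimal sketch with scikit-learn's randomized search follows, with the search space, model, and dataset chosen purely for illustration.

```python
# Automated hyperparameter search: randomized search over a small
# configuration space, replacing the manual tuning an expert would do.
from scipy.stats import randint
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

X, y = make_classification(n_samples=300, n_features=20, random_state=0)

search = RandomizedSearchCV(
    RandomForestClassifier(random_state=0),
    param_distributions={"n_estimators": randint(10, 200),
                         "max_depth": randint(2, 16)},
    n_iter=10, cv=3, random_state=0)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```

Full AutoML systems of the kind the survey covers extend this loop from hyperparameters to feature engineering, model selection, and architecture search, usually with smarter search strategies than uniform random sampling.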