The authors of this study aim to assess the capabilities of OpenAI's ChatGPT, both to understand how effective such a system might be for students to use in their studies and to deepen understanding of faculty/staff and student perceptions of ChatGPT in general. What is learned from the study informs the continued design of the DANCE model (Designing Adaptations for the Next Changes in Education), a model to help faculty become adept at embracing change. In this study, the model is used to help faculty examine the impact that a disruptive new tool such as ChatGPT can have on the learning environment. The authors analyzed the performance of ChatGPT on course assignments at a variety of levels, completed by novice engineering students working as research assistants. The faculty who created those assignments then assessed the completed work to understand how it compares with the performance of a typical student. The authors also discuss a set of surveys, conducted in March 2023, in which student, faculty, and staff respondents addressed their perceptions of ChatGPT (a follow-up survey is being administered now, in February 2024). The survey data were analyzed and visualized to highlight relevant findings. This work reports the researchers' findings to share the current state of this effort at Texas A&M University and to provide insights to scholars both at our own institution and around the world. It is not intended as a finished work; the findings are reported with full transparency that the research is ongoing as the researchers gather new data and develop and validate measurement instruments.
Generative AI tools exemplified by ChatGPT are becoming a new reality. This study is motivated by the premise that ``AI-generated content may exhibit distinctive behavior that can be separated from scientific articles''. In this study, we show how articles can be generated by means of prompt engineering for various diseases and conditions. We then describe how we tested this premise in two phases and demonstrate its validity. Subsequently, we introduce xFakeSci, a novel learning algorithm that is capable of distinguishing ChatGPT-generated articles from publications produced by scientists. The algorithm is trained using network models derived from both sources. The classification step was performed using 300 articles per condition, and the labeling step was carried out against an equal mix of 50 generated articles and 50 authentic PubMed abstracts. The testing spanned publication periods from 2010 to 2024 and encompassed research on three distinct diseases: cancer, depression, and Alzheimer's. Further, we evaluated the accuracy of the xFakeSci algorithm against classical data mining algorithms (e.g., Support Vector Machines, Regression, and Naive Bayes). The xFakeSci algorithm achieved F1 scores ranging from 80% to 94%, outperforming the common data mining algorithms, which scored F1 values between 38% and 52%. We attribute this noticeable difference to the introduction of calibration and a proximity distance heuristic, which underpin the promising performance. Predicting fake science generated by ChatGPT remains a considerable challenge; nonetheless, the introduction of the xFakeSci algorithm is a significant step toward combating fake science.
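The comparison above pits xFakeSci against classical baselines on a balanced mix of generated and authentic abstracts. The following is a minimal sketch of such a baseline comparison, assuming TF-IDF features and scikit-learn classifiers; the xFakeSci algorithm itself (network models, calibration, proximity distance heuristic) is not reproduced here, and the input lists are assumed to be provided by the reader.

```python
# Sketch of the classical-baseline comparison: train SVM, logistic regression,
# and naive Bayes on TF-IDF features and report F1 on a held-out balanced split.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import LinearSVC

def baseline_f1(generated_texts, authentic_texts):
    texts = generated_texts + authentic_texts
    labels = [1] * len(generated_texts) + [0] * len(authentic_texts)
    X_train, X_test, y_train, y_test = train_test_split(
        texts, labels, test_size=0.5, stratify=labels, random_state=0)
    vec = TfidfVectorizer(stop_words="english")
    Xtr, Xte = vec.fit_transform(X_train), vec.transform(X_test)
    scores = {}
    for name, clf in [("svm", LinearSVC()),
                      ("logreg", LogisticRegression(max_iter=1000)),
                      ("naive_bayes", MultinomialNB())]:
        clf.fit(Xtr, y_train)
        scores[name] = f1_score(y_test, clf.predict(Xte))
    return scores
```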
Generative artificial intelligence (AI) and large language models (LLMs) are increasingly being used in the academic writing process. This is despite the current lack of a unified framework for reporting the use of machine assistance. In this work, we propose "Cardwriter", an intuitive interface that produces a short report for authors to declare their use of generative AI in their writing process. The demo is available online at //cardwriter.vercel.app
Increasing diversity in educational settings is challenging in part due to the lack of access to resources for non-traditional learners in remote communities. Post-pandemic platforms designed specifically for remote and hybrid learning -- supporting team-based collaboration online -- are positioned to bridge this gap. Our work combines the use of these new platforms with co-creation and collaboration tools for AI-assisted remote Work-Integrated-Learning (WIL) opportunities, including efforts in the community and with the public library system. This paper outlines some of our experiences to date and proposes methods to further integrate AI education into community-driven applications for remote WIL.
The use of Potential-Based Reward Shaping (PBRS) has shown great promise in the ongoing research effort to tackle sample inefficiency in Reinforcement Learning (RL). However, the choice of the potential function is critical for this technique to be effective. Additionally, RL techniques are usually constrained to a finite horizon due to computational limitations. This introduces a bias when using PBRS, adding a further layer of complexity. In this paper, we leverage abstractions to automatically produce a "good" potential function. We analyse the bias induced by finite horizons in the context of PBRS, producing novel insights. Finally, to assess sample efficiency and performance impact, we evaluate our approach on four environments, including a goal-oriented navigation task and three Arcade Learning Environment (ALE) games, demonstrating that we can reach the same level of performance as CNN-based solutions with a simple fully-connected network.
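For readers unfamiliar with PBRS, the sketch below shows the standard shaping rule, where the shaped reward is r + γΦ(s') - Φ(s). The potential function `phi` is a placeholder for the abstraction-derived potential described above, which is not reproduced here; zeroing the potential at episode termination is a common convention and is related to the horizon-induced bias the abstract discusses.

```python
# Minimal sketch of potential-based reward shaping for a single transition.
def shape_reward(reward, state, next_state, done, phi, gamma=0.99):
    """Return the PBRS-shaped reward r + gamma * phi(s') - phi(s)."""
    # Common convention: treat the potential of a terminal state as zero.
    next_potential = 0.0 if done else phi(next_state)
    return reward + gamma * next_potential - phi(state)
```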
The presence of political misinformation and ideological echo chambers on social media platforms is concerning given the important role that these sites play in the public's exposure to news and current events. Algorithmic systems employed on these platforms are presumed to play a role in these phenomena, but little is known about their mechanisms and effects. In this work, we conduct an algorithmic audit of Twitter's Who-To-Follow friend recommendation system, the first empirical audit that investigates the impact of this algorithm in situ. We create automated Twitter accounts that initially follow left- and right-affiliated U.S. politicians during the 2022 U.S. midterm elections and then grow their information networks using the platform's recommender system. We pair the experiment with an observational study of Twitter users who already follow the same politicians. Broadly, we find that while following the recommendation algorithm leads accounts into dense and reciprocal neighborhoods that structurally resemble echo chambers, the recommender also results in less political homogeneity of a user's network compared to accounts growing their networks through social endorsement. Furthermore, accounts that exclusively followed users recommended by the algorithm had fewer opportunities to encounter content centered on false or misleading election narratives compared to choosing friends based on social endorsement.
The field of DevOps security education necessitates innovative approaches to effectively address the ever-evolving challenges of cybersecurity. Adopting a student-centered approach requires the design and development of a comprehensive set of hands-on learning modules. In this paper, we introduce hands-on learning modules that enable learners to become familiar with identifying known security weaknesses, using taint tracking to accurately pinpoint vulnerable code. To cultivate an engaging and motivating learning environment, each module includes pre-lab, hands-on, and post-lab sections. These provide an introduction to a specific DevOps topic and the software security problem at hand, followed by practice on real-world code examples containing security issues, which learners detect using tools. Initial evaluation results from a number of courses across multiple schools show that the hands-on modules enhance student interest in software security and cybersecurity while preparing them to address DevOps security vulnerabilities.
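To illustrate the kind of finding taint tracking surfaces, here is a minimal, hypothetical example of the pattern such modules target: untrusted input flowing from a source to a sensitive sink. It is not drawn from the paper's actual lab material; the first function is deliberately vulnerable for illustration, and the second shows the fix that breaks the taint flow.

```python
# Sketch of a taint-tracking finding: user-controlled input reaching a SQL sink.
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # Tainted: `username` comes from the user and reaches the query unsanitized
    # (SQL injection), which a taint tracker would flag at this sink.
    return conn.execute(
        "SELECT * FROM users WHERE name = '%s'" % username).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # Parameterized query: the input is bound as data, removing the taint flow.
    return conn.execute(
        "SELECT * FROM users WHERE name = ?", (username,)).fetchall()
```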
We investigate the role of uncertainty in decision-making problems with natural language as input. For such tasks, using Large Language Models as agents has become the norm. However, none of the recent approaches employ any additional phase for estimating the uncertainty the agent has about the world during the decision-making task. We focus on a fundamental decision-making framework with natural language as input, namely contextual bandits, where the context information consists of text. As a representative of the approaches with no uncertainty estimation, we consider an LLM bandit with a greedy policy, which picks the action corresponding to the largest predicted reward. We compare this baseline to LLM bandits that make active use of uncertainty estimation by integrating the uncertainty into a Thompson Sampling policy. We employ different techniques for uncertainty estimation, such as Laplace Approximation, Dropout, and Epinets. We empirically show on real-world data that the greedy policy performs worse than the Thompson Sampling policies. These findings suggest that, while overlooked in the LLM literature, uncertainty plays a fundamental role in bandit tasks with LLMs.
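The contrast between the two policy types can be sketched as follows. This is a minimal illustration, not the paper's implementation: `predict_reward_samples(context, action, n)` is a hypothetical helper that returns stochastic reward predictions for an LLM-based reward model, e.g. via MC dropout or another of the uncertainty techniques mentioned above.

```python
# Greedy vs. Thompson Sampling action selection for a text-context bandit.
import numpy as np

def greedy_action(context, actions, predict_reward_samples):
    # Point estimate only: average samples per arm and pick the best mean.
    means = [np.mean(predict_reward_samples(context, a, n=16)) for a in actions]
    return int(np.argmax(means))

def thompson_action(context, actions, predict_reward_samples):
    # One posterior draw per arm: predictive uncertainty drives exploration.
    draws = [predict_reward_samples(context, a, n=1)[0] for a in actions]
    return int(np.argmax(draws))
```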
Graph Neural Networks (GNNs) have gained significant attention owing to their ability to handle graph-structured data and their success in practical applications. However, many of these models prioritize high utility performance, such as accuracy, with little consideration for privacy, which is a major concern in modern society where privacy attacks are rampant. To address this issue, researchers have started to develop privacy-preserving GNNs. Despite this progress, there is a lack of a comprehensive overview of the attacks and the techniques for preserving privacy in the graph domain. In this survey, we aim to address this gap by summarizing the attacks on graph data according to the targeted information, categorizing the privacy preservation techniques in GNNs, and reviewing the datasets and applications that could be used for analyzing and solving privacy issues in GNNs. We also outline potential directions for future research in order to build better privacy-preserving GNNs.
Alongside developments in the research field of artificial intelligence, knowledge graphs (KGs) have attracted a surge of interest from both academia and industry. As a representation of semantic relations between entities, KGs have proven to be particularly relevant for natural language processing (NLP), experiencing a rapid spread and wide adoption within recent years. Given the increasing amount of research work in this area, several KG-related approaches have been surveyed in the NLP research community. However, a comprehensive study that categorizes established topics and reviews the maturity of individual research streams remains absent to this day. Contributing to closing this gap, we systematically analyzed 507 papers from the literature on KGs in NLP. Our survey encompasses a multifaceted review of tasks, research types, and contributions. As a result, we present a structured overview of the research landscape, provide a taxonomy of tasks, summarize our findings, and highlight directions for future work.
This work considers the question of how convenient access to copious data impacts our ability to learn causal effects and relations. In what ways is learning causality in the era of big data different from -- or the same as -- the traditional one? To answer this question, this survey provides a comprehensive and structured review of both traditional and frontier methods in learning causality and relations along with the connections between causality and machine learning. This work points out on a case-by-case basis how big data facilitates, complicates, or motivates each approach.