Since the inception of human research studies, researchers have needed to interact with participants on a set schedule to collect data. While some of this work is automated, most is not, which costs researchers both time and money. Participant-provided data are usually collected through surveys administered by telephone or email. While these methods are simple, they are tedious for survey administrators, inviting fatigue and potentially leading to collection mistakes. One response to this problem was the creation of "chatbots". Early efforts relied mostly on rule-based tactics (e.g., ELIZA), which were suitable for uniform input. However, as the complexity of interactions increases, rule-based systems break down, since users can express the same intention in many different ways. This is especially true when tracking states within a research study (or protocol). Recently, natural language processing (NLP) models and, subsequently, virtual assistants have become increasingly sophisticated in communicating with users, with examples ranging from research studies to commercial health products. This project leverages recent advances in conversational artificial intelligence (AI), speech-to-text, natural language understanding (NLU), and finite-state machines to automate protocols, specifically in research settings. Such an application must be generalized, fully customizable, and independent of any particular research study; these properties allow new research protocols to be created quickly once they are envisioned. With this in mind, I present SmartState, a fully customizable, state-driven protocol manager combined with supporting AI components that autonomously manages user data and intelligently determines users' intentions through chat and end-device interactions to drive protocols.
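To make the state-driven idea concrete, here is a minimal sketch of how a finite-state machine might drive a study protocol. The `Protocol` class, state names, and intent labels are illustrative assumptions for this sketch, not SmartState's actual API.

```python
# Minimal sketch of a state-driven protocol manager in the spirit of
# SmartState. All names (Protocol, states, intents) are illustrative
# assumptions, not the actual SmartState API.
from dataclasses import dataclass, field

@dataclass
class Protocol:
    """A tiny finite-state machine: states plus intent-driven transitions."""
    state: str = "awaiting_consent"
    # transitions[state][intent] -> next state
    transitions: dict = field(default_factory=lambda: {
        "awaiting_consent": {"agree": "daily_survey", "decline": "closed"},
        "daily_survey": {"submit_answers": "daily_survey",
                         "withdraw": "closed"},
    })

    def handle(self, intent: str) -> str:
        """Advance the protocol given an intent recognized by the NLU layer."""
        next_state = self.transitions.get(self.state, {}).get(intent)
        if next_state is None:
            return f"Unrecognized intent '{intent}' in state '{self.state}'."
        self.state = next_state
        return f"Moved to state '{self.state}'."

p = Protocol()
print(p.handle("agree"))           # Moved to state 'daily_survey'
print(p.handle("submit_answers"))  # Moved to state 'daily_survey'
```

In such a design, the NLU layer only needs to map free-form user messages to a small set of intents; the state machine then keeps the protocol's bookkeeping deterministic and auditable.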
Autonomous driving systems have developed significantly in recent years thanks to advances in machine-learning-enabled sensing and decision-making algorithms. One critical challenge to their large-scale deployment in the real world is safety evaluation. Most existing driving systems are still trained and evaluated either on naturalistic scenarios collected from daily life or on heuristically generated adversarial ones. However, because the vehicle population at large collides only rarely, safety-critical scenarios are scarce in collected real-world data. Methods that artificially generate such scenarios therefore become crucial for measuring risk and reducing cost. In this survey, we focus on algorithms for safety-critical scenario generation in autonomous driving. We first provide a comprehensive taxonomy of existing algorithms, dividing them into three categories: data-driven generation, adversarial generation, and knowledge-based generation. We then discuss useful tools for scenario generation, including simulation platforms and packages. Finally, we extend the discussion to five main challenges facing current work (fidelity, efficiency, diversity, transferability, and controllability) and the research opportunities these challenges open up.
Vertical bar, horizontal bar, dot, scatter, and line plots provide a diverse set of visualizations for representing data. To understand these plots, one must be able to recognize textual components, locate data points within a plot, and process diverse visual contexts to extract information. In recent works such as Pix2Struct, MatCha, and DePlot, OCR-free chart-to-text translation has achieved state-of-the-art results on visual language tasks. These results underline the importance of chart derendering as a pre-training objective, yet existing datasets provide only a fixed set of training examples. In this paper, we propose GenPlot, a plot generator that can produce billions of additional plots for chart derendering using synthetic data.
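As a rough illustration of the approach, the sketch below renders random charts together with their ground-truth data tables, the kind of (image, table) pairs a chart-derendering objective consumes. The chart types, value ranges, and file layout are assumptions for this sketch, not GenPlot's actual configuration.

```python
# Minimal sketch of synthetic plot generation for chart-derendering
# pre-training, in the spirit of GenPlot. Chart types and ranges are
# illustrative assumptions, not the paper's exact setup.
import json
import random
import matplotlib
matplotlib.use("Agg")  # render off-screen
import matplotlib.pyplot as plt

def generate_plot(idx: int) -> dict:
    """Render one random chart and return its ground-truth data table."""
    kind = random.choice(["vbar", "hbar", "scatter", "line"])
    labels = [f"c{i}" for i in range(random.randint(3, 8))]
    values = [round(random.uniform(0, 100), 1) for _ in labels]

    fig, ax = plt.subplots()
    if kind == "vbar":
        ax.bar(labels, values)
    elif kind == "hbar":
        ax.barh(labels, values)
    elif kind == "scatter":
        ax.scatter(range(len(values)), values)
    else:
        ax.plot(range(len(values)), values)
    fig.savefig(f"plot_{idx}.png")
    plt.close(fig)
    # The (image, table) pair is one chart-derendering training example.
    return {"image": f"plot_{idx}.png", "labels": labels, "values": values}

with open("annotations.jsonl", "w") as f:
    for i in range(10):  # scale this loop up for arbitrarily many plots
        f.write(json.dumps(generate_plot(i)) + "\n")
```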
Recently, the advent of pre-trained large language models (LLMs) like ChatGPT and GPT-4 has significantly advanced machines' natural language understanding capabilities. This breakthrough allows us to seamlessly integrate open-source LLMs into a unified robot simulator environment, helping robots accurately understand and execute human natural-language instructions. To this end, we introduce a realistic robotic manipulation simulator and build a Robotic Manipulation with Progressive Reasoning Tasks (RM-PRT) benchmark on top of it. Specifically, the RM-PRT benchmark builds a new high-fidelity digital-twin scene based on Unreal Engine 5, which includes 782 categories, 2,023 objects, and 15K natural language instructions generated by ChatGPT for a detailed evaluation of robot manipulation. We propose a general pipeline for the RM-PRT benchmark that takes as input multimodal prompts containing natural language instructions and automatically outputs actions specifying movement and position transitions. We define four natural language understanding tasks with progressive reasoning levels and evaluate the robot's ability to understand natural language instructions in two modes, adsorption and grasping. In addition, we conduct a comprehensive analysis and comparison of the differences and advantages of 10 different LLMs in instruction understanding and generation quality. We hope the new simulator and benchmark will facilitate future research on language-guided robotic manipulation. Project website: //necolizer.github.io/RM-PRT/
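To illustrate the shape of such a pipeline, here is a hedged sketch of the input/output interface described above; the types and the `plan_action` stub are assumptions for illustration, not the benchmark's actual code.

```python
# Illustrative sketch of the kind of pipeline the RM-PRT benchmark
# describes: a multimodal prompt (instruction + observation) in, an
# action (movement and position transition) out. All types and the
# plan_action stub are assumptions, not the benchmark's actual code.
from dataclasses import dataclass
from typing import Tuple

@dataclass
class MultimodalPrompt:
    instruction: str        # natural language, e.g. generated by ChatGPT
    rgb_observation: bytes  # rendered scene image from the simulator

@dataclass
class Action:
    mode: str                                    # "adsorption" or "grasping"
    target_position: Tuple[float, float, float]  # where to move the gripper

def plan_action(prompt: MultimodalPrompt) -> Action:
    """Stand-in for the LLM + perception stack: parse the instruction,
    ground it in the observation, and emit a position transition."""
    mode = "grasping" if "grasp" in prompt.instruction.lower() else "adsorption"
    return Action(mode=mode, target_position=(0.4, 0.1, 0.9))

act = plan_action(MultimodalPrompt("Grasp the red mug.", rgb_observation=b""))
print(act.mode, act.target_position)
```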
With the rise in popularity of digital atlases for communicating spatial variation, there is an increasing need for robust small-area estimates. However, current small-area estimation methods suffer from various modelling problems when data are very sparse or when estimates are required for areas with very small populations, and these issues are particularly acute when modelling proportions. Additionally, recent work has shown significant benefits in modelling at both the individual and area levels. We propose a two-stage Bayesian hierarchical small-area estimation model for proportions that can account for survey design; use both individual-level survey-only covariates and area-level census covariates; reduce the instability of direct estimates; and generate prevalence estimates for small areas with no survey data. Using a simulation study, we show that, compared with existing Bayesian small-area estimation methods, our model can provide optimal predictive performance (in terms of Bayesian mean relative root mean squared error, mean absolute relative bias, and coverage) of proportions under a variety of data conditions, including very sparse and unstable data. To assess the model in practice, we compare modelled estimates of current smoking prevalence for 1,630 small areas in Australia, using 2017-2018 National Health Survey data combined with 2016 census data.
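A hedged sketch of what such a two-stage structure might look like; the notation below is illustrative, not the paper's exact specification.

```latex
% Stage 1 (individual level): a survey-weighted logistic model relating
% the binary outcome of person j in area i to survey-only covariates.
\begin{equation*}
  y_{ij} \sim \mathrm{Bernoulli}(p_{ij}), \qquad
  \operatorname{logit}(p_{ij}) = \mathbf{x}_{ij}^{\top}\boldsymbol{\beta},
\end{equation*}
% yielding design-based area summaries \hat{\theta}_i with sampling
% variances \hat{\sigma}_i^{2}.
% Stage 2 (area level): smooth those summaries with area-level census
% covariates and random effects, so areas with no survey data borrow
% strength from neighbours and the covariate model.
\begin{equation*}
  \hat{\theta}_i \sim N(\theta_i, \hat{\sigma}_i^{2}), \qquad
  \operatorname{logit}(\theta_i) = \mathbf{z}_i^{\top}\boldsymbol{\gamma} + u_i,
  \qquad u_i \sim N(0, \sigma_u^{2}).
\end{equation*}
```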
Blockchain-Enabled Federated Learning (BCFL), an innovative approach that combines the advantages of federated learning and blockchain technology, has recently been receiving great attention. Federated learning (FL) allows multiple participants to jointly train machine learning models in a decentralized manner while maintaining data privacy and security. This paper proposes a reference architecture for BCFL that enables multiple entities to collaboratively train models under these guarantees. A critical component of this architecture is a decentralized identifier (DID)-based access system. DIDs introduce a decentralized, self-sovereign identity (ID) management scheme that lets participants manage their IDs independently of central authorities. Within the proposed architecture, participants authenticate and gain access to the federated learning platform via their DIDs, which are securely stored on the blockchain, while the access system administers access control and permissions through the execution of smart contracts, further enhancing the security and decentralization of the system. By integrating blockchain-enabled federated learning with a DID access system, this approach offers a robust solution for collaborative machine learning in a distributed and secure manner: participants can contribute to global model training while maintaining data privacy and identity control, without needing to share local data. The source code will be made available to the public soon.
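To make the access flow concrete, here is a minimal sketch of the DID-based check such a smart contract would enforce, written as plain Python rather than on-chain code; the ledger layout, DID strings, and permission names are all assumptions of this sketch.

```python
# Minimal off-chain simulation of a DID-based access system for BCFL.
# The ledger dict stands in for DID documents stored on the blockchain;
# all names and permissions are illustrative assumptions.
ledger = {}

def register_did(did: str, public_key: str, role: str) -> None:
    """Record a participant's DID document on the (simulated) ledger."""
    ledger[did] = {"public_key": public_key, "role": role}

def authorize(did: str, action: str) -> bool:
    """What the access-control smart contract would enforce: the DID must
    exist on-chain and its role must permit the requested action."""
    permissions = {"trainer": {"submit_update", "fetch_global_model"},
                   "aggregator": {"aggregate", "fetch_global_model"}}
    doc = ledger.get(did)
    return doc is not None and action in permissions.get(doc["role"], set())

register_did("did:example:alice", public_key="0xA1", role="trainer")
print(authorize("did:example:alice", "submit_update"))  # True
print(authorize("did:example:alice", "aggregate"))      # False
```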
Technical debt is a well-known challenge in software development, and its negative impact on software quality, maintainability, and performance is widely recognized. In recent years, artificial intelligence (AI) has emerged as a promising approach for managing technical debt. This paper presents a comprehensive literature review of existing research on the use of AI-powered tools for technical debt avoidance in software development. We analyzed 15 related research papers covering various AI-powered techniques, such as code analysis and review, automated testing, code refactoring, predictive maintenance, code generation, and code documentation, and explored their effectiveness in addressing technical debt. The review also discusses the benefits and challenges of using AI for technical debt management, provides insights into the current state of research, and highlights gaps and opportunities for future work. The findings suggest that AI has the potential to significantly improve technical debt management, and that existing research offers valuable insights into how AI can be leveraged to address technical debt effectively and efficiently. However, the review also highlights several challenges and limitations of current approaches, such as the need for high-quality data and for attention to ethical considerations, and underscores the importance of further research to address these issues. The paper provides a comprehensive overview of the current state of research on AI for technical debt avoidance and offers practical guidance for software development teams seeking to leverage AI in their development processes to mitigate technical debt effectively.
Accurate price predictions are essential for market participants to optimize their operational schedules and bidding strategies, especially in the current context where electricity prices have become more volatile and less predictable with classical approaches. The locational marginal pricing (LMP) mechanism is used in many modern power markets, where prices are traditionally computed with optimal power flow (OPF) solvers. However, for large electricity grids this process becomes prohibitively time-consuming and computationally intensive. Machine learning could provide an efficient alternative for LMP prediction, especially in energy markets with intermittent sources like renewable energy. This study evaluates the performance of popular machine learning and deep learning models in predicting LMP on multiple electricity grids; the accuracy and robustness of these models are assessed across multiple scenarios. The results show that machine learning models can predict LMP 4-5 orders of magnitude faster than traditional OPF solvers, with a 5-6% error rate, highlighting their potential for LMP prediction in large-scale power systems with the help of hardware such as multi-core CPUs and GPUs in modern HPC clusters.
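As a rough illustration of the ML-based alternative, the sketch below fits a regressor to synthetic grid features against stand-in OPF-derived prices; the feature names and data are assumptions, chosen only to show that inference reduces to a single model call rather than a full OPF solve.

```python
# Hedged sketch of ML-based LMP prediction: train a regressor on grid
# features (loads, renewable output) against OPF-derived LMPs. The
# features and synthetic targets are illustrative assumptions.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
X = np.column_stack([
    rng.uniform(50, 150, n),  # nodal load (MW)
    rng.uniform(0, 80, n),    # wind generation (MW)
    rng.uniform(0, 60, n),    # solar generation (MW)
])
# Stand-in for LMPs that would come from an OPF solver on historical cases.
y = 30 + 0.4 * X[:, 0] - 0.2 * X[:, 1] - 0.15 * X[:, 2] + rng.normal(0, 2, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor().fit(X_tr, y_tr)

pred = model.predict(X_te)
mape = np.mean(np.abs((pred - y_te) / y_te)) * 100
print(f"MAPE: {mape:.1f}%")  # each prediction is one model call,
                             # versus solving a full OPF per price query
```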
Games and simulators can be a valuable platform for executing complex multi-agent, multiplayer, imperfect-information scenarios with significant parallels to military applications: multiple participants manage resources and make decisions that command assets to secure specific areas of a map or neutralize opposing forces. These characteristics have attracted the artificial intelligence (AI) community by providing complex benchmarks for algorithm development and the capability to rapidly iterate over new ideas. The success of AI algorithms in real-time strategy games such as StarCraft II has also attracted the attention of the military research community, which aims to explore similar techniques in military counterpart scenarios. Aiming to bridge games and military applications, this work discusses past and current efforts on how games and simulators, together with AI algorithms, have been adapted to simulate certain aspects of military missions, and how they might impact the future battlefield. This paper also investigates how advances in virtual reality and visual augmentation systems open new possibilities for human interfaces with gaming platforms and their military parallels.
Federated Learning (FL) is a decentralized machine-learning paradigm in which a global server iteratively averages the model parameters of local users without accessing their data. User heterogeneity imposes significant challenges on FL, as it can produce drifted global models that are slow to converge. Knowledge distillation has recently emerged to tackle this issue by refining the server model using aggregated knowledge from heterogeneous users, rather than directly averaging their model parameters. This approach, however, depends on a proxy dataset, making it impractical unless that prerequisite is satisfied. Moreover, the ensemble knowledge is not fully utilized to guide local model learning, which may in turn affect the quality of the aggregated model. Inspired by the prior art, we propose a data-free knowledge distillation approach to address heterogeneous FL, where the server learns a lightweight generator to ensemble user information in a data-free manner; the generator is then broadcast to users, regulating local training by using the learned knowledge as an inductive bias. Empirical studies backed by theoretical implications show that our approach facilitates FL with better generalization performance using fewer communication rounds, compared with the state of the art.
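A conceptual sketch of this kind of data-free ensembling follows, assuming a label-conditioned feature generator and uploaded user predictors; the shapes, losses, and models are illustrative assumptions, not the paper's implementation.

```python
# Conceptual sketch of data-free knowledge distillation for FL: the
# server trains a generator whose outputs the ensemble of user
# predictors classifies as the conditioning label; users then reuse it
# as an inductive bias. All dimensions and models are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

FEAT, CLASSES, NOISE = 32, 10, 16

generator = nn.Sequential(nn.Linear(NOISE + CLASSES, 64), nn.ReLU(),
                          nn.Linear(64, FEAT))
user_heads = [nn.Linear(FEAT, CLASSES) for _ in range(3)]  # uploaded predictors
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)

def sample(batch: int = 64):
    """Draw label-conditioned synthetic features from the generator."""
    y = torch.randint(0, CLASSES, (batch,))
    z = torch.randn(batch, NOISE)
    feats = generator(torch.cat([z, F.one_hot(y, CLASSES).float()], dim=1))
    return feats, y

# Server step: no proxy data needed; the user predictors themselves
# supervise the generator (data-free ensembling).
for _ in range(100):
    feats, y = sample()
    logits = torch.stack([h(feats) for h in user_heads]).mean(0)
    loss = F.cross_entropy(logits, y)
    g_opt.zero_grad(); loss.backward(); g_opt.step()

# User step (sketch): add a regularization term so the local predictor
# also fits the broadcast ensemble knowledge during local training.
feats, y = sample()
local_head = user_heads[0]
reg = F.cross_entropy(local_head(feats.detach()), y)  # inductive-bias term
```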
Computer architecture and systems have long been optimized to enable the efficient execution of machine learning (ML) algorithms and models. Now it is time to reconsider the relationship between ML and systems and let ML transform how computer architecture and systems are designed. This carries a twofold meaning: improving designers' productivity and completing the virtuous cycle. In this paper, we present a comprehensive review of work that applies ML to system design, which can be grouped into two major categories: ML-based modelling, which involves predicting performance metrics or other criteria of interest, and ML-based design methodology, which directly leverages ML as the design tool. For ML-based modelling, we discuss existing studies according to their target system level, ranging from the circuit level to the architecture/system level. For ML-based design methodology, we follow a bottom-up path through current work, covering (micro-)architecture design (memory, branch prediction, NoC), coordination between architecture/system and workload (resource allocation and management, data center management, and security), compilers, and design automation. We further provide a future vision of opportunities and potential directions, and envision that applying ML to computer architecture and systems will thrive in the community.