Robots are integrating ever larger models to enrich their functionality and improve accuracy, which causes their computing load to grow out of control; as a result, robots face bottlenecks in computing power and battery capacity. Fog and cloud robotics is one of the most anticipated approaches to address these issues. Cloud robotics approaches have evolved from system-level to node-level collaboration. However, present node-level systems are not flexible enough to adapt dynamically to changing conditions. To address this, we present ElasticROS, which evolves present node-level systems into an algorithm-level one. ElasticROS is based on ROS and ROS2 and is the first robot operating system with algorithm-level collaborative computing for fog and cloud robotics. ElasticROS develops elastic collaborative computing to achieve adaptability to dynamic conditions. The collaborative computing algorithm is the core of, and the main challenge in, ElasticROS. We abstract the problem and propose ElasAction, a dynamic action-decision algorithm based on online learning that determines how robots and servers cooperate. The algorithm dynamically updates its parameters to adapt to changes in the conditions the robot is currently operating under, and it elastically distributes computing tasks to robots and servers according to their configurations. In addition, we prove that the regret upper bound of ElasAction is sublinear, which guarantees its convergence and thus keeps the elasticity of ElasticROS stable. Finally, we conducted experiments with ElasticROS on common robotic tasks, including SLAM, grasping, and human-robot dialogue, and measured its latency, CPU usage, and power consumption. The algorithm-level ElasticROS performs significantly better than present node-level systems.
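The abstract does not describe the internals of ElasAction, but the flavor of an online-learning action decision can be illustrated with a toy exponential-weights chooser between computing locally and offloading. Everything below (action names, the cost model, the bandwidth numbers) is an illustrative assumption, not the paper's algorithm.

```python
# Hypothetical sketch of an online action-decision loop in the spirit of
# ElasAction (names and the cost model are illustrative, not from the paper).
import numpy as np

rng = np.random.default_rng(0)
actions = ["compute_on_robot", "offload_to_server"]
weights = np.ones(len(actions))
eta = 0.1  # learning rate

def observe_latency(action, bandwidth_mbps):
    """Toy cost model: offloading is cheap only when bandwidth is high."""
    if action == "compute_on_robot":
        return 0.8 + 0.05 * rng.standard_normal()
    return 0.3 + 10.0 / bandwidth_mbps + 0.05 * rng.standard_normal()

for step in range(200):
    bandwidth = 40.0 if step < 100 else 5.0   # network conditions change mid-run
    probs = weights / weights.sum()
    choice = rng.choice(len(actions), p=probs)
    cost = observe_latency(actions[choice], bandwidth)
    # Exponential-weights update: actions with lower observed latency gain weight.
    est = np.zeros(len(actions))
    est[choice] = cost / probs[choice]        # importance-weighted cost estimate
    weights *= np.exp(-eta * est)

print("final preference:", dict(zip(actions, np.round(weights / weights.sum(), 2))))
```

In this toy run the weights favor offloading while the simulated bandwidth is high and drift back toward on-robot computation after it drops, which is the kind of elasticity the abstract describes, albeit in a much simpler setting.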
Sparse matrix-vector multiplication (SpMV) is central to numerous data-intensive applications, but requires streaming indirect memory accesses that severely degrade both processing and memory throughput in state-of-the-art architectures. Near-memory hardware units that decouple indirect streams from the processing elements partially alleviate the bottleneck, but rely on a low DRAM access granularity, which is highly inefficient for modern DRAM standards like HBM and LPDDR. To fully address the end-to-end challenge, we propose a low-overhead data coalescer combined with a near-memory indirect streaming unit for AXI-Pack, an extension to the widespread AXI4 protocol that packs narrow irregular stream elements onto wide memory buses. Our combined solution leverages the memory-level parallelism and coalescence of streaming indirect accesses in irregular applications like SpMV to maximize the performance and bandwidth efficiency attained on wide memory interfaces. Our solution delivers an average speedup of 8x in effective indirect access, often reaching the full memory bandwidth. As a result, we achieve an average end-to-end speedup on SpMV of 3x. Moreover, our approach is remarkably efficient on chip, requiring merely 27 kB of on-chip storage and a very compact implementation area of 0.2-0.3 mm^2 in a 12 nm node.
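For reference, the access pattern in question is the data-dependent gather in CSR-format SpMV. The minimal NumPy sketch below makes that indirect access explicit; the matrix, vector, and values are made up for illustration.

```python
# Minimal CSR SpMV in NumPy, illustrating the indirect (gather) accesses
# that near-memory streaming units target; all values are made up.
import numpy as np

# CSR form of a small 4x4 sparse matrix.
values  = np.array([10.0, 20.0, 30.0, 40.0, 50.0])
col_idx = np.array([0, 2, 1, 0, 3])      # column index of each nonzero
row_ptr = np.array([0, 2, 3, 3, 5])      # row i spans values[row_ptr[i]:row_ptr[i+1]]
x = np.array([1.0, 2.0, 3.0, 4.0])

y = np.zeros(4)
for i in range(4):
    for k in range(row_ptr[i], row_ptr[i + 1]):
        # x[col_idx[k]] is the data-dependent, irregular access that defeats
        # caches and wide-burst DRAM reads on conventional architectures.
        y[i] += values[k] * x[col_idx[k]]

print(y)   # [70. 60. 0. 240.]
```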
Algebraic effects and handlers are a mechanism for structuring programs with computational effects in a modular way. They have recently been gaining popularity and are being adopted in practical languages such as OCaml. Meanwhile, there has been substantial progress in program verification via refinement type systems. However, thus far there has been no satisfactory refinement type system for algebraic effects and handlers. In this paper, we fill this void by proposing a novel refinement type system for algebraic effects and handlers. The expressivity and usefulness of algebraic effects and handlers come from their ability to manipulate delimited continuations, but delimited continuations also complicate a program's control flow and make verification harder. To address this complexity, we introduce a novel concept that we call answer refinement modification (ARM for short), which allows the refinement type system to precisely track which effects occur and in what order when a program is executed, and to reflect that information as modifications to the refinements in the types of delimited continuations. We formalize our type system supporting ARM (as well as answer type modification, or ATM) and prove its soundness. Additionally, as a proof of concept, we have implemented a corresponding type checking and inference algorithm for a subset of OCaml 5 and evaluated it on a number of benchmark programs. The evaluation demonstrates that ARM is conceptually simple and practically useful. Finally, a natural alternative to reasoning directly about a program with delimited continuations is to apply a continuation-passing style (CPS) transformation that turns the program into a pure one. We investigate this alternative and show that the approach is indeed viable by proposing a novel CPS transformation for algebraic effects and handlers that enjoys bidirectional (refinement-)type preservation.
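Python has no native algebraic effects, but a rough sketch that uses a generator as a one-shot delimited continuation can convey how a handler intercepts effect operations and resumes the suspended computation. This is a simplified emulation for intuition only; it is neither the OCaml 5 semantics nor the type system described above, and all names are invented.

```python
# Simplified emulation of one algebraic effect and its handler, using a Python
# generator as a one-shot delimited continuation (illustrative only).
from dataclasses import dataclass

@dataclass
class Ask:          # an effect operation requesting a value from the handler
    prompt: str

def client():
    # The computation "performs" effects by yielding operations; whatever the
    # handler sends back becomes the operation's result at the suspension point.
    name = yield Ask("name")
    lang = yield Ask("language")
    return f"{name} verifies programs in {lang}"

def handle(gen, answers):
    """Handler: interpret each Ask by lookup, then resume the continuation."""
    try:
        op = next(gen)
        while True:
            op = gen.send(answers[op.prompt])   # resume with the operation's result
    except StopIteration as done:
        return done.value

print(handle(client(), {"name": "ARM", "language": "OCaml"}))
# -> ARM verifies programs in OCaml
```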
Automatic hate speech detection using deep neural models is hampered by the scarcity of labeled datasets, leading to poor generalization. To mitigate this problem, generative AI has been used to produce large amounts of synthetic hate speech sequences from the available labeled examples, and the generated data is then leveraged to finetune large pre-trained language models (LLMs). In this chapter, we provide a review of relevant methods, experimental setups, and evaluations of this approach. In addition to general LLMs such as BERT, RoBERTa, and ALBERT, we apply and evaluate the impact of training-set augmentation with generated data using LLMs that have already been adapted for hate detection, including RoBERTa-Toxicity, HateBERT, HateXplain, ToxDect, and ToxiGen. An empirical study corroborates our previous findings, showing that this approach improves hate speech generalization, boosting recall performance across data distributions. In addition, we explore and compare the performance of the finetuned LLMs with zero-shot hate detection using a GPT-3.5 model. Our results show that while the GPT-3.5 model achieves better generalization, it attains only mediocre recall and low precision on most datasets. It remains an open question whether the sensitivity of models such as GPT-3.5 and its successors can be improved using similar text generation techniques.
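A hedged sketch of the augment-then-finetune pipeline is given below using the Hugging Face Trainer API. The texts, labels, and hyperparameters are placeholders; for simplicity the base checkpoint is plain bert-base-uncased, whereas in practice one would swap in an adapted checkpoint such as HateBERT (the exact hub identifiers are not specified here).

```python
# Hedged sketch: augment the labeled seed data with generated hate-speech
# sequences, then finetune a classifier. All examples below are placeholders.
from datasets import Dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

real = {"text": ["offensive example ...", "benign example ..."], "label": [1, 0]}
synthetic = {"text": ["generated offensive example ..."], "label": [1]}

# Augmentation step: concatenate labeled seed data with generated sequences.
train = Dataset.from_dict({k: real[k] + synthetic[k] for k in real})

name = "bert-base-uncased"   # swap in an adapted checkpoint (e.g., HateBERT)
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name, num_labels=2)
train = train.map(lambda b: tok(b["text"], truncation=True, padding="max_length",
                                max_length=64), batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=1,
                           per_device_train_batch_size=8),
    train_dataset=train,
)
trainer.train()
```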
In the rapidly evolving domain of artificial intelligence, chatbots have emerged as a potent tool for applications ranging from e-commerce to healthcare. This research delves into the intricacies of chatbot technology, from its foundational concepts to advanced generative models such as ChatGPT. We present a comprehensive taxonomy of existing chatbot approaches, distinguishing between rule-based, retrieval-based, generative, and hybrid models. Specific emphasis is placed on ChatGPT, elucidating its merits for chatbots built around frequently asked questions (FAQs), coupled with an exploration of associated Natural Language Processing (NLP) techniques such as named entity recognition, intent classification, and sentiment analysis. The paper further examines the customization and fine-tuning of ChatGPT, its integration with knowledge bases, and the challenges and ethical considerations that arise. Through real-world applications in domains such as online shopping, healthcare, and education, we underscore the transformative potential of chatbots. We also spotlight open challenges and suggest future research directions, emphasizing the need to optimize conversational flow, advance dialogue mechanics, improve domain adaptability, and strengthen ethical safeguards. The research culminates in a call for further exploration toward transparent, ethical, and user-centric chatbot systems.
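As a small illustration of the retrieval-based branch of the taxonomy, the sketch below matches a user query to the closest stored FAQ entry by TF-IDF cosine similarity. The FAQ entries are invented, and the sketch does not correspond to any particular system reviewed here.

```python
# Minimal retrieval-based FAQ bot: match a query to the closest stored question
# by TF-IDF cosine similarity (toy FAQ entries are made up).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

faq = {
    "How do I reset my password?": "Use the 'Forgot password' link on the login page.",
    "What is your refund policy?": "Refunds are issued within 14 days of purchase.",
}
questions = list(faq)

vectorizer = TfidfVectorizer().fit(questions)
q_vecs = vectorizer.transform(questions)

def answer(query: str) -> str:
    sims = cosine_similarity(vectorizer.transform([query]), q_vecs)[0]
    return faq[questions[sims.argmax()]]

print(answer("I forgot my password"))  # -> the password-reset answer
```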
Evaluating retrieval-augmented generation (RAG) systems traditionally relies on hand annotations for input queries, passages to retrieve, and responses to generate. We introduce ARES, an Automated RAG Evaluation System, for evaluating RAG systems along the dimensions of context relevance, answer faithfulness, and answer relevance. Using synthetic training data, ARES finetunes lightweight LM judges to assess the quality of individual RAG components. To mitigate potential prediction errors, ARES utilizes a small set of human-annotated datapoints for prediction-powered inference (PPI). Across six knowledge-intensive tasks in KILT and SuperGLUE, ARES accurately evaluates RAG systems while using only a few hundred human annotations during evaluation. Furthermore, ARES judges remain effective across domain shifts, staying accurate even after the type of queries and/or documents used in the evaluated RAG systems changes. We make our datasets and code for replication and deployment available at https://github.com/stanford-futuredata/ARES.
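The PPI step can be illustrated with the standard prediction-powered mean estimator: the judge's average verdict over many unlabeled outputs is corrected by the judge-versus-human discrepancy measured on the small annotated subset. The numbers below are synthetic (and, unlike the real method, the labeled judge scores here are sampled independently rather than taken on the same annotated examples); the snippet only shows the mechanics, not ARES's actual judges or data.

```python
# Hedged sketch of a prediction-powered inference (PPI) correction:
# cheap judge predictions on many examples, debiased by a small labeled set.
import numpy as np

rng = np.random.default_rng(1)

judge_unlabeled = rng.binomial(1, 0.78, size=2000)   # judge verdicts on unlabeled RAG outputs
judge_labeled   = rng.binomial(1, 0.78, size=150)    # judge verdicts on the annotated subset
human_labeled   = rng.binomial(1, 0.70, size=150)    # human ground truth on that subset

# PPI point estimate: judge mean on unlabeled data, corrected by the average
# judge-vs-human discrepancy measured on the labeled subset.
ppi_estimate = judge_unlabeled.mean() + (human_labeled - judge_labeled).mean()
print(f"naive judge estimate:   {judge_unlabeled.mean():.3f}")
print(f"PPI-corrected estimate: {ppi_estimate:.3f}")
```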
Secure multiparty computation (MPC) techniques enable multiple parties to compute joint functions over their private data without sharing that data with other parties, typically by employing powerful cryptographic protocols to protect individuals' data. One challenge when writing such functions is that most MPC languages force users to intermix programmatic and privacy concerns in a single application, making it difficult to change or audit a program's underlying privacy policy. Existing policy-agnostic MPC languages rely on run-time (dynamic) enforcement to decouple privacy requirements from program logic. Unfortunately, the resulting overhead makes it difficult to scale MPC applications that manipulate structured data. This work proposes to eliminate this overhead by instead transforming programs into semantically equivalent versions that statically enforce user-provided privacy policies. We have implemented this approach in a new MPC language called Taypsi; our experimental evaluation demonstrates that the resulting system yields considerable performance improvements on a variety of MPC applications involving structured data and complex privacy policies.
Graphs are widely used to model complex systems, and detecting anomalies in a graph is an important task in the analysis of such systems. Graph anomalies are patterns in a graph that do not conform to the normal patterns expected of its attributes and/or structures. In recent years, graph neural networks (GNNs) have been studied extensively and have successfully performed difficult machine learning tasks in node classification, link prediction, and graph classification, thanks to their highly expressive capability to learn graph representations via message passing. To solve the graph anomaly detection problem, GNN-based methods leverage information about the graph's attributes (or features) and/or structures to learn to score anomalies appropriately. In this survey, we review recent advances in detecting graph anomalies using GNN models. Specifically, we summarize GNN-based methods according to the graph type (static or dynamic), the anomaly type (node, edge, subgraph, or whole graph), and the network architecture (e.g., graph autoencoder, graph convolutional network). To the best of our knowledge, this survey is the first comprehensive review of graph anomaly detection methods based on GNNs.
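One family of methods covered by such surveys, graph autoencoders, can be sketched in a few lines: embed nodes with a single message-passing layer, reconstruct the adjacency with an inner-product decoder, and score each node by its reconstruction error. The graph, features, and (untrained) weights below are arbitrary placeholders.

```python
# Toy graph-autoencoder anomaly scoring: one GCN-style propagation, an
# inner-product decoder, and per-node reconstruction error as the score.
import numpy as np

rng = np.random.default_rng(0)

A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 0, 1],
              [0, 0, 0, 0, 1],
              [0, 0, 1, 1, 0]], dtype=float)   # toy adjacency matrix
X = rng.normal(size=(5, 4))                    # toy node features

A_hat = A + np.eye(5)                          # add self-loops
D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(1)))
A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt       # symmetric normalization

W = rng.normal(size=(4, 8))                    # untrained layer weights
Z = np.maximum(A_norm @ X @ W, 0)              # one message-passing layer + ReLU

A_rec = 1.0 / (1.0 + np.exp(-Z @ Z.T))         # inner-product decoder
score = np.abs(A - A_rec).sum(axis=1)          # per-node reconstruction error
print("anomaly scores:", np.round(score, 2))
```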
Interpretability methods are developed to understand the working mechanisms of black-box models, which is crucial to their responsible deployment. Fulfilling this goal requires both that the explanations generated by these methods are correct and that people can easily and reliably understand them. While the former has been addressed in prior work, the latter is often overlooked, resulting in informal model understanding derived from a handful of local explanations. In this paper, we introduce explanation summary (ExSum), a mathematical framework for quantifying model understanding, and propose metrics for its quality assessment. On two domains, ExSum highlights various limitations in the current practice, helps develop accurate model understanding, and reveals easily overlooked properties of the model. We also connect understandability to other properties of explanations such as human alignment, robustness, and counterfactual minimality and plausibility.
Autonomic computing investigates how systems can achieve user-specified control outcomes on their own, without the intervention of a human operator. Its fundamentals have been substantially influenced by control theory for closed- and open-loop systems. In practice, complex systems may exhibit a number of concurrent and interdependent control loops. Despite research into autonomic models for managing computer resources, ranging from individual resources (e.g., web servers) to resource ensembles (e.g., multiple resources within a data center), integrating Artificial Intelligence (AI) and Machine Learning (ML) to improve resource autonomy and performance at scale remains a fundamental challenge. The integration of AI/ML to achieve such autonomic self-management of systems can take place at different levels of granularity, from full automation to human-in-the-loop automation. In this article, leading academics, researchers, practitioners, engineers, and scientists in the fields of cloud computing, AI/ML, and quantum computing come together to discuss current research and potential future directions for these fields. Further, we discuss challenges and opportunities for leveraging AI and ML in next-generation computing across emerging paradigms, including cloud, fog, edge, serverless, and quantum computing environments.
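For intuition about the closed control loops mentioned above, the toy sketch below runs a proportional controller that scales a replica count toward a target CPU utilization. The workload model and parameters are invented and do not correspond to any system discussed in the article.

```python
# Toy closed control loop: a proportional controller nudges the replica count
# toward a target utilization. The workload model below is made up.
target_util = 0.6
replicas = 2
demand = 4.0   # total "work" the pool must absorb

for step in range(8):
    util = min(demand / replicas, 1.0)                 # monitor: observe utilization
    error = util - target_util                         # analyze: compare with the goal
    replicas = max(1, round(replicas * (1 + error)))   # plan/execute: scale proportionally
    print(f"step {step}: util={util:.2f} -> replicas={replicas}")
```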
With the capability of modeling bidirectional contexts, denoising autoencoding based pretraining like BERT achieves better performance than pretraining approaches based on autoregressive language modeling. However, relying on corrupting the input with masks, BERT neglects dependency between the masked positions and suffers from a pretrain-finetune discrepancy. In light of these pros and cons, we propose XLNet, a generalized autoregressive pretraining method that (1) enables learning bidirectional contexts by maximizing the expected likelihood over all permutations of the factorization order and (2) overcomes the limitations of BERT thanks to its autoregressive formulation. Furthermore, XLNet integrates ideas from Transformer-XL, the state-of-the-art autoregressive model, into pretraining. Empirically, XLNet outperforms BERT on 20 tasks, often by a large margin, and achieves state-of-the-art results on 18 tasks including question answering, natural language inference, sentiment analysis, and document ranking.
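For reference, the permutation language modeling objective that XLNet maximizes, as formulated in the paper, takes the following form, where \(\mathcal{Z}_T\) denotes the set of all permutations of the index sequence \([1, \dots, T]\) and \(z_t\), \(\mathbf{z}_{<t}\) are the \(t\)-th element and the first \(t-1\) elements of a permutation \(\mathbf{z}\):

```latex
\max_{\theta}\;
\mathbb{E}_{\mathbf{z} \sim \mathcal{Z}_T}
\left[ \sum_{t=1}^{T} \log p_{\theta}\!\left(x_{z_t} \mid \mathbf{x}_{\mathbf{z}_{<t}}\right) \right]
```

Because the expectation ranges over all factorization orders, each position is trained to condition on tokens from both sides of its context while the model itself remains autoregressive.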