Language instructions and demonstrations are two natural ways for users to teach robots personalized tasks. Recent progress in Large Language Models (LLMs) has shown impressive performance in translating language instructions into code for robotic tasks. However, translating demonstrations into task code remains challenging due to the length and complexity of both demonstrations and code, which makes learning a direct mapping intractable. This paper presents Demo2Code, a novel framework that generates robot task code from demonstrations via an extended chain-of-thought, defining a common latent specification to connect the two. Our framework employs a robust two-stage process: (1) a recursive summarization technique that condenses demonstrations into concise specifications, and (2) a code synthesis approach that expands each function recursively from the generated specifications. We conduct extensive evaluations on various robot task benchmarks, including a novel game benchmark, Robotouille, designed to simulate diverse cooking tasks in a kitchen environment. The project's website is available at //portal-cornell.github.io/demo2code-webpage.
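As a concrete illustration of the two-stage process, the sketch below wires a recursive summarizer and a recursive code expander around a generic LLM client. Here `query_llm`, the prompts, and the naive `undefined_functions` helper are all hypothetical stand-ins, not the paper's actual implementation.

```python
import re

def query_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM client here")

def undefined_functions(code: str) -> set[str]:
    """Naively find names that are called but never defined."""
    called = set(re.findall(r"(\w+)\(", code))
    defined = set(re.findall(r"def (\w+)\(", code))
    return called - defined

def recursive_summarize(demos: list[str], rounds: int = 3) -> str:
    """Stage 1: repeatedly condense demonstrations into a specification."""
    text = "\n\n".join(demos)
    for _ in range(rounds):
        text = query_llm("Summarize these robot demonstrations into a more "
                         f"abstract task specification:\n{text}")
    return text

def expand_code(spec: str, max_expansions: int = 10) -> str:
    """Stage 2: synthesize high-level task code, then recursively
    implement every helper function it references."""
    code = query_llm(f"Write high-level robot task code for:\n{spec}")
    for _ in range(max_expansions):
        missing = undefined_functions(code)
        if not missing:
            break
        fn = missing.pop()
        code += "\n\n" + query_llm(
            f"Given this code:\n{code}\nImplement the function `{fn}`.")
    return code
```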
We present a novel method for deploying multi-party, zero-trust validator infrastructure via smart contracts to secure Proof-of-Stake (PoS) blockchains. The proposed architecture employs a combination of non-fungible tokens (NFTs), a treasury contract, and validator smart contract wallets to facilitate trustless participation in staking mechanisms. The NFT minting process allows depositors to exchange their capital for an NFT representing their stake in a validator, while the treasury contract manages the registry of NFT holders and handles reward distribution. Validator smart contract wallets create a trustless connection between the validator operator and the treasury, enabling autonomous staking and unstaking based on predefined conditions. In addition, the proposed system incorporates protection mechanisms for depositors, such as triggered exits in case of non-payment of rewards and a penalty payout from the validator operator. The arrangement benefits from the extensibility and interoperability of web3 technologies, with potential applications in the broader digital ecosystem. This zero-trust staking mechanism aims to serve users who desire increased privacy, trust, and flexibility in managing their digital wealth, while promoting greater decentralization and transparency in the PoS ecosystem.
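As an illustrative model of the contract logic described above (minting, pro-rata reward distribution, and triggered exits), the following Python sketch simulates a treasury. A production version would live on-chain (e.g. in Solidity), and names such as `Deposit` and `MISSED_LIMIT` are assumptions, not the paper's specification.

```python
from dataclasses import dataclass, field

MISSED_LIMIT = 3  # consecutive missed reward payments before a forced exit

@dataclass
class Deposit:
    holder: str      # address of the NFT holder
    amount: float    # staked capital represented by the NFT

@dataclass
class Treasury:
    deposits: dict[int, Deposit] = field(default_factory=dict)  # NFT id -> deposit
    missed_payments: int = 0
    next_id: int = 0

    def mint(self, holder: str, amount: float) -> int:
        """Exchange capital for an NFT representing a validator stake."""
        nft_id = self.next_id
        self.deposits[nft_id] = Deposit(holder, amount)
        self.next_id += 1
        return nft_id

    def distribute_rewards(self, reward: float) -> dict[str, float]:
        """Split a reward payment pro rata across NFT holders."""
        total = sum(d.amount for d in self.deposits.values())
        self.missed_payments = 0
        return {d.holder: reward * d.amount / total
                for d in self.deposits.values()}

    def report_missed_payment(self) -> bool:
        """Depositor protection: trigger an exit after repeated non-payment."""
        self.missed_payments += 1
        return self.missed_payments >= MISSED_LIMIT  # True => unstake validator
```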
Recent progress in large language models (LLMs), especially the invention of chain-of-thought (CoT) prompting, has made it possible for LLMs to solve reasoning problems. However, even the strongest LLMs still struggle with more complicated problems that require non-linear thinking and multi-step reasoning. In this work, we explore whether LLMs can recognize their own errors without resorting to external resources. In particular, we investigate whether they can be used to identify individual errors within a step-by-step reasoning chain. To this end, we propose a zero-shot verification scheme to recognize such errors. We then use this verification scheme to improve question-answering performance by performing weighted voting over different generated answers. We test the method on three math datasets (GSM8K, MathQA, and MATH) and find that it successfully recognizes errors and, in turn, increases final predictive performance.
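A minimal sketch of how such zero-shot step verification can drive weighted voting is given below; `query_llm` is a hypothetical chat-completion client, and the yes/no scoring and mean-over-steps chain weight are illustrative choices, not the paper's exact scheme.

```python
from collections import defaultdict

def query_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM client here")

def verify_step(question: str, steps: list[str], i: int) -> float:
    """Zero-shot check of step i given the question and preceding steps."""
    prompt = (f"Question: {question}\n" + "\n".join(steps[:i + 1]) +
              "\nIs the last step logically correct given everything above? "
              "Answer yes or no.")
    return 1.0 if query_llm(prompt).strip().lower().startswith("yes") else 0.0

def weighted_vote(question: str,
                  candidates: list[tuple[list[str], str]]) -> str:
    """Each candidate is (reasoning steps, final answer); weight every
    answer by the mean verification score of its chain."""
    weights: dict[str, float] = defaultdict(float)
    for steps, answer in candidates:
        scores = [verify_step(question, steps, i) for i in range(len(steps))]
        weights[answer] += sum(scores) / len(scores)
    return max(weights, key=weights.get)
```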
With rapid technological growth, security attacks are drastically increasing. In many crucial Internet-of-Things (IoT) applications, such as healthcare and defense, the early detection of security attacks plays a significant role in protecting critical resources. An intrusion detection system is used to address this problem. Signature-based approaches fail to detect zero-day attacks, so anomaly-based detection, particularly using AI tools, is becoming popular. In addition, imbalanced datasets lead to biased results. In Machine Learning (ML) models, the F1 score is an important metric for measuring the accuracy of class-level predictions. If the F1 score is considerably low, the model may fail to detect target samples, which can have unrecoverable consequences in sensitive applications such as healthcare and defense. Any improvement in the F1 score therefore has a significant impact on resource protection. In this paper, we present a framework for an ML-based intrusion detection system on an imbalanced dataset. In this study, a recent dataset, namely CICIoT2023, is considered. The random forest (RF) algorithm is used in the proposed framework. The proposed approach improves precision, recall, and F1 score by 3.72%, 3.75%, and 4.69%, respectively, over the existing method. Additionally, for unsaturated classes (i.e., classes with an F1 score < 0.99), the F1 score improves significantly, by 7.9%. As a result, the proposed approach is well suited to IoT security applications requiring efficient intrusion detection and is useful for further studies.
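For concreteness, a minimal version of such a pipeline could look as follows; the class weighting, train/test split, and column names are assumptions, since the abstract does not spell out the framework's preprocessing.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

df = pd.read_csv("CICIoT2023.csv")            # hypothetical local copy
X, y = df.drop(columns=["label"]), df["label"]  # "label" is a placeholder name
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

# class_weight="balanced" counteracts the skewed class distribution
clf = RandomForestClassifier(n_estimators=200, class_weight="balanced",
                             n_jobs=-1, random_state=0)
clf.fit(X_tr, y_tr)

# per-class precision/recall/F1, the metrics the paper focuses on
print(classification_report(y_te, clf.predict(X_te), digits=4))
```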
Large Language Models (LLMs) have the potential to revolutionize automated traceability by overcoming the challenges faced by previous methods and introducing new possibilities. However, the optimal utilization of LLMs for automated traceability remains unclear. This paper explores the process of prompt engineering to extract link predictions from an LLM. We provide detailed insights into our approach for constructing effective prompts and offer our lessons learned. Additionally, we propose multiple strategies for leveraging LLMs to generate traceability links, improving upon previous zero-shot methods in ranking candidate links after prompt refinement. The primary objective of this paper is to inspire and assist future researchers and engineers by highlighting the process of constructing traceability prompts so as to effectively harness LLMs for advancing automated traceability.
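One plausible shape for such a prompt and the resulting candidate-link ranking is sketched below; the wording, the confidence parsing, and `query_llm` are all illustrative assumptions rather than the paper's refined prompts.

```python
import re

def query_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM client here")

def trace_prompt(requirement: str, artifact: str) -> str:
    return ("You are tracing software requirements to code artifacts.\n\n"
            f"Requirement:\n{requirement}\n\n"
            f"Artifact:\n{artifact}\n\n"
            "Are the two related? Reply 'yes' or 'no', "
            "then a confidence between 0 and 1.")

def parse_confidence(reply: str) -> float:
    """Turn the LLM reply into a signed score for ranking."""
    m = re.search(r"(\d*\.?\d+)", reply)
    score = float(m.group(1)) if m else 0.0
    return score if reply.strip().lower().startswith("yes") else -score

def rank_candidates(requirement: str, artifacts: list[str]) -> list[str]:
    """Score every candidate artifact and return them best-first."""
    scored = [(parse_confidence(query_llm(trace_prompt(requirement, a))), a)
              for a in artifacts]
    return [a for _, a in sorted(scored, key=lambda t: t[0], reverse=True)]
```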
Humanoid robots are expected to navigate in changing environments and perform a variety of tasks. Frequently, these tasks require the robot to make decisions online regarding the speed and precision of following a reference path. For example, a robot may decide to temporarily deviate from its path to overtake a slowly moving obstacle that shares the same path and is ahead; in this case, path-following performance is compromised in favor of fast path traversal. Available global trajectory tracking approaches typically assume a given, specified-in-advance time parametrization of the path and seek to minimize the norm of the Cartesian error. As a result, where the robot should be on the path at any given time is fixed, and temporary deviations from the path are strongly discouraged. Given a global path, this paper presents a Model Predictive Contouring Control (MPCC) approach to selecting footsteps that maximize path traversal while simultaneously allowing the robot to decide between faithful and fast path following. The method is evaluated in high-fidelity simulations of the bipedal robot Digit in terms of tracking performance on curved paths under disturbances, and it is also applied to the case where Digit overtakes a moving obstacle.
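The trade-off can be made concrete with a toy 2-D contouring cost: a fidelity term pulls each footstep toward the path while a progress term rewards traversal, and their relative weights decide between faithful and fast following. The sketch below is a point-mass caricature with made-up weights and a soft step-length limit, not the paper's full MPCC for Digit.

```python
import numpy as np
from scipy.optimize import minimize

def path(s):
    """Reference path p(s), parametrized by progress s."""
    return np.array([s, np.sin(s)])

def mpcc_cost(z, p0, N=5, w_c=10.0, w_p=1.0, max_step=0.3):
    """z packs N footstep positions (x, y) followed by N path parameters."""
    p, s = z[:2 * N].reshape(N, 2), z[2 * N:]
    cost, prev = 0.0, p0
    for k in range(N):
        cost += w_c * np.sum((p[k] - path(s[k])) ** 2)   # path fidelity
        # soft kinematic limit on step length
        cost += 100.0 * max(0.0, np.linalg.norm(p[k] - prev) - max_step) ** 2
        prev = p[k]
    return cost - w_p * s[-1]                            # reward traversal

p0 = np.zeros(2)
z0 = np.concatenate([np.tile(p0, 5), np.linspace(0.1, 0.5, 5)])
sol = minimize(mpcc_cost, z0, args=(p0,), method="Nelder-Mead",
               options={"maxiter": 20000})
print("progress along the path per footstep:", sol.x[10:])
# raising w_p relative to w_c shifts the optimum from faithful to fast
```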
The recent advancements in deep learning have brought about significant changes in various aspects of people's lives. Meanwhile, these rapid developments have raised concerns about the legitimacy of the training process of deep networks. However, to protect the intellectual property of untrusted AI developers, verifiers are often prohibited from directly examining the training process by accessing the model parameters and training data. In response to this challenge, we present zkDL, an efficient zero-knowledge proof of deep learning training. At the core of zkDL is zkReLU, a specialized zero-knowledge proof protocol with optimized proving time and proof size for the ReLU activation function, a major obstacle in verifiable training due to its non-arithmetic nature. To integrate zkReLU into the proof system for the entire training process, we devise a novel construction of an arithmetic circuit from neural networks. By leveraging abundant parallel computation resources, this construction reduces proving time and proof sizes by a factor of the network depth. As a result, zkDL enables the generation of complete and sound proofs, taking less than a minute with a size of less than 20 kB per training step, for a 16-layer neural network with 200M parameters, while ensuring the privacy of data and model parameters.
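To see why ReLU is an obstacle, note that proof systems natively check polynomial identities, while ReLU is piecewise linear. A standard workaround, sketched below in plain Python, is to have the prover supply the output as an auxiliary witness and check it against arithmetic constraints (the two inequalities require range proofs in a real ZK system). zkReLU is an optimized protocol in this spirit; this sketch is not its actual construction.

```python
def relu_constraints_hold(x: int, y: int) -> bool:
    """y is claimed to equal ReLU(x). The prover supplies y as an
    auxiliary witness; the verifier checks a polynomial identity plus
    two range conditions instead of evaluating max(x, 0) directly."""
    return (
        y * (y - x) == 0   # y is either 0 or x (arithmetic identity)
        and y >= 0         # rules out y = x when x is negative
        and y >= x         # rules out y = 0 when x is positive
    )

assert relu_constraints_hold(5, 5)        # ReLU(5) = 5
assert relu_constraints_hold(-3, 0)       # ReLU(-3) = 0
assert not relu_constraints_hold(-3, -3)  # a cheating witness is rejected
```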
Public libraries play a crucial role in disseminating knowledge to society. However, most of their users do not have the specialized knowledge to understand new research findings. Providing plain language summaries (PLSs) in public libraries is a way to make new research findings more accessible and understandable for the public. This article proposes a framework for providing PLSs as a new service in public libraries. Drawing from the literature on science and society, PLSs, and public libraries, a theoretical framework is developed. The findings suggest that public libraries can collect PLSs through different methods, such as professional teams, researchers, and crowdsourcing. Library newsletters, special publications, brochures, independent online databases, and social networks are among the most effective channels for making PLSs accessible to users. By proposing a framework for providing PLSs in public libraries, this study helps to bridge the gap between scientific research and the public.
Polynomial regression is widely used and can help express nonlinear patterns. However, very high polynomial orders may lead to overfitting and poor extrapolation to unseen data. This paper presents a method for constructing a high-order polynomial regression based on the Taylor map factorization. The method naturally implements multi-target regression and can capture internal relationships between targets. Additionally, we introduce an approach for model interpretation in the form of systems of differential equations. By benchmarking on UCI open access datasets, Feynman symbolic regression datasets, and Friedman-1 datasets, we demonstrate that the proposed method performs comparably to state-of-the-art regression methods and outperforms them on specific tasks.
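The economy of the factorization can be seen in how fast the effective order grows: composing a single order-2 map k times realizes an order-2^k polynomial while only the map's few coefficients are learned. The snippet below demonstrates just this degree growth with hypothetical coefficients; the paper's fitting procedure and multi-target structure are not shown.

```python
import numpy as np
from numpy.polynomial import polynomial as P

# coefficients of one second-order map step: c0 + c1*x + c2*x^2 (made up)
step = [0.1, 0.9, 0.3]

poly = np.array([0.0, 1.0])                 # start from the identity, x
for k in range(3):                          # compose the step three times
    # substitute poly into the step: step[0] + step[1]*poly + step[2]*poly^2
    poly = P.polyadd(P.polyadd([step[0]], P.polymul([step[1]], poly)),
                     P.polymul([step[2]], P.polymul(poly, poly)))
    print(f"after {k + 1} composed steps: polynomial degree {len(poly) - 1}")
# prints degrees 2, 4, 8: three coefficients realize an order-8 regressor
```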
Large-scale language models such as DNABert and LOGO aim to learn optimal gene representations and are trained on the entire Human Reference Genome. However, standard tokenization schemes involve a simple sliding window of tokens, such as k-mers, that does not leverage any gene-based semantics and thus may lead to (trivial) masking of easily predictable sequences and, subsequently, inefficient Masked Language Modeling (MLM) training. We therefore propose a novel masking algorithm, GeneMask, for MLM training on gene sequences: we randomly identify positions in a gene sequence as mask centers and locally select the span around each mask center with the highest Normalized Pointwise Mutual Information (NPMI) to mask. We observe that, in the absence of human-understandable semantics in the genomics domain (in contrast to NLP, where semantic units like words and phrases are inherently available), GeneMask-based models substantially outperform the SOTA models (DNABert and LOGO) on four benchmark gene sequence classification datasets in five few-shot settings (10 to 1000 shots). More significantly, the GeneMask-based DNABert model is trained for less than one-tenth the number of epochs of the original SOTA model. We also observe a strong correlation between top-ranked PMI tokens and conserved DNA sequence motifs, which may indicate the incorporation of latent genomic information. The code (including trained models) and datasets are publicly available at //github.com/roysoumya/GeneMask.
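A simplified sketch of the masking rule is given below: NPMI is estimated over adjacent token pairs, and for each randomly chosen mask center the surrounding window with the highest summed NPMI is selected. GeneMask's actual computation over k-mer vocabularies differs in detail; the function names and window logic here are illustrative.

```python
import math
from collections import Counter

def npmi_scores(tokens: list[str]) -> dict[tuple[str, str], float]:
    """Normalized PMI of adjacent token (e.g. k-mer) pairs, from counts."""
    n, n_pairs = len(tokens), len(tokens) - 1
    uni, bi = Counter(tokens), Counter(zip(tokens, tokens[1:]))
    return {(a, b): math.log((c / n_pairs) / ((uni[a] / n) * (uni[b] / n)))
                    / -math.log(c / n_pairs)
            for (a, b), c in bi.items()}

def select_mask_span(tokens: list[str], center: int, width: int,
                     scores: dict[tuple[str, str], float]) -> list[int]:
    """Among windows of `width` tokens containing the mask center,
    return the positions of the one with the highest summed NPMI."""
    starts = range(max(0, center - width + 1),
                   min(center, len(tokens) - width) + 1)
    best = max(starts, key=lambda i: sum(
        scores.get((tokens[j], tokens[j + 1]), 0.0)
        for j in range(i, i + width - 1)))
    return list(range(best, best + width))   # token positions to mask
```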
This work aims at decreasing the end-to-end generation latency of large language models (LLMs). One of the major causes of high generation latency is the sequential decoding approach adopted by almost all state-of-the-art LLMs. In this work, motivated by the thinking and writing process of humans, we propose "Skeleton-of-Thought" (SoT), which guides LLMs to first generate the skeleton of the answer and then complete the contents of each skeleton point in parallel via parallel API calls or batched decoding. Not only does SoT provide considerable speed-up (up to 2.39x across 11 different LLMs), but it can also potentially improve answer quality on several question categories in terms of diversity and relevance. SoT is an initial attempt at data-centric optimization for efficiency and reveals the potential of pushing LLMs to think more like a human for better answer quality.
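A minimal sketch of this skeleton-then-parallel-expansion loop is shown below; `query_llm` is a hypothetical stand-in for any chat-completion client, and the prompts are illustrative rather than the paper's.

```python
from concurrent.futures import ThreadPoolExecutor

def query_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM client here")

def skeleton_of_thought(question: str) -> str:
    # one sequential call drafts the short skeleton of the answer
    skeleton = query_llm(f"{question}\nAnswer with only a short numbered "
                         "list of 3-8 points, a few words each.")
    points = [p for p in skeleton.splitlines() if p.strip()]
    # every skeleton point is then expanded by a parallel API call
    with ThreadPoolExecutor() as pool:
        bodies = list(pool.map(
            lambda point: query_llm(
                f"Question: {question}\nSkeleton:\n{skeleton}\n"
                f"Expand the point '{point}' in 2-3 sentences."),
            points))
    return "\n\n".join(bodies)
```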