Traditional static user interfaces (UIs) have given way to dynamic systems that intelligently adapt and respond to users' changing needs. Temporal interaction is an emerging area of human-computer interaction (HCI) concerned with the study and design of UIs that adapt and respond to a user's changing behavioral and emotional states. By understanding and incorporating the temporal component of user interactions, it focuses on developing dynamic and individualized user experiences, emphasizing the value of adjusting to user behavior and emotion to create a more distinctive and engaging experience. This paper examines the capacity of temporal interaction to adapt to user behavior and react to emotional states, highlighting its potential to reshape user interface design. By leveraging temporal interactions, designers can create interfaces that respond to users' changing demands, emotions, and behaviors, producing interfaces that are not only highly functional but also form an emotional connection with users.
While enabling large language models (LLMs) to call external functions (i.e., APIs) can greatly enhance their performance, function calling remains a challenging task due to the complicated relations between different APIs, especially in an in-context learning setting without fine-tuning. This paper proposes a simple yet controllable target-driven approach, Reverse Chain, that empowers LLMs to use external APIs with prompts alone. Because most open-source LLMs have limited tool-use or tool-planning capabilities, LLMs in Reverse Chain are employed only for simple subtasks, e.g., API selection and argument completion, while a generic rule implements controllable multi-function calling. Under this rule, after an LLM selects a final API to handle a given task, it is first asked to fill the required arguments from the user query and context. Missing arguments can then be completed by having the LLM select another API, based on the API descriptions, before resorting to asking the user; this process continues until the task is completed. Extensive numerical experiments demonstrate an impressive capability of Reverse Chain for multiple function calling. Notably, the experiments also reveal that the tool-use capabilities of existing LLMs, e.g., ChatGPT, can be greatly improved via Reverse Chain.
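To make the backward argument-completion loop concrete, the following is a minimal, runnable sketch of the control flow described above. The "LLM" decisions are replaced with trivial rule-based stand-ins so the loop can be executed end-to-end; in the actual approach these decisions are made by a prompted LLM, and all API names and helper functions here are illustrative assumptions.

```python
# Toy API registry: each API lists its required arguments and what it provides.
APIS = {
    "get_employee_id": {"required": ["employee_name"], "provides": "employee_id"},
    "get_salary":      {"required": ["employee_id"],   "provides": "salary"},
}

def llm_select_api(needed_field):
    """Stand-in for the LLM's API-selection step: pick an API whose output
    matches the field still needed (the real system prompts an LLM with the
    API descriptions)."""
    for name, spec in APIS.items():
        if spec["provides"] == needed_field:
            return name
    return None

def llm_fill_args(api_name, query):
    """Stand-in for the LLM's argument-completion step: extract whatever
    arguments are literally present in the user query."""
    args = {}
    if api_name == "get_employee_id" and "Amy" in query:
        args["employee_name"] = "Amy"
    return args

def call_api(api_name, args):
    """Stand-in for real API execution."""
    fake_backend = {"get_employee_id": "E-042", "get_salary": 73000}
    return fake_backend[api_name]

def reverse_chain(needed_field, query):
    """Generic rule: select an API for the target, fill its arguments from the
    query/context, and recursively resolve any still-missing argument via
    another API, falling back to asking the user."""
    api_name = llm_select_api(needed_field)
    if api_name is None:
        return input(f"Please provide '{needed_field}': ")   # last resort: ask the user
    args = llm_fill_args(api_name, query)
    for arg in APIS[api_name]["required"]:
        if arg not in args:
            args[arg] = reverse_chain(arg, query)             # backward completion
    return call_api(api_name, args)

print(reverse_chain("salary", "What is Amy's salary?"))       # -> 73000
```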
We examine how a human-robot interaction (HRI) system may be designed when input-output data from previous experiments are available. In particular, we consider how to select an optimal impedance in the assistance design for a cooperative manipulation task with a new operator. Due to variability between individuals, the design parameters that best suit one operator of the robot may not be the best parameters for another. However, by incorporating historical data through a linear auto-regressive (AR-1) Gaussian process, the search for a new operator's optimal parameters can be accelerated. We lay out a framework for optimizing human-robot cooperative manipulation that requires only input-output data. We establish how the AR-1 model improves the bound on the regret, and we numerically simulate a human-robot cooperative manipulation task to show the regret improvement. Further, an additional numerical study shows how the input-output nature of our approach provides robustness against modeling error.
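The following is a minimal numerical sketch of the AR-1 (linear auto-regressive) Gaussian-process idea invoked above, in which the new operator's objective is modeled as f_new(x) = rho * f_hist(x) + delta(x) so that historical input-output data sharpen predictions for the new operator. The kernel choices, noise level, rho, and toy objectives are illustrative assumptions, not the experimental setup of the work itself.

```python
import numpy as np

def rbf(A, B, ell=0.3):
    """Squared-exponential kernel between 1-D input arrays."""
    d = A[:, None] - B[None, :]
    return np.exp(-0.5 * (d / ell) ** 2)

rng = np.random.default_rng(0)
rho, noise = 0.8, 1e-2

# Toy "impedance -> performance" curves for the previous and new operators.
f_hist = lambda x: np.sin(3 * x)
f_new = lambda x: rho * np.sin(3 * x) + 0.3 * np.cos(5 * x)

X_h = rng.uniform(0, 2, 30)                  # plentiful historical data
X_n = rng.uniform(0, 2, 4)                   # a few trials with the new operator
y_h = f_hist(X_h) + noise * rng.standard_normal(30)
y_n = f_new(X_n) + noise * rng.standard_normal(4)

# Joint covariance of [y_h, y_n] under the AR-1 model:
#   Cov(f_new, f_hist) = rho * k1,  Cov(f_new, f_new) = rho^2 * k1 + k2.
K = np.block([
    [rbf(X_h, X_h),       rho * rbf(X_h, X_n)],
    [rho * rbf(X_n, X_h), rho**2 * rbf(X_n, X_n) + rbf(X_n, X_n, ell=0.5)],
]) + noise**2 * np.eye(34)

Xs = np.linspace(0, 2, 200)                  # candidate impedance settings
Ks = np.hstack([rho * rbf(Xs, X_h), rho**2 * rbf(Xs, X_n) + rbf(Xs, X_n, ell=0.5)])

mu = Ks @ np.linalg.solve(K, np.concatenate([y_h, y_n]))   # posterior mean for the new operator
best = Xs[np.argmax(mu)]                     # next impedance to try / deploy
print(f"predicted-best impedance: {best:.3f}")
```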
Terminology correctness is important in downstream applications of machine translation, and a prevalent way to ensure it is to inject terminology constraints into a translation system. In our submission to the WMT 2023 terminology translation task, we adopt a translate-then-refine approach that is domain-independent and requires minimal manual effort. We first train a terminology-aware model by annotating random source words with pseudo-terminology translations obtained from word alignment. We then explore two post-processing methods. First, we use an alignment process to discover whether a terminology constraint has been violated, and if so, we re-decode with the violating word negatively constrained. Alternatively, we leverage a large language model to refine a hypothesis by providing it with terminology constraints. Results show that our terminology-aware model learns to incorporate terminologies effectively, and that the large-language-model refinement process can further improve terminology recall.
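As a rough illustration of the first post-processing step, the sketch below checks, given word alignments between a source sentence and a hypothesis, which terminology constraints were violated and collects the violating target words so that a second decoding pass could negatively constrain them. The alignment is taken as given (e.g., from a statistical or neural aligner), and the tokens and term pairs are toy examples rather than WMT 2023 data.

```python
def find_violations(src_tokens, hyp_tokens, alignment, term_dict):
    """alignment: list of (src_idx, tgt_idx) pairs;
    term_dict: source term -> required target term."""
    negative_constraints = []
    for s_idx, s_tok in enumerate(src_tokens):
        if s_tok not in term_dict:
            continue
        required = term_dict[s_tok]
        aligned = [hyp_tokens[t] for s, t in alignment if s == s_idx]
        if required not in aligned:                  # constraint violated
            negative_constraints.extend(aligned)     # words to ban when re-decoding
    return negative_constraints

src = ["the", "torque", "sensor", "failed"]
hyp = ["der", "Drehmoment", "Sensor", "fiel", "aus"]
align = [(0, 0), (1, 1), (2, 2), (3, 3), (3, 4)]
terms = {"torque": "Drehmomentsensor"}               # toy constraint
print(find_violations(src, hyp, align, terms))       # -> ['Drehmoment']
```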
Offline pretraining with a static dataset followed by online fine-tuning (offline-to-online, or OtO) is a paradigm well matched to a real-world RL deployment process: in few real settings would one deploy an offline policy with no test runs or tuning. In this scenario, we aim to find the best-performing policy within a limited budget of online interactions. Previous work in the OtO setting has focused on correcting for the bias introduced by the policy-constraint mechanisms of offline RL algorithms. Such constraints keep the learned policy close to the behavior policy that collected the dataset, but they unnecessarily limit policy performance if the behavior policy is far from optimal. Instead, we forgo policy constraints and frame OtO RL as an exploration problem: we must maximize the benefit of online data collection. We study major online RL exploration paradigms, adapting them to work well in the OtO setting; these adapted methods contribute several strong baselines. We also introduce an algorithm for planning to go out of distribution (PTGOOD), which targets online exploration in relatively high-reward regions of the state-action space that are unlikely to be visited by the behavior policy. By leveraging concepts from the Conditional Entropy Bottleneck, PTGOOD encourages the data collected online to provide new information relevant to improving the final deployment policy, so that the limited interaction budget is used effectively. We show that PTGOOD significantly improves agent returns during online fine-tuning and finds the optimal policy in as few as 10k online steps in Walker and in as few as 50k in complex control tasks such as Humanoid. We also find that PTGOOD avoids the suboptimal policy convergence that many of our baselines exhibit in several environments.
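The following is a conceptual, runnable sketch of the kind of planning loop implied above: candidate action sequences are rolled out through a (learned) dynamics model and scored by how far their state-action pairs stray from the offline behavior data, and the best first action is executed. Both the dynamics model and the score here are toy stand-ins; in particular, the distance-based score is a placeholder for a learned novelty/relevance measure and does not reproduce the Conditional Entropy Bottleneck machinery of PTGOOD.

```python
import numpy as np

rng = np.random.default_rng(1)

def dynamics(state, action):
    """Stand-in for a learned dynamics model."""
    return 0.9 * state + 0.1 * action

def novelty_score(state, action, behavior_data):
    """Stand-in score: distance of (s, a) from the offline behavior data,
    so pairs unlikely under the behavior policy score higher."""
    sa = np.concatenate([state, action])
    return np.min(np.linalg.norm(behavior_data - sa, axis=1))

def plan(state, behavior_data, horizon=5, n_candidates=64, dim=2):
    """Score random candidate action sequences and return the best first action."""
    best_score, best_first_action = -np.inf, None
    for _ in range(n_candidates):
        seq = rng.normal(size=(horizon, dim))        # candidate action sequence
        s, score = state.copy(), 0.0
        for a in seq:
            score += novelty_score(s, a, behavior_data)
            s = dynamics(s, a)
        if score > best_score:
            best_score, best_first_action = score, seq[0]
    return best_first_action                         # executed, then replan (MPC-style)

offline_sa = rng.normal(size=(500, 4))               # (state, action) pairs from the behavior policy
print(plan(np.zeros(2), offline_sa))
```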
We propose a new distributed-computing model, inspired by permissionless distributed systems such as Bitcoin and Ethereum, that allows studying permissionless consensus in a mathematically regular setting. As in the sleepy model of Pass and Shi, we consider a synchronous, round-by-round message-passing system in which the set of online processors changes each round. Unlike in the sleepy model, the set of processors may be infinite. Moreover, processors never fail; instead, an adversary can temporarily or permanently impersonate some processors. Finally, processors have access to a strong form of message authentication that authenticates not only the sender of a message but also the round in which the message was sent. Assuming that, each round, the adversary impersonates fewer than 1/2 of the online processors, we present two consensus algorithms. The first ensures deterministic safety and constant expected latency, assuming a probabilistic leader-election oracle. The second ensures deterministic safety and deterministic liveness, assuming irrevocable impersonation and eventually-stabilizing participation. The model is unrealistic in full generality; however, if we assume finitely many processors and that the set of faulty processors remains constant, the model coincides with a practically motivated model: the static version of the sleepy model.
The resolution of near-field beamforming is an important metric that measures how effectively users at different locations can be distinguished. This letter identifies the condition under which the resolution of near-field beamforming is imperfect. This imperfect resolution means that one user's near-field beam can still be useful to other users, which motivates the application of non-orthogonal multiple access (NOMA). Both analytical and simulation results are developed to demonstrate that near-field beams preconfigured for legacy users can indeed be used to effectively serve additional NOMA users, thereby improving overall connectivity and system throughput.
To answer the call for a new theoretical framework to simultaneously accommodate random user activity and heterogeneous delay traffic in Internet of Things (IoT) systems, in this paper we propose coding schemes and information-theoretic converse results for the transmission of heterogeneous delay traffic over interference networks with random user activity and random data arrivals. The heterogeneous traffic is composed of delay-tolerant traffic and delay-sensitive traffic, where only the former can benefit from transmitter and receiver cooperation since the latter is subject to stringent decoding delays. The total number of cooperation rounds at the transmitter and receiver sides is limited to $D$ rounds. Each transmitter is active with probability $\rho \in [0,1]$. We consider two different models for the arrival of the mixed-delay traffic: in Model~$1$, each active transmitter sends a delay-tolerant message, and with probability $\rho_f \in [0,1]$ also transmits an additional delay-sensitive message; in Model~$2$, each active transmitter sends either a delay-sensitive message with probability $\rho_f$ or a delay-tolerant message with probability $1-\rho_f$. We derive inner and outer bounds on the fundamental per-user multiplexing gain (MG) region of the symmetric Wyner network as well as inner bounds on the fundamental MG region of the hexagonal model. Our inner and outer bounds are generally very close and coincide in special cases. They also show that when both transmitters and receivers can cooperate, then under Model~$1$, transmitting delay-sensitive messages hardly causes any penalty on the sum per-user MG, and under Model~$2$, operating at large delay-sensitive per-user MGs incurs no penalty on the delay-tolerant per-user MG and thus increases the sum per-user MG.
Deploying large language models (LLMs) is challenging because they are memory-inefficient and compute-intensive for practical applications. In response, researchers train smaller task-specific models by either finetuning with human labels or distilling with LLM-generated labels. However, finetuning and distillation require large amounts of training data to achieve performance comparable to LLMs. We introduce Distilling step-by-step, a new mechanism that (a) trains smaller models that outperform LLMs and (b) does so while requiring less training data than finetuning or distillation. Our method extracts LLM rationales as additional supervision for small models within a multi-task training framework. We present three findings across 4 NLP benchmarks. First, compared to both finetuning and distillation, our mechanism achieves better performance with far fewer labeled/unlabeled training examples. Second, compared to LLMs, we achieve better performance using substantially smaller model sizes. Third, we reduce both the model size and the amount of data required to outperform LLMs; our 770M T5 model outperforms the 540B PaLM model using only 80% of the available data on a benchmark task.
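As a rough sketch of the multi-task training framework described above, the snippet below trains a single small model to predict the label and to generate the LLM-extracted rationale, distinguished by task prefixes, with the two losses mixed by a weight. The model size, prefixes, example text, and mixing weight are illustrative assumptions rather than the exact configuration of the work.

```python
import torch
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

def loss_for(prefix, source, target):
    """Seq2seq loss for one task, identified by its prefix."""
    enc = tokenizer(prefix + source, return_tensors="pt", truncation=True)
    lab = tokenizer(target, return_tensors="pt", truncation=True).input_ids
    return model(**enc, labels=lab).loss

source = "premise: A man plays guitar. hypothesis: The man is a musician."
label = "entailment"
rationale = "Playing guitar implies being a musician."   # produced by the LLM teacher

lam = 0.5                                                 # rationale-loss weight (assumed)
loss = (1 - lam) * loss_for("[label] ", source, label) \
     + lam * loss_for("[rationale] ", source, rationale)
loss.backward()                                           # one multi-task training step (optimizer omitted)
```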
Existing recommender systems extract user preferences by learning correlations in data, such as behavioral correlations in collaborative filtering, or feature-feature and feature-behavior correlations in click-through rate prediction. However, the real world is driven by causality rather than correlation, and correlation does not imply causation. For example, a recommender system may recommend a battery charger to a user who has just bought a phone: the phone purchase is the cause of the charger purchase, and this causal relation cannot be reversed. Recently, to address this, researchers in recommender systems have begun to utilize causal inference to extract causality and thereby enhance recommendation. In this survey, we comprehensively review the literature on causal inference-based recommendation. We first present the fundamental concepts of both recommendation and causal inference as the basis for later content, and raise the typical issues that non-causal recommendation faces. Afterward, we comprehensively review existing work on causal inference-based recommendation, organized by a taxonomy of what kind of problem causal inference addresses. Last, we discuss open problems in this important research area, along with interesting directions for future work.
Autonomic computing investigates how systems can achieve (user-)specified control outcomes on their own, without the intervention of a human operator. Autonomic computing fundamentals have been substantially influenced by those of control theory for closed- and open-loop systems. In practice, complex systems may exhibit a number of concurrent and interdependent control loops. Despite research into autonomic models for managing computing resources, ranging from individual resources (e.g., web servers) to resource ensembles (e.g., multiple resources within a data center), integrating Artificial Intelligence (AI) and Machine Learning (ML) to improve resource autonomy and performance at scale remains a fundamental challenge. The integration of AI/ML to achieve such autonomic self-management of systems can occur at different levels of granularity, from full automation to human-in-the-loop automation. In this article, leading academics, researchers, practitioners, engineers, and scientists in the fields of cloud computing, AI/ML, and quantum computing discuss current research and potential future directions for these fields. Further, we discuss challenges and opportunities for leveraging AI and ML in next-generation computing for emerging computing paradigms, including cloud, fog, edge, serverless, and quantum computing environments.