In this paper, we study systems where each job or request can be split into a flexible number of sub-jobs up to a maximum limit. The number of sub-jobs a job is split into depends on the number of available servers found upon its arrival. All sub-jobs of a job are then processed in parallel at different servers, leading to a linear speed-up of the job. We refer to such jobs as {\em adaptive multi-server jobs}. We study the problem of optimally assigning such jobs when each server can process at most one sub-job at any given instant and there is no waiting room in the system. We assume that, upon arrival, a job can only access a randomly sampled subset of $k(n)$ servers out of $n$ servers in total, and that the number of sub-jobs is determined by the number of idle servers within the sampled subset. We analyze the steady-state performance of the system when the system load varies according to $\lambda(n) = 1 - \beta n^{-\alpha}$ for $\alpha \in [0,1)$ and $\beta \geq 0$. Our goal is to determine how large the subset size $k(n)$ must be to achieve zero blocking and maximum speed-up in the limit as $n \to \infty$. We first characterize the system's performance when jobs have access to the full system, i.e., $k(n)=n$. In this setting, we show that the blocking probability approaches zero at rate $O(1/\sqrt{n})$ and the mean response time of accepted jobs approaches its minimum achievable value at rate $O(1/n)$. We then consider the case where jobs only have access to a subset of the servers, i.e., $k(n) < n$. We show that, as long as $k(n)=\omega(n^\alpha)$, the same asymptotic performance can be achieved as with full system access. In particular, for $k(n)=\Theta(n^\alpha \log n)$, we show that both the blocking probability and the mean response time approach their desired limits at rate $O(n^{-(1-\alpha)/2})$.
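To make the model concrete, here is a minimal Monte Carlo sketch of one reading of this loss system: jobs bring unit-mean exponential work, sample $k(n) = \lceil n^\alpha \log n \rceil$ servers on arrival, split across the idle servers found (up to a cap), and are blocked if none are idle. The parameter names (`max_split`, `n_jobs`) and the exponential work distribution are illustrative assumptions, not the paper's exact model.

```python
import heapq, math, random

def simulate(n=200, alpha=0.5, beta=1.0, max_split=4, n_jobs=200_000, seed=1):
    """Sketch of an adaptive multi-server loss system (illustrative only):
    each job brings Exp(1) total work, samples k(n) = ceil(n^alpha * log n)
    servers, splits over the idle ones it finds (at most max_split sub-jobs,
    linear speed-up), and is blocked if no sampled server is idle."""
    rng = random.Random(seed)
    k = min(n, math.ceil(n ** alpha * math.log(n)))
    arrival_rate = (1.0 - beta * n ** (-alpha)) * n   # per-server load lambda(n)
    idle = set(range(n))
    departures = []                                    # heap of (finish_time, servers)
    t, blocked, resp_sum, accepted = 0.0, 0, 0.0, 0
    for _ in range(n_jobs):
        t += rng.expovariate(arrival_rate)             # next Poisson arrival
        while departures and departures[0][0] <= t:    # release finished jobs
            _, servers = heapq.heappop(departures)
            idle.update(servers)
        sampled_idle = [s for s in rng.sample(range(n), k) if s in idle]
        m = min(len(sampled_idle), max_split)
        if m == 0:                                     # no idle server sampled
            blocked += 1
            continue
        servers = sampled_idle[:m]
        idle.difference_update(servers)
        service = rng.expovariate(1.0) / m             # linear speed-up on m servers
        heapq.heappush(departures, (t + service, servers))
        resp_sum += service
        accepted += 1
    return blocked / n_jobs, resp_sum / max(accepted, 1)

if __name__ == "__main__":
    pb, mrt = simulate()
    print(f"blocking ~ {pb:.4f}, mean response of accepted ~ {mrt:.4f}")
```

Varying `n` in this sketch gives a rough empirical check of how blocking decays as the system scales.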
In this paper, we study the problem of computing the majority function by low-depth monotone circuits and the related problem of constructing low-depth sorting networks. We consider both the classical setting with elementary operations of arity $2$ and the generalized setting with operations of arity $k$, where $k$ is a parameter. For both problems and both settings, various constructions are known, the minimal known depth being logarithmic. However, there is currently no known construction that simultaneously achieves sub-log-squared depth, effective constructability, and simplicity, and that has the potential to be used in practice. In this paper, we make progress towards resolving this problem. For computing majority by standard monotone circuits (gates of arity 2), we provide an explicit monotone circuit of depth $O(\log_2^{5/3} n)$. The construction combines several known and not overly complicated ideas. For gates of arbitrary arity $k$, we provide a new sorting network architecture inspired by representing the inputs as a high-dimensional cube. As a result, we obtain a simple construction that improves the previous upper bound of $4 \log_k^2 n$ to $2 \log_k^2 n$. We prove a similar bound on the depth of circuits computing the majority of $n$ bits that consist of gates computing the majority of $k$ bits. Note that for both problems an explicit construction of depth $O(\log_k n)$ is known, but it is complicated and the constant hidden in the $O$-notation is huge.
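For orientation, the following Python sketch builds the classical Batcher bitonic sorting network, the standard depth-$\Theta(\log_2^2 n)$ baseline in this area. It is emphatically not the new architecture described above, only a runnable reference point; correctness on small inputs is checked via the zero-one principle.

```python
from itertools import product

def bitonic_layers(n):
    # Comparator layers of Batcher's bitonic sorter; n must be a power of 2.
    # Each layer is a list of (lo, hi) pairs meaning "ensure a[lo] <= a[hi]".
    layers = []
    k = 2
    while k <= n:
        j = k // 2
        while j >= 1:
            layer = []
            for i in range(n):
                partner = i ^ j
                if partner > i:
                    # comparator direction depends on the k-block of i
                    layer.append((i, partner) if (i & k) == 0
                                 else (partner, i))
            layers.append(layer)
            j //= 2
        k *= 2
    return layers

def apply_network(layers, values):
    a = list(values)
    for layer in layers:
        for lo, hi in layer:
            if a[lo] > a[hi]:
                a[lo], a[hi] = a[hi], a[lo]
    return a

n = 16
layers = bitonic_layers(n)
print(f"depth for n={n}: {len(layers)} layers")  # (log2 n)(log2 n + 1)/2 = 10
# zero-one principle: a comparator network sorts all inputs iff it sorts
# all 0/1 inputs, so exhaustively check the 2^16 binary vectors
assert all(apply_network(layers, bits) == sorted(bits)
           for bits in product((0, 1), repeat=n))
print("sorts all 0/1 inputs")
```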
In this paper, we evaluate the ability of large language models (LLMs) to perform multiple choice symbol binding (MCSB) for multiple choice question answering (MCQA) tasks in zero-shot, one-shot, and few-shot settings. We focus on Vietnamese, which has fewer challenging MCQA datasets than English. The two existing datasets, ViMMRC 1.0 and ViMMRC 2.0, focus on literature. Recent research in Vietnamese natural language processing (NLP) has used the Vietnamese National High School Graduation Examination (VNHSGE) from 2019 to 2023 to evaluate ChatGPT. However, these studies have mainly focused on how ChatGPT solves the VNHSGE step by step. We aim to create a novel, high-quality dataset by providing structured guidelines for typing LaTeX formulas for mathematics, physics, chemistry, and biology. Because it is typed in a strict LaTeX style, this dataset can be used to evaluate the MCSB ability of both LLMs and smaller language models (LMs). We focus on predicting the character (A, B, C, or D) that is the most likely answer to a question, given its context. Our evaluation of six well-known LLMs, namely BLOOMZ-7.1B-MT, LLaMA-2-7B, LLaMA-2-70B, GPT-3, GPT-3.5, and GPT-4.0, on the ViMMRC 1.0 and ViMMRC 2.0 benchmarks and our proposed dataset shows promising results on the MCSB ability of LLMs for Vietnamese. The dataset is available for research purposes only.
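The MCSB task itself is easy to state in code: score each answer letter and pick the most likely one. The sketch below assumes a caller-supplied `letter_logprobs` function standing in for whatever LLM API is used, and the prompt template is an illustrative choice, not the paper's exact format.

```python
# Sketch of multiple choice symbol binding (MCSB) scoring: the model sees
# the question with lettered options and we take the answer letter with
# the highest next-token log-probability.
LETTERS = ["A", "B", "C", "D"]

def build_prompt(question, options):
    # Illustrative template; the paper's actual prompt may differ.
    lines = [question] + [f"{l}. {o}" for l, o in zip(LETTERS, options)]
    return "\n".join(lines) + "\nAnswer:"

def predict(question, options, letter_logprobs):
    # letter_logprobs(prompt) -> {"A": lp, "B": lp, "C": lp, "D": lp},
    # a placeholder for the scoring call of whichever LLM is evaluated
    scores = letter_logprobs(build_prompt(question, options))
    return max(LETTERS, key=lambda l: scores[l])

def accuracy(dataset, letter_logprobs):
    # dataset: iterable of (question, options, gold_letter) triples
    hits = sum(predict(q, opts, letter_logprobs) == gold
               for q, opts, gold in dataset)
    return hits / len(dataset)
```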
In this paper, we study a sampling problem in which a source takes samples of a Wiener process and transmits them through a wireless channel to a remote estimator. Due to channel fading, interference, and potential collisions, packet transmissions are unreliable and can take random amounts of time. Our objective is to devise an optimal causal sampling policy that minimizes the long-term average mean square estimation error. This optimal sampling problem is a recursive optimal stopping problem, which is generally quite difficult to solve. However, we prove that the optimal sampling strategy is, in fact, a simple threshold policy in which a new sample is taken whenever the instantaneous estimation error exceeds a threshold, and this threshold is a constant that does not vary over time. By exploiting the structural properties of the recursive optimal stopping problem, we develop a low-complexity iterative algorithm to compute the optimal threshold. This work generalizes previous research by incorporating both transmission errors and random transmission times into remote estimation. Numerical simulations are provided to compare our optimal policy with the zero-wait and age-optimal policies.
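A discrete-time Monte Carlo sketch can illustrate the threshold policy: sample whenever the instantaneous error $|W_t - \hat{W}_t|$ exceeds a constant threshold, then wait out a random transmission time before the estimator updates. This is a simplified illustration with assumed parameters (`dt`, `mean_delay`), not the paper's iterative algorithm for computing the optimal threshold.

```python
import random

def avg_mse(threshold, n_steps=200_000, dt=0.01, mean_delay=0.5, seed=0):
    """Simplified simulation of threshold sampling for a Wiener process:
    the estimator holds the last delivered sample value (its MMSE estimate),
    and no new sample can be sent while one is in flight."""
    rng = random.Random(seed)
    w, est = 0.0, 0.0                       # true process and remote estimate
    in_flight, arrive_t, sample_val = False, 0.0, 0.0
    err2_sum = 0.0
    for step in range(n_steps):
        t = step * dt
        w += rng.gauss(0.0, dt ** 0.5)      # Wiener increment over dt
        if in_flight and t >= arrive_t:     # delivery completes
            est, in_flight = sample_val, False
        if not in_flight and abs(w - est) >= threshold:
            sample_val = w                  # take a new sample now
            arrive_t = t + rng.expovariate(1.0 / mean_delay)  # random delay
            in_flight = True
        err2_sum += (w - est) ** 2
    return err2_sum / n_steps

if __name__ == "__main__":
    for th in (0.25, 0.5, 1.0, 2.0):
        print(f"threshold {th:4.2f}: avg MSE ~ {avg_mse(th):.3f}")
```

Sweeping the threshold this way exhibits the trade-off the optimal constant threshold balances: sampling too eagerly wastes transmissions, sampling too lazily lets the error grow.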
Random sampling of graph signals is one of the fundamental topics in graph signal processing. In this letter, we consider the random sampling of $k$-bandlimited signals from local measurements and show that no more than $O(k \log k)$ measurements with replacement are sufficient for the accurate and stable recovery of any $k$-bandlimited graph signal. We propose two random sampling strategies based on this minimal number of measurements, namely optimal sampling and estimated sampling. The geodesic distance between vertices is introduced to design the sampling probability distribution. Numerical experiments are included to show the effectiveness of the proposed methods.
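The recovery step can be sketched as weighted least squares on the span of the first $k$ Laplacian eigenvectors. In the sketch below, a uniform sampling distribution stands in for the geodesic-distance-based design above, and the toy graph and the constant in $m = O(k \log k)$ are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 200, 5
# toy graph: a ring plus random chords (illustrative, not the paper's setup)
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
for _ in range(2 * n):
    i, j = rng.integers(0, n, 2)
    if i != j:
        A[i, j] = A[j, i] = 1.0
L = np.diag(A.sum(1)) - A
_, U = np.linalg.eigh(L)
Uk = U[:, :k]                                 # first k graph Fourier modes
x = Uk @ rng.normal(size=k)                   # a k-bandlimited signal

# m = O(k log k) samples with replacement; a uniform distribution here
# stands in for the geodesic-distance-based sampling distribution
m = int(np.ceil(4 * k * np.log(k + 1)))
p = np.full(n, 1.0 / n)
idx = rng.choice(n, size=m, p=p)
y = x[idx] + 0.01 * rng.normal(size=m)        # noisy local measurements

# weighted least squares on the span of the first k eigenvectors
w = 1.0 / np.sqrt(m * p[idx])
coef, *_ = np.linalg.lstsq(w[:, None] * Uk[idx], w * y, rcond=None)
print("relative error:", np.linalg.norm(Uk @ coef - x) / np.linalg.norm(x))
```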
In this work, we report on the effectiveness of our efforts to tailor the personality and conversational style of a conversational agent based on GPT-3.5 and GPT-4 through prompts. We use three personality dimensions with two levels each to create eight conversational agent archetypes. Ten conversations of ten exchanges each were collected per chatbot, generating 1600 exchanges across GPT-3.5 and GPT-4. Using Linguistic Inquiry and Word Count (LIWC) analysis, we compared the eight agents on language elements including clout, authenticity, and emotion. Four language cues were significantly distinguishing in GPT-3.5, while twelve were distinguishing in GPT-4. With thirteen of the nineteen LIWC cues appearing as significantly distinguishing overall, our results suggest that novel prompting approaches may be needed to better suit the creation and evaluation of persistent conversational agent personalities or language styles.
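Since the abstract does not name the three personality dimensions, the sketch below uses placeholder Big Five-style dimensions purely to show how eight archetype prompts can be generated mechanically from three binary dimensions.

```python
from itertools import product

# Placeholder dimensions and trait wordings; the study's actual
# dimensions and prompt phrasing are not specified in the abstract.
DIMENSIONS = {
    "extraversion": ("outgoing and talkative", "reserved and quiet"),
    "agreeableness": ("warm and cooperative", "blunt and challenging"),
    "openness": ("imaginative and curious", "practical and conventional"),
}

def archetype_prompts():
    # 2^3 = 8 archetypes, one per combination of dimension levels
    prompts = {}
    for levels in product((0, 1), repeat=len(DIMENSIONS)):
        traits = [DIMENSIONS[d][l] for d, l in zip(DIMENSIONS, levels)]
        name = "".join(str(l) for l in levels)
        prompts[name] = ("You are a conversational agent whose style is "
                         + ", ".join(traits) + ". Stay in character.")
    return prompts

for name, prompt in archetype_prompts().items():
    print(name, "->", prompt)
```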
Keeping pace with developments in the research field of artificial intelligence, knowledge graphs (KGs) have attracted a surge of interest from both academia and industry. As a representation of semantic relations between entities, KGs have proven to be particularly relevant for natural language processing (NLP) and have seen rapid spread and wide adoption in recent years. Given the increasing amount of research work in this area, several KG-related approaches have been surveyed in the NLP research community. However, a comprehensive study that categorizes established topics and reviews the maturity of individual research streams remains absent to this day. Contributing to closing this gap, we systematically analyzed 507 papers from the literature on KGs in NLP. Our survey encompasses a multifaceted review of tasks, research types, and contributions. As a result, we present a structured overview of the research landscape, provide a taxonomy of tasks, summarize our findings, and highlight directions for future work.
This work considers the question of how convenient access to copious data impacts our ability to learn causal effects and relations. In what ways is learning causality in the era of big data different from -- or the same as -- the traditional one? To answer this question, this survey provides a comprehensive and structured review of both traditional and frontier methods in learning causality and relations along with the connections between causality and machine learning. This work points out on a case-by-case basis how big data facilitates, complicates, or motivates each approach.
Over the past few years, we have seen fundamental breakthroughs in core problems in machine learning, largely driven by advances in deep neural networks. At the same time, the amount of data collected in a wide array of scientific domains is dramatically increasing in both size and complexity. Taken together, this suggests many exciting opportunities for deep learning applications in scientific settings. But a significant challenge is simply knowing where to start. The sheer breadth and diversity of deep learning techniques make it difficult to determine which scientific problems might be most amenable to these methods, or which specific combination of methods might offer the most promising first approach. In this survey, we focus on addressing this central issue, providing an overview of many widely used deep learning models, spanning visual, sequential, and graph-structured data, their associated tasks, and different training methods, along with techniques for using deep learning with less data and for better interpreting these complex models, two central considerations for many scientific use cases. We also include overviews of the full design process, implementation tips, and links to a plethora of tutorials, research summaries, and open-sourced deep learning pipelines and pretrained models developed by the community. We hope that this survey will help accelerate the use of deep learning across different scientific domains.
Machine learning techniques have become deeply rooted in our everyday lives. However, since achieving good learning performance is knowledge- and labor-intensive, human experts are heavily involved in every aspect of machine learning. To make machine learning techniques easier to apply and to reduce the demand for experienced human experts, automated machine learning (AutoML) has emerged as a hot topic of both industrial and academic interest. In this paper, we provide an up-to-date survey of AutoML. First, we introduce and define the AutoML problem, drawing inspiration from both the automation and machine learning realms. Then, we propose a general AutoML framework that not only covers most existing approaches to date but can also guide the design of new methods. Subsequently, we categorize and review the existing works from two aspects, i.e., the problem setup and the employed techniques. Finally, we provide a detailed analysis of AutoML approaches and explain the reasons underlying their successful applications. We hope this survey can serve not only as an insightful guideline for AutoML beginners but also as an inspiration for future research.
In this paper, we present a new method for detecting road users in an urban environment that leads to improved multiple object tracking. Our method takes a foreground image as input and improves object detection and segmentation. The resulting image can be used as input to trackers that rely on foreground blobs from background subtraction. The first step is to create foreground images for all frames of an urban video. Then, starting from the original blobs of the foreground image, we merge blobs that are close to one another and have similar optical flow. The next step is to extract the edges of the different objects, both to detect multiple objects that might be very close (and thus merged into the same blob) and to adjust the size of the original blobs. At the same time, we use the optical flow to detect occlusions between objects moving in opposite directions. Finally, we decide which information to keep in order to construct a new foreground image with blobs that can be used for tracking. The system is validated on four videos of an urban traffic dataset. Our method improves the recall and precision metrics for the object detection task compared to the vanilla background subtraction method, and improves the CLEAR MOT metrics in the tracking task for most videos.
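As an illustration of the merge step only, here is an OpenCV sketch that fuses foreground blobs that are close together and share similar mean optical flow. The thresholds and the greedy pairwise merging are illustrative choices, not the paper's exact procedure; `fg_mask` would come from, e.g., `cv2.createBackgroundSubtractorMOG2().apply(frame)`, and `flow` from `cv2.calcOpticalFlowFarneback`.

```python
import cv2
import numpy as np

def merge_blobs(fg_mask, flow, max_dist=20.0, max_flow_diff=1.5):
    """Merge nearby foreground blobs with similar optical flow.
    fg_mask: binary uint8 foreground mask; flow: HxWx2 dense flow field.
    Returns merged bounding boxes as (x, y, w, h). OpenCV 4.x API."""
    contours, _ = cv2.findContours(fg_mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    blobs = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        cx, cy = x + w / 2.0, y + h / 2.0
        mean_flow = flow[y:y + h, x:x + w].reshape(-1, 2).mean(axis=0)
        blobs.append([x, y, x + w, y + h, cx, cy, mean_flow])
    merged = True
    while merged:                      # greedily merge compatible pairs
        merged = False
        for i in range(len(blobs)):
            for j in range(i + 1, len(blobs)):
                bi, bj = blobs[i], blobs[j]
                close = np.hypot(bi[4] - bj[4], bi[5] - bj[5]) < max_dist
                similar = np.linalg.norm(bi[6] - bj[6]) < max_flow_diff
                if close and similar:  # fuse the two boxes into one
                    x0, y0 = min(bi[0], bj[0]), min(bi[1], bj[1])
                    x1, y1 = max(bi[2], bj[2]), max(bi[3], bj[3])
                    mf = (bi[6] + bj[6]) / 2.0
                    blobs[i] = [x0, y0, x1, y1,
                                (x0 + x1) / 2.0, (y0 + y1) / 2.0, mf]
                    del blobs[j]
                    merged = True
                    break
            if merged:
                break
    return [(b[0], b[1], b[2] - b[0], b[3] - b[1]) for b in blobs]
```

The blob-center distance check is a deliberately simple stand-in for whatever spatial proximity criterion the full pipeline uses before the edge-based splitting and occlusion handling described above.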