Edit distance is an important measure of string similarity. It counts the minimum number of insertions, deletions and substitutions one has to apply to a string $x$ to obtain a string $y$. In this paper we design an almost linear-size sketching scheme for computing edit distance up to a given threshold $k$. The scheme consists of two algorithms, a sketching algorithm and a recovery algorithm. The sketching algorithm depends on the parameter $k$ and takes as input a string $x$ and a public random string $\rho$, and computes a sketch $sk_{\rho}(x;k)$, which is a digested version of $x$. The recovery algorithm is given two sketches $sk_{\rho}(x;k)$ and $sk_{\rho}(y;k)$ as well as the public random string $\rho$ used to create the two sketches, and (with high probability) if the edit distance $ED(x,y)$ between $x$ and $y$ is at most $k$, it outputs $ED(x,y)$ together with an optimal sequence of edit operations that transforms $x$ into $y$, and if $ED(x,y) > k$ it outputs LARGE. The size of the sketch output by the sketching algorithm on input $x$ is $k \cdot 2^{O(\sqrt{\log(n)\log\log(n)})}$ (where $n$ is an upper bound on the length of $x$). The sketching and recovery algorithms both run in time polynomial in $n$. The dependence of the sketch size on $k$ is information-theoretically optimal and improves over the quadratic dependence on $k$ in the schemes of Kociumaka, Porat and Starikovskaya (FOCS 2021), and Bhattacharya and Kouck\'y (STOC 2023).
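For reference, the banded dynamic program below (a minimal Python sketch, not part of the sketching scheme itself) computes the quantity the recovery algorithm reports: $ED(x,y)$ when it is at most $k$, and LARGE otherwise. The function name and interface are illustrative assumptions.
\begin{verbatim}
def edit_distance_at_most_k(x: str, y: str, k: int):
    """Return ED(x, y) if it is <= k, otherwise the string 'LARGE'.

    Standard band-restricted dynamic program: any alignment of cost <= k
    stays within distance k of the main diagonal, so each row keeps O(k) cells.
    """
    n, m = len(x), len(y)
    if abs(n - m) > k:                 # length difference alone exceeds k
        return "LARGE"
    INF = k + 1                        # any value > k is treated as "too large"
    prev = {j: j for j in range(0, min(m, k) + 1)}   # row i = 0
    for i in range(1, n + 1):
        lo, hi = max(0, i - k), min(m, i + k)
        curr = {}
        for j in range(lo, hi + 1):
            if j == 0:
                curr[j] = i
                continue
            best = min(
                prev.get(j, INF) + 1,                           # delete x[i-1]
                curr.get(j - 1, INF) + 1,                       # insert y[j-1]
                prev.get(j - 1, INF) + (x[i - 1] != y[j - 1]),  # substitute/match
            )
            curr[j] = min(best, INF)
        prev = curr
    d = prev.get(m, INF)
    return d if d <= k else "LARGE"
\end{verbatim}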
A scoring system is a simple decision model that checks a set of features, adds a certain number of points to a total score for each feature that is satisfied, and finally makes a decision by comparing the total score to a threshold. Scoring systems have a long history of active use in safety-critical domains such as healthcare and justice, where they provide guidance for making objective and accurate decisions. Given their genuine interpretability, the idea of learning scoring systems from data is obviously appealing from the perspective of explainable AI. In this paper, we propose a practically motivated extension of scoring systems called probabilistic scoring lists (PSL), as well as a method for learning PSLs from data. Instead of making a deterministic decision, a PSL represents uncertainty in the form of probability distributions, or, more generally, probability intervals. Moreover, in the spirit of decision lists, a PSL evaluates features one by one and stops as soon as a decision can be made with enough confidence. To evaluate our approach, we conduct a case study in the medical domain.
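The following Python sketch illustrates the evaluation of a PSL as described above: features are checked one by one, points are added for satisfied features, and evaluation stops as soon as the probability interval attached to the running score is decisive. The data layout, the names, and the confidence threshold of the stopping rule are illustrative assumptions, not the paper's exact formulation.
\begin{verbatim}
def evaluate_psl(stages, instance, confidence=0.8):
    """Evaluate a probabilistic scoring list stage by stage.

    stages: list of (feature_name, points, interval_by_score), where
            interval_by_score maps each achievable running total to a
            probability interval (p_low, p_high) for the positive class.
    instance: dict mapping feature names to booleans.
    Assumes at least one stage and that every achievable total is covered.
    """
    total = 0
    for idx, (feature, points, interval_by_score) in enumerate(stages):
        if instance.get(feature, False):   # feature satisfied -> add its points
            total += points
        p_low, p_high = interval_by_score[total]
        if p_low >= confidence:            # confidently positive: stop early
            return True, (p_low, p_high), idx
        if p_high <= 1 - confidence:       # confidently negative: stop early
            return False, (p_low, p_high), idx
    # After the last stage, fall back to the interval midpoint
    # (an illustrative tie-breaking choice).
    return (p_low + p_high) / 2 >= 0.5, (p_low, p_high), len(stages) - 1
\end{verbatim}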
Large Language Models (LLMs) are used for many tasks, including those related to coding. An important aspect of being able to utilize LLMs is the ability to assess their fitness for specific usages. The common practice is to evaluate LLMs against a set of benchmarks. While benchmarks provide a sound foundation for evaluating and comparing alternatives, they suffer from the well-known weakness of leaking into the training data \cite{Xu2024Benchmarking}. We present a method for creating benchmark variations that generalize across coding tasks and programming languages, and may also be applied to in-house code bases. Our approach enables the ongoing generation of test data, thus mitigating the issue of leakage into the training data. We implement one benchmark, called \textit{auto-regression}, for the task of text-to-code generation in Python. Auto-regression is specifically created to aid in debugging and in tracking model generation changes as part of the LLM regression testing process.
Speech bandwidth expansion is crucial for widening the frequency range of low-bandwidth speech signals, thereby improving audio quality, clarity and perceptibility in digital applications. Its applications span telephony, compression, text-to-speech synthesis, and speech recognition. This paper presents a novel approach using a high-fidelity generative adversarial network; unlike cascaded systems, our system is trained end-to-end on paired narrowband and wideband speech signals. Our method integrates various bandwidth upsampling ratios into a single unified model specifically designed for speech bandwidth expansion applications. Our approach exhibits robust performance across various bandwidth expansion factors, including those not encountered during training, demonstrating zero-shot capability. To the best of our knowledge, this is the first work to showcase this capability. The experimental results demonstrate that our method outperforms previous end-to-end approaches, as well as interpolation and traditional techniques, showcasing its effectiveness in practical speech enhancement applications.
A new gradient-based particle sampling method, MPM-ParVI, based on the material point method (MPM), is proposed for variational inference. MPM-ParVI simulates the deformation of a deformable body (e.g. a solid or fluid) under external effects driven by the target density; the transient or steady configuration of the deformable body approximates the target density. The continuum material is modelled as an interacting particle system (IPS) using MPM; each particle carries full physical properties and interacts and evolves following conservation dynamics. This easy-to-implement ParVI method offers deterministic sampling and inference for a class of probabilistic models such as those encountered in Bayesian inference (e.g. intractable densities) and generative modelling (e.g. score-based models).
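For context, the snippet below shows what a gradient-based, interacting-particle variational-inference update looks like, using the well-known Stein variational gradient descent (SVGD) update as a stand-in. It is not the MPM conservation dynamics proposed here, whose details the abstract does not specify; it only illustrates the general ParVI pattern of particles driven jointly by the target's score and by pairwise interaction.
\begin{verbatim}
import numpy as np

def svgd_step(particles, grad_log_p, step=0.05, bandwidth=1.0):
    """One SVGD update (a standard gradient-based ParVI baseline, shown for
    illustration only; not the MPM-ParVI dynamics).

    particles:  (n, d) array of particle positions.
    grad_log_p: function mapping an (n, d) array to the (n, d) array of
                scores of the target density at each particle.
    """
    n, _ = particles.shape
    diffs = particles[:, None, :] - particles[None, :, :]    # x_j - x_i, shape (n, n, d)
    sq = np.sum(diffs ** 2, axis=-1)                         # squared distances
    k = np.exp(-sq / (2.0 * bandwidth ** 2))                 # RBF kernel matrix
    grad_k = -diffs * k[:, :, None] / bandwidth ** 2         # grad of k(x_j, x_i) w.r.t. x_j
    phi = (k.T @ grad_log_p(particles) + grad_k.sum(axis=0)) / n
    return particles + step * phi

# A full sampler would iterate svgd_step until the particle
# configuration stabilizes, approximating the target density.
\end{verbatim}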
We present a polynomial-time algorithm for online differentially private synthetic data generation. For a data stream within the hypercube $[0,1]^d$ and an infinite time horizon, we develop an online algorithm that generates a differentially private synthetic dataset at each time $t$. This algorithm achieves a near-optimal accuracy bound of $O(\log(t)t^{-1/d})$ for $d\geq 2$ and $O(\log^{4.5}(t)t^{-1})$ for $d=1$ in the 1-Wasserstein distance. This result extends previous work on the continual release model from counting queries to Lipschitz queries. Compared to the offline case, where the entire dataset is available at once, our approach requires only an extra polylog factor in the accuracy bound.
The discrepancy of a binary string is the maximum (absolute) difference between the number of ones and the number of zeroes over all possible substrings of the given binary string. In this note we determine the minimal discrepancy that a binary de Bruijn sequence of order $n$ can achieve, which is $n$. This was an open problem until now. We give an algorithm that constructs a binary de Bruijn sequence with minimal discrepancy. A slight modification of this algorithm deals with arbitrary alphabets and yields de Bruijn sequences of order $n$ with discrepancy at most $1$ above the trivial lower bound $n$.
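To make the definition concrete, the short Python sketch below computes the discrepancy of a (linear) binary string via prefix sums; for de Bruijn sequences one would typically consider the cyclic string, which this sketch does not handle.
\begin{verbatim}
def discrepancy(s: str) -> int:
    """Discrepancy of a binary string: max |#ones - #zeroes| over all substrings.

    With prefix sums p[i] = (#ones - #zeroes) among the first i characters,
    every substring corresponds to a difference p[j] - p[i], so the
    discrepancy equals max(p) - min(p) (including the empty prefix p[0] = 0).
    """
    p = 0
    lo = hi = 0
    for c in s:
        p += 1 if c == '1' else -1
        lo = min(lo, p)
        hi = max(hi, p)
    return hi - lo

# Example: discrepancy("0011") == 2, witnessed by the substring "00".
\end{verbatim}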
Today's scientific simulations, for example in the high-performance exascale sector, produce huge amounts of data. Due to limited I/O bandwidth and available storage space, it is necessary to reduce the scientific data produced by high-performance computing applications. Error-bounded lossy compression has proven to be an effective approach to tackling the trade-off between accuracy and storage space. Within this work, we explore and discuss error-bounded lossy compression based solely on adaptive mesh refinement techniques. This compression technique is not only easily integrated into existing adaptive mesh refinement applications but also serves as a general lossy compression approach for arbitrary data in the form of multi-dimensional arrays, irrespective of the data type. Moreover, these techniques permit the exclusion of regions of interest and even allow for nested error domains during the compression. The described data compression technique is demonstrated on ERA5 data as an example.
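As a rough illustration of the underlying idea, the Python sketch below performs error-bounded compression of a one-dimensional array by adaptive coarsening: a block is replaced by a single representative value whenever every entry stays within the absolute error bound of that value, and is split otherwise. The names, the 1D setting, and the choice of the block mean as representative are simplifying assumptions; actual AMR-based compressors operate on the mesh elements of the simulation.
\begin{verbatim}
def coarsen(values, abs_error, lo=None, hi=None, out=None):
    """Recursively replace a block values[lo:hi] by its mean if all entries
    stay within abs_error of that mean; otherwise split the block in two.
    Assumes a non-empty input array. Returns a list of (start, end, value)
    coarse elements covering the whole array."""
    if out is None:
        lo, hi, out = 0, len(values), []
    block = values[lo:hi]
    mean = sum(block) / len(block)
    if len(block) == 1 or max(abs(v - mean) for v in block) <= abs_error:
        out.append((lo, hi, mean))       # one coarse element for the block
    else:
        mid = (lo + hi) // 2
        coarsen(values, abs_error, lo, mid, out)
        coarsen(values, abs_error, mid, hi, out)
    return out

# Example: compressed = coarsen(data, abs_error=0.01). Decompression writes
# each stored value back onto its index range, so the pointwise error is
# bounded by abs_error.
\end{verbatim}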
We analyze the stability of (strong) laws of large numbers in Hadamard spaces with respect to distributional perturbations. For the inductive means of a sequence of independent, but not necessarily identically distributed random variables, we provide a concentration inequality in quadratic mean, as well as a strong law of large numbers, generalizing a classical result of K.-T. Sturm. For the Fr\'echet mean, we generalize H. Ziezold's law of large numbers in Hadamard spaces. In this case, we require our data to be neither independent nor identically distributed; reasonably mild conditions on the first two moments of our sample are enough. Additionally, we look at data contamination via a model inspired by Huber's $\varepsilon$-contamination model, in which we replace a random portion of the data with noise. In the most general setup, we require neither the data nor the noise to be i.i.d., nor do we require the noise to be independent of the data. To analyze the stability of the (non-symmetric) inductive mean with respect to data loss, data permutation, and noise, a resampling scheme is introduced, and sufficient conditions for its convergence are provided. These results suggest that means in Hadamard spaces are as robust as in Euclidean spaces. This is underlined by a small simulation study in which we compare the robustness of means on the manifold of positive definite matrices with means on open books.
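As a reminder of the two notions of mean involved (standard definitions, stated here in their empirical form for points $x_1,\dots,x_n$ in a Hadamard space $(M,d)$), the inductive mean is built recursively along geodesics, while the Fr\'echet mean minimizes the sum of squared distances:
\[
  s_1 = x_1, \qquad s_n = \gamma_{s_{n-1} \to x_n}\!\left(\tfrac{1}{n}\right) \ (n \ge 2), \qquad\text{and}\qquad b_n \in \operatorname*{arg\,min}_{m \in M} \sum_{i=1}^{n} d(m, x_i)^2,
\]
where $\gamma_{p \to q}(t)$ denotes the point at parameter $t \in [0,1]$ on the geodesic from $p$ to $q$. In particular, the inductive mean depends on the order in which the data arrive, which is why its stability under data permutation is a nontrivial question.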
Interactive Natural Language Processing (iNLP) has emerged as a novel paradigm within the field of NLP, aimed at addressing limitations in existing frameworks while aligning with the ultimate goals of artificial intelligence. This paradigm considers language models as agents capable of observing, acting, and receiving feedback iteratively from external entities. Specifically, language models in this context can: (1) interact with humans for better understanding and addressing user needs, personalizing responses, aligning with human values, and improving the overall user experience; (2) interact with knowledge bases for enriching language representations with factual knowledge, enhancing the contextual relevance of responses, and dynamically leveraging external information to generate more accurate and informed responses; (3) interact with models and tools for effectively decomposing and addressing complex tasks, leveraging specialized expertise for specific subtasks, and fostering the simulation of social behaviors; and (4) interact with environments for learning grounded representations of language, and effectively tackling embodied tasks such as reasoning, planning, and decision-making in response to environmental observations. This paper offers a comprehensive survey of iNLP, starting by proposing a unified definition and framework of the concept. We then provide a systematic classification of iNLP, dissecting its various components, including interactive objects, interaction interfaces, and interaction methods. We proceed to delve into the evaluation methodologies used in the field, explore its diverse applications, scrutinize its ethical and safety issues, and discuss prospective research directions. This survey serves as an entry point for researchers who are interested in this rapidly evolving area and offers a broad view of the current landscape and future trajectory of iNLP.
We investigate a lattice-structured LSTM model for Chinese NER, which encodes a sequence of input characters as well as all potential words that match a lexicon. Compared with character-based methods, our model explicitly leverages word and word sequence information. Compared with word-based methods, lattice LSTM does not suffer from segmentation errors. Gated recurrent cells allow our model to choose the most relevant characters and words from a sentence for better NER results. Experiments on various datasets show that lattice LSTM outperforms both word-based and character-based LSTM baselines, achieving the best results.
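The Python sketch below illustrates only the lattice construction step assumed by such a model: enumerating every lexicon word that matches a span of the input character sequence. The lexicon, the brute-force matching, and the minimum word length of two characters are simplifying assumptions for illustration; the model itself then feeds each matched word into a gated word cell ending at the span's last character.
\begin{verbatim}
def match_lattice_words(chars, lexicon, max_len=8):
    """Return a list of (start, end, word) spans (end exclusive) whose
    characters form a word in the lexicon. Single characters are left to
    the character-level LSTM, so only spans of length >= 2 are considered."""
    words = set(lexicon)
    spans = []
    for start in range(len(chars)):
        for end in range(start + 2, min(len(chars), start + max_len) + 1):
            candidate = "".join(chars[start:end])
            if candidate in words:
                spans.append((start, end, candidate))
    return spans

# Example:
# match_lattice_words(list("南京市长江大桥"),
#                     ["南京", "南京市", "市长", "长江", "长江大桥", "大桥"])
# returns a span for every lexicon word occurring in the sentence.
\end{verbatim}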