We study the problem of enumerating results from a query over a compressed document. The compression model we use is that of straight-line programs (SLPs), which are context-free grammars that produce a single string. For our queries, we use a model called Annotated Automata, an extension of regular automata that allows annotations on letters. This model extends the notion of Regular Spanners, as it allows arbitrarily long outputs. Our main result is an algorithm that evaluates such a query by enumerating all results with output-linear delay after a preprocessing phase that takes linear time in the size of the SLP and cubic time in the size of the automaton. This improves on the result of Schmid and Schweikardt, which, with the same preprocessing time, enumerates with a delay that is logarithmic in the size of the uncompressed document. We achieve this through a persistent data structure named Enumerable Compact Sets with Shifts, which guarantees output-linear delay under certain restrictions. These results imply constant-delay enumeration algorithms in the context of regular spanners. Further, we use an extension of annotated automata that employs succinctly encoded annotations to shave an exponential factor off previous results on constant-delay enumeration over vset automata. Lastly, we extend our results, in the same fashion as Schmid and Schweikardt, to allow complex document editing while maintaining the constant-delay guarantee.
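To make the compression model concrete, here is a minimal Python sketch of an SLP and its expansion; the grammar, symbol names, and the recursive expansion are purely illustrative and have nothing to do with the enumeration algorithm itself.

```python
# Minimal sketch of a straight-line program (SLP): a context-free grammar in
# which every nonterminal has exactly one rule and derives exactly one string.
# Representation and names are illustrative, not the paper's implementation.

def expand(slp, symbol):
    """Expand a symbol of the SLP into the (possibly huge) string it derives."""
    rule = slp.get(symbol)
    if rule is None:          # terminal symbol: a single letter
        return symbol
    return "".join(expand(slp, s) for s in rule)

# S -> AB, A -> aB, B -> bb : three rules deriving the string "abbbb".
slp = {"S": ["A", "B"], "A": ["a", "B"], "B": ["b", "b"]}
print(expand(slp, "S"))       # abbbb
# Chains of doubling rules of depth n derive strings of length 2**n, which is
# why running in time linear in the SLP, rather than in the expanded document,
# matters.
```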
We construct a regime-switching multivariate time series model that is closed under margins. The model requires all lower-dimensional sub-processes to follow a regime-switching process that shares the same latent regime sequence and has the same Markov order as the original process. The margin-closed regime-switching model is built by using the dependence of a margin-closed multivariate Gaussian VAR($k$) process as a copula within each regime, and it builds dependence between observations in different regimes by requiring the first observation in the new regime to depend on the last observation in the previous regime. Closure under margins allows inference on the latent regimes based on selected lower-dimensional sub-processes and estimation of univariate parameters from univariate sub-processes, and it enables a multi-stage estimation procedure for the model. The parsimonious dependence structure of the model also avoids a large number of parameters in the regime-switching setting. The proposed model is applied to a macroeconomic data set to infer the latent business cycle and is compared with a relevant benchmark.
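As a rough illustration of the kind of process involved (not the paper's copula-based, margin-closed construction), the following Python sketch simulates a two-regime bivariate VAR(1) whose regimes follow a Markov chain; all parameter values are invented for the example.

```python
# Illustrative simulation of a two-regime, bivariate regime-switching VAR(1);
# the paper's model uses the Gaussian VAR(k) dependence as a copula with
# arbitrary margins -- Gaussian margins are kept here only for brevity.
import numpy as np

rng = np.random.default_rng(0)
P = np.array([[0.95, 0.05],       # Markov transition matrix for the regimes
              [0.10, 0.90]])
Phi = [np.array([[0.5, 0.1], [0.0, 0.4]]),   # VAR(1) coefficients per regime
       np.array([[0.2, 0.0], [0.3, 0.6]])]
Sigma = [np.eye(2) * 0.5, np.eye(2) * 1.5]   # innovation covariance per regime

T, d = 500, 2
z = np.zeros(T, dtype=int)
x = np.zeros((T, d))
for t in range(1, T):
    z[t] = rng.choice(2, p=P[z[t - 1]])
    # As in the paper, the first observation in a new regime still depends on
    # the last observation of the previous regime.
    x[t] = Phi[z[t]] @ x[t - 1] + rng.multivariate_normal(np.zeros(d), Sigma[z[t]])
```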
There are two primary approaches to cross-lingual transfer: multilingual pre-training, which implicitly aligns the hidden representations of different languages, and translate-test, which explicitly translates the different languages into an intermediate language such as English. Translate-test offers better interpretability than multilingual pre-training, but it has lower performance (Conneau and Lample, 2019; Conneau et al., 2020) and struggles with word-level tasks because translation alters word order. We therefore propose a new Machine-created Universal Language (MUL) as an alternative intermediate language. MUL comprises a set of discrete symbols forming a universal vocabulary, together with a translator that converts multiple natural languages into MUL. MUL unifies shared concepts from various languages into a single universal word, enhancing cross-lingual transfer. Additionally, MUL retains language-specific words and word order, allowing the model to be applied directly to word-level tasks. Our experiments demonstrate that translating into MUL yields better performance than multilingual pre-training, and our analysis indicates that MUL possesses strong interpretability. The code is at: //github.com/microsoft/Unicoder/tree/master/MCUL.
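As a purely hypothetical illustration of the idea, the sketch below maps shared concepts from two languages to invented universal symbols while passing language-specific tokens through unchanged; the real MUL translator is a learned model, and its vocabulary and symbols are not shown here.

```python
# Hypothetical toy illustration of translating natural language into a
# machine-created universal language (MUL): shared concepts map to universal
# symbols, while language-specific words and word order are preserved.
concept_table = {
    "cat": "<MUL:0042>", "chat": "<MUL:0042>",     # English/French "cat"
    "sleeps": "<MUL:0107>", "dort": "<MUL:0107>",  # English/French "sleeps"
}

def to_mul(tokens):
    # Unknown (language-specific) tokens pass through unchanged, which keeps
    # word-level tasks such as tagging aligned with the original sentence.
    return [concept_table.get(tok.lower(), tok) for tok in tokens]

print(to_mul(["The", "cat", "sleeps"]))   # ['The', '<MUL:0042>', '<MUL:0107>']
print(to_mul(["Le", "chat", "dort"]))     # ['Le', '<MUL:0042>', '<MUL:0107>']
```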
We present ParrotTTS, a modularized text-to-speech synthesis model leveraging disentangled self-supervised speech representations. A multi-speaker variant of ParrotTTS can be trained effectively using transcripts from a single speaker. ParrotTTS adapts to a new language in a low-resource setup and generalizes to languages not seen while training the self-supervised backbone. Moreover, without training on bilingual or parallel examples, ParrotTTS can transfer voices across languages while preserving speaker-specific characteristics, e.g., synthesizing fluent Hindi speech using a French speaker's voice and accent. We present extensive results in monolingual and multilingual scenarios. ParrotTTS outperforms state-of-the-art multilingual TTS models while using only a fraction of the paired data required by the latter.
In current applied research, the most-used route to an analysis of composition is through log-ratios -- that is, contrasts among log-transformed measurements. Here we argue instead for a more direct approach, using a statistical model for the arithmetic mean on the original scale of measurement. Central to the approach is a general variance-covariance function, derived by assuming multiplicative measurement error. Quasi-likelihood analysis of logit models for composition is then a general alternative to the use of multivariate linear models for log-ratio-transformed measurements, and it has important advantages. These include robustness to secondary aspects of model specification, stability when there are zero-valued or near-zero measurements in the data, and more direct interpretation. The usual efficiency property of quasi-likelihood estimation applies even when the error covariance matrix is unspecified. We also indicate how the derived variance-covariance function can be used, instead of the variance-covariance matrix of log-ratios, with more general multivariate methods for the analysis of composition. A specific feature is that the notion of `null correlation' -- for compositional measurements on their original scale -- emerges naturally.
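For context, a quasi-likelihood analysis of this kind solves the standard estimating equation for the mean parameters $\beta$ of the logit model, with some working variance-covariance function $V(\mu)$; the specific function derived in the paper from multiplicative measurement error is not reproduced here:
$$\sum_{i=1}^{n} D_i^{\top}\, V(\mu_i)^{-1}\,\{\,y_i - \mu_i(\beta)\,\} \;=\; 0, \qquad D_i = \partial \mu_i / \partial \beta^{\top}.$$
Fisher scoring on this equation gives the estimates, and the usual quasi-likelihood efficiency argument rests on correct specification of the mean model even when $V$ is only a working covariance.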
In recent years, advancements in large language models have been remarkable, with models such as ChatGPT demonstrating exceptional proficiency in diverse linguistic tasks. The pre-training of large models with billions of parameters poses a formidable challenge, primarily due to the scarcity of datasets of a commensurate scale for effective training. Nevertheless, innovative strategies have emerged, including methods to fine-tune these pre-trained models using a smaller set of parameters, as evidenced by models like MiniGPT-4 and LLaVA. Despite their potential in various domains, these models remain limited in their understanding of artistic imagery. They have yet to fully grasp the intricate nuances of art images or to provide an objective articulation of the emotions they evoke, in a manner akin to human perception. This work introduces ArtGPT-4, a pioneering large vision-language model tailored to address the deficiencies of contemporary models in artistic comprehension. ArtGPT-4 was trained on image-text pairs using a Tesla A100 device in a mere 2 hours, with a dataset comprising approximately 0.52M entries. Impressively, the model can interpret images with artistic understanding and convey the emotions they inspire, mirroring human interpretation. Additionally, this work presents a unique dataset designed to evaluate the efficacy of vision-language models. In subsequent evaluations, ArtGPT-4 not only achieved state-of-the-art performance on the ArtEmis and ArtEmis-v2.0 datasets but also exceeded the benchmarks introduced in this study, lagging behind professional artists' descriptions by a negligible 0.15 points on a 6-point scale. The code and the pre-trained model are available at //huggingface.co/Tyrannosaurus/ArtGPT-4.
This work considers the optimization of electrode positions in head imaging by electrical impedance tomography. The study is motivated by maximizing the sensitivity of electrode measurements to conductivity changes when monitoring the condition of a stroke patient, which justifies adopting a linearized version of the complete electrode model as the forward model. The algorithm is based on finding a (locally) A-optimal measurement configuration via gradient descent with respect to the electrode positions. The efficient computation of the needed derivatives of the complete electrode model is one of the focal points. Two algorithms are introduced and numerically tested on a three-layer head model. The first one assumes a region of interest and a Gaussian prior for the conductivity in the brain, and it can be run offline, i.e., prior to taking any measurements. The second algorithm first computes a reconstruction of the conductivity anomaly caused by the stroke with an initial electrode configuration by combining lagged diffusivity iteration with sequential linearizations, which can be interpreted as producing an approximate Gaussian probability density for the conductivity perturbation. It then resorts to the first algorithm to find new, more informative positions for the available electrodes, with the constructed density as the prior.
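The following Python sketch illustrates the general recipe of A-optimal design by gradient descent for a linearized Gaussian model; the toy Jacobian, identity prior, noise level, and step size are placeholders and do not implement the complete electrode model or the paper's derivative computations.

```python
# Illustrative A-optimal design by gradient descent over measurement positions
# for a linearized Gaussian model (toy stand-in for the electrode problem).
import numpy as np

def jacobian(e, grid):
    # Toy sensitivity of measurements at positions e to parameters on a grid.
    return np.exp(-(e[:, None] - grid[None, :]) ** 2)

def a_objective(e, grid, noise_var=1e-2):
    J = jacobian(e, grid)
    post_prec = J.T @ J / noise_var + np.eye(grid.size)  # posterior precision
    return np.trace(np.linalg.inv(post_prec))             # A-optimality criterion

grid = np.linspace(0.0, 1.0, 20)   # parameter locations (region of interest)
e = np.array([0.2, 0.5, 0.8])      # initial electrode positions
eps, step = 1e-4, 0.1
for _ in range(200):               # finite-difference gradient descent
    g = np.zeros_like(e)
    for i in range(e.size):
        d = np.zeros_like(e)
        d[i] = eps
        g[i] = (a_objective(e + d, grid) - a_objective(e - d, grid)) / (2 * eps)
    e -= step * g
print(e)                           # locally A-optimal positions for the toy model
```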
Advances in survival analysis have facilitated unprecedented flexibility in data modeling, yet there remains a lack of tools for graphically illustrating the influence of continuous covariates on predicted survival outcomes. We propose using a colored contour plot to depict predicted survival probabilities over time, and we provide a Shiny app and an R package as implementations of this tool. Our approach supports conventional models, including the Cox and Fine-Gray models, but it truly shines when coupled with cutting-edge machine learning models such as random survival forests and deep neural networks.
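A minimal sketch of such a plot, assuming the lifelines and matplotlib Python packages and the Rossi recidivism dataset bundled with lifelines (this is not the authors' Shiny app or R package):

```python
# Colored contour of Cox-model survival probability over follow-up time and one
# continuous covariate (age), with the remaining covariates held at their medians.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from lifelines import CoxPHFitter
from lifelines.datasets import load_rossi

rossi = load_rossi()
cph = CoxPHFitter().fit(rossi, duration_col="week", event_col="arrest")

ages = np.arange(18, 45)
profile = rossi.drop(columns=["week", "arrest"]).median()   # reference profile
grid = pd.DataFrame([profile] * len(ages))
grid["age"] = ages
times = np.arange(1, 53)
surv = cph.predict_survival_function(grid, times=times)     # rows: times, cols: profiles

T, A = np.meshgrid(times, ages)
plt.contourf(T, A, surv.T.values, levels=20, cmap="viridis")
plt.colorbar(label="Predicted survival probability")
plt.xlabel("Weeks since release")
plt.ylabel("Age")
plt.show()
```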
The advent of Large Language Models (LLMs) has ushered in a new era for design science in Information Systems, demanding a paradigm shift in tailoring LLM design for business contexts. This paper proposes a novel framework to customize LLMs for general business contexts that aims to achieve three fundamental objectives simultaneously: (1) aligning conversational patterns, (2) integrating in-depth domain knowledge, and (3) embodying soft skills and core principles. We design methodologies that combine domain-specific theory with Supervised Fine-Tuning (SFT) in LLMs. We instantiate the proposed framework in the context of medical consultation, creating a GPT-doctor model. Specifically, we construct a comprehensive dataset for SFT by collecting a large volume of real doctors' consultation records from a leading online medical consultation platform and medical knowledge from professional databases. Additionally, drawing on medical theory, we identify three soft skills and core principles of human doctors, namely professionalism, explainability, and emotional support, and design approaches to integrate these skills into LLMs. We demonstrate the feasibility and performance of the proposed framework using online experiments with real patients as well as evaluations by domain experts and real consumers. Results show that the fine-tuned GPT-doctor performs on par with human doctors across multiple metrics, including medical expertise and consumer preference. Finally, we unravel the black box and examine the sources of model performance improvement from the perspectives of horizontal conversation-pattern alignment and vertical medical knowledge evolution. Our proposed framework offers step-by-step principles and guidance for customizing LLMs for real-world business problems.
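As a hypothetical sketch of the data-preparation step only, the snippet below turns a consultation record into a prompt-response pair whose instruction encodes the targeted soft skills; the field names, prompt template, and example text are invented for illustration and are not the authors' schema or data.

```python
# Hypothetical formatting of consultation records into SFT examples.
records = [
    {"patient": "I have had a dry cough for two weeks.",
     "doctor": "A persistent dry cough can have several causes. Could you tell "
               "me whether you also have fever or shortness of breath? I know "
               "this is worrying; we will work through it together."},
]

def to_sft_example(rec):
    # The system instruction encodes professionalism, explainability, and
    # emotional support; the response is the real doctor's reply.
    prompt = ("You are a professional, empathetic doctor. Explain your reasoning "
              "and ask clarifying questions when needed.\n"
              f"Patient: {rec['patient']}\nDoctor:")
    return {"prompt": prompt, "response": " " + rec["doctor"]}

sft_dataset = [to_sft_example(r) for r in records]
```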
This paper presents an approach for efficiently approximating the inverse of the Fisher information, a key component in variational Bayes inference. A notable aspect of our approach is that it avoids analytically computing the Fisher information matrix and explicitly inverting it. Instead, we introduce an iterative procedure that generates a sequence of matrices converging to the inverse of the Fisher information. The resulting natural gradient variational Bayes algorithm, which requires no matrix inversion, is provably convergent and achieves a convergence rate of order O(log s / s), where s is the number of iterations. We also obtain a central limit theorem for the iterates. Our algorithm is versatile and applicable across a diverse array of variational Bayes settings, including Gaussian approximation and normalizing-flow variational Bayes. We offer a range of numerical examples to demonstrate the efficiency and reliability of the proposed variational Bayes method.
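The classical Newton-Schulz iteration below illustrates, in Python, how an inverse can be approximated without any explicit matrix inversion; it is shown only to convey the idea, since the paper's procedure is a stochastic recursion for the inverse Fisher information rather than this deterministic scheme.

```python
# Newton-Schulz iteration: approximate A^{-1} using only matrix products.
import numpy as np

def newton_schulz_inverse(A, iters=30):
    # For symmetric positive definite A, X0 = I / trace(A) guarantees convergence.
    n = A.shape[0]
    X = np.eye(n) / np.trace(A)
    for _ in range(iters):
        X = X @ (2.0 * np.eye(n) - A @ X)
    return X

rng = np.random.default_rng(1)
B = rng.standard_normal((5, 5))
F = B @ B.T + 5.0 * np.eye(5)             # stand-in for a Fisher information matrix
X = newton_schulz_inverse(F)
print(np.max(np.abs(X @ F - np.eye(5))))  # residual close to zero
```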
We study computational aspects of repulsive Gibbs point processes, which are probabilistic models of interacting particles in a finite-volume region of space. We introduce an approach for reducing a Gibbs point process to the hard-core model, a well-studied discrete spin system. Given an instance of such a point process, our reduction generates a random graph drawn from a natural geometric model. We show that the partition function of the hard-core model on graphs generated by this geometric model concentrates around the partition function of the Gibbs point process. Our reduction allows us to use a broad range of algorithms developed for the hard-core model to sample from the Gibbs point process and to approximate its partition function. To the best of our knowledge, this is the first such approach that handles pair potentials of unbounded range. We compare the resulting algorithms with recently established results and study further properties of the random geometric graphs with respect to the hard-core model.
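The toy Python sketch below shows the two objects the reduction connects, a random geometric graph built from sampled points and the hard-core partition function of that graph; the uniform point sampling, the fixed interaction radius, and the brute-force computation are illustrative simplifications, not the paper's construction.

```python
# Build a small random geometric graph and compute its hard-core partition
# function (independence polynomial) by brute force; feasible for small n only.
import itertools
import numpy as np

rng = np.random.default_rng(2)
n, r, lam = 10, 0.35, 1.0
pts = rng.random((n, 2))                                   # points in [0,1]^2
edges = {(i, j) for i in range(n) for j in range(i + 1, n)
         if np.linalg.norm(pts[i] - pts[j]) < r}           # hard-core conflicts

def hard_core_partition(n, edges, lam):
    # Z = sum over independent sets S of lam ** |S|.
    Z = 0.0
    for k in range(n + 1):
        for S in itertools.combinations(range(n), k):
            Sset = set(S)
            if all(not (i in Sset and j in Sset) for (i, j) in edges):
                Z += lam ** k
    return Z

print(hard_core_partition(n, edges, lam))
```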