Large language models (LLMs) have garnered significant attention, but the definition of "large" lacks clarity. This paper focuses on medium-sized language models (MLMs), defined as having at least six billion parameters but fewer than 100 billion. The study evaluates MLMs on zero-shot generative question answering, which requires models to provide elaborate answers without external document retrieval. The paper introduces a new test dataset and presents results from a human evaluation. Results show that combining the best answers from different MLMs yielded an overall correct-answer rate of 82.7%, which is better than ChatGPT's 60.9%. The best MLM achieved 71.8% with 33B parameters, which highlights the importance of using appropriate training data for fine-tuning rather than relying solely on the number of parameters. More fine-grained feedback should be used to further improve the quality of answers. The open-source community is quickly closing the gap to the best commercial models.
Benefiting from the development of deep learning, text-to-speech (TTS) techniques trained on clean speech have achieved significant performance improvements. Speech collected in real scenes, however, often contains noise and generally needs to be denoised by speech enhancement models. Noise-robust TTS models are therefore often trained on the enhanced speech, which still suffers from speech distortion and residual background noise that degrade the quality of the synthesized speech. Meanwhile, self-supervised pre-trained models have been shown to exhibit excellent noise robustness on many speech tasks, implying that their learned representations are more tolerant of noise perturbations. In this work, we therefore explore pre-trained models to improve the noise robustness of TTS models. Based on HiFi-GAN, we first propose a representation-to-waveform vocoder, which learns to map pre-trained model representations to waveforms. We then propose a text-to-representation FastSpeech2 model, which learns to map text to pre-trained model representations. Experimental results on the LJSpeech and LibriTTS datasets show that our method outperforms those using speech enhancement methods on both subjective and objective metrics. Audio samples are available at: //zqs01.github.io/rep2wav/.
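As a rough illustration of the two-stage design described above, a minimal sketch with hypothetical module names standing in for the FastSpeech2-based text-to-representation model and the HiFi-GAN-based representation-to-waveform vocoder (not the authors' implementation):

```python
import torch

def synthesize(text_ids, text2rep, rep2wav):
    """Sketch of the proposed pipeline with hypothetical components:
    text2rep : FastSpeech2-style model mapping text/phoneme IDs to
               pre-trained-model representations, shape (1, frames, rep_dim).
    rep2wav  : HiFi-GAN-style vocoder mapping those representations to audio.
    """
    with torch.no_grad():
        rep = text2rep(text_ids)  # intermediate target: pre-trained representations
        wav = rep2wav(rep)        # waveform, shape (1, samples)
    return wav
```

The key design point is that the intermediate target is the noise-tolerant pre-trained representation rather than an enhanced spectrogram.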
This note shows how to compute, to high relative accuracy under mild assumptions, complex Jacobi rotations for the diagonalization of Hermitian matrices of order two, using the correctly rounded functions $\mathtt{cr\_hypot}$ and $\mathtt{cr\_rsqrt}$, proposed for standardization in the C programming language as recommended by the IEEE-754 floating-point standard. Rounding to nearest (ties to even) and non-stop arithmetic are assumed. The numerical examples compare the observed relative errors in the rotations' elements with the theoretical bounds, and show that the maximal observed departure of the rotations' determinants from unity is smaller than that of the transformations computed by LAPACK.
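For concreteness, a small Python sketch of one standard way to form such a rotation for a two-by-two Hermitian matrix with real diagonal entries $a$, $c$ and off-diagonal entry $b$, using math.hypot and 1/math.sqrt as stand-ins for the correctly rounded $\mathtt{cr\_hypot}$ and $\mathtt{cr\_rsqrt}$; this only illustrates where the two functions enter, not the note's exact algorithm:

```python
import math

def jacobi_rotation(a: float, b: complex, c: float):
    """Return (cs, sn), cs real and sn complex, such that applying
    J = [[cs, sn], [-conj(sn), cs]] as J^H A J annihilates the
    off-diagonal entry of the Hermitian matrix A = [[a, b], [conj(b), c]]."""
    abs_b = math.hypot(b.real, b.imag)      # |b|; cr_hypot in the note
    if abs_b == 0.0:
        return 1.0, 0.0 + 0.0j              # already diagonal
    phase = b / abs_b                       # e^{i*arg(b)}
    tau = (c - a) / (2.0 * abs_b)           # cot(2*theta)
    t = math.copysign(1.0, tau) / (abs(tau) + math.hypot(1.0, tau))  # tan(theta)
    cs = 1.0 / math.sqrt(1.0 + t * t)       # cos(theta); cr_rsqrt in the note
    sn = phase * (t * cs)                   # e^{i*arg(b)} * sin(theta)
    return cs, sn
```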
We consider finite-dimensional Bayesian linear inverse problems with Gaussian priors and additive Gaussian noise models. The goal of this note is to present a simple derivation of the well-known fact that solving the Bayesian D-optimal experimental design problem, i.e., maximizing the expected information gain, is equivalent to minimizing the log-determinant of the posterior covariance operator. We focus on finite-dimensional inverse problems, but the presentation is kept generic to facilitate extensions to infinite-dimensional settings.
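In the standard finite-dimensional linear Gaussian setting (notation here is generic and may differ from the note's), the equivalence can be summarized as

\[
\Gamma_{\mathrm{post}} = \left(A^{*}\Gamma_{\mathrm{noise}}^{-1}A + \Gamma_{\mathrm{pr}}^{-1}\right)^{-1},
\qquad
\mathrm{EIG}
= \mathbb{E}_{y}\!\left[D_{\mathrm{KL}}\!\left(\mu_{\mathrm{post}}^{y}\,\middle\|\,\mu_{\mathrm{pr}}\right)\right]
= \tfrac{1}{2}\log\det\!\left(\Gamma_{\mathrm{pr}}\Gamma_{\mathrm{post}}^{-1}\right),
\]

where $A$ is the (design-dependent) forward operator; since $\Gamma_{\mathrm{pr}}$ is fixed, maximizing the expected information gain over designs is equivalent to minimizing $\log\det\Gamma_{\mathrm{post}}$.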
This paper considers a Bayesian approach to inclusion detection in nonlinear inverse problems using two known and popular push-forward prior distributions: the star-shaped and level set prior distributions. We analyze the convergence of the corresponding posterior distributions in the small measurement noise limit. The methodology is general: it works for priors arising from any H\"older continuous transformation of Gaussian random fields and is applicable to a range of inverse problems. The level set and star-shaped prior distributions are examples of push-forward priors under H\"older continuous transformations that take advantage of the structure of inclusion detection problems. We show that the corresponding posterior mean converges to the ground truth in a proper probabilistic sense. Numerical tests on a two-dimensional quantitative photoacoustic tomography problem showcase the approach. The results highlight the convergence properties of the posterior distributions and the ability of the methodology to detect inclusions with sufficiently regular boundaries.
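For orientation, one classical push-forward construction for inclusion detection is the sharp-interface level set prior, shown here in generic notation that need not match the paper's exact H\"older continuous variant: a two-valued parameter $\gamma$ is obtained from a Gaussian random field $f$ via

\[
\gamma(x) = \kappa_{\mathrm{in}}\,\mathbf{1}_{\{f(x) > c\}} + \kappa_{\mathrm{out}}\,\mathbf{1}_{\{f(x) \le c\}},
\]

so that the inclusion is the random set $\{x : f(x) > c\}$, and the prior on $\gamma$ is the push-forward of the Gaussian law of $f$ under this map.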
We present a novel approach for finding multiple noisily embedded template graphs in a very large background graph. Our method builds upon the graph-matching matched-filter technique proposed in Sussman et al., with the discovery of multiple diverse matchings achieved by iteratively penalizing a suitable node-pair similarity matrix in the matched-filter algorithm. In addition, we propose algorithmic speed-ups that greatly enhance the scalability of our matched-filter approach. We present theoretical justification of our methodology in the setting of correlated Erdos-Renyi graphs, showing its ability to sequentially discover multiple templates under mild model conditions. We additionally demonstrate our method's utility via extensive experiments on both simulated models and real-world datasets, including human brain connectomes and a large transactional knowledge base.
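A minimal sketch of the iterative-penalization idea described above; graph_match is a hypothetical placeholder for a matched-filter graph-matching routine, and the penalty update is illustrative rather than the authors' exact scheme:

```python
import numpy as np

def find_diverse_matches(S, graph_match, n_matches, penalty=np.inf):
    """Repeatedly run a matched-filter-style graph matcher, penalizing the
    node-pair similarity matrix S (template nodes x background nodes) after
    each discovered match so that later runs favor different embeddings.
    graph_match(S) is assumed to return, for each template node, the index
    of its matched background node."""
    S = S.astype(float)          # work on a float copy of the similarity matrix
    matches = []
    for _ in range(n_matches):
        match = graph_match(S)   # one matching of the template into the background
        matches.append(match)
        for t_node, b_node in enumerate(match):
            S[t_node, b_node] -= penalty   # discourage reusing these node pairs
    return matches
```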
Equipped with Chain-of-Thought (CoT), large language models (LLMs) have shown impressive reasoning ability on various downstream tasks. Even so, because they suffer from hallucinations and cannot access external knowledge, LLMs often produce incorrect or unfaithful intermediate reasoning steps, especially on knowledge-intensive tasks such as KBQA. To alleviate this issue, we propose a framework called Knowledge-Driven Chain-of-Thought (KD-CoT) that verifies and modifies reasoning traces in CoT via interaction with external knowledge, thereby overcoming hallucinations and error propagation. Concretely, we formulate the CoT rationale process of LLMs as a structured multi-round QA format. In each round, LLMs interact with a QA system that retrieves external knowledge and produce faithful reasoning traces based on the retrieved precise answers. The structured CoT reasoning of LLMs is facilitated by our developed KBQA CoT collection, which serves as in-context learning demonstrations and can also be used as feedback augmentation to train a robust retriever. Extensive experiments on the WebQSP and ComplexWebQuestions datasets demonstrate the effectiveness of the proposed KD-CoT in task-solving reasoning generation, outperforming vanilla CoT ICL by 8.0% and 5.1% in absolute success rate, respectively. Furthermore, our proposed feedback-augmented retriever outperforms state-of-the-art baselines for retrieving knowledge, achieving a significant improvement in Hit performance.
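A schematic sketch of the multi-round interaction described above; llm, retrieve_answer, and is_final are hypothetical placeholders, not the paper's API:

```python
def kd_cot(question, llm, retrieve_answer, is_final, max_rounds=5):
    """Knowledge-driven CoT as a structured multi-round QA loop (sketch).
    Each round: the LLM proposes the next reasoning step as a sub-question,
    an external retrieval-based QA system returns an answer, and the LLM
    continues its rationale conditioned on that retrieved answer."""
    trace = []
    for _ in range(max_rounds):
        sub_question = llm(question, trace)      # next reasoning step as a sub-question
        answer = retrieve_answer(sub_question)   # external knowledge retrieval + QA
        trace.append((sub_question, answer))
        if is_final(trace):                      # rationale judged complete
            break
    return llm(question, trace, finalize=True)   # compose the final answer from the trace
```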
This paper aims to reconstruct the initial condition of a hyperbolic equation with an unknown damping coefficient. Our approach approximates the hyperbolic equation's solution by its truncated Fourier expansion in the time domain with respect to a polynomial-exponential basis. This truncation eliminates the time variable, yielding a system of quasi-linear elliptic equations. To solve this system globally, without the need for an accurate initial guess, we employ the Carleman contraction principle. We provide several numerical examples to illustrate the efficacy of our method. The method not only delivers precise solutions but also exhibits remarkable computational efficiency.
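Schematically, and in generic notation that may differ from the paper's, the time variable is eliminated by writing

\[
u(\mathbf{x},t) \;\approx\; \sum_{n=1}^{N} u_{n}(\mathbf{x})\,\Psi_{n}(t),
\]

where $\{\Psi_{n}\}$ is the polynomial-exponential basis; substituting this truncation into the hyperbolic equation and projecting onto the basis yields a coupled system of quasi-linear elliptic equations for the coefficients $u_{n}(\mathbf{x})$, which is then solved globally via the Carleman contraction principle.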
Large language models (LLMs) have emerged as powerful machine-learning systems capable of handling a myriad of tasks. Tuned versions of these systems have been turned into chatbots that can respond to user queries on a vast diversity of topics, providing informative and creative replies. However, their application to physical science research remains limited owing to their incomplete knowledge of these areas, in contrast with the rigor and sourcing that science domains require. Here, we demonstrate how existing methods and software tools can be easily combined to yield a domain-specific chatbot. The system ingests scientific documents in existing formats and uses text-embedding lookup to provide the LLM with domain-specific contextual information when composing its reply. We similarly demonstrate that existing image-embedding methods can be used for search and retrieval across publication figures. These results confirm that LLMs are already suitable for use by physical scientists seeking to accelerate their research efforts.
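A minimal sketch of the text-embedding lookup step described above; embed and llm are hypothetical placeholders for whichever embedding model and LLM are used, not the paper's exact interfaces:

```python
import numpy as np

def answer_with_context(query, chunks, embed, llm, top_k=3):
    """Retrieve the document chunks most similar to the query and pass them
    to the LLM as context (sketch of retrieval-augmented generation)."""
    chunk_vecs = np.stack([embed(c) for c in chunks])   # could be precomputed offline
    q_vec = embed(query)
    sims = chunk_vecs @ q_vec / (
        np.linalg.norm(chunk_vecs, axis=1) * np.linalg.norm(q_vec))  # cosine similarity
    context = "\n\n".join(chunks[i] for i in np.argsort(sims)[-top_k:])
    prompt = f"Answer using only the context below.\n\nContext:\n{context}\n\nQuestion: {query}"
    return llm(prompt)
```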
This paper studies distributed Nash equilibrium (NE) seeking under Denial-of-Service (DoS) attacks and quantization. The players can exchange information only with their direct neighbors. The transmitted information is subject to quantization and to packet losses induced by malicious DoS attacks. We propose a quantized distributed NE seeking strategy based on the approach of dynamic quantized consensus. To solve the quantizer saturation problem caused by DoS attacks, the quantization mechanism is equipped with zooming-in and holding capabilities, where the holding capability is consistent with results on quantized consensus under DoS. A sufficient condition on the number of quantizer levels is provided, under which the quantizers are free from saturation under DoS attacks. The proposed distributed quantized NE seeking strategy is shown to have the so-called maximum resilience to DoS attacks: if the bound characterizing the maximum resilience is violated, an attacker can deny all transmissions and hence make distributed NE seeking impossible.
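A toy sketch of the zooming-in/holding idea mentioned above, using a generic uniform quantizer; the structure and parameters are illustrative and not the paper's exact scheme:

```python
def quantize(x, center, rng, L):
    """Uniform quantizer with 2L+1 levels covering [center - rng, center + rng]."""
    step = rng / L
    q = max(-L, min(L, round((x - center) / step)))
    return center + q * step

def update_range(rng, dos_active, zoom_in=0.9):
    """Zoom in (shrink the quantization range) after successful transmissions;
    hold the range during DoS-induced packet losses to avoid saturation."""
    return rng if dos_active else zoom_in * rng
```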
The design of automatic speech pronunciation assessment can be categorized into closed and open response scenarios, each with strengths and limitations. A system able to function in both scenarios can cater to diverse learning needs and provide a more precise and holistic assessment of pronunciation skills. In this study, we propose a Multi-task Pronunciation Assessment model called MultiPA. MultiPA provides an alternative to Kaldi-based systems, with simpler format requirements and better compatibility with other neural network models. Compared with previous open-response systems, MultiPA provides a wider range of evaluations, encompassing assessments at both the sentence and word levels. Our experimental results show that MultiPA achieves comparable performance in closed response scenarios and maintains more robust performance when directly applied to open responses.