The Holant theorem is a powerful tool for studying the computational complexity of counting problems in the Holant framework. Due to the great expressiveness of the Holant framework, a converse to the Holant theorem would itself be a very powerful counting indistinguishability theorem. The most general converse does not hold, but we prove the following, still highly general, version: if two sets of real-valued signatures are Holant-indistinguishable, then they are equivalent up to an orthogonal transformation. This resolves a partially open conjecture of Xia (2010). Consequences of this theorem include the well-known result that homomorphism counts from all graphs determine a graph up to isomorphism, the classical sufficient condition for simultaneous orthogonal similarity of sets of real matrices, and a combinatorial characterization of simultaneously orthogonally decomposable (odeco) sets of symmetric tensors.
We investigate the numerical solution of multiscale transport equations using Physics-Informed Neural Networks (PINNs) with ReLU activation functions. To this end, we exploit the analogy between PINNs and Least-Squares Finite Elements (LSFE), which lies in the shared approach of reformulating the PDE solution as the minimizer of a quadratic functional. We prove that in the diffusive regime the correct limit equation is not recovered, in agreement with known results for first-order LSFE. We then introduce a diffusive scaling that overcomes this deficiency, again in full agreement with theoretical results for LSFE. Numerical results for slab geometry support our theoretical findings.
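The shared quadratic-functional viewpoint can be illustrated by a minimal collocation sketch: both PINNs and LSFE minimize the squared PDE residual. The toy 1-D transport equation $\mu\,\partial_x u + \sigma u = q$, the PyTorch setup, and the network size below are illustrative assumptions, not the paper's actual configuration.

```python
import torch

# Toy 1-D transport problem: mu * du/dx + sigma * u = q on (0, 1),
# with inflow condition u(0) = 0. Coefficients are illustrative.
mu, sigma = 1.0, 1.0
q = lambda x: torch.ones_like(x)

net = torch.nn.Sequential(            # ReLU network, as in the paper's setting
    torch.nn.Linear(1, 32), torch.nn.ReLU(),
    torch.nn.Linear(32, 32), torch.nn.ReLU(),
    torch.nn.Linear(32, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(2000):
    x = torch.rand(256, 1, requires_grad=True)          # collocation points
    u = net(x)
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    residual = mu * du + sigma * u - q(x)
    x0 = torch.zeros(1, 1)
    # Quadratic functional: mean-squared PDE residual plus boundary penalty,
    # the discrete analogue of the least-squares functional used by LSFE.
    loss = (residual ** 2).mean() + (net(x0) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```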
Large language models (LLMs) have significantly improved their ability to perform code generation tasks. However, there is still a gap between LLMs being capable coders and being top-tier software engineers. Based on the observation that top-level software engineers often ask clarifying questions to reduce ambiguity in both requirements and coding solutions, we argue that the same should be applied to LLMs for code generation tasks. In this work, we conduct an empirical study to benchmark and analyze the communication skills of LLMs for code generation. We define the communication skills of LLMs as ``being able to ask clarifying questions when the description of the code generation problem has issues''. We create a new benchmark, HumanEvalComm, by modifying problem descriptions to introduce three types of issues: inconsistency, ambiguity, and incompleteness. We define new evaluation metrics such as Communication Rate and Good Question Rate, and experiment on HumanEvalComm with different Code LLMs as well as Okanagan, a new LLM agent approach that identifies ambiguous parts of code and problem descriptions, asks clarifying questions about them, and refines the generated code accordingly. Finally, we discuss the evaluation results, comparing the Code LLMs with Okanagan, and summarize our findings.
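To illustrate how such rates could be computed, the sketch below uses hypothetical definitions: the fraction of problems on which the model asks a clarifying question, and the fraction of asked questions judged good. The field names and exact definitions are assumptions for illustration; HumanEvalComm's official evaluation may define the metrics differently.

```python
def communication_metrics(responses, question_quality):
    """Illustrative computation of the two rates named in the abstract.

    responses: one dict per benchmark problem, e.g. {"asked_question": bool}.
    question_quality: one bool per asked question, True if a judge
    (human or LLM) rated the clarifying question as good.
    """
    asked = [r for r in responses if r["asked_question"]]
    communication_rate = len(asked) / len(responses)
    good_question_rate = (
        sum(question_quality) / len(question_quality) if question_quality else 0.0
    )
    return communication_rate, good_question_rate

# Example: 4 problems, 2 questions asked, 1 judged good
print(communication_metrics(
    [{"asked_question": True}, {"asked_question": False},
     {"asked_question": True}, {"asked_question": False}],
    [True, False],
))  # (0.5, 0.5)
```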
This paper studies a generalized variant of the Colonel Blotto game, referred to as the Colonel Blotto game with costs. Unlike the classic Colonel Blotto game, which imposes the use-it-or-lose-it budget assumption, the Colonel Blotto game with costs captures the strategic importance of the costs of both obtaining resources and assigning them across battlefields. We show that every instance of the Colonel Blotto game with costs is strategically equivalent to an instance of the zero-sum Colonel Blotto game with one additional battlefield. This enables the computation of Nash equilibria of the Colonel Blotto game with costs in time polynomial in the game parameters: the number of battlefields and the number of resources available to the players.
The Bayesian Mallows model is a flexible tool for analyzing data in the form of complete or partial rankings, and of transitive or intransitive pairwise preferences. In many potential applications of preference learning, data arrive sequentially, and it is of practical interest to update posterior beliefs and predictions efficiently based on the currently available data. Despite this, most algorithms proposed so far have focused on batch inference. In this paper we present an algorithm for sequentially estimating the posterior distributions of the Bayesian Mallows model using nested sequential Monte Carlo. As it requires minimal user input in the form of tuning parameters, is straightforward to parallelize, and returns the marginal likelihood as a direct byproduct of estimation, the algorithm is an attractive alternative to Markov chain Monte Carlo techniques also in batch estimation settings.
We propose a practical tool for evaluating and comparing the accuracy of finite difference methods (FDMs) for the Helmholtz equation. The tool, based on Fourier analysis, makes it easy to derive wavenumber-explicit orders of convergence and can also be used in rigorous proofs. It fills the gap between dispersion analysis and the actual error in the presence of a source term.
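The flavor of such a Fourier-based analysis can be sketched on the classical 3-point scheme for the 1-D Helmholtz operator $-u'' - k^2 u$: applying the scheme to a plane wave $e^{ikx}$ and comparing discrete and exact symbols yields a wavenumber-explicit error estimate. The scheme and error measure below are illustrative choices, not the proposed tool itself.

```python
import numpy as np

# Apply the standard 3-point central-difference scheme for -u'' - k^2 u
# to the plane wave exp(i*k*x). The exact symbol vanishes at xi = k, so
# the discrete symbol itself measures the scheme's consistency error.
k = 10.0
for h in [1e-2, 5e-3, 2.5e-3]:
    discrete_symbol = (2 - 2 * np.cos(k * h)) / h**2 - k**2
    err = abs(discrete_symbol)
    print(f"h = {h:.4g}   symbol error = {err:.3e}")

# Halving h reduces the error by ~4: second order, with the k-dependent
# constant k^4/12 (error ~ k^4 h^2 / 12), a wavenumber-explicit estimate.
```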
Enumeration problems aim at outputting, without repetition, the set of solutions to a given problem instance. However, outputting the entire solution set may be prohibitively expensive if it is too big. In this case, outputting a small, sufficiently diverse subset of the solutions would be preferable. This leads to the Diverse-version of the original enumeration problem, where the goal is to achieve a certain level $d$ of diversity by selecting $k$ solutions. In this paper, we look at the Diverse-version of the query answering problem for Conjunctive Queries and extensions thereof. That is, we study the problem of deciding whether it is possible to achieve a certain level $d$ of diversity by selecting $k$ answers to the given query and, in the positive case, of actually computing such $k$ answers.
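For intuition, the decision problem can be illustrated by a brute-force sketch in which diversity is measured as the minimum pairwise Hamming distance between answer tuples; this is one common choice of diversity measure and may differ from the one studied in the paper.

```python
from itertools import combinations

def hamming(a, b):
    """Number of positions in which two equal-length answer tuples differ."""
    return sum(x != y for x, y in zip(a, b))

def achieves_diversity(answers, k, d):
    """Brute-force check of the Diverse-version decision problem:
    is there a set of k answers whose diversity is at least d?
    Diversity here = minimum pairwise Hamming distance (an assumed measure).
    """
    for subset in combinations(answers, k):
        if all(hamming(a, b) >= d for a, b in combinations(subset, 2)):
            return True
    return False

# Example: query answers as tuples of constants
answers = [("a", 1), ("a", 2), ("b", 1), ("b", 2)]
print(achieves_diversity(answers, k=2, d=2))   # True: ("a", 1) and ("b", 2)
```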
Emotional text-to-speech (TTS) synthesis aims to generate realistic emotional speech from input text. However, quantitatively controlling multi-level emotion rendering remains challenging. In this paper, we propose a diffusion-based emotional TTS framework with a novel approach to emotion intensity modeling that facilitates fine-grained control over emotion rendering at the phoneme, word, and utterance levels. We introduce a hierarchical emotion distribution (ED) extractor that captures a quantifiable ED embedding across different speech segment levels. Additionally, we explore various acoustic features and assess their impact on emotion intensity modeling. During TTS training, the hierarchical ED embedding effectively captures the variance in emotion intensity from the reference audio and correlates it with linguistic and speaker information. The TTS model not only generates emotional speech during inference, but also quantitatively controls the emotion rendering over the speech constituents. Both objective and subjective evaluations demonstrate the effectiveness of our framework in terms of speech quality, emotional expressiveness, and hierarchical emotion control.
The $N$-point discrete Fourier transform (DFT) is a cornerstone of numerous signal processing applications. Many of these applications operate in real time, making the computational complexity of the DFT a critical performance indicator to be optimized. Unfortunately, whether the $\mathcal{O}(N\log_2 N)$ time complexity of the fast Fourier transform (FFT) can be outperformed remains an unresolved question in the theory of computation. However, in many applications of the DFT -- such as compressive sensing, image processing, and wideband spectral analysis -- only a small fraction of the output coefficients needs to be computed because the signal is sparse. This motivates the development of algorithms that compute specific DFT coefficients more efficiently than the FFT algorithm. In this article, we show that the number of points needed to compute certain DFT coefficients can be dramatically reduced by means of elementary mathematical properties. We present an algorithm that compacts the computation of the square index coefficients (SICs) of the DFT (i.e., $X_{k\sqrt{N}}$, $k=0,1,\cdots, \sqrt{N}-1$, for a square number $N$) from $N$ to $\sqrt{N}$ points at the expense of $N-1$ complex sums and no multiplications. Based on this, any regular DFT algorithm can be straightforwardly applied to compute the SICs with a reduced number of complex multiplications. If $N$ is a power of two, one can combine our algorithm with the FFT to calculate all SICs in $\mathcal{O}(\sqrt{N}\log_2\sqrt{N})$ time.
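To make the compaction concrete, here is a minimal NumPy sketch of the folding idea, a hypothetical implementation consistent with the abstract rather than the authors' code: since $e^{-2\pi i n k\sqrt{N}/N}$ depends only on $n \bmod \sqrt{N}$, samples whose indices agree modulo $\sqrt{N}$ can be summed before any multiplication takes place, and the final FFT call stands in for "any regular DFT algorithm".

```python
import numpy as np

def square_index_coefficients(x):
    """Compute the square-index coefficients X_{k*sqrt(N)}, k = 0..sqrt(N)-1,
    of a length-N signal (N a perfect square) by folding the signal into
    sqrt(N) points and taking a single sqrt(N)-point DFT.
    """
    N = len(x)
    m = int(round(np.sqrt(N)))
    assert m * m == N, "N must be a perfect square"
    # Fold: y[r] = sum_j x[r + j*m] -- complex sums only, no multiplications
    y = np.asarray(x, dtype=complex).reshape(m, m).sum(axis=0)
    # Any m-point DFT algorithm now yields all SICs;
    # the FFT gives O(sqrt(N) log sqrt(N)) overall.
    return np.fft.fft(y)

# Sanity check against the full FFT (N = 64, sqrt(N) = 8)
rng = np.random.default_rng(0)
x = rng.standard_normal(64) + 1j * rng.standard_normal(64)
assert np.allclose(square_index_coefficients(x), np.fft.fft(x)[::8])
```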
This survey presents an in-depth exploration of knowledge distillation (KD) techniques within the realm of Large Language Models (LLMs), spotlighting the pivotal role of KD in transferring sophisticated capabilities from proprietary giants such as GPT-4 to accessible, open-source models like LLaMA and Mistral. Amidst the evolving AI landscape, this work elucidates the critical disparities between proprietary and open-source LLMs, demonstrating how KD serves as an essential conduit for imbuing the latter with the former's advanced functionalities and nuanced understandings. Our survey is meticulously structured around three foundational pillars: algorithm, skill, and verticalization -- providing a comprehensive examination of KD mechanisms, the enhancement of specific cognitive abilities, and their practical implications across diverse fields. Crucially, the survey navigates the intricate interplay between data augmentation (DA) and KD, illustrating how DA emerges as a powerful paradigm within the KD framework to bolster LLMs' performance. By leveraging DA to generate context-rich, skill-specific training data, KD transcends traditional boundaries, enabling open-source models to approximate the contextual adeptness, ethical alignment, and deep semantic insights characteristic of their proprietary counterparts. This work aims to provide an insightful guide for researchers and practitioners, offering a detailed overview of current methodologies in knowledge distillation and proposing future research directions. By bridging the gap between proprietary and open-source LLMs, this survey underscores the potential for more accessible, efficient, and sustainable AI solutions, fostering a more inclusive and equitable landscape in AI advancements. An associated GitHub repository is available at https://github.com/Tebmer/Awesome-Knowledge-Distillation-of-LLMs.
Human-in-the-loop learning aims to train an accurate prediction model with minimum cost by integrating human knowledge and experience. Humans can provide training data for machine learning applications and, with the help of machine-based approaches, directly accomplish tasks in the pipeline that are hard for computers. In this paper, we survey existing works on human-in-the-loop from a data perspective and classify them into three categories with a progressive relationship: (1) work that improves model performance through data processing, (2) work that improves model performance through interventional model training, and (3) the design of independent human-in-the-loop systems. Using this categorization, we summarize the major approaches in the field along with their technical strengths and weaknesses, and we provide a brief classification and discussion of applications in natural language processing, computer vision, and other areas. In addition, we discuss some open challenges and opportunities. This survey is intended to provide a high-level summary of human-in-the-loop research and to motivate interested readers to consider approaches for designing effective human-in-the-loop solutions.