Quantum Tanner codes constitute a family of quantum low-density parity-check (LDPC) codes with good parameters, i.e., constant encoding rate and relative distance. In this article, we prove that quantum Tanner codes also facilitate single-shot quantum error correction (QEC) of adversarial noise, where a single measurement round (consisting of constant-weight parity checks) suffices to perform reliable QEC even in the presence of measurement errors. We establish this result for both the sequential and parallel decoding algorithms introduced by Leverrier and Z\'emor. Furthermore, we show that in order to suppress errors over multiple repeated rounds of QEC, it suffices to run the parallel decoding algorithm for constant time in each round. Combined with good code parameters, the resulting constant-time overhead of QEC and robustness to (possibly time-correlated) adversarial noise make quantum Tanner codes appealing from the perspective of fault-tolerant quantum protocols.
TripAdvisor reviews and comparable data sources play an important role in many tasks in Natural Language Processing (NLP), providing a data basis for the identification and classification of subjective judgments, such as hotel or restaurant reviews, into positive or negative polarities. This study explores three important factors influencing variation in crowdsourced polarity judgments, focusing on TripAdvisor reviews in Spanish. Three hypotheses are tested: the role of Part Of Speech (POS), the impact of sentiment words such as "tasty", and the influence of neutral words like "ok" on judgment variation. The study's methodology employs one-word titles, demonstrating their efficacy in studying the polarity variation of words. Statistical tests of mean equality are performed on the word groups of interest. The results reveal that adjectives in one-word titles tend to yield lower judgment variation than other word types or POS. Sentiment words also contribute to lower judgment variation, emphasizing their significance in research on polarity judgments, and neutral words are, as expected, associated with higher judgment variation. However, these effects cannot always be reproduced in longer titles, which suggests that longer titles are not the best data source for testing the ambiguity of single words, since other words in longer titles, such as negations, influence word polarity. This empirical investigation contributes valuable insights into the factors influencing the polarity variation of words, providing a foundation for NLP practitioners who aim to capture and predict polarity judgments in Spanish and for researchers who aim to understand the factors influencing judgment variation.
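To make the statistical comparison concrete, here is a minimal sketch of testing mean equality of judgment variation between two word groups (e.g., adjectives versus other POS). The data, the variation measure (per-title variance of crowd polarity ratings), and the choice of Welch's t-test are illustrative assumptions, not the study's exact setup.

```python
# Illustrative sketch (not the study's exact pipeline): compare mean judgment
# variation between adjective one-word titles and other one-word titles.
# The variation scores below are synthetic stand-ins for per-title variances
# of crowdsourced polarity ratings.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical per-title variation scores for two word groups.
adjective_var = rng.normal(loc=0.4, scale=0.10, size=200)   # e.g. adjective titles
other_pos_var = rng.normal(loc=0.6, scale=0.15, size=200)   # e.g. noun/verb titles

# Welch's t-test for equality of mean variation (unequal variances assumed).
t_stat, p_value = stats.ttest_ind(adjective_var, other_pos_var, equal_var=False)
print(f"t = {t_stat:.3f}, p = {p_value:.3g}")
```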
The problem studied in this work is to determine the higher weight spectra of the Projective Reed-Muller codes associated with the Veronese $3$-fold $\mathcal V$ in $PG(9,q)$, which is the image of the quadratic Veronese embedding of $PG(3,q)$ in $PG(9,q)$. We reduce this to the following combinatorial problem in finite geometry: for each subset $S$ of $\mathcal V$, determine the dimension of the linear subspace of $PG(9,q)$ generated by $S$. We develop a systematic method to solve the latter problem, implement it for $q=3$, and use it to obtain the higher weight spectra of the associated code. The case of a general finite field $\mathbb F_q$ will be treated in future work.
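As a small illustration of the combinatorial subproblem, the sketch below computes the projective dimension of the subspace of $PG(9,q)$ spanned by a chosen subset of the Veronese $3$-fold for $q=3$: each point of $PG(3,3)$ is mapped to its ten degree-2 monomials and the rank of the resulting matrix is computed over $\mathbb F_3$. The chosen points are arbitrary examples; the paper's systematic method for handling all subsets is not reproduced here.

```python
# Sketch: for a subset S of the Veronese 3-fold V in PG(9,q), the dimension of
# the projective subspace spanned by S equals rank(M) - 1, where the rows of M
# are the quadratic Veronese images of the chosen points of PG(3,q). Here q = 3.
from itertools import combinations_with_replacement

q = 3

def veronese(p):
    """Quadratic Veronese image of a point of PG(3,q): the 10 degree-2 monomials."""
    return [p[i] * p[j] % q for i, j in combinations_with_replacement(range(4), 2)]

def rank_mod_q(rows, q):
    """Row rank of an integer matrix over the prime field F_q (Gaussian elimination)."""
    m = [list(r) for r in rows]
    rank, col, ncols = 0, 0, len(m[0])
    while rank < len(m) and col < ncols:
        piv = next((r for r in range(rank, len(m)) if m[r][col] % q), None)
        if piv is None:
            col += 1
            continue
        m[rank], m[piv] = m[piv], m[rank]
        inv = pow(m[rank][col], q - 2, q)          # multiplicative inverse, q prime
        m[rank] = [x * inv % q for x in m[rank]]
        for r in range(len(m)):
            if r != rank and m[r][col] % q:
                f = m[r][col]
                m[r] = [(a - f * b) % q for a, b in zip(m[r], m[rank])]
        rank, col = rank + 1, col + 1
    return rank

# A few example points of PG(3,3) (representatives in F_3^4).
S = [(1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0), (1, 1, 1, 0)]
M = [veronese(p) for p in S]
print("projective dimension of <S> =", rank_mod_q(M, q) - 1)
```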
We introduce the sum-rank metric analogue of Reed-Muller codes, which we call linearized Reed-Muller codes, using multivariate Ore polynomials. We study the parameters of these codes, compute their dimension and give a lower bound for their minimum distance. Our codes exhibit quite good parameters, satisfying a bound similar to that of Reed-Muller codes in the Hamming metric. Finally, we also show that many of the newly introduced linearized Reed-Muller codes can be embedded in certain linearized Algebraic Geometry codes, a property which could prove useful for decoding.
We present a class of high-order Eulerian-Lagrangian Runge-Kutta finite volume methods that can numerically solve Burgers' equation with shock formations, and which could be extended to general scalar conservation laws. Eulerian-Lagrangian (EL) and semi-Lagrangian (SL) methods have recently seen increased development and have become a staple for allowing large time steps. Yet, maintaining relatively large time steps after shock formation remains quite challenging. Our proposed scheme integrates the partial differential equation on a space-time region partitioned by linear approximations to the characteristics determined by the Rankine-Hugoniot jump condition. We trace the characteristics forward in time and present a merging procedure for the mesh cells to handle intersecting characteristics due to shocks. Following this partitioning, we write the equation in a time-differential form and evolve it with Runge-Kutta methods in a method-of-lines fashion. High-resolution methods such as ENO and WENO-AO schemes are used for spatial reconstruction. Extension to higher dimensions is done via dimensional splitting. Numerical experiments demonstrate our scheme's high-order accuracy and its ability to sharply capture post-shock solutions with large time steps.
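The following minimal sketch is not the full EL-RK finite volume scheme; it only illustrates one ingredient: tracing linear approximations of the characteristics of Burgers' equation forward over a time step and detecting where traced interfaces cross, which is where the merging procedure (governed by the Rankine-Hugoniot speed $s = (u_L + u_R)/2$) would apply. The interface-speed estimate and the test data are assumptions made for illustration.

```python
# Minimal sketch (not the full EL-RK scheme): for Burgers' equation
# u_t + (u^2/2)_x = 0, cell interfaces are traced forward with linear
# approximations to the characteristics; where neighbouring traced interfaces
# cross within a time step, the Rankine-Hugoniot speed s = (uL + uR)/2 would
# govern the merged space-time cell.
import numpy as np

def traced_interfaces(x_edges, u_cells, dt):
    """Advance cell interfaces with a simple averaged characteristic speed (illustrative)."""
    speeds = np.empty_like(x_edges)
    speeds[1:-1] = 0.5 * (u_cells[:-1] + u_cells[1:])   # interface speed estimate
    speeds[0], speeds[-1] = u_cells[0], u_cells[-1]
    return x_edges + dt * speeds

x_edges = np.linspace(0.0, 1.0, 11)
u_cells = np.where(0.5 * (x_edges[:-1] + x_edges[1:]) < 0.5, 1.0, 0.0)  # Riemann-type data
new_edges = traced_interfaces(x_edges, u_cells, dt=0.2)

# Interfaces that cross indicate intersecting characteristics: in the actual
# scheme, those space-time cells would be merged before the Runge-Kutta update.
crossings = np.where(np.diff(new_edges) <= 0.0)[0]
print("cells to merge:", crossings)
```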
We present QueryNER, a manually annotated dataset and accompanying model for e-commerce query segmentation. Prior work in sequence labeling for e-commerce has largely addressed aspect-value extraction, which focuses on extracting portions of a product title or query for narrowly defined aspects. Our work instead focuses on dividing a query into meaningful chunks with broadly applicable types. We report baseline tagging results and conduct experiments comparing token and entity dropping for null and low-recall query recovery. Challenging test sets are created using automatic transformations and show how simple data augmentation techniques can make the models more robust to noise. We make the QueryNER dataset publicly available.
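As an illustration of the two recovery strategies compared in the experiments, here is a small sketch of token dropping versus entity dropping on a BIO-tagged query. The example query and the tag names are hypothetical; QueryNER's actual label set may differ.

```python
# Sketch of the two dropping strategies (tag names and the example query are
# hypothetical and do not necessarily match the QueryNER label set).
import random

tokens = ["red", "running", "shoes", "size", "10"]
tags   = ["B-color", "B-product", "I-product", "B-size", "I-size"]

def drop_token(tokens, tags, rng):
    """Token dropping: remove a single random token, regardless of entity spans."""
    i = rng.randrange(len(tokens))
    return tokens[:i] + tokens[i+1:], tags[:i] + tags[i+1:]

def drop_entity(tokens, tags, rng):
    """Entity dropping: remove one whole B-/I- span, so no entity is left truncated."""
    starts = [i for i, t in enumerate(tags) if t.startswith("B-")]
    s = rng.choice(starts)
    e = s + 1
    while e < len(tags) and tags[e].startswith("I-"):
        e += 1
    return tokens[:s] + tokens[e:], tags[:s] + tags[e:]

rng = random.Random(0)
print(drop_token(tokens, tags, rng))
print(drop_entity(tokens, tags, rng))
```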
We investigate modifications to Bayesian Optimization for a resource-constrained setting of sequential experimental design in which changes to certain design variables of the search space incur a switching cost. This models the scenario where there is a trade-off between evaluating more while maintaining the same setup, or switching and thereby restricting the number of possible evaluations due to the incurred cost. We adapt two process-constrained batch algorithms to this sequential problem formulation, and propose two new methods: one cost-aware and one cost-ignorant. We validate and compare the algorithms on a set of 7 scalable test functions across different dimensionalities and switching-cost settings, for 30 configurations in total. Our proposed cost-aware hyperparameter-free algorithm yields results comparable to tuned process-constrained algorithms in all settings we considered, suggesting some degree of robustness to varying landscape features and cost trade-offs, and it begins to outperform the other algorithms as the switching cost increases. Our work broadens the scope of recent Bayesian Optimization studies in resource-constrained settings, which consider only a batch setting. While the contributions of this work are relevant to the general class of resource-constrained problems, they are particularly relevant to problems where adaptability to varying resource availability is of high importance.
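To illustrate what a cost-aware method can look like in this setting, here is a generic sketch that discounts expected improvement for candidates that change the costly design variables. This is an illustrative heuristic with assumed names and values, not necessarily the paper's hyperparameter-free algorithm.

```python
# Illustrative cost-aware acquisition (a generic heuristic, not necessarily the
# paper's method): expected improvement is discounted for candidates that change
# the design variables which incur a switching cost.
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, best):
    z = (best - mu) / np.maximum(sigma, 1e-12)       # minimisation convention
    return (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

def cost_aware_acquisition(mu, sigma, best, candidates, current_setup,
                           costly_dims, switch_cost):
    ei = expected_improvement(mu, sigma, best)
    switches = np.any(candidates[:, costly_dims] != current_setup[costly_dims], axis=1)
    return ei / (1.0 + switch_cost * switches)       # penalise setup changes

# Hypothetical posterior over 4 candidates in a 2-d space where dimension 0 is costly.
mu = np.array([0.20, 0.10, 0.15, 0.30])
sigma = np.array([0.05, 0.20, 0.10, 0.05])
candidates = np.array([[0.0, 0.3], [1.0, 0.7], [0.0, 0.9], [1.0, 0.1]])
current_setup = np.array([0.0, 0.5])
scores = cost_aware_acquisition(mu, sigma, best=0.25, candidates=candidates,
                                current_setup=current_setup,
                                costly_dims=[0], switch_cost=2.0)
print("next evaluation:", candidates[np.argmax(scores)])
```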
This paper provides a convergence analysis for generalized Hamiltonian Monte Carlo samplers, a family of Markov Chain Monte Carlo methods based on leapfrog integration of Hamiltonian dynamics and kinetic Langevin diffusion, which encompasses the unadjusted Hamiltonian Monte Carlo method. Assuming that the target distribution $\pi$ satisfies a log-Sobolev inequality and mild conditions on the corresponding potential function, we establish quantitative bounds on the relative entropy of the iterates defined by the algorithm with respect to $\pi$. Our approach is based on a perturbative and discrete version of the modified entropy method developed to establish hypocoercivity for the continuous-time kinetic Langevin process. As a corollary of our main result, we derive complexity bounds for the class of algorithms at hand. In particular, we show that the total number of iterations needed to achieve a target accuracy $\varepsilon >0$ is of order $d/\varepsilon^{1/4}$, where $d$ is the dimension of the problem. This result can be further improved in the case of weakly interacting mean-field potentials, for which we find a total number of iterations of order $(d/\varepsilon)^{1/4}$.
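For concreteness, here is a minimal sketch of one iteration of an unadjusted generalized HMC sampler: partial momentum refreshment followed by leapfrog integration of the Hamiltonian dynamics, applied to a standard Gaussian target. The target, step size, number of leapfrog steps, and refreshment parameter are illustrative assumptions, not the paper's analysed setting.

```python
# Sketch of an unadjusted generalized HMC iteration: partial momentum
# refreshment followed by leapfrog steps (no accept/reject correction).
import numpy as np

def grad_potential(x):
    return x                      # potential U(x) = ||x||^2 / 2, so the target is N(0, I)

def ghmc_step(x, p, step_size, n_leapfrog, refresh, rng):
    # Partial momentum refreshment (refresh = 1 recovers standard unadjusted HMC).
    p = np.sqrt(1.0 - refresh) * p + np.sqrt(refresh) * rng.standard_normal(x.shape)
    # Leapfrog integration of the Hamiltonian dynamics.
    p -= 0.5 * step_size * grad_potential(x)
    for _ in range(n_leapfrog):
        x += step_size * p
        p -= step_size * grad_potential(x)
    p += 0.5 * step_size * grad_potential(x)   # undo the extra half kick at the end
    return x, p

rng = np.random.default_rng(0)
d = 10
x, p = np.zeros(d), rng.standard_normal(d)
for _ in range(1000):
    x, p = ghmc_step(x, p, step_size=0.1, n_leapfrog=5, refresh=0.3, rng=rng)
print("empirical second moment:", np.mean(x**2))
```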
This paper develops an in-depth treatment of the problem of approximating Gaussian smoothing and Gaussian derivative computations in scale-space theory for application to discrete data. With close connections to previous axiomatic treatments of continuous and discrete scale-space theory, we consider three main ways of discretizing these scale-space operations in terms of explicit discrete convolutions, based on either (i) sampling the Gaussian kernels and the Gaussian derivative kernels, (ii) locally integrating the Gaussian kernels and the Gaussian derivative kernels over each pixel support region, or (iii) basing the scale-space analysis on the discrete analogue of the Gaussian kernel and then computing derivative approximations by applying small-support central difference operators to the spatially smoothed image data. We study the properties of these three main discretization methods both theoretically and experimentally, and characterize their performance by quantitative measures, including the results they give rise to with respect to the task of scale selection, investigated for four different use cases and with an emphasis on the behaviour at fine scales. The results show that the sampled Gaussian kernels and derivatives, as well as the integrated Gaussian kernels and derivatives, perform very poorly at very fine scales, where the discrete analogue of the Gaussian kernel with its corresponding discrete derivative approximations performs substantially better. The sampled Gaussian kernel and the sampled Gaussian derivatives do, on the other hand, lead to numerically very good approximations of the corresponding continuous results when the scale parameter is sufficiently large, which in the experiments presented in the paper means a scale parameter greater than about 1, in units of the grid spacing.
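A minimal sketch of two of the three discretizations is given below: the sampled Gaussian kernel and the discrete analogue of the Gaussian kernel $T(n, t) = e^{-t} I_n(t)$, with $I_n$ the modified Bessel function of integer order, followed by a small-support central difference for the first derivative. The truncation radius, the scale value, and the test signal are illustrative choices.

```python
# Sketch of discretizations (i) and (iii): the sampled Gaussian kernel versus the
# discrete analogue of the Gaussian kernel T(n, t) = exp(-t) I_n(t), with derivative
# approximations obtained from a small-support central difference on the smoothed data.
import numpy as np
from scipy.special import ive          # exponentially scaled modified Bessel function I_n

def sampled_gaussian(t, radius):
    """Discretization (i): sample the continuous Gaussian at the integer grid points."""
    n = np.arange(-radius, radius + 1)
    return n, np.exp(-n**2 / (2.0 * t)) / np.sqrt(2.0 * np.pi * t)

def discrete_gaussian(t, radius):
    """Discretization (iii): discrete analogue of the Gaussian, T(n, t) = exp(-t) I_n(t)."""
    n = np.arange(-radius, radius + 1)
    return n, ive(np.abs(n), t)        # ive(n, t) = exp(-t) * I_n(t) for t > 0

def smooth_and_derivative(signal, kernel):
    """Smooth, then apply a small-support central difference for the first derivative."""
    smoothed = np.convolve(signal, kernel, mode="same")
    deriv = np.convolve(smoothed, [0.5, 0.0, -0.5], mode="same")   # (f[n+1] - f[n-1]) / 2
    return smoothed, deriv

t = 0.5                                # a fine scale, where the two kernels differ notably
n, g_sampled = sampled_gaussian(t, radius=4)
_, g_discrete = discrete_gaussian(t, radius=4)
print("sampled  Gaussian kernel sum:", g_sampled.sum())   # not exactly normalized at fine scales
print("discrete Gaussian kernel sum:", g_discrete.sum())  # close to 1 already at small radii

signal = np.sin(0.3 * np.arange(32))
smoothed, deriv = smooth_and_derivative(signal, g_discrete)
```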
Although Large Language Models (LLMs) have demonstrated remarkable code-generation ability, they still struggle with complex tasks. In real-world software development, humans usually tackle complex tasks through collaborative teamwork, a strategy that significantly controls development complexity and enhances software quality. Inspired by this, we present a self-collaboration framework for code generation employing LLMs, exemplified by ChatGPT. Specifically, through role instructions, 1) multiple LLM agents act as distinct `experts', each responsible for a specific subtask within a complex task, and 2) the way they collaborate and interact is specified, so that the different roles form a virtual team that facilitates each other's work; ultimately, the virtual team addresses code-generation tasks collaboratively without the need for human intervention. To effectively organize and manage this virtual team, we incorporate software-development methodology into the framework. Thus, we assemble an elementary team consisting of three LLM roles (i.e., analyst, coder, and tester) responsible for the analysis, coding, and testing stages of software development. We conduct comprehensive experiments on various code-generation benchmarks. Experimental results indicate that self-collaboration code generation improves Pass@1 by 29.9%-47.1% relative to the base LLM agent. Moreover, we showcase that self-collaboration could potentially enable LLMs to efficiently handle complex repository-level tasks that are not readily solved by a single LLM agent.
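A minimal sketch of such an analyst-coder-tester loop is given below. The `chat` function is a hypothetical stand-in for an LLM API call, and the role prompts are paraphrased rather than the paper's exact role instructions.

```python
# Sketch of an analyst -> coder -> tester loop built from role instructions.
# `chat` is a hypothetical stand-in for an LLM backend; the role prompts are
# paraphrased and not the paper's exact instructions.
def chat(system_prompt: str, message: str) -> str:
    raise NotImplementedError("plug in an LLM backend here")

ROLES = {
    "analyst": "You are an analyst. Decompose the requirement into a high-level plan.",
    "coder": "You are a coder. Implement the plan as Python code.",
    "tester": "You are a tester. Review the code against the requirement and "
              "report 'PASS' or a list of defects.",
}

def self_collaborate(requirement: str, max_rounds: int = 3) -> str:
    plan = chat(ROLES["analyst"], requirement)
    code = chat(ROLES["coder"], f"Requirement:\n{requirement}\n\nPlan:\n{plan}")
    for _ in range(max_rounds):
        report = chat(ROLES["tester"], f"Requirement:\n{requirement}\n\nCode:\n{code}")
        if report.strip().startswith("PASS"):
            break
        code = chat(ROLES["coder"],
                    f"Revise the code.\n\nDefects:\n{report}\n\nCode:\n{code}")
    return code
```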
We present a generalization of the discrete Lehmann representation (DLR) to three-point correlation and vertex functions in imaginary time and Matsubara frequency. The representation takes the form of a linear combination of judiciously chosen exponentials in imaginary time, and products of simple poles in Matsubara frequency, which are universal for a given temperature and energy cutoff. We present a systematic algorithm to generate compact sampling grids, from which the coefficients of such an expansion can be obtained by solving a linear system. We show that the explicit form of the representation can be used to evaluate diagrammatic expressions involving infinite Matsubara sums, such as polarization functions or self-energies, with controllable, high-order accuracy. This collection of techniques establishes a framework through which methods involving three-point objects can be implemented robustly, with a substantially reduced computational cost and memory footprint.
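To illustrate the underlying expansion idea for a two-point object (which the paper generalizes to three-point correlation and vertex functions), the sketch below expands an imaginary-time function as a linear combination of exponentials at fixed frequency nodes and recovers the coefficients by solving a linear system on a sampling grid. The nodes, grid, and synthetic data are illustrative assumptions; in practice they would be produced by a systematic selection procedure.

```python
# Sketch of the expansion idea in imaginary time: fit G(tau) by a linear
# combination of exponentials exp(-omega_l * tau) at fixed nodes, with the
# coefficients obtained from a linear (least-squares) solve on a sampling grid.
# Nodes, grid, and test data are illustrative only.
import numpy as np

beta = 10.0                                           # inverse temperature
omega_nodes = np.array([0.1, 0.2, 0.5, 1.0, 2.0, 4.0])   # illustrative frequency nodes
tau_grid = np.linspace(0.0, beta, 40)                     # imaginary-time sampling grid

# Design matrix of exponentials on the sampling grid.
K = np.exp(-np.outer(tau_grid, omega_nodes))

# Synthetic "data": a two-pole, Green's-function-like object on the tau grid.
g_data = 0.6 * np.exp(-0.2 * tau_grid) + 0.4 * np.exp(-1.0 * tau_grid)

# Coefficients from the linear system K c = g (least-squares solve).
coeffs, *_ = np.linalg.lstsq(K, g_data, rcond=None)

# The compact representation can then be evaluated anywhere in imaginary time.
tau_dense = np.linspace(0.0, beta, 400)
g_dense = np.exp(-np.outer(tau_dense, omega_nodes)) @ coeffs
g_exact = 0.6 * np.exp(-0.2 * tau_dense) + 0.4 * np.exp(-1.0 * tau_dense)
print("max fit error on dense grid:", np.max(np.abs(g_dense - g_exact)))
```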