In this work, we study linear codes with the folded Hamming distance, or equivalently, codes with the classical Hamming distance that are linear over a subfield. This includes additive codes. We study MDS codes in this setting and define quasi-MDS (QMDS) codes and dually QMDS codes, which attain a relaxed variant of the classical Singleton bound. We provide several general results concerning these codes, including restriction, shortening, weight distributions, existence, density, geometric descriptions, and bounds on their lengths relative to their field sizes. We provide explicit examples and a binary construction whose length is optimal relative to its field size and which beats any MDS code.
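For reference, a minimal LaTeX reminder of the classical Singleton bound that this discussion presupposes; the relaxed variant defining QMDS codes in the subfield-linear setting is the paper's own and is not reproduced here. For an $[n,k,d]$ linear code,
\[
d \;\le\; n - k + 1,
\]
and a code is MDS precisely when equality holds.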
Word-level AutoCompletion (WLAC) is a rewarding yet challenging task in Computer-aided Translation. Existing work addresses this task with a classification model based on a neural network that maps the hidden vector of the input context to its corresponding label (i.e., the candidate target word is treated as a label). Since the context hidden vector itself does not take the label into account and is projected to the label through a linear classifier, the model cannot sufficiently leverage valuable information from the source sentence, as verified in our experiments, which eventually hinders its overall performance. To alleviate this issue, this work proposes an energy-based model for WLAC, which enables the context hidden vector to capture crucial information from the source sentence. Unfortunately, training and inference suffer from efficiency and effectiveness challenges, so we employ three simple yet effective strategies to put our model into practice. Experiments on four standard benchmarks demonstrate that our reranking-based approach achieves substantial improvements (about 6.07%) over the previous state-of-the-art model. Further analyses show that each strategy of our approach contributes to the final performance.
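As an illustration only, a minimal Python sketch of the reranking idea summarized above: a base classifier proposes a shortlist of candidate target words, and an energy model that jointly encodes the source sentence, the translation context, and each candidate rescores the shortlist. All names (base_model, energy_model) and the top-k scheme are hypothetical assumptions, not the paper's exact procedure.

import torch

def rerank_with_energy(base_model, energy_model, src, ctx, k=10):
    # Base classifier: context hidden vector -> distribution over the target vocabulary.
    logits = base_model(src, ctx)                    # shape: [vocab_size]
    _, topk_ids = torch.topk(logits, k)              # shortlist of k candidate words
    # The energy model scores each (source, context, candidate) triple jointly,
    # letting the candidate word interact with the source sentence (lower energy = better).
    energies = torch.stack([energy_model(src, ctx, w) for w in topk_ids])
    return topk_ids[torch.argmin(energies)]          # reranked best candidate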
In this work, we present a new stabilization method aimed at removing spurious oscillations in the pressure approximation of Biot's model for poroelasticity with low permeabilities and/or small time steps. We consider different finite-element discretizations and illustrate how not only does such a stabilized scheme provide numerical solutions that are free of non-physical oscillations, but it also allows one to iterate the fluid and mechanics problems in a fashion similar to the well-known fixed-stress split method. The resulting solution method is convergent without the necessity for additional terms to stabilize the iteration. Finally, we present numerical results illustrating the robust behavior of both the stabilization and iterative solver with respect to the physical and discretization parameters of the model.
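For context, a standard quasi-static two-field formulation of Biot's model in displacement $u$ and pressure $p$ (the paper's stabilization term is not reproduced here):
\[
\begin{aligned}
-\operatorname{div}\bigl(2\mu\,\varepsilon(u) + \lambda(\operatorname{div}u)\,I - \alpha\,p\,I\bigr) &= f,\\
\partial_t\Bigl(\tfrac{1}{M}\,p + \alpha\operatorname{div}u\Bigr) - \operatorname{div}\Bigl(\tfrac{\kappa}{\mu_f}\nabla p\Bigr) &= g,
\end{aligned}
\]
where $\mu,\lambda$ are the Lamé parameters, $\alpha$ the Biot-Willis coefficient, $M$ the Biot modulus, and $\kappa/\mu_f$ the hydraulic conductivity; the spurious pressure oscillations targeted here typically appear when $\kappa$ or the time step is small.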
In this paper, we present our approach to addressing the challenges of the 7th ABAW competition. The competition comprises three sub-challenges: Valence Arousal (VA) estimation, Expression (Expr) classification, and Action Unit (AU) detection. To tackle these challenges, we employ state-of-the-art models to extract powerful visual features. Subsequently, a Transformer Encoder is utilized to integrate these features for the VA, Expr, and AU sub-challenges. To mitigate the impact of varying feature dimensions, we introduce an affine module to align the features to a common dimension. Overall, our results significantly outperform the baselines.
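A minimal PyTorch sketch of the fusion pattern described above; the dimensions, layer counts, fusion rule, and module names are illustrative assumptions rather than the authors' exact configuration.

import torch.nn as nn

class AffineFusion(nn.Module):
    def __init__(self, feat_dims, d_model=256, nhead=4, num_layers=4):
        super().__init__()
        # Affine module: one linear projection per feature stream onto a common dimension.
        self.align = nn.ModuleList([nn.Linear(d, d_model) for d in feat_dims])
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)

    def forward(self, feats):  # feats: list of tensors, each of shape [batch, time, feat_dims[i]]
        aligned = [proj(f) for proj, f in zip(self.align, feats)]
        fused = sum(aligned)           # illustrative fusion of the aligned streams
        return self.encoder(fused)     # shared temporal representation for the VA/Expr/AU heads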
By utilizing recently developed tools for constructing gradient flows on Wasserstein spaces, we extend an analysis technique commonly employed to understand alternating minimization algorithms on Euclidean space to the Expectation Maximization (EM) algorithm via its representation as coordinate-wise minimization on the product of a Euclidean space and a space of probability distributions, due to Neal and Hinton (1998). In so doing, we obtain finite-sample error bounds and exponential convergence of the EM algorithm under a natural generalisation of a log-Sobolev inequality. We further demonstrate that the analysis technique is sufficiently flexible to also allow the analysis of several variants of the EM algorithm.
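For reference, the Neal-Hinton representation underlying this analysis casts EM as coordinate-wise minimization of a free-energy functional over the parameter $\theta$ and a distribution $q$ on the latent variable $Z$:
\[
F(\theta,q) \;=\; -\,\mathbb{E}_{q}\bigl[\log p(x,Z\mid\theta)\bigr] - \mathrm{H}(q)
\;=\; \mathrm{KL}\bigl(q \,\Vert\, p(\cdot\mid x,\theta)\bigr) - \log p(x\mid\theta),
\]
so the E-step ($q \leftarrow p(\cdot\mid x,\theta)$) and the M-step ($\theta \leftarrow \arg\min_\theta F(\theta,q)$) are exact coordinate minimizations of $F$; the log-Sobolev-type condition mentioned above controls the descent of this functional.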
We prove that binary even LCD codes and certain graphs are in one-to-one correspondence in a certain way. Furthermore, we show that adjacency matrices of non-isomorphic simple graphs give inequivalent binary LCD codes, and vice versa.
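For the reader's convenience, the standard notions involved (the precise graph correspondence is the paper's contribution and is not restated here): a linear code $C$ is LCD (linear complementary dual) when
\[
C \cap C^{\perp} = \{0\},
\]
equivalently, by Massey's criterion, when a generator matrix $G$ of $C$ satisfies $\det\bigl(GG^{T}\bigr) \neq 0$; the code is even when every codeword has even Hamming weight.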
This study addresses the use of Reed-Solomon error correction codes in QR codes to enhance resilience against failures. To fully grasp this approach, the basic cryptographic context necessary for understanding Reed-Solomon codes is provided. The study begins by defining a code and explores key results for codes with additional properties, such as linearity. The theoretical framework is further developed with specific definitions and examples of Reed-Solomon codes, presented as a particular variant of BCH codes. Additionally, the structure of QR codes is analyzed, encompassing the different versions and how data is represented as black and white pixels within an image. Finally, an inherent vulnerability of Reed-Solomon codes, and particularly of QR codes, related to the selective manipulation of modules is examined. This vulnerability exploits the error correction mechanisms present in Reed-Solomon codes.
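For reference, the classical evaluation-style definition of a Reed-Solomon code over $\mathbb{F}_q$ with distinct evaluation points $\alpha_1,\dots,\alpha_n$:
\[
\mathrm{RS}_q(n,k) \;=\; \bigl\{\,(f(\alpha_1),\dots,f(\alpha_n)) \;:\; f\in\mathbb{F}_q[x],\ \deg f < k\,\bigr\},
\]
an $[n,k,n-k+1]$ MDS code that corrects up to $\lfloor (n-k)/2\rfloor$ symbol errors; this correction capability is exactly what the module-manipulation vulnerability discussed here exploits.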
In this paper, we provide a complete classification of the first-order Goedel logics with respect to the property that formulas admit logically equivalent prenex normal forms. We show that the only first-order Goedel logics admitting such prenex forms are those with finite truth value sets, since these allow all quantifier-shift rules, and the logic $G_\uparrow$, whose infinite truth value set has its only accumulation point at 1. In all other cases, there are in general no logically equivalent prenex normal forms. We also show that $G_\uparrow$ is the intersection of all finite-valued first-order Goedel logics. The second part of this paper investigates the existence of an effective equivalence between the validity of a formula and the validity of some prenex normal form. The existence of such a normal form is obvious for finite-valued Goedel logics and $G_\uparrow$. Goedel logics with an uncountable truth value set admit such prenex normal forms if and only if every neighborhood of 0 is uncountable or 0 is an isolated point. Otherwise, uncountable Goedel logics are not recursively enumerable, while the prenex fragment is always recursively enumerable; therefore, there is no effective translation between valid formulas and valid prenex normal forms. The existence of effectively constructible validity-equivalent prenex forms in the countable case remains open.
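As an illustration of the quantifier-shift rules at stake, one direction that can fail once the truth value set is infinite (because infima need not be attained) is
\[
\bigl(\forall x\,A(x) \rightarrow B\bigr) \;\rightarrow\; \exists x\bigl(A(x)\rightarrow B\bigr),
\]
while the converse implication is valid in every Goedel logic; over a finite truth value set all such shifts, and hence prenexation, go through.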
In computational practice, we often encounter situations where only measurements at equally spaced points are available. Using standard polynomial interpolation in such cases can lead to highly inaccurate results due to numerical ill-conditioning of the problem. Several techniques have been developed to mitigate this issue, such as the mock-Chebyshev subset interpolation and the constrained mock-Chebyshev least-squares approximation. The high accuracy and the numerical stability achieved by these techniques motivate us to extend these methods to histopolation, a polynomial interpolation method based on segmental function averages. While classical polynomial interpolation relies on function evaluations at specific nodes, histopolation leverages averages of the function over subintervals. In this work, we introduce three types of mock-Chebyshev approaches for segmental interpolation and theoretically analyse the stability of their Lebesgue constants, which measure the numerical conditioning of the histopolation problem under small perturbations of the segments. We demonstrate that these segmental mock-Chebyshev approaches yield a quasi-optimal logarithmic growth of the Lebesgue constant in relevant scenarios. Additionally, we compare the performance of these new approximation techniques through various numerical experiments.
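For orientation, the nodal analogue of the quantity analysed here: for interpolation at nodes $x_0,\dots,x_n$ with Lagrange basis $\ell_0,\dots,\ell_n$, the Lebesgue constant is
\[
\Lambda_n \;=\; \max_{x\in[a,b]} \sum_{i=0}^{n} \bigl|\ell_i(x)\bigr|,
\]
which grows exponentially for equispaced nodes but only logarithmically for Chebyshev nodes; the mock-Chebyshev idea selects, from the equispaced grid, the points closest to the Chebyshev nodes, and the segmental variants introduced here play the analogous role for histopolation, where the data are the segment averages $\frac{1}{x_{i+1}-x_i}\int_{x_i}^{x_{i+1}} f(t)\,dt$.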
In this paper, we provide a complete classification for the first-order G\"odel logics concerning the property that the formulas admit logically equivalent prenex normal forms. We show that the only first-order G\"odel logics that admit such prenex forms are those with finite truth value sets since they allow all quantifier-shift rules and the logic \(G_\uparrow\) with only one accumulation point at $1$ in the infinite truth values set. In all the other cases, there are generally no logically equivalent prenex normal forms. We will also see that \(G_\uparrow\) is the intersection of all finite first-order G\"odel logics.\\ The second part of this paper investigates the existence of effective equivalence between the validity of a formula and the validity of some prenex normal form. The existence of such a normal form is obvious for finite valued G\"odel logic and \(G_\uparrow\). G\"odel logics with an uncountable truth value set admit the prenex normal forms if and only if every surrounding of \(0\) is uncountable or \(0\) is an isolated point. Otherwise, uncountable G\"odel logics are not recursively enumerable, however, the prenex fragment is always recursively enumerable. Therefore, there is no effective translation between the valid formula and the valid prenex normal form. However, the existence of effectively constructible validity equivalent prenex forms for the countable case is still up for debate.
In this work, we introduce DOPRA, a novel approach designed to mitigate hallucinations in multi-modal large language models (MLLMs). Unlike existing solutions, which typically involve costly supplementary training data or the integration of external knowledge sources, DOPRA addresses hallucinations through decoding-time weighted layer penalties and redistribution, offering an economical and effective solution that requires no additional resources. DOPRA is grounded in insights into the intrinsic mechanisms governing hallucinations within MLLMs, in particular the models' tendency to over-rely on a subset of summary tokens in the self-attention matrix while neglecting critical image-related information; this phenomenon is particularly pronounced in certain layers. To counteract this over-reliance, DOPRA applies weighted overlay penalties and redistribution in specific layers, such as the 12th layer, during the decoding process. Furthermore, DOPRA includes a retrospective allocation process that re-examines the sequence of generated tokens, allowing the algorithm to reallocate token selection so that it better aligns with the actual image content, thereby reducing the incidence of hallucinatory descriptions in auto-generated captions. Overall, DOPRA represents a significant step forward in improving the output quality of MLLMs by systematically reducing hallucinations through targeted adjustments during the decoding process.
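A heavily simplified, hypothetical Python sketch of the kind of decoding-time intervention described above, namely down-weighting candidates whose newest token concentrates its self-attention on a few summary tokens at a chosen layer; the function name, scoring rule, and penalty weight are illustrative assumptions, not DOPRA's actual algorithm.

import torch

def penalized_score(logprob, attn_layer, summary_positions, alpha=1.0):
    # logprob: log-probability of a candidate continuation under the MLLM
    # attn_layer: [num_heads, seq_len, seq_len] self-attention weights at the chosen layer
    # summary_positions: indices of the summary tokens that attract excessive attention
    over_reliance = attn_layer[:, -1, summary_positions].mean()  # attention mass of the newest token
    return logprob - alpha * over_reliance  # penalize over-reliance on summary tokens

In a beam-search loop, each candidate would be rescored this way, and a retrospective allocation step would roll back and re-select tokens once the accumulated over-reliance crosses a threshold.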