In this paper, we extend the work of Liesen et al. (2002), which analyzes how the condition number of a matrix Q with orthonormal columns changes when a column c is appended to form [Q, c], with particular attention to the orthogonality of c to the range of Q. Their result, presented in Theorem 2.3 of Liesen et al. (2002), assumes exact arithmetic and exact orthonormality of Q, which are strong assumptions when these results are applied to numerical methods such as QR factorization algorithms. In our work, we address this gap by deriving bounds on the increase of the condition number for a matrix B without assuming perfect orthonormality, even when the appended column is not perfectly orthogonal to the range of B. This framework allows us to analyze QR factorization methods in which orthogonalization is imperfect and subject to Gaussian noise. We also provide results on the behavior of orthogonal projection and least squares under Gaussian noise, further supporting the development of this theory.
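As a minimal numerical illustration of the quantity studied here (not of the paper's bounds), the sketch below compares the condition number of [Q, c] when the appended column c is nearly orthogonal to the range of Q versus nearly contained in it; the matrix sizes, noise level, and random seed are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Q: 100 x 5 matrix with orthonormal columns (obtained from a QR factorization).
Q, _ = np.linalg.qr(rng.standard_normal((100, 5)))

def cond_after_append(Q, c):
    """Condition number of [Q, c] with the appended column normalized."""
    c = c / np.linalg.norm(c)
    return np.linalg.cond(np.hstack([Q, c[:, None]]))

# Case 1: c is (numerically) orthogonal to range(Q).
c_perp = rng.standard_normal(100)
c_perp -= Q @ (Q.T @ c_perp)          # remove the component lying in range(Q)
print(cond_after_append(Q, c_perp))    # close to 1, since Q has orthonormal columns

# Case 2: c lies almost entirely in range(Q), plus small Gaussian noise.
c_in = Q @ rng.standard_normal(5) + 1e-6 * rng.standard_normal(100)
print(cond_after_append(Q, c_in))      # much larger condition number
```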
Modern software for propositional satisfiability problems provides a powerful automated reasoning toolkit, capable of outputting not only a satisfiable/unsatisfiable verdict but also a justification of unsatisfiability in the form of a resolution proof (or a more expressive proof), which is commonly used for verification purposes. Empirically, modern SAT solvers produce relatively short proofs; however, there is no inherent guarantee that these proofs cannot be shortened significantly. This paper proposes a novel branch-and-bound algorithm for finding shortest resolution proofs; to this end, we introduce a layer list representation of proofs that groups clauses by their level of indirection. As we show, this representation breaks all permutational symmetries, thereby improving upon state-of-the-art symmetry breaking and informing the design of a novel workflow for proof minimization. In addition, we design pruning procedures that reason about proof length lower bounds, clause subsumption, and dominance. Our experiments suggest that the proofs from state-of-the-art solvers can be shortened by 30-60% on instances from SAT Competition 2002 and by 25-50% on small synthetic formulas. When treated as an algorithm for finding a shortest proof, our approach solves twice as many instances as previous work based on SAT solving and reduces the time to optimality by orders of magnitude on the instances solved by both approaches.
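For readers less familiar with resolution, the sketch below shows the single inference rule whose applications determine proof length: resolving two clauses on a complementary pair of literals. The clause encoding and example formula are illustrative only and do not reflect the paper's layer-list data structure or search procedure.

```python
from typing import FrozenSet, Optional

Clause = FrozenSet[int]  # DIMACS-style literals: variable v is v, its negation is -v

def resolve(c1: Clause, c2: Clause, var: int) -> Optional[Clause]:
    """Resolve c1 and c2 on variable var, if they contain complementary literals on var."""
    if var in c1 and -var in c2:
        return (c1 - {var}) | (c2 - {-var})
    if -var in c1 and var in c2:
        return (c1 - {-var}) | (c2 - {var})
    return None

# A refutation of {x1 v x2, -x1 v x2, -x2} with proof length 2 (two derived clauses).
c1, c2, c3 = frozenset({1, 2}), frozenset({-1, 2}), frozenset({-2})
d1 = resolve(c1, c2, 1)   # {x2}
d2 = resolve(d1, c3, 2)   # empty clause: unsatisfiability established
print(d1, d2, d2 == frozenset())
```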
This paper constructs optimal quaternary additive codes with non-integer dimensions. First, we propose combinatorial constructions of quaternary additive constant-weight codes, alongside an additive generalized anticode construction. We then propose a generalized Construction X, which facilitates the construction of optimal additive codes of non-integer dimension from linear codes. Using these two methods, we construct ten classes of optimal quaternary additive codes of non-integer dimension. As an application, we also determine the optimal additive $[n,3.5,n-t]_4$ codes for all $t$ and variable $n$, except for $t=6,7,12$.
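For context, the following is a standard fact about additive codes over $\mathbb{F}_4$ (not a result of this paper): an additive quaternary code is only required to be closed under addition, i.e. to be $\mathbb{F}_2$-linear, so its size is a power of $2$ and its dimension may be a half-integer,
$$C \subseteq \mathbb{F}_4^n \ \text{additive} \iff C \ \text{is an } \mathbb{F}_2\text{-subspace}, \qquad |C| = 2^m, \qquad k := \log_4 |C| = \tfrac{m}{2} \in \tfrac{1}{2}\mathbb{Z}_{\ge 0}.$$
For example, dimension $k = 3.5$ corresponds to $|C| = 4^{3.5} = 2^7 = 128$ codewords.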
In this paper, we discuss adaptive approximations of an elliptic eigenvalue optimization problem in a phase-field setting by a conforming finite element method. An adaptive algorithm is proposed and implemented in several two-dimensional numerical examples to illustrate its efficiency and accuracy. Our theoretical findings are that a subsequence of the estimators vanishes in the limit and that the corresponding subsequence of adaptively generated solutions converges to a solution of the continuous optimality system.
This study focuses on Intelligent Fault Diagnosis (IFD) in rotating machinery using a single microphone and a data-driven methodology, effectively diagnosing 42 classes of fault types and severities. The research leverages sound data from the imbalanced MaFaulDa dataset, aiming to strike a balance between high performance and low resource consumption. The testing phase covered a variety of configurations, including sampling, quantization, signal normalization, silence removal, Wiener filtering, data scaling, windowing, augmentation, and classifier tuning with XGBoost. Analyzing time, frequency, mel-frequency, and statistical features, we achieved an accuracy of 99.54% and an F-beta score of 99.52% with just 6 boosting trees at an 8 kHz, 8-bit configuration. Moreover, when using only MFCCs along with their first- and second-order deltas, we recorded an accuracy of 97.83% and an F-beta score of 97.67%. Lastly, by applying a greedy wrapper approach, we obtained an accuracy of 96.82% and an F-beta score of 98.86% using 50 selected features, nearly all of which were first- and second-order deltas of the MFCCs.
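A minimal sketch of the MFCC-plus-deltas feature pipeline described above, assuming hypothetical file names and labels; the MaFaulDa preprocessing, silence removal, filtering, and tuning steps are omitted, and only the 8 kHz sampling rate and the 6 boosting trees mirror the configuration reported in the abstract.

```python
import librosa
import numpy as np
from xgboost import XGBClassifier

def mfcc_delta_features(path: str, sr: int = 8000, n_mfcc: int = 13) -> np.ndarray:
    """Mean-pooled MFCCs plus their first- and second-order deltas for one recording."""
    y, sr = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    d1 = librosa.feature.delta(mfcc)            # first-order deltas
    d2 = librosa.feature.delta(mfcc, order=2)   # second-order deltas
    return np.concatenate([mfcc, d1, d2]).mean(axis=1)

# 'files' and 'labels' are hypothetical placeholders for the MaFaulDa recordings and classes.
files, labels = ["rec_001.wav", "rec_002.wav"], [0, 1]
X = np.stack([mfcc_delta_features(f) for f in files])
clf = XGBClassifier(n_estimators=6)  # 6 boosting trees, as in the reported configuration
clf.fit(X, labels)
```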
This paper introduces a collaborative, human-centred taxonomy of AI, algorithmic, and automation harms. We argue that existing taxonomies, while valuable, can be narrow and unclear, typically cater to practitioners and government, and often overlook the needs of the wider public. Drawing on existing taxonomies and a large repository of documented incidents, we propose a taxonomy that is clear and understandable to a broad set of audiences, as well as flexible, extensible, and interoperable. Through iterative refinement with topic experts and crowdsourced annotation testing, the resulting taxonomy can serve as a powerful tool for civil society organisations, educators, policymakers, product teams, and the general public. By fostering a greater understanding of the real-world harms of AI and related technologies, we aim to empower NGOs and individuals to identify and report violations, inform policy discussions, and encourage responsible technology development and deployment.
This study explores the effectiveness of Large Language Models (LLMs) in meal planning, focusing on their ability to identify and decompose compound ingredients. We evaluated three models, GPT-4o, Llama-3 (70B), and Mixtral (8x7B), to assess their proficiency in recognizing and breaking down complex ingredient combinations. Preliminary results indicate that while Llama-3 (70B) and GPT-4o excel at accurate decomposition, all models have difficulty identifying essential elements such as seasonings and oils. Despite strong overall performance, variations in accuracy and completeness were observed across models. These findings underscore LLMs' potential to enhance personalized nutrition but highlight the need for further refinement in ingredient decomposition. Future research should address these limitations to improve nutritional recommendations and health outcomes.
We give a comprehensive description of Wasserstein gradient flows of maximum mean discrepancy (MMD) functionals $\mathcal F_\nu := \text{MMD}_K^2(\cdot, \nu)$ towards given target measures $\nu$ on the real line, where we focus on the negative distance kernel $K(x,y) := -|x-y|$. In one dimension, the Wasserstein-2 space can be isometrically embedded into the cone $\mathcal C(0,1) \subset L_2(0,1)$ of quantile functions, leading to a characterization of Wasserstein gradient flows via the solution of an associated Cauchy problem on $L_2(0,1)$. Based on the construction of an appropriate counterpart of $\mathcal F_\nu$ on $L_2(0,1)$ and its subdifferential, we provide a solution of the Cauchy problem. For discrete target measures $\nu$, this results in a piecewise linear solution formula. We prove invariance and smoothing properties of the flow on subsets of $\mathcal C(0,1)$. For certain $\mathcal F_\nu$-flows this implies that initial point measures instantly become absolutely continuous and stay so over time. Finally, we illustrate the behavior of the flow by various numerical examples using an implicit Euler scheme, which is easily computable by a bisection algorithm. For continuous targets $\nu$, the explicit Euler scheme can also be employed, although with limited convergence guarantees.
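As a reminder of the standard one-dimensional embedding referred to above (a well-known fact, not specific to this paper): on the real line, the Wasserstein-2 distance between two measures equals the $L_2(0,1)$ distance of their quantile functions, so the gradient flow of $\mathcal F_\nu$ can be recast as a Cauchy problem in $L_2(0,1)$,
$$W_2^2(\mu,\nu) = \int_0^1 \bigl|F_\mu^{-1}(s) - F_\nu^{-1}(s)\bigr|^2 \,\mathrm{d}s, \qquad \dot u(t) \in -\partial \mathcal F\bigl(u(t)\bigr), \quad u(0) = F_{\mu_0}^{-1},$$
where $F_\mu^{-1} \in \mathcal C(0,1)$ denotes the quantile function of $\mu$ and $\mathcal F$ is the counterpart of $\mathcal F_\nu$ on $L_2(0,1)$. The implicit Euler scheme then iterates the proximal step $u^{k+1} = \operatorname{prox}_{\tau \mathcal F}(u^k)$ with step size $\tau > 0$.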
The recent introduction of geometric partition entropy brought a new viewpoint to non-parametric entropy quantification that incorporates the impact of informative outliers, but its original formulation was limited to a one-dimensional state space. A generalized definition of geometric partition entropy is now provided for samples within a bounded (finite-measure) region of a d-dimensional vector space. The basic definition invokes the concept of a Voronoi diagram, but the computational complexity and reliability of Voronoi diagrams in high dimensions make estimation by direct computation impractical. This leads to the development of approximation schemes that are faster than current methods by orders of magnitude. The partition intersection ($\pi$) approximation, in particular, enables direct estimates of marginal entropy in any context, resulting in an efficient and versatile mutual information estimator. This new measure-based paradigm for data-driven information theory allows flexibility in how geometry is incorporated to vary the representation of outlier impact, which significantly broadens the applicability of established entropy-based concepts. The incorporation of informative outliers is illustrated through an analysis of transient dynamics in the synchronization of coupled chaotic dynamical systems.
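As rough orientation only (the abstract does not state the definition, so the form below is an assumed generic shape of a measure-based partition entropy rather than the paper's formula): given a partition $\{A_i\}_{i=1}^n$ of a bounded region $\Omega$ induced by the sample, e.g. Voronoi cells, one forms weights from the Lebesgue measure $\lambda$ of the cells and takes a Shannon-type sum,
$$p_i = \frac{\lambda(A_i)}{\lambda(\Omega)}, \qquad H\bigl(\{A_i\}_{i=1}^n\bigr) = -\sum_{i=1}^{n} p_i \log p_i.$$
Under such a form, large, sparsely populated cells, which are typically created by outlying samples, contribute noticeably to the estimate; this is, roughly, the sense in which outliers are treated as informative rather than discarded.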
This paper presents a comparative analysis of hallucination detection systems for AI, focusing on automatic summarization and question answering tasks for Large Language Models (LLMs). We evaluate different hallucination detection systems using the diagnostic odds ratio (DOR) and cost-effectiveness metrics. Our results indicate that although advanced models can perform better, they come at a much higher cost. We also demonstrate that an ideal hallucination detection system needs to maintain performance across different model sizes. Our findings highlight the importance of choosing a detection system aligned with specific application needs and resource constraints. Future research will explore hybrid systems and automated identification of underperforming components to enhance AI reliability and efficiency in detecting and mitigating hallucinations.
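For reference, the diagnostic odds ratio used above is the standard confusion-matrix statistic (a textbook definition, not specific to this paper): with true/false positive and negative counts TP, FP, TN, FN,
$$\mathrm{DOR} = \frac{\mathrm{TP}/\mathrm{FN}}{\mathrm{FP}/\mathrm{TN}} = \frac{\mathrm{TP}\cdot\mathrm{TN}}{\mathrm{FP}\cdot\mathrm{FN}} = \frac{\text{sensitivity}/(1-\text{sensitivity})}{(1-\text{specificity})/\text{specificity}},$$
so a detector discriminates better the larger its DOR, with $\mathrm{DOR} = 1$ corresponding to no discriminative ability.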
By presenting curved surfaces of various curvatures, including edges, to the fingertip, it is possible to reproduce the haptic sensation of object shapes that flat surfaces alone cannot convey, such as spheres and rectangular objects. In this paper, we propose a method of presenting curved surfaces by using the acoustic radiation pressure of ultrasound to control the inclination of a disk in contact with the finger pad. The user only needs to wear a lightweight device on the fingertip, so the tactile presentation imposes a low physical burden. In the demonstration, users can experience the sensation of stroking an edge and surfaces of different curvatures.