The satisfiability problem is NP-complete, but there are subclasses, obtained by restricting the shape of the formula, in which every instance is satisfiable. Darmann and D\"ocker show that the subclass MONOTONE $3$-SAT-($k$,1) is NP-complete for $k \geq 5$ and pose the open question of whether all instances of MONOTONE $3$-SAT-(3,1) are satisfiable. This paper shows that all instances of MONOTONE $3$-SAT-(3,1) are satisfiable, using the new concept of color-structures.
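For concreteness, here is a small instance of this class (my own illustration, not taken from the paper): every clause is monotone (all-positive or all-negative), and each variable occurs exactly three times unnegated and exactly once negated, e.g.
\[
(x_1 \lor x_2 \lor x_3) \land (x_4 \lor x_5 \lor x_6) \land (x_1 \lor x_2 \lor x_4) \land (x_3 \lor x_5 \lor x_6) \land (x_1 \lor x_3 \lor x_5) \land (x_2 \lor x_4 \lor x_6) \land (\bar{x}_1 \lor \bar{x}_2 \lor \bar{x}_3) \land (\bar{x}_4 \lor \bar{x}_5 \lor \bar{x}_6),
\]
which, in line with the result above, is satisfiable: set $x_1 = x_4 = x_6$ to true and the remaining variables to false.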
We discuss Cartan-Schouten metrics (Riemannian or pseudo-Riemannian metrics that are parallel with respect to the Cartan-Schouten canonical connection) on perfect Lie groups. Applications are foreseen in Information Geometry. Throughout this work, the tangent bundle TG and the cotangent bundle T*G of a Lie group G are always endowed with their Lie group structures induced by the right trivialization. We show that TG and T*G are isomorphic if G possesses a biinvariant Riemannian or pseudo-Riemannian metric. We also show that, if a perfect Lie group carries a Cartan-Schouten metric, then that metric must be biinvariant. We compute all such metrics on the cotangent bundles of simple Lie groups. We further show the following. Endowed with their canonical Lie group structures, the group of unit dual quaternions is isomorphic to TSU(2), and the group of unit dual split quaternions is isomorphic to T*SL(2,R). The group SE(3) of special rigid displacements of the Euclidean 3-space is isomorphic to T*SO(3), and the group SE(2,1) of special rigid displacements of the Minkowski 3-space is isomorphic to T*SO(2,1). Some results on SE(3) by N. Miolane and X. Pennec, and by M. Zefran, V. Kumar and C. Croke, are generalized to SE(2,1) and to T*G, for any simple Lie group G.
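For reference, on a Lie group G with Lie algebra \mathfrak{g}, the canonical Cartan-Schouten connection referred to above is, in its usual (0)-connection form (this reading is mine; the (-) and (+) connections are the other classical choices), given on left-invariant vector fields by
\[
\nabla_X Y = \tfrac{1}{2}\,[X, Y], \qquad X, Y \in \mathfrak{g},
\]
and a Cartan-Schouten metric in the sense above is a (pseudo-)Riemannian metric $\langle \cdot,\cdot \rangle$ satisfying $\nabla \langle \cdot,\cdot \rangle = 0$.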
Defining a successful notion of a multivariate quantile has been an open problem for more than half a century, motivating a plethora of possible solutions. Of these, the approach of [8] and [25] leading to M-quantiles is very appealing for its mathematical elegance, combining elements of convex analysis and probability theory. The key idea is the description of a convex function (the K-function) whose gradient (the K-transform) defines a one-to-one correspondence between all of R^d and the unit ball in R^d. By analogy with the d=1 case, where the K-transform is a cumulative distribution function-like object (an M-distribution), the fact that its inverse is guaranteed to exist lends itself naturally to providing the basis for the definition of a quantile function for all d>=1. Over the past twenty years the resulting M-quantiles have seen applications in a variety of fields, primarily for the purpose of detecting outliers in multidimensional spaces. In this article we prove that for odd d>=3, it is not the gradient but a poly-Laplacian of the K-function that is (almost everywhere) proportional to the density function. For even d, one cannot establish a differential equation connecting the K-function with the density. These results show that the use of the K-transform for outlier detection in higher odd dimensions is in principle flawed, as the K-transform does not originate from inversion of a true M-distribution. We demonstrate these conclusions in two dimensions through examples from non-standard asymmetric distributions. Our examples illustrate a feature of the K-transform whereby regions in the domain with higher density map to larger volumes in the co-domain, thereby producing a magnification effect that moves inliers closer to the boundary of the co-domain than outliers. This feature obviously disrupts any outlier detection mechanism that relies on the inverse K-transform.
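As a concrete instance (the spatial, i.e. Euclidean-norm, case; the general M-quantile construction replaces the norm by another convex function), the K-function and K-transform can be written
\[
K(x) = \mathbb{E}\big(\lVert x - X \rVert - \lVert X \rVert\big), \qquad \nabla K(x) = \mathbb{E}\,\frac{x - X}{\lVert x - X \rVert},
\]
with $\nabla K$ mapping R^d into the unit ball; the result above says that for odd $d \geq 3$ it is the poly-Laplacian $\Delta^{(d+1)/2} K$, not $\nabla K$, that is almost everywhere proportional to the density.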
Global seismicity on all three solar-system bodies with in situ measurements (Earth, Moon, and Mars) is due mainly to mechanical Rieger resonance (RR) of the solar wind's macroscopic flapping, driven by the well-known PRg =~ 154-day Rieger period and detected commonly in most heliophysical data types and in the interplanetary magnetic field (IMF). Thus, InSight mission marsquake rates are periodic with PRg, as characterized by a very high (>>12) fidelity {\Phi} = 2.8 x 10^6 and by being the only >99%-significant spectral peak in the 385.8-64.3-nHz (1-180-day) band of highest planetary energies; the longest-span (v.9) release of raw data revealed the entire RR, ruling out a tectonically active Mars. As a check, I analyze rates of Oct 2015-Feb 2019 Mw5.6+ earthquakes and of all (1969-1977) Apollo mission moonquakes. To decouple magnetosphere and IMF effects, I study Earth and Moon seismicity during traversals of the Earth's magnetotail vs. the IMF. The analysis showed, with >99-67% confidence and {\Phi}>>12 fidelity, that (an unspecified majority of) moonquakes and Mw5.6+ earthquakes also recur at Rieger periods. About half of the spectral peaks split, but into clusters that average to the usual Rieger periodicities, and magnetotail reconnection clears the signal. Moonquakes are mostly forced at times of solar-wind resonance, and not just during tides as previously and simplistically believed. Earlier claims that solar plasma dynamics could be seismogenic are confirmed. This result calls for reinterpreting the seismicity phenomenon and for reliance on global magnitude scales. Predictability of solar-wind macroscopic dynamics is now within reach, which paves the way for long-term, physics-based seismic and space-weather prediction and for the safety of space missions. Gauss-Vanicek Spectral Analysis revolutionizes geophysics by computing nonlinear global dynamics directly (rendering the approximation of dynamics obsolete).
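As a minimal sketch of the least-squares spectral analysis idea behind the Gauss-Vanicek method (not the full method, which also provides the significance levels and the fidelity {\Phi} quoted above): for each trial frequency, fit a sinusoid by least squares to the arbitrarily spaced series and record the fraction of variance it explains.

    import numpy as np

    def lssa_power(t, y, freqs):
        # Least-squares spectral analysis: fraction of the series' variance
        # explained by a best-fit sinusoid at each trial frequency, valid for
        # unevenly spaced observation times t (no interpolation needed).
        y = np.asarray(y, dtype=float) - np.mean(y)
        power = []
        for f in freqs:
            w = 2.0 * np.pi * f
            A = np.column_stack([np.cos(w * t), np.sin(w * t)])
            coef, *_ = np.linalg.lstsq(A, y, rcond=None)
            fit = A @ coef
            power.append((fit @ fit) / (y @ y))
        return np.array(power)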
A well-balanced second-order finite volume scheme is proposed and analyzed for a $2 \times 2$ system of non-linear partial differential equations which describes the dynamics of growing sandpiles created by a vertical source on a flat, bounded rectangular table in multiple dimensions. To derive a second-order scheme, we combine a MUSCL-type spatial reconstruction with a strong-stability-preserving Runge-Kutta time-stepping method. The resulting scheme is ensured to be well-balanced through a modified limiting approach that allows the scheme to reduce to the well-balanced first-order scheme near the steady state while maintaining second-order accuracy away from it. The well-balanced property of the scheme is proven analytically in one dimension and demonstrated numerically in two dimensions. Additionally, numerical experiments reveal that the second-order scheme reduces finite-time oscillations, requires fewer time iterations to reach the steady state, and resolves the physical structure of the sandpile more sharply than the existing first-order schemes in the literature.
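The scheme itself targets the $2 \times 2$ sandpile system with well-balanced limiting; purely as a generic sketch of the two named ingredients (MUSCL reconstruction with a minmod limiter and SSP Runge-Kutta stepping), here is the standard combination for a 1D periodic scalar conservation law, with Burgers flux as a stand-in.

    import numpy as np

    def minmod(a, b):
        # Minmod slope limiter: pick the smaller slope when signs agree, else 0.
        return np.where(a * b > 0.0,
                        np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

    def muscl_rhs(u, dx, f=lambda u: 0.5 * u**2, alpha=1.0):
        # Semi-discrete MUSCL update rate for u_t + f(u)_x = 0 (periodic),
        # with piecewise-linear reconstruction and local Lax-Friedrichs flux.
        s = minmod(np.roll(u, -1) - u, u - np.roll(u, 1))
        uL = u + 0.5 * s                    # left state at interface i+1/2
        uR = np.roll(u - 0.5 * s, -1)       # right state at interface i+1/2
        F = 0.5 * (f(uL) + f(uR)) - 0.5 * alpha * (uR - uL)
        return -(F - np.roll(F, 1)) / dx

    def ssprk2_step(u, dx, dt):
        # Second-order strong-stability-preserving Runge-Kutta (Heun) step.
        u1 = u + dt * muscl_rhs(u, dx)
        return 0.5 * u + 0.5 * (u1 + dt * muscl_rhs(u1, dx))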
Datasets containing both categorical and continuous variables are frequently encountered in many areas, and with the rapid development of modern measurement technologies, the dimensions of these variables can be very high. Despite recent progress in modelling high-dimensional data for continuous variables, there is a scarcity of methods that can deal with a mixed set of variables. To fill this gap, this paper develops a novel approach for classifying high-dimensional observations with mixed variables. Our framework builds on a location model, in which the distributions of the continuous variables conditional on the categorical ones are assumed Gaussian. We overcome the challenge of having to split the data into exponentially many cells, i.e., combinations of the categorical variables, by kernel smoothing, and provide new perspectives on its bandwidth choice to ensure an analogue of Bochner's Lemma, which is different from the usual bias-variance tradeoff. We show that the two sets of parameters in our model can be separately estimated, and provide penalized likelihood methods for their estimation. Results on the estimation accuracy and the misclassification rates are established, and the competitive performance of the proposed classifier is illustrated by extensive simulation and real data studies.
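The abstract does not specify the kernel; a classical choice for smoothing across categorical cells, which could serve as a stand-in here, is the Aitchison-Aitken kernel. A hypothetical illustration, assuming all q categorical variables share c levels:

    import numpy as np

    def aitchison_aitken_weights(z, cells, lam, c):
        # Smoothing weights from a query cell z to all observed cells, using
        # the Aitchison-Aitken kernel: per coordinate, weight lam on a match
        # and (1 - lam) / (c - 1) on a mismatch, multiplied across coordinates.
        z = np.asarray(z)
        cells = np.asarray(cells)
        mismatches = (cells != z).sum(axis=1)   # Hamming distance to z
        q = z.size
        w = lam ** (q - mismatches) * ((1.0 - lam) / (c - 1)) ** mismatches
        return w / w.sum()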
Accurate uncertainty quantification is crucial for the safe deployment of language models (LMs), and prior research has demonstrated improvements in the calibration of modern LMs. Our study focuses on in-context learning (ICL), a prevalent method for adapting static LMs through tailored prompts, and examines the balance between performance and calibration across a broad spectrum of natural language understanding and reasoning tasks. Through comprehensive experiments, we observe that, as the number of ICL examples increases, models initially exhibit increased miscalibration before achieving better calibration, and that miscalibration tends to arise in low-shot settings. Moreover, we find that methods aimed at improving usability, such as fine-tuning and chain-of-thought (CoT) prompting, can lead to miscalibration and unreliable natural language explanations, suggesting that new methods may be required for scenarios where models are expected to be reliable.
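The calibration metric is not named above; the standard choice in this literature is the expected calibration error (ECE), sketched below.

    import numpy as np

    def expected_calibration_error(confidences, correct, n_bins=10):
        # Standard ECE: bin predictions by confidence, then average the gap
        # between mean confidence and empirical accuracy, weighted by bin size.
        confidences = np.asarray(confidences, dtype=float)
        correct = np.asarray(correct, dtype=float)
        edges = np.linspace(0.0, 1.0, n_bins + 1)
        ece = 0.0
        for lo, hi in zip(edges[:-1], edges[1:]):
            mask = (confidences > lo) & (confidences <= hi)
            if mask.any():
                gap = abs(confidences[mask].mean() - correct[mask].mean())
                ece += mask.mean() * gap
        return ece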
Query-focused summarization (QFS) aims to provide a summary of a single document or of multiple documents that satisfies the information need of a given query. It is useful for various real-world applications, such as abstractive snippet generation or the more recent retrieval-augmented generation (RAG). A prototypical QFS pipeline consists of a retriever (sparse or dense retrieval) and a generator (usually a large language model). However, applying large language models (LLMs) potentially leads to hallucinations, especially when the evidence contradicts the prior beliefs of the LLM. There has been growing interest in developing new decoding methods to improve generation quality and reduce hallucination. In this work, we conduct a large-scale reproducibility study of one recently proposed decoding method -- Context-aware Decoding (CAD). In addition to replicating CAD's experiments on news summarization datasets, we include experiments on QFS datasets and conduct a more rigorous analysis of computational complexity and hyperparameter sensitivity. Experiments with eight different language models show that, performance-wise, CAD improves QFS quality by (1) reducing factuality errors/hallucinations while (2) mostly retaining the match of lexical patterns, as measured by ROUGE scores, at the cost of increased inference-time FLOPs and reduced decoding speed. The code implementation, based on the Huggingface Library, is made available at //github.com/zhichaoxu-shufe/context-aware-decoding-qfs
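For reference, CAD adjusts the next-token distribution by contrasting a context-conditioned forward pass with a context-free one (the contrast formula follows the original CAD proposal; alpha is its contrast weight):

    import numpy as np

    def cad_adjusted_logits(logits_ctx, logits_noctx, alpha=0.5):
        # Context-aware decoding: amplify what the context contributes by
        # contrasting logits with and without the evidence context. The next
        # token is then sampled (or argmaxed) from softmax of the result.
        return (1.0 + alpha) * logits_ctx - alpha * logits_noctx

    def softmax(z):
        z = z - z.max()
        e = np.exp(z)
        return e / e.sum()

Note the extra cost mentioned above: every decoding step requires two forward passes (with and without context) instead of one.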
By abstracting over well-known properties of De Bruijn's representation with nameless dummies, we design a new theory of syntax with variable binding and capture-avoiding substitution. We propose it as a simpler alternative to Fiore, Plotkin, and Turi's approach, with which we establish a strong formal link. We also show that our theory easily incorporates simple types and equations between terms.
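For readers unfamiliar with the representation, a minimal sketch of terms with nameless dummies and capture-avoiding substitution (the textbook construction that the abstract generalizes, not the paper's theory itself):

    from dataclasses import dataclass

    @dataclass
    class Var:          # bound variable as a numeric (de Bruijn) index
        k: int

    @dataclass
    class Lam:          # abstraction; its binder is implicit
        body: "Term"

    @dataclass
    class App:          # application
        fun: "Term"
        arg: "Term"

    Term = Var | Lam | App

    def shift(t, d, cutoff=0):
        # Add d to every free index (those >= cutoff).
        match t:
            case Var(k):
                return Var(k + d) if k >= cutoff else t
            case Lam(b):
                return Lam(shift(b, d, cutoff + 1))
            case App(f, a):
                return App(shift(f, d, cutoff), shift(a, d, cutoff))

    def subst(t, j, s):
        # Capture-avoiding substitution of s for index j in t.
        match t:
            case Var(k):
                return s if k == j else t
            case Lam(b):
                return Lam(subst(b, j + 1, shift(s, 1)))
            case App(f, a):
                return App(subst(f, j, s), subst(a, j, s))

    def beta(lam, arg):
        # Beta-reduce (\ . body) arg without variable capture.
        return shift(subst(lam.body, 0, shift(arg, 1)), -1)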
Defensive deception is a promising approach for cyberdefense. Although defensive deception is increasingly popular in the research community, there has not been a systematic investigation of its key components, the underlying principles, and its tradeoffs in various problem settings. This survey paper focuses on defensive deception research centered on game theory and machine learning, since these are prominent families of artificial intelligence approaches that are widely employed in defensive deception. This paper brings forth insights, lessons, and limitations from prior work. It closes with an outline of some research directions to tackle major gaps in current defensive deception research.
Most existing works in visual question answering (VQA) are dedicated to improving the accuracy of predicted answers while disregarding explanations. We argue that the explanation for an answer is of equal or even greater importance than the answer itself, since it makes the question-answering process more understandable and traceable. To this end, we propose a new task of VQA-E (VQA with Explanation), in which computational models are required to generate an explanation alongside the predicted answer. We first construct a new dataset and then frame the VQA-E problem in a multi-task learning architecture. Our VQA-E dataset is automatically derived from the VQA v2 dataset by intelligently exploiting the available captions. We have conducted a user study to validate the quality of the explanations synthesized by our method. We quantitatively show that the additional supervision from explanations not only produces insightful textual sentences that justify the answers, but also improves the performance of answer prediction. Our model outperforms the state-of-the-art methods by a clear margin on the VQA v2 dataset.
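A minimal sketch of a multi-task objective of the kind described (the paper's exact architecture and loss weighting are not specified in the abstract; the heads and the weight lam below are hypothetical):

    import torch
    import torch.nn.functional as F

    def vqa_e_multitask_loss(answer_logits, answer_target,
                             expl_logits, expl_tokens, lam=1.0, pad_id=0):
        # Joint objective: answer classification (cross-entropy over the
        # answer vocabulary) plus explanation generation (token-level
        # cross-entropy over the explanation sequence, ignoring padding).
        l_ans = F.cross_entropy(answer_logits, answer_target)
        l_expl = F.cross_entropy(
            expl_logits.reshape(-1, expl_logits.size(-1)),
            expl_tokens.reshape(-1),
            ignore_index=pad_id,
        )
        return l_ans + lam * l_expl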