The satisfiability problem is NP-complete, but there are subclasses in which every instance is satisfiable. Such subclasses are obtained by restricting the shape of the formula. Darmann and D\"ocker show that the subclass MONOTONE $3$-SAT-($k$,1) is NP-complete for $k \geq 5$ and pose the open question of whether all instances of MONOTONE $3$-SAT-($3$,1) are satisfiable. This paper shows that all instances of MONOTONE $3$-SAT-($3$,1) are satisfiable, using the new concept of color-structures.
Lattices are architected metamaterials whose properties strongly depend on their geometrical design. The analogy between lattices and graphs enables the use of graph neural networks (GNNs) as a faster surrogate model compared to traditional methods such as finite element modelling. In this work, we present a higher-order GNN model trained to predict the fourth-order stiffness tensor of periodic strut-based lattices. The key features of the model are (i) SE(3) equivariance, and (ii) consistency with the thermodynamic law of conservation of energy. We compare the model to non-equivariant models using a number of error metrics and demonstrate the benefits of the encoded equivariance and energy conservation in terms of predictive performance and reduced training requirements.
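One concrete way the energy-conservation requirement shows up is in the symmetries of the predicted stiffness tensor: a tensor derived from an elastic energy potential must satisfy the minor and the major symmetries. The NumPy sketch below only illustrates that consistency check and the resulting strain-energy form; it is not the paper's equivariant GNN, and the function and variable names are hypothetical.

```python
import numpy as np

def symmetrize_stiffness(C):
    """Illustrative post-processing (not the paper's model): enforce the
    minor symmetries C_ijkl = C_jikl = C_ijlk and the major symmetry
    C_ijkl = C_klij implied by an underlying elastic energy."""
    C = 0.5 * (C + C.transpose(1, 0, 2, 3))   # minor symmetry in (i, j)
    C = 0.5 * (C + C.transpose(0, 1, 3, 2))   # minor symmetry in (k, l)
    C = 0.5 * (C + C.transpose(2, 3, 0, 1))   # major symmetry (energy-based)
    return C

# Example: symmetrize a random prediction and evaluate the strain energy.
C = symmetrize_stiffness(np.random.rand(3, 3, 3, 3))
eps = np.random.rand(3, 3)
eps = 0.5 * (eps + eps.T)                      # symmetric strain tensor
energy = 0.5 * np.einsum("ij,ijkl,kl", eps, C, eps)
```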
Wasserstein distortion is a one-parameter family of distortion measures that was recently proposed to unify fidelity and realism constraints. After establishing continuity results for Wasserstein distortion in the extreme cases of pure fidelity and pure realism, we prove the first coding theorems for compression under Wasserstein distortion, focusing on the regime in which both the rate and the distortion are small.
A business process model represents the expected behavior of a set of process instances (cases). The process instances may be executed in parallel and may affect each other through data or resources. In particular, changes in the values of data shared by process instances may affect a set of process instances and require some operations in response. Such potential effects do not appear explicitly in the process model. This paper addresses possible impacts that may propagate through shared data across process instances and suggests how to analyze them at design time (when the actual process instances do not yet exist). The suggested method uses both a process model and a (relational) data model in order to identify potential inter-instance data impact sets. These sets may guide process users in tracking the impacts of data changes and support their handling at runtime. They can also assist process designers in exploring possible constraints over data. The applicability of the method was evaluated using three different realistic processes. With the help of a process expert, we further assessed the usefulness of the method, revealing useful insights into how the impact sets suggested by our approach can support coping with unexpected data-related changes.
The intricate relationship between human decision-making and emotions, particularly guilt and regret, has significant implications for behavior and well-being. Yet, the subtle distinctions between these emotions and their interplay are often overlooked in computational models. This paper introduces a dataset tailored to dissect the relationship between guilt and regret and their unique textual markers, filling a notable gap in affective computing research. Our approach treats guilt and regret recognition as a binary classification task and employs three machine learning and six transformer-based deep learning techniques to benchmark the newly created dataset. The study further implements innovative reasoning methods such as chain-of-thought and tree-of-thought to assess the models' interpretive logic. The results indicate a clear performance edge for transformer-based models, which achieve a 90.4% macro F1 score compared to the 85.3% scored by the best machine learning classifier, demonstrating their superior capability in distinguishing complex emotional states.
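As a point of reference for the benchmarking setup, the snippet below sketches the binary guilt-vs-regret formulation and the macro F1 metric with a classical text classifier; the two example sentences and the label convention are invented for illustration and are not drawn from the released dataset.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.pipeline import make_pipeline

# Toy illustration (hypothetical data): guilt vs. regret as binary text
# classification, scored with the macro F1 metric reported above.
texts = ["I feel terrible for hurting her.", "I wish I had taken that job."]
labels = [0, 1]  # 0 = guilt, 1 = regret (label convention assumed here)

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(texts, labels)
print(f1_score(labels, clf.predict(texts), average="macro"))
```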
Computational analysis with the finite element method requires geometrically accurate meshes. It is well known that high-order meshes can accurately capture curved surfaces with fewer degrees of freedom than low-order meshes. Existing techniques for high-order mesh generation typically output meshes with the same polynomial order for all elements. However, high-order elements away from curvilinear boundaries or interfaces increase the computational cost of the simulation without improving geometric accuracy. In prior work, we presented an approach for generating body-fitted uniform-order meshes that takes a given mesh and morphs it to align with the surface of interest, prescribed as the zero isocontour of a level-set function. We extend this method to generate mixed-order meshes such that curved surfaces of the domain are discretized with high-order elements, while low-order elements are used elsewhere. Numerical experiments demonstrate the robustness of the approach and show that it can be used to generate mixed-order meshes that are much more efficient than uniform high-order meshes. The proposed approach is purely algebraic and extends to different types of elements (quadrilaterals/triangles/tetrahedra/hexahedra) in two and three dimensions.
Bias correction can often improve the finite sample performance of estimators. We show that the choice of bias correction method has no effect on the higher-order variance of semiparametrically efficient parametric estimators, so long as the estimate of the bias is asymptotically linear. It is also shown that bootstrap, jackknife, and analytical bias estimates are asymptotically linear for estimators with higher-order expansions of a standard form. In particular, we find that for a variety of estimators the straightforward bootstrap bias correction gives the same higher-order variance as more complicated analytical or jackknife bias corrections. In contrast, bias corrections that do not estimate the bias at the parametric rate, such as the split-sample jackknife, result in larger higher-order variances in the i.i.d. setting we focus on. For both a cross-sectional MLE and a panel model with individual fixed effects, we show that the split-sample jackknife has a higher-order variance term that is twice as large as that of the `leave-one-out' jackknife.
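To fix notation for the corrections compared here, recall the standard bootstrap bias estimate and the resulting bias-corrected estimator (the notation below is ours; the jackknife version is analogous):
\[
\widehat{\mathrm{bias}} \;=\; \frac{1}{B}\sum_{b=1}^{B}\hat{\theta}^{*}_{b} - \hat{\theta},
\qquad
\hat{\theta}_{\mathrm{bc}} \;=\; \hat{\theta} - \widehat{\mathrm{bias}} \;=\; 2\hat{\theta} - \frac{1}{B}\sum_{b=1}^{B}\hat{\theta}^{*}_{b},
\]
where $\hat{\theta}^{*}_{b}$ denotes the estimator recomputed on the $b$-th bootstrap sample.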
The problem of relay selection is pivotal in the realm of cooperative communication. However, this issue has not been thoroughly examined, particularly when the background noise is assumed to be impulsive with memory, as observed in smart grid communications and some other wireless communication scenarios. In this paper, we investigate the impact of this specific type of noise on the performance of cooperative Wireless Sensor Networks (WSNs) with the Decode-and-Forward (DF) relaying scheme, considering the Symbol Error Rate (SER) and the fairness of battery power consumption across all nodes as the performance metrics. We introduce two novel relay selection methods that depend on noise state detection and the residual battery power of each relay. The first adapts the Max-Min criterion to this specific context, whereas the second employs Reinforcement Learning (RL). Our results demonstrate that the impact of bursty impulsive noise on the SER performance can be effectively mitigated and that a balance in battery power consumption among all nodes can be established using the proposed methods.
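As a rough illustration of the kind of battery-aware rule the first method builds on, the sketch below combines a Max-Min bottleneck term with the residual battery of each relay. The field names and the multiplicative weighting are hypothetical simplifications, not the criterion used in the paper.

```python
def select_relay(relays):
    """Pick the relay whose weakest hop is strongest (Max-Min), weighted by
    residual battery to balance power consumption across nodes.

    relays: list of dicts with hypothetical keys
    'snr_src' (source->relay), 'snr_dst' (relay->destination), 'battery'."""
    def score(r):
        bottleneck = min(r["snr_src"], r["snr_dst"])  # classic Max-Min term
        return bottleneck * r["battery"]              # battery-aware weighting
    return max(relays, key=score)

candidates = [
    {"id": 1, "snr_src": 12.0, "snr_dst": 7.5, "battery": 0.9},
    {"id": 2, "snr_src": 9.0, "snr_dst": 10.0, "battery": 0.4},
]
best = select_relay(candidates)
```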
The notion of an e-value has been recently proposed as a possible alternative to critical regions and p-values in statistical hypothesis testing. In this paper we consider testing the nonparametric hypothesis of symmetry, introduce analogues for e-values of three popular nonparametric tests, define an analogue for e-values of Pitman's asymptotic relative efficiency, and apply it to the three nonparametric tests. We discuss limitations of our simple definition of asymptotic relative efficiency and list directions of further research.
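For readers unfamiliar with the notion, an e-variable for a null hypothesis $H_0$ is a nonnegative statistic whose expected value is at most one under every distribution in the null; large observed values are interpreted as evidence against $H_0$:
\[
E \ge 0, \qquad \mathbb{E}_{P}[E] \le 1 \quad \text{for all } P \in H_{0}.
\]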
Defensive deception is a promising approach for cyberdefense. Although defensive deception is increasingly popular in the research community, there has not been a systematic investigation of its key components, the underlying principles, and its tradeoffs in various problem settings. This survey paper focuses on defensive deception research centered on game theory and machine learning, since these are prominent families of artificial intelligence approaches that are widely employed in defensive deception. This paper brings forth insights, lessons, and limitations from prior work. It closes with an outline of some research directions to tackle major gaps in current defensive deception research.
Most existing works in visual question answering (VQA) are dedicated to improving the accuracy of predicted answers while disregarding explanations. We argue that the explanation for an answer is as important as, or even more important than, the answer itself, since it makes the question answering process more understandable and traceable. To this end, we propose a new task of VQA-E (VQA with Explanation), where the computational models are required to generate an explanation along with the predicted answer. We first construct a new dataset and then frame the VQA-E problem in a multi-task learning architecture. Our VQA-E dataset is automatically derived from the VQA v2 dataset by intelligently exploiting the available captions. We have conducted a user study to validate the quality of the explanations synthesized by our method. We quantitatively show that the additional supervision from explanations can not only produce insightful textual sentences to justify the answers but also improve the performance of answer prediction. Our model outperforms the state-of-the-art methods by a clear margin on the VQA v2 dataset.
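A minimal sketch of what a multi-task objective of this kind can look like (hypothetical tensor shapes and weighting, not the paper's exact architecture): the answer head is trained with a classification loss, and the explanation decoder with token-level cross-entropy.

```python
import torch
import torch.nn.functional as F

def vqa_e_loss(answer_logits, answer_labels, expl_logits, expl_tokens, alpha=1.0):
    """Joint loss: answer classification + explanation generation.
    Hypothetical shapes: answer_logits (B, A), answer_labels (B,),
    expl_logits (B, T, V), expl_tokens (B, T)."""
    ans_loss = F.cross_entropy(answer_logits, answer_labels)
    expl_loss = F.cross_entropy(expl_logits.flatten(0, 1), expl_tokens.flatten())
    return ans_loss + alpha * expl_loss

# Dummy tensors just to show the call signature.
loss = vqa_e_loss(torch.randn(4, 3000), torch.randint(3000, (4,)),
                  torch.randn(4, 12, 10000), torch.randint(10000, (4, 12)))
```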