In this paper, we introduce a new method for querying triadic concepts through partial or complete matching of triples using an inverted index, retrieving already computed triadic concepts that contain a given set of terms in their extent, intent, and/or modus. As opposed to the approximation approach described in Ananias, our method (i) does not need to keep the initial triadic context or its three dyadic counterparts, (ii) avoids applying derivation operators to the triple components through context exploration, and (iii) eliminates the factorization phase otherwise required to turn the answers to one-dimensional queries into triadic concepts. Additionally, our solution introduces a novel metric for ranking the retrieved triadic concepts by their similarity to a given query. Lastly, an empirical study illustrates the effectiveness and scalability of our approach against the approximation one. Our solution is not only more efficient but also scales better, making it suitable for big data scenarios.
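To make the retrieval mechanism concrete, below is a minimal sketch of an inverted index over triadic concepts, assuming a toy representation where each concept is a triple of term sets; the `query` function, the term names, and the overlap-based ranking are illustrative stand-ins for the paper's similarity metric, not its actual implementation.

```python
from collections import defaultdict

# Hypothetical triadic concepts: (extent, intent, modus) as frozensets of terms.
concepts = [
    (frozenset({"k1", "k2"}), frozenset({"a1"}), frozenset({"c1", "c2"})),
    (frozenset({"k2", "k3"}), frozenset({"a1", "a2"}), frozenset({"c1"})),
]

# Build the inverted index: each term maps to the ids of concepts containing it.
index = defaultdict(set)
for cid, (extent, intent, modus) in enumerate(concepts):
    for term in extent | intent | modus:
        index[term].add(cid)

def query(terms, partial=True):
    """Return concept ids matching the query terms.

    Complete matching intersects the posting lists; partial matching unions
    them and ranks candidates by the fraction of query terms they contain
    (a simple stand-in for the paper's ranking metric).
    """
    postings = [index.get(t, set()) for t in terms]
    if not partial:
        return set.intersection(*postings) if postings else set()
    candidates = set.union(*postings) if postings else set()
    score = lambda cid: sum(cid in p for p in postings) / len(postings)
    return sorted(candidates, key=score, reverse=True)

print(query({"k2", "c1"}, partial=False))  # concepts containing both terms
```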
In this paper, we provide a first-ever epistemic formulation of stabilizing agreement, defined as the non-terminating variant of the well-established consensus problem. In stabilizing agreement, agents are given (possibly different) initial values and may change their decisions finitely often, but must eventually always agree on the same value. We capture these properties in temporal epistemic logic and use the Runs and Systems framework to formally reason about stabilizing agreement problems. We then epistemically formalize the conditions for solving stabilizing agreement, and identify the knowledge that the agents acquire during any execution to choose a particular value under our system assumptions. This first formalization of a sufficient condition for solving stabilizing agreement sets the stage for a planned necessary and sufficient epistemic characterization of stabilizing agreement.
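As a rough illustration of the kind of properties involved, here is a hedged sketch in LTL-style temporal notation with knowledge modalities; the symbols $d_i$, $V$, $A$, and the predicate $\mathrm{adopt}$ are illustrative, and the paper's actual formulas in the Runs and Systems framework may differ.

```latex
% With d_i the current decision of agent i, V the value domain, and A the
% set of agents, stabilization requires the decisions to be eventually
% permanently equal:
\[
  \Diamond\,\Box \, \bigvee_{v \in V} \; \bigwedge_{i \in A} \, (d_i = v),
\]
% and an epistemic sufficient condition can be phrased as every agent
% eventually attaining knowledge (modality K_i) of the value to adopt:
\[
  \Diamond \, \bigwedge_{i \in A} K_i\,\mathrm{adopt}(v).
\]
```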
In this paper, we introduce a set representation called polynomial logical zonotopes for performing exact and computationally efficient reachability analysis on logical systems. Polynomial logical zonotopes are a generalization of logical zonotopes, which are able to represent up to 2^n binary vectors using only n generators. Due to their construction, logical zonotopes can support exact computation of only some logical operations (XOR, NOT, XNOR), while the other operations (AND, NAND, OR, NOR) result in over-approximations in the reduced space, i.e., the generator space. In order to perform all fundamental logical operations exactly, we formulate a generalization of logical zonotopes that is constructed from dependent generators and exponent matrices. We prove that through this polynomial-like construction, we are able to perform all of the fundamental logical operations (XOR, NOT, XNOR, AND, NAND, OR, NOR) exactly in the generator space. This exactness comes with a slight increase in computational complexity compared to logical zonotopes. We show that we can use polynomial logical zonotopes to perform exact reachability analysis while retaining a low computational complexity. To showcase the computational benefits of polynomial logical zonotopes, we present the results of performing reachability analysis on two use cases: (1) safety verification of an intersection crossing protocol and (2) reachability analysis on a high-dimensional Boolean function. Moreover, to highlight the extensibility of logical zonotopes, we include an additional use case where we perform a computationally tractable exhaustive search for the key of a linear feedback shift register.
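To make the generator-space computation concrete, the following is a minimal Python sketch of a plain (non-polynomial) logical zonotope and its exact XOR, which simply XORs the centers and stacks the generators; the class and enumeration helper are illustrative, and brute-force enumeration is only feasible for small generator counts.

```python
import numpy as np

# A logical zonotope over GF(2): center c and generator matrix G (columns).
# It represents the set { c ^ (G @ beta mod 2) : beta in {0,1}^q }.
class LogicalZonotope:
    def __init__(self, c, G):
        self.c = np.array(c, dtype=np.uint8) % 2
        self.G = np.array(G, dtype=np.uint8) % 2

    def enumerate(self):
        """Enumerate all binary vectors the zonotope represents (small q only)."""
        n, q = self.G.shape
        points = set()
        for mask in range(2 ** q):
            beta = np.array([(mask >> i) & 1 for i in range(q)], dtype=np.uint8)
            points.add(tuple((self.c ^ (self.G @ beta % 2)) % 2))
        return points

def xor(z1, z2):
    """Exact XOR in generator space: XOR the centers, stack the generators."""
    return LogicalZonotope(z1.c ^ z2.c, np.hstack([z1.G, z2.G]))

z1 = LogicalZonotope([1, 0], [[1], [0]])
z2 = LogicalZonotope([0, 1], [[0], [1]])
z3 = xor(z1, z2)
# Sanity check against brute-force XOR of the enumerated point sets:
assert z3.enumerate() == {tuple(np.bitwise_xor(a, b) % 2)
                          for a in map(np.array, z1.enumerate())
                          for b in map(np.array, z2.enumerate())}
```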
Recent advances in LLMs have sparked a debate on whether they understand text. In this position paper, we argue that opponents in this debate hold different definitions for understanding, and particularly differ in their view on the role of consciousness. To substantiate this claim, we propose a thought experiment involving an open-source chatbot $Z$ which excels on every possible benchmark, seemingly without subjective experience. We ask whether $Z$ is capable of understanding, and show that different schools of thought within seminal AI research seem to answer this question differently, uncovering their terminological disagreement. Moving forward, we propose two distinct working definitions for understanding which explicitly acknowledge the question of consciousness, and draw connections with a rich literature in philosophy, psychology and neuroscience.
In this paper, we propose a deep learning based model for Acoustic Anomaly Detection of Machines, the task of detecting abnormal machines by analysing machine sound. Through extensive experiments, we show that several techniques that mainly focus on feature engineering, namely pseudo audios, audio segmentation, data augmentation, Mahalanobis distance, and narrow frequency bands, are effective in enhancing system performance. Among the evaluated techniques, narrow frequency bands have the most significant impact. Indeed, our proposed model, which focuses on narrow frequency bands, outperforms the DCASE baseline on the benchmark DCASE 2022 Task 2 Development set. The important role of narrow frequency bands identified in this paper should encourage the research community working on Acoustic Anomaly Detection of Machines to further investigate and propose novel network architectures focusing on frequency bands.
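As a rough illustration of band-focused anomaly scoring (not the paper's exact pipeline), the sketch below fits a Gaussian to per-band log-energies of normal clips and scores a test clip by its Mahalanobis distance; the feature extraction, band count, and random placeholder data are assumptions.

```python
import numpy as np

def band_features(spec, n_bands=8):
    """Average log-energy in narrow frequency bands of a magnitude
    spectrogram with shape [freq_bins, frames]."""
    bands = np.array_split(spec, n_bands, axis=0)
    return np.array([np.log(b.mean() + 1e-8) for b in bands])

rng = np.random.default_rng(0)
# Placeholder "normal machine" spectrograms; real clips would come from audio.
normal = np.stack([band_features(rng.random((256, 100))) for _ in range(200)])
mu, cov = normal.mean(axis=0), np.cov(normal, rowvar=False)
cov_inv = np.linalg.pinv(cov)

def anomaly_score(spec):
    d = band_features(spec) - mu
    return float(np.sqrt(d @ cov_inv @ d))  # Mahalanobis distance

print(anomaly_score(rng.random((256, 100)) * 3.0))  # louder clip scores higher
```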
In this paper, we consider the generation and utilization of helper data for physical unclonable functions (PUFs) that provide real-valued readout symbols. Compared to classical binary PUFs, more entropy can be extracted from each basic building block (PUF node), resulting in longer keys/fingerprints and/or a higher reliability. To this end, a coded modulation and signal shaping scheme that matches the (approximately) Gaussian distribution of the readout has to be employed. A new helper data scheme is proposed that works with any type of coded modulation/shaping scheme. Compared to the permutation scheme from the literature, less helper data has to be generated and a higher reliability is achieved. Moreover, the recently proposed idea of a two-metric helper data scheme is generalized to coded modulation and a general S-metric scheme. It is shown how extra helper data can be generated to improve decodability. The proposed schemes are assessed by numerical simulations and by evaluation of measurement data. We compare multi-level codes using a new rate design strategy with bit-interleaved coded modulation and trellis shaping with a distribution matcher. By selecting a suitable design, the rate per PUF node that can be reliably extracted can be as high as 2~bit/node.
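To give a flavor of the two-metric idea for a single Gaussian PUF node, here is a hedged sketch: enrollment publishes one helper bit that selects between two shifted decision thresholds, keeping a guaranteed margin to the decision boundary without revealing the extracted bit. The quarter boundaries and noise level below are illustrative; the paper's generalized S-metric and coded-modulation schemes go well beyond this single-bit case.

```python
import numpy as np

a = 0.6745  # ~75% quantile of the standard normal: four equiprobable quarters

rng = np.random.default_rng(1)
x = rng.standard_normal(100_000)           # enrollment readout
y = x + 0.1 * rng.standard_normal(x.size)  # noisy readout at reconstruction

bits = x > 0                               # extracted key bits
# Helper bit: which pair of quarters x fell into. Each pair contains one
# bit-0 and one bit-1 quarter of equal probability, so it leaks no key info.
helper = (x > a) | ((-a < x) & (x <= 0))
threshold = np.where(helper, a / 2, -a / 2)  # metric selected by helper bit

errs_plain = np.mean((y > 0) != bits)              # plain sign decision
errs_two_metric = np.mean((y > threshold) != bits) # margin >= a/2 guaranteed
print(errs_plain, errs_two_metric)  # the shifted thresholds cut the error rate
```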
In this paper, we propose an adaptive approach, based on mesh refinement or parametric enrichment with polynomial degree adaptation, for the numerical solution of convection-dominated equations with random input data. The parametric system arising from the stochastic Galerkin approach is discretized by a symmetric interior penalty Galerkin (SIPG) method with upwinding for the convection term in the spatial domain. We derive a residual-based error estimator with contributions from the SIPG discretization error, the (generalized) polynomial chaos discretization error in the stochastic space, and data oscillations. We then show the reliability of the proposed error estimator, i.e., that it bounds the energy error from above up to a multiplicative constant. Moreover, to balance the errors stemming from the spatial and stochastic spaces, the truncation error of the Karhunen--Lo\`{e}ve expansion is also considered in the numerical simulations. Finally, several benchmark examples, including a random diffusivity parameter, a random velocity parameter, random diffusivity/velocity parameters, and a random (jump) discontinuous diffusivity parameter, are tested to illustrate the performance of the proposed estimator.
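Schematically, and only as a hedged sketch (the paper's exact terms and norms may differ), the estimator collects the three error sources named above, and reliability is the asserted upper bound:

```latex
\[
  \eta^2 \;=\; \eta_{\mathrm{SIPG}}^2 \;+\; \eta_{\mathrm{gPC}}^2
  \;+\; \mathrm{osc}^2 ,
  \qquad
  \Vert u - u_{hN} \Vert_{E} \;\le\; C\,\eta ,
\]
% where u_{hN} denotes the SIPG/polynomial-chaos approximation,
% \Vert\cdot\Vert_{E} an energy norm, and C a multiplicative constant.
```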
In this work, we propose a new dynamic distributed planning approach that accounts for the changes an agent introduces to its set of actions to be planned in response to changes in its environment. Our approach fits into the context of distributed planning for distributed plans, where each agent produces its own plans. In our approach, plan generation is based on constraint satisfaction using genetic algorithms. Each agent generates a new plan whenever its set of actions to plan changes, so that the newly introduced actions are taken into account. In this new plan, the agent takes as its new action set all the unexecuted actions of the old plan together with the new actions engendered by the changes, and as its new initial state the state in which its action set underwent the change. We use a concrete case to illustrate and demonstrate the utility of our approach.
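To illustrate the replanning loop, here is a minimal Python sketch under strong simplifying assumptions: a plan is an ordering of actions, fitness counts satisfied precedence constraints, and the GA uses only truncation selection plus swap mutation. The action names and constraints are hypothetical, and the paper's constraint model may be richer.

```python
import random

def fitness(plan, constraints):
    """Number of precedence constraints (a before b) the ordering satisfies."""
    pos = {a: i for i, a in enumerate(plan)}
    return sum(pos[a] < pos[b] for a, b in constraints)

def ga_plan(actions, constraints, pop=30, gens=100):
    population = [random.sample(actions, len(actions)) for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda p: fitness(p, constraints), reverse=True)
        survivors = population[: pop // 2]
        children = []
        for parent in survivors:
            child = parent[:]
            i, j = random.sample(range(len(child)), 2)  # swap mutation
            child[i], child[j] = child[j], child[i]
            children.append(child)
        population = survivors + children
    return max(population, key=lambda p: fitness(p, constraints))

# Replanning on change: unexecuted old actions plus new actions form the
# new action set, and the GA is rerun from the changed state.
old_plan, executed = ["a", "b", "c", "d"], {"a"}
new_actions = ["e", "f"]
to_plan = [x for x in old_plan if x not in executed] + new_actions
print(ga_plan(to_plan, [("b", "c"), ("c", "e"), ("e", "f")]))
```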
In this paper, we explore the intriguing similarities between the structure of a discrete neural network, such as a spiking network, and the composition of a piano piece. While both involve nodes or notes that are activated sequentially or in parallel, the latter benefits from the rich body of music theory to guide meaningful combinations. We propose a novel approach that leverages musical grammar to regulate activations in a spiking neural network, allowing for the representation of symbols as attractors. By applying rules for chord progressions from music theory, we demonstrate how certain activations naturally follow others, akin to the concept of attraction. Furthermore, we introduce the concept of modulating keys to navigate different basins of attraction within the network. Ultimately, we show that the map of concepts in our model is structured by the musical circle of fifths, highlighting the potential for leveraging music theory principles in deep learning algorithms.
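A minimal sketch of the grammar-gating idea (not the paper's spiking network): keys index attractor basins, and transitions are only permitted between neighbors on the circle of fifths; the transition rule and random walk below are illustrative.

```python
import numpy as np

# The 12 keys ordered along the circle of fifths.
keys = ["C", "G", "D", "A", "E", "B", "F#", "C#", "G#", "D#", "A#", "F"]
n = len(keys)

# Allowed transitions: stay, or move one step (a fifth) in either direction.
allowed = np.zeros((n, n), dtype=bool)
for i in range(n):
    allowed[i, i] = allowed[i, (i + 1) % n] = allowed[i, (i - 1) % n] = True

rng = np.random.default_rng(3)
state = 0  # start in the basin of attraction for C
trajectory = [keys[state]]
for _ in range(8):
    nxt = rng.choice(np.flatnonzero(allowed[state]))  # grammar-gated move
    state = int(nxt)
    trajectory.append(keys[state])
print(" -> ".join(trajectory))  # e.g. C -> G -> C -> F -> ... along fifths
```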
As artificial intelligence (AI) models continue to scale up, they are becoming more capable and integrated into various forms of decision-making systems. For models involved in moral decision-making, also known as artificial moral agents (AMA), interpretability provides a way to trust and understand the agent's internal reasoning mechanisms for effective use and error correction. In this paper, we provide an overview of this rapidly evolving sub-field of AI interpretability, introduce the concept of the Minimum Level of Interpretability (MLI), and recommend an MLI for various types of agents to aid their safe deployment in real-world settings.
To quickly obtain new labeled data, we can choose crowdsourcing as a low-cost, fast alternative. In exchange, crowd annotations from non-experts may be of lower quality than those from experts. In this paper, we propose an approach to crowd annotation learning for Chinese Named Entity Recognition (NER) that makes full use of the noisy sequence labels from multiple annotators. Inspired by adversarial learning, our approach uses a common Bi-LSTM and a private Bi-LSTM to represent annotator-generic and annotator-specific information, respectively. The annotator-generic information captures the common knowledge about entities that is easily mastered by the crowd. Finally, we build our Chinese NE tagger based on the LSTM-CRF model. In our experiments, we create two datasets for Chinese NER from two domains. The experimental results show that our system achieves better scores than strong baseline systems.
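The sketch below illustrates the common/private encoder idea in PyTorch, assuming hypothetical vocabulary, tag-set, and hidden sizes; it omits the adversarial discriminator and replaces the CRF layer with plain emission scores, so it is an architectural outline rather than the paper's full model.

```python
import torch
import torch.nn as nn

class CrowdNER(nn.Module):
    def __init__(self, vocab=5000, n_annotators=5, emb=100, hid=128, n_tags=9):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        # Common Bi-LSTM: annotator-generic knowledge shared across the crowd.
        self.common = nn.LSTM(emb, hid, bidirectional=True, batch_first=True)
        # Private Bi-LSTMs: one per annotator for annotator-specific patterns.
        self.private = nn.ModuleList(
            nn.LSTM(emb, hid, bidirectional=True, batch_first=True)
            for _ in range(n_annotators))
        self.tagger = nn.Linear(4 * hid, n_tags)  # emission scores (CRF omitted)

    def forward(self, tokens, annotator):
        x = self.embed(tokens)
        h_common, _ = self.common(x)
        h_private, _ = self.private[annotator](x)
        return self.tagger(torch.cat([h_common, h_private], dim=-1))

model = CrowdNER()
tokens = torch.randint(0, 5000, (2, 20))  # batch of 2 sentences, 20 tokens each
scores = model(tokens, annotator=1)
print(scores.shape)  # torch.Size([2, 20, 9])
```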