We establish a generic ExpTime upper bound for reasoning with global assumptions (also known as TBoxes) in coalgebraic modal logics. Unlike earlier results of this kind, our bound does not require a tractable set of tableau rules for the instance logics, and hence applies to wider classes of logics. Examples are Presburger modal logic, which extends graded modal logic with linear inequalities over numbers of successors, and probabilistic modal logic with polynomial inequalities over probabilities. We establish the theoretical upper bound using a type elimination algorithm. We also provide a global caching algorithm that potentially avoids building the entire exponential-sized space of candidate states, and thus offers a basis for practical reasoning. This algorithm still involves frequent fixpoint computations; we show how these can be handled efficiently in a concrete algorithm modelled on Liu and Smolka's linear-time fixpoint algorithm. Finally, we show that the upper complexity bound is preserved under adding nominals to the logic, i.e. in coalgebraic hybrid logic.
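To make the type elimination step concrete, here is a minimal Python sketch: starting from all candidate types (the propositionally consistent subsets of the closure of the input, of which there are exponentially many), it repeatedly discards types whose modal demands cannot be met by the remaining ones. The `supported` oracle is a hypothetical stand-in for the logic-specific one-step satisfiability check, which, as noted above, may itself require a fixpoint computation.

```python
def type_elimination(types, supported):
    """Iterate to a fixpoint, discarding unsupported types.

    `types`: all candidate types; `supported(t, alive)`: hypothetical oracle
    deciding whether t's modal demands can be met within `alive`.
    """
    alive = set(types)
    changed = True
    while changed:                       # at most len(types) rounds
        changed = False
        for t in list(alive):
            if not supported(t, alive):  # t demands a successor nobody provides
                alive.discard(t)
                changed = True
    return alive  # input satisfiable iff some surviving type contains it
```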
In logistic regression, it is often desirable to utilize regularization to promote sparse solutions, particularly for problems with a large number of features compared to available labels. In this paper, we present screening rules that safely remove features from logistic regression with $\ell_0$-$\ell_2$ regularization before solving the problem. The proposed safe screening rules are based on lower bounds from the Fenchel dual of strong conic relaxations of the logistic regression problem. Numerical experiments with real and synthetic data suggest that a high percentage of the features can be effectively and safely removed a priori, leading to substantial speed-up in the computations.
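For orientation, one standard way to write this problem (our notation; the paper's exact formulation may differ) is
\[
\min_{\beta \in \mathbb{R}^p} \; \sum_{i=1}^{n} \log\bigl(1 + \exp(-y_i \mathbf{x}_i^{\top} \beta)\bigr) \;+\; \lambda_0 \lVert \beta \rVert_0 \;+\; \lambda_2 \lVert \beta \rVert_2^2,
\]
and a screening rule of the kind described safely fixes $\beta_j = 0$ whenever a Fenchel-dual lower bound for the subproblem that forces feature $j$ into the support already exceeds a known upper bound, such as the objective value at an incumbent solution.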
The lazy algorithm for a real base $\beta$ is generalized to the setting of Cantor bases $\boldsymbol{\beta}=(\beta_n)_{n\in \mathbb{N}}$ introduced recently by Charlier and the author. To do so, let $x_{\boldsymbol{\beta}}$ be the greatest real number that has a $\boldsymbol{\beta}$-representation $a_0a_1a_2\cdots$ such that each letter $a_n$ belongs to $\{0,\ldots,\lceil \beta_n \rceil -1\}$. This paper is concerned with the combinatorial properties of the lazy $\boldsymbol{\beta}$-expansions, which are defined when $x_{\boldsymbol{\beta}}<+\infty$. First, it is shown that the lazy $\boldsymbol{\beta}$-expansions are obtained by "flipping" the digits of the greedy $\boldsymbol{\beta}$-expansions. Next, a Parry-like criterion characterizing the sequences of non-negative integers that are the lazy $\boldsymbol{\beta}$-expansions of some real number in $(x_{\boldsymbol{\beta}}-1,x_{\boldsymbol{\beta}}]$ is proved. Moreover, the lazy $\boldsymbol{\beta}$-shift is studied, and in the particular case of alternate bases, that is, periodic Cantor bases, an analogue of Bertrand-Mathis' theorem in the lazy framework is proved: the lazy $\boldsymbol{\beta}$-shift is sofic if and only if all quasi-lazy $\boldsymbol{\beta}^{(i)}$-expansions of $x_{\boldsymbol{\beta}^{(i)}}-1$ are ultimately periodic, where $\boldsymbol{\beta}^{(i)}$ is the $i$-th shift of the alternate base $\boldsymbol{\beta}$. As an illustration, Cantor bases following the Thue-Morse sequence are studied, and a formula giving their corresponding value of $x_{\boldsymbol{\beta}}$ is proved.
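As a rough illustration of the flipping relation, here is a minimal Python sketch under an assumed normalization ($0 \le x < 1$; the paper's exact setting may differ): greedy digits are computed position by position, and each digit is then flipped inside its alphabet $\{0,\ldots,\lceil \beta_n \rceil - 1\}$.

```python
from math import ceil, floor
from itertools import cycle, islice

def greedy_digits(x, betas):
    """Greedy digits a_n of x w.r.t. the Cantor base (beta_n), assuming 0 <= x < 1."""
    for beta in betas:
        a = floor(beta * x)   # largest digit keeping the remainder non-negative
        yield a
        x = beta * x - a

def flip(digits, betas):
    """Flip each digit inside its alphabet {0, ..., ceil(beta_n) - 1}."""
    return (ceil(b) - 1 - a for a, b in zip(digits, betas))

# First 8 flipped digits for the alternate base 2.5, 1.5, 2.5, 1.5, ...
first8 = list(islice(flip(greedy_digits(0.3, cycle([2.5, 1.5])),
                          cycle([2.5, 1.5])), 8))
```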
FO(.) (aka FO-dot) is a language that extends classical first-order logic with constructs to allow complex knowledge to be represented in a natural and elaboration-tolerant way. IDP-Z3 is a new reasoning engine for the FO(.) language: it can perform a variety of generic computational tasks using knowledge represented in FO(.). It supersedes IDP3, its predecessor, with new capabilities such as support for linear arithmetic over reals and quantification over concepts. We present four knowledge-intensive industrial use cases, and show that IDP-Z3 delivers real value to its users at low development cost: it supports interactive applications in a variety of problem domains, with a response time typically below 3 seconds.
A proof procedure, in the spirit of the sequent calculus, is proposed to check the validity of entailments between Separation Logic formulas combining inductively defined predicates denoting structures of bounded tree width with theory reasoning. The calculus is sound and complete, in the sense that a sequent is valid iff it admits a (possibly infinite) proof tree. We show that the procedure terminates in the following two cases: (i) when the inductive rules that define the predicates occurring on the left-hand side of the entailment terminate, in which case the proof tree is always finite; and (ii) when the theory is empty, in which case every valid sequent admits a rational proof tree, where the total number of pairwise distinct sequents occurring in the proof tree is doubly exponential w.r.t.\ the size of the end-sequent. We also show that the validity problem is undecidable for a wide class of theories, even ones with very low expressive power.
In this paper, we study three representations of lattices by means of a set with a binary relation of compatibility in the tradition of Plo\v{s}\v{c}ica. The standard representations of complete ortholattices and complete perfect Heyting algebras drop out as special cases of the first representation, while the second covers arbitrary complete lattices. The third topological representation is a variant of that of Craig, Haviar, and Priestley. We then add a second relation of accessibility interacting with compatibility in order to represent lattices with a multiplicative unary operation. The resulting representations yield an approach to semantics for modal logics on non-classical bases, motivated by a recent application of modal orthologic to natural language semantics.
Penalized $M$-estimators for logistic regression models have previously been studied for fixed dimension in order to obtain sparse statistical models and automatic variable selection. In this paper, we derive asymptotic results for penalized $M$-estimators when the dimension $p$ grows to infinity with the sample size $n$. Specifically, we obtain consistency and rates of convergence results for some choices of the penalty function. Moreover, we prove that these estimators select the relevant variables with probability tending to one, and we derive their asymptotic distribution.
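For context, a typical penalized $M$-estimator of this kind (our notation) solves
\[
\widehat{\beta}_n \in \operatorname*{arg\,min}_{\beta \in \mathbb{R}^p} \; \frac{1}{n} \sum_{i=1}^{n} \rho\bigl(y_i, \mathbf{x}_i^{\top} \beta\bigr) \;+\; \sum_{j=1}^{p} p_{\lambda_n}(|\beta_j|),
\]
where $\rho$ is a loss function for the logistic model (possibly bounded, for robustness) and $p_{\lambda_n}$ is a penalty such as $\ell_1$ or SCAD; the asymptotics are then studied as $p = p_n \to \infty$ with $n$.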
Faroldi argues that deontic modals are hyperintensional and thus traditional modal logic cannot provide an appropriate formalization of deontic situations. To overcome this issue, we introduce novel justification logics as hyperintensional analogues to non-normal modal logics. We establish soundness and completeness with respect to various models and we study the problem of realization.
One of the fundamental problems in Artificial Intelligence is to perform complex multi-hop logical reasoning over the facts captured by a knowledge graph (KG). This problem is challenging because KGs can be massive and incomplete. Recent approaches embed KG entities in a low-dimensional space and then use these embeddings to find the answer entities. However, handling arbitrary first-order logic (FOL) queries has remained an outstanding challenge, as present methods are limited to only a subset of FOL operators; in particular, the negation operator is not supported. A further limitation of present methods is that they cannot naturally model uncertainty. Here, we present BetaE, a probabilistic embedding framework for answering arbitrary FOL queries over KGs. BetaE is the first method that can handle a complete set of first-order logical operations: conjunction ($\wedge$), disjunction ($\vee$), and negation ($\neg$). A key insight of BetaE is to use probabilistic distributions with bounded support, specifically the Beta distribution, and to embed queries/entities as distributions, which in turn allows us to faithfully model uncertainty. Logical operations are performed in the embedding space by neural operators acting on the probabilistic embeddings. We demonstrate the performance of BetaE on answering arbitrary FOL queries on three large, incomplete KGs. While being more general, BetaE also increases relative performance by up to 25.4% over the current state-of-the-art KG reasoning methods, which can only handle conjunctive queries without negation.
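Two of BetaE's operators have simple closed forms: with normalized attention weights, a weighted product of Beta densities is again a Beta density, and negation takes parameter reciprocals. A minimal sketch (the learned attention weights and the relation-projection network are omitted):

```python
import torch

def intersection(alphas, betas, w):
    """Weighted product of Beta(a_i, b_i) densities, with weights w summing
    to 1, is Beta(sum_i w_i a_i, sum_i w_i b_i)."""
    return w @ alphas, w @ betas     # alphas, betas: (k, d); w: (k,)

def negation(alpha, beta):
    """Beta(a, b) -> Beta(1/a, 1/b): moves density mass to the
    complementary region of (0, 1)."""
    return 1.0 / alpha, 1.0 / beta
```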
Markov Logic Networks (MLNs), which elegantly combine logic rules and probabilistic graphical models, can be used to address many knowledge graph problems. However, inference in MLNs is computationally intensive, making industrial-scale applications of MLNs very difficult. In recent years, graph neural networks (GNNs) have emerged as efficient and effective tools for large-scale graph problems. Nevertheless, GNNs do not explicitly incorporate prior logic rules into the models and may require many labeled examples for a target task. In this paper, we explore the combination of MLNs and GNNs, and use graph neural networks for variational inference in MLNs. We propose a GNN variant, named ExpressGNN, which strikes a balance between representation power and model simplicity. Our extensive experiments on several benchmark datasets demonstrate that ExpressGNN leads to effective and efficient probabilistic logic reasoning.
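In this setting, the GNN typically parameterizes a mean-field variational posterior $q_\theta$ over the unobserved facts $H$, trained by maximizing the evidence lower bound (a sketch of the general recipe, in our notation, rather than the paper's exact objective):
\[
\log P_w(O) \;\ge\; \mathbb{E}_{q_\theta(H)}\bigl[\log P_w(O, H)\bigr] \;-\; \mathbb{E}_{q_\theta(H)}\bigl[\log q_\theta(H)\bigr],
\]
where $O$ denotes the observed facts and $P_w$ the MLN distribution with rule weights $w$.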
Neural networks can learn to represent and manipulate numerical information, but they seldom generalize well outside of the range of numerical values encountered during training. To encourage more systematic numerical extrapolation, we propose an architecture that represents numerical quantities as linear activations that are manipulated using primitive arithmetic operators, controlled by learned gates. We call this module a neural arithmetic logic unit (NALU), by analogy to the arithmetic logic unit in traditional processors. Experiments show that NALU-enhanced neural networks can learn to track time, perform arithmetic over images of numbers, translate numerical language into real-valued scalars, execute computer code, and count objects in images. In contrast to conventional architectures, we obtain substantially better generalization both inside and outside of the range of numerical values encountered during training, often extrapolating orders of magnitude beyond trained numerical ranges.
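A minimal PyTorch sketch of a NALU cell, following the paper's equations (the initialization shown is illustrative):

```python
import torch
import torch.nn as nn

class NALU(nn.Module):
    """Gated mix of an additive path (add/subtract) and a multiplicative
    path (multiply/divide) computed in log-space, sharing one matrix W."""

    def __init__(self, in_dim, out_dim, eps=1e-7):
        super().__init__()
        self.W_hat = nn.Parameter(torch.randn(out_dim, in_dim) * 0.1)
        self.M_hat = nn.Parameter(torch.randn(out_dim, in_dim) * 0.1)
        self.G = nn.Parameter(torch.randn(out_dim, in_dim) * 0.1)
        self.eps = eps

    def forward(self, x):
        W = torch.tanh(self.W_hat) * torch.sigmoid(self.M_hat)  # biased toward {-1, 0, 1}
        a = x @ W.t()                                           # additive path
        m = torch.exp(torch.log(x.abs() + self.eps) @ W.t())    # multiplicative path
        g = torch.sigmoid(x @ self.G.t())                       # learned gate
        return g * a + (1 - g) * m
```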