Derived datasets can be defined implicitly or explicitly. An implicit definition (of dataset O in terms of datasets I) is a logical specification involving the source data I and the interface data O. It is a valid definition of O in terms of I if any two models of the specification agreeing on I agree on O. In contrast, an explicit definition is a query that produces O from I. Variants of Beth's theorem state that one can convert implicit definitions to explicit ones. Further, this conversion can be done effectively given a proof witnessing implicit definability in a suitable proof system. We prove the analogous implicit-to-explicit result for nested relations: implicit definitions, given in the natural logic for nested relations, can be converted to explicit definitions in the nested relational calculus (NRC). We first provide a model-theoretic argument for this result, which makes some additional connections that may be of independent interest: between NRC queries and interpretations, a standard mechanism for defining structure-to-structure translations in logic, and between interpretations and implicit definability ``up to unique isomorphism''. The latter connection makes use of a variation of a result of Gaifman. We also provide a proof-theoretic result that gives an effective argument: from a proof witnessing implicit definability, we can efficiently produce an NRC definition. This involves introducing an appropriate proof system for reasoning with nested sets, along with some auxiliary Beth-type results for this system. As a consequence, we can effectively extract rewritings of NRC queries in terms of NRC views, given a proof witnessing that the query is determined by the views.
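For concreteness, the two notions can be phrased as follows (a standard formulation; the notation is ours, not the paper's). A specification $\Sigma$ over the combined signature implicitly defines $O$ in terms of $I$ if
\[
\forall M, M' \models \Sigma:\quad M|_I = M'|_I \;\Rightarrow\; M|_O = M'|_O,
\]
whereas an explicit definition is an NRC query $Q$ with $\Sigma \models O = Q(I)$; the Beth-type result converts the former into the latter.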
Given a computable sequence of natural numbers, it is a natural task to find a G\"odel number of a program that generates this sequence. It is easy to see that this problem is neither continuous nor computable. In algorithmic learning theory this problem is well studied from several perspectives, and one question studied there is for which sequences this problem is at least learnable in the limit. Here we study the problem on all computable sequences and classify its Weihrauch complexity. For this purpose we can, among other methods, utilize the amalgamation technique known from learning theory. As benchmarks for the classification we use closed and compact choice problems and their jumps on natural numbers, and we argue that these problems correspond to induction and boundedness principles as they are known from the Kirby-Paris hierarchy in reverse mathematics. We provide a topological as well as a computability-theoretic classification, and the two reveal some significant differences.
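In standard notation (ours, fixing a G\"odel numbering $(\varphi_e)_{e\in\mathbb{N}}$), the problem under study is the multivalued function
\[
\mathrm{G} :\subseteq \mathbb{N}^{\mathbb{N}} \rightrightarrows \mathbb{N}, \qquad \mathrm{G}(p) = \{\, e \in \mathbb{N} : \varphi_e = p \,\},
\]
whose domain is the set of computable sequences. Discontinuity is immediate: every finite prefix of $p$ is consistent with infinitely many distinct computable sequences, so no finite amount of information about $p$ certifies a correct index.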
Resistance distance has been studied extensively in recent years, with the majority of previous studies devoted to undirected networks, despite the fact that many realistic networks are directed. Although several generalizations of resistance distance to directed graphs have been proposed, they either have no physical interpretation or fail to be a metric. In this paper, we first extend the definition of resistance distance to strongly connected directed graphs based on random walks and show that the two-node resistance distance on directed graphs is a metric. Then, we introduce a Laplacian matrix for directed graphs that subsumes the Laplacian matrix of undirected graphs as a particular case, and use its pseudoinverse to express the two-node resistance distance and many other relevant quantities derived from resistance distances. Moreover, we define the resistance distance between a vertex and a vertex group on directed graphs, and further pose the problem of optimally selecting a group of a fixed number of nodes such that its resistance distance is minimized. Since this combinatorial optimization problem is NP-hard, we present a greedy algorithm with a proven approximation ratio and conduct experiments on model and realistic networks to validate the performance of this approximation algorithm.
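As a point of reference, the classical undirected case already expresses resistance distance through the Laplacian pseudoinverse; the sketch below (ours, illustrating only the undirected formula, not the paper's directed construction) shows the computation with numpy:

```python
import numpy as np

# Hedged sketch: resistance distance via the Laplacian pseudoinverse,
# using the classical undirected formula
#   r(i, j) = L+[i, i] + L+[j, j] - L+[i, j] - L+[j, i],
# which the paper's directed-graph Laplacian is stated to subsume.

def resistance_distance(A):
    """All-pairs resistance distances from an adjacency matrix A."""
    L = np.diag(A.sum(axis=1)) - A   # graph Laplacian
    Lp = np.linalg.pinv(L)           # Moore-Penrose pseudoinverse
    d = np.diag(Lp)
    return d[:, None] + d[None, :] - Lp - Lp.T

# Example: path graph 0 - 1 - 2; r(0, 2) = 2 (two unit resistors in series).
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
print(resistance_distance(A)[0, 2])  # -> 2.0
```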
Event prediction is the task of anticipating future events, i.e., future real-world occurrences, and aims to support users in deciding on actions that steer future events towards a desired state. An event prediction method learns the relation between features of past events and future events. It is applied to newly observed events to predict the corresponding future events, which are evaluated with respect to the user's desired future state. If the predicted future events do not comply with this state, actions are taken towards achieving desirable future states. Evidently, event prediction is valuable in many application domains, such as business and natural disasters. This diversity of application domains results in a diverse range of methods that are scattered across various research areas, which, in turn, use different terminology for event prediction methods. Consequently, the sharing of methods and knowledge for developing future event prediction methods is restricted. To facilitate knowledge sharing through a comprehensive classification, integration, and assessment of event prediction methods, we combine taxonomies and take a systems perspective to integrate event prediction methods into a single system, elicit requirements, and assess existing work with respect to these requirements. Based on the assessment, we identify open challenges and discuss future research directions.
Some actions must be executed in different ways depending on the context. For example, wiping away marker requires vigorous force while wiping away almonds requires more gentle force. In this paper we provide a model where an agent learns which manner of action execution to use in which context, drawing on evidence from trial and error and from verbal corrections when it makes a mistake (e.g., ``no, gently''). The learner starts out with a domain model that lacks the concepts denoted by the words in the teacher's feedback: both the words describing the context (e.g., ``marker'') and adverbs like ``gently''. We show that, through the semantics of coherence, our agent can perform the symbol grounding that is necessary for exploiting the teacher's feedback so as to solve its domain-level planning problem: to perform its actions in the current context in the right way.
We consider signal source localization from range-difference measurements. First, we give some readily checked conditions on measurement noises and sensor deployment that guarantee the asymptotic identifiability of the model, and we show the consistency and asymptotic normality of the maximum likelihood (ML) estimator. Then, we devise an estimator that attains the same asymptotic properties as the ML one. Specifically, we prove that the negative log-likelihood function converges to a limit function that has a unique minimum and a positive-definite Hessian at the true source position. Hence, it is promising to execute local iterations, e.g., the Gauss-Newton (GN) algorithm, initialized with a consistent estimate. The main issue involved is obtaining a preliminary consistent estimate. To this end, we construct a linear least-squares problem via algebraic operations and constraint relaxation and obtain its closed-form solution. We then focus on deriving and eliminating the bias of the linear least-squares estimator, which yields an asymptotically unbiased (and thus consistent) estimate. Noting that the bias is a function of the noise variance, we further devise a consistent noise variance estimator, which involves finding the roots of a third-order polynomial. Based on the preliminary consistent location estimate, we prove that a single GN iteration suffices to achieve the same asymptotic properties as the ML estimator. Simulation results demonstrate the superiority of our proposed algorithm in the large sample case.
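The linearization stage can be illustrated with a small numpy sketch (our toy setup; sensor positions, noise level, and variable names are illustrative, and the paper's constraint relaxation, bias elimination, and GN refinement are omitted):

```python
import numpy as np

# Hedged sketch of the standard linearization for range-difference (TDOA)
# localization. Writing d_i = ||x - s_i|| - ||x - s_0|| and squaring gives
# equations linear in (x, r0), where r0 = ||x - s_0||:
#   2 (s_0 - s_i)^T x - 2 d_i r0 = d_i^2 - ||s_i||^2 + ||s_0||^2.
# This toy shows only the unconstrained linear least-squares stage.

rng = np.random.default_rng(0)
x_true = np.array([2.0, 3.0])  # unknown source position
S = np.array([[0., 0.], [10., 0.], [0., 10.], [10., 10.], [5., -5.]])
r = np.linalg.norm(S - x_true, axis=1)
d = (r[1:] - r[0]) + 0.01 * rng.standard_normal(len(S) - 1)  # noisy differences

A = np.hstack([2 * (S[0] - S[1:]), -2 * d[:, None]])
b = d**2 - np.sum(S[1:]**2, axis=1) + np.sum(S[0]**2)
theta, *_ = np.linalg.lstsq(A, b, rcond=None)
print(theta[:2])  # rough estimate of x_true; theta[2] estimates ||x - s_0||
```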
The best empirical research in political science clearly defines substantive parameters of interest, presents a set of assumptions that guarantees their identification, and uses an appropriate estimator. We argue for the importance of explicitly integrating rigorous theory into this process and focus on the advantages of doing so. By integrating theoretical structure into one's empirical strategy, researchers can quantify the effects of competing mechanisms, consider the ex-ante effects of new policies, extrapolate findings to new environments, estimate model-specific theoretical parameters, evaluate the fit of a theoretical model, and test competing models that aim to explain the same phenomena. As a guide to such a methodology, we provide an overview of structural estimation, including formal definitions, implementation suggestions, examples, and comparisons to other methods.
We study the relationship between two desiderata of algorithms in statistical inference and machine learning: differential privacy and robustness to adversarial data corruption. Their conceptual similarity was first observed by Dwork and Lei (STOC 2009), who noted that private algorithms satisfy robustness and gave a general method for converting robust algorithms into private ones. However, all general methods for transforming robust algorithms into private ones lead to suboptimal error rates. Our work gives the first black-box transformation that converts any adversarially robust algorithm into one that satisfies pure differential privacy. Moreover, we show that for any low-dimensional estimation task, applying our transformation to an optimal robust estimator yields an optimal private estimator. Thus, we conclude that for any low-dimensional task, the optimal error rate for $\varepsilon$-differentially private estimators is essentially the same as the optimal error rate for estimators that are robust to adversarial corruption of $1/\varepsilon$ training samples. We apply our transformation to obtain new optimal private estimators for several high-dimensional tasks, including Gaussian (sparse) linear regression and PCA. Finally, we present an extension of our transformation that leads to approximately differentially private algorithms whose error does not depend on the range of the output space, which is impossible under pure differential privacy.
Knowledge is a formal way of understanding the world, providing human-level cognition and intelligence for next-generation artificial intelligence (AI). One representation of knowledge is the set of structural relations between entities. Relation Extraction (RE), a sub-task of information extraction, is an effective way to acquire this important knowledge automatically and plays a vital role in Natural Language Processing (NLP). Its purpose is to identify semantic relations between entities in natural language text. To date, a number of previous works have documented RE techniques, and those based on Deep Neural Networks (DNNs) have become the prevailing approach. In particular, supervised and distantly supervised methods based on DNNs are the most popular and reliable solutions for RE. This article 1) introduces some general concepts, and further 2) gives a comprehensive overview of DNNs in RE from two points of view: supervised RE, which attempts to improve standard RE systems, and distant supervision RE, which adopts DNNs to design the sentence encoder and the de-noising method. We further 3) cover some novel methods, describe recent trends, and discuss possible future research directions for this task.
Recent years have witnessed the enormous success of low-dimensional vector space representations of knowledge graphs for predicting missing facts or finding erroneous ones. Currently, however, it is not yet well understood how ontological knowledge, e.g. given as a set of (existential) rules, can be embedded in a principled way. To address this shortcoming, in this paper we introduce a framework based on convex regions, which can faithfully incorporate ontological knowledge into the vector space embedding. Our technical contribution is two-fold. First, we show that some of the most popular existing embedding approaches are not capable of modelling even very simple types of rules. Second, we show that our framework can represent ontologies that are expressed using so-called quasi-chained existential rules in an exact way, such that any set of facts induced by the vector space embedding is logically consistent and deductively closed with respect to the input ontology.
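As a toy illustration of the general idea, and not the paper's quasi-chained construction, one can take axis-aligned boxes as the convex regions; a subsumption rule then holds in the embedding exactly when the corresponding regions are nested:

```python
import numpy as np

# Toy illustration (ours): concepts as axis-aligned boxes, one simple
# family of convex regions. A rule A -> B ("every A is a B") holds in
# the embedding exactly when box(A) is contained in box(B), so any fact
# placed inside box(A) is automatically inside box(B): the embedding is
# deductively closed with respect to the rule by geometry alone.

boxes = {
    "Dog":    (np.array([2., 2.]), np.array([3., 3.])),  # (lower, upper)
    "Animal": (np.array([0., 0.]), np.array([5., 5.])),
}

def satisfies_rule(boxes, a, b):
    """Check containment box(a) <= box(b)."""
    (la, ua), (lb, ub) = boxes[a], boxes[b]
    return bool(np.all(lb <= la) and np.all(ua <= ub))

print(satisfies_rule(boxes, "Dog", "Animal"))  # True: Dog is subsumed by Animal
```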
Dynamic programming (DP) solves a variety of structured combinatorial problems by iteratively breaking them down into smaller subproblems. In spite of their versatility, DP algorithms are usually non-differentiable, which hampers their use as a layer in neural networks trained by backpropagation. To address this issue, we propose to smooth the max operator in the dynamic programming recursion using a strongly convex regularizer. This allows us to relax both the optimal value and the solution of the original combinatorial problem, and turns a broad class of DP algorithms into differentiable operators. Theoretically, we provide a new probabilistic perspective on backpropagating through these DP operators and relate them to inference in graphical models. We derive two particular instantiations of our framework: a smoothed Viterbi algorithm for sequence prediction and a smoothed DTW algorithm for time-series alignment. We showcase these instantiations on two structured prediction tasks and on structured and sparse attention for neural machine translation.
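A minimal sketch of the DTW instantiation, assuming entropic regularization so that the smoothed (soft-)min becomes a scaled log-sum-exp; the function names and toy cost matrix are ours:

```python
import numpy as np

# Hedged sketch: DTW with the hard min replaced by a smoothed min,
#   softmin_gamma(a) = -gamma * log(sum(exp(-a / gamma))),
# which makes the whole recursion differentiable in the cost matrix.
# (The paper develops the general framework and its backward pass;
# this toy computes only the smoothed forward value.)

def softmin(args, gamma):
    a = np.asarray(args) / -gamma
    m = a.max()  # stabilized log-sum-exp
    return -gamma * (m + np.log(np.exp(a - m).sum()))

def smoothed_dtw(C, gamma=1.0):
    n, m = C.shape
    R = np.full((n + 1, m + 1), np.inf)
    R[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            R[i, j] = C[i-1, j-1] + softmin(
                [R[i-1, j], R[i, j-1], R[i-1, j-1]], gamma)
    return R[n, m]

C = np.abs(np.subtract.outer([1., 2., 3.], [1., 2., 2., 3.]))
print(smoothed_dtw(C, gamma=0.1))  # approaches the hard DTW cost as gamma -> 0
```

As gamma grows the operator becomes smoother (and easier to optimize through); as gamma shrinks it recovers the original non-differentiable recursion.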