Fusemate is a logic programming system that implements the possible model semantics for disjunctive logic programs. Its input language is centered around a weak notion of stratification with comprehension and aggregation operators on top of it. Fusemate is implemented as a shallow embedding in the Scala programming language. This enables using Scala data types natively as terms, provides a tight interface with external systems, and makes model computation available as an ordinary container data structure constructor. The paper describes these features and demonstrates them with a non-trivial use case, the embedding of the description logic $\cal ALCIF$ into Fusemate's input language. This version of the paper corrects an error in the published version, which used an unsuitable version of "blocking" in the $\cal ALCIF$ embedding.
We study the computational complexity of computing or approximating a quasi-proper equilibrium for a given finite extensive form game of perfect recall. We show that the task of computing a symbolic quasi-proper equilibrium is $\mathrm{PPAD}$-complete for two-player games. For the case of zero-sum games we obtain a polynomial time algorithm based on Linear Programming. For general $n$-player games we show that computing an approximation of a quasi-proper equilibrium is $\mathrm{FIXP}_a$-complete.
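For context, the zero-sum case reduces to a classical linear program. Below is a minimal Python sketch of that basic reduction for a matrix game (maximizing the guaranteed value); it illustrates only the standard LP ingredient, not the quasi-proper refinement, which additionally requires handling perturbed strategies. The payoff matrix and variable names are illustrative.

```python
# Classical LP for a two-player zero-sum matrix game (illustrative only;
# not the paper's quasi-proper algorithm). The row player maximizes the
# guaranteed value v subject to v <= (A^T x)_j for every column j.
import numpy as np
from scipy.optimize import linprog

A = np.array([[1.0, -1.0],
              [-1.0, 1.0]])          # matching pennies, row player's payoffs
m, n = A.shape

c = np.zeros(m + 1); c[-1] = -1.0    # variables (x, v); maximize v
A_ub = np.hstack([-A.T, np.ones((n, 1))])     # v - (A^T x)_j <= 0
b_ub = np.zeros(n)
A_eq = np.hstack([np.ones((1, m)), [[0.0]]])  # sum(x) = 1
b_eq = [1.0]
bounds = [(0, 1)] * m + [(None, None)]        # v is free

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print(res.x[:m], res.x[-1])          # optimal mixed strategy and game value
```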
The success of Deep Artificial Neural Networks (DNNs) in many domains created a rich body of research concerned with hardware accelerators for compute-intensive DNN operators. However, implementing such operators efficiently with complex hardware intrinsics such as matrix multiply is a task not yet automated gracefully. Solving this task often requires joint program and data layout transformations. First solutions to this problem have been proposed, such as TVM, UNIT or ISAMIR, which work on a loop-level representation of operators and specify data layout and possible program transformations before the embedding into the operator is performed. This top-down approach creates a tension between exploration range and search space complexity, especially when also exploring data layout transformations such as im2col, channel packing or padding. In this work, we propose a new approach to this problem. We created a bottom-up method that allows the joint transformation of both computation and data layout based on the found embedding. By formulating the embedding as a constraint satisfaction problem over the scalar dataflow, every possible embedding solution is contained in the search space. Adding additional constraints and optimization targets to the solver generates the subset of preferable solutions. An evaluation using the VTA hardware accelerator with the Baidu DeepBench inference benchmark shows that our approach can automatically generate code competitive to reference implementations. Further, we show that dynamically determining the data layout based on intrinsic and workload is beneficial for hardware utilization and performance. In cases where the reference implementation has low hardware utilization due to its fixed deployment strategy, we achieve a geometric-mean speedup of up to 2.813x, while individual operators can improve by as much as 170x.
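To make the constraint-satisfaction formulation concrete, here is a toy Python sketch (the graphs, names, and brute-force search are invented for illustration; the actual system operates on full operator dataflow graphs with a real solver):

```python
# Toy CSP view of embedding an operator's scalar dataflow into a hardware
# intrinsic: find injective node mappings that preserve dataflow edges.
from itertools import permutations

operator_flow  = {("a", "c"), ("b", "c")}           # c = op(a, b)
intrinsic_flow = {("in0", "out"), ("in1", "out")}   # out = op(in0, in1)

op_nodes = sorted({v for e in operator_flow for v in e})
hw_nodes = sorted({v for e in intrinsic_flow for v in e})

def embeddings():
    """Yield every injective node mapping that preserves dataflow edges."""
    for perm in permutations(hw_nodes, len(op_nodes)):
        m = dict(zip(op_nodes, perm))
        if all((m[u], m[v]) in intrinsic_flow for u, v in operator_flow):
            yield m

for m in embeddings():
    print(m)   # e.g. {'a': 'in0', 'b': 'in1', 'c': 'out'}
```

Additional constraints (e.g., on preferred data layouts) would prune this enumeration to the subset of preferable solutions, as described above.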
Alignments provide sophisticated diagnostics that pinpoint deviations in a trace with respect to a process model and their severity. However, approaches based on trace alignments use crisp process models as reference, while recent probabilistic conformance checking approaches check the degree of conformance of an event log with respect to a stochastic process model instead of finding trace alignments. In this paper, for the first time, we provide a conformance checking approach based on trace alignments using stochastic Workflow nets. Conceptually, this requires handling two possibly contrasting forces: the cost of the alignment on the one hand, and the likelihood of the model trace with respect to which the alignment is computed on the other.
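A toy numeric illustration of this trade-off (invented numbers; not the paper's algorithm): a cheap alignment to an unlikely model trace can lose to a slightly costlier alignment to a likely one when the two forces are combined, e.g. by adding the edit cost to the negative log-likelihood of the model trace.

```python
# Toy scoring of candidate alignments: combine alignment cost with the
# negative log-likelihood of the aligned model trace (weight w assumed).
import math

candidates = [            # (name, alignment cost, model-trace probability)
    ("gamma1", 2, 0.50),
    ("gamma2", 1, 0.05),
]

def score(cost, prob, w=1.0):
    return cost + w * (-math.log(prob))    # lower is better

best = min(candidates, key=lambda c: score(c[1], c[2]))
print(best[0])   # gamma1: the likelier trace wins despite its higher cost
```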
We propose a family of four-parameter distributions that contains the K-distribution as a special case. The family is derived as a mixture distribution that uses the three-parameter reflected Gamma distribution as the parent and the two-parameter Gamma distribution as the prior. Properties of the proposed family are investigated, including the probability density function, cumulative distribution function, moments, and cumulants. The family is termed the symmetric K-distribution (SKD) based on its resemblance to the K-distribution as well as its symmetric nature. The standard form of the SKD, which often proves to be an adequate model, is also discussed. Moreover, an order-statistics analysis is provided, and the distributions of the product and ratio of two independent and identically distributed SKD random variables are derived. Finally, a generalisation of the proposed family, which enables non-zero skewness values, is investigated; both the SKD and the skew-SKD are shown to be capable of describing complex dynamics arising in machine learning, Bayesian analysis and other fields through simplified expressions with high accuracy.
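The mixture construction lends itself to direct simulation. A hedged sampling sketch follows (the parameterization and symbol names are assumptions for illustration, not the paper's notation):

```python
# Sample from the mixture: draw a scale from the Gamma prior, then draw
# a magnitude from the Gamma parent and reflect it with a random sign.
import numpy as np

rng = np.random.default_rng(0)

def sample_skd(alpha, nu, b, size=10_000):
    scale = rng.gamma(shape=nu, scale=b, size=size)         # Gamma prior
    mag   = rng.gamma(shape=alpha, scale=scale, size=size)  # Gamma parent
    sign  = rng.choice([-1.0, 1.0], size=size)              # reflection
    return sign * mag

x = sample_skd(alpha=1.5, nu=2.0, b=1.0)
print(x.mean(), x.std())   # mean near 0, as expected by symmetry
```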
We provide evidence of the existence of KAM quasi-periodic attractors for a dissipative model in Celestial Mechanics. We compute the attractors extremely close to the breakdown threshold. We consider the spin-orbit problem describing the motion of a triaxial satellite around a central planet under the simplifying assumptions that the center of mass of the satellite moves on a Keplerian orbit and that the spin-axis is perpendicular to the orbit plane and coincides with the shortest physical axis. We also assume that the satellite is non-rigid; as a consequence, the problem is affected by a dissipative tidal torque that can be modeled as a time-dependent friction, which depends linearly upon the velocity. Our goal is to fix a frequency and compute the embedding of a smooth attractor with this frequency. This task requires adjusting a drift parameter. The goal of this paper is to provide numerical calculations of the condition numbers and to verify that, when applied to the numerical solutions, they lead to the existence of the torus for values of the parameters extremely close to the parameters of breakdown. Computing reliably close to the breakdown makes it possible to discover several interesting phenomena, which we will report in [CCGdlL20a]. The numerical calculations of the condition numbers presented here are not completely rigorous, since we do not use interval arithmetic to estimate the round-off error and we do not estimate rigorously the truncation error, but we implement the usual standards in numerical analysis (using extended precision, checking that the results are not affected by the level of precision, truncation, etc.). Hence, we do not claim a computer-assisted proof, but the verification is more convincing than standard numerics. We hope that our work will stimulate a computer-assisted proof.
The problem of Approximate Nearest Neighbor (ANN) search is fundamental in computer science and has benefited from significant progress in the past couple of decades. However, most work has been devoted to point sets, whereas complex shapes have not been sufficiently treated. Here, we focus on distance functions between discretized curves in Euclidean space: they appear in a wide range of applications, from road segments to time-series in general dimension. For $\ell_p$-products of Euclidean metrics, for any $p$, we design simple and efficient data structures for ANN, based on randomized projections, which are of independent interest. They serve to solve proximity problems under a notion of distance between discretized curves, which generalizes both discrete Fr\'echet and Dynamic Time Warping distances. These are the most popular and practical approaches to comparing such curves. We offer the first data structures and query algorithms for ANN with arbitrarily good approximation factor, at the expense of increased space usage and preprocessing time over existing methods. Query time complexity is comparable to or significantly better than that of existing methods; our algorithms are especially efficient when the length of the curves is bounded.
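As a flavor of the randomized-projection ingredient (a Johnson-Lindenstrauss style sketch over plain points, not the paper's full curve data structure; dimensions and names are illustrative):

```python
# Project points to a low dimension with a Gaussian matrix, then answer
# nearest-neighbor queries approximately in the projected space.
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 1000, 128, 16

points = rng.normal(size=(n, d))
G = rng.normal(size=(d, k)) / np.sqrt(k)    # random projection matrix
sketch = points @ G                         # projected dataset

def approx_nn(query):
    """Index of the (approximate) nearest neighbor of `query`."""
    q = query @ G
    return int(np.argmin(np.linalg.norm(sketch - q, axis=1)))

print(approx_nn(rng.normal(size=d)))
```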
This paper presents our semantic parsing system for the evaluation task of open domain semantic parsing in NLPCC 2019. Many previous works formulate semantic parsing as a sequence-to-sequence (seq2seq) problem. Instead, we treat the task as a sketch-based problem in a coarse-to-fine (coarse2fine) fashion. The sketch is a high-level structure of the logical form exclusive of low-level details such as entities and predicates. In this way, we are able to optimize each part individually. Specifically, we decompose the process into three stages: sketch classification determines the high-level structure, while entity labeling and a matching network fill in the missing details. Moreover, we adopt the seq2seq method to evaluate logical form candidates from an overall perspective. The co-occurrence relationship between predicates and entities contributes to the reranking as well. Our submitted system achieves an exact-match accuracy of 82.53% on the full test set and 47.83% on the hard test subset, placing 3rd in NLPCC 2019 Shared Task 2. After optimizing parameters, network structure and sampling, the accuracy reaches 84.47% on the full test set and 63.08% on the hard test subset. (Our code and data are available at //github.com/zechagl/NLPCC2019-Semantic-Parsing.)
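A toy mock-up of the coarse-to-fine decomposition (the sketch grammar, slot names, and values below are invented): first predict a high-level sketch, then let separate components fill in entities and predicates.

```python
# Coarse stage: a sketch capturing structure only, with placeholder slots.
sketch = "( lambda x ( PRED x ENT ) )"

# Fine stage: fill the slots with the predicate and entity predicted by
# the labeling and matching components (hypothetical outputs shown).
def fill(sketch, pred, ent):
    return sketch.replace("PRED", pred).replace("ENT", ent)

print(fill(sketch, "authorOf", "some_entity"))
```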
Over the past years, there has been a resurgence of Datalog-based systems in the database community as well as in industry. In this context, it has been recognized that to handle the complex knowledge-based scenarios encountered today, such as reasoning over large knowledge graphs, Datalog has to be extended with features such as existential quantification. Yet, Datalog-based reasoning in the presence of existential quantification is in general undecidable. Many efforts have been made to define decidable fragments. Warded Datalog+/- is a very promising one, as it captures PTIME complexity while allowing ontological reasoning. Yet so far, no implementation of Warded Datalog+/- has been available. In this paper we present the Vadalog system, a Datalog-based system for performing complex logic reasoning tasks, such as those required in advanced knowledge graphs. The Vadalog system is Oxford's contribution to the VADA research programme, a joint effort of the universities of Oxford, Manchester and Edinburgh and around 20 industrial partners. As the main contribution of this paper, we illustrate the first implementation of Warded Datalog+/-, a high-performance Datalog+/- system utilizing an aggressive termination control strategy. We also provide a comprehensive experimental evaluation.
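For illustration, a textbook-style existential rule of the kind Datalog+/- adds over plain Datalog (this example is standard and not drawn from the paper): $$\mathit{Person}(x) \;\rightarrow\; \exists y\,\big(\mathit{HasFather}(x,y) \wedge \mathit{Person}(y)\big)$$ Reasoning with such rules can keep generating fresh labelled nulls for $y$, which is why termination control of the kind implemented in Vadalog is essential.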
Many resource allocation problems in the cloud can be described as a basic Virtual Network Embedding Problem (VNEP): finding mappings of request graphs (describing the workloads) onto a substrate graph (describing the physical infrastructure). In the offline setting, the two natural objectives are profit maximization, i.e., embedding a maximal number of request graphs subject to the resource constraints, and cost minimization, i.e., embedding all requests at minimal overall cost. The VNEP can be seen as a generalization of classic routing and call admission problems, in which requests are arbitrary graphs whose communication endpoints are not fixed. Due to its applications, the problem has been studied intensively in the networking community. However, the underlying algorithmic problem is hardly understood. This paper presents the first fixed-parameter tractable approximation algorithms for the VNEP. Our algorithms are based on randomized rounding. Due to the flexible mapping options and the arbitrary request graph topologies, we show that a novel linear program formulation is required. Only this novel formulation enables the computation of convex combinations of valid mappings, as it accounts for the structure of the request graphs. Accordingly, to capture the structure of request graphs, we introduce the graph-theoretic notion of extraction orders and extraction width and show that our algorithms have exponential runtime in the request graphs' maximal extraction width. Hence, for request graphs of fixed extraction width, we obtain the first polynomial-time approximations. Studying the new notion of extraction orders, we show that (i) computing extraction orders of minimal width is NP-hard and (ii) computing decomposable LP solutions is in general NP-hard, even when restricting request graphs to planar ones.
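Schematically, the randomized rounding step works as follows (toy numbers; the actual decomposition comes from the novel LP formulation described above): the fractional LP solution is decomposed into a convex combination of valid mappings, and each request is then assigned one of its mappings with probability equal to the corresponding LP weight.

```python
# Pick one valid mapping per request according to its LP weight; with the
# remaining probability the request is rejected. Weights are invented.
import numpy as np

rng = np.random.default_rng(0)

mappings = ["m1", "m2", "m3"]          # valid mappings for one request
weights  = [0.5, 0.3, 0.1]             # fractional LP weights (sum <= 1)

def round_request():
    r = rng.random()
    acc = 0.0
    for m, w in zip(mappings, weights):
        acc += w
        if r < acc:
            return m
    return None                        # rejected with probability 0.1

print([round_request() for _ in range(5)])
```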
This paper describes a suite of algorithms for constructing low-rank approximations of an input matrix from a random linear image of the matrix, called a sketch. These methods can preserve structural properties of the input matrix, such as positive-semidefiniteness, and they can produce approximations with a user-specified rank. The algorithms are simple, accurate, numerically stable, and provably correct. Moreover, each method is accompanied by an informative error bound that allows users to select parameters a priori to achieve a given approximation quality. These claims are supported by numerical experiments with real and synthetic data.
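A minimal sketch of one representative method from this family, the randomized range finder / QB factorization (parameter choices are illustrative, and unlike the paper's single-pass algorithms this simplified version revisits the input matrix):

```python
# Form a sketch Y = A @ Omega, orthonormalize it, and project A onto the
# resulting basis to obtain a rank-k approximation A ~ Q (Q^T A).
import numpy as np

rng = np.random.default_rng(0)
m, n, r = 200, 150, 10
A = rng.normal(size=(m, r)) @ rng.normal(size=(r, n))  # low-rank test input

k = r + 5                               # sketch size with oversampling
Omega = rng.normal(size=(n, k))         # random test matrix
Y = A @ Omega                           # the sketch: random image of A
Q, _ = np.linalg.qr(Y)                  # orthonormal basis for range(Y)
A_approx = Q @ (Q.T @ A)                # rank-k approximation

print(np.linalg.norm(A - A_approx) / np.linalg.norm(A))  # tiny error here
```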