10:00 | SPEAKER: Torsten Schaub ABSTRACT. In this paper, we introduce an alternative approach to Temporal Answer Set Programming that relies on a variation of Temporal Equilibrium Logic (TEL) for finite traces. This approach allows us to balance the expressiveness of TEL over infinite traces with the computational capacity of (incremental) Answer Set Programming (ASP). We also argue that finite traces are more natural when reasoning about action and change. As a result, our approach is readily implementable via multi-shot ASP systems and benefits from an extension of ASP's full-fledged input language with temporal operators. This includes future as well as past operators, whose combination offers a rich temporal modeling language. For computation, we identify the class of temporal logic programs and prove that it constitutes a normal form for our approach. Finally, we outline two implementations, a generic one and an extension of clingo. |
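To give a flavor of the temporal input language, here is a minimal telingo-style sketch (the light-switch scenario and atom names are illustrative, not from the paper); a quoted atom 'p in a dynamic rule reads "p held in the previous state":

```
#program initial.
off.                         % the light starts off

#program dynamic.
{ toggle }.                  % each transition may toggle the switch
on  :- 'off, toggle.
off :- 'on,  toggle.
on  :- 'on,  not toggle.     % inertia
off :- 'off, not toggle.

#program final.
:- not on.                   % the light must be on in the last state
```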
11:00 | ABSTRACT. This paper describes how XSB combines top-down and bottom-up computation through the mechanisms of variant tabling and subsumptive tabling with abstraction, respectively. It is well known that top-down evaluation of logical rules in Prolog has a procedural interpretation as recursive procedure invocation (Kowalski 1986). Tabling adds the intuition of short-circuiting redundant computations (Warren 1992). This paper shows how to introduce into tabled logic program evaluation a bottom-up component, whose procedural intuition is the initialization of a data structure: a relation is computed and filled on first demand, and then used throughout the remainder of a larger computation for efficient lookup. This allows many Prolog programs to be expressed fully declaratively; such programs formerly required procedural features, such as assert, to be made efficient. |
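The tabling mechanism is easiest to see on the standard transitive-closure example (ours, not from the paper); in XSB the table for path/2 is filled on first demand and then serves later calls by lookup:

```prolog
:- table path/2.

edge(a,b).
edge(b,c).

path(X,Y) :- edge(X,Y).
path(X,Y) :- edge(X,Z), path(Z,Y).

% ?- path(a,C).   % first call computes and fills the table;
%                 % subsequent calls answer from the table.
```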
11:30 | SPEAKER: Stefania Dumbrava ABSTRACT. We employ the Coq proof assistant to develop a mechanically-certified framework for evaluating graph queries and incrementally maintaining materialized graph instances, also called views. The language we use for defining queries and views is Regular Datalog (RD) -- a notable fragment of non-recursive Datalog that can express complex navigational queries, with transitive closure as a native operator. We first design and encode the theory of RD and then mechanize an RD-specific evaluation algorithm capable of fine-grained, incremental graph view computation, which we prove sound with respect to the declarative RD semantics. By using the Coq extraction mechanism, we test an OCaml version of the verified engine on a set of preliminary benchmarks. Our development is particularly focused on leveraging existing verification and notational techniques to: a) define mechanized properties that can be easily understood by logicians and database researchers and b) attain formal verification with limited effort. Our work is the first step towards a unified, machine-verified, formal framework for dynamic graph query languages and their evaluation engines. |
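For orientation, here is a navigational query of the kind RD targets, written as plain recursive Datalog (in RD itself the closure is a single native operator); the example is ours:

```prolog
reach(X,Y) :- edge(X,Y).
reach(X,Y) :- edge(X,Z), reach(Z,Y).
```

Incremental view maintenance then amounts to propagating only the delta of reach/2 triggered by an inserted or deleted edge fact, rather than recomputing the closure from scratch.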
12:00 | SPEAKER: John Gallagher ABSTRACT. We present a method for automatic inference of conditions on the initial states of a program that guarantee that the safety assertions in the program are not violated. Constrained Horn clauses (CHCs) are used to model the program and assertions in a uniform way, and we use standard abstract interpretations to derive an over-approximation of the set of \emph{unsafe} initial states. The precondition is then the constraint satisfied by the complement of that set, which under-approximates the set of \emph{safe} initial states. This approach has been used before but demonstrated only on small transition systems. In this paper, we develop an iterative refinement algorithm for non-linear CHCs, and show that it produces much more precise, and in some cases optimal, approximations of the safety conditions, and can scale to larger programs. The refinement algorithm uses partial evaluation and a form of counterexample-guided abstraction refinement techniques to focus on the relevant program states. Disjunctive constraints, which are essential to achieve good precision, are generated in a controlled way. The algorithm is implemented and tested on a benchmark suite of programs from the precondition-inference literature and from software verification competitions. |
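A toy instance of the setting (ours, not from the paper's benchmarks), with CHCs in the usual CLP-style clause notation:

```prolog
% CHC model of:  x := x0; while (x > 0) x := x - 1; assert(x == 0)
reach(X)  :- init(X).                      % any initial state
reach(X1) :- reach(X), X > 0, X1 = X - 1.  % one loop iteration
false     :- reach(X), X =< 0, X =\= 0.    % loop exited, assertion fails
% false is derivable iff some init(X) admits X < 0, so the inferred
% safe precondition on initial states is X >= 0 (optimal here).
```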
14:00 | SPEAKER: Tobias Kaminski ABSTRACT. Meta-Interpretive Learning (MIL) learns logic programs from examples by instantiating meta-rules. The recent Metagol system efficiently solves MIL-problems by relying on the procedural bias imposed by Prolog. Its focus on positive examples, however, means that Metagol can detect the derivability of negative examples only in a later check, which can severely hurt performance. Viewed as combinatorial search problems, MIL-problems can alternatively be solved by employing Answer Set Programming (ASP). Using a sophisticated ASP solver, we may expect that violations of negative examples can be propagated directly, but such an effect has never been explicitly exploited for general MIL. In fact, a straightforward ASP-encoding of MIL results in a huge search space due to a lack of procedural bias and the need for grounding. To address these challenging issues, we encode MIL in the HEX formalism, an extension of ASP that allows us to outsource the background knowledge, and we restrict the search space by modeling the procedural bias. This way, the import of constants from the background knowledge can, for a given type of meta-rules, be limited to the relevant ones. Moreover, by abstracting from term manipulations in the encoding and exploiting the HEX interface mechanism, the import of such constants can be prevented completely in order to avoid the grounding bottleneck. An experimental evaluation shows promising results. |
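A typical meta-rule is the chain rule used by Metagol, where P, Q, R are second-order variables ranging over predicate symbols (a standard MIL example, not specific to this paper):

```prolog
% Chain meta-rule:  P(A,B) :- Q(A,C), R(C,B).
% One instantiation, with P = grandparent and Q = R = parent:
grandparent(A,B) :- parent(A,C), parent(C,B).
```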
14:30 | SPEAKER: Arindam Mitra ABSTRACT. Over the years, the Artificial Intelligence (AI) community has produced several datasets which have given machine learning algorithms the opportunity to learn various skills across various domains. However, a subclass of these machine learning algorithms that aims at learning logic programs, namely the Inductive Logic Programming algorithms, has often failed at the task due to the vastness of these datasets. This has impacted the usability of knowledge representation and reasoning techniques in the development of AI systems. In this research, we try to address this scalability issue for the algorithms that learn Answer Set Programs. We present a sound and complete algorithm which takes the input in a slightly different manner and performs an efficient and more user-controlled search for a solution. We show via experiments that our algorithm can learn from two popular datasets from the machine learning community, namely bAbI (a question answering dataset) and MNIST (a dataset for handwritten digit recognition), which to the best of our knowledge was not previously possible. The system is publicly available at https://goo.gl/KdWAcV. |
15:00 | SPEAKER: George Baryannis ABSTRACT. Spatial information is often expressed using qualitative terms such as natural language expressions instead of coordinates; reasoning over such terms has several practical applications, such as bus route planning. Representing and reasoning about trajectories is a specific case of qualitative spatial reasoning that focuses on moving objects and their paths. In this work, we propose two versions of a trajectory calculus based on the allowed properties over trajectories, where trajectories are defined as a sequence of non-overlapping regions of a partitioned map. More specifically, if a given trajectory is allowed to start and finish at the same region, 6 base relations are defined (TC-6). If a given trajectory must have different start and finish regions but cycles are allowed within, 10 base relations are defined (TC-10). Both versions of the calculus are implemented as ASP programs; we propose several different encodings, including a generalised program capable of encoding any qualitative calculus in ASP. All proposed encodings are experimentally evaluated using a real-world dataset. Experiment results show that the best performing implementation can scale up to an input of 250 trajectories for TC-6 and 150 trajectories for TC-10 for the problem of discovering a consistent configuration, a significant improvement compared to previous ASP implementations for similar qualitative spatial and temporal calculi. |
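To give a flavor of such a generalised encoding, here is a minimal clingo-style sketch of the usual consistency check for a qualitative calculus (predicate names ours; the calculus-specific relation and composition-table facts are left out):

```
% rel/1 lists the base relations, comp/3 the allowed compositions,
% traj/1 the objects (e.g., traj(1..n)).
1 { holds(X,Y,R) : rel(R) } 1 :- traj(X), traj(Y), X < Y.
:- holds(X,Y,R1), holds(Y,Z,R2), holds(X,Z,R3), not comp(R1,R2,R3).
```

A consistent configuration exists exactly when the program has an answer set.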
16:00 | SPEAKER: Gopal Gupta ABSTRACT. Significant research has been conducted in recent years to extend Inductive Logic Programming (ILP) methods to induce a more expressive class of logic programs, such as answer set programs. The methods proposed perform an exhaustive search for the correct hypothesis; thus, they are sound but not scalable to real-life datasets. This lack of scalability and the inability to deal with noisy data restrict their applicability. In contrast, top-down ILP algorithms such as FOIL can easily guide the search using heuristics and tolerate noise. They also scale up very well, due to the greedy nature of their search for the best hypothesis. However, in some cases, heuristics fail to direct the search in the correct direction. In this paper, we introduce the FOLD 2.0 algorithm---an enhanced version of our recently developed algorithm called FOLD. Our original FOLD algorithm automates the inductive learning of default theories. The enhancements presented here preserve the greedy nature of hypothesis search during clause specialization, while avoiding getting stuck in local optima, a major pitfall of FOIL-like algorithms. The experiments we report in this paper suggest a significant improvement in terms of accuracy and expressiveness of the class of induced hypotheses. To the best of our knowledge, our FOLD 2.0 algorithm is the first heuristic-based, scalable, and noise-resilient ILP system to induce answer set programs. |
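For readers unfamiliar with default theories, the following standard textbook example (ours, not from the paper) shows the default-with-exception pattern that FOLD induces, expressed via negation as failure:

```prolog
flies(X) :- bird(X), not ab(X).   % default: birds fly
ab(X)    :- penguin(X).           % exception: penguins are abnormal
bird(X)  :- penguin(X).
```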
16:15 | SPEAKER: Patrick Kahl ABSTRACT. An epistemic logic program is a set of rules written in the language of Epistemic Specifications, an extension of the language of answer set programming that provides for more powerful introspective reasoning through the use of modal operators K and M. We propose adding a new construct to Epistemic Specifications called a world view constraint that provides a universal device for expressing global constraints in the various versions of the language. We further propose the use of subjective literals (literals preceded by K or M) in rule heads as syntactic sugar for world view constraints. Additionally, we provide an algorithm for finding the world views of such programs. |
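As background, the classic scholarship-eligibility example illustrates the introspection that the modal operators provide; the final rule uses subjective literals in its body (the example is the standard one from the Epistemic Specifications literature, schematic, not from this paper):

```
% K l holds iff l is true in every belief set of the program;
% interview applies exactly when eligibility is undecided.
eligible(X)  :- highGPA(X).
eligible(X)  :- minority(X), fairGPA(X).
-eligible(X) :- -fairGPA(X), -highGPA(X).
interview(X) :- not K eligible(X), not K -eligible(X).
```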
16:30 | SPEAKER: Da Shen ABSTRACT. Many answer set solvers utilize Satisfiability (SAT) solvers for search, and SMT solvers extend SAT solvers. This paper presents the CMODELS(DIFF) system, which uses SMT solvers to find answer sets of a logic program. Its theoretical foundation is Niemelä's characterization of answer sets of a logic program via so-called level rankings. A comparative experimental analysis demonstrates that CMODELS(DIFF) is a viable answer set solver. |
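The level-ranking idea can be sketched as follows (our notation, paraphrasing Niemelä's characterization): a set $X$ of atoms is an answer set of a normal program $P$ iff $X$ is a supported model of $P$ and there is a ranking $lr$ on $X$ such that

```latex
\forall a \in X\;\exists r \in P:\quad
  head(r) = a,\quad X \models body(r),\quad
  \forall b \in body^{+}(r):\; lr(a) - lr(b) \geq 1
```

Since the side conditions are difference constraints of the form $lr(a) - lr(b) \geq 1$, they map directly onto the difference-logic fragment that SMT solvers handle natively.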
16:45 | ABSTRACT. This paper develops a logic programming language, ASP$^\text{EP}$, that extends the answer set programming language with a new epistemic operator $\succcurlyeq_x$ where $x\in\{\sharp,\supseteq\}$. The operator is used between two literals in rule bodies, and thus allows for the representation of introspection of preferences in the presence of multiple belief sets: $G\succcurlyeq_\sharp F$ expresses that $G$ is preferred to $F$ by the cardinality of the sets, and $G\succcurlyeq_\supseteq F$ expresses that $G$ is preferred to $F$ by set-theoretic inclusion. We define the semantics of ASP$^\text{EP}$, explore its relation to the languages of strong introspection, give an algorithm for computing solutions of ASP$^\text{EP}$ programs, and study applications of ASP$^\text{EP}$ by modeling the Monty Hall problem and the principle of majority. |
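For intuition, a hypothetical ASP$^\text{EP}$-style rule for the Monty Hall model might take the following form (our sketch; the paper's concrete syntax may differ):

```latex
switch \;\leftarrow\; win(switch) \succcurlyeq_\sharp win(stay)
```

read: commit to switching whenever winning by switching holds in at least as many belief sets as winning by staying.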
17:00 | SPEAKER: Bram Aerts ABSTRACT. This paper describes an application worked out in collaboration with a company that produces made-to-order machine components. The goal of the project is to develop a system that can support the company's engineers by automating parts of the component design process. We propose a knowledge extraction methodology based on the recent DMN (Decision Model and Notation) standard and compare a rule-based and a constraint-based method for representing the resulting knowledge. We study the advantages and disadvantages of both approaches in the context of the company's real-life application. |
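To make the comparison concrete, here is a hypothetical fragment of such a decision (attribute names and values ours, not the company's) in the two styles:

```
% Rule-based (directional): fires from inputs to the decision.
thickness(t10) :- load(heavy).
thickness(t5)  :- load(light).
% Constraint-based (non-directional): the same knowledge stated as
%   load(heavy) <=> thickness(t10)
% also supports reasoning from a known thickness back to the load.
```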
17:15 | SPEAKER: Inês Dutra ABSTRACT. Many real-world phenomena exhibit both relational structure and uncertainty. Probabilistic Inductive Logic Programming (PILP) uses Inductive Logic Programming (ILP) extended with probabilistic facts to produce meaningful and interpretable models for real-world phenomena. This merge between First Order Logic (FOL) theories and uncertainty makes PILP a particularly suitable tool for knowledge representation and extraction. However, this flexibility is coupled with a problem (inherited from ILP) of exponential search space growth and so, often, only a subset of all possible models is explored due to limited resources. Furthermore, the probabilistic evaluation of FOL theories, coming from the underlying probabilistic logic language and its solver, is also computationally demanding. This work introduces a prediction-based pruning strategy, which can reduce the search space based on the probabilistic evaluation of models, and a safe pruning criterion, which guarantees that the optimal model is not pruned away, as well as two alternative more aggressive criteria that do not provide this guarantee. Experiments performed using three benchmarks from different areas show that prediction pruning is effective in (i) maintaining predictive accuracy for all criteria and experimental settings; (ii) reducing the execution time when using some of the more aggressive criteria, compared to using no pruning; and (iii) selecting better candidate models in limited resource settings, also when compared to using no pruning. |
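As a reminder of the PILP setting, here is a minimal ProbLog-style program (example ours): probabilistic facts carry probabilities, and candidate rules are evaluated against them:

```prolog
0.8::edge(a,b).                    % probabilistic facts
0.6::edge(b,c).
path(X,Y) :- edge(X,Y).            % candidate (learned) rules
path(X,Y) :- edge(X,Z), path(Z,Y).
% With a single proof, P(path(a,c)) = 0.8 * 0.6 = 0.48; computing
% such probabilities for every candidate model is the costly step
% that the pruning strategy targets.
```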
17:30 | SPEAKER: Van Nguyen ABSTRACT. The paper addresses the problem of automatic generation of natural language descriptions for ontology-described artifacts. The motivation for the work is the challenge of providing textual descriptions of automatically generated scientific workflows (e.g., paragraphs that scientists can include in their publications). The paper presents two systems which generate descriptions of sets of atoms derived from a collection of ontologies. The first system, called nlgPhylogeny, demonstrates the feasibility of the task in the Phylotastic project, which aims at providing evolutionary biologists with a platform for the automatic generation of phylogenetic trees given some suitable inputs. nlgPhylogeny utilizes the fact that the Grammatical Framework (GF) is suitable for the natural language generation (NLG) task; the paper shows how elements of the ontologies in Phylotastic, such as web services and the inputs and outputs of web services, can be encoded in GF for the NLG task. The second system, called nlgOntology^A, is a generalization of nlgPhylogeny. It eliminates the requirement that a GF needs to be defined and proposes to use annotated ontologies for NLG. Given a set of annotated ontologies, nlgOntology^A will generate a GF suitable for the generation of natural language descriptions of a set of atoms derived from these ontologies. The paper presents the algorithms used in the development of nlgPhylogeny and nlgOntology^A and discusses potential uses of these systems. |
17:45 | SPEAKER: Federico Igne ABSTRACT. There has been increasing interest in recent years in the development of efficient solvers for Answer Set Programming (ASP) and in the application of ASP to solve increasingly challenging problems. In particular, several recent efforts have explored the issue of scalability of ASP solvers when addressing the challenges caused by the need to ground the program before resolution. This paper offers an alternative solution to this challenge, focused on the use of distributed programming techniques to reason about ASP programs whose grounding would be prohibitive for mainstream ASP solvers. The work builds on the approach proposed by Konczak et al. in 2004, which characterized answer set solving as a form of non-standard graph coloring. The paper expands this characterization to include syntactic extensions used in modern ASP (e.g., choice rules, weight constraints). We present an implementation of the solver using a distributed programming framework specifically designed to manipulate very large graphs, as provided by Apache Spark, which in turn builds on the MapReduce programming framework. Finally, we provide a few preliminary results obtained from the first prototype implementation of this approach. |
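The syntactic extensions mentioned can be seen in a toy coloring program (ours), which also echoes the graph-coloring characterization, in clingo syntax:

```
node(1..3). edge(1,2). edge(2,3). col(r). col(g).
1 { color(N,C) : col(C) } 1 :- node(N).   % choice rule
:- edge(X,Y), color(X,C), color(Y,C).     % no clash along edges
:- #count { N : color(N,r) } > 2.         % aggregate (weight-style)
```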