FLOC 2018: FEDERATED LOGIC CONFERENCE 2018
ICLP PROCEEDINGS OF ICLP 2018: PAPERS WITH ABSTRACTS

Editors: Paul Tarau and Alessandro Dal Palù

Authors, Title and Abstract | Paper | Talk

ABSTRACT. This paper describes how XSB combines top-down and bottom-up computation through the mechanisms of variant tabling and subsumptive tabling with abstraction, respectively. It is well known that top-down evaluation of logical rules in Prolog has a procedural interpretation as recursive procedure invocation (Kowalski 1986). Tabling adds the intuition of short-circuiting redundant computations (Warren 1992). This paper shows how to introduce into tabled logic program evaluation a bottom-up component, whose procedural intuition is the initialization of a data structure: a relation is computed and filled on first demand, and then used throughout the remainder of a larger computation for efficient lookup. This allows many Prolog programs that formerly required procedural features, such as assert, for efficiency to be expressed fully declaratively.
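
A minimal sketch of the top-down ingredient in XSB-style Prolog: the edge/2 facts and node names are invented for illustration, and the subsumptive-tabling directive in the final comment is only a gesture at the bottom-up component, not the paper's exact mechanism.

    % Variant tabling of transitive closure (hypothetical edge/2 facts):
    :- table path/2.

    path(X, Y) :- edge(X, Y).
    path(X, Y) :- edge(X, Z), path(Z, Y).

    edge(a, b).  edge(b, c).  edge(c, a).

    % ?- path(a, W).
    % Repeated variant subgoals are answered from the table instead of being
    % recomputed.  Declaring the table as subsumptive (":- table path/2 as
    % subsumptive." in XSB), combined with call abstraction, is what lets an
    % open call fill the whole relation once and serve later calls by lookup.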

Jul 15 11:00

ABSTRACT. An epistemic logic program is a set of rules written in the language of Epistemic Specifications, an extension of the language of answer set programming that provides for more powerful introspective reasoning through the use of modal operators K and M. We propose adding a new construct to Epistemic Specifications called a world view constraint that provides a universal device for expressing global constraints in the various versions of the language. We further propose the use of subjective literals (literals preceded by K or M) in rule heads as syntactic sugar for world view constraints. Additionally, we provide an algorithm for finding the world views of such programs.
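
For readers unfamiliar with the language, the following is the classic textbook epistemic program in Epistemic Specifications-style syntax; the scholarship-eligibility predicates are the usual illustrative example, not taken from the paper, and the concrete spelling of K and M varies between solvers.

    % Interview a student whenever eligibility is not settled in the world view:
    eligible(X)  :- highGPA(X).
    eligible(X)  :- minority(X), fairGPA(X).
    -eligible(X) :- -fairGPA(X), -highGPA(X).
    interview(X) :- not K eligible(X), not K -eligible(X).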

Jul 15 16:15

ABSTRACT. We show how a bi-directional grammar can be used to specify and verbalise answer set programs in controlled natural language. We start from a program specification in controlled natural language and translate this specification automatically into an executable answer set program. The resulting answer set program can be modified following certain naming conventions and the revised version of the program can then be verbalised in the same subset of natural language that was used as specification language. The bi-directional grammar is parametrised for processing and generation, deals with referring expressions, and exploits symmetries in the data structure of the grammar rules whenever these grammar rules need to be duplicated. We demonstrate that verbalisation requires sentence planning in order to aggregate similar structures with the aim of improving the readability of the generated specification. Without modifications, the generated specification is always semantically equivalent to the original one; our bi-directional grammar is the first one that allows for semantic round-tripping in the context of controlled natural language processing.

Jul 16 09:30

ABSTRACT. First-order resolution has been used for type inference for many years, in applications including Hindley-Milner type inference, type classes, and constrained data types. Dependent types are a new trend in functional languages. In this paper, we show that proof-relevant first-order resolution can play an important role in automating type inference and term synthesis for dependently typed languages. We propose a calculus that translates type inference and term synthesis problems in a dependently typed language to a logic program and a goal in the proof-relevant first-order Horn clause logic. The computed answer substitution and proof term then provide a solution to the given type inference and term synthesis problem. We prove the decidability and soundness of our method.

Jul 14 14:00

ABSTRACT. In this paper, we introduce an alternative approach to Temporal Answer Set Programming that relies on a variation of Temporal Equilibrium Logic (TEL) for finite traces. This approach allows us to even out the expressiveness of TEL over infinite traces with the computational capacity of (incremental) Answer Set Programming (ASP). Also, we argue that finite traces are more natural when reasoning about action and change. As a result, our approach is readily implementable via multi-shot ASP systems and benefits from an extension of ASP's full-fledged input language with temporal operators. This includes future as well as past operators whose combination offers a rich temporal modeling language. For computation, we identify the class of temporal logic programs and prove that it constitutes a normal form for our approach. Finally, we outline two implementations, a generic one and an extension of clingo.
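
As a rough illustration of the "readily implementable via multi-shot ASP" claim, here is a finite-trace encoding in plain clingo syntax with an explicit time argument; it is not the temporal surface language of the paper, and the action and fluent names are invented.

    #const horizon = 3.
    time(0..horizon).

    % Effect and frame axiom of a toy load/shoot domain over a finite trace:
    loaded(T+1) :- load(T),                 time(T), T < horizon.
    loaded(T+1) :- loaded(T), not shoot(T), time(T), T < horizon.

    % Choose at most one action per step.
    { load(T) ; shoot(T) } 1 :- time(T), T < horizon.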

Jul 15 10:00

ABSTRACT. We introduce the asprilo [1] framework to facilitate experimental studies of approaches addressing complex dynamic applications. For this purpose, we have chosen the domain of robotic intra-logistics. This domain is not only highly relevant in the context of today's fourth industrial revolution but it moreover combines a multitude of challenging issues within a single uniform framework. This includes multi-agent planning, reasoning about action, change, resources, strategies, etc. In return, asprilo allows users to study alternative solutions as regards effectiveness and scalability. Although asprilo relies on Answer Set Programming and Python, it is readily usable by any system complying with its fact-oriented interface format. This makes it attractive for benchmarking and teaching well beyond logic programming. More precisely, asprilo consists of a versatile benchmark generator, solution checker and visualizer as well as a bunch of reference encodings featuring various ASP techniques. Importantly, the visualizer's animation capabilities are indispensable for complex scenarios like intra-logistics in order to inspect valid as well as invalid solution candidates. Also, it allows for graphically editing benchmark layouts that can be used as a basis for generating benchmark suites.

[1] asprilo stands for Answer Set Programming for robotic intra-logistics

Jul 17 11:30

ABSTRACT. Significant research has been conducted in recent years to extend Inductive Logic Programming (ILP) methods to induce a more expressive class of logic programs such as answer set programs. The methods proposed perform an exhaustive search for the correct hypothesis; thus, they are sound but not scalable to real-life datasets. Lack of scalability and inability to deal with noisy data in real-life datasets restrict their applicability. In contrast, top-down ILP algorithms such as FOIL can easily guide the search using heuristics and tolerate noise. They also scale up very well, due to the greedy nature of their search for the best hypothesis. However, in some cases, heuristics fail to direct the search in the correct direction. In this paper, we introduce the FOLD 2.0 algorithm---an enhanced version of our recently developed algorithm called FOLD. Our original FOLD algorithm automates the inductive learning of default theories. The enhancements presented here preserve the greedy nature of hypothesis search during clause specialization. These enhancements also avoid getting stuck in local optima---a major pitfall of FOIL-like algorithms. The experiments reported in this paper suggest a significant improvement in terms of accuracy and expressiveness of the class of induced hypotheses. To the best of our knowledge, our FOLD 2.0 algorithm is the first heuristic-based, scalable, and noise-resilient ILP system to induce answer set programs.
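
The flavour of hypothesis that FOLD-style systems induce is a default theory with exceptions; the two clauses below are the standard textbook example, shown only to fix intuitions, not actual output of FOLD 2.0.

    fly(X)     :- bird(X), not ab_bird(X).   % default: birds fly
    ab_bird(X) :- penguin(X).                % exception learned from negative examples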

Jul 15 16:00

ABSTRACT. Logic Programs with Ordered Disjunction (LPOD) is an extension of standard answer set programs to handle preference using the concept of ordered disjunction, and CR-Prolog2 is an extension of standard answer set programs with consistency restoring rules and LPOD-like ordered disjunction. We present reductions of each of these languages into the standard ASP language, which gives us an alternative insight into the semantics of the extensions in terms of the standard ASP language.

Jul 14 17:30

ABSTRACT. One of the most difficult problems in Artificial Intelligence is related to acquiring commonsense knowledge -- to create a collection of facts and information that an ordinary person should know. In this work, we present a system that, starting from limited background knowledge, is able to learn to form simple concepts through interactive dialogue with a user. We approach the problem using a syntactic parser, along with a mechanism to check for synonymy, to translate sentences into logical formulas represented in the Event Calculus using Answer Set Programming (ASP). Reasoning and learning tasks are then automatically generated for the translated text, with learning being initiated through question answering. The system is capable of learning with no contextual knowledge prior to the dialogue. The system has been evaluated on stories inspired by Facebook's bAbI question-answering tasks and, through appropriate question answering, is able to respond accurately to these dialogues.
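
The translation target mentioned in the abstract is the discrete Event Calculus encoded in ASP; a minimal, standard fragment of those axioms is sketched below in clingo-style syntax with a hypothetical narrative fact, to indicate the form of such programs rather than the system's actual output.

    time(0..5).

    holdsAt(F, T+1) :- happens(E, T), initiates(E, F, T),  time(T), time(T+1).
    holdsAt(F, T+1) :- holdsAt(F, T), not clipped(F, T),   time(T), time(T+1).
    clipped(F, T)   :- happens(E, T), terminates(E, F, T), time(T).

    % Hypothetical narrative extracted from "John picked up the ball":
    happens(pickup(john, ball), 1).
    initiates(pickup(P, O), holding(P, O), T) :- happens(pickup(P, O), T).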

Jul 17 17:00

ABSTRACT. Over the years the Artificial Intelligence (AI) community has produced several datasets which have given machine learning algorithms the opportunity to learn various skills across various domains. However, a subclass of these machine learning algorithms that aim at learning logic programs, namely the Inductive Logic Programming algorithms, have often failed at the task due to the vastness of these datasets. This has impacted the usability of knowledge representation and reasoning techniques in the development of AI systems. In this research, we try to address this scalability issue for the algorithms that learn answer set programs. We present a sound and complete algorithm which takes the input in a slightly different manner and performs an efficient and more user-controlled search for a solution. We show via experiments that our algorithm can learn from two popular datasets from the machine learning community, namely bAbI (a question answering dataset) and MNIST (a dataset for handwritten digit recognition), which to the best of our knowledge was not previously possible. The system is publicly available at https://goo.gl/KdWAcV.

Jul 15 14:30

ABSTRACT. This paper describes an application worked out in collaboration with a company that produces made-to-order machine components. The goal of the project is to develop a system that can support the company's engineers by automating parts of the component design process. We propose a knowledge extraction methodology based on the recent DMN (Decision Model and Notation) standard and compare a rule-based and a constraint-based method for representing the resulting knowledge. We study the advantages and disadvantages of both approaches in the context of the company's real-life application.

Jul 15 17:00

ABSTRACT. We define a novel, extensional, three-valued semantics for higher-order logic programs with negation. The new semantics is based on interpreting the types of the source language as three-valued Fitting-monotonic functions at all levels of the type hierarchy. We prove that there exists a bijection between such Fitting-monotonic functions and pairs of two-valued-result functions where the first member of the pair is monotone-antimonotone and the second member is antimonotone-monotone. By deriving an extension of consistent approximation fixpoint theory (Denecker et al. 2004) and utilizing the above bijection, we define an iterative procedure that produces for any given higher-order logic program a distinguished extensional model. We demonstrate that this model is actually a minimal one. Moreover, we prove that our construction generalizes the familiar well-founded semantics for classical logic programs, making in this way our proposal a serious candidate for the title of the well-founded semantics for higher-order logic programs.

Jul 14 12:00

ABSTRACT. This paper develops a logic programming language, ASP$^\text{EP}$, that extends the answer set programming language with a new epistemic operator $\succcurlyeq_x$ where $x\in\{\sharp,\supseteq\}$. The operator is used between two literals in rule bodies and thus allows for the representation of introspections of preferences in the presence of multiple belief sets: $G\succcurlyeq_\sharp F$ expresses that $G$ is preferred to $F$ by the cardinality of the sets, and $G\succcurlyeq_\supseteq F$ expresses that $G$ is preferred to $F$ by set-theoretic inclusion. We define the semantics of ASP$^\text{EP}$, explore its relation to the languages of strong introspection, give an algorithm for computing solutions of ASP$^\text{EP}$ programs, and study the applications of ASP$^\text{EP}$ by modeling the Monty Hall problem and the principle of majority.

Jul 15 16:45

ABSTRACT. Aggregates are among the most frequently used linguistic extensions of answer set programming. The result of an aggregation may introduce new constants during the instantiation of the input program, a feature known as value invention. When the aggregation involves literals whose truth value is undefined at instantiation time, modern grounders introduce several instances of the aggregate, one for each possible interpretation of the undefined literals. This paper introduces new data structures and techniques to handle such cases and, more generally, aggregations over the same aggregate set identified in the input ground program. The proposed solution reduces the memory footprint of the solver without sacrificing efficiency. On the contrary, the performance of the solver may improve thanks to the addition of some simple entailed clauses which are not easily discovered otherwise, and because redundant computation is avoided during propagation. Empirical evidence of the potential impact of the proposed solution is given.
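
A small clingo-style example of the value-invention situation described above: the sum S only comes into existence when the aggregate is instantiated, and because pick/1 is undefined at grounding time the grounder must consider several instances of the aggregate. All predicate names are invented.

    weight(a, 3).  weight(b, 4).

    { pick(I) } :- weight(I, _).
    total(S) :- S = #sum { W, I : pick(I), weight(I, W) }.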

Jul 14 11:30

ABSTRACT. Answer Set Programming (ASP) is a logic-based knowledge representation framework, supporting, among other reasoning modes, the central task of query answering. In the propositional case, query answering amounts to computing cautious consequences of the input program among the atoms in a given set of candidates, where a cautious consequence is an atom belonging to all stable models. Currently, the most efficient algorithms either iteratively verify the existence of a stable model of the input program extended with the complement of one candidate, where the candidate is heuristically selected, or introduce a clause enforcing the falsity of at least one candidate, so that the solver is free to choose which candidate to falsify at any time during the computation of a stable model. This paper introduces new algorithms for the computation of cautious consequences, with the aim of driving the solver to search for stable models discarding more candidates. Specifically, one of such algorithms enforces minimality on the set of true candidates, where different notions of minimality can be used, and another takes advantage of unsatisfiable core computation. The algorithms are implemented in WASP, and experiments on benchmarks from the latest ASP competitions show that the new algorithms perform better than the state-of-the-art.
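
A three-line example of the cautious-consequence task (atom b is a cautious consequence, a and c are not); the clingo flag in the comment is a standard way to check this, while WASP, the solver extended in the paper, exposes its own cautious-reasoning front-end.

    a :- not c.
    c :- not a.
    b.
    % e.g.  clingo --enum-mode=cautious program.lp
    % reports {b}, the intersection of the stable models {a,b} and {b,c}.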

Jul 16 10:00

ABSTRACT. We address the problem of verifying the satisfiability of Constrained Horn Clauses (CHCs) based on theories of inductively defined data structures, such as lists or trees. We propose a transformation technique, related to Wadler's {\em deforestation}, whose objective is the removal of these data structures from the clauses, hence reducing their satisfiability to an equivalent satisfiability problem for CHCs on integers or booleans. We propose a basic transformation algorithm and identify a class of CHCs where it always succeeds. We also consider extensions of the basic algorithm which combine CHC transformation with reasoning on integer constraints, thus enlarging the class of CHCs where inductively defined data structures can be removed. We perform an experimental evaluation of our technique by considering a benchmark of verification problems for first order OCaml programs that process lists and trees. We show that our transformation greatly improves the effectiveness of applying the Z3 solver for CHCs. Our experiments also show that the verification technique based on CHC transformation followed by CHC solving is competitive with respect to CHC solvers extended with induction.
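
To convey the goal (this is an illustration, not the paper's algorithm), consider a CHC verification problem over lists, written in Prolog-like CHC notation, and the kind of integer-only clauses one hopes to obtain after removing the list arguments; new1/3 is a hypothetical predicate introduced by the transformation.

    % Over the list ADT: appending lists of lengths N and M yields length N + M.
    false :- append(Xs, Ys, Zs), len(Xs, N), len(Ys, M), len(Zs, K), K =\= N + M.

    % After removing the lists, an equisatisfiable integer-only version:
    false :- new1(N, M, K), K =\= N + M.
    new1(0, M, M).
    new1(N, M, K) :- N >= 1, K >= 1, new1(N - 1, M, K - 1).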

Jul 14 15:00

ABSTRACT. Many real-world phenomena exhibit both relational structure and uncertainty. Probabilistic Inductive Logic Programming (PILP) uses Inductive Logic Programming (ILP) extended with probabilistic facts to produce meaningful and interpretable models for real-world phenomena. This merge between First Order Logic (FOL) theories and uncertainty makes PILP a very adequate tool for knowledge representation and extraction. However, this flexibility is coupled with a problem (inherited from ILP) of exponential search space growth and so, often, only a subset of all possible models is explored due to limited resources. Furthermore, the probabilistic evaluation of FOL theories, coming from the underlying probabilistic logic language and its solver, is also computationally demanding. This work introduces a prediction-based pruning strategy, which can reduce the search space based on the probabilistic evaluation of models, and a safe pruning criterion, which guarantees that the optimal model is not pruned away, as well as two alternative more aggressive criteria that do not provide this guarantee. Experiments performed using three benchmarks from different areas show that prediction pruning is effective in (i) maintaining predictive accuracy for all criteria and experimental settings; (ii) reducing the execution time when using some of the more aggressive criteria, compared to using no pruning; and (iii) selecting better candidate models in limited resource settings, also when compared to using no pruning.

Jul 15 17:15

ABSTRACT. Extending ASP with constraints (CASP) enhances its expressiveness and performance. This extension is not straightforward as the grounding phase, present in most ASP systems, removes variables and the links among them, and also causes a combinatorial explosion in the size of the program. This has led CASP systems to devise several methods to overcome this issue: restricting the constraint domains (e.g., discrete instead of dense), where constraints can appear, or the type (or number) of models that can be returned. In this paper we propose to incorporate constraints into s(ASP), a goal-directed, top-down execution model which implements ASP while retaining logical variables both during execution and in the answer sets. The resulting model, s(CASP), can constrain variables that (as in CLP) are kept during the execution and in the answer sets. s(CASP) inherits and generalizes the execution model of s(ASP) while parameterizing the constraint solver. We describe this novel execution model and show, through several examples, the enhanced expressiveness of s(CASP) w.r.t. ASP, CLP, and other ASP systems with constraints. We also report improved performance w.r.t. other very mature, highly optimized ASP systems in some benchmarks.
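
A tiny sketch of the kind of program s(CASP) is meant for, where a variable remains constrained over a dense domain in the answer instead of being grounded away. The constraint-operator spelling below is generic CLP(Q)-style notation and may differ from the actual s(CASP) syntax; all predicate names are invented.

    % Travel modes with dense (rational) durations; D stays symbolic.
    travel(walk,  D, T) :- T .=. D / 5.
    travel(drive, D, T) :- T .=. D / 60.
    reachable(Mode, D)  :- travel(Mode, D, T), T .<. 2.

    % ?- reachable(Mode, D).
    % Answers keep D as a constrained variable, e.g. Mode = walk with D < 10.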

Jul 14 17:00

ABSTRACT. We employ the Coq proof assistant to develop a mechanically-certified framework for evaluating graph queries and incrementally maintaining materialized graph instances, also called views. The language we use for defining queries and views is Regular Datalog (RD) -- a notable fragment of non-recursive Datalog that can express complex navigational queries, with transitive closure as native operator. We first design and encode the theory of RD and then mechanize an RD-specific evaluation algorithm capable of fine-grained, incremental graph view computation, which we prove sound with respect to the declarative RD semantics. By using the Coq extraction mechanism, we test an OCaml version of the verified engine on a set of preliminary benchmarks.

Our development is particularly focused on leveraging existing verification and notational techniques to: a) define mechanized properties that can be easily understood by logicians and database researchers and b) attain formal verification with limited effort. Our work is the first step towards a unified, machine-verified, formal framework for dynamic graph query languages and their evaluation engines.

Jul 15 11:30

ABSTRACT. Meta-Interpretive Learning (MIL) learns logic programs from examples by instantiating meta-rules. The recent Metagol system efficiently solves MIL problems by relying on the procedural bias imposed by Prolog. Its focus on positive examples, however, means that Metagol can detect the derivability of negative examples only in a later check, which can severely hurt performance. Viewed as combinatorial search problems, MIL problems can alternatively be solved by employing Answer Set Programming (ASP). Using a sophisticated ASP solver, we may expect that violations of negative examples can be propagated directly, but such an effect has never been explicitly exploited for general MIL. In fact, a straightforward ASP encoding of MIL results in a huge search space due to a lack of procedural bias and the need for grounding. To address these challenging issues, we encode MIL in the HEX formalism, an extension of ASP that allows us to outsource the background knowledge, and we restrict the search space by modeling the procedural bias. This way, the import of constants from the background knowledge can, for a given type of meta-rules, be limited to the relevant ones. Moreover, by abstracting from term manipulations in the encoding and exploiting the HEX interface mechanism, the import of such constants can be prevented completely in order to avoid the grounding bottleneck. An experimental evaluation shows promising results.
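
For context, the shape of the meta-rules that MIL instantiates is shown below in plain logical notation; the concrete metarule/3 encoding differs between Metagol versions and is not reproduced here.

    % chain:    P(A,B) :- Q(A,C), R(C,B).
    % inverse:  P(A,B) :- Q(B,A).
    % Instantiating the second-order variables P, Q, R with predicate symbols
    % yields learned clauses such as
    %   grandparent(A,B) :- parent(A,C), parent(C,B).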

Jul 15 14:00

ABSTRACT. Spatial information is often expressed using qualitative terms such as natural language expressions instead of coordinates; reasoning over such terms has several practical applications, such as bus route planning. Representing and reasoning on trajectories is a specific case of qualitative spatial reasoning that focuses on moving objects and their paths. In this work, we propose two versions of a trajectory calculus based on the allowed properties over trajectories, where trajectories are defined as a sequence of non-overlapping regions of a partitioned map. More specifically, if a given trajectory is allowed to start and finish at the same region, 6 base relations are defined (TC-6). If a given trajectory should have different start and finish regions but cycles are allowed within, 10 base relations are defined (TC-10). Both versions of the calculus are implemented as ASP programs; we propose several different encodings, including a generalised program capable of encoding any qualitative calculus in ASP. All proposed encodings are experimentally evaluated using a real-world dataset. Experimental results show that the best performing implementation can scale up to an input of 250 trajectories for TC-6 and 150 trajectories for TC-10 for the problem of discovering a consistent configuration, a significant improvement compared to previous ASP implementations for similar qualitative spatial and temporal calculi.
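
The "generalised program capable of encoding any qualitative calculus" is, in spirit, a guess-and-check skeleton like the following clingo-style sketch; the base relations and the composition table are placeholder facts, not the TC-6/TC-10 tables of the paper.

    % Guess exactly one base relation per ordered pair of objects.
    1 { holds(X, Y, R) : baserel(R) } 1 :- object(X), object(Y), X != Y.

    % Reject configurations that violate the composition table.
    :- holds(X, Y, R1), holds(Y, Z, R2), holds(X, Z, R3),
       X != Y, Y != Z, X != Z, not composition(R1, R2, R3).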

Jul 15 15:00

ABSTRACT. Conventional processor architectures are restricted in exploiting instruction level parallelism (ILP) due to the relatively low number of programmer-visible registers. Therefore, more recent processor architectures expose their datapaths so that the compiler (1) can schedule parallel instructions to different processing units and (2) can make effective use of local storage of the processing units. Among these architectures, the Synchronous Control Asynchronous Dataflow (SCAD) architecture is a new exposed datapath architecture whose processing units are equipped with first-in first-out (FIFO) buffers at their input and output ports.

In contrast to register-based machines, optimal code generation for SCAD is still a matter of research. In particular, SAT and SMT solvers were used to generate optimal resource-constrained and optimal time-constrained schedules for SCAD, respectively. As Answer Set Programming (ASP) offers better flexibility in handling such scheduling problems, we focus in this paper on using an answer set solver for both resource- and time-constrained optimal SCAD code generation. As a major benefit of using ASP, we are able to generate \emph{all} optimal schedules for a given program, which allows one to study their properties. Furthermore, the experimental results of this paper demonstrate that the answer set solver can compete with SAT solvers and outperforms SMT solvers.

Jul 17 12:00

ABSTRACT. Constraint Handling Rules (CHR) is a committed-choice language consisting of multi-headed guarded rules that rewrite constraints into simpler ones until they are solved. We propose a new proof-theoretical declarative linear semantics for Constraint Handling Rules. We demonstrate soundness and completeness of our semantics w.r.t. the operational ω_t semantics. We also propose a translation from this semantics to linear logic.
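
As a reminder of what committed-choice, multi-headed rewriting looks like, here is the standard "min" toy solver in SWI-Prolog CHR syntax; it is given only for background and is unrelated to the linear-logic semantics developed in the paper.

    :- use_module(library(chr)).
    :- chr_constraint min/1.

    % Keep the smaller of any two min/1 constraints in the store.
    min(X) \ min(Y) <=> X =< Y | true.

    % ?- min(3), min(1), min(2).   % only min(1) remains in the store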

Jul 17 17:15

ABSTRACT. The paper addresses the problem of automatic generation of natural language descriptions for ontology-described artifacts. The motivation for the work is the challenge of providing textual descriptions of automatically generated scientific workflows (e.g., paragraphs that scientists can include in their publications). The paper presents two systems which generate descriptions of sets of atoms derived from a collection of ontologies. The first system, called nlgPhylogeny, demonstrates the feasibility of the task in the Phylotastic project, which aims at providing evolutionary biologists with a platform for automatic generation of phylogenetic trees given some suitable inputs. nlgPhylogeny utilizes the fact that the Grammatical Framework (GF) is suitable for the natural language generation (NLG) task; the paper shows how elements of the ontologies in Phylotastic, such as web services and the inputs and outputs of web services, can be encoded in GF for the NLG task. The second system, called nlgOntology^A, is a generalization of nlgPhylogeny. It eliminates the requirement that a GF needs to be defined and proposes the use of annotated ontologies for NLG. Given a set of annotated ontologies, nlgOntology^A will generate a GF suitable for the generation of natural language descriptions of a set of atoms derived from these ontologies. The paper presents the algorithms used in the development of nlgPhylogeny and nlgOntology^A and discusses potential uses of these systems.

Jul 15 17:30

ABSTRACT. Evolutionary biologists have long struggled with the challenge of developing analysis workflows in a flexible manner, thus facilitating the reuse of phylogenetic knowledge. An evolutionary biology workflow can be viewed as a plan which composes web services that can retrieve, manipulate, and produce phylogenetic trees.

The Phylotastic project was launched two years ago as a collaboration between evolutionary biologists and computer scientists, with the goal of developing an open architecture to facilitate the creation of such analysis workflows. While composition of web services is a problem that has been extensively explored in the literature, including within the logic programming domain, the incarnation of the problem in Phylotastic provides a number of additional challenges. Along with the need to integrate preferences and formal ontologies in the description of the desired workflow, evolutionary biologists tend to construct workflows in an incremental manner, by successively refining the workflow, indicating desired changes (e.g., exclusion of certain services, modifications of the desired output). This leads to the need for successive iterations of incremental replanning, to develop a new workflow that integrates the requested changes while minimizing the changes to the original workflow. This paper illustrates how Phylotastic has addressed the challenges of creating and refining phylogenetic analysis workflows using logic programming technology and how such solutions have been used within the general framework of the Phylotastic project.

Jul 17 11:00

ABSTRACT. Context-sensitive global analysis of large code bases can be expensive, which can be especially problematic in interactive uses of analyzers. However, in practice each development iteration implies small modifications which are often isolated within a few modules, and analysis cost can be reduced by reusing the results of previous analyses.

This has been achieved to date on one hand through modular analysis, which can reduce the memory consumption and often localize the computation during reanalysis mainly to the modules affected by changes. In parallel, context-sensitive incremental fixpoints have been proposed that achieve cost reductions at finer levels of granularity, such as changes in program lines. However, these fine-grained techniques are not directly applicable to modular programs.

This paper describes, implements, and evaluates a context-sensitive fixpoint analysis algorithm for (Constraint) Logic Programs aimed at achieving both inter-modular (coarse-grain) and intra-modular (fine-grain) incrementality, solving the problems related to propagation of the fine-grain change information and effects across module boundaries, for additions and deletions in multiple modules.

The implementation and evaluation of our algorithm shows encouraging results: the expected advantages of fine-grain incremental analysis carry over to the modular analysis context. Furthermore, the fine-grained propagation of analysis information of our algorithm improves performance with respect to traditional modular analysis even when analyzing from scratch.

Jul 17 16:45

ABSTRACT. There has been increasing interest in recent years in the development of efficient solvers for Answer Set Programming (ASP) and in the application of ASP to increasingly challenging problems. In particular, several recent efforts have explored the issue of scalability of ASP solvers when addressing the challenges caused by the need to ground the program before resolution. This paper offers an alternative solution to this challenge, focused on the use of distributed programming techniques to reason about ASP programs whose grounding would be prohibitive for mainstream ASP solvers. The work builds on a proposal by Konczak et al. (2004), which characterized answer set solving as a form of non-standard graph coloring. The paper expands this characterization to include syntactic extensions used in modern ASP (e.g., choice rules, weight constraints). We present an implementation of the solver using a distributed programming framework specifically designed to manipulate very large graphs, as provided by Apache Spark, which in turn builds on the MapReduce programming framework. Finally, we provide a few preliminary results obtained from the first prototype implementation of this approach.

Jul 15 17:45

ABSTRACT. Pre-mappable (PreM) extrema constraints in recursive Datalog programs enable concise declarative formulations for classical algorithms (Zaniolo et al. 2017). The programs expressing these algorithms have formal non-monotonic semantics with efficient and scalable support on multiple platforms (Shkapsky et al. 2016; Yang et al. 2017). However, proving PreM for different programs can be challenging for programmers; thus, in this paper, we introduce simple templates that allow users to verify with ease that their programs have the PreM property, along with the rigorous semantics and the efficient and scalable implementation associated with it. We thus obtain simple declarative formulations for classical algorithms in two equivalent versions: one with perfect model semantics and the other with stable model semantics.
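
The canonical example behind PreM is a recursive Datalog program with a min constraint pushed inside the recursion; in the aggregate-in-head notation used by Zaniolo et al.-style systems it reads roughly as follows (illustrative, not one of the paper's templates).

    % Shortest paths: the min over D is pre-mappable into the recursion.
    spath(X, Y, min<D>) :- edge(X, Y, D).
    spath(X, Y, min<D>) :- spath(X, Z, D1), edge(Z, Y, D2), D = D1 + D2.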

Jul 17 17:45

ABSTRACT. We describe an application of Answer Set Programming to the understanding of narratives about stereotypical activities, demonstrated via question answering. Substantial work in this direction was done by Erik Mueller, who modeled stereotypical activities as scripts. His systems were able to understand a good number of narratives, but could not process texts describing exceptional scenarios. We propose addressing this problem by using a theory of intentions developed by Blount, Gelfond, and Balduccini. We present a methodology in which we substitute scripts by activities (i.e., hierarchical plans associated with goals) and employ the concept of an intentional agent to reason about both normal and exceptional scenarios. We exemplify the application of this methodology by answering questions about a number of restaurant stories.

Jul 16 09:00

ABSTRACT. Probabilistic Logic Programs (PLPs) generalize traditional logic programs and allow the encoding of models combining logical structure and uncertainty. In PLP, inference is performed by summarizing the possible worlds which entail the query in a suitable data-structure, and using it to compute the answer probability. Systems such as ProbLog, PITA, etc., use propositional data-structures like explanation graphs, BDDs, SDDs, etc., to represent the possible worlds. While this approach saves inference time due to substructure sharing, there are a number of problems where a more compact data-structure is possible. We propose a data-structure called Ordered Symbolic Derivation Diagram (OSDD) which captures the possible worlds by means of constraint formulas. We describe a program transformation technique to construct OSDDs via query evaluation, and give procedures to perform exact and approximate inference over OSDDs. Our approach has two key properties. Firstly, the exact inference procedure is a generalization of traditional inference, and results in speedup over the latter in certain settings. Secondly, the approximate technique is a generalization of likelihood weighting in Bayesian Networks, and allows us to perform sampling-based inference with lower rejection rate and variance. We evaluate the effectiveness of the proposed techniques through experiments on several problems.

Jul 17 15:00

ABSTRACT. We present a probabilistic extension of action language BC+. Just like BC+ is defined as a high-level notation of answer set programs for describing transition systems, the proposed language, which we call pBC+, is defined as a high-level notation of LPMLN programs---a probabilistic extension of answer set programs. We show how probabilistic reasoning about transition systems, such as prediction, postdiction, and planning problems, as well as probabilistic diagnosis for dynamic domains, can be modeled in pBC+ and computed using an implementation of LPMLN.

Jul 17 14:30

ABSTRACT. We present a method for automatic inference of conditions on the initial states of a program that guarantee that the safety assertions in the program are not violated. Constrained Horn clauses (CHCs) are used to model the program and assertions in a uniform way, and we use standard abstract interpretations to derive an over-approximation of the set of \emph{unsafe} initial states. The precondition is then the constraint satisfied by the complement of that set, which under-approximates the set of \emph{safe} initial states. This approach has been used before but demonstrated only on small transition systems. In this paper, we develop an iterative refinement algorithm for non-linear CHCs, and show that it produces much more precise, and in some cases optimal, approximations of the safety conditions, and can scale to larger programs. The refinement algorithm uses partial evaluation and a form of counterexample-guided abstraction refinement to focus on the relevant program states. Disjunctive constraints, which are essential to achieve good precision, are generated in a controlled way. The algorithm is implemented and tested on a benchmark suite of programs from the literature on precondition inference and from software verification competitions.
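
A toy instance of the problem setting (not taken from the paper's benchmark suite): a loop and its safety assertion modelled as CHCs in Prolog-like syntax, together with the precondition such an analysis should infer; init/2 is a hypothetical entry predicate.

    % Source program:  while (x < n) x = x + 1;  assert(x == n);
    loop(X, N)  :- init(X, N).
    loop(X1, N) :- loop(X, N), X < N, X1 = X + 1.
    false       :- loop(X, N), X >= N, X =\= N.

    % Over-approximating the unsafe initial states gives X > N, so the
    % inferred precondition on init(X, N) is its complement:  X =< N.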

Jul 15 12:00

ABSTRACT. Malformed data structures can lead to runtime errors such as arbitrary memory access or corruption. Despite this, reasoning over data-structure properties for low-level heap-manipulating programs remains challenging. In this paper we present a constraint-based program analysis that checks data-structure integrity, w.r.t. given target data-structure properties, as the heap is manipulated by the program. Our approach is to automatically generate a solver for the properties using the type definitions from the target program. The generated solver is implemented in Constraint Handling Rules (CHR), extending built-in heap, integer and equality solvers. A key property of our program analysis is that the target data-structure properties are shape neutral, i.e. the analysis does not check for properties relating to a given data-structure graph shape, such as doubly-linked lists versus trees. Nevertheless, the analysis can detect errors in a wide range of data-structure manipulating programs, including those that use lists, trees, DAGs, graphs, etc. We present an implementation based on a specialized shape-neutral constraint solver implemented in the Satisfiability Modulo Constraint Handling Rules (SMCHR) system. Experimental results show that our approach works well for real-world C programs.

Jul 17 14:00

ABSTRACT. Automated storage and retrieval systems are principal components of modern production and warehouse facilities. In particular, automated guided vehicles nowadays substitute human-operated pallet trucks in transporting production materials between storage locations and assembly stations. While low-level control systems take care of navigating such driverless vehicles along programmed routes and avoid collisions even under unforeseen circumstances, in the common case of multiple vehicles sharing the same operation area, the problem remains how to set up routes such that a collection of transport tasks is accomplished most effectively. We address this prevalent problem in the context of car assembly at Mercedes-Benz Ludwigsfelde GmbH, a large-scale producer of commercial vehicles, where routes for automated guided vehicles used in the production process have traditionally been hand-coded by human engineers. Such ad-hoc methods may suffice as long as a running production process remains in place, while any change in the factory layout or production targets necessitates tedious manual reconfiguration, not to mention the missing portability between different production plants. Unlike this, we propose a declarative approach based on Answer Set Programming to optimize the routes taken by automated guided vehicles for accomplishing transport tasks. The advantages include a transparent and executable problem formalization, provable optimality of routes relative to objective criteria, as well as elaboration tolerance towards particular factory layouts and production targets. Moreover, we demonstrate that our approach is efficient enough to deal with the transport tasks evolving in realistic production processes at the car factory of Mercedes-Benz Ludwigsfelde GmbH.
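
A drastically simplified flavour of such an ASP routing encoding is sketched below in clingo-style syntax; the predicates, the toy grid, and the constraints are invented for illustration and bear no relation to the actual encoding used at the factory.

    #const horizon = 10.
    time(1..horizon).

    % Each vehicle occupies exactly one node per time step.
    1 { at(V, N, T) : node(N) } 1 :- vehicle(V), time(T).

    % Moves must follow edges of the factory graph.
    :- at(V, N, T), at(V, M, T+1), N != M, not edge(N, M), time(T), time(T+1).

    % No two vehicles may share a node (collision avoidance).
    :- at(V1, N, T), at(V2, N, T), V1 < V2.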

Jul 16 11:00

ABSTRACT. We propose an interpretation of the first-order answer set programming (FOASP) in terms of intuitionistic proof theory. It is obtained by two polynomial translations between FOASP and the bounded-arity fragment of the \Sigma_1 level of the Mints hierarchy in first-order intuitionistic logic. It follows that \Sigma_1 formulas using predicates of fixed arity (in particular unary) are of the same strength as FOASP. Our construction reveals a close similarity between constructive provability and stable entailment, or equivalently, between the construction of an answer set and an intuitionistic refutation.

Jul 14 14:30

ABSTRACT. In this paper, we propose a variant of Answer Set Programming (ASP) with evaluable functions that extends their application to sets of objects, something that allows a fully logical treatment of aggregates. Formally, we start from the syntax of First Order Logic with equality and the semantics of Quantified Equilibrium Logic with evaluable functions (QELF). Then, we proceed to incorporate a new kind of logical term, 'intensional set' (a construct commonly used to denote the set of objects characterised by a given formula), and to extend QELF semantics for this new type of expressions. In our extended approach, intensional sets can be arbitrarily used as predicate or function arguments or even nested inside other intensional sets, just as regular first-order logical terms. As a result, aggregates can be naturally formed by the application of some evaluable function (count, sum, maximum, etc.) to a set of objects expressed as an intensional set. This approach has several advantages. First, while other semantics for aggregates depend on some syntactic transformation (either via a reduct or a formula translation), the QELF interpretation treats them as regular evaluable functions, providing a compositional semantics and avoiding any kind of syntactic restriction. Second, aggregates can be explicitly defined now within the logical language by the simple addition of formulas that fix their meaning in terms of multiple applications of some (commutative and associative) binary operation. For instance, we can use recursive rules to define sum in terms of the integer addition. Last, but not least, we prove that the semantics we obtain for aggregates coincides with the one defined by Gelfond and Zhang for the Alog language, when we make the restriction to that syntactic fragment.

Jul 14 11:00

ABSTRACT. The paper presents CHRvis, an animation extension to Constraint Handling Rules (CHR). Visualizations have always helped programmers understand data and debug programs. A picture is worth a thousand words: it can help identify where a problem is or show how something works, and it can even illustrate a relation that was not clear otherwise.

Jul 17 17:30

ABSTRACT. Instrumenting programs for performing run-time checking of properties, such as regular shapes, is a common and useful technique that helps programmers detect incorrect program behaviors. This is especially true in dynamic languages such as Prolog. However, such run-time checks inevitably introduce run-time overhead (in execution time, memory, energy, etc.). Several approaches have been proposed for reducing such overhead, such as eliminating the checks that can statically be proved to always succeed, and/or optimizing the way in which the (remaining) checks are performed. However, there are cases in which it is not possible to remove all checks statically (e.g., open libraries which must check their interfaces, complex properties, unknown code, etc.) and in which, even after optimizations, these remaining checks still may introduce an unacceptable level of overhead. It is thus important for programmers to be able to determine the additional cost due to the run-time checks and compare it to some notion of admissible cost. The common practice used for estimating run-time checking overhead is profiling, which is not exhaustive by nature. Instead, we propose a method that uses static analysis to estimate such overhead, with the advantage that the estimations are functions parameterized by input data sizes. Unlike profiling, this approach can provide guarantees for all possible execution traces, and allows assessing how the overhead grows as the size of the input grows. Our method also extends an existing assertion verification framework to express “admissible” overheads, and statically and automatically checks whether the instrumented program conforms with such specifications. Finally, we present an experimental evaluation of our approach that suggests that our method is feasible and promising.
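
For concreteness, this is the kind of assertion whose run-time checking cost is being estimated, written in standard Ciao assertion-language syntax; the admissible-overhead specifications proposed in the paper are an extension and are not shown here.

    :- module(listsum, [sum/2], [assertions]).

    % Calls must supply a list; on success the second argument is a number.
    :- pred sum(L, S) : list(L) => num(S).

    sum([], 0).
    sum([X|Xs], S) :- sum(Xs, S0), S is X + S0.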

Jul 17 16:30

ABSTRACT. Many answer set solvers utilize Satisfiability (SAT) solvers for search, and SMT solvers extend SAT solvers. This paper presents the CMODELS(DIFF) system that uses SMT solvers to find answer sets of a logic program. Its theoretical foundation is based on Niemelä's characterization of answer sets of a logic program via so-called level rankings. The comparative experimental analysis demonstrates that CMODELS(DIFF) is a viable answer set solver.
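
The level-ranking idea can be seen on a two-rule program; the following is an illustration of Niemelä's characterization in difference-logic form, not of the CMODELS(DIFF) encoding itself.

    % Program:   a :- b.     b :- a.
    % Clark's completion (a <-> b) admits the spurious model {a, b}.
    % Level rankings add an integer lr(x) per atom and require every true atom
    % to be supported by a rule whose positive body atoms have strictly
    % smaller rank:
    %   a -> (b /\ lr(b) - lr(a) <= -1)
    %   b -> (a /\ lr(a) - lr(b) <= -1)
    % These are jointly unsatisfiable for {a, b}, leaving only the empty
    % answer set, as intended.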

Jul 15 16:30