FLOC 2018: FEDERATED LOGIC CONFERENCE 2018
PROGRAM FOR SATURDAY, JULY 14TH

09:00-10:00 Session 93B: F-IDE/Overture invited talk: Jean-Christophe Filliatre (F-IDE and Overture)
09:00
Auto-active verification using Why3's IDE

ABSTRACT. Why3 is a platform for deductive program verification. It features a rich language for specification and programming, called WhyML, and relies on external theorem provers, both automated and interactive, to discharge verification conditions. Why3 comes with a dedicated IDE where users edit source files and build proofs interactively using a blend of logical transformations and calls to external theorem provers. In this talk, I will illustrate the typical workflow of program verification using Why3's IDE, focusing on the key features of WhyML, auto-active verification, and proof maintenance.

10:00-10:30 Session 94A: Program verification (F-IDE)
10:00
Lightweight Interactive Proving inside an Automatic Program Verifier

ABSTRACT. Among formal methods, the deductive verification approach allows establishing the strongest possible formal guarantees on critical software. The downside is the cost in terms of human effort required to design adequate formal specifications and to successfully discharge the required proof obligations. To popularize deductive verification in an industrial software development environment, it is essential to provide means to progressively transition from simple and automated approaches to deductive verification. The SPARK 2014 environment, for the development of critical software written in Ada, works towards this goal by providing automated tools for formally proving that some code fulfills the requirements expressed in Ada contracts.

In a program verifier that makes use of automatic provers to discharge the proof obligations, a need for some additional user interaction with proof tasks shows up: either to help analyze the reason for a proof failure or, ultimately, to discharge the verification conditions that are out of reach of state-of-the-art automatic provers. Adding interactive proof features to SPARK 2014 appears to be complicated by the fact that the proof tool chain makes use of the independent, intermediate verification tool Why3, which is generic enough to accept multiple front-ends for different input languages. This paper reports on our approach to extend Why3 with interactive proof features and also with a generic client/server infrastructure allowing integration of proof interaction into an external, front-end graphical user interface such as that of SPARK 2014.

10:00-10:30 Session 94B: Overture: Tools (Overture)
10:00
Welcome to the Overture Workshop

ABSTRACT. Introduction to the 16th Overture workshop

10:10
Enhancing Testing of VDM-SL models

ABSTRACT. Testing of VDM-SL models is currently a tedious and error-prone task due to a lack of support for defining tests, executing tests automatically, and validating test results. In VDM++, test-driven development is supported by the VDMUnit framework, which offers many of the features one would expect from a modern testing framework. However, since VDMUnit relies on object-orientation and exception handling, this framework does not work for testing VDM-SL models. In this paper, we discuss the challenges of testing VDM-SL models, and propose a library extension of Overture/VDMUnit that improves this situation. We demonstrate usage of this library extension, and show how it also enables one to reuse tests to validate code-generated VDM-SL models.

10:30-11:00 Coffee Break
11:00-12:30 Session 95A: Model Checking (CAV)
Location: Maths LT1
11:00
Propositional Dynamic Logic for Higher-Order Functional Programs
SPEAKER: Yuki Satake

ABSTRACT. We present an extension of propositional dynamic logic called HOT-PDL for specifying temporal properties of higher-order functional programs. The semantics of HOT-PDL is defined over Higher-Order Traces (HOTs) that model execution traces of higher-order programs. A HOT is a sequence of events such as function calls and returns, equipped with two kinds of pointers inspired by the notion of justification pointers from game semantics: one for capturing the correspondence between call and return events, and the other for capturing higher-order control flow involving a function that is passed to or returned by a higher-order function. To allow traversal of the new kinds of pointers, HOT-PDL extends PDL with new path expressions. The extension enables HOT-PDL to specify interesting properties of higher-order programs, including stack-based access control properties and those definable using dependent refinement types. We show that HOT-PDL model checking of higher-order functional programs over bounded integers is decidable via a reduction to modal μ-calculus model checking of higher-order recursion schemes.

11:15
Syntax-Guided Termination Analysis

ABSTRACT. We present new algorithms for proving program termination and non-termination using syntax-guided synthesis. They exploit the symbolic encoding of the program and automatically construct a formal grammar for symbolic constraints that are used to synthesize either a termination argument or a never-terminating program refinement. The constraints are then added back to the program encoding, and an off-the-shelf constraint solver decides on their fitness and on the progress of the algorithms. The evaluation of our implementation, called FreqTerm, shows that although the number of constraints is always finite and the formal grammar is limited to the syntax of the program, in the majority of cases our algorithms are effective and fast. Importantly, FreqTerm is competitive with the state-of-the-art on a wide range of terminating and non-terminating benchmarks, and it significantly outperforms the state-of-the-art on proving non-termination of a class of programs arising from large-scale Event-Condition-Action systems.

11:30
Model Checking Quantitative Hyperproperties
SPEAKER: Hazem Torfah

ABSTRACT. Hyperproperties are properties of sets of computation traces. In this paper, we study quantitative hyperproperties, which we define as hyperproperties that express a bound on the number of traces that may appear in a certain relation. For example, quantitative non-interference limits the amount of information about certain secret inputs that is leaked through the observable outputs of a system. Quantitative non-interference thus bounds the number of traces that have the same observable input but different observable output. We study quantitative hyperproperties in the setting of HyperLTL, a temporal logic for hyperproperties. We show that, while quantitative hyperproperties can be expressed in HyperLTL, the running time of the HyperLTL model checking algorithm is, depending on the type of property, exponential or even doubly exponential in the quantitative bound. We improve this complexity with a new model checking algorithm based on model-counting. The new algorithm needs only logarithmic space in the bound and therefore improves, depending on the property, exponentially or even doubly exponentially over the model checking algorithm of HyperLTL. In the worst case, the running time of the new algorithm is exponential in the size of the system. Our SAT-based prototype implementation demonstrates, however, that the counting approach is viable on systems with nontrivial quantitative information flow requirements such as a passcode checker.
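
To make the counting idea concrete, quantitative non-interference with a leakage bound of c bits can be written schematically as follows (notation ours, purely for illustration, using a counting quantifier # over traces):

    \forall \pi.\ \#\sigma.\ \bigl( \Box (i_\sigma = i_\pi) \wedge \Diamond (o_\sigma \neq o_\pi) \bigr) \;\le\; 2^c

That is, for every trace π, at most 2^c traces σ agree with π on the observable inputs at every step and yet eventually differ on the observable outputs.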

11:45
Exploiting Synchrony and Symmetry in Relational Verification
SPEAKER: Lauren Pick

ABSTRACT. Relational safety specifications describe multiple runs of the same program or relate the behaviors of multiple programs. Approaches to automatic relational verification often compose the programs and analyze the result for safety, but a naively composed program can lead to difficult verification problems. We propose to exploit relational specifications for simplifying the generated verification subtasks. First, we maximize opportunities for synchronizing code fragments. Second, we compute symmetries in the specifications to reveal and avoid redundant subtasks. We have implemented these enhancements in a prototype for verifying k-safety properties on Java programs. Our evaluation confirms that our approach leads to a consistent performance speedup on a range of benchmarks.
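
As a minimal illustration of the composition idea mentioned above (ours, not the paper's tool or benchmarks), a 2-safety property such as monotonicity can be checked by running two copies of the same program and asserting an ordinary safety property of the pair of runs:

    # Toy 2-safety check by self-composition: run two copies of the same
    # program and assert the relational specification on the pair of runs.
    def program(x):
        y = 0
        while x > 0:
            y, x = y + x, x - 1
        return y  # computes x*(x+1)/2, a monotone function of x

    def composed(x1, x2):
        out1, out2 = program(x1), program(x2)
        # relational spec: x1 <= x2 implies program(x1) <= program(x2)
        assert not (x1 <= x2) or out1 <= out2, "2-safety violation"

    for a in range(20):
        for b in range(20):
            composed(a, b)
    print("monotonicity holds on all tested input pairs")

A real verifier analyzes the composed program symbolically rather than by testing; the point here is only the reduction of a relational property to a plain safety property.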

12:00
JBMC: A Bounded Model Checking Tool for Verifying Java Bytecode

ABSTRACT. We present JBMC (Java Bounded Model Checker), a bounded model checking tool for verifying Java bytecode, built on top of the CProver framework. In summary, JBMC processes Java bytecode together with a model of the standard Java libraries, with the goal of checking a set of desired properties. Experimental results show that JBMC can correctly verify a set of Java benchmarks extracted from the literature, and that it performs competitively with JayHorn and JPF, which are state-of-the-art Java verifiers based on constrained Horn clauses and path-based symbolic execution, respectively.

12:15
Eager Abstraction for Symbolic Model Checking

ABSTRACT. We present a method of abstracting parameterized or infinite-state symbolic model checking problems to finite-state problems, based on propositional skeletons and eager theory explication. The method is extensible in the sense that users can add abstractions (or refine existing abstractions) by providing axiom schemata. We evaluate the method on a collection of parameterized and infinite-state case studies, including the FLASH cache coherence protocol and the Virtually Synchronous Paxos distributed consensus protocol. The approach is seen to substantially reduce the complexity of the required auxiliary invariants in comparison to invariant checking using an SMT solver.

11:00-12:30 Session 95B: DS-FM invited talk: Sylvain Conchon (DS-FM)
Location: Blavatnik LT1
11:00
Cubicle: a model checker for parameterized array-based transition systems

ABSTRACT. In this talk, I will present Cubicle, a model checker for verifying safety properties of array-based systems, a syntactically restricted class of parameterized transition systems with states represented as arrays indexed by an arbitrary number of processes. Cache coherence protocols and mutual exclusion algorithms are typical examples of such systems. After a short presentation of the semantics of its input language, I will present some details about its implementation, mainly its symbolic backward reachability analysis, which makes use of Satisfiability Modulo Theories. I will conclude with a presentation of current and future work.

11:45
Heuristic-Based GR(1) Assumptions Refinement

ABSTRACT. In order to automatically synthesize a controller satisfying a specification given in GR(1) (a subset of linear temporal logic), the environment where the controller is expected to operate needs to be characterized by a sufficient set of GR(1) assumptions. Assumptions refinement procedures identify alternative sets of assumptions that make controller synthesis possible. However, since assumption spaces are intractably large, techniques to explore a subset of them in a guided fashion are needed. In particular, it is important to identify the weakest assumption refinements, to avoid overconstraining the environment and hence rendering the controller inadequate. The objective of my research is to devise a heuristic search approach that uses estimates of the quality of explored assumptions to direct the search towards better solutions. The work involves defining computable metrics that capture quality features of assumptions (such as their weakness), and automated ways to select a good subset of refinements in the search procedure.

11:00-12:30 Session 95C: User interfaces for formal tools (F-IDE)
11:00
User Support for the Combinatory Logic Synthesizer Framework
SPEAKER: Anna Vasileva

ABSTRACT. Usability is crucial for the adoption of software development technologies. This is especially true in development stages where build processes fail because software is not yet complete or was incompletely modified. We present early work that aims to improve the usability of the Combinatory Logic Synthesizer (CL)S framework, especially in these stages. (CL)S is a publicly available type-based development tool for the automatic composition of software components from a user-specified repository. It provides an implementation of a type inhabitation algorithm for Combinatory Logic with intersection types, which is fully integrated into the Scala programming language. Here, we specifically focus on building a web-based IDE to make potentially incomplete or erroneous input specifications for, and decisions of, the algorithm understandable for non-experts. A main aspect of this is providing graphical representations illustrating the step-wise search process of the algorithm. We also provide a detailed discussion of possible future work to further improve the understandability of these representations.

11:30
AsmetaF: a flattener for the ASMETA framework
SPEAKER: Paolo Arcaini

ABSTRACT. Abstract State Machines (ASMs) have been shown to be a suitable high-level specification method for complex, even industrial, systems; the ASMETA framework, supporting several validation and verification activities on ASM models, is an example of a formal integrated development environment. Although ASMs allow modeling complex systems in a rather concise way (and this is advantageous for specification purposes), such a concise notation is in general a problem for verification activities such as model checking and theorem proving, which rely on tools accepting simpler notations. In this paper, we propose a flattener tool integrated in the ASMETA framework that transforms a general ASM model into a flattened model consisting only of update, parallel, and conditional rules; such a model is easier to map to the notations of verification tools. Experiments show the effect of applying the tool to some representative case studies of the ASMETA repository.

12:00
Improving the Visualization of Alloy Instances
SPEAKER: Rui Couto

ABSTRACT. Alloy is a lightweight formal specification language, supported by an IDE, which has proven well-suited for reasoning about software design in early development stages. The IDE provides a Visualizer that produces graphical representations of analysis results, which is essential for the proper validation of the model. Alloy is a rich language but inherently static, so behavior needs to be explicitly encoded and reasoned about. Even though this is a common scenario, the Visualizer presents limitations when dealing with such models. The main contribution of this paper is a principled approach to generate instance visualizations, which improves the current Alloy Visualizer, focusing on the representation of behavior.

11:00-12:30 Session 95D: Ally Skills (FLoC)
Location: Blavatnik LT2

"Allies" are people who support a group who are commonly the subject of discrimination or prejudice, but who are not members of that group. The Ally Skills session teaches simple everyday ways for allies to use their privilege and influence to support people who are targets of systemic oppression in their workplaces and communities. This includes women of all races, people of color, people with disabilities, LGBTQ folks, parents, caregivers of all sorts, and people of different ages.

The format of the session will be some introductory material, followed by group-based discussion of several scenarios. All are welcome, hope to see you there!

For any questions, please contact Stephen Chong <chong@seas.harvard.edu> and Alexandra Silva <alexandra.silva@ucl.ac.uk>.

11:00-12:30 Session 95E: Foundations I (ICLP)
11:00
Functional ASP with Intensional Sets; Application to Gelfond-Zhang Aggregates

ABSTRACT. In this paper, we propose a variant of Answer Set Programming (ASP) with evaluable functions that extends their application to sets of objects, something that allows a fully logical treatment of aggregates. Formally, we start from the syntax of First Order Logic with equality and the semantics of Quantified Equilibrium Logic with evaluable functions (QELF). Then, we proceed to incorporate a new kind of logical term, 'intensional set' (a construct commonly used to denote the set of objects characterised by a given formula), and to extend QELF semantics for this new type of expressions. In our extended approach, intensional sets can be arbitrarily used as predicate or function arguments, or even nested inside other intensional sets, just as regular first-order logical terms. As a result, aggregates can be naturally formed by the application of some evaluable function (count, sum, maximum, etc.) to a set of objects expressed as an intensional set. This approach has several advantages. First, while other semantics for aggregates depend on some syntactic transformation (either via a reduct or a formula translation), the QELF interpretation treats them as regular evaluable functions, providing a compositional semantics and avoiding any kind of syntactic restriction. Second, aggregates can now be explicitly defined within the logical language by the simple addition of formulas that fix their meaning in terms of multiple applications of some (commutative and associative) binary operation. For instance, we can use recursive rules to define sum in terms of integer addition. Last, but not least, we prove that the semantics we obtain for aggregates coincides with the one defined by Gelfond and Zhang for the Alog language, when we restrict to that syntactic fragment.
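
As a small illustration of the idea (our own rendering, not an excerpt from the paper), an aggregate is simply an evaluable function applied to an intensional set term:

    \mathit{total}(d) = \mathrm{sum}\,\{\, \mathit{salary}(x) : \mathit{works}(x, d) \,\}

Here the intensional set { salary(x) : works(x, d) } is an ordinary first-order term, and may itself appear nested inside other intensional sets or function arguments.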

11:30
Shared aggregate sets in answer set programming

ABSTRACT. Aggregates are among the most frequently used linguistic extensions of answer set programming. The result of an aggregation may introduce new constants during the instantiation of the input program, a feature known as value invention. When the aggregation involves literals whose truth value is undefined at instantiation time, modern grounders introduce several instances of the aggregate, one for each possible interpretation of the undefined literals. This paper introduces new data structures and techniques to handle such cases and, more generally, multiple aggregations over the same aggregate set identified in the input ground program. The proposed solution reduces the memory footprint of the solver without sacrificing efficiency. On the contrary, the performance of the solver may improve thanks to the addition of some simple entailed clauses which are not easily discovered otherwise, and because redundant computation is avoided during propagation. Empirical evidence of the potential impact of the proposed solution is given.

12:00
Approximation Fixpoint Theory and the Well-Founded Semantics of Higher-Order Logic Programs

ABSTRACT. We define a novel, extensional, three-valued semantics for higher-order logic programs with negation. The new semantics is based on interpreting the types of the source language as three-valued Fitting-monotonic functions at all levels of the type hierarchy. We prove that there exists a bijection between such Fitting-monotonic functions and pairs of two-valued-result functions where the first member of the pair is monotone-antimonotone and the second member is antimonotone-monotone. By deriving an extension of consistent approximation fixpoint theory (Denecker et al. 2004) and utilizing the above bijection, we define an iterative procedure that produces for any given higher-order logic program a distinguished extensional model. We demonstrate that this model is actually a minimal one. Moreover, we prove that our construction generalizes the familiar well-founded semantics for classical logic programs, thereby making our proposal a serious candidate for the title of the well-founded semantics for higher-order logic programs.

11:00-12:30 Session 95F: SMT 1 (IJCAR)
Location: Maths LT2
11:00
Efficient Interpolation for the Theory of Arrays

ABSTRACT. Existing techniques for Craig interpolation for the quantifier-free fragment of the theory of arrays are inefficient for computing sequence and tree interpolants: the solver needs to run for every partitioning (A, B) of the interpolation problem to avoid creating AB-mixed terms. We present a new approach using Proof Tree Preserving Interpolation and an array solver based on Weak Equivalence on Arrays. We give an interpolation algorithm for the lemmas produced by the array solver. The computed interpolants have worst-case exponential size for extensionality lemmas and worst-case quadratic size otherwise. We show that these bounds are strict in the sense that there are lemmas with no smaller interpolants. We implemented the algorithm and show that the produced interpolants are useful to prove memory safety for C programs.

11:30
Implicit Hitting Set Algorithms for Maximum Satisfiability Modulo Theories

ABSTRACT. Solving optimization problems with SAT has a long tradition, particularly in the form of MaxSAT, which maximizes the weight of satisfied clauses in a propositional formula. The extension to maximum satisfiability modulo theories (MaxSMT) is less mature but allows problems to be formulated in a higher-level language closer to actual applications. In this paper we describe a new approach for solving MaxSMT based on lifting one of the currently most successful approaches for MaxSAT, the implicit hitting set approach, from the propositional level to SMT. We also provide a unifying view of how optimization, propositional reasoning, and theory reasoning can be combined in a MaxSMT solver. This leads to a generic framework that can be instantiated in different ways, subsuming existing work and supporting new approaches. Experiments with two instantiations clearly show the benefit of our generic framework.
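
The implicit hitting set loop lifted in this paper can be sketched as follows (a propositional toy with brute-force oracles, given only to convey the structure of the loop; the paper's MaxSMT algorithm and its theory reasoning are far more refined):

    from itertools import combinations

    VARS = ["a", "b", "c"]
    HARD = [lambda m: m["a"] or m["b"]]          # hard constraints
    SOFT = [lambda m: not m["a"],                # unit soft constraints,
            lambda m: not m["b"],                # weight 1 each
            lambda m: m["c"]]

    def solve(selected):
        """Brute-force oracle: a model of HARD plus the selected soft constraints."""
        for bits in range(2 ** len(VARS)):
            m = {v: bool(bits >> i & 1) for i, v in enumerate(VARS)}
            if all(h(m) for h in HARD) and all(SOFT[i](m) for i in selected):
                return m
        return None  # 'selected' is an unsatisfiable core (together with HARD)

    def min_hitting_set(cores):
        """Smallest set of soft-constraint indices intersecting every core."""
        universe = sorted(set().union(*cores)) if cores else []
        for k in range(len(universe) + 1):
            for hs in combinations(universe, k):
                if all(set(core) & set(hs) for core in cores):
                    return set(hs)

    def ihs_maxsat():
        cores = []
        while True:
            hs = min_hitting_set(cores)          # soft constraints to drop
            selected = [i for i in range(len(SOFT)) if i not in hs]
            model = solve(selected)
            if model is not None:                # |hs| matches the hitting-set
                return model, hs                 # lower bound, hence optimal
            cores.append(selected)               # record a (non-minimal) core

    print(ihs_maxsat())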

12:00
A Generic Framework for Implicate Generation Modulo Theories
SPEAKER: Yanis Sellami

ABSTRACT. The clausal logical consequences of a formula are called its implicates. The generation of these implicates has several applications, such as the identification of missing hypotheses in a logical specification. We present a procedure that generates the implicates of a quantifier-free formula modulo a theory. No assumption is made on the considered theory, other than the existence of a decision procedure. The algorithm has been implemented (using the solvers MiniSat, CVC4 and Z3) and experimental results show evidence of the practical relevance of the proposed approach.

11:00-12:30 Session 95G: Overture: Tools and Applications (Overture)

The session contains two papers focusing on tool support for VDM in Overture and two papers on applications.

11:00
Overture FMU: Export VDM-RT Models as Tool-Wrapper FMUs
SPEAKER: Casper Thule

ABSTRACT. The Functional Mock-up Interface is a standard for co-simulation, which both defines and describes a set of C interfaces that a simulation unit, a Functional Mock-up Unit (FMU), must adhere to in order to participate in such a co-simulation. To avoid the effort of implementing the low-level details of the C interface when developing an FMU, one can use the Overture tool and the language VDM-RT. VDM-RT is a VDM dialect used for modelling real-time and potentially distributed systems. By using the Overture extension, called Overture FMU, the VDM-RT dialect can be used to develop FMUs. This raises the abstraction level of the implementation language and avoids implementation details of the FMI interface, thereby supporting rapid prototyping of FMUs. Furthermore, it enables precise time detection of changes in outputs, as every expression and statement in VDM-RT is associated with a “timing cost”. The Overture FMU has been used in several industrial case studies, and this paper describes how the Overture tool-wrapper FMU engages in a co-simulation in terms of architecture, synchronisation and execution. Furthermore, a small example is presented.

11:20
ViennaVM: a Virtual Machine for VDM-SL development
SPEAKER: Tomohiro Oda

ABSTRACT. The executable subset of VDM allows code generators to automatically produce program code. A lot of research has been conducted on automated code generators. Virtual machines are common platforms for executing program code. Such virtual machines demand rigorous implementation and, in return, provide portability across different operating systems and CPUs. This paper introduces a virtual machine called ViennaVM, which is formally defined in VDM-SL and still under development. The objective of ViennaVM is to serve as a target platform for code generators from VDM specifications.

11:40
Multi-modelling of Cooperative Swarms

ABSTRACT. A major challenge in multi-modelling and co-simulation of cyber-physical systems (CPSs) using distributed control, such as swarms of autonomous UAVs, is the need to model distributed controller-hardware pairs where communication between controllers using complex types is required. Co-simulation standards such as the Functional Mock-up Interface (FMI) primarily support simple scalar types. This makes the protocol easy to adopt for new tools, but is limiting where a richer form of data exchange is required, as with distributed controllers. This paper applies previous work on adding an explicit network VDM model, called an ether, to a multi-model, by deploying it to a more complex multi-model, specifically a swarm of UAVs.

12:00
Design Space Exploration for Secure Building Control
SPEAKER: John Mace

ABSTRACT. By automation of their critical systems, modern buildings are becoming increasingly intelligent, but also increasingly vulnerable to both cyber and physical attacks. We propose that multi-models can be used not only to assess the security weaknesses of smart buildings, but also to optimise their control to be resilient to malicious use. The proposed approach makes use of the INTO-CPS toolchain to model both building systems and the behaviour of adversaries, and utilises design space exploration to analyse the impact of security on usability. By separation of standard control and security monitoring, the approach is suitable for both the design of new controllers and the improvement of legacy systems. A case study of a fan coil unit demonstrates how a controller can be augmented to be more secure, and how the trade-off between security and usability can be explored to find an optimal design. We propose that the suggested use of multi-models can aid building managers and security engineers to build systems which are both secure and user friendly.

12:30-14:00 Lunch Break
14:00-15:00 Session 96A: CAV Invited Talk: Eran Yahav (CAV)
Location: Maths LT1
14:00
From Programs to Interpretable Deep Models, and Back

ABSTRACT. Deep learning has revolutionized many research areas. In this talk, we demonstrate how deep learning over programs is used to provide (preliminary) augmented programmer intelligence. In the first part of the talk, we show how deep learning over programs is used to tackle tasks like code completion, code summarization, and captioning. We describe a general path-based representation of source code that can be used across programming languages and learning tasks, and discuss how this representation enables different learning algorithms. In the second part, we describe techniques for extracting interpretable representations from deep models, shedding light on what has actually been learned in various tasks.

14:00-15:30 Session 96B (DS-FM)
Location: Blavatnik LT1
14:00
Analysis and Verification of Message Passing based Parallel Programs

ABSTRACT. Many complex applications that have to tackle simulation and data analysis are the prerogative of High-Performance Computing. High data volumes and performance requirements have thus pressed developers in the parallel community to deliver scalable, accurate, and, most importantly, bug-free solutions. Message Passing (MP) is a prominent programming model via which the nodes of a distributed system communicate. Writing correct and bug-free parallel programs is hard because the participating entities interact in non-deterministic ways that are difficult to anticipate beforehand. Programmers have to predict messaging patterns, perform data marshaling, and compute locations for coordination in order to design correct and efficient programs. Unfortunately, there is a shortage of verification and formal-method techniques that can guarantee the development of correct solutions.

14:30
Formal verification of neural networks

ABSTRACT. For several years now, neural networks have been the most powerful tool for perception tasks, especially in image processing, and their superior performance on these tasks has sparked the desire to use them in safety-critical systems, e.g., autonomous vehicles. However, verifying the safety of systems that use neural networks remains a challenge, because neural networks raise particular dependability concerns (such as adversarial inputs). This need gave rise to the research topic of formal verification of neural networks. We identify some of the main challenges of this field and discuss how to address them.

15:00
Consistency Checking of Functional Requirements

ABSTRACT. Requirements are informal and semi-formal descriptions of the expected behavior of a system. They are usually expressed in the form of natural language sentences and checked for errors manually, e.g., by peer reviews. However, manual checks are error-prone and time-consuming. With the increasing complexity of cyber-physical systems and the need to operate in safety- and security-critical environments, it has become essential to automate the consistency checking of requirements and to build artifacts that help system engineers in the design process.

14:00-15:30 Session 96D: Foundations II (ICLP)
14:00
Proof-relevant Horn Clauses for Dependent Type Inference and Term Synthesis

ABSTRACT. First-order resolution has been used for type inference for many years, including in Hindley-Milner type inference, type-classes, and constrained data types, to name but a few. Dependent types are a new trend in functional languages. In this paper, we show that proof-relevant first-order resolution can play an important role in automating type inference and term synthesis for dependently typed languages. We propose a calculus that translates type inference and term synthesis problems in a dependently typed language to a logic program and a goal in the proof-relevant first-order Horn clause logic. The computed answer substitution and proof term then provide a solution to the given type inference and term synthesis problem. We prove the decidability and soundness of our method.

14:30
First-order answer set programming as constructive proof search

ABSTRACT. We propose an interpretation of first-order answer set programming (FOASP) in terms of intuitionistic proof theory. It is obtained by two polynomial translations between FOASP and the bounded-arity fragment of the Σ1 level of the Mints hierarchy in first-order intuitionistic logic. It follows that Σ1 formulas using predicates of fixed arity (in particular unary) are of the same strength as FOASP. Our construction reveals a close similarity between constructive provability and stable entailment, or equivalently, between the construction of an answer set and an intuitionistic refutation.

15:00
Solving Horn Clauses on Inductive Data Types Without Induction

ABSTRACT. We address the problem of verifying the satisfiability of Constrained Horn Clauses (CHCs) based on theories of inductively defined data structures, such as lists and trees. We propose a transformation technique whose objective is the removal of these data structures from CHCs, hence reducing their satisfiability to a satisfiability problem for CHCs on integers and booleans. We propose a transformation algorithm and identify a class of clauses where it always succeeds. We also consider an extension of that algorithm, which combines clause transformation with reasoning on integer constraints. Via an experimental evaluation we show that our technique greatly improves the effectiveness of applying the Z3 solver to CHCs. We also show that our verification technique, based on CHC transformation followed by CHC solving, is competitive with respect to CHC solvers extended with induction.

14:00-15:30 Session 96E: SAT Extensions & Applications (IJCAR)
Location: Maths LT2
14:00
An Assumption-Based Approach for Solving The Minimal S5-Satisfiability Problem

ABSTRACT. Recent work on the practical aspects of the modal logic S5 satisfiability problem showed that a SAT-based approach outperforms other existing approaches. In this work, we go one step further and study the related minimal S5 satisfiability problem (MinS5-SAT): the problem of finding an S5 model, a Kripke structure, with the smallest number of worlds.

Finding a small S5 model is crucial as soon as the model should be presented to a user, displayed on a screen for instance. SAT-based approaches tend to produce S5-models with a large number of worlds, thus the need to minimize them.

That optimization problem can obviously be solved as a pseudo-Boolean optimization problem. We show in this paper that it is also equivalent to the extraction of a maximal satisfiable set (MSS). It can thus be solved using a standard pseudo-Boolean or MaxSAT solver, or an MSS-extractor.

We show that a new incremental, SAT-based approach can be designed by taking into account the equivalence relation between the possible worlds of S5 models. In our experiments, conducted on a wide range of benchmarks from the modal logic community and against a wide range of pseudo-Boolean and MaxSAT solvers, this specialized approach showed the best performance.

Our results demonstrate once again that domain knowledge is key to building efficient SAT-based tools.

14:30
Investigating the Existence of Large Sets of Idempotent Quasigroups via Satisfiability Testing
SPEAKER: Pei Huang

ABSTRACT. In this paper, we describe a method for solving some open problems in design theory based on SAT solvers. Modern SAT solvers are efficient and can produce unsatisfiability proofs; however, state-of-the-art SAT solvers cannot solve the so-called large sets of idempotent quasigroups. Two idempotent quasigroups over the same set of elements are said to be disjoint if, at any position other than the main diagonal, the two elements from the two idempotent quasigroups at that position differ. A collection of n-2 idempotent quasigroups of size n is called a large set if all idempotent quasigroups are mutually disjoint, denoted by LIQ(n). The existence of LIQ(n) satisfying certain identities has been a challenge for modern SAT solvers even for n = 9. We use a finite model generator to help the SAT solver avoid symmetric search spaces, taking advantage of both first-order logic and SAT techniques. Furthermore, we use an incremental search strategy to find a maximum number of disjoint idempotent quasigroups, and thus decide the non-existence of large sets. The experimental results show that our method is highly efficient. The use of symmetry breaking is crucial in allowing us to solve some instances in reasonable time.
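
The definitions in the abstract are easy to state programmatically; the following sketch (ours, not the paper's SAT encoding) checks them on two disjoint idempotent quasigroups of order 4, which together form an LIQ(4):

    # A quasigroup of size n is a Latin square over {0,...,n-1}; it is
    # idempotent if Q[i][i] == i; two idempotent quasigroups are disjoint
    # if they differ at every cell off the main diagonal.
    def is_idempotent_quasigroup(q):
        n = len(q)
        rows_ok = all(sorted(row) == list(range(n)) for row in q)
        cols_ok = all(sorted(q[i][j] for i in range(n)) == list(range(n))
                      for j in range(n))
        return rows_ok and cols_ok and all(q[i][i] == i for i in range(n))

    def disjoint(q1, q2):
        n = len(q1)
        return all(q1[i][j] != q2[i][j]
                   for i in range(n) for j in range(n) if i != j)

    Q1 = [[0, 3, 1, 2],
          [2, 1, 3, 0],
          [3, 0, 2, 1],
          [1, 2, 0, 3]]
    Q2 = [[0, 2, 3, 1],
          [3, 1, 0, 2],
          [1, 3, 2, 0],
          [2, 0, 1, 3]]

    assert is_idempotent_quasigroup(Q1) and is_idempotent_quasigroup(Q2)
    assert disjoint(Q1, Q2)  # n - 2 = 2 mutually disjoint squares: an LIQ(4)

The paper's contribution is, of course, to decide the existence of such collections for much larger n and under additional identities, where brute force is hopeless.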

15:00
A New Probabilistic Algorithm for Approximate Model Counting
SPEAKER: Cunjing Ge

ABSTRACT. Constrained counting is important in domains ranging from artificial intelligence to software analysis. There are already a few approaches for counting models over various types of constraints. Recently, hashing-based approaches have achieved success but still rely on solution enumeration. In this paper, a new probabilistic approximate model counter is proposed, which is also a hashing-based universal framework, but with only satisfiability queries. A dynamic stopping criterion for the new algorithm is presented, which has not yet been studied in previous work on hashing-based approaches. Although the new algorithm lacks a theoretical guarantee, it works well in practice. Empirical evaluation over benchmarks on propositional logic formulas and SMT(BV) formulas shows that the approach is promising.
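
The hashing idea behind such counters can be sketched as follows (a deliberately naive toy with a brute-force satisfiability oracle and a median-of-trials estimate; the paper's algorithm and its dynamic stopping criterion are more sophisticated):

    import random

    N = 6                                    # toy formula over 6 Boolean vars
    def formula(m):                          # satisfied iff at least two bits set;
        return sum(m) >= 2                   # exact model count: 2**6 - 7 = 57

    def random_xor():
        """Random parity constraint, halving the solution space on average."""
        subset = random.sample(range(N), random.randint(1, N))
        parity = random.randrange(2)
        return lambda m: sum(m[v] for v in subset) % 2 == parity

    def satisfiable(xors):
        for bits in range(2 ** N):           # stand-in for a SAT query
            m = [bits >> v & 1 for v in range(N)]
            if formula(m) and all(x(m) for x in xors):
                return True
        return False

    def approx_count(trials=15):
        estimates = []
        for _ in range(trials):
            xors = []
            while satisfiable(xors):         # add constraints until UNSAT
                xors.append(random_xor())
            estimates.append(2 ** (len(xors) - 1))
        return sorted(estimates)[trials // 2]  # median is robust to outliers

    print(approx_count())                    # typically in the vicinity of 57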

14:00-14:50 Session 96F: Overture/F-IDE invited talk: Leo Freitas (F-IDE and Overture)
14:00
VDM at large: analysing the EMV Next Generation Kernel

ABSTRACT. In this short talk we present our experience in using VDM-SL as a tool for understanding a complex requirements specification: our model of the upcoming EMVCo Next Generation Kernel specifications. EMVCo is the consortium, including MasterCard, Visa, Amex and others, behind worldwide payment systems. The specifications cover the familiar Chip&Pin and contactless protocols, as well as a number of new operational modes and security verification types (including biometric). EMVCo's objective is to ensure the requirements are as safe/correct as possible.

In order to engage them with any findings, a conversation between ourselves and stakeholders within EMVCo sub-groups has been established through formal simulation of specific protocol runs using VDMSL. The results have been productive and substantial: a number of corrections were suggested, and most importantly a large number of hidden assumptions about types, APIs, and other system parts have been carefully documented.

An issue of attempting to apply formal methods to industrial problems is that elegance often needs to be compromised in order to ensure stakeholders notice/accept the formal results: they ought to see what was built as their artefact, rather than a nicer abstraction.

Our aim, which we have achieved, is to influence the specification process prior to public release later in 2018. To date, we have modelled about 75% of the whole EMV kernel, and hope to complete it by the middle of 2018. It comprises 135 modules and 40KLOC in VDMSL, some of which is automatically generated from an XSD/XML data dictionary.

The work uncovered many technical issues in terms of design decisions, identified tool bugs, and exposed the limits of Overture as a tool for industrial use at this scale. We hope to discuss these issues and make some useful suggestions.

14:50-15:30 Session 97: Overture: Perspectives and Methods (Overture)

This session focuses on methodology and perspectives on the use of the Overture tools in practice.

14:50
Transforming an industrial case study from VDM++ to VDM-SL

ABSTRACT. Normally transitions between different VDM dialects go from VDM-SL towards VDM++ or VDM-RT. In this paper we would like to demonstrate that it actually can make sense to move in the opposite direction. We present a case study where a requirement change late in the project deemed the need for distribution and concurrency aspects unnecessary. Consequently, the developed VDM-RT model was transformed to VDM++ and later to VDM-SL. The advantage of this transformation is to reduce complexity and prepare the model for a combined commercial and research setting.

15:10
Integrating VDM-SL into the continuous delivery pipelines of cloud-based software

ABSTRACT. The cloud is quickly becoming the principal means by which software is delivered into the hands of users. This has not only changed the shipping mechanism, but the whole process by which software is developed. The application of lean manufacturing principles to software engineering, and the growth of continuous integration and delivery, have contributed to the end-to-end automation of the development lifecycle. Gone are the days of quarterly releases of monolithic systems; cloud-based software as a service is formed of hundreds or even thousands of microservices, with new versions available to the end user on a daily basis. If formal methods are to be relevant in the world of cloud computing, we must be able to apply the same principles: enabling easy componentization of specifications and the integration of the processes around those specifications into the fully mechanized process. In this paper we present tools that enable VDM-SL specifications to be constructed, tested and documented in the same way as their implementation, through the use of a VDM Gradle plugin. By taking advantage of existing binary repository systems we show that known dependency resolution instruments can be used to facilitate the breakdown of specifications and enable the easy re-use of foundational components. We also suggest that the deployment of those components to central repositories could reduce the learning curve of formal methods and concentrate efforts on the innovative. Furthermore, we propose a number of additional tools and integrations that we believe could increase the use of VDM-SL in the development of cloud software.

15:00-15:30 Session 98A: Polyhedra (CAV)
Location: Maths LT1
15:00
Fast Numerical Program Analysis with Reinforcement Learning

ABSTRACT. We leverage reinforcement learning (RL) to speed up numerical program analysis. The key insight is to establish a correspondence between concepts in RL and program analysis. For instance, a state in RL maps to an abstract program state in the analysis, an action maps to an abstract transformer, and at every state, we have a set of sound transformers (actions) that represent different trade-offs between precision and performance. At each iteration, the agent (analysis in our case) uses a policy that is learned offline by RL to decide on the transformer which minimizes the loss of precision at fixpoint while increasing analysis performance. Our approach leverages the idea of online decomposition (applicable to popular numerical abstract domains) to define a space of approximate transformers with varying degrees of precision and performance. Using a suitably designed set of features that capture key properties of both, abstract program states and available actions, we then apply Q-learning with linear function approximation to compute an optimized context-sensitive policy that chooses transformers during analysis. We implemented our approach for the notoriously expensive Polyhedra domain and evaluated it on a set of Linux device drivers that are expensive to analyze. The results show that our approach can yield massive speedups (up to 515x) while maintaining precision at fixpoint.
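
For reference, Q-learning with linear function approximation, as mentioned in the abstract, has the standard update rule, with f(s, a) the feature vector of an abstract state s and candidate transformer a, and θ the learned weight vector:

    \theta \leftarrow \theta + \alpha \bigl( r + \gamma \max_{a'} \theta^{\top} f(s', a') - \theta^{\top} f(s, a) \bigr) \, f(s, a)

The learned policy then simply picks, at each analysis step, the transformer a maximizing θᵀ f(s, a).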

15:15
A Direct Encoding for NNC Polyhedra

ABSTRACT. We present an alternative Double Description representation for the domain of NNC (not necessarily closed) polyhedra, together with the corresponding Chernikova-like conversion procedure. The representation uses no slack variable at all and provides a solution to a few technical issues caused by the encoding of an NNC polyhedron as a closed polyhedron in a higher dimension space. A preliminary experimental evaluation shows that the new conversion algorithm is able to achieve significant efficiency improvements.

15:00-15:30 Session 98B: Experience in analyzing large programs (F-IDE)
15:00
Experience Report on Formally Verifying Parts of OpenJDK's API with KeY

ABSTRACT. Deductive verification of software has not yet found its way into industry, as complexity and scalability issues require highly specialized experts. The long-term perspective is, however, to develop verification tools aiding industrial software developers to find bugs or bottlenecks in software systems faster and more easily. The KeY project constitutes a framework for specifying and verifying software systems, aiming at making formal verification tools applicable for mainstream software development. To help the developers of KeY, its users, and the deductive verification community, we summarize our experiences with KeY 2.6.1 in specifying and verifying real-world Java code from a user's perspective. To this end, we concentrate on parts of the Collections API of OpenJDK 6, for which an informal specification exists. While we describe how we bridged informal and formal specification, we also describe the challenges that we encountered along the way. Our experiences are that (a) in principle, deductive verification for API-like code bases is feasible, but requires high expertise, (b) developing formal specifications for existing code bases is still notoriously hard, and (c) the under-specification of certain language constructs in Java is challenging for tool builders. Our initial effort in specifying parts of OpenJDK 6 constitutes a stepping stone towards a case study for future research.

15:30-16:00 Coffee Break
16:00-18:00 Session 99A: Synthesis (CAV)
Location: Maths LT1
16:00
What’s hard about Boolean Functional Synthesis?

ABSTRACT. Given a relational specification between Boolean inputs and outputs, the goal of Boolean functional synthesis is to synthesize each output as a function of the inputs such that the specification is met. In this paper, we first show that unless some hard conjectures in complexity theory are falsified, Boolean functional synthesis must necessarily generate exponential-sized Skolem functions, thereby requiring exponential time, in the worst-case. Given this inherent hardness, what does one do to solve the problem? We present a two-phase algorithm for Boolean functional synthesis, where the first phase is efficient both in terms of time and sizes of synthesized functions, and solves an overwhelming majority of benchmarks. To explain this surprisingly good performance, we provide a sufficient condition under which the first phase must produce exact correct answers. When this condition fails, the second phase builds upon the result of the first phase, possibly requiring exponential time and generating exponential-sized functions in the worst-case. Detailed experimental evaluation shows our algorithm to perform better than state-of-the-art techniques for the vast majority of benchmarks.
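
A toy instance of the problem statement (ours, purely for illustration): given the relational specification

    \varphi(x_1, x_2, y) \;=\; y \leftrightarrow (x_1 \wedge x_2)

Boolean functional synthesis must produce a Skolem function for the output, here F(x_1, x_2) = x_1 \wedge x_2, satisfying the correctness condition

    \forall X.\ \bigl( \exists y.\ \varphi(X, y) \bigr) \rightarrow \varphi(X, F(X))

The hardness result above says that, in the worst case, any such F must be of exponential size.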

16:15
Synthesis Modulo Theories

ABSTRACT. Program synthesis is the mechanized construction of software. One of the main difficulties is the efficient exploration of the very large solution space, and tools often require a user-provided syntactic restriction of the search space. We propose a new approach to program synthesis that combines the strengths of a counterexample-guided inductive synthesiser with those of a theory solver, exploring the solution space more efficiently without relying on user guidance. We call this approach CEGIS(T), where T is a first-order theory. In this paper, we focus on one particular application of CEGIS(T), namely the synthesis of programs that require non-trivial constants, which is a fundamentally difficult task for state-of-the-art synthesisers. We present two exemplars, one based on Fourier-Motzkin (FM) variable elimination and one based on first-order satisfiability. We demonstrate the practical value of CEGIS(T) by automatically synthesizing programs for a set of intricate benchmarks.

16:30
Synthesizing Reactive Systems from Hyperproperties

ABSTRACT. We study the reactive synthesis problem for hyperproperties given as formulas of the temporal logic HyperLTL. Hyperproperties generalize trace properties, i.e., sets of traces, to sets of sets of traces. Typical examples are information-flow policies like noninterference, which stipulate that no sensitive data must leak into the public domain. Such properties cannot be expressed in standard linear or branching-time temporal logics like LTL, CTL, or CTL*. We show that, while the synthesis problem is undecidable for full HyperLTL, it remains decidable for the ∃*, ∃*∀1, and linear ∀* fragments. Beyond these fragments, the synthesis problem immediately becomes undecidable. For universal HyperLTL, we present a semi-decision procedure that constructs implementations and counterexamples up to a given bound. We report encouraging experimental results obtained with a prototype implementation on example specifications with hyperproperties like symmetric responses, secrecy, and information flow.

16:45
Reactive Control Improvisation

ABSTRACT. Reactive synthesis is a paradigm for automatically building correct-by-construction systems that interact with an unknown or adversarial environment. We study how to do reactive synthesis when part of the specification of the system is that its behavior should be random. Randomness can be useful, for example, in a network protocol fuzz tester whose output should be varied, or a planner for a surveillance robot whose route should be unpredictable. However, existing reactive synthesis techniques do not provide a way to ensure random behavior while maintaining functional correctness. Towards this end, we generalize the recently-proposed framework of control improvisation (CI) to add reactivity. The resulting framework of reactive control improvisation provides a natural way to integrate a randomness requirement with the usual functional specifications of reactive synthesis over a finite window. We theoretically characterize when such problems are realizable, and give a general method for solving them. For specifications given by reachability or safety games or by deterministic finite automata, our method yields a polynomial-time synthesis algorithm. For various other types of specifications including temporal logic formulas, we obtain a polynomial-space algorithm and prove matching PSPACE-hardness results. We show that all of these randomized variants of reactive synthesis are no harder in a complexity-theoretic sense than their non-randomized counterparts.

17:00
Constraint-Based Synthesis of Coupling Proofs

ABSTRACT. Proof by coupling is a classical technique for proving properties about pairs of randomized algorithms by carefully relating (or coupling) two probabilistic executions. In this paper, our goal is to automatically construct such proofs for programs. First, we present f-coupled postconditions, an abstraction describing two coupled programs. Second, we show how specific forms of f-coupled postconditions imply probabilistic properties like uniformity and independence of program variables. Third, we demonstrate how to automate construction of coupling proofs by reducing the problem to solving a purely logical synthesis problem of the form ∃f. ∀X. φ, thus doing away with probabilistic reasoning. Finally, we evaluate our technique in a prototype implementation, and demonstrate its ability to construct a range of coupling proofs for various properties and randomized algorithms.

17:15
Controller Synthesis Made Real: Reach-avoid Specifications and Linear Dynamics
SPEAKER: Umang Mathur

ABSTRACT. We address the problem of synthesizing provably correct controllers for linear systems with reach-avoid specifications. Our solution uses a combination of an open-loop controller and a tracking controller, thereby reducing the problem to smaller tractable problems. We show that, once a tracking controller is fixed, the reachable states from an initial neighborhood, subject to any disturbance, can be over-approximated by a sequence of ellipsoids, with sizes that are independent of the open-loop controller. Hence, the open-loop controller can be synthesized independently to meet the reach-avoid specification for an initial neighborhood. Exploiting several techniques for tightening the over-approximations, we reduce the open-loop controller synthesis problem to satisfiability over quantifier-free linear real arithmetic. The overall synthesis algorithm computes a tracking controller, and then iteratively covers the entire initial set to find open-loop controllers for initial neighborhoods. The algorithm is sound and, for a class of robust systems, is also complete. We present RealSyn, a tool implementing this synthesis algorithm, and we show that it scales to several high-dimensional systems with complex reach-avoid specifications.

17:30
Synthesis Of Asynchronous Reactive Programs From Temporal Specifications

ABSTRACT. Asynchronous interactions are ubiquitous in computing systems and complicate design and programming. Automatic construction ("synthesis") of asynchronous programs from specifications could ease the difficulty, but known methods are intractable in practice. This work develops substantially simpler methods for asynchronous synthesis. An exponentially more compact automaton construction is developed for the reduction of asynchronous to synchronous synthesis. Experiments with a prototype implementation of the new method demonstrate practical feasibility. Furthermore, it is shown that for several useful classes of temporal properties, automaton-based methods can be avoided altogether and replaced with simpler Boolean constraint solving.

17:45
Syntax Guided Synthesis with Quantitative Syntactic Objectives
SPEAKER: Qinheping Hu

ABSTRACT. Automatic program synthesis promises to increase the productivity of programmers and end-users of computing devices by automating tedious and error-prone tasks. Despite the practical successes of program synthesis, we still do not have systematic frameworks to synthesize programs that are "good" according to certain metrics (e.g., programs of reasonable size or with good runtime) and to understand when synthesis can result in such good programs. In this paper, we propose QSyGuS, a unifying framework for describing syntax-guided synthesis problems with quantitative objectives over the syntax of the synthesized programs. QSyGuS builds on weighted (tree) grammars, a clean and foundational formalism that provides flexible support for different quantitative objectives, useful closure properties, and practical decision procedures. We then present an algorithm for solving QSyGuS. Our algorithm leverages closure properties of weighted grammars to generate intermediate problems that can be solved using non-quantitative SyGuS solvers. Finally, we implement our algorithm in a tool, QuaSi, and evaluate it on 26 quantitative extensions of existing SyGuS benchmarks. QuaSi can synthesize optimal solutions in 15/26 benchmarks in times comparable to those needed to find an arbitrary solution.

16:00-16:30 Session 99B (DS-FM)
Location: Blavatnik LT1
16:00
A Formal Study of MANET Routing Protocols

ABSTRACT. Mobile Ad-hoc Networks (MANETs) are self-organising networks that provide support for broadband communication, and have gained popularity in a wide range of application areas, such as public safety, emergency response networks, etc. Routing protocols of these networks provide the basis for node communication and for finding routes, which are later used for sending data packets to different destinations. Formal methods are used for the formal modelling and verification of such protocols, to ensure the correctness and precision of designs, given the safety-critical applications of MANETs. In this work, we sketch our different formal studies of MANET routing protocols.

16:00-18:00 Session 99C: Integrating formal verification results (F-IDE)
16:00
Isabelle/jEdit as IDE for domain-specific formal languages and informal text documents

ABSTRACT. Isabelle/jEdit is the main application of the Prover IDE (PIDE) framework and the default user interface of Isabelle, but it is not limited to theorem proving. This paper explores possibilities to use it as a general IDE for formal languages that are defined in user-space and embedded into informal text documents. It covers overall document structure with auxiliary files and document antiquotations, formal text delimiters, and markers for interpretation (via control symbols). The ultimate question behind this: how far can we stretch a plain text editor like jEdit in order to support semantic text processing, with support from the underlying PIDE framework?

16:30
A Notebook Format for the Holistic Design of Embedded Systems

ABSTRACT. This paper proposes the use of notebooks for design documentation and tool interaction in the rigorous design of embedded systems. Conventionally, a notebook is a sequence of cells alternating between (textual) code and prose to form a document that is meant to be read from top to bottom, in the spirit of literate programming. We extend the use of notebooks to embedded systems specified by pCharts. The charts are visually edited in cells inline. Other cells can contain statements that generate code and analyze the charts qualitatively and quantitatively; in addition, notebook cells can contain other instructions to build the product from the generated code. This allows a notebook to be replayed to re-analyze the design and re-build the product, like a script, but also allows the notebook to be used for presentations, as for this paper, and for the inspection of the design. The interaction with the notebook is done through a web browser that connects to a local or remote server, thus allowing a computationally intensive analysis to run remotely if needed. The pState notebooks are implemented as an extension to Jupyter. The underlying software architecture is described, and the issue of proper placement of transition labels in charts embedded in notebooks is discussed.

17:00
The CLEAR Way To Transparent Formal Methods
SPEAKER: Brendan Hall

ABSTRACT. Although Formal Methods (FM) based techniques and tools have improved impressively in recent years, the need to train engineers to become accomplished users of formal notations, and to tailor tools and workflows to the objectives of a specific application domain, involves a high initial investment that is difficult to realize in large, legacy organizations. In this paper, we present our approach to addressing these challenges at Honeywell Aerospace and share our experiences en route. Starting from a constrained, structured natural-language notation for authoring requirements, and orchestrated with a suite of novel in-house tools, our approach automatically transforms those requirements into a consolidated formal representation on which rigorous analyses and test generation are performed. While the notation's natural-language flavor provides a 'softer' front end, the tool suite allows 'transparent' use of formal tools at the back end without engineers having to know the underlying mathematics and theories. The initial application of our approach across various avionics software systems at Honeywell has been well received, due to its minimal impact on the existing workflow while leveraging the benefits of formal methods.

17:30
Integrating user design and formal models within PVSio-Web
SPEAKER: Paolo Masci

ABSTRACT. Creating formal models of interactive systems has wide-reaching benefits, not only for verifying low-level correctness, but also as a tool for ensuring user interfaces behave logically and consistently. Despite this, tools for designing user experiences and tools for creating and working with formal models are typically entirely separate systems. This work aims to bridge that divide by allowing the generation of state machine diagrams and formal models via a simple, interactive prototyping tool that mirrors the basic functionality of many modern digital prototyping applications.

16:00-18:00 Session 99D: Formal Methods in Industry (FLoC)
Location: Maths LT3
16:00
Amazon’s Automated Reasoning Group: Combining Amazon’s 14 foundational principles with the Strachey philosophy
16:30
Formal Reasoning and the Hacker Way
17:00
Business Perspective on Software Analysis
16:00-17:00 Session 99E: ICLP Invited Talk: Thomas Eiter (ICLP)
16:00
Answer Set Programs go 30: Past and Future

ABSTRACT. It is now thirty years since the concept of stable models was proposed by Gelfond and Lifschitz in a seminal paper at ICLP, held jointly with the Symposium on Logic Programming (SLP), at a time when non-monotonic reasoning and logic programming were among the hottest topics in Knowledge Representation and Artificial Intelligence. While general interest in non-monotonic logics and formalisms has decreased over the years, the proposal to use logic programs for declarative problem solving, coined answer set programming, has led to a stream of research in which theory and practice of logic-based problem solving, in mutual support, have developed into a flourishing branch of computational logic. This talk will make a tour through the history of Answer Set Programming, take a look at the current state of research, and speculate about possible futures of the paradigm, of which multiple exist. It will put developments in perspective and address some issues and lessons learned, which may be kept in mind so that the paradigm enjoys longevity and gives us the opportunity to celebrate again decades from now.
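
As a one-line reminder of the core notion (a standard textbook example, not part of the talk), consider the program

\[
P = \{\; p \leftarrow \mathit{not}\ q \;\}
\]

The candidate set \(\{p\}\) is stable: the reduct of P with respect to \(\{p\}\) deletes the satisfied negative body, leaving the fact p, whose least model is again \(\{p\}\). By contrast, \(\{q\}\) is not stable, since its reduct is empty and has least model \(\emptyset\).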

16:00-17:00 Session 99F: Tools and Rewriting (IJCAR)
Location: Maths LT2
16:00
A Coinductive Approach to Proving Reachability in Logically Constrained Term Rewriting Systems

ABSTRACT. We introduce a sound and complete coinductive proof calculus for reachability properties in transition systems generated by logically constrained term rewriting rules over an order-sorted signature modulo builtins.

A key feature of the proof calculus is a circularity proof rule, which makes it possible to obtain finite representations of the infinite coinductive proof trees. The paper also includes a brief description of a prototype implementation, which validates our approach on a number of examples.
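
As a hedged illustration of the setting (a minimal example of ours, not taken from the paper), a logically constrained rule pairs a rewrite step with a builtin constraint, and a reachability property asks whether runs from a constrained term reach a target term:

\[
\mathit{loop}(x) \to \mathit{loop}(x-1)\ [\,x > 0\,]
\qquad
\mathit{loop}(x)\ [\,x \geq 0\,] \;\Rightarrow\; \mathit{loop}(0)
\]

After one rewrite step the proof obligation becomes loop(x-1) [x-1 >= 0] => loop(0), an instance of the original goal, so a circularity rule can discharge it instead of unfolding the infinite coinductive proof tree.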

16:30
FAME: An Automated Tool for Semantic Forgetting in Expressive Description Logics

ABSTRACT. In this paper, we describe a high-performance reasoning tool, called FAME, for semantic forgetting in expressive description logics. Forgetting is a non-standard reasoning service that seeks to create restricted views of ontologies by eliminating concept and role names from ontologies in such a way that all logical consequences over the remaining signature are preserved. FAME is a Java-based implementation of an Ackermann-based method for forgetting concept and role names from ontologies expressible in the description logic ALCOIH, i.e., the basic description logic ALC extended with nominals, inverse roles, and role inclusions. FAME can be used as a standalone tool or as a Java library for forgetting and related tasks. Results of an evaluation of FAME on a corpus of 396 biomedical ontologies show that: (i) in more than 90% of the test cases FAME was successful (i.e., eliminated all specified concept and role names) and (ii) in more than 70% of these cases the elimination was done within a single second.
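
A minimal sketch of what forgetting computes (our own illustration, independent of FAME's input syntax): eliminating a name must preserve exactly the consequences expressible in the remaining signature.

\[
\mathcal{O} = \{\; A \sqsubseteq \exists r.B,\;\; B \sqsubseteq C \;\}
\qquad
\text{forgetting } \{B\} \text{ yields } \{\; A \sqsubseteq \exists r.C \;\}
\]

Every consequence of \(\mathcal{O}\) that mentions only A, C, and r follows from the result, and conversely, so the result is a faithful restricted view of the original ontology.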

16:45
Efficient Model Construction for Horn Logic with VLog: System Description
SPEAKER: Jacopo Urbani

ABSTRACT. Existential rules are a syntactic variant of first-order Horn logic that has gained prominence in several communities. Such rules are used to express ontologies in knowledge representation, dependencies in databases, and recursive queries in data analytics. In this system description, we present our recent extension of the rule reasoning and query engine VLog with support for existential rules. Our column-oriented implementation of the restricted chase procedure aims at constructing a universal model for a Horn logic theory. While query answering in this logic is undecidable, and universal models might be infinite, our implementation can find a finite model in many cases. We conduct an evaluation over several real-world theories with millions of facts and thousands of rules, and we show that VLog can compete with the state of the art. Other notable features of our system include support for a variety of input sources and databases, query answering capabilities, cross-platform support, and excellent memory efficiency. The latter makes it possible to compute models with hundreds of millions of relational tuples on a laptop.
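
To convey the flavor of the restricted chase (an illustrative rule of ours, not one of the paper's benchmarks), consider an existential rule that can invent new values:

\[
\mathit{Person}(x) \;\to\; \exists y.\, \mathit{hasParent}(x,y) \wedge \mathit{Person}(y)
\]

From the fact Person(alice), the chase adds hasParent(alice, n1) and Person(n1) for a fresh null n1; crucially, the restricted chase fires a rule only when its head is not already satisfied, so if a parent of alice were already present, no null would be introduced. This check is what allows a finite universal model to be found in many practical cases, even though chase termination is undecidable in general.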

16:00-18:00 Session 99G: Overture: the Workshop part (Overture)

This session will be used to host a structured brainstorm on the future of Overture, from the perspectives of notation and semantics, tool support, applications, community building, and exploitation.

16:00
The Overture Strategic Research Agenda

ABSTRACT. The closing session of the Overture workshop will be dedicated to discussing the future of the Overture tool set. It is a follow-up to similar sessions held at the Cyprus and Newcastle Overture workshops. The format of the session is a structured brainstorm, inspired by the presentations held earlier in the day.

17:00-18:00 Session 100A: ASP Extensions (ICLP)
17:00
Constraint Answer Set Programming without Grounding
SPEAKER: Joaquin Arias

ABSTRACT. Extending ASP with constraints (CASP) enhances its expressiveness and performance. This extension is not straightforward: the grounding phase, present in most ASP systems, removes variables and the links among them, and also causes a combinatorial explosion in the size of the program. CASP systems have therefore devised several methods to overcome this issue: restricting the constraint domains (e.g., discrete instead of dense), the places where constraints can appear, or the type (or number) of models that can be returned. In this paper we propose to incorporate constraints into s(ASP), a goal-directed, top-down execution model which implements ASP while retaining logical variables both during execution and in the answer sets. The resulting model, s(CASP), can constrain variables that (as in CLP) are kept during the execution and in the answer sets. s(CASP) inherits and generalizes the execution model of s(ASP) while parameterizing the constraint solver. We describe this novel execution model and show, through several examples, the enhanced expressiveness of s(CASP) w.r.t. ASP, CLP, and other ASP systems with constraints. We also report improved performance w.r.t. other very mature, highly optimized ASP systems on some benchmarks.
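
As a hedged sketch of what goal-directed evaluation with constraints buys (a toy rule of ours, written as logic rather than concrete s(CASP) syntax), consider a rule over a dense domain, which cannot be finitely grounded:

\[
\mathit{safe\_speed}(V) \;\leftarrow\; 0 < V \wedge V < 50
\]

A top-down derivation simply carries the constraint along and answers the query safe_speed(V) with the residual constraint 0 < V < 50, leaving V symbolic; a grounding-based system would instead have to restrict the domain to a discrete approximation.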

17:30
Translating LPOD and CR-Prolog2 into Standard Answer Set Programs
SPEAKER: Zhun Yang

ABSTRACT. Logic Programs with Ordered Disjunction (LPOD) is an extension of standard answer set programs that handles preference using the concept of ordered disjunction, and CR-Prolog2 is an extension of standard answer set programs with consistency-restoring rules and LPOD-like ordered disjunction. We present reductions of each of these languages into the standard ASP language, which give us an alternative insight into the semantics of the extensions in terms of the standard ASP language.
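
For intuition (a classic LPOD-style example of ours, not taken from the paper), ordered disjunction expresses graded preference in the head of a rule:

\[
\mathit{coffee} \times \mathit{tea} \;\leftarrow\; \mathit{thirsty}
\]

Read: if thirsty, prefer answer sets containing coffee, and accept tea only when coffee is impossible (e.g., ruled out by other rules). A typical reduction into standard ASP introduces auxiliary atoms recording the degree to which each such rule is satisfied, so that preferred answer sets can be selected by comparing these degrees.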

17:00-18:00 Session 100B: Awards (IJCAR)
Location: Maths LT2
  • Presentation of the IJCAR 2018 Best Paper Award
  • Presentation of the 2018 Woodie Bledsoe Student Travel Awards
  • Presentation of the Herbrand Award for Distinguished Contributions to Automated Reasoning to Bruno Buchberger
    • Presentation by the awardee: Automated Mathematical Invention: Would Gröbner Need a PhD Student Today?
19:00-21:30 FLoC reception at Oxford Town Hall (FLoC)

FLoC reception at Oxford Town Hall. Drinks and canapés available from 7pm (pre-booking via FLoC registration system required; guests welcome).