FLOC 2018: FEDERATED LOGIC CONFERENCE 2018
PROGRAM FOR FRIDAY, JULY 13TH

09:00-10:00 Session 83A: Invited talk: Erika Abraham (ADHS)
09:00
Old-established methods in a new look: How HyPro speeds up reachability computations for hybrid systems

ABSTRACT. Reachability analysis for hybrid systems is a highly challenging task. Besides logical approaches, methods based on flowpipe construction are in use, implemented in several tools. Despite exciting developments, these tools are still not ready for large-scale applications. In this talk we discuss how well-known techniques like over-approximation, counterexample-guided abstraction refinement or parallelization can be instantiated in this context to reduce the computational effort in practice, and how these ideas are implemented in the HyPro tool.

09:00-10:30 Session 83B: Program Verification (ADSL)
09:00
Resource semantics: substructural logic as a modelling technology (keynote)

ABSTRACT. The development of the logic of bunched implications, BI, together with its somewhat characteristic resource semantics, led to the formulation of Separation Logic, which forms the basis of the Infer program analyser deployed in Facebook's code production. This rather successful story sits within a broader, quite systematic logical context. I will review the (family of) logics, including process logics, that are supported by resource semantics, explaining their more-or-less uniform meta-theoretic basis and illustrating their uses in a range of modelling applications, including access control, systems security, and workflow simulation. 

09:50
Automating the Flow Framework

ABSTRACT. In previous work we presented the flow framework for verifying complex concurrent data structures using separation logic (SL). The framework enables the decomposition of the overall correctness proof of the data structure into (1) a proof of an abstract template algorithm that captures the underlying synchronization protocol between data structure operations, and (2) a refinement proof of this template to the concrete data structure implementation. In this paper, we investigate the automation of proofs expressed in this framework. We use the give-up template for concurrent dictionaries, and an implementation of this template based on B+ trees as a case study. We detail the challenges in automating the framework and present a revised version geared towards automation in a standard SL-based symbolic execution engine. We have implemented such an engine in the deductive verification tool GRASShopper and used it to mechanize the proof of our case study. Our experience suggests that the flow framework can express simple correctness proofs of complex concurrent data structures and can be automated using off-the-shelf verification tools, requiring only minor extensions to standard reasoning engines for SL.

09:00-10:30 Session 83C: CAV Tutorial: Shaz Qadeer (part 1) (CAV)
Location: Maths LT3
09:00
Proving a concurrent program correct by demonstrating it does nothing
SPEAKER: Shaz Qadeer

ABSTRACT. CIVL is a verifier for concurrent programs implemented as a conservative extension to the Boogie verification system. CIVL allows the proof of correctness of a concurrent program —shared-memory or message-passing— to be described as a sequence of program layers. The safety of a layer implies the safety of the layer just below, thus allowing the safety of the highest layer to transitively imply the safety of the lowest.

The central theme in CIVL is reasoning about atomic actions. Different layers of a program describe its behavior using atomic actions, higher layers with coarse-grained and lower layers with fine-grained atomic actions. A proof of correctness amounts to a demonstration that the highest layer is SKIP, the atomic action that does nothing. The formal and automated verification justifying the connection between adjacent layers combines several techniques—linear variables, reduction based on movers, location invariants, and procedure-local abstraction.

CIVL is available in the master branch of Boogie together with more than fifty micro-benchmarks. CIVL has also been used to refine a realistic concurrent garbage-collection algorithm from a simple high-level specification down to a highly-concurrent implementation described in terms of individual memory accesses.

09:00-10:30 Session 83D (FRIDA)
09:00
Building verified distributed systems with IVy

ABSTRACT. IVy is a language and a tool focused on building verified distributed systems using decidable logics. The talk will discuss the philosophy underlying IVy and give examples of using IVy to verify distributed systems.

09:30
The latest gossip on BFT consensus

ABSTRACT. Tendermint is a new protocol for ordering events in a distributed network under adversarial conditions. More commonly known as Byzantine Fault Tolerant (BFT) consensus or atomic broadcast, the problem has attracted significant attention in recent years due to the widespread success of blockchain-based digital currencies, such as Bitcoin and Ethereum, which successfully solve the problem in a public setting without a central authority. Tendermint modernizes classic academic work on the subject and simplifies the design of the BFT algorithm by relying on a peer-to-peer gossip protocol among nodes. Furthermore, Tendermint uses a simple socket protocol to enable BFT state machine replication of arbitrary deterministic state machines, written in any programming language. Our implementation in Go achieves thousands of transactions per second on dozens of nodes distributed around the globe, with latencies of about one second, and performance degrading moderately in the face of adversarial attacks. This talk will provide an overview of how Tendermint works and how to build BFT applications with it, including scalable crypto-currencies.
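
The application interface itself is beyond this talk summary, but the core idea of state machine replication is easy to sketch. Below is a minimal Python illustration (hypothetical names, not Tendermint's actual API): consensus only fixes a total order on transactions, and since every replica applies the same deterministic apply function to the same ordered log, all correct replicas end up in the same state.

    # Hypothetical sketch: replicating an arbitrary deterministic state machine.
    class KVStore:
        """A deterministic state machine: a tiny key-value store."""
        def __init__(self):
            self.state = {}

        def apply(self, tx):
            # tx = ("set", key, value); determinism is the only requirement
            # that BFT replication places on this method.
            op, key, value = tx
            if op == "set":
                self.state[key] = value

    # The totally ordered transaction log agreed on by consensus;
    # every correct replica sees exactly this list.
    log = [("set", "a", 1), ("set", "b", 2), ("set", "a", 3)]

    replica1, replica2 = KVStore(), KVStore()
    for tx in log:
        replica1.apply(tx)
        replica2.apply(tx)
    assert replica1.state == replica2.state   # both hold {"a": 3, "b": 2}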

10:00
Formal Verification of Internet of Things Protocols

ABSTRACT. We apply infinite-state model checking to formally verify Internet of Things protocols such as the Pub/Sub consistency protocol adopted by the REDIS distributed key-value store.

The verification method is based on a combination of SMT solvers and over-approximations such as those implemented in the Cubicle verification tool.

09:00-10:30 Session 83E (HCVS)
09:00
Horn Clauses and Beyond for Relational and Temporal Program Verification

ABSTRACT. In this talk, we present our recent and ongoing work on constraint solving for verification of higher-order functional programs, where we address two important classes of specifications: (1) relational specifications and (2) dependent temporal specifications. These classes impose a new challenge: validity checking of first-order formulas with least and greatest fixpoints respectively for inductive and coinductive predicates, which generalizes existing variants of constrained Horn clause (CHC) solving.

The former class of relational specifications includes functional equivalence, associativity, commutativity, distributivity, monotonicity, idempotency, and non-interference, whose verification often boils down to inferring mutual invariants among inputs and outputs of multiple function calls. To this end, we present a novel CHC solving method based on inductive theorem proving: the method reduces CHC solving to validity checking of first-order formulas with least fixpoints for inductive predicates, which are then checked by induction on the derivation of the predicates. The method thus enables relational verification by expressing and checking mutual invariants as induction hypotheses.

The latter class of dependent temporal specifications is used to constrain (possibly infinite) event sequences generated from the target program. We express temporal specifications as first-order formulas over finite and infinite strings that encode event sequences. The use of first-order formulas allows us to express temporal specifications that depend on program values, so that we can specify input-dependent temporal behavior of the target program. Furthermore, we use least and greatest fixpoints to respectively model finite and infinite event sequences generated from the target program. To solve such fixpoint constraints, we present a novel deductive system consisting of rules for soundly eliminating fixpoints via invariants and well-founded relations.

10:00
Metaprogramming and symbolic execution for detecting runtime errors in Erlang programs

ABSTRACT. Dynamically typed languages, like Erlang, allow developers to quickly write programs without explicitly providing any type information on expressions or function definitions. However, this feature makes dynamic languages less reliable, because many runtime errors would be easily caught in statically typed languages. Besides, adding concurrency to the mix leads to a programming language where some runtime errors only appear for a particular execution of the program. In this paper, we present our preliminary work on a tool that, by using the well-known techniques of metaprogramming and symbolic execution, can be used to perform bounded verification of Erlang programs. In particular, by using Constraint Logic Programming, we develop an interpreter that, given an Erlang program and a symbolic input, returns answer constraints that represent sets of concrete data for which the Erlang program will incur a runtime error.
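
To make the idea concrete, here is a toy Python sketch (illustrative only, not the authors' CLP-based interpreter) of symbolic execution that returns, per execution path, the constraints on a symbolic input under which a division by zero occurs:

    def explore(expr, constraints):
        """Return (outcome, constraints) pairs; outcome is a term or 'ERROR'."""
        op = expr[0]
        if op in ('const', 'var'):
            return [(expr, constraints)]
        results = []
        for left, c1 in explore(expr[1], constraints):
            if left == 'ERROR':
                results.append((left, c1))
                continue
            for right, c2 in explore(expr[2], c1):
                if right == 'ERROR':
                    results.append((right, c2))
                elif op == 'div':
                    # branch on the divisor: the zero branch is a runtime error
                    results.append(('ERROR', c2 + [show(right) + ' = 0']))
                    results.append((('div', left, right), c2 + [show(right) + ' != 0']))
                else:
                    results.append(((op, left, right), c2))
        return results

    def show(t):
        if t[0] in ('const', 'var'):
            return str(t[1])
        return '(' + show(t[1]) + ' ' + t[0] + ' ' + show(t[2]) + ')'

    # 1 / (x + -1) fails exactly when x = 1: the reported "answer constraint".
    prog = ('div', ('const', 1), ('add', ('var', 'x'), ('const', -1)))
    for outcome, cs in explore(prog, []):
        if outcome == 'ERROR':
            print('runtime error when:', cs)   # -> ['(x add -1) = 0']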

09:00-10:30 Session 83F (LSB)
09:00
Pathway Logic: Symbolic executable models for reasoning about cellular processes
09:00-10:30 Session 83G (LearnAut)
09:00
Algorithms for Weighted and Probabilistic Context-Free Grammars and Convex Optimization
10:00
Learning Graph Weighted Models on Pictures

ABSTRACT. Graph Weighted Models (GWMs) have recently been proposed as a natural generalization of weighted automata over strings and trees to arbitrary families of labeled graphs (and hypergraphs). A GWM generically associates a labeled graph with a tensor network and computes a value by successive contractions directed by its edges. In this paper, we consider the problem of learning GWMs defined over the graph family of pictures (or 2-dimensional words). As a proof of concept, we consider the simple Bars and Stripes picture language and provide an experimental study investigating whether this language can be learned in the form of a GWM from positive and negative examples using gradient-based methods. Our results suggest that this is indeed possible and that investigating the use of gradient-based methods to learn picture series and functions computed by GWMs over other families of graphs could be a fruitful direction.
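
For orientation (a standard fact about the string case, assuming the usual definitions): over strings a GWM specializes to a weighted automaton, which maps a word $x_1 \cdots x_n$ to $f(x_1 \cdots x_n) = \alpha^\top A_{x_1} \cdots A_{x_n} \omega$ for an initial vector $\alpha$, one transition matrix $A_\sigma$ per symbol, and a final vector $\omega$; this is precisely the contraction of a chain-shaped tensor network, and for pictures the same contraction is carried out over a two-dimensional grid.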

09:00-10:30 Session 83H: Markov decision processes 1 (MoRe)
09:00
Conditioning and quantiles in Markovian models

ABSTRACT. The classical multi-objective analysis of Markov decision processes (MDPs) and other stochastic models with nondeterminism seeks policies that resolve the nondeterministic choices so as to achieve Pareto-optimality or to maximize/minimize a linear goal function, while satisfying a list of probability or expectation constraints. Other approaches to reason about the trade-off between multiple cost and reward functions rely on conditioning or quantiles. The former, for instance, can serve to synthesize policies that maximize the expected utility in energy-critical situations or to minimize the cost of repair mechanisms in exceptional error scenarios. Quantiles can, e.g., be used to identify policies with a minimal cost bound for meeting a utility constraint with sufficiently high probability. While algorithms for conditional probabilities in MDPs can rely on a fairly simple model transformation, reasoning about conditional expected accumulated costs and quantiles is computationally more difficult. The talk presents an overview of recent results in these directions.
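
As a concrete instance of the quantile notion mentioned here (the standard definition, not specific to this talk): for a state $s$ of an MDP with non-negative accumulated cost and goal set $G$, the quantile for probability threshold $p$ is $qu_s(p) = \min\{ b \in \mathbb{N} : \Pr^{\max}_s(\Diamond_{\leq b}\, G) \geq p \}$, i.e. the least cost bound $b$ for which some policy reaches $G$ with cost at most $b$ and probability at least $p$; conditional expectations analogously take the form $\mathrm{E}^{\sigma}_s(\mathit{cost} \mid \Diamond G)$.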

10:00
Multiple Objectives and Cost Bounds in MDP

ABSTRACT. We summarize our recent TACAS paper, where we provide an efficient algorithm for multi-objective model-checking problems on Markov decision processes (MDPs) with multiple cost structures. The key problem at hand is to check whether there exists a scheduler for a given MDP such that all objectives over cost vectors are fulfilled. Reachability and expected cost objectives are covered and can be mixed. Empirical evaluation shows the algorithm’s scalability. We discuss the need for output beyond Pareto curves and exploit the available information from the algorithm to support decision makers.
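
In symbols, the key query has roughly the following shape (standard notation, assumed here for illustration): given probability thresholds $p_i$ and cost bounds $b_i$, does there exist a scheduler $\sigma$ such that $\Pr^{\sigma}_{s_0}(\Diamond_{\leq b_i}\, T_i) \geq p_i$ for every objective $i$, where each bound $b_i$ may refer to a different cost structure of the MDP, possibly mixed with expected-cost constraints of the form $\mathrm{E}^{\sigma}_{s_0}(\mathit{cost}_j) \leq c_j$?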

09:00-10:30 Session 83I: Applications (RCRA)
09:00
Computing Personalised Treatments through In Silico Clinical Trials. A Case Study on Downregulation in Assisted Reproduction.
SPEAKER: Toni Mancini

ABSTRACT. In Silico Clinical Trials (ISCT), i.e., clinical experimental campaigns carried out by means of computer simulations, hold the promise to decrease the time and cost of the safety and efficacy assessment of pharmacological treatments, reduce the need for animal and human testing, and enable precision medicine.

In this paper we present a case study aiming at quantifying, by means of a multi-arm ISCT supervised by intelligent search, the potential impact of precision medicine approaches on a real pharmacological treatment, namely the downregulation phase of a complex clinical protocol for assisted reproduction.

09:22
Optimal Fault-Tolerant Placement of Relay Nodes in a Mission Critical Wireless Network
SPEAKER: Toni Mancini

ABSTRACT. The operations of many critical infrastructures (e.g., airports) heavily depend on proper functioning of the radio communication network supporting operations. As a result, such a communication network is indeed a mission-critical communication network that needs adequate protection from external electromagnetic interferences. This is usually done through radiogoniometers. Basically, by using at least three suitably deployed radiogoniometers and a gateway gathering information from them, sources of electromagnetic emissions that are not supposed to be present in the monitored area can be localised. Typically, relay nodes are used to connect radiogoniometers to the gateway. As a result, some degree of fault-tolerance for the network of relay nodes is essential in order to offer a reliable monitoring. On the other hand, deployment of relay nodes is typically quite expensive. As a result, we have two conflicting requirements: minimise costs while guaranteeing a given fault-tolerance.

In this paper we address the problem of computing a deployment for relay nodes that minimises the relay node network cost while at the same time guaranteeing proper working of the network even when some of the relay nodes (up to a given maximum number) become faulty (fault-tolerance).

We show that the above problem can be formulated as a MILP as well as a PB optimisation problem and present experimental results comparing the two approaches on realistic scenarios.

09:44
Greedy Randomized Search for Scalable Compilation of Quantum Circuits
SPEAKER: Angelo Oddi

ABSTRACT. This paper investigates the performance of a greedy randomized algorithm to optimize the realization of nearest-neighbor compliant quantum circuits. Current technological limitations (the decoherence effect) impose that the overall duration (makespan) of the quantum circuit realization be minimized. One core contribution of this paper is a lexicographic two-key ranking function for quantum gate selection: the first key acts as a global closure metric to minimize the solution makespan; the second one is a local metric acting as ``tie-breaker'' for avoiding cycling. Our algorithm has been tested on a set of quantum circuit benchmark instances of increasing sizes available from the recent literature. We demonstrate that our heuristic approach outperforms the solutions obtained in previous research against the same benchmark, both from the CPU-efficiency and from the solution-quality standpoint.
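
The selection step can be pictured with a short Python sketch (hypothetical names and toy metrics, not the authors' implementation): candidates are ranked lexicographically by the two keys, and the next gate is drawn at random from the few best, which is what makes the greedy search randomized.

    import random

    def select_gate(candidates, global_key, local_key, rcl_size=3):
        """Lexicographic two-key ranking; sample from the best rcl_size."""
        ranked = sorted(candidates, key=lambda g: (global_key(g), local_key(g)))
        return random.choice(ranked[:rcl_size])   # restricted candidate list

    # Toy example: lower is better for both keys.
    gates = ['swap_01', 'swap_12', 'cnot_02', 'cnot_13']
    pick = select_gate(gates,
                       global_key=lambda g: len(g),       # stand-in global metric
                       local_key=lambda g: g.count('1'))  # stand-in tie-breaker
    print(pick)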

10:06
Logic Programming approaches for routing fault-free and maximally-parallel Wavelength Routed Optical Networks on Chip (Application paper)

ABSTRACT. One promising trend in digital system integration consists of boosting on-chip communication performance by means of silicon photonics, thus materializing the so-called Optical Networks-on-Chip (ONoCs). Among them, wavelength routing can be used to route a signal to destination by univocally associating a routing path to the wavelength of the optical carrier. Such wavelengths should be chosen so as to minimize interference among optical channels and to avoid routing faults. As a result, physical parameter selection of such networks requires the solution of complex constrained optimization problems. In previous work, published in the proceedings of the International Conference on Computer-Aided Design, we proposed and solved the problem of computing the maximum parallelism obtainable in the communication between any two end-points while avoiding misrouting of optical signals. The underlying technology, only quickly mentioned in that paper, is Answer Set Programming (ASP). In this work, we detail the ASP approach we used to solve this problem. Another important design issue is to select the wavelengths of optical carriers such that they are spread across the available spectrum, in order to reduce the likelihood that, due to imperfections in the manufacturing process, unintended routing faults arise. We show how to address this problem in Constraint Logic Programming on Finite Domains (CLP(FD)).

This paper appears in Theory and Practice of Logic Programming, Volume 17, Issue 5-6 (33rd International Conference on Logic Programming).

09:00-10:30 Session 83J (SMT)
09:00
Invited talk: Experiments in Universal Logical Reasoning --- How to utilise ATPs and SMT solvers for the exploration of axiom systems for category theory in free logic

ABSTRACT. Classical higher-order logic, when utilized as a meta-logic in which various other (classical and non-classical) logics can be shallowly embedded, is suitable as a foundation for the development of a universal logical reasoning engine. Such an engine may be employed, as already envisioned by Leibniz, to support the rigorous formalisation and deep logical analysis of rational arguments on the computer.

In my talk I will demonstrate, by way of example, how this approach can be utilised to implement "free logic" within the proof assistant Isabelle/HOL. SMT solvers and ATPs, which are integrated with Isabelle/HOL, can then be used in this framework to (indirectly) automate free logic reasoning.

Free logic has many interesting applications, e.g. in natural language processing and as a logic of fiction. In mathematics, free logics are particularly well suited to application domains, such as axiomatic category theory, where an adequate and elegant treatment of partiality and undefinedness is required.

In my talk I will summarise the results of an experiment with the above free logic reasoner, in which a series of mutually equivalent axiom systems for category theory has been systematically explored. As a side effect of this work, a (minor) issue in a prominent category theory textbook has been revealed.

The purpose of this work is not to claim any novel results in category theory, but to demonstrate an elegant way to automate reasoning in free logic, and to present illustrative experiments, which were (indirectly) supported by SMT solvers, ATPs and the model finder Nitpick.

10:00
Higher-Order SMT Solving

ABSTRACT. Satisfiability modulo theories (SMT) solvers have throughout the years been able to cope with increasingly expressive formulas, from ground logics to full first-order logic modulo theories. Nevertheless, higher-order logic within SMT (HOSMT) is still little explored. In this preliminary report we discuss how to extend SMT solvers to natively support higher-order reasoning without compromising their performance on first-order (FO) problems. We present a pragmatic extension to the cvc4 solver in which we generalize existing data structures and algorithms to HOSMT, thus leveraging the extensive research and implementation efforts dedicated to efficient FO solving. Our evaluation shows that the initial implementation does not add significant overhead to FO problems and its performance is on par with the encoding-based approach for HOSMT. We also discuss an alternative extension being implemented in veriT, in which new data structures and algorithms are being developed from scratch to best support HOSMT, thus avoiding the inherent difficulties of gracefully generalizing existing infrastructure not intended for higher-order reasoning.

09:00-10:30 Session 83K: Machine learning and logic (SoMLMFM)
Location: Maths LT1
09:00
Machine learning and logic: Fast and slow thinking

ABSTRACT. There is a recent perception that computer science is undergoing a Kuhnian paradigm shift, with CS as a model-driven science being replaced by CS as a data-driven science. I will argue that, in general, new scientific theories refine old scientific theories rather than replace them. Thus, data-driven CS and model-driven CS complement each other, just as fast thinking and slow thinking complement each other in human thinking, as explicated by Daniel Kahneman. I will then use automated vehicles as an example, where in recent years car makers and tech companies have been racing to be the first to market. In this rush there has been little discussion of how to obtain scalable standardization of safety assurance, without which this technology will never be commercially deployable. Such assurance requires formal methods, and combining machine learning with logic is the challenge of the day.

09:30
Provably beneficial artificial intelligence

ABSTRACT. I will briefly survey recent and expected developments in AI and their implications. Beyond these, one must expect that AI capabilities will eventually exceed those of humans across a range of real-world decision-making scenarios. Should this be a cause for concern, as Elon Musk, Stephen Hawking, and others have suggested? And, if so, what can we do about it? While some in the mainstream AI community dismiss the issue, I will argue instead that a fundamental reorientation of the field is required. Instead of building systems that optimize arbitrary objectives, we need to learn how to build systems that will, in fact, be beneficial for us. I will show that it is useful to imbue systems with explicit uncertainty concerning the true objectives of the humans they are designed to help, as well as the ability to learn more about those objectives from observation of human behaviour.

10:00
Towards Robust and Explainable Artificial Intelligence

ABSTRACT. Deep learning has led to rapid progress being made in the field of machine learning and artificial intelligence, leading to dramatically improved solutions of many challenging problems such as image understanding, speech recognition, and automatic game playing. Despite these remarkable successes, researchers have observed some intriguing and troubling aspects of the behaviour of these models. A case in point is the presence of adversarial examples which make learning based systems fail in unexpected ways. Such behaviour and the difficulty of interpreting the behaviour of neural networks is a serious hindrance in the deployment of these models for safety-critical applications. In this talk, I will review the challenges in developing models that are robust and explainable and discuss the opportunities for collaboration between the formal methods and machine learning communities.

09:00-10:30 Session 83L (Tetrapod)
Location: Maths L6
09:00
Modularity in Software Synthesis

ABSTRACT. Software synthesis tools support the translation of requirements into acceptable software designs. The requirements may be expressed logically, with a deductive design process, or the requirements may come in the form of datasets, with an inductive design process. This session focuses on modularity in software synthesis, and in particular on modular knowledge about requirements, software design, and the structure of the design process. We briefly outline some of the key forms of modularization that arise in these aspects of software synthesis.

1. Requirements

For applications where mathematical correctness is important, requirements and high-level designs can be structured using formal logical specifications as finite presentations of theories, which are composed using specification morphisms and colimits. For machine learning applications, the requirements come in the form of structured data sets.

2. Software Design Knowledge

Representations of software design knowledge allow developers to guide an automated synthesis system so that it can effectively translate requirement-level specifications into acceptable high-level designs. Transformations (replace specification or code patterns by other code patterns) and inference rules that relate logical goals with program structure (e.g. deductive synthesis rules) provide some of the basic design knowledge modules for any synthesis system. Larger-grain modules can capture reusable higher-level design knowledge: algorithm design theories, datatype specifications and refinements, formalized design patterns, and architectural patterns. These are used with deductive/refinement techniques to generate initial designs from requirement-level specifications. Machine learning applications also exploit larger-grain patterns, such as linear regression models and neural networks. These provide a computation pattern (with parameters to be learned and chosen hyperparameters) that can be instantiated using methods for fitting models to the given data. More ad-hoc forms of design knowledge include sketches, program templates, and schemata. These are used with (1) deductive/refinement techniques to generate instantiations from logical specifications, or (2) inductive generalization techniques that search a space of instances given concrete examples.

3. Derivations

A software synthesis process that generates interesting software usually involves the composition of many kinds of design knowledge, and so it is relevant to consider the structuring of that composition, called a derivation. Derivations can be a sequence (or tree) of specifications where each is derived from its predecessor by applying a module of design knowledge. Typically, the early knowledge applications in a derivation introduce the overall algorithmic or architectural structure, and then many more knowledge applications are applied to improve performance and to fit the design to the target computational substrate.


09:45
Modularity in Mathematical Computation

ABSTRACT. Modularity in Mathematical Computation

09:00-10:30 Session 83M: UITP 1: Invited talk (UITP)
09:00
Designing Globular: formal proofs as geometrical objects (Invited Talk)
10:00
Interfacing with a Prover using Focusing and Logic Programming

ABSTRACT. Interfacing with theorem provers is normally done by specifying a strategy. Such a strategy can be local, such as using tactics in interactive theorem provers, or global when using automatic ones. Focused proof calculi have a clear separation between the phases in the proof search which can be done fully automatically and those phases which require decision making. The inference rules of these proof calculi can be defined using logical formulas, such as implications. Logic programming languages allow for proof search over a database of such logical formulas and also support interaction with the user via input/output calls. In this paper we describe a possible process of writing an interactive proof assistant over a focused sequent calculus using the higher-order programming language lambda-prolog. We show that one can gain a high level of trust in the correctness of the prover, up to the correctness of an extremely small kernel. This process might allow one to obtain a fully functional proof assistant using a small amount of code and by using a clear process for arbitrary focused calculi.

09:00-10:30 Session 83N (VDMW)
Location: Blavatnik LT2
09:00
Model checking and its applications
09:45
Computation and inference
09:00-10:30 Session 83P: Vampire - Past, Present and Future (Vampire)
09:00
Vampire: where have we come from, where are we going

ABSTRACT. In this session, I will give a broad overview of the Vampire system split into the past, present, and future. To begin with, I will give a quick beginner's guide to what Vampire is and what it looks like 'under the hood' focussing on new additions over the last 5 years. I will then talk about current projects and what you can expect to see in the near future. I will finish by talking about new ideas for the more distant future (i.e. things we haven't necessarily started working on yet). The session will be interactive and wherever possible I will show Vampire doing things.

09:00-10:30 Session 83Q (rv4rise)
09:00
Hardware-based runtime verification with Tessla

ABSTRACT. In this talk, we present a novel platform for online monitoring of multicore systems. It gives insight into the system’s behavior without affecting it. This insight is crucial to detect non-deterministic failures as for example caused by race conditions and access to inconsistent data. A system-on-chip is observed using the tracing capabilities available on many modern multi-core processors. They provide highly compressed tracing information over a separate tracing port. From this information our system reconstructs the sequence of instructions executed by the processor, which can then be analyzed online by a reconfigurable monitoring unit. Analyses are described in a high-level temporal stream-based specification language, called TESSLA, and are compiled to configurations of the monitoring unit. To cope with the amount of tracing data generated by modern processors, our system is implemented in hardware using an FPGA. The work presented is carried out within the COEMS project and has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement no. 732016.

09:45
Real-time Stream Processing with RTLola

ABSTRACT. The online evaluation of real-time data streams is a ubiquitous part of modern IT systems. Applications range from collecting statistics on data-driven platforms to monitoring safety-critical real-time systems. We present RTLola, a specification and monitoring framework for stream-based real-time properties. RTLola’s stream processing engine translates input streams, which contain sensor readings and other data collected at runtime, into output streams, which contain aggregated statistics. This setup has the advantage of great expressiveness and easy-to-reuse, compositional specifications. A major challenge in monitoring real-time data is to build stable and efficient stream processing engines. A monitor that consumes an unbounded amount of memory is bound to eventually disrupt the normal operation of the system. In real-time applications, data arrives at varying rates that are usually hard to predict. A key feature of RTLola is that it computes guaranteed bounds on the memory based on the desired output rates, which are usually known in advance. We illustrate the use of RTLola with a series of case studies from a wide range of real-time systems, including networks and unmanned aerial systems.
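
The memory-bound argument can be illustrated with a small Python sketch (not RTLola syntax; the names are ours): events arriving at an arbitrary input rate are folded into fixed-duration buckets, so a sliding-window sum over window seconds evaluated at a fixed output period needs only window/period buckets, independent of the input rate.

    from collections import deque

    def windowed_sums(events, window=10.0, period=1.0):
        """events: (timestamp, value) pairs with non-decreasing timestamps.
        Yields one windowed sum per output period; memory is window/period
        buckets regardless of how fast input arrives."""
        n_buckets = int(window / period)
        buckets = deque([0.0] * n_buckets, maxlen=n_buckets)
        next_deadline = period
        for t, v in events:
            while t >= next_deadline:      # emit at the fixed output rate
                yield sum(buckets)
                buckets.append(0.0)        # oldest bucket leaves the window
                next_deadline += period
            buckets[-1] += v               # fold the event into the current bucket

    print(list(windowed_sums([(0.2, 1), (0.7, 2), (1.5, 4), (3.1, 8)])))  # [3, 7, 7]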

09:30-10:30 Session 84A: Isabelle 1 (Isabelle)
Location: Blavatnik LT1
09:30
A Verified SAT Solver with Watched Literals Using Imperative HOL (Extended Abstract)

ABSTRACT. Based on our earlier formalization of conflict-driven clause learning (CDCL) in Isabelle/HOL, we refine the CDCL calculus to add a crucial optimization: two watched literals. We formalize the data structure and the invariants. Then we refine the calculus to obtain an executable SAT solver. Through a chain of refinements carried out using the Isabelle Refinement Framework, we target Imperative HOL and extract imperative Standard ML code. Although our solver is not competitive with the state of the art, it offers acceptable performance for some applications, and heuristics can be added to improve it further.
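
For readers unfamiliar with the optimization, here is a compact executable illustration of two watched literals in Python (ours, not the verified Isabelle/HOL development): a clause is revisited only when one of its two watched literals becomes false, at which point the propagator looks for a replacement watch, detects a unit clause, or reports a conflict.

    from collections import defaultdict

    def value(lit, assign):
        v = assign.get(abs(lit))
        return None if v is None else (v if lit > 0 else not v)

    def propagate(clauses, watch, watchers, assign, lit):
        """Assert lit and propagate; returns False exactly on conflict."""
        pending = [lit]
        while pending:
            l = pending.pop()
            if value(l, assign) is False:
                return False                    # contradictory implications
            assign[abs(l)] = (l > 0)
            for i in list(watchers[-l]):        # watched literal -l just became false
                w1, w2 = watch[i]
                other = w2 if w1 == -l else w1
                if value(other, assign) is True:
                    continue                    # clause already satisfied
                for cand in clauses[i]:         # try to find a replacement watch
                    if cand != other and cand != -l and value(cand, assign) is not False:
                        watch[i] = [other, cand]
                        watchers[-l].remove(i)
                        watchers[cand].append(i)
                        break
                else:                           # no replacement: unit or conflict
                    if value(other, assign) is False:
                        return False
                    pending.append(other)
        return True

    clauses = [[1, 2], [-1, 3], [-3, -2]]
    watch = {i: c[:2] for i, c in enumerate(clauses)}
    watchers = defaultdict(list)
    for i, (a, b) in watch.items():
        watchers[a].append(i); watchers[b].append(i)
    assign = {}
    print(propagate(clauses, watch, watchers, assign, 1), assign)
    # -> True {1: True, 3: True, 2: False}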

10:00
Lazy Algebraic Types in Isabelle/HOL

ABSTRACT. This paper presents the tool CodeLazy for Isabelle/HOL. It hooks into Isabelle’s code generator such that the generated code evaluates a user-specified set of type constructors lazily, even in target languages with eager evaluation. The lazy type must be algebraic, i.e., values must be built from constructors and a corresponding case operator decomposes them. Every datatype and codatatype is algebraic and thus eligible for lazification.

Lazification is transparent to the user: definitions, theorems, and the reasoning in HOL need not be changed. Instead, CodeLazy transforms the code equations for functions on lazy types when code is generated. It thus makes code-generation-based Isabelle tools like evaluation and quickcheck available for codatatypes, where eager evaluation frequently causes non-termination. Moreover, as all transformations are checked by Isabelle’s kernel, they cannot introduce correctness problems.

09:30-10:30 Session 84B: LCC Invited Talk: Mikolaj Bojanczyk (LCC)
Location: Maths L5
09:30
On polynomial time for infinite structures

ABSTRACT. I will discuss algorithms that process certain kinds of infinite structures (hereditarily definable sets with atoms). An example of such a structure is a directed graph, where the vertices are 10-tuples of rational numbers, and the edge relation is given by a quantifier-free formula that uses the ordering of the rational numbers (but does not use other structure, like addition or multiplication). An example of a decision problem is: given two such graphs decide if they are isomorphic. (For this particular example, it is not known if an algorithm exists.) I will be concerned with two questions: (a) which problems should be called “decidable”; and (b) which problems should be called “polynomial time”.

(Joint work with Szymon Toruńczyk)


10:30-11:00 Coffee Break
10:50-12:05 Session 85: Stochastic systems 2 (ADHS)
10:50
Compositional Synthesis of Finite Abstractions for Continuous-Space Stochastic Control Systems: A Small-Gain Approach

ABSTRACT. This paper is concerned with a compositional approach for constructing finite abstractions (a.k.a. finite Markov decision processes) of interconnected discrete-time stochastic control systems. The proposed framework is based on a notion of so-called stochastic simulation function enabling us to use an abstract system as a substitution of the original one in the controller design process with guaranteed error bounds. In the first part of the paper, we derive sufficient small-gain type conditions for the compositional quantification of the distance in probability between the interconnection of stochastic control subsystems and that of their (finite or infinite) abstractions. In the second part of the paper, we construct finite abstractions together with their corresponding stochastic simulation functions for the class of linear stochastic control systems. We apply our results to the temperature regulation in a circular building by constructing compositionally a finite abstraction of a network containing 1000 rooms. We use the constructed finite abstractions as substitutes to synthesize policies compositionally regulating the temperature in each room for a bounded time horizon.

11:15
Temporal Logic Control of POMDPs Via Label-Based Stochastic Simulation Relations

ABSTRACT. The synthesis of controllers guaranteeing linear temporal logic specifications on partially observable Markov decision processes (POMDP) via their belief models causes computational issues due to the continuous spaces. In this work, we construct a finite-state abstraction on which a control policy is synthesized and refined back to the original belief model. We introduce a new notion of label-based approximate stochastic simulation to quantify the deviation between belief models. We develop a robust synthesis methodology that yields a lower bound on the satisfaction probability, by compensating for deviations a priori, and that utilizes a less conservative control refinement.

11:40
Concentration of Measure for Chance-Constrained Optimization

ABSTRACT. Chance-constrained optimization problems optimize a cost function in the presence of probabilistic constraints. They are convex in very special cases and, in practice, they are solved using approximation techniques. In this paper, we study approximation of chance constraints for the class of probability distributions that satisfy a concentration of measure property. We show that using concentration of measure, we can transform chance constraints to constraints on expectations, which can then be solved based on scenario optimization. Our approach depends solely on the concentration of measure property of the uncertainty and does not require the objective or constraint functions to be convex. We also give bounds on the required number of scenarios for achieving a certain confidence. We demonstrate our approach on a non-convex chance-constrained optimization problem, and benchmark our technique against alternative approaches in the literature on a chance-constrained LQG problem.
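
The transformation can be seen in a representative special case (a standard calculation under an assumed Hoeffding-type bound, not the paper's exact hypotheses): if the constraint function satisfies $\Pr(g(x,\delta) \geq \mathbb{E}[g(x,\delta)] + t) \leq e^{-2t^2/c^2}$ for all $t \geq 0$, then the expectation constraint $\mathbb{E}[g(x,\delta)] \leq -c\sqrt{\tfrac{1}{2}\ln(1/\varepsilon)}$ implies the chance constraint $\Pr(g(x,\delta) \geq 0) \leq \varepsilon$: taking $t = -\mathbb{E}[g(x,\delta)]$ gives $\Pr(g \geq 0) \leq e^{-2t^2/c^2} \leq \varepsilon$. The resulting expectation constraint is then amenable to scenario optimization.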

11:00-12:30 Session 86A: Proof Theory (ADSL)
11:00
Cyclic Theorem Prover for Separation Logic by Magic Wand
SPEAKER: Koji Nakazawa

ABSTRACT. We propose the strong wand and the Factor rule in a cyclic-proof system for the separation-logic entailment checking problem with general inductive predicates. The strong wand is similar to but stronger than the magic wand and is defined syntactically. The Factor rule, which uses the strong wand, is an inference rule for spatial factorization to expose an implicitly allocated cell in inductive predicates. By this rule, we can avoid getting stuck in the Unfold-Match-Remove strategy. We present a semi-decision algorithm for proof search in the cyclic-proof system with the Factor rule and report experimental results from a prototype implementation.
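
For reference, the standard semantics of the magic wand, of which the paper's strong wand is a syntactically defined strengthening: a heap $h$ satisfies $A \mathbin{-\!\!*} B$ iff for every heap $h'$ disjoint from $h$ with $h' \models A$, the combined heap $h \cdot h' \models B$; the Factor rule uses the strong wand to expose a cell that is only implicitly allocated inside an inductive predicate.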

11:30
Labelled Cyclic Proofs for Separation Logic
SPEAKER: Daniel Mery

ABSTRACT. Separation Logic (SL) is a logical formalism for reasoning about programs that use pointers to mutate data structures. SL has proven itself successful in the field of program verification over the past fifteen years as an assertion language to state properties about memory heaps using Hoare triples. Since the full logic is not recursively enumerable, most of the proof-systems and verification tools for SL focus on the decidable but rather restricted symbolic heaps fragment. Moreover, recent proof-systems that go beyond symbolic heaps allow either the full set of connectives, or the definition of arbitrary predicates, but not both. In this work, we present a labelled proof-system called GMSL that allows both the definition of arbitrary inductive predicates and the full set of SL connectives.

12:00
A Complete Proof System for Basic Symbolic Heaps with Permissions
SPEAKER: Etienne Lozes

ABSTRACT. We consider basic symbolic heaps, i.e. symbolic heaps without any inductive predicates, but within a memory model featuring permissions. We propose a complete proof system for this logic that is entirely independent of the permission model. This is ongoing work towards a complete proof system for symbolic heaps with lists, and more generally towards a proof theory of permissions in separation logics with recursive predicates and Boolean BI with abstract predicates.

11:00-12:30 Session 86B: CAV Tutorial: Shaz Qadeer (part 2) (CAV)
Location: Maths LT3
11:00
Proving a concurrent program correct by demonstrating it does nothing (continued)

ABSTRACT. CIVL is a verifier for concurrent programs implemented as a conservative extension to the Boogie verification system. CIVL allows the proof of correctness of a concurrent program —shared-memory or message-passing— to be described as a sequence of program layers. The safety of a layer implies the safety of the layer just below, thus allowing the safety of the highest layer to transitively imply the safety of the lowest.

The central theme in CIVL is reasoning about atomic actions. Different layers of a program describe its behavior using atomic actions, higher layers with coarse-grained and lower layers with fine-grained atomic actions. A proof of correctness amounts to a demonstration that the highest layer is SKIP, the atomic action that does nothing. The formal and automated verification justifying the connection between adjacent layers combines several techniques—linear variables, reduction based on movers, location invariants, and procedure-local abstraction.

CIVL is available in the master branch of Boogie together with more than fifty micro-benchmarks. CIVL has also been used to refine a realistic concurrent garbage-collection algorithm from a simple high-level specification down to a highly-concurrent implementation described in terms of individual memory accesses.

11:00-12:30 Session 86C (FRIDA)
11:00
Specifying Non-Atomic Methods of Concurrent Objects

ABSTRACT. Effective software specifications enable modular reasoning, allowing clients to establish program properties without knowing the details of module implementations. While some modules’ operations behave atomically, others admit weaker consistencies to increase performance. Consequently, since current methodologies do not capture the guarantees provided by operations of varying non-atomic consistencies, specifications are ineffective, forfeiting the ability to establish properties of programs that invoke non-atomic operations.

In this work we develop a methodology for specifying software modules whose operations satisfy multiple distinct consistency levels. In particular, we develop a simple annotation language for specifying weakly-consistent operations, wherein annotations impose varying constraints on the visibility among operations. To validate annotations, we identify a novel characterization of consistency, called happens-before agnostic consistency, which admits effective consistency-checking automation. Empirically, we demonstrate the efficacy of our approach by deriving and validating relaxed-visibility specifications for Java concurrent objects. Furthermore, we demonstrate the optimality of our annotation language empirically, by establishing that even finer-grained languages do not capture stronger specifications for Java objects.

11:30
Peregrine - A Tool for Population Protocols

ABSTRACT. This talk introduces Peregrine, the first tool for the analysis and parameterized verification of population protocols. Population protocols are a model of distributed computation in which mobile anonymous agents interact stochastically to achieve a common task. Peregrine allows users to design protocols, to simulate them both manually and automatically, to gather statistics of properties such as convergence speed, and to verify correctness automatically.

In this talk, I provide a short introduction to population protocols through a live demonstration of Peregrine's user interface, and explain the theory behind Peregrine's automatic verification capabilities.

12:00
From Asynchronous Message-Passing Models To Rounds

ABSTRACT. One great issue in conceiving and specifying distributed algorithms is the sheer number of models, differing in subtle and often qualitative ways: synchrony, kind of faults, number of faults... In the context of message-passing, one solution is to restrict communication to proceed by round. A great variety of models can then be captured in the Heard-Of model, with predicates on the communication graph at each round. However, this raises another issue: how to characterize a given model by such a predicate? It depends on how to implement rounds in the model. This is straightforward in synchronous models, thanks to the upper bound on communication delay. On the other hand, asynchronous models allow unbounded message delays, which makes the implementation of rounds dependent on the specific message-passing model.

I will present our formalization of this implementation of rounds for asynchronous message-passing models. We proceed through games: the environment captures the non-determinism of a scheduler while processes decide, in turn, whether to change round or wait for more messages. Strategies of processes for these games, capturing the decision of when to change rounds, are studied through a dominance relation: a dominant strategy for a game implements the communication predicate which characterizes the corresponding message-passing model. I will walk you through a classical message-passing model, the asynchronous one with reliable communication and at most f permanent crashes, and present its dominant strategy. Then I will develop our existence results for large classes of strategies, and for all games.

This is joint work with Aurélie Hurault and Philippe Quéinnec.

11:00-12:30 Session 86D (HCVS)
11:00
Solving Constrained Horn Clauses Using Dependence-Disjoint Expansions
SPEAKER: David Heath

ABSTRACT. Recursion-free Constrained Horn Clauses (CHCs) are logic-programming problems that can model safety properties of programs with bounded iteration and recursion. In addition, many CHC solvers which handle recursive systems reduce their inputs to a series of recursion-free CHC systems that can each be solved efficiently.

In this paper, we define a novel class of recursion-free systems, named Clause-Dependence Disjoint (CDD), that generalizes classes defined in previous work. The advantage of this class is that many CDD systems are smaller than systems which express the same constraints but are part of a different class. This advantage in size allows CDD systems to be solved more efficiently than their counterparts in other classes. Based on the class of CDD systems, we implemented a CHC solver named Shara. Shara solves arbitrary CHC systems by reducing the input to a series of CDD systems. Our evaluation indicates that Shara outperforms state-of-the-art implementations in many practical cases.

11:30
A simple functional presentation and an inductive correctness proof of the Horn algorithm

ABSTRACT. We present a recursive formulation of the Horn algorithm for deciding the satisfiability of propositional clauses. The usual presentations in imperative pseudo-code are informal and not suitable for simple proofs of its main properties. By defining the algorithm as a recursive function (computing a least fixed-point), we achieve: 1) a concise, yet rigorous, formalisation; 2) a clear form of visualising executions of the algorithm, step-by-step; 3) precise results, simple to state and with clean inductive proofs.
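
The algorithm in question fits in a few lines; here is an illustrative Python rendering of the recursive least-fixpoint formulation (our sketch, with clauses represented as (body, head) pairs where head is None for goal clauses):

    def lfp(f, x):
        """Least fixed point of a monotone f, by iteration from x."""
        y = f(x)
        return x if y == x else lfp(f, y)

    def horn_sat(clauses):
        """Each clause is (body, head): body a frozenset of atoms,
        head an atom, or None for a goal clause body -> false."""
        step = lambda facts: facts | {h for (b, h) in clauses
                                      if h is not None and b <= facts}
        facts = lfp(step, frozenset())
        # satisfiable iff no goal clause has a derivable body
        return all(not (b <= facts) for (b, h) in clauses if h is None)

    # {p}, {p -> q}, {q -> false} is unsatisfiable:
    print(horn_sat([(frozenset(), 'p'),
                    (frozenset({'p'}), 'q'),
                    (frozenset({'q'}), None)]))   # -> False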

12:00
Towards Coinductive Theory Exploration in Horn Clause Logic: Extended Abstract

ABSTRACT. Coinduction occurs in two guises in Horn clause logic: in proofs of self-referencing properties and relations, and in proofs involving construction of (possibly irregular) infinite data. Both instances of coinductive reasoning appeared in the literature before, but a systematic analysis of these two kinds of proofs and of their relation was lacking. We propose a general proof-theoretic framework for handling both kinds of coinduction arising in Horn clause logic. To this aim, we propose a coinductive extension of Miller et al.'s framework of uniform proofs and prove its soundness relative to coinductive models of Horn clause logic.

11:00-12:30 Session 86E: Isabelle 2 (Isabelle)
Location: Blavatnik LT1
11:00
Formal Verification of Bounds for the LLL Basis Reduction Algorithm

ABSTRACT. The LLL basis reduction algorithm was the first polynomial-time algorithm to compute a reduced basis of a given lattice, and hence also a short vector in the lattice. It thereby approximates an NP-hard problem where the approximation quality solely depends on the dimension of the lattice, but not the lattice itself. The algorithm has several applications in number theory, computer algebra and cryptography.

In a recent paper, we presented the first formal soundness proof of the LLL algorithm. However, this proof did not include a formal statement of its complexity. Therefore, in this paper we provide two formal statements on the polynomial runtime. First, we formally prove a polynomial bound on the number of arithmetic operations. And second, we show that the numbers during the execution stay polynomial in size, so that each arithmetic operation can be performed in polynomial time.
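
For orientation, the textbook argument behind such iteration bounds (sketched with reduction parameter $\delta = 3/4$; the precise statements formalized in the paper may differ): let $d_i$ be the Gram determinant of the first $i$ basis vectors and $D = \prod_{i=1}^{n-1} d_i$. Size-reduction steps leave $D$ unchanged, every swap multiplies $D$ by a factor smaller than $\delta$, and $D \geq 1$ holds for integer lattices; since initially $D \leq B^{n(n-1)/2}$, where $B$ bounds the squared norms of the input vectors, the number of swaps is $O(n^2 \log B)$, which is the heart of the polynomial bound on arithmetic operations.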

11:30
Hilbert Meets Isabelle: Formalisation of the DPRM Theorem in Isabelle
SPEAKER: Marco David

ABSTRACT. Hilbert's tenth problem, posed in 1900 by David Hilbert, asks for a general algorithm to determine the solvability of any given Diophantine equation. In 1970, Yuri Matiyasevich proved the DPRM theorem which implies such an algorithm cannot exist. This paper will outline our attempt to formally state the DPRM theorem and verify Matiyasevich's proof using the proof assistant Isabelle/HOL.

12:00
Further Scaling of Isabelle Technology

ABSTRACT. Over 32 years, Isabelle has come a long way from a small experimental proof assistant to a versatile platform for proof document development. There has always been a challenge to keep up with the natural growth of applications, notably the Archive of Formal Proofs (AFP). Can we scale this technology further, towards really big libraries of formalized mathematics? Can the underlying Scala/JVM and Poly/ML platforms cope with the demands? Can we eventually do 10 times more and better? In this paper I will revisit these questions particularly from the perspective of: * Editing: Prover IDE infrastructure and front-ends, * Building: batch-mode tools and background services, * Browsing: HTML views and client-server applications.

11:00-12:30 Session 86F: LCC: Contributed Talks (LCC)
Location: Maths L5
11:00
Traversal-invariant definability and Logarithmic-space computation

ABSTRACT. In the presence of arithmetic, order-invariant definability in first-order logic captures constant parallel time, i.e. uniform AC⁰ [I]. The ordering is necessary to replicate the presentation of a structure on the input tape of a machine. What if that ordering was in fact a traversal of the structure? We show that an analogous notion of traversal-invariant definability characterizes deterministic logarithmic space (L) and that breadth-first traversals characterize nondeterministic logarithmic space (NL). The surprising feature of these characterizations is that we can work entirely within the framework of first-order logic without any extensions, other than the traversals themselves.

11:15
A recursion-theoretic characterisation of the positive polynomial-time functions
SPEAKER: Anupam Das

ABSTRACT. We extend work of Lautemann, Schwentick and Stewart '96 on characterisations of the 'positive' polynomial-time predicates (posP, also called mP by Grigni and Sipser '92) to function classes. Our main result is a function algebra for the positive polynomial-time functions (posFP), obtained by imposing a simple uniformity condition on the bounded recursion operator in Cobham's characterisation of FP.

11:30
Feasibly constructive proofs of succinct weak circuit lower bounds
SPEAKER: Ján Pich

ABSTRACT. We ask for feasibly constructive proofs of known circuit lower bounds for explicit functions on bit strings of length $n$. In 1995 Razborov showed that many can be proved in Cook's theory PV, a bounded arithmetic formalizing polynomial time reasoning. He formalized circuit lower bound statements for small $n$ of doubly logarithmic order. A more common formalization, considered in Krajíček's 1995 textbook, assumes $n$ only of logarithmic order. It is open whether PV proves known lower bounds in such succinct formalizations. We give such proofs in Jeřábek's theory of approximate counting $APC_1$, an extension of PV formalizing probabilistic polynomial time reasoning. Specifically, we prove in $APC_1$ lower bounds for the parity function and $AC^0$, for the mod $q$ counting function and $AC^0[p]$ (for some $n$ of intermediate order), and for the $k$-clique function and monotone circuits. We also formalize Razborov and Rudich's natural proof barrier. Further, we ask for feasibly constructible propositional proofs of circuit lower bounds. We discuss two ways to succinctly express circuit lower bounds by propositional formulas of polynomial size $n^{O(1)}$, or at least much smaller than the $2^{O(n)}$ size of the common formula based on the truth table of the function: one via feasible functions witnessing errors of circuits trying to compute some hard function, and one via the anticheckers of Lipton and Young 1994 or partial truth tables. Our $APC_1$ formalizations can be applied to derive a conditional upper bound on succinct propositional formulas obtained by the witnessing extracted from the $APC_1$ proofs. Namely, we show these formulas have short Extended Frege (EF) proofs from general circuit lower bounds expressed by the common "truth-table" formulas. We also show how to construct in quasipolynomial time propositional proofs of quasipolynomial-size tautologies expressing quasipolynomial-size $AC^0[p]$ lower bounds; these proofs are in Jeřábek's proof system WF. The last result is proved by formalizing a variant of Razborov's and Rudich's naturalization of Smolensky's proof for partial functions on the propositional level.

11:45
Computability over locally finite structures from algebra

ABSTRACT. Working with a particular notion of computability over general, first-order structures, we argue that computability classes over certain locally finite structures should capture Turing machine complexity classes. We exhibit this phenomenon for some locally finite structures from algebra.

12:00
A General Equational Framework for Static Profiling of Parametric Resource Usage

ABSTRACT. For some applications, standard resource analyses do not provide the information required. Such analyses estimate the total resource usage of a program (without executing it) as functions on input data sizes. However, some applications require knowing how such total resource usage is distributed over selected parts of a program. We propose a novel, general, and flexible framework for setting up cost equations/relations which can be instantiated for performing a wide range of resource usage analyses, including both static profiling and the inference of the standard notion of cost. We extend and generalize standard resource analysis techniques, so that the relations generated include additional Boolean control variables for switching on or off different terms in the relations, as required by the desired resource usage profile. We also instantiate our framework to perform static profiling of accumulated cost (also parameterized by input data sizes). Such information is much more useful to the software developer than the standard notion of cost: it identifies the parts of the program that have the greatest impact on the total program cost, and which therefore should be optimized first. We also report on an implementation of our framework within the CiaoPP system, and its instantiation for accumulated cost, and provide some experimental results. In addition to generality, our new method brings important advantages over our previous approach based on a program transformation, including support for non-deterministic programs, better and easier integration in the compiler, and higher efficiency.
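
A toy instance of such relations (hypothetical, purely for illustration): for a function $f$ that performs $c_b$ resource units per loop iteration and calls a function $g$, one might set up $C_f(0) = c_0$ and $C_f(n) = \beta_b \cdot c_b + \beta_g \cdot C_g(n) + C_f(n-1)$ for $n > 0$, where the Boolean control variables $\beta_b, \beta_g \in \{0,1\}$ switch individual terms on or off: $\beta_b = 1, \beta_g = 0$ profiles only the cost accumulated in the loop body itself, while setting both to 1 recovers the standard total cost.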

12:15
Completeness in the Second Level of the Polynomial Time Hierarchy through Syntactic Properties

ABSTRACT. In the mid-90's Neil Immerman and José Medina initiated the search for syntactic tools to prove NP-completeness. They conjectured that if a problem defined by the conjunction of an Existential Second Order sentence and a First Order formula is NP-complete, then the Existential Second Order formula defines an NP-complete problem. This property was called superfluity of First Order Logic with respect to NP. A few years later it was proved by Nerio Borges and Blai Bonet that superfluity is true for the universal fragment of First Order Logic, and with respect not only to NP but to a wide range of complexity classes. Our work extends that result to the second level of the Polynomial-Time Hierarchy.

11:00-12:30 Session 86G (LSB)
11:00
Counterfactual resimulation for causal analysis of rule-based models
11:45
Bio-curation and rule-based modelling of protein interaction networks in KAMI
11:00-12:30 Session 86H (LearnAut)
11:00
TBA
12:00
Learning Tree Distributions by Hidden Markov Models

ABSTRACT. Hidden tree Markov models allow learning distributions for tree-structured data while being interpretable as nondeterministic automata. We provide a concise summary of the main approaches in the literature, focusing in particular on the causality assumptions introduced by the choice of a specific tree visit direction. We will then sketch a novel non-parametric generalization of the bottom-up hidden tree Markov model with its interpretation as a nondeterministic tree automaton with infinite states.

11:00-12:30 Session 86I: Games and synthesis (MoRe)
11:00
Constraint Problem for Weak Subgame Perfect Equilibria with omega-regular Boolean Objectives

ABSTRACT. We study n-player turn-based games played on a finite directed graph. For each play, each player receives a gain that he wants to maximise. Instead of the well-known notions of Nash equilibrium (NE) and subgame perfect equilibrium (SPE), we focus on the recent notion of weak subgame perfect equilibrium (weak SPE), a refinement of SPE. In this setting, players who deviate cannot use the full class of strategies but only a subclass with a finite number of deviation steps. We are interested in the constraint problem: given lower and upper thresholds x, y in {0,1}^n, we want to determine whether there exists a weak SPE such that the gain of each player is componentwise between the two bounds. The goal of our ongoing research is to characterise the complexity of the constraint problem for the class of games with Boolean omega-regular objectives such as Muller, Büchi, reachability, etc.

11:30
Parameterized complexity of games with monotonically ordered omega-regular objectives

ABSTRACT. In recent years, two-player zero-sum games with multiple objectives have received a lot of interest as a model for the synthesis of complex reactive systems. In this framework, Player 1 wins if he can ensure that all objectives are satisfied against any behavior of Player 2. When it is not possible to satisfy all the objectives at once, an alternative is to use some preorder on the objectives according to which subset of objectives Player 1 wants to satisfy. For example, it is often natural to give more significance to one objective over another, a situation that can be modelled with lexicographically ordered objectives, for instance. Inspired by recent work on concurrent games with multiple omega-regular objectives by Bouyer et al., we investigate in detail turn-based games with monotonically ordered omega-regular objectives. We study the threshold problem, which asks whether Player 1 can ensure a payoff greater than or equal to a given threshold w.r.t. a given monotonic preorder. As the number of objectives is usually much smaller than the size of the game graph, we provide a parameterized complexity analysis and show that our threshold problem is fixed-parameter tractable for all monotonic preorders and all classical types of omega-regular objectives. We also provide polynomial-time algorithms for Büchi, coBüchi and explicit Muller objectives for a large subclass of monotonic preorders that includes, among others, the lexicographic preorder. In the particular case of the lexicographic preorder, we also study the complexity of computing the values and the memory requirements of optimal strategies.
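
For intuition, under the lexicographic preorder mentioned at the end, the threshold test on payoff vectors is just the following comparison (a minimal sketch, with Boolean payoffs encoded as 0/1):

    # u >=_lex v: the first differing objective (in significance order) decides.
    def lex_geq(u, v):
        for a, b in zip(u, v):
            if a != b:
                return a > b
        return True

    assert lex_geq((1, 0, 1), (1, 0, 0))
    assert not lex_geq((0, 1, 1), (1, 0, 0))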

12:00
Comfort and Energy Optimization for Floor Heating Systems
SPEAKER: Marco Muniz

ABSTRACT. Controller synthesis for stochastic hybrid switched systems, such as a floor heating system in a house, is a complex computational task that cannot be solved by an exhaustive search through all the control options. The state space to be explored is in general uncountable due to the presence of continuous variables (e.g. temperature readings in the different rooms), and even after discretization the state space remains huge and cannot be fully explored.

In previous work, we proposed an on-line synthesis methodology, where we periodically compute the controller only for the near future based on the current sensor readings. This computation is itself done by employing machine learning in the tool Uppaal Stratego. We focused on optimizing a single objective, the comfort of the users. Experiments have shown that our approach can lead to enormous improvement over the current controller. Our approach is now being implemented in a real house in Aalborg, Denmark.

In this work we focus on a more ambitious multi-objective problem: optimizing the comfort of the users while reducing energy consumption. We apply our on-line synthesis methodology in this setting, show the new challenges that arise, and propose alternatives to address them.

11:00-12:30 Session 86J: Logic (RCRA)
11:00
Shaving with Occam's Razor: Deriving Minimalist Theorem Provers for Minimal Logic

ABSTRACT. Proving a theorem in intuitionistic propositional logic with implication as its only primitive (also called minimal logic) is known as one of the simplest-to-state PSPACE-complete problems. At the same time, via the Curry-Howard isomorphism, it is instrumental to finding lambda terms that may inhabit a given type.

However, as hundreds of papers witness, all starting with Gentzen's LJ calculus, conceptual simplicity has not in this case come with comparable computational counterparts. Implementing such theorem provers faces challenges related not only to soundness and completeness but also to termination and scalability.

Can a simple solution, in the tradition of "lean theorem provers", be uncovered that matches the conceptual simplicity of the problem, while also being able to handle large randomly generated formulas?

In search of an answer, starting from Dyckhoff's LJT calculus, we derive a sequence of minimalist theorem provers using declarative program transformation steps, while highlighting connections, via the Curry-Howard isomorphism, to type inference mechanisms for the simply typed lambda calculus.

We chose Prolog as our meta-language. As Prolog is derived from essentially the same formalisms as those we are covering, the semantic gap is reduced, resulting in surprisingly concise and efficient declarative implementations. Our implementation is available at: https://github.com/ptarau/TypesAndProofs.
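
To convey the flavour of such a minimalist prover, here is a rough Python transcription of Dyckhoff-style contraction-free rules for the implicational fragment (the authors' provers are in Prolog; this sketch is not their code):

    # Formulas: atoms are strings; ('->', A, B) is the implication A -> B.
    # prove(gamma, goal) decides Gamma |- goal in the implicational fragment.
    def prove(gamma, goal):
        gamma = frozenset(gamma)
        if goal in gamma:                            # axiom
            return True
        if isinstance(goal, tuple):                  # right rule for ->
            return prove(gamma | {goal[1]}, goal[2])
        for f in gamma:                              # left rules, f = A -> B
            if not isinstance(f, tuple):
                continue
            _, a, b = f
            rest = gamma - {f}
            if isinstance(a, str):                   # atomic antecedent A
                if a in rest and prove(rest | {b}, goal):
                    return True
            else:                                    # f = (C -> D) -> B
                d = a[2]
                if prove(rest | {('->', d, b)}, a) and prove(rest | {b}, goal):
                    return True
        return False

    k = ('->', 'p', ('->', 'q', 'p'))                     # provable
    peirce = ('->', ('->', ('->', 'p', 'q'), 'p'), 'p')   # not provable
    assert prove(set(), k) and not prove(set(), peirce)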

11:22
Externally Supported Models for Efficient Computation of Paracoherent Answer Sets

ABSTRACT. Answer Set Programming (ASP) is a well-established formalism for nonmonotonic reasoning. While incoherence, the non-existence of answer sets for some programs, is an important feature of ASP, it has frequently been criticised and indeed has some disadvantages, especially for query answering. Paracoherent semantics have been suggested as a remedy, which extend the classical notion of answer sets to draw meaningful conclusions also from incoherent programs. In this paper we present an alternative characterization of the two major paracoherent semantics in terms of (extended) externally supported models. This definition uses a transformation of ASP programs that is more parsimonious than the classic epistemic transformation used in recent implementations. A performance comparison carried out on benchmarks from ASP competitions shows that the usage of the new transformation brings about performance improvements that are independent of the underlying algorithms.

11:44
Algebraic Crossover Operators for Permutations

ABSTRACT. Crossover operators are very important tools in Evolutionary Computation. Here we are interested in crossovers for the permutation representation, which find applications in combinatorial optimization problems such as permutation flowshop scheduling and the traveling salesman problem. We introduce three families of permutation crossovers based on algebraic properties of the permutation space. In particular, we exploit the group and lattice structures of the space. A total of 14 new crossovers are provided. Algebraic and semantic properties of the operators are discussed, while their performance is investigated by experimentally comparing them with known permutation crossovers on standard benchmarks from four popular permutation problems. Three different experimental scenarios are considered and the results clearly validate our proposals.
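
As a hedged illustration of the algebraic viewpoint (not necessarily one of the paper's 14 operators): using the group structure, one can decompose the "difference" x^{-1} o y of two parent permutations into adjacent transpositions and stop halfway along that path, producing a child that lies between the parents.

    # Permutations are lists: p[i] is the item at position i.
    def compose(p, q):                    # (p o q)(i) = p[q[i]]
        return [p[j] for j in q]

    def inverse(p):
        inv = [0] * len(p)
        for i, v in enumerate(p):
            inv[v] = i
        return inv

    def geodesic_crossover(x, y):
        # Bubble-sort z = x^{-1} o y, recording the adjacent swaps; replaying
        # the reversed swap sequence on x walks from x to y, so stopping at
        # the midpoint yields an intermediate permutation.
        z, swaps = compose(inverse(x), y), []
        for _ in range(len(z)):
            for j in range(len(z) - 1):
                if z[j] > z[j + 1]:
                    z[j], z[j + 1] = z[j + 1], z[j]
                    swaps.append(j)
        child, path = x[:], list(reversed(swaps))
        for j in path[:len(path) // 2]:
            child[j], child[j + 1] = child[j + 1], child[j]
        return child

    print(geodesic_crossover([0, 1, 2, 3], [3, 2, 1, 0]))   # e.g. [2, 1, 0, 3]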

12:06
An Enhanced Genetic Algorithm with the BLF2G Guillotine Placement Heuristic for the Orthogonal Cutting-Stock Problem

ABSTRACT. The orthogonal cutting-stock problem tries to place a given set of items into a minimum number of identically sized bins. To solve this problem with the guillotine constraint, the authors propose combining the new BLF2G (Bottom Left Fill 2-direction Guillotine) placement heuristic with an advanced genetic algorithm. According to the item order, the BLF2G routine creates a direct placement of items in bins to give a cutting format. The genetic algorithm explores the search space to find the supposed optimal item order. Other methods try to guide the evolutionary process by introducing a greedy heuristic into the initial population to enhance the results. The authors propose enriching the population via qualified individuals, without disturbing the genetic phase, by introducing a new enhancement to guide the evolutionary process: the evolution of the GA is monitored, and when no improvement is observed after some number of iterations, a qualified individual is injected into the population to avoid premature convergence to a local optimum. To enrich the evolutionary process with qualified chromosomes, a set of order-based individuals is generated. Our method is compared with other heuristics and metaheuristics from the literature on existing data sets.
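
The stagnation-triggered injection can be pictured as follows (a schematic Python sketch; next_generation and make_qualified_individual stand in for the GA operators and the order-based construction, and are assumptions rather than the paper's code):

    import random

    def evolve(population, fitness, next_generation, make_qualified_individual,
               generations=500, patience=25):
        best, stale = min(population, key=fitness), 0
        for _ in range(generations):
            population = next_generation(population)   # selection/crossover/mutation
            candidate = min(population, key=fitness)
            if fitness(candidate) < fitness(best):
                best, stale = candidate, 0
            else:
                stale += 1                             # no improvement this round
            if stale >= patience:                      # premature convergence suspected
                population[random.randrange(len(population))] = make_qualified_individual()
                stale = 0
        return best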

11:00-12:30 Session 86K (SMT)
11:00
Centralizing Equality Reasoning in MCSAT

ABSTRACT. MCSAT is an approach to SMT-solving that uses assignments of values to first-order variables in order to try to build a model of the input formula. When different theories are combined, as formalized in the CDSAT system, equalities between variables and terms play an important role, each theory module being required to understand equalities and which values are equal to which. This paper broaches the topic of how to reason about equalities in a centralized way, so that the theory reasoners can avoid replicating equality reasoning steps, and can even benefit from a centralized implementation of equivalence classes of terms, based on an equality graph (E-graph). This paper sketches the design of a prototype based on this architecture.
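
At its core, a centralized store of equivalence classes can be as simple as a union-find structure over terms; a minimal stand-in sketch (a real E-graph additionally tracks term structure, congruence closure and explanations):

    class EqualityStore:
        def __init__(self):
            self.parent = {}

        def find(self, t):                 # representative of t's class
            self.parent.setdefault(t, t)
            while self.parent[t] != t:
                self.parent[t] = self.parent[self.parent[t]]   # path halving
                t = self.parent[t]
            return t

        def merge(self, s, t):             # record the equality s = t
            self.parent[self.find(s)] = self.find(t)

        def equal(self, s, t):             # are s and t known to be equal?
            return self.find(s) == self.find(t)

    e = EqualityStore()
    e.merge('x', 'y')
    e.merge('y', 'f(a)')
    assert e.equal('x', 'f(a)')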

11:30
Proofs in conflict-driven theory combination

ABSTRACT. Search-based satisfiability procedures try to construct a model of the input formula by simultaneously proposing candidate models and deriving new formulae implied by the input. When the formulae are satisfiable, these procedures generate a model as a witness. Dually, it is desirable to have a proof when the formulae are unsatisfiable. Conflict-driven procedures perform nontrivial inferences only when resolving conflicts between the formulae and assignments representing the candidate model. CDSAT (Conflict-Driven SATisfiability) is a method for conflict-driven reasoning in combinations of theories. It combines solvers for individual theories as theory modules within a solver for the union of the theories. In this paper we endow CDSAT with lemma learning and proof generation. For the latter, we present two techniques. The first one produces proof objects in memory: it assumes that all theory modules produce proof objects and it accommodates multiple proof formats. The second technique adapts the LCF approach to proofs from interactive theorem proving to conflict-driven SMT-solving and theory combination, by defining a small kernel of reasoning primitives that guarantees that CDSAT proofs are correct by construction.
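
The LCF idea fits in a few lines: a small trusted kernel is the only code able to construct theorem objects, so every theorem in existence was produced by a primitive rule. A hypothetical Python sketch with a two-rule resolution kernel (CDSAT's actual primitives differ):

    _KERNEL_KEY = object()

    class Theorem:
        # Clauses are frozensets of (atom, polarity) literals; instances
        # can only be created by the kernel rules below.
        def __init__(self, clause, _key=None):
            if _key is not _KERNEL_KEY:
                raise ValueError("theorems are built only by kernel rules")
            self.clause = frozenset(clause)

    def axiom(clause):                      # trusted input clause
        return Theorem(clause, _KERNEL_KEY)

    def resolve(t1, t2, atom):              # propositional resolution on atom
        pos, neg = (atom, True), (atom, False)
        assert pos in t1.clause and neg in t2.clause
        return Theorem((t1.clause - {pos}) | (t2.clause - {neg}), _KERNEL_KEY)

    a = axiom([('p', True)])
    b = axiom([('p', False), ('q', True)])
    assert resolve(a, b, 'p').clause == frozenset([('q', True)])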

12:00
Selfless Interpolation for Infinite-State Model Checking

ABSTRACT. We present a new method for interpolation in satisfiability modulo theories (SMT) that is aimed at applications in model-checking and invariant inference. The new method allows us to control the finite-convergence of interpolant sequences and, at the same time, provides expressive invariant-driven interpolants. It is based on a novel integration of model-driven quantifier elimination and abstract interpretation into existing SMT frameworks for interpolation. We have integrated the new approach into the Sally model checker and we include experimental evaluation showing its effectiveness.

11:00-12:30 Session 86L: Ethical and fair AI (SoMLMFM)
Location: Maths LT1
11:00
Ethically Aligned AI Systems

ABSTRACT. AI systems are increasingly acting in the same environment as humans, in areas as diverse as driving, assistive technology, and health care. Humans and machines will often need to work together and agree on common decisions. This requires alignment around some common moral values and ethical principles. Humans will more readily accept and trust machines that behave at least as ethically as other humans in the same environment. This would also make it easier for machines to explain their behavior in terms understandable by humans. Moreover, shared moral values and ethical principles will help in collective human-machine decision making, where consensus or compromise is needed. In this talk I will present some challenges in designing ethically aligned AI systems that humans can trust, and describe technical solutions for such challenges.

11:20
Program Fairness – a Formal Methods Perspective

ABSTRACT. With the range and sensitivity of algorithmic decisions expanding at a break-neck speed, it is imperative that we seriously investigate fairness and bias in decision-making programs. First, we show that a number of recently proposed formal definitions of fairness can be encoded as probabilistic program properties. Second, with the goal of enabling rigorous reasoning about fairness, we design a novel technique for verifying probabilistic properties that admits a wide class of decision-making programs. Third, we present the first verification tool for automatically certifying that a program meets a given fairness property. We evaluate our tool on a range of decision-making programs. Our evaluation demonstrates the tool’s ability to verify fairness for a range of different programs, which we show are out-of-reach for state-of-the-art program analysis techniques.
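
For intuition, here is what one such fairness property (a demographic-parity ratio) looks like when estimated by sampling; the population model and decision program are invented for the example, and the tool described in the talk verifies such properties exactly rather than by Monte Carlo:

    import random

    def population():                       # assumed probabilistic model
        minority = random.random() < 0.3
        score = random.gauss(60.0 if minority else 65.0, 10.0)
        return minority, score

    def hire(score):                        # the decision-making program
        return score > 62.0

    def demographic_parity_ratio(trials=100_000):
        hired = {True: 0, False: 0}
        seen = {True: 0, False: 0}
        for _ in range(trials):
            minority, score = population()
            seen[minority] += 1
            hired[minority] += hire(score)
        return (hired[True] / seen[True]) / (hired[False] / seen[False])

    print(demographic_parity_ratio())       # roughly 0.7: the program is biased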

11:40
Deep Learning Methods for Large Scale Theorem Proving

ABSTRACT. Machine Learning (with hand-engineered features) has become an enabling factor for reasoning in large theories. Here I will give an overview of some recent work utilizing deep neural networks for both premise selection and guiding automated theorem provers. We will also highlight connections and recent efforts on utilizing natural language corpora for automated reasoning.

12:00
AI2: AI Safety and Robustness with Abstract Interpretation

ABSTRACT. I will present AI2 (ai2.ethz.ch), a new approach for ensuring safety and robustness of AI systems based on abstract interpretation. AI2 is based on symbolic over-approximation and can automatically reason about safety properties (e.g., robustness) of realistic deep neural networks (e.g., convolutional). In the talk I will discuss how to soundly and precisely approximate the behavior of fully-connected layers with rectified linear unit activations (ReLU), convolutional layers with ReLU, and max pooling layers, allowing AI2 to scale to larger networks. I will also present a new abstraction which allows a trade-off between scalability and precision. Finally, I will discuss several open research challenges in applying symbolic methods to AI systems.
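
The flavour of the approach can be seen with the simplest abstract domain, intervals (AI2's zonotope-style domains are strictly more precise): per-neuron bounds are propagated through an affine layer and a ReLU, and the result is sound by construction. A sketch under these assumptions, not the AI2 implementation:

    def affine_bounds(W, b, lo, hi):
        # Sound interval bounds for y = W x + b given x in [lo, hi].
        out_lo, out_hi = [], []
        for row, bias in zip(W, b):
            out_lo.append(bias + sum(w * (lo[i] if w >= 0 else hi[i])
                                     for i, w in enumerate(row)))
            out_hi.append(bias + sum(w * (hi[i] if w >= 0 else lo[i])
                                     for i, w in enumerate(row)))
        return out_lo, out_hi

    def relu_bounds(lo, hi):
        # ReLU is monotone, so it maps interval bounds pointwise.
        return [max(l, 0.0) for l in lo], [max(u, 0.0) for u in hi]

    lo, hi = affine_bounds([[1.0, -2.0]], [0.5], [0.0, 0.0], [1.0, 1.0])
    print(relu_bounds(lo, hi))    # ([0.0], [1.5]) for relu(x0 - 2*x1 + 0.5)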

11:00-12:30 Session 86M (Tetrapod)
Location: Maths L6
11:00
Modularity in Ontologies

ABSTRACT. Modularity in Ontologies

11:45
Discussion 1

ABSTRACT. Discussions on the previous talks.

11:00-12:30 Session 86N: UITP 2 (UITP)
11:00
Partial Regularization of First-Order Resolution Proofs
SPEAKER: Jan Gorzny

ABSTRACT. Resolution and superposition are common techniques which have seen widespread use with propositional and first-order logic in modern theorem provers. In these cases, resolution proof production is a key feature of such tools; however, the proofs that they produce are not necessarily as concise as possible. For propositional resolution proofs, there are a wide variety of proof compression techniques. There are fewer techniques for compressing first-order resolution proofs generated by automated theorem provers. This paper describes an approach to compressing first-order logic proofs based on lifting proof compression ideas used in propositional logic to first-order logic. One method for propositional proof compression is partial regularization, which removes an inference n when it is redundant in the sense that its pivot literal already occurs as the pivot of another inference in every path from n to the root of the proof. This paper describes the generalization of the partial-regularization algorithm RecyclePivotsWithIntersection [10] from propositional logic to first-order logic. An empirical evaluation of the generalized algorithm and its combinations with the previously lifted GreedyLinearFirstOrderLowerUnits algorithm [12] is also presented.
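
The redundancy criterion is easy to state in code for the tree case (a hypothetical sketch; for proof DAGs, RecyclePivotsWithIntersection instead intersects the pivot sets over all paths to the root):

    class Node:
        # pivot is None for input clauses; premises are the parent subproofs.
        def __init__(self, pivot=None, premises=()):
            self.pivot, self.premises = pivot, premises

    def mark_redundant(node, pivots_on_path=frozenset(), marked=None):
        # Walk from the root towards the leaves, carrying the pivots used on
        # the path so far; an inference whose pivot already occurs on the
        # (here unique) path to the root is a candidate for regularization.
        if marked is None:
            marked = set()
        if node.pivot is not None and node.pivot in pivots_on_path:
            marked.add(node)
        if node.pivot is not None:
            pivots_on_path = pivots_on_path | {node.pivot}
        for premise in node.premises:
            mark_redundant(premise, pivots_on_path, marked)
        return marked

    leaf = Node()
    inner = Node('p', (leaf, leaf))
    root = Node('p', (inner, leaf))
    assert mark_redundant(root) == {inner}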

11:30
Adding Text-Based Interaction to a Direct-Manipulation Interface for Program Verification -- Lessons Learned
SPEAKER: Sarah Grebing

ABSTRACT. Interactive program verification is characterized by iterations of unfinished proof attempts. For proof construction, many interactive program verification systems offer either text-based interaction, using a proof scripting language, or a form of direct-manipulation interaction. We have combined a direct-manipulation program verification system with a text-based interface to leverage the advantages of both interaction paradigms. To give the user more insight when scripting proofs, we have adapted well-known interaction concepts from the field of software debugging. In this paper we report on our experiences in combining the direct-manipulation interface of the interactive program verification system KeY with a text-based user interface, leveraging interaction concepts from software debugging in the proof process.

12:00
Automated Theorem Proving in a Chat Environment

ABSTRACT. We present a chat bot interface for the Coq proof assistant system. The bot provides a new modality of interaction with Coq that functions across multiple devices and platforms. Our system is particularly suitable for mobile platforms, Android and iOS. The basic architecture of the bot software is reviewed as are the outcomes of several rounds of beta-testing. Potential implications of the system are discussed, such as the possibility of a seamless collaborative proof environment.

11:00-12:30 Session 86P (VDMW)
Location: Blavatnik LT2
11:00
Verification of infinite-state systems using decidable logic
11:45
Should I join a startup?
11:00-12:30 Session 86R (rv4rise)
11:00
Systematic analysis and improvement of CNNs.

ABSTRACT. In this talk, I will present 1) a framework to systematically analyze convolutional neural networks and 2) a counterexample-guided data augmentation scheme. The analysis procedure comprises a falsification framework in the system-level context. The falsifier is based on an image generator that produces synthetic pictures by sampling in a lower-dimensional image modification subspace. The generated images are used to test the CNN and expose its vulnerabilities. The misclassified pictures (counterexamples) are then used to augment the original training set and hence improve the accuracy of the considered model. The talk focuses on case studies of object detection in autonomous driving based on deep learning.
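
The augmentation loop itself has a simple shape; the sketch below shows it with stand-ins (sample_modification, render, true_label and the model API are assumptions, not the actual framework):

    def augment(model, train_set, sample_modification, render, true_label,
                rounds=5, batch=1000):
        # Falsify, collect misclassified synthetic images, retrain.
        for _ in range(rounds):
            mods = [sample_modification() for _ in range(batch)]   # low-dim subspace
            images = [render(m) for m in mods]                     # synthetic pictures
            counterexamples = [x for x in images
                               if model.predict(x) != true_label(x)]
            train_set = train_set + counterexamples                # augmentation
            model.fit(train_set)
        return model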

11:45
Formalizing Requirements for Cyber-Physical Systems: Real-World Experiences and Challenges

ABSTRACT. Cyber-physical systems (CPSs) are used in many mission critical applications, such as automobiles, aircraft, and medical devices; therefore, it is vital to ensure that these systems behave correctly. Testing is required to ensure correct behavior for CPSs, but traditional methods to evaluate test data can be time consuming and unreliable. New technologies, such as property monitoring, can automatically evaluate measurements of system behaviors against requirements; however, most property monitoring methods rely on the availability of formal system requirements, often in the form of temporal logic formulae, which can be challenging to develop for complex applications. This talk describes industrial CPS challenges in defining formal requirements, presents examples from testing results for an automotive fuel cell application, and suggests possible directions for solutions.

12:30-14:00 Lunch Break
14:00-15:15 Session 87A: Control Synthesis (ADHS)
14:00
Algorithm for Bernstein Polynomial Control Design

ABSTRACT. This paper considers control synthesis for polynomial control systems. The developed method relies on Lyapunov stability and Bernstein certificates of positivity. We strive to develop an algorithm that computes a polynomial control and a polynomial Lyapunov function in the simplicial Bernstein form. Subsequently, we reduce the control synthesis problem to a finite number of evaluations of a polynomial within Bernstein coefficient bounds representing controls and Lyapunov functions. As a consequence, the equilibrium is asymptotically stable under this control.
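
To illustrate what a Bernstein certificate of positivity is in the simplest (univariate, interval) case, rather than the multivariate simplicial form used in the paper: converting a polynomial to the Bernstein basis yields coefficients whose nonnegativity certifies nonnegativity of the polynomial, since the Bernstein basis polynomials are nonnegative on [0,1].

    from math import comb

    def bernstein_coeffs(a):
        # Power-basis coefficients a[0..n] of p on [0,1] -> Bernstein
        # coefficients b[0..n]; all b[i] >= 0 certifies p >= 0 on [0,1].
        n = len(a) - 1
        return [sum(comb(i, j) / comb(n, j) * a[j] for j in range(i + 1))
                for i in range(n + 1)]

    # p(x) = 2x^2 - 2x + 1 is nonnegative on [0,1]; its Bernstein
    # coefficients [1.0, 0.0, 1.0] certify this directly.
    print(bernstein_coeffs([1.0, -2.0, 2.0]))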

14:25
Control Synthesis and Classification for Unicycle Dynamics Using the Gradient and Value Sampling Particle Filters

ABSTRACT. Value functions arising from dynamic programming can be used to synthesize optimal control inputs for general nonlinear systems with state and/or input constraints; however, the inputs generated by steepest descent on these value functions often lead to chattering behavior. In [Traft & Mitchell, 2016] we proposed the Gradient Sampling Particle Filter (GSPF), which combines robot state estimation and nonsmooth optimization algorithms to alleviate this problem. In this paper we extend the GSPF to velocity controlled unicycle (or equivalently differential drive) dynamics. We also show how the algorithm can be adapted to classify whether an exogenous input—such as one arising from shared human-in-the-loop control—is desirable. The two algorithms are demonstrated on a ground robot.

14:50
An Interval-Based Sliding Horizon Motion Planning Method

ABSTRACT. A new motion planning algorithm based on a set-membership approach is presented. The goal of this algorithm is to find a safe and optimal path taking into account various sources of bounded uncertainty on the dynamical model of the plant and on the model of the environment, while being robust with respect to the numerical approximations introduced by numerical integration methods. The main approach is based on a sliding horizon method to predict the behavior of the system, allowing the computation of an optimal path. As an example, the motion planning algorithm is applied to an Autonomous Underwater Vehicle (AUV) case study, showing the benefit of the proposed approach.

14:00-15:50 Session 87B: Weak Memory and Complexity (ADSL)
14:00
Weak memory made easy by separation logic (keynote)

ABSTRACT. To adequately define the semantics of multithreaded programs, modern revisions of programming languages, such as C/C++ and Java, have adopted weak memory consistency models, which allow more program behaviours than can be observed by simply interleaving the threads of the program. Most computer scientists consider weak memory consistency as being an extremely complicated concept, and one better left only to a small circle of experts to master.

Yet, we show that weak memory is not that complicated. Using separation logic triples, we can not only provide concise explanations of the various constructs of the C/C++11 memory models, but these explanations are also sufficient for verifying the most important synchronisation patterns appearing in real-world concurrent programs.

14:50
On the Complexity of Pointer Arithmetic in Separation Logic

ABSTRACT. We investigate the complexity consequences of adding pointer arithmetic to separation logic. Specifically, we study an extension of the points-to fragment of symbolic-heap separation logic with sets of simple "difference constraints" of the form x <= y + k, where x and y are pointer variables and k is an integer offset. This extension can be considered a practically minimal language for separation logic with pointer arithmetic.

Most significantly, we find that, even for this minimal language, polynomial-time decidability is already impossible: satisfiability becomes NP-complete, while quantifier-free entailment becomes coNP-complete and quantified entailment becomes Π^P_2-complete (where Π^P_2 is the second class in the polynomial-time hierarchy).

However, the language does satisfy the small model property, meaning that any satisfiable formula A has a model of size polynomial in A, whereas this property fails when richer forms of arithmetical constraints are permitted.
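
For context, the difference constraints on their own are polynomial-time: reading each x <= y + k as a weighted edge from y to x reduces satisfiability to negative-cycle detection, e.g. by Bellman-Ford; the hardness results above arise only from the combination with the heap assertions. A standalone sketch:

    def satisfiable(constraints, variables):
        # constraints: list of (x, y, k) meaning x <= y + k. Relax |V| times
        # from an implicit source at distance 0 from every variable; any
        # constraint still violated afterwards witnesses a negative cycle.
        dist = {v: 0 for v in variables}
        for _ in range(len(variables)):
            for x, y, k in constraints:
                if dist[y] + k < dist[x]:
                    dist[x] = dist[y] + k
        return all(dist[y] + k >= dist[x] for x, y, k in constraints)

    print(satisfiable([('a', 'b', 1), ('b', 'a', -2)], ['a', 'b']))   # False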

15:20
The Complexity of Prenex Separation Logic with One Selector
SPEAKER: Radu Iosif

ABSTRACT. We first show that infinite satisfiability can be reduced to finite satisfiability for all prenex formulas of Separation Logic with k >= 1 selector fields (SL(k)). Second, we show that this entails the decidability of the finite and infinite satisfiability problems for the class of prenex formulas of SL(1), by reduction to the first-order theory of one unary function symbol and unary predicate symbols. We also prove that the complexity is not elementary, by reduction from the first-order theory of one unary function symbol. Finally, we prove that the Bernays-Schönfinkel-Ramsey fragment of prenex SL(1) formulae with quantifier prefix in the language ∃*∀* is PSPACE-complete. The definition of a complete (hierarchical) classification of the complexity of prenex SL(1), according to quantifier alternation depth, is left as an open problem.

14:00-15:30 Session 87C: CAV Tutorial: Matteo Maffei (CAV)
Location: Maths LT3
14:00
Foundations and Techniques for the Static Analysis of Ethereum Smart Contracts
SPEAKER: Matteo Maffei

ABSTRACT. The recent growth of the blockchain technology market puts its main cryptocurrencies in the spotlight. Among them, Ethereum stands out due to its virtual machine (EVM) supporting smart contracts, i.e., distributed programs that control the flow of the digital currency Ether. Being written in a Turing complete language, Ethereum smart contracts allow for expressing a broad spectrum of financial applications. The price for this expressiveness, however, is a significant semantic complexity, which increases the risk of programming errors. Recent attacks exploiting bugs in smart contract implementations call for the design of formal verification techniques for smart contracts. This, however, requires rigorous semantic foundations, a formal characterization of the expected security properties, and dedicated abstraction techniques tailored to the specific EVM semantics.

This tutorial will overview the state-of-the-art in smart contract verification, covering formal semantics, security definitions, and verification tools. We will then focus on EtherTrust, a framework for the static analysis of Ethereum smart contracts that we recently introduced, which includes the first complete small-step semantics of EVM bytecode, the first formal characterization of a large class of security properties for smart contracts, and the first static analysis for EVM bytecode that comes with a proof of soundness.

14:00-15:30 Session 87D (FRIDA)
14:00
Specifying and Verifying Concurrent Objects

ABSTRACT. Modern software development kits simplify the programming of concurrent applications by providing shared state abstractions which encapsulate low-level accesses into higher-level abstract data types (ADTs). Programming such abstractions is however error prone. To minimize synchronization overhead between concurrent ADT invocations, implementors avoid blocking operations like lock acquisition, allowing methods to execute concurrently. However, concurrency risks unintended inter-operation interference and jeopardizes conformance to well-established correctness criteria like linearizability. We present several results concerning the theoretical limits of verifying such concurrent ADTs and testing-based methods for discovering violations in practical implementations.

14:30
Making Parallel Snapshot Isolation Serializable

ABSTRACT. To achieve scalability and availability, modern Internet services often rely on large-scale transactional databases that replicate and partition data across a large number of nodes. Achieving serializability on such databases is often impractical, as it limits scalability and increases latency. For this reason, recent years have seen a plethora of new transactional consistency models for large-scale databases.

In this talk, we propose SPSI, a novel consistency model for such systems that fits in between the well-known snapshot isolation (SI) and parallel snapshot isolation (PSI). SPSI assumes a partitioning of the objects in the database into entity groups: it then mandates that PSI guarantees are satisfied by the database when considered as a whole, while it also requires the stronger consistency guarantees provided by SI to be satisfied within individual entity groups. We show that SPSI can be effectively implemented in a sharded database.

One main feature of SPSI is that, by carefully choosing the grouping of the objects of the database into entity groups, one can ensure that transactional applications are robust: they only produce serializable behaviours. We propose a static criterion for determining whether an application running under SPSI is robust.

This is joint work with Giovanni Bernardi, Borja de Regil, Andrea Cerone and Alexey Gotsman.

15:00
Parameterized Reachability for All Flavors of Threshold Automata

ABSTRACT. Threshold automata, and the counter systems they define, were introduced as a framework for parameterized model checking of fault-tolerant distributed algorithms. This application domain suggested natural constraints on the automata structure, and a specific form of acceleration, called single-rule acceleration: consecutive occurrences of the same automaton rule are executed as a single transition in the counter system. These accelerated systems have bounded diameter, and can be verified in a complete manner with bounded model checking.

We go beyond the original domain, and investigate extensions of threshold automata: non-linear guards, increments and decrements of shared variables, increments of shared variables within loops, etc., and show that the bounded diameter property holds for several extensions. Finally, we put single-rule acceleration in the scope of flat counter automata: although increments in loops may break the bounded diameter property, the corresponding counter automaton is flattable, and reachability can be verified using more permissive forms of acceleration.
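
In counter-system terms, single-rule acceleration is just the following step (a schematic sketch under assumed conventions, not the model checker's code):

    def accelerate(counters, rule, factor):
        # Execute `factor` consecutive occurrences of the same rule, moving
        # processes from the rule's source to its destination location in
        # one transition, assuming the threshold guard holds throughout.
        src, dst, guard = rule
        assert guard(counters) and counters[src] >= factor
        counters = dict(counters)
        counters[src] -= factor
        counters[dst] += factor
        return counters

    rule = ('waiting', 'done', lambda c: c['echoes'] >= 2)
    print(accelerate({'waiting': 5, 'done': 0, 'echoes': 3}, rule, 4))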

14:00-15:30 Session 87E (HCVS)
14:00
Tree dimension in verification of constrained Horn clauses

ABSTRACT. I will introduce the notion of tree dimension and how it can be used in the verification of constrained Horn clauses. The dimension of a tree is a numerical measure of its branching complexity and the concept here applies to Horn clause derivation trees. I will show how to reason about the dimensions of derivations and how to filter out derivation trees below or above some dimension bound using clause transformations.

I will then present algorithms using these constructions to decompose a constrained Horn clauses verification problem. Finally I will report on implementations and experimental results.
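
For reference, the dimension of a derivation tree can be computed by the usual Horton-Strahler-style recursion; a small sketch, assuming that standard definition:

    def dimension(children):
        # A tree is the list of its children's trees; a leaf is [].
        # dim(leaf) = 0; otherwise take the largest child dimension,
        # adding 1 when the two largest child dimensions are equal.
        if not children:
            return 0
        ds = sorted((dimension(c) for c in children), reverse=True)
        return ds[0] + 1 if len(ds) > 1 and ds[0] == ds[1] else ds[0]

    assert dimension([]) == 0                  # a single node
    assert dimension([[], []]) == 1            # full binary tree of height 1
    assert dimension([[[], []], []]) == 1      # unbalanced tree keeps dimension 1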

15:00
Declarative Parameterized Verification of Topology-sensitive Distributed Protocols

ABSTRACT. We show that Cubicle, an SMT-based infinite-state model checker, can be applied as a verification engine for GLog, a logic-based specification language for topology-sensitive distributed protocols with asynchronous communication. Existential coverability queries in GLog can be translated into verification judgements in Cubicle by encoding relational update rules as unbounded array transitions. We apply the resulting framework to automatically verify a distributed version of the Dining Philosophers mutual exclusion protocol, formulated for an arbitrary number of nodes and communication buffers.

14:00-15:30 Session 87F: LCC: Martin Hofmann Memorial Session (LCC)
Location: Maths L5
14:00
Characterizing Complexity Classes in Presence of Higher-Order Functions — How Martin Hofmann Contributed to Shaping ICC

ABSTRACT. Implicit computational complexity is concerned with characterizing complexity classes with tools coming from logic (e.g. proof theory,  recursion theory) or from programming language theory (e.g. type systems). The presence of higher-order functions makes the task quite hard, given the strong expressive power higher-order functions provide. A key insight is that the expressive power is not provided by the mere *presence* of higher-order functions, but by the possibility of *copying* them. As a consequence, linearity turned out to be a powerful and versatile tool. I will give a survey of some of the results obtained along these lines, starting in the early nineties until today. This talk is dedicated to the memory of Martin Hofmann: he contributed intensely and decisively to this research line, and he introduced it to me gently but thoroughly. 

15:00
Static Prediction of Heap Space Usage for First-Order Functional Programs -- A Retrospective on my First Paper with Martin Hofmann
14:00-15:30 Session 87G (LSB)
14:00
A logical framework for systems biology and biomedicine
14:45
Introducing iota, a logic for biological modelling
14:00-15:30 Session 87H (LearnAut)
14:00
Weak and Strong learning of Probabilistic Context-Free Grammars: theoretical and empirical perspectives

ABSTRACT. The problem of learning context-free grammars from a finite sample of their yields is a classic problem in grammatical inference; in this talk I will look at this problem and a modern variant: namely the problem of learning probabilistic grammars (PCFGs) from a sample of strings drawn i.i.d. from the distribution over strings defined by that grammar. We will consider both the problem of learning an accurate approximation of the distribution over strings (weak learning) and the much harder problem of learning the underlying unobserved parse trees as well (strong learning) which is related to the task of unsupervised parsing in the NLP community. I will discuss some theoretical results using distributional learning approaches and present some empirical results on both synthetic and natural examples, motivated by the problem of first language acquisition.

15:00
Decision problems for Clark-congruential languages
SPEAKER: Tobias Kappé

ABSTRACT. When trying to learn a language using the Minimally Adequate Teacher model, one assumes the presence of a teacher that can answer equivalence queries for the class of languages under study, i.e., queries of the form "is the language equivalent to L?". This begs the question: is it feasible to implement such a teacher, or, more precisely, is equivalence for this class of languages decidable? We consider the Clark-congruential languages, a class of context-free languages of particular interest to MAT-based learning efforts, and show that, for this class, equivalence as well as congruence are decidable.

14:00-15:30 Session 87I: Timed and hybrid systems (MoRe)
14:00
A journey through negatively-weighted timed games: undecidability, decidability, approximability

ABSTRACT. Weighted timed games are zero-sum games played by two players on a timed automaton equipped with weights, where one player wants to minimise the accumulated weight while reaching a target. Used in a reactive synthesis perspective, this quantitative extension of timed games allows one to measure the quality of controllers in real-time systems. Weighted timed games are notoriously difficult and quickly undecidable, even when restricted to non-negative weights. However, for a few years now, we explored, and we continue to explore, the world of weighted timed games with negative weights too, in order to get a more useful modelling mechanism. This gave rise to stronger undecidability results, but we also discovered new decidable fragments. In this talk, I will survey these results: decidability when limiting the number of distinct weights in the game, using corner-point abstraction techniques; decidability for a large fragment of one-clock weighted timed games, and for the so-called divergent weighted timed games, using value iteration schemes; approximability in the case of the so-called almost-divergent weighted timed games.

15:00
Stochastic o-minimal hybrid systems

ABSTRACT. Timed automata and hybrid systems are important frameworks for the analysis of continuous-time systems. In this ongoing work, we study a stochastic extension of the latter. There are clear challenges regarding decidability: (i) the reachability problem is already quickly undecidable for non-stochastic hybrid systems; (ii) even in the simpler setting of timed automata, the finite abstraction known as the region graph does not preserve its correctness when lifted to the stochastic setting. Our goal is to define the class of stochastic o-minimal hybrid systems (SoHSs) and to show that it ensures good properties while being reasonably rich (e.g., it encompasses continuous-time Markov chains). We look for decisiveness of SoHSs (as introduced by Abdulla et al.) and definability in our o-minimal structure. Hopefully, approximation of reachability probabilities can be obtained in appropriate (and still quite rich) structures.

14:00-15:30 Session 87J: Constraint Satisfiability (RCRA)
14:00
Replaceability for constraint satisfaction problems: Algorithms, Inferences, and Complexity Patterns

ABSTRACT. Replaceability is a form of generalized substitutability whose features make it potentially of great importance for problem simplification. It differs from simple substitutability in that it only requires that substitutable values exist for every solution containing a given value without requiring that the former always be the same. This is also the most general form of substitutability that allows inferences from local to global versions of this property. Building on earlier work, this study first establishes that algorithms for localized replaceability (called consistent neighbourhood replaceability or CNR algorithms) based on all-solutions neighbourhood search outperform other replaceability algorithms by several orders of magnitude. It also examines the relative effectiveness of different forms of depth-first CNR algorithms. Secondly, it demonstrates an apparent complexity ridge, which does not occur at the same place in the problem space as the complexity areas for consistency or full search algorithms. Thirdly, it continues the study of methods for inferring replaceability in structured problems in order to improve efficiency. This includes correcting an oversight in earlier work and extending the formal analysis. It is also shown that some strategies for inferring replaceable values can be extended to disjunctive constraints in scheduling problems.

14:22
Partial (Neighbourhood) Singleton Arc Consistency for Constraint Satisfaction Problems

ABSTRACT. Algorithms based on singleton arc consistency (SAC) show considerable promise for improving backtrack search algorithms for constraint satisfaction problems (CSPs). The drawback is that even the most efficient of them is still comparatively expensive. Even when limited to preprocessing, they give overall improvement only when problems are quite difficult to solve with more typical procedures such as maintained arc consistency (MAC). The present work examines a form of partial SAC and neighbourhood SAC (NSAC) in which a subset of the variables in a CSP are chosen to be made SAC-consistent or neighbourhood-SAC-consistent. These consistencies are well-characterized in that algorithms have unique fixpoints and there are well-defined dominance relations. Heuristic strategies for choosing an effective subset of variables are described and tested, in particular a strategy of choosing by constraint weight after random probing. Experimental results justify the claim that these methods can be nearly as effective as full (N)SAC in terms of values discarded while significantly reducing the effort required.

14:44
Encoding PB Constraints into SAT via Binary Adders and BDDs -- Revisited
SPEAKER: Neng-Fa Zhou

ABSTRACT. Pseudo-Boolean constraints constitute an important class of constraints. Despite extensive study of SAT encodings for PB constraints, no single encoding is generally accepted. In this paper we revisit encoding PB constraints into SAT via binary adders and BDDs. For the binary adder encoding, we present an optimizing compiler that incorporates preprocessing, decomposition, constant propagation, and common subexpression elimination techniques tailored to PB constraints. For encoding via BDDs, we compare three methods for converting BDDs into SAT, namely path encoding, 6-clause node encoding, and 2-clause node encoding. We experimentally compare these encodings on three sets of benchmarks. Our experiments revealed surprisingly good and consistent performance of the optimized adder encoder in comparison with other encoders.
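
To give an idea of the BDD route (the encodings in the paper then turn each BDD node into clauses; the path and node encodings differ in how), here is the classic interval-memoized construction of a decision diagram for a PB constraint, sketched under assumed conventions:

    from functools import lru_cache

    def pb_bdd(weights, bound):
        # Decision diagram for sum(weights[i] * x_i) <= bound, with
        # nonnegative integer weights. Nodes are (var_index, lo, hi) with
        # terminals True/False; caching on (i, b) shares repeated
        # subproblems, and hi == lo triggers node elimination.
        @lru_cache(maxsize=None)
        def node(i, b):
            if b < 0:
                return False                   # already violated
            if b >= sum(weights[i:]):
                return True                    # satisfied whatever follows
            hi = node(i + 1, b - weights[i])   # branch x_i = 1
            lo = node(i + 1, b)                # branch x_i = 0
            return hi if hi == lo else (i, lo, hi)

        return node(0, bound)

    print(pb_bdd([3, 2, 1], 3))   # (0, True, (1, (2, True, False), False))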

15:06
On the Configuration of SAT Formulae

ABSTRACT. In this work we investigate how the performance of SAT solvers can be improved by SAT formulae configuration. We introduce a fully automated approach for this configuration task that considers a number of criteria for optimising the order in which clauses and, within clauses, literals are listed in a CNF formula. Our experimental analysis, involving three state-of-the-art SAT solvers and six different benchmark sets, shows that this configuration can have a significant impact on solvers’ performance.

14:00-15:30 Session 87K (SMT)
14:00
Discovering Universally Quantified Solutions for Constrained Horn Clauses

ABSTRACT. The logic of Constrained Horn Clauses (CHC) provides an effective logical characterization of many problems in (software) verification. For example, CHC naturally capture inductive invariant discovery for sequential programs, compositional verification of concurrent and distributed systems, and verification of program equivalence. CHC is used as an intermediate representation by several state-of-the-art program analysis tools, including SeaHorn and JayHorn.

Existing CHC solvers, such as Spacer and Eldarica, are limited to discovering quantifier-free solutions. This severely limits their applicability in software verification, where memory is modelled by arrays and quantifiers are necessary to express program properties. In this work, we extend the IC3 framework (used by the Spacer CHC solver) to discover quantified solutions to CHC. Our work addresses two major challenges: (a) finding a sufficient set of instantiations that guarantees progress, and (b) finding and generalizing quantified candidate solutions. We have implemented our algorithm in Z3 and describe experiments with it in the context of software verification of C programs.

14:30
What is the Point of an SMT-LIB Problem?
SPEAKER: Giles Reger

ABSTRACT. Many areas of automated reasoning are goal-oriented, i.e., the aim is to prove a goal from a set of axioms. Some methods, including the saturation-based reasoning explored in this paper, do not rely on having an explicit goal but can employ specific heuristics if one is present. SMT-LIB problems do not record a specific goal, meaning that we cannot straightforwardly employ goal-oriented proof search heuristics. In this work we examine methods for identifying the potential goal in an SMT-LIB problem and evaluate (using the Vampire theorem prover) whether this can help theorem provers based on saturation-based proof search.

15:00
SMT-COMP 2018 Report
14:00-15:30 Session 87L: Applications of AI (SoMLMFM)
Location: Maths LT1
14:00
The realities of applied AI

ABSTRACT. Artificial Intelligence will improve productivity across a broad range of applications and industries. The benefit to humanity will be substantial, but there are important lessons to learn and steps to take. NVIDIA works closely with researchers, enterprises, and startups worldwide, both in getting started and in problem solving. This talk will address some of the issues to be considered and explain why software and community are key to success with AI.

14:30
Safe reinforcement learning via formal methods

ABSTRACT. Formal verification provides a high degree of confidence in safe system operation, but only if reality matches the verified model. Although a good model will be accurate most of the time, even the best models are incomplete. This is especially true in Cyber-Physical Systems because high-fidelity physical models of systems are expensive to develop and often intractable to verify. Conversely, reinforcement learning-based controllers are lauded for their flexibility in unmodeled environments, but do not provide guarantees of safe operation.

This talk presents joint work with Nathan Fulton on provably safe learning that provides the best of both worlds: the exploration and optimization capabilities of learning along with the safety guarantees of formal verification. The main insight is that formal verification combined with verified runtime monitoring can ensure the safety of a learning agent. Verification results are preserved whenever learning agents limit exploration within the confines of verified control choices, as long as observed reality comports with the model used for off-line verification. When a model violation is detected, the agent abandons efficiency and instead attempts to learn a control strategy that guides the agent to a modeled portion of the state space.

15:00
Programming from examples: PL meets ML

ABSTRACT. Programming by Examples (PBE) involves synthesizing intended programs in an underlying domain-specific language from example-based specifications. PBE systems are already revolutionizing the application domain of data wrangling and are set to significantly impact several other domains including code refactoring. There are three key components in a PBE system. (i) A search algorithm that can efficiently search for programs that are consistent with the examples provided by the user. We leverage a divide-and-conquer-based deductive search paradigm to inductively reduce the problem of synthesizing a program expression of a certain kind that satisfies a given specification into sub-problems that refer to sub-expressions or sub-specifications. (ii) Program ranking techniques to pick an intended program from among the many that satisfy the examples provided by the user. We leverage features of the program structure as well of the outputs generated by the program on test inputs. (iii) User interaction models to facilitate usability and debuggability. We leverage active-learning techniques based on clustering inputs and synthesizing multiple programs. Each of these three PBE components leverage both formal methods and heuristics. We make the case for synthesizing these heuristics from training data using appropriate machine learning methods. This not only leads to better heuristics, but also enables easier development, maintenance, and even personalization of a PBE system.

14:00-15:30 Session 87M (Tetrapod)
Location: Maths L6
14:00
Modularity in Proof Checking

ABSTRACT. Modularity in Proof Checking

14:45
Modularity in Large Proofs

ABSTRACT. Modularity in Large Proofs

14:00-15:30 Session 87N: UITP 3/ Isabelle 3 Joint Session (Isabelle and UITP)
14:00
Isabelle/PIDE after 10 years of development

ABSTRACT. The beginnings of the Isabelle Prover IDE framework (Isabelle/PIDE) go back to the year 2008. This is a report on the project after 10 years, with system descriptions for various PIDE front-ends, namely (1) Isabelle/jEdit, the main PIDE application and default Isabelle user-interface, (2) Isabelle/VSCode, an experimental application of the Language Server Protocol in Visual Studio Code (by Microsoft), and (3) a headless PIDE server with JSON protocol messages over TCP sockets.

14:30
Text-Orientated Formal Mathematics

ABSTRACT. The System for Automated Deduction (SAD) by Andrei Paskevich proof-checks texts written in nearly natural mathematical language, without explicit system commands or prover commands. Natural language features like soft typing by noun phrases guide efficient text processing and proof checking. We have implemented practical improvements of the checking algorithm and extensions of the input language, so that we are now able to process considerably longer and more complicated texts. These are immediately readable by mathematicians, as demonstrated by subsequent examples. This suggests an approach to formal mathematics through collections or libraries of texts in (a controlled) natural language.

15:00
Drawing Trees

ABSTRACT. We formally prove in Isabelle/HOL two properties of an algorithm for laying out trees visually. The first property states that removing layout annotations recovers the original tree. The second property states that nodes are placed at least a unit of distance apart. We have yet to formalize three additional properties: that parents are centered above their children, that drawings are symmetrical with respect to reflection, and that identical subtrees are rendered identically.
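
For intuition, a deliberately naive layout in the same spirit (a hypothetical sketch, not the verified algorithm): leaves receive consecutive x-coordinates one unit apart and every parent is centered above its children, which illustrates the properties being formalized; the verified algorithm is more sophisticated, e.g. about the compactness and symmetry of the drawing.

    def layout(tree, depth=0, next_x=None, out=None):
        # tree = (label, [children]); returns a list of (label, x, depth).
        if next_x is None:
            next_x, out = [0.0], []
        label, children = tree
        if not children:
            x, next_x[0] = next_x[0], next_x[0] + 1.0   # unit leaf spacing
        else:
            xs = [layout(c, depth + 1, next_x, out) for c in children]
            x = (xs[0] + xs[-1]) / 2.0                  # centered above children
        out.append((label, x, depth))
        return x if depth else out

    for label, x, depth in layout(('r', [('a', []), ('b', [('c', []), ('d', [])])])):
        print(label, x, depth)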

14:00-15:30 Session 87P (VDMW)
Location: Blavatnik LT2
14:00
How to get a paper accepted?
14:30
How to attend a conference?
15:00
Hands-on: Elevator pitch
14:00-15:30 Session 87Q: Contributed Talks (Vampire)
14:00
Towards Word Sense Disambiguation by Reasoning
SPEAKER: Javier Álvez

ABSTRACT. In this proposal we describe a practical application of Vampire for Word Sense Disambiguation (WSD). In particular, we propose a method for the automatic disambiguation of the semantic relations in BLESS, which is a dataset designed to evaluate models of distributional semantics. It compiles 200 concrete nouns (100 animate and 100 inanimate nouns) from different classes and each noun is linked to other words via the semantic relations hyperonymy, cohyponymy, meronymy, attributes, events and random. For the disambiguation of the word pairs and semantic relations of BLESS, we have to map the words to WordNet synsets. WordNet is a large lexical database of English where nouns, verbs, adjectives and adverbs are grouped into sets of synonyms (synsets). Each synset denotes a distinct concept and they are interlinked among them by means of lexical-semantic relations such as synonymy, antonymy, hyponymy, meronymy or morphosemantic relations. Since a word can have different meanings, we need to choose which synset it belongs to, that is, the one that better fits the semantics of the word in the given context. To that end, we plan to use the knowledge in Adimen-SUMO, which is obtained by means of a suitable transformation of the knowledge in the core of SUMO into first-order logic (FOL) and enables its use by FOL automated theorem provers such as Vampire. WordNet and SUMO (and therefore Adimen-SUMO) are connected in a semantic mapping by means of three semantic relations: equivalence, subsumption and instance. By exploiting this mapping, we will automatically create a set of conjectures for each word pair by considering the semantic relations provided by BLESS. Then, these conjectures will be evaluated using Vampire. According to the outcomes, each word will be connected to a single synset and, consequently, disambiguated. Finally, we plan to compare the results provided by our proposal and different disambiguation systems that can be found in the literature.

14:30
Testing ATP folklore: a statistical analysis of Vampire proofs.

ABSTRACT. There are a number of well-known assumptions about automated theorem proving such as "proofs usually only use a small proportion of the given axioms to prove a goal". In this talk we present a statistical analysis of a large number of Vampire proofs and use this to investigate a number of these assumptions coming from ATP folklore.

15:00
Foundations of SPECTRE

ABSTRACT. ~

14:00-15:30 Session 87R (rv4rise)
14:00
Signal classification using temporal logic

ABSTRACT. In recent years, there has been a great interest in applying formal methods to the field of machine learning (ML). In this talk, I will discuss inferring high-level descriptions of a system from its execution signals. The system behaviors are formalized using a temporal logic language called Signal Temporal Logic (STL). I will first focus on supervised, off-line classification. Given a set of pairs of signals and labels, where each label indicates whether the respective signal exhibits some system property, a decision-tree algorithm outputs an STL formula which can distinguish the signals. I will then show how this method can be extended to the online learning scenario. In this setting, it is assumed that new signals may arrive over time and the previously inferred formula should be updated to accommodate the new data. Finally, I will focus on the (unsupervised) clustering problem. In this case, it is assumed that only a set of unlabeled signals is available, that is, it is not known a priori if a signal exhibits a specific behavior or satisfies some property, and the end goal is to group similar signals together, in so-called clusters, and to describe each cluster with a formula.
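
As a taste of the underlying machinery, the quantitative (robustness) semantics of a simple STL formula such as G_[a,b](x > c) can be computed directly on a sampled signal; a hedged sketch using a piecewise-constant view of the signal:

    def robustness_always_gt(signal, c, a, b):
        # signal: list of (time, value) samples. Robustness of G_[a,b](x > c)
        # is the worst margin in the window: positive means satisfied,
        # negative means violated; the magnitude says how robustly.
        return min(v - c for t, v in signal if a <= t <= b)

    sig = [(0, 1.0), (1, 0.75), (2, 0.9), (3, 0.2)]
    print(robustness_always_gt(sig, 0.5, 0, 2))   # 0.25: satisfied with margin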

14:45
First order temporal properties of continuous signals

ABSTRACT. We propose to introduce first-order quantifiers in Signal Temporal Logic. Our definitions make it possible to describe many properties of interest, such as the stabilization of a signal around an undetermined threshold, which are otherwise not expressible. A monitoring algorithm is described for the one-variable fragment of this new logic, based on a geometric representation of validity domains.

15:30-16:00 Coffee Break
16:00-17:30 Session 88A (FRIDA)
16:00
Stateless model-checking under the data-centric view

ABSTRACT. Verifying concurrent systems is notoriously hard due to the inherent non-determinism in interprocess communication. Effective verification methods rely on dynamic partial order reduction (DPOR) techniques, which can reason about system correctness without exploring all concurrent traces explicitly. DPOR is traditionally based on the Mazurkiewicz equivalence, which distinguishes between traces based on the relative order of their conflicting events.

In this talk I will present the recently introduced Observation equivalence for DPOR. In contrast to reordering conflicting events, this equivalence is data centric: it distinguishes traces based on which write events are observed by which read events. This equivalence induces a partitioning that is exponentially coarser than the Mazurkiewicz partitioning without sacrificing completeness of bug finding. It also admits natural, efficient DPOR-like algorithms for exploring the partitioning optimally, leading to speedups over the standard methods. I will conclude with some new challenges posed by this data-centric view.

16:30
Parameterized Model Checking of Synchronous Distributed Algorithms by Abstraction

ABSTRACT. Parameterized verification of fault-tolerant distributed algorithms has recently gained more and more attention. Most of the existing work considers asynchronous distributed systems (interleaving semantics). However, there exists a considerable distributed computing literature on synchronous fault-tolerant distributed algorithms: conceptually, all processes proceed in lock-step rounds, that is, synchronized steps of all (correct) processes bring the system into the next state.

We introduce an abstraction technique for synchronous fault-tolerant distributed algorithms that reduces parameterized verification of synchronous fault-tolerant distributed algorithms to finite-state model checking of an abstract system. Using the TLC model checker, we automatically verified several algorithms from the literature, some of which were not automatically verified before. Our benchmarks include fault-tolerant algorithms that solve atomic commitment, 2-set agreement, and consensus.

16:00-17:00 Session 88B (HCVS)
16:00
Report on the CHC competition

ABSTRACT. New this year will be the CHC competition CHC-COMP, which will compare state-of-the-art tools for CHC solving with respect to performance and effectiveness on a set of publicly available benchmarks. More information can be found at https://chc-comp.github.io/.

16:00-18:00 Session 88C: Isabelle 4 (Isabelle)
Location: Blavatnik LT1
16:00
Substitutionless First-Order Logic: A Formal Soundness Proof

ABSTRACT. Substitution under quantifiers is non-trivial and may obscure a proof system for newcomers. Monk (Arch. Math. Log. Grundl. 1965) successfully eliminates substitution via identities and also uses a so-called normalization of formulas as a further simplification. We formalize the substitutionless proof system in Isabelle/HOL, spelling out its side conditions explicitly and verifying its soundness.

16:15
The remote_build Tool

ABSTRACT. This is an introduction to the remote_build tool for transparent remote session builds. The intended workflow for a user is to locally issue a build command for some session heap images and then continue working, while the actual build runs on a remote machine and the resulting heap images are synchronized incrementally as soon as they are available.

16:30
PaMpeR: A Proof Method Recommendation System for Isabelle/HOL

ABSTRACT. Deciding which sub-tool to use for a given proof state requires expertise specific to each ITP. To mitigate this problem, we present PaMpeR, a proof method recommendation system for Isabelle/HOL. Given a proof state, PaMpeR recommends proof methods to discharge the proof goal and provides qualitative explanations as to why it suggests these methods. PaMpeR generates these recommendations based on existing hand-written proof corpora, thus transferring experienced users' expertise to new users. Our evaluation shows that PaMpeR correctly predicts experienced users' proof method invocations, especially when it comes to special-purpose proof methods.

17:00
Using Isabelle/UTP for the Verification of Sorting Algorithms: A Case Study

ABSTRACT. We verify the functional correctness of insertion sort as well as the partition function of quicksort. We use Isabelle/UTP and its denotational semantics for imperative programs as a verification framework. We propose a forward-style Hoare verification condition generator (VCG) for our reasoning, and we discuss the different technical challenges encountered while using Isabelle/UTP.
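
For readers unfamiliar with the case study, here is the first algorithm with its key loop invariant stated as a runtime assertion (a sketch; the paper proves such invariants deductively in Isabelle/UTP rather than testing them):

    def insertion_sort(a):
        for i in range(1, len(a)):
            # invariant: the prefix a[:i] is already sorted
            assert all(a[k] <= a[k + 1] for k in range(i - 1))
            key, j = a[i], i - 1
            while j >= 0 and a[j] > key:
                a[j + 1] = a[j]          # shift larger elements right
                j -= 1
            a[j + 1] = key
        return a

    print(insertion_sort([5, 2, 4, 6, 1, 3]))   # [1, 2, 3, 4, 5, 6]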

16:00-18:00 Session 88D: LCC: Martin Hofmann Memorial Session (LCC)
Location: Maths L5
16:00
My Journey through Observational Equivalence Research with Martin Hofmann
16:30
Memories of Martin
16:00-18:00 Session 88E (LSB)
16:00
A logic modelling workflow for systems pharmacology
16:00-18:00 Session 88F (LearnAut)
16:00
The Learnability of Symbolic Automata

ABSTRACT. Symbolic automata allow transitions to carry predicates over rich alphabet theories, such as linear arithmetic, and therefore extend classic automata to operate over infinite alphabets, such as the set of rational numbers. In this paper, we study the problem of learning symbolic automata and exactly characterize under what conditions symbolic automata can be learned efficiently. First, we present an $L^*$-style algorithm for learning symbolic automata using membership and equivalence queries, which treats the predicates appearing on transitions as their own learnable entities. The main novelty of our algorithm is that it can take as input an algorithm $\Lambda$ for learning predicates in the underlying alphabet theory, and it uses $\Lambda$ to infer the predicates appearing on the transitions of the target automaton. Using this idea, the algorithm is able to learn automata over alphabet theories in which predicates are efficiently learnable using membership and equivalence queries. Furthermore, we prove that a necessary condition for efficient learnability of a symbolic finite automaton (SFA) is that predicates in the underlying algebra are also efficiently learnable using queries. Our results settle the learnability of a large class of SFA instances, leaving open only a subclass of SFAs over efficiently learnable alphabet theories. We implement the algorithm in an open-source library and show that it can efficiently learn automata that cannot be learned using existing algorithms, and that it significantly outperforms existing automata-learning algorithms over large alphabets.
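
A minimal illustration of the paper's central premise, namely that efficiently learnable predicates make transition labels learnable: a threshold predicate "x < c" over a bounded integer domain can be recovered with O(log bound) membership queries (a sketch; the paper's predicate-learner interface $\Lambda$ is far more general):

    def learn_threshold(member, bound):
        # assumes 0 < c <= bound, so member(0) holds and member(bound) doesn't
        lo, hi = 0, bound          # invariant: member(lo), not member(hi)
        while hi - lo > 1:
            mid = (lo + hi) // 2
            lo, hi = (mid, hi) if member(mid) else (lo, mid)
        return hi                  # the learned c in "x < c"

    c = 37                         # the unknown target predicate is x < 37
    print(learn_threshold(lambda x: x < c, bound=1024))   # 37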

16:30
Automatic clustering of a network protocol with weakly-supervised clustering

ABSTRACT. Abstraction is a fundamental part of learning behavioral models of systems. Usually, the abstraction is defined manually by domain experts. This paper presents a method to perform automatic abstraction for network protocols. In particular, a weakly supervised clustering algorithm is used to build an abstraction with a small vocabulary size for the widely used TLS protocol. To show the effectiveness of the proposed method, we compare the resulting abstract messages to a manually constructed (reference) abstraction. With a small amount of side-information in the form of a few labeled examples, this method finds an abstraction that matches the reference abstraction perfectly.
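
One simple way to picture weak supervision in clustering (a sketch, not the paper's algorithm): a handful of labeled messages seed the initial centroids, after which ordinary k-means assigns the remaining, unlabeled feature vectors.

    import numpy as np

    def seeded_kmeans(X, seeds, iters=20):
        # seeds: {label: row indices of the few labeled examples}
        centroids = np.array([X[idx].mean(axis=0) for idx in seeds.values()])
        for _ in range(iters):
            labels = np.argmin(((X[:, None] - centroids) ** 2).sum(-1), axis=1)
            centroids = np.array([X[labels == k].mean(axis=0)
                                  for k in range(len(centroids))])
        return labels

    X = np.array([[0.0, 0.1], [0.2, 0.0], [5.0, 5.1], [4.8, 5.0]])
    print(seeded_kmeans(X, {"hello": [0], "alert": [2]}))   # [0 0 1 1]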

17:00
Toward Learning Boolean Functions from Positive Examples

ABSTRACT. In this article we investigate the learning of DNF representations of Boolean functions with two special focuses: (i) we consider that a sample with only positive examples is given to the learner (and hence no membership queries are allowed); and (ii) the functions we consider are true on a sparse set, namely one whose cardinality is far smaller than $2^n$, where $n$ is the number of variables of the problem instance. After motivating this problem, we provide a conceptual answer to how much to generalize beyond the sample, and where. We introduce an empirical greedy algorithm to address the problem under study and illustrate it on a randomly sampled DNF.
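
A minimal sketch of a greedy generalization in this spirit (illustrative only; the sparsity bound `budget` is a stand-in, not the paper's parameter): terms are partial assignments, and two terms merge into the literals they agree on, provided the merged term stays sparse.

    def coverage(term, n):
        return 2 ** (n - len(term))      # each free variable doubles coverage

    def greedy_dnf(positives, n, budget):
        terms = [dict(enumerate(p)) for p in positives]  # one term per example
        merged = True
        while merged:
            merged = False
            for i in range(len(terms)):
                for j in range(i + 1, len(terms)):
                    common = {v: b for v, b in terms[i].items()
                              if terms[j].get(v) == b}
                    if coverage(common, n) <= budget:    # stay sparse
                        terms[i], merged = common, True
                        terms.pop(j)
                        break
                if merged:
                    break
        return terms

    # All positives share x0 = 1 and x1 = 0:
    pos = [(1, 0, 0, 0), (1, 0, 1, 0), (1, 0, 0, 1)]
    print(greedy_dnf(pos, n=4, budget=4))   # [{0: 1, 1: 0}], i.e. x0 & ~x1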

17:30
Learning Several Languages from Labeled Strings: State Merging and Evolutionary Approaches

ABSTRACT. The problem of learning pairwise disjoint deterministic finite automata (DFAs) from positive examples has recently been addressed. In this paper, we address the problem of identifying a set of DFAs from labeled strings and come up with two methods. The first is based on state merging and a heuristic related to the size of each state-merging iteration: state-merging operations involving a large number of states are extracted to provide sub-DFAs. The second method is based on a multi-objective evolutionary algorithm whose fitness function takes into account the accuracy of the DFA w.r.t. the learning sample, as well as the desired number of DFAs. We evaluate our methods on a dataset originating from industry.

16:00-18:00 Session 88G: Markov decision processes 2 (MoRe)
16:00
Multi-Scenario Uncertainty in Markov Decision Processes

ABSTRACT. Markov decision processes (MDPs) are a well-known formalism for modeling discrete event systems. Several extensions of the model have been proposed in order to encompass additional model-level uncertainty in MDPs. Here, we introduce a multi-scenario uncertainty model that was proposed in the author's PhD thesis, and we discuss possible future research.
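
One common reading of multi-scenario uncertainty (a sketch; the model in the thesis may differ): the transition probabilities come in finitely many scenarios, and a robust value iteration maximizes over actions the worst-case value over scenarios.

    import numpy as np

    # P[scenario, action, s, s'] over states {0, 1}; being in state 1 pays 1.
    P = np.array([[[[0.9, 0.1], [0.4, 0.6]],
                   [[0.8, 0.2], [0.3, 0.7]]],    # scenario 0
                  [[[0.5, 0.5], [0.1, 0.9]],
                   [[0.9, 0.1], [0.6, 0.4]]]])   # scenario 1
    r, gamma, V = np.array([0.0, 1.0]), 0.9, np.zeros(2)
    for _ in range(200):
        Q = r + gamma * (P @ V)                  # shape: (scenario, action, s)
        V = Q.min(axis=0).max(axis=0)            # worst scenario, best action
    print(V)                                     # robust values of both states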

16:30
Safe and Optimal Scheduling of Hard and Soft Tasks

ABSTRACT. We consider a stochastic scheduling problem with both hard and soft tasks. Each task is described by a discrete probability distribution over possible execution times, a deadline, and a distribution over inter-arrival times. Our scheduling problem is non-clairvoyant in the sense that the execution and inter-arrival times are subject to uncertainty modeled by stochastic distributions. Given an instance of our problem, we want to synthesize a schedule that is safe and efficient: safety imposes that deadlines of hard tasks are never violated, while efficiency means that soft tasks should be completed as often as possible. To enforce the latter, we impose a cost whenever a soft task misses its deadline, and we look for a schedule that minimizes the expected value of the limit average of this cost.

First, we show that the dynamics of such a system can be modeled as a finite Markov Decision Process (MDP). Second, we show that the problem of deciding the existence of a safe and efficient scheduler is PP-hard and in EXPTIME. Third, we have implemented our synthesis algorithm, which partly relies on the probabilistic model checker Storm to analyze the underlying MDP. Given a set of both hard and soft tasks, we first compute the set of safe schedules, i.e., those in which the hard tasks always finish their execution, and then we compute the scheduler that minimizes the limit-average cost among all safe schedules. Finally, we compare the optimal solutions obtained with our procedure against an adaptation of the earliest-deadline-first (EDF) algorithm that accounts for the soft tasks. We show that this EDF strategy can be arbitrarily worse, in terms of the expected limit-average cost for missing the deadlines of the soft tasks, when compared to our optimal procedure.
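
As a point of reference for the baseline, here is a minimal discrete-time EDF simulation with one hard and one soft task (a sketch with deterministic, made-up parameters; the paper's setting is stochastic):

    def edf_simulate(horizon=1000):
        # task: (period, execution time, hard?); deadlines equal periods here
        tasks = {"hard": (5, 2, True), "soft": (7, 4, False)}
        jobs, soft_misses = [], 0        # job = [work left, deadline, name]
        for t in range(horizon):
            for name, (period, wcet, _) in tasks.items():
                if t % period == 0:
                    jobs.append([wcet, t + period, name])
            for j in [j for j in jobs if j[1] <= t]:      # deadline passed
                assert not tasks[j[2]][2], "hard deadline violated"
                soft_misses += 1
                jobs.remove(j)
            if jobs:
                jobs.sort(key=lambda j: j[1])             # earliest deadline
                jobs[0][0] -= 1
                if jobs[0][0] == 0:
                    jobs.pop(0)
        return soft_misses

    print(edf_simulate())   # 0 here: utilization 2/5 + 4/7 <= 1, EDF suffices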

17:00
Logically-Constrained Reinforcement Learning

ABSTRACT. We propose a Reinforcement Learning (RL) algorithm to synthesize policies for a Markov Decision Process (MDP), such that a linear-time property is satisfied. We convert the property into a Limit-Deterministic Büchi Automaton (LDBA), then construct a product MDP between the automaton and the original MDP. A reward function is then assigned to the states of the product automaton, according to the accepting conditions of the LDBA. With this reward function, our algorithm synthesizes a policy that satisfies the linear-time property: as such, the policy synthesis procedure is "constrained" by the given specification. Additionally, we show that the RL procedure sets up an online value iteration method to calculate the maximum probability of satisfying the given property at any given state of the MDP; a convergence proof for the procedure is provided. Finally, the performance of the algorithm is evaluated via a set of numerical examples. We observe an improvement of one order of magnitude in the number of iterations required for the synthesis compared to existing approaches.
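
A much-simplified sketch of the construction (illustrative only: a genuine LDBA and its accepting condition are replaced by a two-state automaton for "eventually goal"): Q-learning runs on the product state space and is rewarded exactly when the automaton accepts.

    import random

    STATES, ACTIONS, GOAL = [0, 1, 2], [0, 1], 2

    def step(s, a):                 # toy MDP: action 1 moves right w.p. 0.9
        if a == 1 and random.random() < 0.9:
            return min(s + 1, 2)
        return max(s - 1, 0)

    def delta(q, s):                # automaton: q=1 is the accepting sink
        return 1 if (q == 1 or s == GOAL) else 0

    Q = {}
    for _ in range(3000):
        s, q = 0, 0
        for _ in range(20):
            a = random.choice(ACTIONS) if random.random() < 0.2 else \
                max(ACTIONS, key=lambda b: Q.get((s, q, b), 0.0))
            s2 = step(s, a)
            q2 = delta(q, s2)
            rew = 1.0 if (q == 0 and q2 == 1) else 0.0   # reward on acceptance
            best = max(Q.get((s2, q2, b), 0.0) for b in ACTIONS)
            old = Q.get((s, q, a), 0.0)
            Q[(s, q, a)] = old + 0.1 * (rew + 0.95 * best - old)
            s, q = s2, q2
            if q == 1:
                break
    # rough estimate of the (discounted) satisfaction value from state 0:
    print(max(Q.get((0, 0, b), 0.0) for b in ACTIONS))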

16:00-16:40 Session 88I: Explainable machine learning (SoMLMFM)
Location: Maths LT1
16:00
Influence-directed explanations for machine learning systems

ABSTRACT. Machine learning systems are often inscrutable black boxes: they are effective predictors but lack human-understandable explanations. I will describe a new paradigm of influence-directed explanations that addresses this problem. Influence-directed explanations shed light on the inner workings of black-box machine learning systems by identifying components that causally influence system behavior and by providing a human-understandable interpretation of the concepts represented by these components. I will describe instances of this paradigm that are model-agnostic and instances that are specific to deep neural networks. Influence-directed explanations serve as a foundation for reasoning about the fairness and privacy of machine learning models. Our initial exploration suggests that formal methods for the analysis of probabilistic systems have an important role to play in the efficient generation of influence-directed explanations. Joint work with colleagues at Carnegie Mellon University.
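
A minimal model-agnostic sketch of the flavour of such measures (not the authors' definition): estimate each input feature's influence on a black-box model by finite differences.

    def influence(model, x, delta=1e-3):
        base = model(x)
        scores = []
        for i in range(len(x)):
            xp = list(x)
            xp[i] += delta                       # perturb one feature
            scores.append((model(xp) - base) / delta)
        return scores

    black_box = lambda x: 3.0 * x[0] + 0.1 * x[1]   # stand-in model
    print(influence(black_box, [1.0, 1.0]))         # x[0] dominates: ~[3.0, 0.1]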

16:20
Safety verification for deep neural networks with provable guarantees

ABSTRACT. Deep neural networks have achieved impressive experimental results in image classification, but can surprisingly be unstable with respect to adversarial perturbations, that is, minimal changes to the input image that cause the network to misclassify it. With potential applications including perception modules and end-to-end controllers for self-driving cars, this raises concerns about their safety. This lecture will describe progress in developing automated verification techniques for deep neural networks to ensure the safety of their classification decisions with respect to image manipulations, for example scratches or changes to camera angle or lighting conditions, that should not affect the classification. The techniques exploit Lipschitz continuity of the networks and aim to approximate, for a given set of inputs, the reachable set of network outputs in terms of lower and upper bounds, in an anytime manner, with provable guarantees. We develop novel algorithms based on games and global optimisation, and evaluate them on state-of-the-art networks.
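
For a concrete feel of output bounding, here is one simple technique, interval bound propagation, on a tiny network where every layer is followed by a ReLU (a sketch; the talk's methods, exploiting Lipschitz continuity via games and global optimisation, are more precise):

    import numpy as np

    def interval_bounds(layers, lo, hi):
        """Sound lower/upper bounds on outputs for all inputs in [lo, hi]."""
        for W, b in layers:
            c, r = (lo + hi) / 2, (hi - lo) / 2   # center/radius of the box
            c, r = W @ c + b, np.abs(W) @ r       # exact for the affine map
            lo, hi = np.maximum(c - r, 0), np.maximum(c + r, 0)   # ReLU
        return lo, hi

    layers = [(np.array([[1.0, -1.0], [0.5, 0.5]]), np.array([0.0, 0.1]))]
    print(interval_bounds(layers, np.array([0.0, 0.0]), np.array([0.1, 0.1])))
    # every output for inputs in the box [0, 0.1]^2 lies within these bounds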

16:00-18:00 Session 88J (Tetrapod)
Location: Maths L6
16:00
Modularity in Proof Assistants

ABSTRACT. Modularity in Proof Assistants

16:45
Discussion 2

ABSTRACT. Discussion of the previous talks.

16:00-17:00 Session 88K (VDMW)
Location: Blavatnik LT2
16:00
Learning from failure
16:00-17:30 Session 88L: Contributed Talks (Vampire)
16:00
Experimenting with Theory Instantiation in Vampire
SPEAKER: Martin Riener

ABSTRACT. Theory instantiation tackles the problem of theory reasoning with quantifiers in Vampire using an SMT solver. In contrast to AVATAR modulo theories, it works locally by instantiating a clause such that its pure theory part becomes inconsistent and can be deleted. We present our experiments with different clause filtering strategies. This achieves two goals: better interaction with unification with abstraction, and restricting the problem passed to the SMT solver to well-supported theories such as linear integer arithmetic.
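
A minimal sketch of the core step using Z3's Python bindings (illustrative; Vampire's implementation differs): for the clause p(x) ∨ x > 0 ∨ x < 0, a model of the negated pure theory part yields an instantiation under which the theory literals are jointly false and can be deleted, leaving the instance p(0).

    from z3 import Int, Solver, Not, Or, sat

    x = Int('x')
    s = Solver()
    s.add(Not(Or(x > 0, x < 0)))   # negation of the pure theory part
    if s.check() == sat:
        print(s.model()[x])        # 0 -- instantiate the clause with x := 0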

16:30
Giving AVATAR Quantifiers to Play With
SPEAKER: Martin Suda

ABSTRACT. We investigate an extension of the AVATAR architecture which exposes first-order variables to the underlying SMT-solver. This enables (a controlled amount of) quantifier instantiation effort from within the SMT solver to complement the first-order prover's native support for quantifier reasoning. We report on experimental results with Vampire combined with the SMT solver CVC4.

17:00
Developments in Higher-order Theorem proving within Vampire
SPEAKER: Ahmed Bhayat

ABSTRACT. This talk presents ongoing work on extending Vampire to higher-order reasoning. Currently, two strategies are being explored. A translation from higher-order to first-order logic has been implemented, and a mixture of heuristics and special inference rules is used to control the search procedure. Alternatively, we are working on extending Vampire's data structures to support higher-order features and then pragmatically introducing higher-order reasoning.
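
One standard such translation is the applicative encoding, sketched below (illustrative; not necessarily the exact encoding implemented in Vampire): every application becomes a binary "app" symbol, so a term like F x (g y), where F may itself be a variable, becomes app(app(F, x), app(g, y)), which first-order machinery can process.

    def applicative(term):
        # terms are strings (constants/variables) or tuples (head, args...)
        if isinstance(term, str):
            return term
        head, *args = term
        out = applicative(head)
        for arg in args:
            out = "app({}, {})".format(out, applicative(arg))
        return out

    print(applicative(("F", "x", ("g", "y"))))   # app(app(F, x), app(g, y))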

16:00-18:00 Session 88M (rv4rise)
16:00
Using the F1/10 Autonomous Racing Platform for Runtime Verification Research

ABSTRACT. Monitor synthesis from high-level specifications provides a fast and convenient way of developing monitoring code with guarantees on monitoring complexity. It remains important to test the generated code on a realistic setup that closely mimics the final deployment environment. In this talk we will describe the F1/10 Autonomous Racing Platform, and its associated simulator, as an environment for developing and testing auto-generated monitoring code for robotic applications. F1/10 is a fully open-source project for creating a 1/10th-scale autonomous race car, and its associated simulator allows easy and risk-free testing of navigation and monitoring algorithms. The platform allows developers to run their monitors on the deployment hardware, inside a deployment software stack based on ROS, which means that the code can be deployed as-is on the car. This tutorial-style talk will also demonstrate how to set up the simulator, and we will synthesize ROS-based monitors using the Reelay tool. We will also describe plans for creating hardware-as-a-service for testing algorithms targeting autonomous driving.
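
As a stand-in for what a deployed monitor looks like in this setting, here is a hand-written past-time monitor shaped like a ROS-style callback (a sketch only; Reelay's actual API and the ROS message types are not reproduced here):

    class SpeedMonitor:
        """Checks 'historically (speed <= limit)' over a stream of readings."""
        def __init__(self, limit):
            self.limit, self.ok = limit, True

        def callback(self, speed):     # would be wired to a rospy.Subscriber
            self.ok = self.ok and (speed <= self.limit)
            return self.ok

    mon = SpeedMonitor(limit=3.0)
    print([mon.callback(v) for v in [1.0, 2.5, 3.2, 2.0]])
    # [True, True, False, False] -- the verdict latches once violated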

16:10-18:00 Session 89: Small Models and SL-COMP (ADSL)
16:10
Sloth: Separation Logic and Theories via Small Models
SPEAKER: Jens Katelaan

ABSTRACT. We present Sloth, a solver for separation logic modulo theory constraints specified in the separation logic SL∗, a propositional separation logic that we recently introduced in our IJCAR'18 paper "A Separation Logic with Data: Small Models and Automation." SL∗ admits NP decision procedures despite its high expressiveness; features of the logic include support for list and tree fragments, universal data constraints about both lists and trees, and Boolean closure of spatial formulas. Sloth solves SL∗ constraints via an SMT encoding of SL∗ that is based on the logic's small-model property. We argue that, while clearly still a work in progress, Sloth already demonstrates that SL∗ lends itself to the automation of nontrivial examples. These results complement the theoretical work presented in the IJCAR'18 paper.

16:40
Presentation of Results from SL-COMP 2018

ABSTRACT. TBA

16:30-18:00 Session 90: Talks in memoriam Mike Gordon (CAV)
Location: Maths LT3
16:30
Proof Scripting from HOL to Goaled
17:00
HOL Developed and HOL Used: Interconnected Stories of Real-World Applications

ABSTRACT. I will describe work, starting with my PhD, that involved use of the HOL system’s evolving capabilities to model and verify properties of detailed real-world systems. In particular, HOL turns out to be a great tool for operational semantics applications. The projects I worked on in Cambridge were formalisations of C and TCP/IP, and both point forward to more recent work of the 2010s. At the same time, I will try to describe the work I and others put into making HOL an ever-more capable system for supporting such work.

17:30
From Processor Verification Upwards
16:40-18:00 Session 91: Panel Session & Summary (SoMLMFM)

Panel session preceded by an introductory talk.

Location: Maths LT1
16:40
What Just Happened in AI

ABSTRACT. I will discuss in this talk my perspective on recent developments in artificial intelligence, which have been triggered by the successes of deep learning. The discussion will be based on a position paper that will appear in CACM with the title "Human-Level Intelligence or Animal-Like Abilities?" My presentation will hinge on a distinction between model-based and function-based approaches to AI, and will address a spectrum of issues and considerations, while placing current developments in a historical perspective. I will also share thoughts on how best to capitalize on current progress as we move forward, touching on both educational and research needs.

17:00
Panel session

ABSTRACT. The panel session will be moderated by:

  • Francesca Rossi, AI Ethics Global Leader / Distinguished Research Staff Member at IBM Research AI, and Professor of Computer Science (specializing in artificial intelligence) at the University of Padova

and the panelists are:

  • Adnan Darwiche, Professor and Chairman of the Computer Science Department at UCLA, who directs the Automated Reasoning Group, which focuses on probabilistic and logical reasoning and their applications to machine learning
  • Stuart Russell, Professor of Computer Science and Smith-Zadeh Professor in Engineering, University of California, Berkeley, and Adjunct Professor of Neurological Surgery, University of California, San Francisco, who is interested in building systems that can act intelligently in the real world
  • Vasu Singh, who manages safety and rigorous software engineering of the deep learning software at NVIDIA and prior to this worked in the embedded/automotive industry (Lam Research, BMW)
  • Chris Holmes, Professor of Biostatistics at the Department of Statistics and the Nuffield Department of Medicine at the University of Oxford, whose research explores the potential of computational statistics and statistical machine learning to assist in the medical and health sciences
19:00-21:30 Workshops dinner at Keble College (FLoC)

Workshops dinner at Keble College. Drinks reception from 7pm, to be seated by 7:30 (pre-booking via FLoC registration system required; guests welcome).

Location: Keble College