FLOC 2018: FEDERATED LOGIC CONFERENCE 2018
PROGRAM FOR THURSDAY, JULY 12TH

09:00-10:00 Session 70A: Invited talk: Calin Belta (ADHS)
09:00
Formal Synthesis of Control Strategies for Dynamical Systems

ABSTRACT. In control theory, complex models of physical processes, such as systems of differential equations, are analyzed or controlled from simple specifications, such as stability and set invariance. In formal methods, rich specifications, such as formulae of temporal logics, are checked against simple models of software programs and digital circuits, such as finite transition systems. With the development and integration of cyber-physical and safety-critical systems, there is an increasing need for computational tools for verification and control of complex systems from rich, temporal logic specifications. In this talk, I will discuss a set of approaches to formal synthesis of control strategies for dynamical systems from temporal logic specifications. I will first show how automata games for finite systems can be extended to obtain conservative control strategies for low-dimensional linear and multilinear dynamical systems. I will then present several methods to reduce conservativeness and improve the scalability of the control synthesis algorithms for more general classes of dynamics. I will illustrate the usefulness of these approaches with examples from robotics and traffic control.

09:00-10:00 Session 70B: FSCD Invited talk: Stéphanie Delaune (FSCD)
09:00
Analysing privacy-type properties in cryptographic protocols

ABSTRACT. Cryptographic protocols aim at securing communications over insecure networks such as the Internet, where dishonest users may listen to communications and interfere with them. For example, passports are no longer pure paper documents. Instead, they contain a chip that stores additional information such as pictures and fingerprints of their holder. In order to ensure privacy, these chips include a mechanism, i.e. a cryptographic protocol, that does not let the passport disclose private information to external users. This is just a single example, but of course privacy appears in many other contexts such as RFID technologies or electronic voting. Formal methods have been successfully applied to automatically analyse traditional protocols and discover flaws. Privacy-type security properties (e.g. anonymity, unlinkability, ...) are expressed relying on a notion of behavioural equivalence, and are actually more difficult to analyse. We will discuss some recent advances that have been made in automatically analysing equivalence-based security properties, and we will review some issues that remain to be solved in this field.

09:00-10:30 Session 70C (ITP)
Location: Blavatnik LT1
09:00
A Formal Proof of the Minor-Exclusion Property for Treewidth-Two Graphs

ABSTRACT. We give a formal and constructive proof in Coq/Ssreflect of the known result that the graphs of treewidth two are exactly those that do not admit K4 as a minor. This result is a milestone towards a formal proof of the recent result that isomorphism of treewidth-two graphs can be finitely axiomatized. The proof is based on a function extracting terms from K4-free graphs in such a way that the interpretation of an extracted term yields a treewidth-two graph isomorphic to the original graph.

09:30
Verifying the LTL to Büchi Automata Translation via Very Weak Alternating Automata
SPEAKER: Simon Jantsch

ABSTRACT. We present a formalization of a translation from LTL formulae to generalized Büchi Automata in the HOL4 theorem prover. Translations from temporal logics to automata are at the core of model checking algorithms based on automata-theoretic techniques. The translation we verify proceeds in two steps: it produces Very Weak Alternating Automata at an intermediate stage, and then ultimately produces a generalized Büchi Automaton. After verifying both transformations, we also encode both of these automata models using a generic, functional graph type, and use the CakeML compiler to generate fully verified machine code implementing the translation.

10:00
Verification of PCP-Related Computational Reductions in Coq

ABSTRACT. We formally verify several computational reductions concerning the Post correspondence problem (PCP) using the proof assistant Coq. Our verification includes a reduction of the halting problem for Turing machines to string rewriting, a reduction of string rewriting to PCP, and reductions of PCP to the intersection problem and the palindrome problem for context-free grammars.

09:00-10:40 Session 70D (LICS)
Location: Maths LT1
09:00
Species, Profunctors and Taylor Expansion Weighted by SMCC: A Unified Framework for Modelling Nondeterministic, Probabilistic and Quantum Programs

ABSTRACT. Motivated by a tight connection between Joyal's combinatorial species and quantitative models of linear logic, this paper introduces weighted generalised species (or weighted profunctors), where weights are morphisms of a given symmetric monoidal closed category (SMCC). For each SMCC W, we show that the category of W-weighted profunctors is a Lafont category, a categorical model of linear logic with exponential. As a model of programming languages, the construction of this paper gives a unified framework that induces adequate models of nondeterministic, probabilistic, algebraic and quantum programming languages by appropriately choosing the weight SMCC.

09:20
Logical paradoxes in quantum computation

ABSTRACT. The precise features of quantum theory enabling quantum computational advantage are unclear. Contextuality has emerged as a promising hypothesis: e.g. the magic states needed to practically achieve quantum computation are contextual.

Strong contextuality, as defined by Abramsky-Brandenburger, is an extremal form of contextuality describing systems that exhibit logically paradoxical behaviour.

After introducing number-theoretic techniques for constructing exotic quantum paradoxes, we give large families of strongly contextual magic states that are optimal in the sense that they enable deterministic injection of gates of the Clifford hierarchy. We thereby bolster a refinement of the resource theory of contextuality that emphasises the computational power of logical paradoxes.

09:40
A Complete Axiomatisation of the ZX-Calculus for Clifford+T Quantum Mechanics

ABSTRACT. We introduce the first complete and approximately universal diagrammatic language for quantum mechanics. We make the ZX-Calculus, a diagrammatic language introduced by Coecke and Duncan, complete for the so-called Clifford+T quantum mechanics by adding two new axioms to the language. The completeness of the ZX-Calculus for Clifford+T quantum mechanics was one of the main open questions in categorical quantum mechanics. We prove the completeness of the Clifford+T ZX-Calculus using the recently studied ZW-Calculus, a calculus dealing with integer matrices. We also prove that this fragment of the ZX-Calculus represents exactly all the matrices over some finite-dimensional extension of the ring of dyadic rationals.

10:00
Two complete axiomatisations of pure-state qubit quantum computing
SPEAKER: Kang Feng Ng

ABSTRACT. Categorical quantum mechanics places finite-dimensional quantum theory in the context of compact closed categories, with an emphasis on diagrammatic reasoning. In this framework, two equational diagrammatic calculi have been proposed for pure-state qubit quantum computing: the ZW calculus, developed by Coecke, Kissinger and the first author for the purpose of qubit entanglement classification, and the ZX calculus, introduced by Coecke and Duncan to give an abstract description of complementary observables. Neither calculus, however, provided a complete axiomatisation of their model.

In this paper, we present extended versions of ZW and ZX, and show their completeness for pure-state qubit theory, thus solving two major open problems in categorical quantum mechanics. First, we extend the original ZW calculus to represent states and linear maps with coefficients in an arbitrary commutative ring, and prove completeness by a strategy that rewrites all diagrams into a normal form. We then extend the language and axioms of the original ZX calculus, and show their completeness for pure-state qubit theory through a translation between ZX and ZW specialised to the field of complex numbers. This translation expands the one used by Jeandel, Perdrix, and Vilmart to derive an axiomatisation of the approximately universal Clifford+T fragment; restricting the field of complex numbers to a suitable subring, we obtain an alternative axiomatisation of the same theory.

10:20
Diagrammatic Reasoning beyond Clifford+T Quantum Mechanics

ABSTRACT. The ZX-Calculus is a graphical language for diagrammatic reasoning in quantum mechanics and quantum information theory. An axiomatisation has recently been proven to be complete for an approximately universal fragment of quantum mechanics, the so-called Clifford+T fragment. We focus here on the expressive power of this axiomatisation beyond Clifford+T quantum mechanics. We consider the full pure qubit quantum mechanics, and mainly prove two results: (i) the axiomatisation for Clifford+T quantum mechanics is also complete for all equations involving a certain kind of linear diagrams. The linearity of the diagrams reflects the phase group structure, an essential feature of the ZX-calculus. In particular, all the axioms of the ZX-calculus involve linear diagrams. (ii) We also show that the axiomatisation for Clifford+T is not complete in general, but can be completed by adding a single (non-linear) axiom, providing a simpler axiomatisation of the ZX-calculus for pure quantum mechanics than the one recently introduced by Ng & Wang.

09:00-10:40 Session 70E (LICS)
Location: Maths LT3
09:00
Quantitative Behavioural Reasoning for Higher-order Effectful Programs: Applicative Distances

ABSTRACT. This paper studies the quantitative refinement of Abramsky's applicative similarity and bisimilarity in the context of a generalisation of Fuzz, a call-by-value lambda-calculus with a linear type system that can express program sensitivity, enriched with algebraic operations à la Plotkin and Power. To do so, a general, abstract framework for studying behavioural relations taking values over quantales is defined according to Lawvere's analysis of generalised metric spaces. Barr's notion of relator (or lax extension) is then extended to quantale-valued relations, adapting and extending results from the field of monoidal topology. Abstract notions of quantale-valued effectful applicative similarity and bisimilarity are then defined and proved to be a compatible generalised metric (in the sense of Lawvere) and pseudometric, respectively, under mild conditions.

09:20
A Generalized Modality for Recursion

ABSTRACT. Nakano's "later" modality allows types to express that the output of a function does not immediately depend on its input, and thus that computing its fixpoint is safe. This idea, guarded recursion, has proved useful in various contexts, from functional programming with infinite data structures to formulations of step-indexing internal to type theory. Categorical models have revealed that the later modality corresponds in essence to a simple reindexing of the discrete time scale.

Unfortunately, existing guarded type theories suffer from significant limitations for programming purposes. These limitations stem from the fact that the later modality is not expressive enough to capture precise input-output dependencies of functions. As a consequence, guarded type theories reject many productive definitions. Combining insights from guarded type theories and synchronous programming languages, we propose a new modality for guarded recursion. This modality can apply to a type any sufficiently well-behaved reindexing of the time scale. We call such reindexings "time warps". Several modalities from the literature, including later, correspond to fixed time warps, and thus arise as special cases of ours.

We integrate our modality into a typed lambda-calculus. We equip this calculus with an operational semantics, as well as an adequate denotational semantics in the topos of trees, a standard categorical model for guarded recursion. Building on top of categorical ideas, we describe an abstract type-checking algorithm whose completeness entails the coherence of both semantics.

09:40
A functional interpretation with state

ABSTRACT. We present a new variant of Gödel's functional interpretation in which extracted programs, rather than being pure terms of system T, interact with a global state. The purpose of the state is to store relevant information about the underlying mathematical environment. Because the validity of extracted programs can depend on the validity of the state, this offers us an alternative way of dealing with the contraction problem. Furthermore, this new formulation of the functional interpretation gives us a clear semantic insight into the computational content of proofs, and provides us with a way of improving the efficiency of extracted programs.

10:00
The Geometry of Computation-Graph Abstraction

ABSTRACT. The popular library TensorFlow (TF) has familiarised the mainstream machine-learning community with programming-language concepts such as dataflow computing and automatic differentiation. Additionally, it has introduced some genuinely new syntactic and semantic programming concepts. In this paper we study one such new concept: the ability to extract and manipulate the state of a computation graph, a subtle but important feature of TensorFlow. This feature allows the convenient specification of parameterised models by freeing the programmer of much of the bureaucratic burden of managing them, while still permitting the use of generic, model-independent, search and optimisation algorithms. We study this new language feature, which we call 'graph abstraction', in the context of the call-by-value lambda calculus, using the recently developed Dynamic Geometry of Interaction formalism. We give a simple type system guaranteeing the safety of graph abstraction, and we also show the safety of critical language properties such as garbage collection and the beta law. The semantic model also suggests that the feature could be implemented in a general-purpose functional language reasonably efficiently.

10:20
Degrees of Relatedness - A Unified Framework for Parametricity, Irrelevance, Ad Hoc Polymorphism, Intersections, Unions and Algebra in Dependent Type Theory
SPEAKER: Andreas Nuyts

ABSTRACT. Dependent type theory allows us to write programs and to prove properties about those programs in the same language. However, some properties do not require much proof, as they are evident from a program's implementation, e.g. if a polymorphic program is not ad hoc but relationally parametric, then we get parametricity theorems for free. If we want to safely shortcut proofs by relying on the evident good behaviour of a program, then we need a type-level guarantee that the program is indeed well-behaved. This can be achieved by annotating function types with a modality describing the behaviour of functions.

We consider a dependent type system with modalities for relational parametricity, irrelevance (i.e. type-checking time erasability of an argument) and ad hoc polymorphism. The interplay of three modalities and dependent types raises a number of questions. For example: If a term depends on a variable with a given modality, then how should its type depend on it? Are all modalities always applicable, e.g. should we consider parametric functions from the booleans to the naturals? Do we need any further modalities in order to properly reason about these three?

We develop a type system, based on a denotational presheaf model, that answers these questions. The core idea is to equip every type with a number of relations (just equality for small types, but more for larger types) and to describe function behaviour by saying how functions act on those relations. The system has modality-aware equality judgements (ignoring irrelevant parts) and modality-aware proving operators (for proving free theorems) which are even self-applicable. It also supports sized types, some form of intersection and union types, and parametric quantification over algebraic structures. We prove soundness in a denotational presheaf model.

09:00-10:30 Session 70F (SMT)
09:00
Invited talk: Automating Separation Logics using SMT

ABSTRACT. Separation logic (SL) has gained widespread popularity as a formal foundation of tools that analyze and verify heap-manipulating programs. Its great asset lies in its assertion language, which can succinctly express how data structures are laid out in memory, and its discipline of local reasoning, which mimics human intuition about how to prove heap programs correct. This talk discusses approaches to automated reasoning in separation logics using SMT solvers. I will present fragments of SL that admit reductions to decidable first-order theories. I will also discuss incomplete approaches based on combinations of SL-style proof-theoretic reasoning and SMT techniques for some nonstandard models of SL. These techniques have been implemented in the deductive program verifier GRASShopper.

10:00
Revisiting Enumerative Instantiation

ABSTRACT. Formal methods applications often rely on SMT solvers to automatically discharge proof obligations. SMT solvers handle quantified formulas using incomplete heuristic techniques like E-matching, and often resort to model-based quantifier instantiation (MBQI) when these techniques fail. This paper revisits enumerative instantiation, a technique that considers instantiations based on exhaustive enumeration of ground terms. Although simple, we argue that enumerative instantiation can supplement other instantiation techniques and be a viable alternative to MBQI for valid proof obligations. We first present a stronger Herbrand Theorem, better suited as a basis for the instantiation loop used in SMT solvers; it furthermore requires considering fewer instances than classical Herbrand instantiation. Based on this result, we present different strategies for combining enumerative instantiation with other instantiation techniques in an effective way. The experimental evaluation shows that the implementation of these new techniques in the SMT solver CVC4 leads to significant improvements in several benchmark libraries, including many stemming from verification efforts.
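
For readers unfamiliar with the underlying principle, the classical (unstrengthened) form of Herbrand's theorem on which the instantiation loop rests can be stated as follows; this is a textbook formulation, not the stronger result proved in the paper:

    \[
    \forall \bar{x}.\,\varphi(\bar{x}) \ \text{is unsatisfiable}
    \quad\Longleftrightarrow\quad
    \varphi(\bar{t}_1) \wedge \dots \wedge \varphi(\bar{t}_n) \ \text{is unsatisfiable for some ground terms } \bar{t}_1,\dots,\bar{t}_n,
    \]

where \varphi is quantifier-free and the signature contains at least one constant. Enumerative instantiation searches for such witnesses by exhaustively enumerating tuples of ground terms; the stronger theorem in the paper restricts which instances actually need to be considered.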

09:30-10:30 Session 71: Side channels (CSF)
09:30
Symbolic Side-Channel Analysis for Probabilistic Programs

ABSTRACT. In this paper we describe symbolic side-channel analysis techniques for detecting and quantifying information leakage, given in terms of Shannon and Min Entropy. Measuring the precise leakage is challenging due to the randomness and noise often present in program executions and side-channel observations. We account for this noise by introducing additional (symbolic) program inputs which are interpreted probabilistically, using symbolic execution with parameterized model counting. We also explore an approximate sampling approach for increased scalability. In contrast to typical Monte Carlo techniques, our approach works by sampling symbolic paths, representing multiple concrete paths, and uses pruning to accelerate computation and guarantee convergence to the optimal results. The key novelty of our approach is to provide bounds on the leakage that are provably under- and over-approximating the real leakage. We implemented the techniques in the Symbolic PathFinder tool and we demonstrate them on Java programs.
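
For background on the quantities being bounded (standard information-theoretic definitions, not taken from the paper), the Shannon- and min-entropy leakage of a secret X given a side-channel observable Y are usually written as:

    \[
    \mathcal{L}_{\mathrm{Shannon}} = H(X) - H(X \mid Y), \qquad H(X) = -\sum_{x} p(x)\,\log_2 p(x),
    \]
    \[
    \mathcal{L}_{\min} = H_\infty(X) - H_\infty(X \mid Y), \qquad H_\infty(X) = -\log_2 \max_{x} p(x),
    \]

where H(X | Y) and H_\infty(X | Y) are the corresponding conditional entropies. The bounds computed by the symbolic analysis are guaranteed to under- and over-approximate these quantities.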

10:00
Secure compilation of side-channel countermeasures: the case of cryptographic “constant-time”

ABSTRACT. Software-based countermeasures provide effective mitigation against side-channel attacks, often with minimal efficiency and deployment overheads. Their effectiveness is often amenable to rigorous analysis: specifically, several popular countermeasures can be formalized as information flow policies, and correct implementation of the countermeasures can be verified with state-of-the-art analysis and verification techniques. However, in the absence of further justification, the guarantees only hold for the language (source, target, or intermediate representation) on which the analysis is performed.

We consider the problem of preserving side-channel countermeasures by compilation for cryptographic “constant-time”, a popular countermeasure against cache-based timing attacks. We present a general method, based on the notion of 2-simulation, for proving that a compilation pass preserves the constant-time countermeasure. Using the Coq proof assistant, we verify the correctness of our method and of several representative instantiations.
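
To make the countermeasure concrete for readers outside security, the sketch below (our own illustration in Python, not code from the paper) contrasts an early-exit comparison, whose running time depends on the secret, with a constant-time variant whose control flow does not:

    def leaky_equal(secret: bytes, guess: bytes) -> bool:
        # Early exit: the running time reveals the length of the correct prefix.
        if len(secret) != len(guess):
            return False
        for s, g in zip(secret, guess):
            if s != g:
                return False
        return True

    def constant_time_equal(secret: bytes, guess: bytes) -> bool:
        # No secret-dependent branches: the whole input is always scanned and
        # differences are accumulated with bitwise operations.
        if len(secret) != len(guess):
            return False
        diff = 0
        for s, g in zip(secret, guess):
            diff |= s ^ g
        return diff == 0

The preservation question addressed by the paper is whether a compiler may turn code shaped like the second function back into code shaped like the first, e.g. by introducing an early exit during optimization.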

10:00-10:30 Session 72: Unification (FSCD)
10:00
Term-Graph Anti-Unification
SPEAKER: Temur Kutsia

ABSTRACT. We study anti-unification for possibly cyclic, unranked term-graphs and develop an algorithm that computes a minimal complete set of least general generalizations for them. For bisimilar graphs the algorithm computes the join in the lattice generated by a functional bisimulation. In addition, we consider the case when the graph edges are not ordered (modeled by commutativity).

These results generalize anti-unification for ranked and unranked terms to the corresponding term-graphs, and also solve anti-unification problems for rational terms and dags. Our results open a way to widen anti-unification-based code clone detection techniques from a tree representation to a graph representation of the code.
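
As a minimal illustration of the underlying idea, here is the classical first-order (tree) case that the paper generalizes to possibly cyclic, unranked term-graphs; this is our own Python sketch, not the authors' algorithm:

    # Terms are nested tuples (symbol, (subterm, ...)); generalization
    # variables are plain strings introduced on demand.
    def lgg(s, t, store=None):
        """Least general generalization of first-order terms s and t."""
        if store is None:
            store = {}  # maps a disagreement pair to its shared variable
        if isinstance(s, tuple) and isinstance(t, tuple) \
                and s[0] == t[0] and len(s[1]) == len(t[1]):
            return (s[0], tuple(lgg(a, b, store) for a, b in zip(s[1], t[1])))
        if (s, t) not in store:               # reuse one variable per distinct pair,
            store[(s, t)] = f"x{len(store)}"  # so lgg(f(a,a), f(b,b)) = f(x,x)
        return store[(s, t)]

    a, b = ("a", ()), ("b", ())
    print(lgg(("f", (a, ("g", (a,)))), ("f", (b, ("g", (b,))))))
    # -> ('f', ('x0', ('g', ('x0',))))

Running it on f(a, g(a)) and f(b, g(b)) yields f(x0, g(x0)): the repeated disagreement a/b is generalized by a single shared variable, which is what makes the result least general. The paper extends this idea to graphs with cycles, unranked symbols and unordered edges.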

10:30-11:00 Coffee Break
10:50-12:30 Session 73A: Reachability and safety analysis (ADHS)
10:50
Reachability Analysis for One Dimensional Linear Parabolic Equation

ABSTRACT. Partial differential equations (PDEs) mathematically describe a wide range of phenomena such as fluid dynamics or quantum mechanics. Although great achievements have been accomplished in the field of numerical methods for solving PDEs, from a safety verification (or falsification) perspective, methods are still needed to verify (or falsify) a system whose dynamics is specified as a PDE that may depend not only on space, but also on time. As many cyber-physical systems (CPS) involve sensing and control of physical phenomena modeled as PDEs, reachability analysis of PDEs provides novel methods for safety verification and falsification. As a first step to address this challenging problem, we propose a reachability analysis approach leveraging the well-known Galerkin Finite Element Method (FEM) for a class of one-dimensional linear parabolic PDEs with fixed but uncertain inputs and initial conditions, a subclass of PDEs that is useful for modeling, for instance, heat flows. In particular, a continuous approximate reachable set of the parabolic PDE is computed using linear interpolation. Since complete conservativeness is hardly achieved by using the approximate reachable set alone, we investigate the error bound between the numerical solution and the exact, analytically unsolvable solution, and use it to bloat the continuous approximate reachable set and thereby enhance conservativeness. This bloated reachable set is then used for safety verification and falsification. In the case that the safety specification is violated, our approach produces a numerical trace to prove that there exists an initial condition and input that lead the system to an unsafe state.
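
For concreteness, a representative instance of the PDE class considered (our notation; the paper's exact formulation may differ) is the one-dimensional heat-type equation with an uncertain input and initial condition:

    \[
    \frac{\partial u}{\partial t}(x,t) = \alpha\,\frac{\partial^2 u}{\partial x^2}(x,t) + f(x), \qquad x \in (0,L),\ t \ge 0,
    \]
    \[
    u(0,t) = u(L,t) = 0, \qquad u(x,0) = u_0(x),
    \]

where f and u_0 are fixed but only known to lie in given sets. A Galerkin FEM semi-discretization turns this into a finite-dimensional linear ODE system whose approximate reachable set can then be bloated by the FEM error bound, as described above.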

11:15
On Generating A Variety of Unsafe Counterexamples for Linear Dynamical Systems

ABSTRACT. Counterexamples encountered in formal verification are typically used as evidence for violation of a specification. They also play a crucial role in CEGAR-based techniques, where the counterexample guides the refinements to be performed on the abstractions. While several scalable techniques have been developed for safety verification of hybrid systems, less attention has been paid to extracting the various types of counterexamples for safety violations. Since these systems are infinite-state systems, the number of counterexamples for safety violations is potentially infinite, and hence searching for the right counterexample becomes a challenging task. In this paper, we present a technique for providing various types of counterexamples for a safety violation of a linear dynamical system. More specifically, we develop algorithms to extract the longest counterexample, the execution that stays in the unsafe set for the most time, and the deepest counterexample, the execution that ventures the furthest into the unsafe set in a specific direction provided by the user.

11:40
T-Barrier Certificates: A Continuous Analogy to K-Induction

ABSTRACT. Safety proofs of discrete and continuous systems often use related proof approaches, and insight can be obtained by comparing reasoning methods across domains. For example, proofs using inductive invariants in discrete systems are analogous to barrier certificate methods in continuous systems. In this paper, we present and prove the soundness of continuous and hybrid analogs to the k-induction proof rule, which we call t-barrier certificates. The method combines symbolic reasoning and time-bounded reachability along the barrier in order to prove system safety. Compared with traditional barrier certificates, a larger class of functions can be shown to be t-barrier certificates, so we expect them to be easier to find. Compared with traditional reachability analysis, t-barrier certificates can be computationally tractable in nonlinear settings despite large initial sets, and they prove time-unbounded safety. We demonstrate the feasibility of the approach with a nonlinear harmonic oscillator example, using sympy and Z3 for symbolic reasoning and Flow* for reachability analysis.
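
For reference, the discrete k-induction rule that t-barrier certificates are presented as a continuous analogue of can be stated as follows (standard formulation over an initial predicate I, transition relation T and safety property P; not taken from the paper):

    \[
    \text{(base)}\quad I(s_0) \wedge \bigwedge_{i=0}^{k-2} T(s_i, s_{i+1}) \ \Rightarrow\ \bigwedge_{i=0}^{k-1} P(s_i),
    \]
    \[
    \text{(step)}\quad \bigwedge_{i=0}^{k-1} \big( P(s_i) \wedge T(s_i, s_{i+1}) \big) \ \Rightarrow\ P(s_k).
    \]

Together, the two premises imply that P holds in every reachable state; ordinary inductive invariance is the special case k = 1, and larger k admits properties that are not inductive on their own, which is the flexibility the t-barrier construction transports to the continuous setting.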

12:05
Learning and Verification of Feedback Control Systems Using Feedforward Neural Networks

ABSTRACT. We present an approach to learn and formally verify feedback laws for data-driven models of neural networks. Neural networks are emerging as powerful and general data-driven representations for functions. This has led to their increased use in data-driven plant models and the representation of feedback laws in control systems. However, it is hard to formally verify properties of such feedback control systems. The proposed learning approach uses a receding horizon formulation that samples from the initial states and disturbances to enforce properties such as reachability, safety and stability. Next, our verification approach uses an over-approximate reachability analysis over the system, supported by range analysis for feedforward neural networks. We report promising results obtained by applying our techniques on several challenging nonlinear dynamical systems.

10:50-12:30 Session 73B: Observation and Estimation (ADHS)
10:50
Optimization-Based Design of Bounded-Error Estimators Robust to Missing Data

ABSTRACT. Non-asymptotic bounded-error state estimators that provide hard bounds on the estimation error are crucial for safety-critical applications. This paper proposes a class of optimal bounded-error affine estimators that achieve a novel property we call Equalized Recovery and that can be computed by leveraging ideas from the dual problem of affine finite-horizon optimal control design. In particular, by using Q-parametrization, the estimator design problem is reduced to a convex optimization problem. An extension of this estimator to handle missing data (e.g., due to packet drops or sensor glitches) is also proposed. These ideas are illustrated with a numerical example motivated by vehicle safety systems.

11:15
Observability of Linear Hybrid Systems with Unknown Inputs and Discrete Dynamics Modeled by Petri Nets

ABSTRACT. This work deals with observability analysis for linear hybrid systems (LHSs) considering both known and unknown inputs and constrained discrete dynamics, modeled by Petri nets. For this, the concept of eventual observability is recalled as the possibility of uniquely determining both the discrete and the continuous states after a finite number of switchings. In this way, the information provided by the continuous and the discrete outputs of the LHS can be combined to determine the discrete state after a finite number of switchings. Next, based on the knowledge of the visited locations, a continuous observer can estimate the continuous state. It is shown that under this approach the observability conditions are greatly relaxed with respect to other approaches in the literature; in particular, neither the observability of the linear systems nor the observability of the underlying discrete event system is required.

11:40
On Approximate Predictability of Metric Systems

ABSTRACT. In this paper we introduce and characterize the notion of approximate predictability for the general class of metric systems, which are a powerful modeling framework to deal with complex heterogeneous systems such as hybrid systems. Approximate predictability corresponds to the possibility of predicting the occurrence of specific states belonging to a particular subset of interest, in advance with respect to their occurrence, on the basis of observations corrupted by measurement errors. We establish a relation between approximate predictability of a given metric system and approximate predictability of a metric system that approximately simulates the given one. This relation allows checking approximate predictability of a system with an infinite number of states, provided that one is able to construct a metric system with a finite number of states and inputs, approximating the original one in the sense of approximate simulation. The analysis of approximate predictability of Piecewise Affine (PWA) systems is carried out as an application of the proposed approach.

12:05
Input Design for Nonlinear Model Discrimination Via Affine Abstraction

ABSTRACT. This paper considers the design of separating input signals in order to discriminate among a finite number of uncertain nonlinear models. Each nonlinear model corresponds to a system operating mode, unobserved intents of other drivers or robots, or to fault types or attack strategies, etc., and the separating inputs are designed such that the output trajectories of all the nonlinear models are guaranteed to be distinguishable from each other under any realization of uncertainties in the initial condition, model discrepancies or noise. We propose a two-step approach. First, using an optimization-based approach, we over-approximate nonlinear dynamics by uncertain affine models, as abstractions that preserve all its system behaviors such that any discrimination guarantees for the affine abstraction also hold for the original nonlinear system. Then, we propose a novel solution in the form of a mixed-integer linear program (MILP) to the active model discrimination problem for uncertain affine models, which includes the affine abstraction and thus, the nonlinear models. Finally, we demonstrate the effectiveness of our approach for identifying the intention of other vehicles in a highway lane changing scenario.

11:00-12:30 Session 74A: Security protocols II (CSF)
11:00
A little more conversation, a little less action, a lot more satisfaction: Global states in ProVerif.

ABSTRACT. ProVerif is a popular tool for the fully automatic analysis of security protocols, offering very good support to detect flaws or prove security. One exception is the case of protocols with global states such as counters, tables, or more generally, memory cells. ProVerif fails to analyse such protocols, due to its internal abstraction.

Our key idea is to devise a generic transformation of the security properties queried to ProVerif. We prove the soundness of our transformation and implement it in a front-end, GSVerif. Our experiments show that our front-end (combined with ProVerif) outperforms the few existing tools, both in terms of efficiency and protocol coverage. We successfully apply our tool to a dozen protocols from the literature, yielding the first fully automatic proofs of a security API and a payment protocol from the literature.

11:30
Automated Unbounded Verification of Stateful Cryptographic Protocols with Exclusive OR
SPEAKER: Jannik Dreier

ABSTRACT. Exclusive-or (XOR) operations are common in cryptographic protocols, in particular in RFID protocols and electronic payment protocols. Although there are numerous applications, due to the inherent complexity of faithful models of XOR, there is only limited tool support for the verification of cryptographic protocols using XOR.

The Tamarin prover is a state-of-the-art verification tool for cryptographic protocols in the symbolic model. In this paper, we improve the underlying theory and the tool to deal with an equational theory modeling XOR operations. The XOR theory can be freely combined with all equational theories previously supported, including user-defined equational theories. This makes Tamarin the first tool to support simultaneously this large set of equational theories, protocols with global mutable state, an unbounded number of sessions, and complex security properties including observational equivalence. We demonstrate the effectiveness of our approach by analyzing several protocols that rely on XOR, in particular multiple RFID protocols, where we can identify attacks as well as provide proofs.

12:00
A Typing Result for Stateful Protocols

ABSTRACT. There are several typing results that, for certain classes of protocols, show it is without loss of attacks to restrict the intruder to sending only well-typed messages. So far, all these typing results hold only for relatively simple protocols that do not keep a state beyond single sessions, excluding stateful protocols that, e.g., maintain long-term databases. Recently, several verification tools for stateful protocols have been proposed, e.g., Set-pi, AIF-omega, and SAPIC/Tamarin, but for none of these has a typing result been established. The main contribution of this paper is a typing result, for a large class of stateful protocols, based on a symbolic protocol model. We illustrate how to connect several formalisms for stateful protocols to this symbolic model. Finally, we discuss how the conditions of our typing result apply to existing protocols, or can be achieved by minor modifications.

11:00-12:30 Session 74B: Unification (FSCD)
11:00
Fixed-Point Constraints for Nominal Equational Unification

ABSTRACT. We propose a new axiomatisation of the alpha-equivalence relation for nominal terms, based on a primitive notion of fixed-point constraint. We show that the standard freshness relation between atoms and terms can be derived from the more primitive notion of permutation fixed-point, and use this result to prove the correctness of the new alpha-equivalence axiomatisation. This gives rise to a new notion of nominal unification, where solutions for unification problems are pairs of a fixed-point context and a substitution. This notion of nominal unifier behaves better than the standard notion based on freshness contexts: nominal unification remains finitary in the presence of equational theories such as commutativity, whereas it becomes infinitary when unifiers are expressed using freshness contexts.

11:30
Higher-Order Equational Pattern Anti-Unification
SPEAKER: David Cerna

ABSTRACT. We consider anti-unification for simply typed lambda terms in associative, commutative, and associative-commutative theories and develop a sound and complete algorithm which takes two lambda terms and computes their generalizations in the form of higher-order patterns. The problem is finitary: the minimal complete set of generalizations contains finitely many elements. We define the notion of optimal solution and investigate special fragments of the problem for which the optimal solution can be computed in linear or polynomial time.

12:00
Nominal Unification with Atom and Context Variables

ABSTRACT. Automated deduction in higher-order program calculi, where properties of transformation rules are demanded, or confluence or other equational properties are requested, can often be done by syntactically computing overlaps (critical pairs) of reduction rules and transformation rules. Since higher-order calculi have alpha-equivalence as their fundamental equivalence, the reasoning procedure must deal with it. We define ASD1-unification problems, which are higher-order equational unification problems employing variables for atoms, expressions and contexts, with additional distinct-variable constraints, and which have to be solved w.r.t. alpha-equivalence. Our proposal is to extend nominal unification to solve these unification problems. We succeeded in constructing the nominal unification algorithm NomUnifyASC. We show that NomUnifyASC is sound and complete for this problem class, and outputs a set of unifiers with constraints in nondeterministic polynomial time if the final constraints are satisfiable. We also show that solvability of the output constraints can be decided in NEXPTIME, and, for a fixed number of context variables, in NP. For terms without context variables and atom variables, NomUnifyASC runs in polynomial time, is unitary, and extends the classical problem by permitting distinct-variable constraints.

11:00-12:30 Session 74C (ITP)
Location: Blavatnik LT1
11:00
Verified Memoization and Dynamic Programming
SPEAKER: Simon Wimmer

ABSTRACT. We present a lightweight framework in Isabelle/HOL for the automatic verified (functional or imperative) memoization of recursive functions. Our tool constructs a memoized version of the recursive function and proves a correspondence theorem between the two functions. A number of simple techniques allow us to achieve bottom-up computation and space-efficient memoization. The framework’s utility is demonstrated on a number of classic dynamic programming problems.
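
To illustrate the program transformation whose correctness the framework certifies (a plain Python analogue, not the Isabelle/HOL tool itself), here is a recursive specification of edit distance together with a memoized version that the framework would prove extensionally equal to it:

    from functools import lru_cache

    def edit_distance(a: str, b: str) -> int:
        # Direct recursive specification: exponential time without sharing.
        if not a: return len(b)
        if not b: return len(a)
        return min(edit_distance(a[1:], b[1:]) + (a[0] != b[0]),
                   edit_distance(a[1:], b) + 1,
                   edit_distance(a, b[1:]) + 1)

    @lru_cache(maxsize=None)
    def edit_distance_memo(a: str, b: str) -> int:
        # Same defining equations, but results are cached per argument pair.
        if not a: return len(b)
        if not b: return len(a)
        return min(edit_distance_memo(a[1:], b[1:]) + (a[0] != b[0]),
                   edit_distance_memo(a[1:], b) + 1,
                   edit_distance_memo(a, b[1:]) + 1)

    assert edit_distance("kitten", "sitting") == edit_distance_memo("kitten", "sitting") == 3

The correspondence theorem mentioned in the abstract plays the role of the assertion above, but proved once and for all inputs rather than tested on one example.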

11:30
Fast Machine Words in Isabelle/HOL

ABSTRACT. Code generated from a verified formalisation typically runs faster when it uses machine words instead of a syntactic representation of integers. This paper presents a library for Isabelle/HOL that links the existing formalisation of words to the machine words that the four target languages of Isabelle/HOL’s code generator provide. Our design ensures that (i) Isabelle/HOL machine words can be mapped soundly and efficiently to all target languages despite the differences in the APIs; (ii) they can be used uniformly with the three evaluation engines in Isabelle/HOL, namely code generation, normalisation by evaluation, and term rewriting; and (iii) they blend in with the existing formalisations of machine words. Several large-scale formalisation projects use our library to speed up their generated code.

To validate the unverified link between machine words in the logic and those in the target languages, we extended Isabelle/HOL with a general-purpose testing facility that compiles test cases expressed within Isabelle/HOL to the four target languages and runs them with the most common implementations of each language. When we applied this to our library of machine words, we discovered miscomputations in the 64-bit word library of one of the target-language implementations.

12:00
Proof Pearl: Constructive Extraction of Cycle Finding Algorithms

ABSTRACT. We present a short implementation of the well-known Tortoise and Hare cycle finding algorithm attributed to Robert W. Floyd in the constructive setting of axiom-free Coq. This algorithm is interesting from a constructive perspective because it is both very simple and potentially non-terminating (depending on the input). To overcome potential non-termination, we encode the given termination argument (there exists a cycle) into a bar inductive predicate that we use as a termination certificate for the Coq implementation. From this development, we extract the standard OCaml implementation of this algorithm. We generalize the method to Floyd's full algorithm, which computes the entry point and the period of the cycle of the iterated sequence, when they exist. We also consider the case of Brent's more efficient algorithm for computing the period only. Again, the extracted code corresponds to the standard OCaml implementations of these algorithms.
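
For comparison with the extracted OCaml code, the textbook form of Floyd's full algorithm, which returns the period and the entry point of the cycle, looks as follows (our Python rendering, assuming the iterated sequence x0, f(x0), f(f(x0)), ... does eventually cycle):

    def floyd(f, x0):
        """Return (period, entry index) of the cycle of x0, f(x0), f(f(x0)), ..."""
        # Phase 1: tortoise moves one step, hare two, until they meet inside the cycle.
        tortoise, hare = f(x0), f(f(x0))
        while tortoise != hare:
            tortoise, hare = f(tortoise), f(f(hare))
        # Phase 2: restart the tortoise; the next meeting point is the cycle entry.
        mu, tortoise = 0, x0
        while tortoise != hare:
            tortoise, hare, mu = f(tortoise), f(hare), mu + 1
        # Phase 3: walk the hare once around the cycle to measure its period.
        lam, hare = 1, f(tortoise)
        while tortoise != hare:
            hare, lam = f(hare), lam + 1
        return lam, mu

    # Example: 0 -> 1 -> 2 -> 3 -> 4 -> 2 has entry index 2 and period 3.
    print(floyd(lambda x: {0: 1, 1: 2, 2: 3, 3: 4, 4: 2}[x], 0))  # (3, 2)

As the abstract notes, the function loops forever when no cycle exists, which is exactly why the Coq development needs a bar-inductive termination certificate.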

11:00-12:00 Session 74D: LICS Invited Talk: Val Tannen (LICS)
Location: Maths LT1
11:00
Provenance Analysis for First-order Model Checking

ABSTRACT. Is a given finite structure a model of a given first-order sentence? The provenance analysis of this question determines how its answer depends on the atomic facts that determine the structure. Provenance questions like this one have emerged in databases, scientific workflows, networks, formal verification, and other areas.

In joint work with Erich Grädel (RWTH Aachen University), done during the Logical Structures in Computation Program at the Simons Institute, we extend the semiring provenance framework, developed in databases, to the first-order model checking problem. This provides a non-standard semantics for first-order logic that refines logical truth to values in commutative semirings: a semiring of provenance polynomials, the Viterbi semiring of confidence scores, access control semirings, etc. We discuss applications and extensions of this framework.

11:00-12:00 Session 74E: SMT (SAT)
11:00
Effective use of SMT solvers for Program Equivalence Checking through Invariant Sketching and Query Decomposition

ABSTRACT. Program equivalence checking is a fundamental problem in computer science with applications to translation validation and automatic synthesis of compiler optimizations. Modern equivalence checkers employ SMT solvers to discharge proof obligations generated by their equivalence checking algorithm. Equivalence checkers also involve algorithms to infer invariants that relate the intermediate states of the two programs being compared for equivalence. We present a new algorithm, called invariant-sketching, that allows the inference of the required invariants through the generation of counter-examples using SMT solvers. We also present an algorithm, called query-decomposition, that allows effective use of SMT solvers for application to equivalence checking. Both invariant-sketching and query-decomposition help us prove equivalence across program transformations that could not be handled by previous equivalence checking algorithms.

11:30
Experimenting on Solving Nonlinear Integer Arithmetic with Incremental Linearization
SPEAKER: Ahmed Irfan

ABSTRACT. Incremental linearization is a conceptually simple, yet effective, technique that we have recently proposed for solving SMT problems over nonlinear real arithmetic constraints. In this paper, we show how the same approach can be applied successfully also to the harder case of nonlinear integer arithmetic problems. We describe in detail our implementation of the basic ideas inside the MathSAT SMT solver, and evaluate its effectiveness with an extensive experimental analysis over all nonlinear integer benchmarks in SMT-LIB. Our results show that MathSAT is very competitive with (and often outperforms) state-of-the-art SMT solvers based on alternative techniques.
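
To give a flavour of the technique (our summary of the general incremental-linearization idea; the exact lemma schemata used by MathSAT may differ), each product x*y is first abstracted by a fresh variable m; whenever the linear solver returns a model in which m disagrees with the multiplication at the model point (a, b), linear tangent-plane lemmas such as the following are added and the search is restarted:

    \[
    (x \ge a \wedge y \ge b) \vee (x \le a \wedge y \le b) \ \Rightarrow\ m \ \ge\ b\,x + a\,y - a\,b,
    \]
    \[
    (x \ge a \wedge y \le b) \vee (x \le a \wedge y \ge b) \ \Rightarrow\ m \ \le\ b\,x + a\,y - a\,b.
    \]

Both lemmas are valid for m = x*y because m - (b\,x + a\,y - a\,b) = (x - a)(y - b), and since a and b are concrete numbers from the spurious model, the added constraints stay within linear arithmetic.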

11:00-12:30 Session 74F (SMT)
11:00
Building Better Bit-Blasting for Floating-Point Problems

ABSTRACT. An effective approach to handling the theory of floating-point is to reduce it to the theory of bit-vectors. Implementing the required encodings is complex, error-prone, and requires a deep understanding of floating-point hardware. This paper presents SymFPU, a library of encodings that can be included in solvers. It also includes a verification argument for its correctness, and experimental results showing that its use in CVC4 outperforms all previous tools. Beyond the significantly improved performance and correctness, it is hoped this will give solvers a simple route to adding support for the theory of floating-point.

11:30
The next 10^4 UppSAT Approximations

ABSTRACT. Reasoning about complex SMT theories, for instance bit-vectors, floating-point arithmetic, or strings, is still quite challenging. Approximations offer a means of mapping a complex theory into a simpler one, and attempting to reconstruct models or proofs in the original theory afterwards. UppSAT is an approximating abstract SMT solver, based on the systematic approximation refinement framework. The framework can be instantiated using an approximation and a back-end SMT solver. Implemented in Scala, UppSAT is designed with easy and flexible specification of approximations in mind. We discuss the structure of approximations in UppSAT and the components needed for their specification. Because approximation components can be defined relatively independently, they can be flexibly combined to obtain many different flavours of approximation. In this extended abstract we discuss what kinds of approximations can be expressed in UppSAT, along with design choices that enable the modular mix-and-match specification of approximations. Finally, we also outline ideas for several new approximations and strategies which we are currently working on (with first results expected to be available at the workshop).

12:00
Alt-Ergo 2.2

ABSTRACT. Alt-Ergo is an SMT solver jointly developed by Université Paris-Sud and the OCamlPro company. The first version was released in 2006. Since then, its architecture has been continuously adapted for proving formulas generated by software development frameworks. As type systems with polymorphism arise naturally in such platforms, the design of Alt-Ergo has been guided (and constrained) by a native, non-SMT-LIB-compliant input language for a polymorphic first-order logic.

In this paper, we present the latest version of Alt-Ergo, its architecture and main features. The main recent work is support for a conservative polymorphic extension of the SMT-LIB 2 standard. We measure Alt-Ergo's performance with this new frontend on a set of benchmarks coming from the deductive program verification systems Frama-C, SPARK 2014, Why3 and Atelier-B, as well as from the SMT-LIB benchmarks library.

12:00-12:40 Session 75A (LICS)
Location: Maths LT1
12:00
On computability and tractability for infinite sets

ABSTRACT. We propose a definition for computable functions on hereditarily definable sets. Such sets are possibly infinite data structures that can be defined using a fixed underlying logical structure, such as (N,=). We show that, under suitable assumptions on the underlying structure, a programming language called definable while programs captures exactly the computable functions. Next, we introduce a complexity class called fixed-dimension polynomial time, which intuitively speaking describes polynomial computation on hereditarily definable sets. We show that this complexity class contains all functions computed by definable while programs with suitably defined resource bounds. Proving the converse inclusion would prove that Choiceless Polynomial Time with Counting captures order-invariant polynomial time on finite graphs.

12:20
Definable Ellipsoid Method, Sums-of-Squares Proofs, and the Isomorphism Problem

ABSTRACT. The ellipsoid method is an algorithm that solves the (weak) feasibility and linear optimization problems for convex sets by making oracle calls to their (weak) separation problem. We observe that the previously known method to show that this reduction can be done in fixed-point logic with counting (FPC) for linear and semidefinite programs applies to any family of explicitly bounded convex sets. We use this observation to show that the exact feasibility problem for semidefinite programs is expressible in the infinitary version of FPC. As a corollary we get that, for the isomorphism problem, the Lasserre/Sums-of-Squares semidefinite programming hierarchy of relaxations collapses to the Sherali-Adams linear programming hierarchy, up to a small loss in the degree.

12:00-12:30 Session 75B: Tools & Applications I (SAT)
12:00
XORSAT Set Membership Filters
SPEAKER: Sean Weaver

ABSTRACT. Set membership filters are used as a primary test for whether large sets contain a given element. The most common such filter is the Bloom filter. Most pertinent to this article is the recently introduced Satisfiability (SAT) filter. This article proposes the XOR-Satisfiability (XORSAT) filter, a variant of the SAT filter based on random k-XORSAT. Experimental results show that this new filter can be more than 99% efficient (i.e., achieves the information-theoretic limit) while also having a query speed comparable to the standard Bloom filter, making it practical for use with very large data sets.
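
As a rough illustration of the mechanism (our own sketch, not the authors' implementation), each element is hashed to a small XOR constraint over the filter's bit vector; building the filter amounts to solving the resulting random k-XORSAT system once (e.g. by Gaussian elimination) and storing a satisfying assignment, while querying only checks the element's constraint against the stored bits:

    import hashlib

    def xor_constraint(element: str, n_vars: int, k: int = 3):
        # Hash the element to (up to) k variable indices and a parity bit.
        idxs = sorted({int.from_bytes(hashlib.sha256(f"{i}|{element}".encode()).digest()[:4],
                                      "big") % n_vars
                       for i in range(k)})
        parity = hashlib.sha256(f"p|{element}".encode()).digest()[0] & 1
        return idxs, parity

    def query(element: str, assignment: list) -> bool:
        # "Possibly in the set" iff the stored assignment satisfies the constraint.
        # Elements inserted at build time satisfy theirs by construction; an
        # unrelated element passes with probability about 1/2 per stored solution,
        # so several independent solutions are stored to drive the rate down.
        idxs, parity = xor_constraint(element, len(assignment))
        return sum(assignment[i] for i in idxs) % 2 == parity

This is only the query side; the efficiency claims in the paper come from how close random k-XORSAT instances can be pushed to the information-theoretic limit while remaining solvable.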

12:30-14:00 Lunch Break
14:00-15:40 Session 76A: Optimal and model predictive control (ADHS)
14:00
Occupation Measure Methods for Modelling and Analysis of Biological Hybrid Systems

ABSTRACT. Mechanistic models in biology often involve numerous parameters about which we do not have direct experimental information. The traditional approach is to fit these parameters using extensive numerical simulations (e.g. by the Monte-Carlo method), and eventually to revise the model if the predictions do not correspond to the actual measurements. In this work we propose a methodology for hybrid system model revision, when new types of functions are needed to capture time-varying parameters. To this end, we formulate a hybrid optimal control problem with intermediate points as successive infinite-dimensional linear programs (LP) on occupation measures. Then, these infinite-dimensional LPs are solved using a hierarchy of semidefinite relaxations. The whole procedure is applied to a recent model for haemoglobin production in erythrocytes.

14:25
Safety Control, a Quantitative Approach

ABSTRACT. Safety control consists in maintaining the state of a given system inside a specified set of safe states. Traditionally, the problem is tackled using set-theoretic methods, which are mostly qualitative: states are partitioned between safety-controllable states (i.e. states that belong to the maximal controlled invariant subset of the safe set) and safety-uncontrollable states. In this paper, we present a quantitative approach to safety controller synthesis. Our approach makes it possible to compute a measure of safety, which quantifies how far from the unsafe set (respectively, how close to the safe set) one can stay when starting from a given controllable (respectively, uncontrollable) state. For finite transition systems, such a measure can be computed in finite time using a functional fixed-point iteration. In addition, we show that the level sets of the functional fixed-point coincide with the maximal controlled invariant subsets of a parameterized family of sets and that one can synthesize a common safety controller for all the sets of the family. In the second part of the paper, we show how the approach can be used in the framework of abstraction-based synthesis to lift these results to infinite transition systems with finite abstractions. To illustrate its effectiveness, we apply the approach to a simple boost DC-DC converter.

14:50
Computing Controlled Invariant Sets for Hybrid Systems with Applications to Model-Predictive Control

ABSTRACT. In this paper, we develop a method for computing controlled invariant sets using Semidefinite Programming. We apply our method to the controller design problem for switching affine systems with polytopic safe sets. The task is reduced to a semidefinite programming problem by enforcing an invariance relation in the dual space of the geometric problem. The paper ends with an application to safety critical model predictive control.

15:15
Data-Driven Switched Affine Modeling for Model Predictive Control

ABSTRACT. Model Predictive Control (MPC) is a well-consolidated technique to design optimal control strategies, leveraging the capability of a mathematical model to predict the system's behavior over a predictive horizon. However, building physics-based models for large-scale systems, such as buildings and process control, can be cost- and time-prohibitive. To overcome this problem we propose in this paper a methodology to exploit machine learning techniques (i.e. regression trees and random forests) in order to build a state-space switched affine dynamical model of a large-scale system using only historical data. A Finite Receding Horizon Control (RHC) setup using control-oriented data-driven models based on regression trees and random forests is presented as well. A comparison with an optimal MPC benchmark and a related methodology is provided on an energy management system to show the performance of the proposed modeling framework. Simulation results show that the proposed approach is very close to the optimum and provides better performance than the related methodology in terms of cost function optimization.

14:00-15:40 Session 76B: Networked systems (ADHS)
14:00
Network-Aware Design of State-Feedback Controllers for Linear Wireless Networked Control Systems

ABSTRACT. We study the problem of stabilizing the origin of a plant, modeled as a discrete-time linear system, for which the communication with the controller is ensured by a wireless network. The transmissions over the wireless channel are characterized by the so-called stochastic allowable transmission intervals (SATI), a stochastic version of the maximum allowable transmission interval (MATI). Instead of deterministic transmissions, SATI gives stability conditions in terms of the cumulative probability of successful transmission over N steps. We argue that SATI is well-suited for wireless networked control systems to cope both with the stochastic nature of the communications and with the design of energy-efficient communication strategies. Our objective is to synthesize a stabilizing state-feedback controller and SATI parameters simultaneously. We model the overall closed-loop system as a Markov jump linear system and we first provide linear conditions for the stability of the wireless networked control system in a mean-square sense. We then provide linear matrix inequality conditions for the design of state-feedback controllers to ensure stability of the closed-loop system. These conditions can be used to obtain both the controller and the SATI. A numerical example is presented to illustrate our results.

14:25
Space-Time Budget Allocation for Marketing Over Social Networks

ABSTRACT. We address formally the problem of opinion dynamics when the agents of a social network (e.g., consumers) are not only influenced by their neighbors but also by an external influential entity referred to as a marketer. The influential entity tries to sway the overall opinion to its own side by using a specific influence budget during discrete-time advertising campaigns; consequently, the overall closed-loop dynamics becomes a linear-impulsive (hybrid) one. The main technical issue addressed is finding how the marketer should allocate its budget over time (through marketing campaigns) and over space (among the agents) such that the agents’ opinion be as close as possible to a desired opinion; for instance, the marketer may prioritize certain agents over others based on their influence in the social graph. The corresponding space-time allocation problem is formulated and solved for several special cases of practical interest. Valuable insights can be extracted from our analysis. For instance, for most cases we prove that the marketer has an interest in investing most of its budget at the beginning of the process and that budget should be shared among agents according to the famous water-filling allocation rule. Numerical examples illustrate the analysis.

14:50
Tracking Control via Variable-gain Integrator and Lookahead Simulation: Application to Leader-follower Multiagent Networks

ABSTRACT. This paper concerns an output-tracking technique based on a standalone integrator with variable gain. The control algorithm, recently proposed by the authors, appears to have a wide scope in linear and nonlinear systems while aiming at simple and efficient computations in the loop. For a class of memoryless systems it resembles a Newton-Raphson flow for solving the loop equation, but it is applicable to a broader class of dynamical systems. Furthermore, the technique is suitable for tracking constant as well as time-dependent reference signals, and its convergence performance is robust with respect to computational errors in the loop. The objective of this paper is to test the technique on a control problem arising in multi-agent systems. Specifically, we are motivated by regulating trajectories of follower agents by a lead agent in a platoon or swarm of multi-agent networks connected by the graph Laplacian. We study a particular example, which is challenging from the standpoint of control, with the aim of identifying the limits of the technique and investigating their possible extensions.

15:15
Hybrid System Modeling of Multi-Agent Coverage Problems with Energy Depletion and Repletion

ABSTRACT. We present a hybrid system model describing the behavior of multiple agents cooperating to solve an optimal coverage problem under energy depletion and repletion constraints. The model captures the controlled switching of agents between coverage (when energy is depleted) and battery charging (when energy is replenished) modes. Our analysis contains three parts. The first part shows how the model guarantees the feasibility of the coverage problem by defining a guard function on each agent’s battery level to prevent it from dying on its way to a charging station. The second part provides two scheduling algorithms to solve the contention problem of agents competing for the only charging station in the mission space. The third part shows the optimality of the motion plan adopted in the proposed model.

14:00-15:00 Session 76C: FSCD General Meeting (FSCD)

AGENDA of FSCD18 General Meeting

  1. Welcome by Steering Committee Chair, Luke Ong
  2. Report of FSCD18 PC Chair, Helene Kirchner
  3. Report of FSCD18 Conference Chair, Paula Severi
  4. Progress Report of FSCD19 - PC Chair: Herman Geuvers - Conference Chair: Jakob Rehof
  5. Election of two Steering Committee members
  6. Proposal to host FSCD20 (colocating with IJCAR2020) in Paris: Stefano Guerrini and Giulio Manzonetto
  7. AOB
  8. Handover to new SC Chair (2018-2021), Delia Kesner.
14:00-15:00 Session 76D: ITP Invited Talk: Jean-Christophe Filliâtre (ITP)
Location: Blavatnik LT1
14:00
Deductive Program Verification

ABSTRACT. Among formal methods, the deductive verification approach consists in first building verification conditions and then resorting to traditional theorem proving. Most deductive verification tools involve a high degree of proof automation through the use of SMT solvers. Yet there may be a substantial part of interactive theorem proving in program verification, such as inserting logical cuts, ghost code, or inductive proofs via lemma functions. In this talk, I will show how the Why3 tool for deductive verification resembles more and more a traditional ITP, while stressing key differences between the two.

14:00-15:40 Session 76E (LICS)
Location: Maths LT1
14:00
Computable decision making on the reals and other spaces via partiality and nondeterminism

ABSTRACT. Though many safety-critical software systems use floating point to represent real-world input and output, programmers usually have idealized versions in mind that compute with real numbers. Significant deviations from the ideal can cause errors and jeopardize safety. Some programming systems implement exact real arithmetic, which resolves this matter but complicates others, such as decision making. In these systems, it is impossible to compute (total and deterministic) discrete decisions based on connected spaces such as R. We present programming-language semantics based on constructive topology with variants allowing nondeterminism and/or partiality. Either nondeterminism or partiality suffices to allow computable decision making on connected spaces such as R. We then introduce pattern matching on spaces, a language construct for creating programs on spaces, generalizing pattern matching in functional programming, where patterns need not represent decidable predicates and also may overlap or be inexhaustive, giving rise to nondeterminism or partiality, respectively. Nondeterminism and/or partiality also yield formal logics for constructing approximate decision procedures. We implemented these constructs in the Marshall language for exact real arithmetic.
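
A toy rendering of the two escape routes discussed in the abstract (illustrative Python, not the Marshall language; reals are hypothetically represented as functions returning approximations with error at most $2^{-n}$):

  # partial_positive is total on nonzero reals and diverges at 0 (partiality);
  # nondet_less may answer either way on the overlap (0, eps) (nondeterminism).
  def partial_positive(x):
      n = 0
      while True:
          approx, err = x(n), 2.0 ** -n
          if approx - err > 0:
              return True
          if approx + err < 0:
              return False
          n += 1

  def nondet_less(x, eps):
      n = 0
      while True:
          approx, err = x(n), 2.0 ** -n
          if approx + err < eps:
              return True
          if approx - err > 0:
              return False
          n += 1

  third = lambda n: round((2 ** n) / 3) / 2 ** n    # dyadic approximations of 1/3
  print(partial_positive(third), nondet_less(third, 0.5))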

14:20
A Theory of Register Monitors

ABSTRACT. The task of a monitor is to watch, at run-time, the execution of a reactive system, and signal the occurrence of an error in the observed sequence of events. While finite-state monitors have been studied extensively, in practice, monitoring software also makes use of unbounded memory. We define a model of automata equipped with integer-valued registers which can execute only a bounded number of instructions between consecutive events, and thus can form the theoretical basis for the study of infinite-state monitors. We classify these register monitors according to the number $k$ of available registers, and the type of register instructions. In stark contrast to the theory of computability for register machines, we prove that for every $k>1$, monitors with $k+1$ counters (with instruction set $\langle +1,= \rangle$) are strictly more expressive than monitors with $k$ counters: there is a safety $\omega$-language $L_k$ such that the finite prefixes not in $L_k$ can be recognized on-line with $k+1$ but not with $k$ counters. We also show that adder monitors (with instruction set $\langle 1,+,= \rangle$) are strictly more expressive than counter monitors, but are complete for monitoring all computable safety $\omega$-languages for $k=6$. Real-time monitors are further required to signal the occurrence of an error as soon as it occurs. The expressiveness hierarchy for counter monitors carries over to real-time monitors. We then show that 2 adders cannot simulate 3 counters in real-time. Finally, we show that real-time adder monitors with inequalities are at least as expressive as the model of real-time Turing machines studied in the 1960s.
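
A minimal two-register monitor in the $\langle +1,= \rangle$ fragment mentioned above, checking a hypothetical safety property (not an example from the paper): no prefix may contain more 'release' than 'acquire' events.

  # Two integer registers, using only increment and equality test.
  class CounterMonitor:
      def __init__(self):
          self.acquires = 0
          self.releases = 0
      def step(self, event):
          if event == "acquire":
              self.acquires += 1                     # instruction: +1
          elif event == "release":
              if self.releases == self.acquires:     # instruction: equality test
                  return "error"                     # this release has no matching acquire
              self.releases += 1
          return "ok"

  m = CounterMonitor()
  print([m.step(e) for e in ["acquire", "release", "release"]])   # ok, ok, error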

14:40
Efficient Algorithms for Asymptotic Bounds on Termination Time in VASS
SPEAKER: Dominik Velan

ABSTRACT. Vector Addition Systems with States (VASS) provide a well-known and fundamental model for the analysis of concurrent processes and parametrized systems, and are also used as abstract models of programs in resource bound analysis. In this paper we study the problem of obtaining asymptotic bounds on the termination time of a given VASS. In particular, we focus on the practically important case of obtaining polynomial bounds on termination time.

Our main contributions are as follows: First, we present a polynomial-time algorithm for deciding whether a given VASS has linear asymptotic complexity. We also show that if the complexity of a VASS is not linear, it is at least quadratic. Second, we classify VASS according to quantitative properties of their cycles. We show that certain singularities in these properties are the key reason for non-polynomial asymptotic complexity of VASS. In the absence of singularities, we show that the asymptotic complexity is always polynomial and of the form $\Theta(n^k)$, for some integer $k\leq d$. We present a polynomial-time algorithm computing the optimal $k$. For general VASS, the same algorithm, which is based on a complete technique for the construction of ranking functions in VASS, produces a valid lower bound, i.e. a $k$ such that the termination complexity is $\Omega(n^k)$. Our results are based on new insights into the geometry of VASS dynamics, which hold the potential for further applicability to VASS analysis.

15:00
A Hybrid, Dynamic Logic for Hybrid-Dynamic Information Flow

ABSTRACT. Information-flow security is important to the safety and privacy of cyber-physical systems (CPSs) across many domains: information leakage can both violate user privacy and provide information that supports further attacks. CPSs face the challenge that information can flow both in discrete cyber channels and in continuous real-valued physical channels ranging from time to physical flow of resources. We call these hybrid information flows and introduce dHL, the first logic for verifying these flows in hybrid-dynamical models of CPSs. We achieve verification of hybrid information flows by extending differential dynamic logic (dL) for hybrid-dynamical systems with hybrid-logical features for explicit representation and relation of program states. By verifying hybrid information flows, we ensure security even under a strong attacker model wherein an attacker can observe time and physical values continuously. We present a Hilbert-style proof calculus for dHL, prove it sound, and compare the expressive power of dHL with dL. We demonstrate dHL's abilities by developing a hybrid system model of the smart electrical grid FREEDM. We verify that the naive model has a previously-unknown information flow vulnerability and verify that a revised model resolves the vulnerability. To the best of our knowledge, this is both the first information flow proof for hybrid information flows and the first for a hybrid-dynamical model. We discuss applications of hybrid information flow to a range of critical systems.

15:20
Differential Equation Axiomatization: The Impressive Power of Differential Ghosts

ABSTRACT. We prove the completeness of an axiomatization for differential equation invariants. First, we show that the differential equation axioms in differential dynamic logic are complete for all algebraic invariants. Our proof exploits differential ghosts, which introduce additional variables that can be chosen to evolve freely along new differential equations. Cleverly chosen differential ghosts are the proof-theoretical counterpart of dark matter. They create new hypothetical state, whose relationship to the original state variables satisfies invariants that did not exist before. The reflection of these new invariants in the original system enables its analysis.

We then show that extending the axiomatization with existence and uniqueness axioms makes it complete for all local progress properties, and further extension with a real induction axiom makes it complete for all real arithmetic invariants. This yields a parsimonious axiomatization, which serves as the logical foundation for reasoning about invariants of differential equations. Moreover, our approach is purely axiomatic, and so the axiomatization is suitable for sound implementation in foundational theorem provers.
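
The standard introductory instance of the differential-ghost idea, recalled here only for orientation (not an excerpt from the paper): the invariant $x>0$ of $x'=-x$ follows once a ghost variable $y$ with $y'=y/2$ is introduced, because the product $xy^2$ is conserved.

  \[
    x > 0 \;\leftrightarrow\; \exists y\, \bigl(x y^2 = 1\bigr),
    \qquad
    \frac{d}{dt}\bigl(x y^2\bigr) = x' y^2 + 2 x y\, y' = -x y^2 + x y^2 = 0
    \quad\text{along } x' = -x,\ y' = \tfrac{y}{2}.
  \]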

14:00-15:40 Session 76F (LICS)
Location: Maths LT3
14:00
Probabilistic stable functions on discrete cones are power series.

ABSTRACT. We study the category Cstabm of measurable cones and measurable stable functions—a denotational model of a higher-order language with continuous probabilities and full recursion. We look at Cstabm as a model for discrete probabilities by showing the existence of a full, faithful and cartesian closed functor which embeds probabilistic coherence spaces—a fully abstract denotational model of a higher-order language with full recursion and discrete probabilities—into Cstabm. The proof is based on a generalization of Bernstein’s theorem in real analysis allowing one to view stable functions between certain cones as generalized power series.

14:20
Allegories: decidability and graph homomorphisms

ABSTRACT. Allegories were introduced by Freyd and Scedrov; they form a fragment of Tarski’s calculus of relations. We show that their equational theory is decidable by characterising it in terms of a specific class of graph homomorphisms. We actually do so for an extension of allegories which we prove to be conservative, namely allegories with top. This generalisation makes it possible to exploit the correspondence between terms and K4-free graphs, for which isomorphism was known to be finitely axiomatisable.

14:40
Syntax and Semantics for Operations with Scopes
SPEAKER: Maciej Piróg

ABSTRACT. Motivated by the problem of separating syntax from semantics in programming with algebraic effects and handlers, we propose a categorical model of abstract syntax with so-called scoped operations. As a building block of a term, a scoped operation is not merely a node in a tree, as it can also encompass a whole part of the term (a scope). Some examples from the area of programming are given by the operation "catch" for handling exceptions, in which the part in the scope is the code that may raise an exception, or the operation "once", which selects a single solution from a nondeterministic computation. A distinctive feature of such operations is their behaviour under program composition, that is, syntactic substitution.

Our model is based on what Ghani et al. call the monad of explicit substitutions, defined using initial-algebra semantics in the category of endofunctors. We also introduce a new kind of multi-sorted algebra, called scoped algebras, which serve as interpretations of syntax with scopes. In full generality, scoped algebras are given in the style of the presheaf formalisation of syntax with binders of Fiore et al. As the main technical result, we show that our monad indeed arises from free objects in the category of scoped algebras.

Importantly, we show that our results are immediately applicable. In particular, we show a Haskell implementation together with some practical, real-life examples.

15:00
Rewriting with Frobenius
SPEAKER: Fabio Zanasi

ABSTRACT. Symmetric monoidal categories have become ubiquitous as a formal environment for the analysis of compound systems in a compositional, resource-sensitive manner using the graphical syntax of string diagrams. Recently, reasoning with string diagrams has been implemented concretely via double-pushout (DPO) hypergraph rewriting. The hypergraph representation has the twin advantages of being convenient for mechanisation and of completely absorbing the structural laws of symmetric monoidal categories, leaving just the domain-specific equations explicit in the rewriting system. In many applications across different disciplines (linguistics, concurrency, quantum computation, control theory, ...) the structural component appears to be richer than just the symmetric monoidal structure, as it includes one or more Frobenius algebras. In this work we develop a DPO rewriting formalism which is able to absorb multiple Frobenius structures, thus sensibly simplifying diagrammatic reasoning in the aforementioned applications. As a proof of concept, we use our formalism to describe an algorithm which computes the reduced form of a diagram of the theory of interacting bialgebras using a simple rewrite strategy.

15:20
Ribbon tensorial logic

ABSTRACT. We introduce a topologically-aware version of tensorial logic, called ribbon tensorial logic. To every proof of the logic, we associate a ribbon tangle which tracks the flow of tensorial negations inside the proof. The translation is functorial: it is performed by exhibiting a correspondence between the notion of dialogue category in proof theory and the notion of ribbon category in knot theory. Our main result is that the translation is also faithful: two proofs are equal modulo the equational theory of tensorial logic if and only if the associated ribbon tangles are equal up to topological deformation. This "proof-as-tangle" theorem may be understood at the same time as a coherence theorem for ribbon dialogue categories, and as a mathematical foundation for topological game semantics.

14:00-15:30 Session 76G: Tools & Applications II (SAT)
14:00
ALIAS: A Modular Tool for Finding Backdoors for SAT
SPEAKER: Oleg Zaikin

ABSTRACT. We present ALIAS, a modular tool aimed at finding backdoors for hard SAT instances. Here, by a backdoor for a specific SAT solver and SAT formula we mean a set of its variables, all possible instantiations of which lead to the construction of a family of subformulas whose total solving time is less than that of the original formula. For a particular backdoor, the tool uses a Monte-Carlo algorithm to estimate the runtime of a solver when partitioning the original problem via said backdoor. Thus, the problem of finding a backdoor is viewed as a black-box optimization problem. The tool's modular structure allows one to employ state-of-the-art SAT solvers and black-box optimization heuristics. In practice, for a number of hard SAT instances, the tool made it possible to solve them much faster than with state-of-the-art multithreaded SAT solvers.
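
The Monte-Carlo runtime estimate described in the abstract can be roughly sketched as follows (the solve_time callback standing in for an external SAT solver is hypothetical, not part of the tool):

  import random

  # Estimate the total time to solve all 2^|B| subformulas induced by a backdoor B
  # by averaging the solver runtime over a sample of random instantiations.
  def estimate_backdoor_cost(backdoor_vars, solve_time, samples=100):
      total = 0.0
      for _ in range(samples):
          assignment = {v: random.choice([False, True]) for v in backdoor_vars}
          total += solve_time(assignment)
      return (total / samples) * 2 ** len(backdoor_vars)

  # Black-box optimization then searches for a backdoor minimising this estimate.
  fake_time = lambda assignment: 0.01 + 0.001 * sum(assignment.values())
  print(estimate_backdoor_cost(["x1", "x2", "x3"], fake_time, samples=50))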

14:30
PySAT: A Python Toolkit for Prototyping with SAT Oracles

ABSTRACT. Boolean satisfiability (SAT) solvers are at the core of efficient approaches for solving a vast multitude of practical problems. Moreover, albeit targeting an NP-complete problem, SAT solvers are increasingly used for tackling problems beyond NP. Despite the success of SAT in practice, modeling with SAT and more importantly implementing SAT-based problem solving solutions is often a difficult and error-prone task. This paper proposes the PySAT toolkit, which enables fast Python-based prototyping using SAT oracles and SAT-related technology. PySAT provides a simple API for working with a few state-of-the-art SAT oracles and also integrates a number of cardinality constraint encodings, all aiming at simplifying the prototyping process. Experimental results presented in the paper show that PySAT-based implementations can be as efficient as those written in a low-level language.
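
A small usage sketch in the toolkit's style (simplified; the PySAT documentation is the authoritative reference for the interface):

  from pysat.card import CardEnc
  from pysat.solvers import Glucose3

  # Cardinality encoding: at most 2 of the literals 1..4 may be true.
  atmost2 = CardEnc.atmost(lits=[1, 2, 3, 4], bound=2)

  with Glucose3(bootstrap_with=atmost2.clauses) as solver:
      solver.add_clause([1, 2])        # x1 or x2
      solver.add_clause([-1, 3])       # x1 implies x3
      if solver.solve():
          print(solver.get_model())    # a satisfying assignment as signed integers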

15:00
Constrained Image Generation Using Binarized Neural Networks with Decision Procedures
SPEAKER: Luca Pulina

ABSTRACT. We consider the problem of generating binary images with given properties. This problem arises in a number of practical applications, including the generation of artificial porous media for lithium-ion battery electrodes, for composite materials, etc. A generated image represents a porous medium. As such, it is subject to two sets of constraints: topological constraints on the structure and process constraints on the physical process over this structure. To perform image generation we need to define a mapping from a porous medium to its physical process parameters. For a given geometry of a porous medium, this mapping can be computed by solving a partial differential equation (PDE). However, embedding a PDE solver into the search procedure is computationally expensive. We use a binarized neural network to approximate a PDE solver. This allows us to encode the entire problem as a logical formula. Our main contribution is that, for the first time, we show that this problem can be tackled using decision procedures. Our experiments show that our model is able to produce random constrained images that satisfy both topological and process constraints.

14:00-15:30 Session 76H (SMT)
14:00
Invited talk: Verifying Learners and Learning Verifiers
15:00
Puli - A Problem-Specific OMT Solver

ABSTRACT. In our previous papers, we investigated several aspects of applying Optimization Modulo Theories (OMT) solvers to Wireless Sensor Networks (WSNs). None of the solvers we used in our experiments scaled well enough for WSNs of the size common in practice. This is particularly true when investigating additional dependability and security constraints on WSNs of high density. In this paper, we propose an idea for speeding up the OMT solving process by taking into consideration certain resources in the system and applying regression analysis to those resource values. For instance, in WSNs, the electrical charge in the batteries of sensor nodes can be considered a resource that is consumed as the network approaches its maximal lifetime. Another example is the knapsack problem, where the remaining capacity of the knapsack can serve as such a resource. We show how to integrate this idea into search algorithms in the OMT framework and introduce a new OMT solver called Puli. We present experiments with Puli on WSN and knapsack benchmarks, which show remarkable improvements in the number of solved instances as well as in computation time compared to existing solvers. Furthermore, we show that further significant improvement can be achieved on so-called monotone problems, such as WSN optimization, for which Puli can generate more precise assertions. We present Puli as a work-in-progress prototype that we plan to upgrade soon to an official, publicly available release.

15:00-15:30 Session 77A: System presentation (FSCD)
15:00
ProTeM: A Proof Term Manipulator

ABSTRACT. Proof terms are a useful concept for reasoning about computations in term rewriting. Human calculation with proof terms is tedious and error-prone. We present ProTeM, a new tool that offers support for manipulating proof terms that represent multisteps in left-linear rewrite systems.

15:00-15:30 Session 77B (ITP)
Location: Blavatnik LT1
15:00
ProofWatch: Watchlist Guidance for Large Theories in E

ABSTRACT. A watchlist (also known as a hint list) is a mechanism that allows related proofs to guide a proof search for a new conjecture. This mechanism has been used with the Otter and Prover9 theorem provers, both for interactive formalizations and for human-assisted proving of open conjectures in small theories. In this work we explore the use of watchlists in large theories coming from first-order translations of large ITP libraries, aiming at improving hammer-style automation by smarter internal guidance of the ATP systems. In particular, we (i) design watchlist-based clause evaluation heuristics inside the E ATP system, and (ii) develop new proof-guiding algorithms that load many previous proofs inside the ATP and focus the proof search using a dynamically updated notion of proof matching. The methods are evaluated on a large set of problems coming from the Mizar library, showing significant improvement of E’s standard portfolio of strategies, and also of the previous best set of strategies invented for Mizar by evolutionary methods.

15:30-16:00Coffee Break
16:00-17:00 Session 78B (ITP)
Location: Blavatnik LT1
16:00
Verified Tail Bounds for Randomized Programs

ABSTRACT. We mechanize in Coq a theorem by Karp, along with several extensions, that together provide an easy-to-use "cookbook" method for verifying tail bounds of randomized algorithms, much like the traditional "Master Theorem" gives bounds for deterministic algorithms. We apply these results to several examples: the number of comparisons performed by QuickSort, the span of parallel QuickSort, the height of randomly generated binary search trees, and the number of rounds needed for a distributed leader election protocol. Because the constants involved in our symbolic bounds are concrete, we are able to use them to derive numerical probability bounds for various input sizes for these examples.

16:30
Verified Analysis of Random Binary Tree Structures
SPEAKER: Manuel Eberl

ABSTRACT. This work is a case study of the formal verification and complexity analysis of some famous probabilistic data structures and algorithms in the proof assistant Isabelle/HOL:
  – the expected number of comparisons in randomised Quicksort
  – the average-case analysis of deterministic Quicksort
  – the expected shape of an unbalanced random Binary Search Tree
  – the expected shape of a Treap
The last two have, to our knowledge, never been analysed in a theorem prover before and the last one is particularly interesting because the analysis involves continuous distributions. The verification builds on the existing probability and measure theory in Isabelle/HOL. Algorithms are shallowly embedded and expressed in the Giry monad, which allows for a very natural and high-level presentation.
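
For orientation, the classical closed form behind the first item in the list above is (a standard result; the precise statement formalised in the paper may be phrased differently):

  \[
    \mathbb{E}[C_n] \;=\; 2(n+1)H_n - 4n \;=\; 2n\ln n + O(n),
    \qquad H_n = \sum_{k=1}^{n} \tfrac{1}{k},
  \]

where $C_n$ denotes the number of comparisons made by randomised Quicksort on $n$ distinct keys.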

16:00-17:00 Session 78C: LICS Invited Talk: Javier Esparza (LICS)
Location: Maths LT1
16:00
Verification of population protocols

ABSTRACT. Population protocols (Angluin et al., PODC 2004) are a formal model of sensor networks consisting of identical, finite-state mobile devices. When two devices come into the range of each other, they interact and change their state. Computations are infinite sequences of pairwise interactions where the interacting processes are picked by a fair scheduler.

A population protocol is well specified if for every initial configuration of devices, and for every fair computation starting at it, all devices eventually agree on a consensus value that only depends on the initial configuration. If a protocol is well specified, then it is said to compute the predicate that assigns to each initial configuration its consensus value.

In the talk I will report on our recent results on the automatic verification of population protocols: Is a given protocol well specified? Does a given protocol compute a given predicate? Does it compute it within a given time? These questions require checking liveness properties of parameterized systems with an arbitrary number of agents, and are very challenging. I will show how to attack them using the theory of Vector Addition Systems and SMT technology.
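
As a concrete (hypothetical) instance of the model, the following toy simulation runs the two-state OR protocol, in which an interaction copies a 1 to both participants, under a uniformly random scheduler:

  import random

  def simulate_or_protocol(initial, steps=10_000):
      config = list(initial)
      n = len(config)
      for _ in range(steps):
          i, j = random.sample(range(n), 2)   # pick an interacting pair of agents
          v = max(config[i], config[j])       # transition (x, y) -> (x or y, x or y)
          config[i] = config[j] = v
      return config                           # consensus value = OR of the inputs

  # With overwhelming probability, all agents end up holding 1 here.
  print(simulate_or_protocol([0, 0, 1, 0, 0]))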

16:00-18:00 Session 78D: Awards & Competitions (SAT)
16:00
Award Ceremony

ABSTRACT. Presentation of the best paper and best student paper awards.

16:05
MaxSAT Evaluation 2018
16:33
QBFEVAL’18
SPEAKER: Luca Pulina
17:01
Sparkle SAT Challenge 2018
SPEAKER: Chuan Luo
17:29
SAT Competition 2018
SPEAKER: Marijn Heule
16:00-17:30 Session 78E (SMT)
16:00
SMT-based Compile-time Verification of Safety Properties for Smart Contracts
SPEAKER: Leonardo Alt

ABSTRACT. Ethereum smart contracts are programs that run inside a public distributed database called a blockchain. These smart contracts are used to handle tokens of value, can be accessed and analyzed by everyone and are immutable once deployed. Those characteristics make it imperative that smart contracts are bug-free at deployment time, hence the need to verify them formally. In this paper we describe our current efforts in building an SMT-based formal verification module within the compiler of Solidity, a popular language for writing smart contracts. The tool is seamlessly integrated into the compiler, where during compilation, the user is automatically warned of and given counterexamples for potential arithmetic overflow/underflow, unreachable code, trivial conditions, and assertion fails. We present how the component currently translates a subset of Solidity into SMT statements using different theories, and discuss future challenges such as multi-transaction and state invariants.
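
The flavour of the arithmetic-overflow queries mentioned above can be illustrated with the Z3 Python API (a toy sketch, not the Solidity compiler's actual encoding):

  from z3 import BitVec, Solver, UGT, ZeroExt, sat

  a, b = BitVec('a', 8), BitVec('b', 8)      # two uint8 operands
  s = Solver()
  # a + b wraps around exactly when the exact 9-bit sum exceeds 255 (unsigned).
  s.add(UGT(ZeroExt(1, a) + ZeroExt(1, b), 255))
  if s.check() == sat:
      print("potential overflow, counterexample:", s.model())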

16:30
SMT Solving Modulo Tableau and Rewriting Theories

ABSTRACT. We propose an automated theorem prover that combines an SMT solver with tableau calculus and rewriting. Tableau inference rules are used to unfold propositional content into clauses while atomic formulas are handled using satisfiability decision procedures as in traditional SMT solvers. To deal with quantified first order formulas, we use metavariables and perform rigid unification modulo equalities and rewriting, for which we introduce an algorithm based on superposition, but where all clauses contain a single atomic formula. Rewriting is introduced along the lines of deduction modulo theory, where axioms are turned into rewrite rules over both terms and propositions. Finally, we assess our approach over a benchmark of problems in the set theory of the B method.

17:00
Rewrites for SMT Solvers using Syntax-Guided Enumeration

ABSTRACT. In this paper, we explore a development paradigm where rewrite rules are suggested to the SMT solver developer using syntax-guided enumeration. We capitalize on the recent advances in enumerative syntax-guided synthesis (SyGuS) techniques for efficiently enumerating terms in a grammar of interest, and novel sampling techniques for testing equivalence between terms. We present our preliminary experience with this feature in the SMT solver CVC4, showing its impact on its rewriting capabilities using several internal metrics, and its subsequent impact on solving bit-vector and string constraints in applications.

16:10-17:25 Session 79A: Applications 2 (ADHS)
16:10
Multi-Energy Scheduling Using a Hybrid Systems Approach

ABSTRACT. This paper presents a mixed logical dynamical (MLD) approach for modelling a multi-energy system. The electrical and thermal energy streams are linked through the operation of combined cycle power plants (CCPPs). The MLD approach is used to develop detailed models of the gas turbines (GTs), steam turbines (STs) and boilers. The power trajectories followed by the GTs, STs and boilers during various start-up methods are also modelled. The utility of the developed model is demonstrated by formulating and solving an optimal scheduling problem to satisfy both electrical and thermal loads in the system. The cost benefit of including flexible loads in the scheduling problem formulation is demonstrated through suitable case studies.

16:35
Hierarchical Model Predictive Control for Building Energy Management of Hybrid Systems

ABSTRACT. In this paper a two-layer controller is proposed to tackle the building energy management problem for hybrid systems at different levels of abstraction and different time scales. In the upper layer, a relaxed long-term energy allocation problem with a large decision time step is defined, taking into account the energy prices, the comfort requirements, and a global power constraint. The discrete decision variables are considered only in the lower layer, where the continuous global solution computed by the first optimization is projected onto local mixed-integer programming (MIP) tracking problems with a shorter prediction horizon and a higher sampling rate. To fulfill the building's global power constraint, each load is assigned a specific priority for accessing the available power, following a non-iterative priority algorithm.

17:00
Verifying nonlinear analog and mixed-signal circuits with inputs

ABSTRACT. We present a new technique for verifying nonlinear and hybrid models with inputs. We observe that once an input signal is fixed, the sensitivity analysis of the model can be computed much more precisely. Based on this result, we propose a new simulation-driven verification algorithm and apply it to a suite of nonlinear and hybrid models of CMOS digital circuits under different input signals. The models are low-dimensional but have highly nonlinear ODEs, with up to hundreds of logarithmic and exponential terms. Some of our experiments analyze the metastability of bistable circuits with very sensitive ODEs and rigorously establish the connection between metastability recovery time and sensitivity.

16:10-17:25 Session 79B: Switched systems 2 (ADHS)
16:10
Bounding Errors Due to Switching Delays in Incrementally Stable Switched Systems

ABSTRACT. Time delays pose an important challenge in networked control systems, which are now ubiquitous. Focusing on switched systems, we introduce a framework that provides an upper bound for errors caused by switching delays. Our framework is based on approximate bisimulation, a notion that has been previously utilized mainly for symbolic (discrete) abstraction of state spaces. Notable in our framework is that, in deriving an approximate bisimulation and thus an error bound, we use a simple incremental stability assumption (namely δ-GUAS) that does not itself refer to time delays. That this is the same assumption used for state-space discretization enables a two-step workflow for control synthesis for switched systems, in which a single Lyapunov-type stability certificate serves for two different purposes of state discretization and coping with time delays. We demonstrate the proposed framework with a boost DC-DC converter, a common example of switched systems.

16:35
Symbolic Models for Incrementally Stable Switched Systems with Aperiodic Time Sampling

ABSTRACT. In this paper, we consider the problem of symbolic model design for the class of incrementally stable switched systems. Contrary to existing results in the literature, where switching is considered to be periodically controlled, we consider aperiodic time sampling resulting either from uncertain or from event-based sampling mechanisms. Firstly, we establish sufficient conditions ensuring that the usual symbolic models computed using periodic time-sampling remain approximately bisimilar to a switched system when the sampling period is uncertain and belongs to a given interval; estimates on the bounds of the interval are provided. Secondly, we propose a new method to compute symbolic models related by feedback refinement relations to incrementally stable switched systems, using an event-based approximation scheme. For a given precision, these event-based models are guaranteed to enable transitions of shorter duration and are likely to allow for more reactiveness in controller design. Finally, an example is proposed in order to illustrate the proposed results, and simulations are performed for a boost DC-DC converter structure.

17:00
Control Synthesis for Stochastic Switched Systems Using the Tamed Euler Method

ABSTRACT. In this paper, we explain how, under the one-sided Lipschitz (OSL) hypothesis, one can derive an error bound for a variant of the Euler-Maruyama approximation method for stochastic switched systems. We then explain how this bound can be used to control stochastic switched systems in order to stabilize them in a given region. The method is illustrated on several examples from the literature.
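
For reference, one step of the standard tamed Euler-Maruyama scheme looks as follows; the switched-system specifics and the error bound under the OSL hypothesis are the subject of the paper, and the toy modes below are hypothetical:

  import math, random

  # x+ = x + dt*f(x)/(1 + dt*|f(x)|) + g(x)*dW  with  dW ~ N(0, dt).
  def tamed_em_step(x, f, g, dt):
      drift = f(x)
      dW = random.gauss(0.0, math.sqrt(dt))
      return x + dt * drift / (1.0 + dt * abs(drift)) + g(x) * dW

  f1 = lambda x: -x - x ** 3        # toy one-sided Lipschitz mode 1
  f2 = lambda x: -2 * x             # toy mode 2
  g = lambda x: 0.1                 # constant diffusion
  x, dt = 1.0, 1e-2
  for k in range(1000):
      x = tamed_em_step(x, f1 if (k // 100) % 2 == 0 else f2, g, dt)
  print(x)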

16:30-17:30 Session 80: Confluence (FSCD)
16:30
Decreasing diagrams with two labels are complete for confluence of countable systems
SPEAKER: Roy Overbeek

ABSTRACT. Like termination, confluence is a central property of rewrite systems. Unlike for termination, however, there exists no known complexity hierarchy for confluence. In this paper we investigate whether the decreasing diagrams technique can be used to obtain such a hierarchy. The decreasing diagrams technique is one of the strongest and most versatile methods for proving confluence of abstract reduction systems, it is complete for countable systems, and it has many well-known confluence criteria as corollaries.

So what makes decreasing diagrams so powerful? In contrast to other confluence techniques, decreasing diagrams employs a labelling of the steps $\to$ with labels from a well-founded order in order to conclude confluence of the underlying unlabelled relation. Hence it is natural to ask how the size of the label set influences the strength of the technique. In particular, what class of abstract reduction systems can be proven confluent using decreasing diagrams restricted to $1$ label, $2$ labels, $3$ labels, and so on?

Surprisingly, we find that two labels suffice for proving confluence for every abstract rewrite system having the cofinality property, thus in particular for every confluent, countable system. We also show that this result stands in sharp contrast to the situation for commutation of rewrite relations, where the hierarchy does not collapse.

Finally, as a background theme, we discuss the logical issue of first-order definability of the notion of confluence.

17:00
Confluence of Prefix-Constrained Rewrite Systems

ABSTRACT. Prefix-constrained rewriting is a strict extension of context-sensitive rewriting. We study the confluence of prefix-constrained rewrite systems, which are composed of rules of the form L:l -> r where L is a regular string language that defines the allowed rewritable positions. The usual notion of Knuth-Bendix's critical pair needs to be extended using regular string languages, and the convergence of all critical pairs is not enough to ensure local confluence. Thanks to an additional restriction we get local confluence, and then confluence for terminating systems, which makes the word problem decidable. Moreover we present an extended Knuth-Bendix completion procedure, to transform a non-confluent prefix-constrained rewrite system into a confluent one.

17:00-17:45 Session 81B: Alonzo Church Award Session (LICS)
Location: Maths LT1
17:00
The 2018 Alonzo Church Award for Outstanding Contributions to Logic and Computation
17:15
Constraints, Graphs, Algebra, Logic, and Complexity

ABSTRACT. This is the Acceptance Speech for the 2018 Alonzo Church Award for Outstanding Contributions to Logic and Computation, given to Moshe Vardi and Tomás Feder in recognition of their “fundamental contributions to the computational complexity of constraint-satisfaction problems.” Eligible for the award are the authors of a paper or series of papers published within the past 25 years. Their research led to the formulation of the Feder-Vardi Dichotomy Conjecture, which was finally confirmed in 2017 by two independent proofs, due to Andrei Bulatov and Dmitriy Zhuk.

The award was established in 2015 by the ACM Special Interest Group for Logic and Computation, the European Association for Theoretical Computer Science, the European Association for Computer Science Logic and the Kurt Gödel Society.

This year’s award is given for two papers written by Vardi and Feder: “Monotone Monadic SNP and Constraint Satisfaction,” presented at the ACM Symposium on the Theory of Computing in 1993; and “The Computational Structure of Monotone Monadic SNP and Constraint Satisfaction: A Study through Datalog and Group Theory,” published in the SIAM Journal on Computing in 1998.

17:30-18:00 Session 82: Rewriting (FSCD)
17:30
Coherence of Gray categories via rewriting
SPEAKER: Simon Forest

ABSTRACT. Over the recent years, the theory of rewriting has been extended in order to provide systematic techniques to show coherence results for strict higher categories. Here, we investigate a further generalization to low-dimensional weak categories, and consider in details the first non-trivial case: presentations of tricategories. By a general result, those are equivalent to the stricter Gray categories, for which we introduce a notion of rewriting system, as well as associated tools: Tietze transformations, critical pairs, termination orders, etc. We show that a finite rewriting system admits a finite number of critical pairs and, as a variant of Newman's lemma in our context, that a convergent rewriting system is coherent, meaning that two parallel 3-cells are necessarily equal. This is illustrated on rewriting systems corresponding to various well-known structures in the context of tricategories (monoids, adjunctions, Frobenius monoids). Finally, we discuss generalizations in arbitrary dimension.

19:45-21:30 ADHS Banquet at Balliol College (ADHS)

The ADHS18 banquet will be at Balliol College, Oxford. Drinks reception from 7:45pm, to be seated by 8:15 (pre-booking via FLoC registration system required; guests welcome).

Location: Balliol College