FLOC 2018: FEDERATED LOGIC CONFERENCE 2018
PROGRAM FOR WEDNESDAY, JULY 11TH

09:00-10:15 Session 60A: Invited talk: Raphael Jungers (ADHS)
09:00
Provably efficient algorithms on hybrid automata

ABSTRACT. With the upcoming Industry 4.0, we are facing increasingly complex engineered control systems and, at the same time, a need for firm guarantees on the optimality and safety of their control loops. Think of a plant controlled through a wireless control network, where resource constraints limit the amount of information available to the controller, while disruptions (either malicious, or intrinsic to wireless communications) may incur loss of information between the controller and the plant. With such nonstandard non-idealities, classical control theory is not able to provide exact algorithms with guarantees of performance.

I will argue that even though these new systems are far more complex than classical textbook models, advanced techniques from Mathematics, Optimization, or Computer Science sometimes allow the recovery of the powerful properties of usual control techniques. I will exemplify this on constrained switching systems, a particular class of hybrid systems. We will see that canonical problems like stability, controllability, or identifiability may still be exactly solved, even with guarantees on the efficiency of the algorithm. I will focus on two algorithmic techniques that are particularly well suited for challenges in Hybrid Automata: Path-Complete Lyapunov functions and Chance-Constrained Optimization.

09:00-10:30 Session 60B: Information flow (CSF)
Location: Maths LT2
09:00
A Permission-Dependent Type System for Secure Information Flow Analysis
SPEAKER: Alwen Tiu

ABSTRACT. We introduce a novel type system for enforcing secure information flow in an imperative language. Our work is motivated by the problem of statically checking potential information leakage in Android applications. To this end, we design a lightweight type system featuring Android permission model, where the permissions are statically assigned to applications and are used to enforce access control in the applications. We take inspiration from a type system by Banerjee and Naumann (BN) to allow security types to be dependent on the permissions of the applications. A novel feature of our type system is a typing rule for conditional branching induced by permission testing, which introduces a merging operator on security types, allowing more precise security policies to be enforced. The soundness of our type system is proved with respect to non-interference. In addition, a type inference algorithm is presented for the underlying security type system, by reducing the inference problem to a constraint solving problem in the lattice of security types.
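To convey the flavour of permission-dependent types, here is a toy sketch in Haskell (our own simplification with hypothetical names, not the paper's type system), in which a security type maps the set of granted permissions to a classical level, and the merging operator arises from a permission test:

    import qualified Data.Set as Set

    data Level = Low | High deriving (Eq, Ord, Show)
    type Perm  = String

    -- A permission-dependent security type: the observed level may depend
    -- on the set of permissions granted to the running application.
    type SecTy = Set.Set Perm -> Level

    -- Merging operator induced by testing permission p (our sketch): the
    -- merged type behaves like t1 in contexts holding p, like t2 otherwise.
    merge :: Perm -> SecTy -> SecTy -> SecTy
    merge p t1 t2 perms
      | p `Set.member` perms = t1 perms
      | otherwise            = t2 perms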

09:30
Types for Information Flow Control: Labeling Granularity and Semantic Models
SPEAKER: Vineet Rajani

ABSTRACT. Language-based information flow control (IFC) tracks dependencies within a program using sensitivity labels and prohibits public outputs from depending on secret inputs. In particular, the literature has proposed several type systems for tracking these dependencies. On one extreme, there are fine-grained type systems (like Flow Caml) that track dependence at the level of individual values, by labeling all values individually. On the other extreme are coarse-grained type systems (like SLIO) that track dependence coarsely, by associating a single label with an entire computation context and not labeling all values individually.

In this paper, we show that, despite their glaring differences, both these styles are, in fact, equally expressive. To do this, we show a semantics- and type-preserving translation from a coarse-grained type system to a fine-grained one and vice versa. The forward translation isn't surprising, but the backward translation is: It requires a construct to arbitrarily limit the scope of a context label in the coarse-grained type system (e.g., HLIO's "toLabeled" construct), as well as a small change to how labels are interpreted in coarse-grained type systems. Along the way, we show how to build logical relation models of both fine-grained and coarse-grained type systems (with full higher-order state). We use these relations to prove the two type systems and our translations sound.
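As a rough picture of the scope-limiting construct (a toy model of ours, not HLIO's actual implementation), one can think of a coarse-grained IFC monad carrying a context label, where toLabeled runs a subcomputation with its own copy of that label and packages the result, instead of raising the ambient label:

    -- Toy coarse-grained IFC monad: the state is the context label.
    data Labeled l a = Labeled l a

    newtype LIO l a = LIO { runLIO :: l -> (a, l) }

    -- Run m in its own scope: the ambient context label is left untouched,
    -- and the result is wrapped at the label the subcomputation reached.
    toLabeled :: LIO l a -> LIO l (Labeled l a)
    toLabeled (LIO m) = LIO $ \ctx ->
      let (x, ctx') = m ctx
      in (Labeled ctx' x, ctx)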

10:00
Inductive Invariants for Noninterference in Multi-agent Workflows

ABSTRACT. Our goal is to certify absence of information leaks in multi-agent workflows, such as conference management systems like EasyChair. These workflows can be executed by any number of agents, some of which may form coalitions against the system. Therefore, checking noninterference is a challenging problem.

Our paper offers two main contributions: First, a technique is provided to translate noninterference (in the presence of various agent capabilities and declassification conditions) into universally quantified invariants of an instrumented new workflow program. Second, general techniques are developed for checking and inferring universally quantified inductive invariants for workflow programs. In particular, a large class of workflows is identified where inductiveness of invariants is decidable, as well as a smaller, still useful class of workflows where the weakest inductive universal invariant implying the desired invariant is effectively computable. The new algorithms are implemented and applied to certify noninterference for workflows arising from conference management systems.

09:00-10:30 Session 60C: Lambda Calculus (FSCD)
09:00
On repetitive right application of B-terms

ABSTRACT. B-terms are built from the B combinator alone, defined by B f g x ≡ f (g x), which is well known as a function composition operator. This paper investigates an interesting property of B-terms, namely whether repetitive right applications of a B-term circulate or not. We discuss conditions under which B-terms do or do not have this property, through a sound and complete equational axiomatization. Specifically, we give examples of B-terms which have the property and show that there are infinitely many B-terms which do not. We also introduce a canonical representation of B-terms that is useful to detect cycles, or equivalently, to prove the property, with an efficient algorithm.
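A minimal sketch of the objects involved (ours, not the authors' code), assuming a plain binary-application representation of B-terms; detecting circulation then amounts to searching for a repeated term among the normal forms of this sequence:

    -- B-terms: the B combinator and binary application.
    data BTerm = B | App BTerm BTerm deriving (Eq, Show)

    -- One reduction step of the B-rule at the root: B f g x -> f (g x).
    -- (Root-step only; full normalization is omitted here.)
    step :: BTerm -> Maybe BTerm
    step (App (App (App B f) g) x) = Just (App f (App g x))
    step _                         = Nothing

    -- The sequence of repetitive right applications: e, e e, (e e) e, ...
    rightApps :: BTerm -> [BTerm]
    rightApps e = iterate (`App` e) e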

09:30
Homogeneity without Loss of Generality

ABSTRACT. We consider higher-order recursion schemes as generators of infinite trees. A sort (simple type) is called homogeneous when all arguments of higher order are taken before any arguments of lower order. We prove that every scheme can be converted into an equivalent one (i.e., generating the same tree) that is homogeneous, that is, uses only homogeneous sorts. Then, we prove the same for safe schemes: every safe scheme can be converted into an equivalent safe homogeneous scheme. Furthermore, we compare two definitions of safe schemes: the original definition of Damm, and the modern one. Finally, we prove a lemma which illustrates the usefulness of the homogeneity assumption. The results are known, but we prove them in a novel way: by directly manipulating the schemes under consideration.
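For readers unfamiliar with the notion, a standard illustration (our gloss, following the usual definitions): with ord(o) = 0 and ord(A → B) = max(ord(A) + 1, ord(B)), a sort A₁ → … → Aₙ → o is homogeneous when ord(A₁) ≥ ord(A₂) ≥ … ≥ ord(Aₙ) and each Aᵢ is itself homogeneous. For instance, (o → o) → o → o is homogeneous, whereas o → (o → o) → o is not, since the order-0 argument precedes the order-1 one.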

10:00
Strict Ideal Completions of the Lambda Calculus

ABSTRACT. The infinitary lambda calculi pioneered by Kennaway et al. extend the basic lambda calculus by metric completion to infinite terms and reductions. Depending on the chosen metric, the resulting infinitary calculi exhibit different notions of strictness. To obtain infinitary normalisation and infinitary confluence properties for these calculi, Kennaway et al. extend β-reduction with infinitely many ‘⊥-rules’, which contract meaningless terms directly to ⊥. Three of the resulting Böhm reduction calculi have unique infinitary normal forms corresponding to Böhm-like trees.

In this paper we develop a corresponding theory of infinitary lambda calculi based on ideal completion instead of metric completion. We show that each of our calculi conservatively extends the corresponding metric-based calculus. Three of our calculi are infinitary normalising and confluent; their unique infinitary normal forms are exactly the Böhm-like trees of the corresponding metric-based calculi. Our calculi dispense with the infinitely many ⊥-rules of the metric-based calculi. The fully non-strict calculus (111) consists of only β-reduction, while the other two calculi (001 and 101) require two additional rules that precisely state their strictness properties: λx.⊥ → ⊥ and ⊥ M → ⊥.

09:00-10:30 Session 60D (ITP)
Location: Blavatnik LT1
09:00
Program Verification in the Presence of Cached Address Translation
SPEAKER: Hira Syeda

ABSTRACT. Operating system (OS) kernels achieve isolation between user-level processes using multi-level page tables and translation lookaside buffers (TLBs). Controlling the TLB correctly is a fundamental security property --- yet all large-scale formal OS verification projects leave the TLB as an assumption. We present a logic for reasoning about low-level programs in the presence of TLB address translation. We extract invariants and necessary conditions for correct TLB operation that mirror the informal reasoning of OS engineers. Our program logic reduces to a standard logic for user-level reasoning, reduces to side-condition checks for kernel-level reasoning, and can handle typical OS kernel tasks such as context switching and page table manipulations.

09:30
Physical Addressing on Real Hardware in Isabelle/HOL
SPEAKER: Lukas Humbel

ABSTRACT. Modern computing platforms are inherently complex and diverse: a heterogeneous collection of cores, interconnects, programmable memory translation units, and devices means that there is no single physical address space, and each core or DMA device may see other devices at different physical addresses. This is a problem because correct operation of system software relies on correct configuration of these interconnects, and current operating systems (and associated formal specifications) make assumptions about global physical addresses which do not hold. We present a formal model in Isabelle/HOL to express this complex addressing hardware that captures the intricacies of different real platforms or Systems-on-Chip (SoCs), and demonstrate its expressivity by showing, as an example, the impossibility of correctly configuring a MIPS R4600 TLB as specified in its documentation. Such a model not only facilitates proofs about hardware, but is used to generate correct code at compile time and device configuration at runtime in the Barrelfish research OS.

10:00
Verified Timing Transformations in Synchronous Circuits with LambdaPi-Ware

ABSTRACT. Modelling digital circuits has been a fertile ground for functional programming and theorem proving. There have been numerous fruitful efforts to describe, simulate, and verify circuits using functional languages, and such languages have also often been used as hosts for embedded hardware DSLs.

When dependently-typed languages are used as host, we can have the hardware models simulated (with an executable specification) and verified in the same language, and properties can be proven not only of specific circuits, but of circuit generators describing entire (infinite) families of circuits.

In this paper we describe a deep embedding of a typed DSL for hardware description and verification, called λπ-Ware, using the dependently-typed language Agda as host. We define several common recursion schemes with the primitives of this DSL, in both combinational and sequential versions, and show how known circuits can be expressed in terms of these recursion patterns.

Finally, we make precise a notion of equivalence between circuits with different levels of parallelism, and prove the core equivalence property between the parallel and sequential versions of our vector iteration primitive. Circuits defined using the common recursion schemes can then easily be implemented in more or less parallel architectures, with the guarantee that their behaviour will be equivalent up to timing.

09:00-10:00 Session 60E: LICS Invited Talk: Thierry Coquand (LICS)
Location: Maths LT1
09:00
Univalent Type Theory

ABSTRACT. A formal system for expressing mathematical reasoning, or for a general interactive proof system, should address the issue of representation of collections of mathematical objects and describe the laws of identification of these objects. These two questions are closely connected: a collection is itself a mathematical object, and the problem of identifying two collections can be quite subtle. The laws of identification of algebraic, ordered and topological structures were analysed in detail by Bourbaki (in his "theory of structures") in the setting of set theory. Category theory suggests new laws of identification (equivalence of categories) which may be quite complex to express in this setting.

Voevodsky discovered an elegant and uniform way to capture these laws of identification in the setting of dependent type theory, with the formulation of the univalence axiom (2009), which generalizes the axiom of extensionality as formulated by Church in simple type theory. The goal of this talk will be to explain this axiom and to survey some formalizations carried out using it, as well as some metamathematical results (such as models and proof-theoretic strength).
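Schematically (our gloss, not part of the abstract): univalence asserts that, for any types A and B in a universe 𝒰, the canonical map from identities to equivalences is itself an equivalence, (A =_𝒰 B) ≃ (A ≃ B), so that equivalent types may be identified.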

09:00-10:30 Session 60F: Theory (SAT)
09:00
Sharpness of the Satisfiability Threshold for Non-Uniform Random k-SAT

ABSTRACT. We study non-uniform random k-SAT on n variables with an arbitrary probability distribution p on the variable occurrences. The number t = t(n) of randomly drawn clauses at which random formulas go from asymptotically almost surely (a.a.s.) satisfiable to a.a.s. unsatisfiable is called the satisfiability threshold. Such a threshold is called sharp if the probability of satisfiability, as a function of the number of clauses, approaches a step function as n increases. We show that a threshold t(n) for random k-SAT with an ensemble (p_n)_{n∈ℕ} of arbitrary probability distributions on the variable occurrences is sharp if ‖p‖_2 = O_n(t^(-2/k)) and ‖p‖_∞ = o_n(t^(-k/(2k-1)) · log^(-(k-1)/(2k-1))(t)).

This result generalizes Friedgut's sharpness result from uniform to non-uniform random k-SAT and implies sharpness for the thresholds of a wide range of random k-SAT models with heterogeneous probability distributions, for example models where the variable probabilities follow a power-law distribution.

09:30
In Between Resolution and Cutting Planes: A Study of Proof Systems for Pseudo-Boolean SAT Solving
SPEAKER: Stephan Gocht

ABSTRACT. We initiate a proof complexity theoretic study of subsystems of cutting planes (CP) modelling proof search in conflict-driven pseudo-Boolean (PB) solvers. These algorithms combine restrictions such as requiring that additions of constraints always cancel a variable and/or using so-called saturation instead of division. It is known that on CNF inputs cutting planes with cancelling addition and saturation is essentially just resolution. We show that even if general addition is allowed, this proof system is still polynomially simulated by resolution with respect to proof size, as long as coefficients are polynomially bounded.
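For orientation (our gloss of the standard rules, not taken from the paper): given a pseudo-Boolean constraint Σᵢ aᵢℓᵢ ≥ A over literals ℓᵢ with aᵢ ≥ 0, division by c derives Σᵢ ⌈aᵢ/c⌉ ℓᵢ ≥ ⌈A/c⌉, whereas saturation derives Σᵢ min(aᵢ, A) ℓᵢ ≥ A, truncating coefficients instead of dividing them.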

As a further way of delineating the proof power of subsystems of CP, we propose to study a number of easy (but tricky) instances of problems in NP. Most of the formulas we consider have short and simple tree-like proofs in general CP, but the restricted subsystems seem to reveal a much more varied landscape. Although we are not able to formally establish separations between different subsystems of CP (which would require major technical breakthroughs in proof complexity), these formulas appear to be good candidates for obtaining such separations. We believe that a closer study of these benchmarks is a promising approach for shedding more light on the reasoning power of pseudo-Boolean solvers.

10:00
Cops-Robber games and the resolution of Tseitin formulas
SPEAKER: Jacobo Toran

ABSTRACT. We characterize several complexity measures for the resolution of Tseitin formulas in terms of a two-person cop-robber game. Our game is a slight variation of the one Seymour and Thomas used in order to characterize the tree-width parameter. For any undirected graph, by counting the number of cops needed in our game in order to catch a robber in it, we are able to exactly characterize the width, variable space and depth measures for the resolution of the Tseitin formula corresponding to that graph. We also give an exact game characterization of resolution variable space for any formula.
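For reference (our gloss, following the standard definition): the Tseitin formula of a graph G = (V, E) has one variable x_e per edge and, for each vertex v, a parity constraint ⊕_{e ∋ v} x_e = χ(v) expanded into CNF clauses; for connected G, it is unsatisfiable exactly when Σ_v χ(v) is odd.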

We show that our game can be played in a monotone way. This implies that the corresponding resolution measures on Tseitin formulas coincide with those obtained under the restriction to regular resolution.

Using our characterizations we improve the existing complexity bounds for Tseitin formulas showing that resolution width, depth and variable space coincide up to a logarithmic factor, and that variable space is bounded by the clause space times a logarithmic factor.

09:00-10:00 Session 60G: SC^2 Invited speaker (SCSC)
09:00
Hard Combinatorial Problems: A Challenge for Satisfiability

ABSTRACT. The theory and practice of satisfiability solvers has experienced dramatic advances in the past couple of decades. This fact has attracted the attention of researchers who work with hard combinatorial problems, with the hope that if suitable and efficient SAT encodings of these problems can be established, then SAT solvers can be used to solve large instances of such problems effectively. On the other hand, SAT researchers have been interested in hard combinatorial problems and have produced significant breakthroughs using custom-tailored, highly tuned SAT solver implementations. It turns out that both these approaches have had a number of successes already, and it is safe to assume that many more are to be expected in the near future. Combinatorics is a vast source of very hard and challenging problems, often containing thousands of discrete variables, and I firmly believe that the interaction between SAT researchers and combinatorialists will continue to be very fruitful.

10:00-10:40 Session 61 (LICS)
Location: Maths LT1
10:00
A sequent calculus with dependent types for classical arithmetic

ABSTRACT. In a recent paper, Herbelin developed a calculus dPAω in which constructive proofs for the axioms of countable and dependent choice could be derived via the encoding of a proof of countable universal quantification as a stream of its components. However, the property of normalization (and therefore the one of soundness) was only conjectured. The difficulty for the proof of normalization is due to the simultaneous presence of dependent types (for the constructive part of the choice), of control operators (for classical logic), of coinductive objects (to encode functions of type N → A as streams (a₀, a₁, ...)) and of lazy evaluation with sharing (for these coinductive objects). Building on previous works, we introduce in this paper a variant of dPAω presented as a sequent calculus. On the one hand, we take advantage of a variant of Krivine classical realizability that we developed to prove the normalization of classical call-by-need. On the other hand, we benefit from dL, a classical sequent calculus with dependent types in which type safety is ensured using delimited continuations together with a syntactic restriction. By combining the techniques developed in these papers, we manage to define a realizability interpretation à la Krivine of our calculus that allows us to prove normalization and soundness.

10:20
On Higher Inductive Types in Cubical Type Theory

ABSTRACT. Cubical type theory provides a constructive justification of certain aspects of homotopy type theory, such as Voevodsky's univalence axiom. This makes many extensionality principles, like function and propositional extensionality, directly provable in the theory. This paper describes a constructive semantics, expressed in a presheaf topos with suitable structure inspired by cubical sets, of some higher inductive types. It also extends cubical type theory with a syntax for the higher inductive types of spheres, the torus, suspensions, truncations and pushouts. All of these types are justified by the semantics and have judgmental computation rules for all constructors, including the higher-dimensional ones, and the universes are closed under these type formers.
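As a one-line reminder (our gloss), the simplest higher inductive type is the circle S¹, generated by a point constructor base : S¹ together with a higher, path constructor loop : base = base.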

10:05-10:30 Session 62: Extended Abstracts (SCSC)
10:05
EA: Constraint Systems from Traffic Scenarios for the Validation of Autonomous Driving

ABSTRACT. One does not need the gift of clairvoyance to predict that in the near future autonomously driving cars will occupy a significant place in our everyday life. In fact, in designated and even public test-drive areas this is already a reality. Autonomous driving comes with the ambition to make road traffic safer, more efficient, more economic, and more comfortable -- and thus to "make the world a bit better". Recent accidents with automatic cars resulting in severe injuries and death, however, spark a debate on the safety validation of autonomous driving in general. The biggest challenge for autonomous driving to become a reality is thus most likely not the actual development of intelligent vehicles themselves but their rigorous validation to a degree that would justify the necessary level of confidence. It is widely accepted that classical test approaches are far from feasible in this emerging area of autonomous driving, as these would induce billions of kilometers of real-world driving in each release cycle. To cope with the tremendous complexity of traffic situations that a self-driving vehicle must deal with -- without doing any harm to any other traffic participants -- a promising approach to safety validation is virtual simulation, i.e. driving the huge number of test kilometers in a virtual but realistic simulation environment. A particular interest here is in the validation of the behavior of the autonomous car in rather critical traffic scenarios.

In this position paper, we concentrate on one important aspect of the virtual safety validation of autonomous driving, namely the specification and concretization of critical traffic scenarios. This aspect gives rise to complex and challenging constraint systems to be solved, to which we believe the SC^2 community may give essential contributions through its rich variety of diverse methods and techniques.

10:30-11:00 Coffee Break
10:50-12:30 Session 63A: Formal Synthesis (ADHS)
10:50
Optimal Symbolic Controllers Determinization for BDD storage

ABSTRACT. Controller synthesis techniques based on symbolic abstractions appeal by producing correct-by-design controllers under intricate behavioural constraints. Yet, being relations between abstract states and inputs, such controllers are immense in size, which makes them impractical for embedded platforms. Control-synthesis tools such as PESSOA, SCOTS, and CoSyMA tackle the problem by storing controllers as binary decision diagrams (BDDs). However, due to redundantly keeping multiple inputs per state, the resulting controllers are still too large. In this work, we first show that choosing an optimal controller determinization is an NP-complete problem. Further, we consider the previously known controller determinization technique and discuss its weaknesses. We suggest several new approaches to the problem, based on greedy algorithms, symbolic regression, and (multi-terminal) BDDs. Finally, we empirically compare the techniques and show that some of the new algorithms can produce up to ≈ 85% smaller controllers than those obtained with the previous technique.
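To make the flavour of a greedy approach concrete, here is a small sketch (ours, with hypothetical names; not the paper's algorithm): repeatedly pick the input allowed in the most still-uncovered states and assign it to all of them at once, in the hope of maximizing sharing in the BDD encoding.

    import Data.List (maximumBy)
    import Data.Ord (comparing)
    import qualified Data.Map as Map
    import qualified Data.Set as Set

    -- Nondeterministic controller: each state maps to its (assumed non-empty)
    -- set of allowed inputs. The result picks one input per state.
    greedy :: (Ord s, Ord u) => Map.Map s (Set.Set u) -> Map.Map s u
    greedy = go Map.empty
      where
        go acc remaining
          | Map.null remaining = acc
          | otherwise =
              let -- Count, for each input, how many uncovered states allow it.
                  votes = Map.fromListWith (+)
                    [ (u, 1 :: Int) | us <- Map.elems remaining
                                    , u  <- Set.toList us ]
                  best = fst (maximumBy (comparing snd) (Map.toList votes))
                  (hit, miss) = Map.partition (Set.member best) remaining
              in go (acc `Map.union` fmap (const best) hit) miss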

11:15
Generating Dominant Strategies for Continuous Two-Player Zero-Sum Games

ABSTRACT. Motivated by the synthesis of controllers from high-level temporal specifications, we present two algorithms to compute dominant strategies for continuous two-player zero-sum games based on the Counter-Example Guided Inductive Synthesis (CEGIS) paradigm. In CEGIS, we iteratively propose candidate dominant strategies and find counterexamples. For scalability, past work has constrained the number of counterexamples used to generate new candidates, which leads to oscillations and incompleteness, even in very simple examples. The first algorithm combines Satisfiability Modulo Theory (SMT) solving with optimization to efficiently implement CEGIS. The second abstracts previously refuted strategies, while maintaining a finite counterexample set. We characterize sufficient conditions for soundness and termination, and show that both algorithms are sound and terminate. Additionally, the second approach can be made complete to within an arbitrary factor. We conclude by comparing across different variants of CEGIS.

11:40
Compositional Abstraction-based Synthesis for Cascade Discrete-Time Control Systems

ABSTRACT. Abstraction-based synthesis techniques are limited to systems of moderate size. To contribute towards the scalability of these techniques, in this paper we propose a compositional abstraction-based synthesis for cascade interconnected discrete-time control systems. Given a cascade interconnection of several components, we provide results on the compositional construction of finite abstractions based on the notion of approximate cascade composition. Then, we provide a compositional controller synthesis for the cascade interconnection. Finally, we demonstrate the applicability and effectiveness of the results using a numerical example and compare it with different abstraction and controller synthesis schemes.

12:05
Nonuniform abstractions, refinement and controller synthesis with novel BDD encodings

ABSTRACT. This paper presents a control synthesis algorithm for dynamical systems to satisfy specifications given in a fragment of linear temporal logic. It is based on an abstraction-refinement scheme with nonuniform partitions of the state space. A novel encoding of the resulting transition system is proposed that uses binary decision diagrams for efficiency. We discuss several factors affecting scalability and present some benchmark results demonstrating the effectiveness of the new encodings. These ideas are also being implemented in a publicly available prototype tool, ARCS, which we briefly introduce in the paper.

10:50-12:30 Session 63B: Switched Systems 1 (ADHS)
10:50
Language constrained stabilization of discrete-time switched linear systems: an LMI approach

ABSTRACT. The goal of this paper is to study sufficient conditions to stabilize an autonomous discrete-time switched system, for which the switching law should belong to a constrained language characterized by a nondeterministic automaton. Based on a decomposition into strongly connected components of the automaton, it is shown that it suffices to consider only a nontrivial strongly connected component. Sufficient conditions are provided as a set of Linear Matrix Inequalities (LMIs) related to the automaton states and associated with a min-switching strategy. Equivalence with the periodic stabilization is investigated. A numerical example is provided to illustrate the main result.
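To give the flavour of such conditions (our gloss of the general shape, not the paper's exact inequalities): one searches for matrices P_s ≻ 0, one per automaton state s, satisfying A_σᵀ P_{s'} A_σ − P_s ≺ 0 for every automaton edge s →σ s', so that a Lyapunov-like function decreases along every admissible switching sequence.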

11:15
A switched system approach to optimize mixing of fluids

ABSTRACT. In this paper, we discuss the problem of efficient fluid mixing which is tackled by means of (approximate) dynamic programming from a switched systems perspective. In current practice, typically pre-determined periodic mixing protocols are used. As we will show in this paper, feedback control can be used to improve mixing significantly. To make this control problem tractable, temporal and spatial discretization is used by means of the cell-mapping method on the original infinite-dimensional fluid models. This translates the original control problem into the design of a (sub)optimal switching law that determines discrete mixing actions for a discrete-time switched linear system. Exploiting this switched systems perspective, a novel feedback law for mixing fluids is proposed inspired by suboptimal rollout policies in dynamic programming contexts. By design, this feedback law for mixing guarantees a performance improvement over any given (open-loop) periodic mixing protocol. This new design methodology is validated by means of simulations for the benchmark journal bearing flow showing improved performance with respect to periodic mixing strategies.

11:40
Stability of switched systems on non-uniform time domains with non-commuting matrices

ABSTRACT. Time scale theory is introduced to study the stability of a class of switched linear systems on non-uniform time domains, where the dynamical system switches between a continuous-time linear subsystem and a discrete-time linear subsystem during certain periods of time. Without assuming that the matrices are pairwise commuting, some conditions are derived to guarantee the exponential stability of the switched system, by considering that the continuous-time subsystem is stable and the discrete-time subsystem can be stable or unstable. Some examples show the effectiveness of the proposed scheme.

12:05
On invariance and reachability on semialgebraic sets for linear dynamics

ABSTRACT. Reachability analysis is a powerful tool which is used extensively and efficiently for the analysis and control of dynamical systems, especially when linear systems and convex sets are involved. In this note, we investigate whether exact or approximate reachability operations can be performed efficiently in the affine–semialgebraic setting, that is, when we are dealing with general affine dynamics and basic semialgebraic sets. We show that this is partially true, pinpoint the underlying challenges when it is not possible, and indicate some directions for this case.

11:00-12:00 Session 64A: CSF Invited Talk: Catuscia Palamidessi (CSF)
Location: Maths LT2
11:00
Local Differential Privacy on Metric Spaces: optimizing the trade-off with utility

ABSTRACT. Local differential privacy (LDP) is a distributed variant of differential privacy (DP) in which the obfuscation of the sensitive information is done at the level of the individual records, and in general it is used to sanitize data that are collected for statistical purposes. LDP has the same properties of compositionality and independence from the prior as DP, and it has the further advantages that (a) each user can choose the level of privacy they wish, (b) it does not need to assume a trusted third party, and (c) since all stored records are individually sanitized, there is no risk of privacy leaks due to security breaches. On the other hand, LDP in general requires more noise than DP to achieve the same level of protection, with negative consequences for utility. In practice, utility becomes acceptable only on very large collections of data, and this is the reason why LDP is especially successful among big companies such as Apple and Google, which can count on a huge number of users. In this talk, we propose a variant of LDP suitable for metric spaces, such as location data or energy consumption data, and we show that it provides a much higher utility for the same level of privacy. Furthermore, we discuss algorithms to extract the best possible statistical information from the data obfuscated with this metric variant of LDP.
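In the metric variant (our gloss of the standard d-privacy definition, not text from the abstract), a mechanism K satisfies ε·d-privacy when K(x)(S) ≤ e^{ε·d(x,x')} · K(x')(S) for all inputs x, x' and all sets of outputs S, so that the distinguishability of two inputs scales with their distance d(x, x').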

11:00-12:30 Session 64B: Rewriting (FSCD)
11:00
Narrowing Trees for Syntactically Deterministic Conditional Term Rewriting Systems
SPEAKER: Naoki Nishida

ABSTRACT. A narrowing tree for a constructor term rewriting system and a pair of terms is a finite representation for the space of all possible innermost-narrowing derivations that start with the pair and end with non-narrowable terms. Narrowing trees have grammar representations that can be considered regular tree grammars. Innermost narrowing is a counterpart of constructor-based rewriting, and thus, narrowing trees can be used in analyzing constructor-based rewriting to normal forms. In this paper, using grammar representations, we extend narrowing trees to syntactically deterministic conditional term rewriting systems that are constructor systems. We show that narrowing trees are useful to prove two properties of a normal conditional term rewriting system: one is infeasibility of conditional critical pairs and the other is quasi-reducibility.

11:30
Completion for Logically Constrained Rewriting
SPEAKER: Sarah Winkler

ABSTRACT. We propose an abstract completion procedure for logically constrained term rewrite systems (LCTRSs). This procedure can be instantiated to both standard Knuth-Bendix completion and ordered completion for LCTRSs, and we present a succinct and uniform correctness proof. A prototype implementation illustrates the viability of the new completion procedure.

12:00
Completeness of Tree Automata Completion

ABSTRACT. We consider rewriting of a regular language with a left-linear term rewriting system. We show two completeness theorems on equational tree automata completion. The first one shows that, if the set of reachable terms is regular, then completion can compute it. This was known to be true for some term rewriting system classes preserving regularity, but was still an open question in the general case. The proof is not constructive because it depends on the regularity of the set of reachable terms, which is undecidable. The second theorem states that, if there exists a regular over-approximation of the set of reachable terms then completion can compute it (or safely under-approximate it). To carry out those proofs we generalize and improve two results of completion: the Termination and the Upper-Bound theorems. Those theoretical results provide an algorithmic way to safely explore regular approximations with completion. This has been implemented in Timbuk and used to verify safety properties, automatically and efficiently, on first-order and higher-order functional programs.

11:00-12:30 Session 64C (ITP)
Location: Blavatnik LT1
11:00
Relational Parametricity and Quotient Preservation for Modular (Co)Datatypes

ABSTRACT. Bounded natural functors (BNFs) provide a modular framework for the construction of (co)datatypes in higher-order logic. Their functorial operations, the mapper and relator, are restricted to a subset of the parameters, namely those where recursion can take place. For certain applications, such as free theorems, data refinement, quotients, and generalised rewriting, it is desirable that these operations do not ignore the other parameters. In this paper, we generalise BNFs such that the mapper and relator act on both covariant and contravariant parameters. Our generalisation, BNFCC, is closed under functor composition and least and greatest fixpoints. In particular, every (co)datatype is a BNFCC. We prove that subtypes inherit the BNFCC structure under conditions that generalise those for the BNF case. We also identify sufficient conditions under which a BNFCC preserves quotients. Our development is formalised abstractly in Isabelle/HOL in such a way that it integrates seamlessly with the existing parametricity infrastructure.
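The generalization can be pictured in ordinary functional-programming terms; a minimal Haskell sketch (ours, with hypothetical names), with one covariant and one contravariant parameter:

    -- The mapper now acts on both kinds of parameters, contravariantly on c.
    class BNFcc f where
      mapcc :: (a -> b) -> (d -> c) -> f c a -> f d b

    -- Example: functions out of c are contravariant in c, covariant in a.
    newtype Arrow c a = Arrow (c -> a)

    instance BNFcc Arrow where
      mapcc f g (Arrow h) = Arrow (f . h . g)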

11:30
HOL Light QE

ABSTRACT. We are interested in algorithms that manipulate mathematical expressions in mathematically meaningful ways. Expressions are syntactic, but most logics do not allow one to discuss syntax. CTT_qe is a version of Church's type theory that includes quotation and evaluation operators, akin to quote and eval in the Lisp programming language. Since the HOL logic is also a version of Church's type theory, we decided to add quotation and evaluation to HOL Light to demonstrate the implementability of CTT_qe and the benefits of having quotation and evaluation in a proof assistant. The resulting system is called HOL Light QE. Here we document the design of HOL Light QE and the challenges that needed to be overcome. The resulting implementation is freely available.

12:00
Tactics and Certificates in Meta Dedukti

ABSTRACT. Tactics are often featured in proof assistants to simplify the interactive development of proofs by allowing domain-specific automation. Moreover, tactics are also helpful to check the output of automatic theorem provers because the provers often omit details.

We use meta-programming to define a tactic language for the Dedukti logical framework which can be used both for checking certificates produced by automatic provers and for developing Dedukti proofs interactively.

More precisely, we propose a dependently-typed tactic language for first-order logic in Meta Dedukti and an untyped certificate language built on top of the tactic language. We show the expressivity of these languages on three examples: a decision procedure for minimal logic, a transfer tactic, and a certifying resolution certificate checker.

11:00-12:40 Session 64D (LICS)
Location: Maths LT1
11:00
Weighted model counting beyond two-variable logic

ABSTRACT. It was recently shown by Van den Broeck et al. that the symmetric weighted first-order model counting problem (WFOMC) for sentences of two-variable logic FO2 is in polynomial time, while it is #P_1-complete for FO3 sentences. We extend the result for FO2 in two independent directions: to sentences of the form φ ∧ ∀∃^{=1} ψ with φ and ψ in FO2, and to sentences formulated in the uniform one-dimensional fragment of FO, a recently introduced extension of two-variable logic with the capacity to deal with relation symbols of all arities. Note that the former generalizes the extension of FO2 with a functional relation symbol. We also identify a complete classification of first-order prefix classes according to whether WFOMC is in polynomial time or #P_1-complete.

11:20
A universal-algebraic proof of the complexity dichotomy for Monotone Monadic SNP

ABSTRACT. The logic MMSNP is a restricted fragment of existential second-order logic which can express many interesting queries in graph theory and finite model theory. The logic was introduced by Feder and Vardi, who showed that every MMSNP sentence is computationally equivalent to a finite-domain constraint satisfaction problem (CSP); the probabilistic reductions involved were derandomized by Kun using explicit constructions of expander structures. We present a new proof of the reduction to finite-domain CSPs which does not rely on the results of Kun. Our approach uses the fact that every MMSNP sentence describes a finite union of CSPs for countably infinite ω-categorical structures; moreover, by a recent result of Hubicka and Nesetril, these structures can be expanded to homogeneous structures with finite relational signature and the Ramsey property. This allows us to use the universal-algebraic approach to study the computational complexity of MMSNP. In particular, we verify the more general Bodirsky-Pinsker dichotomy conjecture for CSPs in MMSNP.

11:40
Unary negation fragment with equivalence relations has the finite model property

ABSTRACT. We consider an extension of the unary negation fragment of first-order logic in which arbitrarily many binary symbols may be required to be interpreted as equivalence relations. We show that this extension has the finite model property. More specifically, we show that every satisfiable formula has a model of at most doubly exponential size. We argue that the satisfiability (= finite satisfiability) problem for this logic is 2ExpTime-complete. We also transfer our results to a restricted variant of the guarded negation fragment with equivalence relations.

12:00
Satisfiability in multi-valued circuits

ABSTRACT. Satisfiability of Boolean circuits is NP-complete in general but becomes polynomial time when restricted either to monotone gates or linear gates. We go outside the Boolean realm and consider circuits built of any fixed set of gates on an arbitrarily large finite domain. From the complexity point of view this is connected with solving equations over finite algebras. We want to characterize finite algebras A with a polynomial time algorithm deciding if an equation over A has a solution. We are also looking for polynomial time algorithms deciding if two circuits over a finite algebra compute the same function. Although we have not managed to solve these problems in the most general setting, we have obtained such a characterization (in terms of nice structural algebraic properties) for a very broad class of algebras from congruence modular varieties, including groups, rings, lattices and their extensions.

12:20
Can One Escape Red Chains? Regular Path Queries Determinacy is Undecidable.

ABSTRACT. For a given set of queries (which are formulae in some query language) {Q_1, Q_2, ... Q_k} and for another query Q we say that {Q_1, Q_2, ... Q_k} determines Q if -- informally speaking -- for every database D, the information contained in the views Q_1(D), Q_2(D), ... Q_k(D) is sufficient to compute Q(D).

Query Determinacy Problem is the problem of deciding, for given {Q_1, Q_2, ... Q_k} and Q, whether {Q_1, Q_2, ... Q_k} determines Q.

Many versions of this problem, for different query languages, were studied in database theory. In this paper we solve a problem stated in [CGLV02] showing that Query Determinacy Problem is undecidable for the Regular Path Queries -- the paradigmatic query language of graph databases.

[CGLV02] D. Calvanese, G. De Giacomo, M. Lenzerini, and M. Y. Vardi. Lossless regular views. In Proc. of the 21st ACM SIGMOD-SIGACT-SIGART Symposium on Principles of Database Systems (PODS 2002).

11:00-12:40 Session 64E (LICS)
Location: Maths LT3
11:00
Dialectica models of type theory
SPEAKER: Sean Moss

ABSTRACT. We present two Dialectica-like constructions for models of intensional Martin-Löf type theory based on Gödel's original Dialectica interpretation and the Diller-Nahm variant, bringing dependent types to categorical proof theory. We set both constructions within a logical predicates style theory for display map categories where we show that 'quasifibred' versions of dependent products and universes suffice to construct their standard counterparts. To support the logic required for dependent products in the first construction, we propose a new semantic notion of finite sum for dependent types, generalizing finitely-complete extensive categories. The second avoids extensivity assumptions using biproducts in a Kleisli category for a fibred additive monad.

11:20
Extensional and Intensional Semantic Universes: A Denotational Model of Dependent Types
SPEAKER: James Laird

ABSTRACT. We describe a dependent type theory, and a denotational model for it, that incorporates both intensional and extensional semantic universes. In the former, terms and types are interpreted as strategies on certain graph games, which are concrete data structures of a generalized form, and in the latter as stable functions on event domains.

The concrete data structures themselves form an event domain, with which we may interpret an (extensional) universe type of (intensional) types. A dependent game corresponds to a stable function into this domain; we use its trace to define dependent product and sum constructions as it captures precisely how unfolding moves combine with the dependency to shape the possible interaction in the game. Since each strategy computes a stable function on CDS states, we can lift typing judgements from the intensional to the extensional setting, giving an expressive type theory with recursively defined type families and type operators.

We define an operational semantics for intensional terms, giving a functional programming language based on our type theory, and prove that our semantics for it is computationally adequate. By extending it with a simple non-local control operator on intensional terms, we can precisely characterize behaviour in the intensional model. We demonstrate this by proving full abstraction and full completeness results.

11:40
A Fixpoint Logic and Dependent Effects for Temporal Property Verification
SPEAKER: Hiroshi Unno

ABSTRACT. Existing approaches to temporal verification of higher-order functional programs have either sacrificed compositionality in favor of achieving automation or vice versa. In this paper we present a dependent-refinement type-and-effect system to ensure that well-typed programs satisfy given temporal properties, and also give an algorithmic approach---based on deductive reasoning over a fixpoint logic---to typing in this system. The first contribution is a novel type-and-effect system capable of expressing dependent temporal effects, which are fixpoint logic predicates on event sequences and program values, extending beyond the (non-dependent) temporal effects used in recent proposals. Temporal effects facilitate compositional reasoning whereby the temporal behavior of program parts is summarized as effects and combined to form that of the larger parts. As a second contribution, we show that type checking and typability for the type system can be reduced to solving first-order fixpoint logic constraints. Finally, we present a novel deductive system for solving such constraints. The deductive system consists of rules for reasoning via invariants and well-founded relations, and is able to reduce formulas containing both least and greatest fixpoints to predicate-based reasoning.

12:00
Computability Beyond Church-Turing via Choice Sequences
SPEAKER: Vincent Rahli

ABSTRACT. Church-Turing computability was extended by Brouwer, who considered non-lawlike computability in the form of free choice sequences. Those are essentially unbounded sequences whose elements are chosen freely, i.e. not subject to any law. In this work we develop a new type theory BITT, which is an extension of the type theory of the Nuprl proof assistant, that embeds the notion of choice sequences. Supporting the evolving, non-deterministic nature of these objects required major modifications to the underlying type theory. Though the construction of a choice sequence is non-deterministic, once certain choices have been made, they must remain consistent. To ensure this, BITT uses the underlying library as state and stores choices as they are created. Another salient feature of BITT is that it uses a Beth-like semantics to account for the dynamic nature of choice sequences. We formally define BITT and use it to interpret and validate essential axioms governing choice sequences. These results provide a foundation for a fully intuitionistic version of Nuprl.

12:20
The Syntax and Semantics of Quantitative Type Theory

ABSTRACT. We present Quantitative Type Theory, a Type Theory that records usage information for each variable in a judgement, based on a previous system by McBride. The usage information is used to give a realizability semantics using a variant of Linear Combinatory Algebras, refining the usual realizability semantics of Type Theory by accurately tracking resource behaviour. We define the semantics in terms of Quantitative Categories with Families, a novel extension of Categories with Families for modelling resource sensitive type theories.
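The flavour of such judgements (our gloss, not text from the abstract): every variable in the context carries a usage annotation drawn from a resource semiring, e.g. x₁ :ρ₁ A₁, …, xₙ :ρₙ Aₙ ⊢ t :σ B with ρᵢ, σ ∈ {0, 1, ω}, so that the typing derivation records how each variable may be used.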

11:00-12:00 Session 64F: SAT Invited Talk: Marijn Heule (SAT)
11:00
Computable Short Proofs

ABSTRACT. The success of satisfiability solving presents us with an interesting peculiarity: modern solvers can frequently handle gigantic formulas while failing miserably on supposedly easy problems. Their poor performance is typically caused by the weakness of their underlying proof system—resolution. To overcome this obstacle, we need solvers that are based on stronger proof systems. Unfortunately, existing strong proof systems—such as extended resolution or Frege systems—do not seem to lend themselves to mechanization.
We present a new proof system that not only generalizes strong existing proof systems but that is also well-suited for mechanization. The proof system is surprisingly strong, even without the introduction of new variables—a key feature of short proofs presented in the proof-complexity literature. We demonstrate the strength of the new proof system on the famous pigeonhole formulas by providing short proofs without new variables. Moreover, we introduce satisfaction-driven clause learning, a new decision procedure that exploits the strengths of our new proof system and can therefore yield exponential speed-ups compared to state-of-the-art solvers based on resolution.

11:00-12:40 Session 64G: Extended Abstracts: Symbolic Computation (SCSC)
11:00
EA: Wrapping Computer Algebra is Surprisingly Successful for Non-Linear SMT

ABSTRACT. We report on a prototypical tool for Satisfiability Modulo Theory solving for quantifier-free formulas in Non-linear Real Arithmetic or, more precisely, real closed fields, which uses a computer algebra system as the main component. This is complemented with two heuristic techniques, also stemming from computer algebra, viz. interval constraint propagation and subtropical satisfiability. Our key idea is to make optimal use of existing knowledge and work in the symbolic computation community, reusing available methods and implementations to the greatest possible extent. Experimental results show that our approach is surprisingly efficient in practice.

11:25
EA: New in CoCoA-5.2.4 and CoCoALib-0.99570 for SC-Square
SPEAKER: John Abbott

ABSTRACT. CoCoALib is a C++ software library offering operations on polynomials, ideals of polynomials, and related objects. The principal developers of CoCoALib are members of the SC-square project. We give an overview of the latest developments of the library, especially those relating to the SC-square project.

The CoCoA software suite also includes the programmable, interactive system CoCoA-5. Most of the operations in CoCoALib are also accessible via CoCoA-5. The programmability of CoCoA-5, together with its interactivity, helps in fast prototyping and testing conjectures.

11:50
EA: SMT-like Queries in Maple

ABSTRACT. The recognition that Symbolic Computation tools could benefit from techniques from the world of Satisfiability Checking was a primary motivation for the founding of the SC2 community. These benefits could be further demonstrated by the existence of "SMT-like" queries in legacy computer algebra systems; that is, computations which seek to decide satisfiability or identify a satisfying witness.

The Maple CAS has been under continuous development since the 1980s and its core symbolic routines incorporate many heuristics, some of which are "SMT-like". We describe ongoing work to compose an inventory of such "SMT-like" queries extracted from the built-in Maple library, most of which were added long before the inclusion in Maple of explicit software links to SAT/SMT tools. Some of these queries are expressible in the SMT-LIB format using existing logics, and it is hoped that an analysis of those that cannot be so expressed could help inform future development of the SMT-LIB standard.

12:15
EA: Techniques for Natural-style Proofs in Elementary Analysis

ABSTRACT. Combining methods from satisfiability checking with methods from symbolic computation promises to solve challenging problems in various areas of theory and application. We look at the basically equivalent problem of proving statements directly in a non-clausal setting, when additional information on the underlying domain is available in the form of specific properties and algorithms. We demonstrate on a concrete example several heuristic techniques for the automation of natural-style proving of statements from elementary analysis. The purpose of this work in progress is to generate proofs similar to those produced by humans, by combining automated reasoning methods with techniques from computer algebra. Our techniques include: the S-decomposition method for formulae with alternating quantifiers, quantifier elimination by cylindrical algebraic decomposition, analysis of the behaviour of terms at zero, bounding the epsilon-bounds, rewriting of expressions involving absolute value, algebraic manipulations, and identification of equal terms under unknown functions. These techniques are being implemented in the Theorema system and are able to construct automatically natural-style proofs for numerous examples, including convergence of sequences, limits and continuity of functions, uniform continuity, and others.

12:00-12:30 Session 65A: Privacy (CSF)
Location: Maths LT2
12:00
Privacy Risk in Machine Learning: Analyzing the Connection to Overfitting
SPEAKER: Samuel Yeom

ABSTRACT. Machine learning algorithms, when applied to sensitive data, pose a distinct threat to privacy. A growing body of prior work demonstrates that models produced by these algorithms may leak specific private information in the training data to an attacker, either through the models' structure or their observable behavior. However, the underlying cause of this privacy risk is not well understood beyond a handful of anecdotal accounts that suggest overfitting and influence might play a role.

This paper examines the effect that overfitting and influence have on the ability of an attacker to learn information about the training data from machine learning models, either through training set membership inference or attribute inference attacks. Using both formal and empirical analyses, we illustrate a clear relationship between these factors and the privacy risk that arises in several popular machine learning algorithms. We find that overfitting is sufficient to allow an attacker to perform membership inference and, when the target attribute meets certain conditions about its influence, attribute inference attacks. Interestingly, our formal analysis also shows that overfitting is not necessary for these attacks and begins to shed light on what other factors may be in play. Finally, we explore the connection between membership inference and attribute inference, showing that there are deep connections between the two that lead to effective new attacks.

12:00-12:20 Session 65B: Industry Presentation (SAT)
12:00
Satalia's SolveEngine

ABSTRACT. We'll discuss the origins, current status and future plans of the SolveEngine - our optimisation-as-a-service platform that aggregates the latest optimisation algorithms from academia, and uses machine learning to identify which algorithm is best suited to solve any given problem. We'll give an introduction to Satalia, explain why this conference is the only place people will understand our name, and show how the SolveEngine is fueling the optimisation systems of some of the world's best-known companies.

12:30-14:00 Lunch Break
14:00-15:30 Session 66A: FLoC Keynote Lecture: Shafi Goldwasser (FLoC)
Location: Maths LT1
14:00
Pseudo deterministic algorithms and proofs

ABSTRACT. Probabilistic algorithms for both decision and search problems can offer significant complexity improvements over deterministic algorithms. One major difference, however, is that they may output different solutions for different choices of randomness. This makes correctness amplification impossible for search algorithms and is less than desirable in settings where uniqueness of output is important, such as the generation of system-wide cryptographic parameters, or distributed settings where different sources of randomness are used. Pseudo-deterministic algorithms are a class of randomized search algorithms which output a unique answer with high probability. Intuitively, they are indistinguishable from deterministic algorithms by a polynomial time observer of their input/output behavior. In this talk I will describe what is known about pseudo-deterministic algorithms in the sequential, sub-linear and parallel settings. We will also describe an extension of pseudo-deterministic algorithms to interactive proofs for search problems where the verifier is guaranteed with high probability to output the same output on different executions, regardless of the prover strategies. Based on joint work with Goldreich, Ron, Grossman and Holden.

14:00-15:40 Session 66B: Applications 1 (ADHS)
14:00
Benchmarks for Cyber-Physical Systems: A Modular Model Library for Building Automation Systems

ABSTRACT. Building Automation Systems (BAS) are exemplars of Cyber-Physical Systems (CPS), incorporating digital control architectures over underlying continuous physical processes. We provide a modular model library for BAS drawn from expertise developed on a real BAS setup. The library allows one to build models comprising either physical quantities or digital control modules. The structure, operation, and dynamics of the model can be complex, incorporating (i) stochasticity, (ii) non-linearities, (iii) numerous continuous variables or discrete states, (iv) various input and output signals, and (v) a large number of possible discrete configurations. The modular composition of BAS components can generate useful CPS benchmarks. We display this use by means of three realistic case studies, where corresponding models are built and engaged with different analysis goals. The benchmarks, the model library and associated data collected from the BAS setup at the University of Oxford are kept on-line at https://github.com/natchi92/BASBenchmarks

14:25
CoasterX: A Case Study in Component-Driven Hybrid Systems Proof Automation

ABSTRACT. Component-driven proof automation (CDPA) exploits component structure to automate deductive verification of large-scale hybrid systems with non-trivial continuous dynamics. We use CDPA to implement CoasterX, a toolchain for designing and verifying the safety of 2-dimensional roller coaster track designs; specifically, we verify velocity and acceleration bounds. CoasterX starts with a graphical front-end for point-and-click design of tracks. The CoasterX back-end then automatically specifies and verifies the track in differential dynamic logic (dL) with a custom procedure built in the KeYmaera X theorem prover. We show that the CDPA approach scales, testing real coasters of up to 56 components.

14:50
A controlled sewer system should be treated as a sampled data system with events

ABSTRACT. Arguments are presented in favor of modeling sewer systems, and in particular Dutch sewer systems, as sampled data systems with events. Basic limitations on controlling these systems when ignoring their hybrid nature are stated. The traditional control scheme for the Dutch systems is given as an example of event-driven local control. Basic limitations on systems using a sampled data approach without an event-driven component are derived. To provide context, a brief description of a sampled data controller for a sewer system based on set-point tracking is given. This is followed by an explanation of how the absence of event-driven control limits its effectiveness.

15:15
Mission Planning for Multiple Vehicles with Temporal Specifications using UxAS

ABSTRACT. In this paper, we present extensions to Unmanned Systems Autonomy Services (UxAS) to handle mission specifications that require synchronized task execution. UxAS uses Process Algebra (PA) as a formal language to specify mission requirements for unmanned aerial vehicle (UAV) operations. However, the current implementation of PA in UxAS utilizes assigned semantics, which does not guarantee the order of task completion and cannot produce mission plans that require vehicle synchronization. To enable UxAS to handle synchronized mission specifications, we introduce Synchronized Process Algebra (SPA), which extends PA with a synchronized composition operator. This operator allows us to specify a task's duration and to enforce that the next task is executed only after the previous one has terminated. Moreover, we provide a new service in UxAS, called Temporal Service (TS), to control the flow of the planning process with respect to timing specifications. We apply SPA and TS to specify and operate the mission specification of a forest fire rescue scenario that requires the synchronized arrival of multiple UAVs.

14:00-15:40 Session 66C: Stochastic systems 1 (ADHS)
14:00
Temporal Logic Control of General Markov Decision Processes by Approximate Policy Refinement

ABSTRACT. The formal verification and controller synthesis for general Markov decision processes (gMDPs) that evolve over uncountable state spaces are computationally hard and thus generally rely on the use of approximate abstractions. In this paper, we contribute to the state of the art of control synthesis for temporal logic properties by computing and quantifying a less conservative gridding of the continuous state space of linear stochastic dynamic systems and by giving a new approach for control synthesis and verification that is robust to the incurred approximation errors. The approximation errors are expressed as both deviations in the outputs of the gMDPs and in the probabilistic transitions.

14:25
Interacting Particle System-Based Estimation of Reach Probability for a Generalized Stochastic Hybrid System

ABSTRACT. This paper studies estimation of reach probability for a generalized stochastic hybrid system (GSHS). For diffusion processes, a well-developed approach to reach probability estimation is to introduce a suitable factorization of the reach probability and then to estimate these factors through simulation of an Interacting Particle System (IPS). The theory of this IPS approach has been extended to arbitrary strong Markov processes, which includes GSHS executions. Because Monte Carlo simulation of GSHS particles involves sampling of Brownian motion as well as sampling of random discontinuities, the practical elaboration of the IPS approach for GSHS is not straightforward. The aim of this paper is to elaborate the IPS approach for GSHS by using complementary Monte Carlo sampling techniques. For a simple GSHS example, it is shown that, and why, the specific technique selected for sampling discontinuities can have a major influence on the effectiveness of IPS in reach probability estimation.
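
The factorization idea can be sketched for a pure diffusion (the paper's contribution concerns the harder GSHS case with random discontinuities); a minimal splitting/IPS estimator, with invented parameters:

    import random, math

    def step(x, dt=0.01):
        # one Euler step of standard Brownian motion
        return x + math.sqrt(dt) * random.gauss(0, 1)

    def ips_reach_probability(levels, n_particles=1000, horizon=500):
        # Estimate P(reach the top level) as a product of conditional
        # level-crossing factors, resampling survivors at each level.
        prob, particles = 1.0, [0.0] * n_particles
        for level in levels:
            survivors = []
            for x in particles:
                for _ in range(horizon):
                    x = step(x)
                    if x >= level:
                        survivors.append(x)
                        break
            if not survivors:
                return 0.0
            prob *= len(survivors) / n_particles    # conditional factor
            # interaction: restart all particles from the survivors
            particles = [random.choice(survivors) for _ in range(n_particles)]
        return prob

    print(ips_reach_probability(levels=[1.0, 2.0, 3.0]))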

14:50
Statistical Verification of PCTL Using Stratified Samples

ABSTRACT. In this work, we propose a stratified sampling method to statistically check Probabilistic Computation Tree Logic (PCTL) formulas on discrete-time Markov chains with the sequential probability ratio test. Distinct from previous statistical verification methods using independent Monte Carlo sampling, our algorithm uses stratified samples that are negatively correlated and thus have lower variance. The experiments demonstrate that the new algorithm uses a smaller number of samples for a given confidence level on several benchmark examples.
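
The variance-reduction mechanism can be seen in a minimal example, separate from the paper's PCTL/SPRT machinery: when estimating p = P(U < p0), taking one sample per stratum yields negatively correlated samples and a visibly smaller mean squared error:

    import random

    def independent_estimate(p0, n):
        return sum(random.random() < p0 for _ in range(n)) / n

    def stratified_estimate(p0, n):
        # one sample per stratum [i/n, (i+1)/n); samples are negatively
        # correlated across strata, so the estimator has lower variance
        return sum((i + random.random()) / n < p0 for i in range(n)) / n

    p0, n, runs = 0.303, 100, 2000

    def mse(est):
        return sum((est(p0, n) - p0) ** 2 for _ in range(runs)) / runs

    print("independent MSE:", mse(independent_estimate))
    print("stratified  MSE:", mse(stratified_estimate))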

15:15
Approximate Abstractions of Markov Chains with Interval Decision Processes

ABSTRACT. This work introduces a new abstraction technique for reducing the state space of large, discrete-time labelled Markov chains. The abstraction leverages the semantics of interval Markov decision processes and the existing notion of approximate probabilistic bisimulation. Whilst standard abstractions make use of abstract points that are taken from the state space of the concrete model and that serve as representatives for sets of concrete states, in this work the abstract structure is constructed from abstract points that are not necessarily selected from the states of the concrete model; rather, they are a function of these states. The resulting model presents a smaller one-step bisimulation error when compared to a like-sized, standard Markov chain abstraction. We outline a method to perform probabilistic model checking, and show that the computational complexity of the new method is comparable to that of standard abstractions based on approximate probabilistic bisimulations.

14:00-15:30 Session 66D: Research papers: CAD (SCSC)
14:00
RP: Non-linear Real Arithmetic Benchmarks derived from Automated Reasoning in Economics

ABSTRACT. We consider problems originating in economics that may be solved automatically using mathematical software. We present and make freely available a new benchmark set of such problems. The problems have been shown to fall within the framework of non-linear real arithmetic, and so are in theory soluble via Quantifier Elimination (QE) technology as usually implemented in computer algebra systems. Further, they can all be phrased in prenex normal form with only existential quantifiers, and so are also admissible to those Satisfiability Modulo Theories (SMT) solvers that support the QF_NRA logic. There is a great body of work considering QE and SMT applications in science and engineering, but we demonstrate here that there is potential for this technology also in the social sciences.
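
Such problems can be posed to a QF_NRA-capable SMT solver directly; a hypothetical query in this style via Z3's Python API (the constraints here are invented for illustration, not drawn from the benchmark set):

    from z3 import Real, Solver, sat

    p, q = Real('p'), Real('q')
    s = Solver()
    # Existentially quantified non-linear real constraints: does a
    # price pair exist with positive prices, an inverse-demand law,
    # and a bounded total?
    s.add(p > 0, q > 0, p * q == 1, p + q < 3)
    if s.check() == sat:
        print(s.model())    # a satisfying (p, q), e.g. p = q = 1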

14:30
RP: Towards Incremental Cylindrical Algebraic Decomposition in Maple

ABSTRACT. Cylindrical Algebraic Decomposition (CAD) is an important tool within computational real algebraic geometry, capable of solving many problems for polynomial systems over the reals. It has long been studied by the Symbolic Computation community, and has found recent interest in the Satisfiability Checking community.

The present report describes a proof-of-concept implementation of an incremental CAD algorithm in Maple, where CADs are built and then refined as additional polynomial constraints are added. The aim is to make CAD suitable for use as a theory solver for SMT tools, which search for solutions by repeatedly reformulating logical formulae and querying whether a solution is admissible.

We describe experiments for the proof of concept, which clearly display the computational advantages compared to iterated re-computation. In addition, the project implements this work under the recently verified Lazard projection scheme (with the corresponding Lazard valuation).
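
A one-variable analogue conveys the incremental idea (illustrative only; the report's implementation refines full cylindrical decompositions in Maple): adding a polynomial refines the existing cell decomposition rather than recomputing it from scratch:

    import numpy as np

    class IncrementalDecomposition1D:
        def __init__(self):
            self.roots = []                 # sorted real roots so far

        def add(self, coeffs):
            # insert the real roots of the new polynomial, refining the
            # decomposition in place
            new = [r.real for r in np.roots(coeffs) if abs(r.imag) < 1e-9]
            self.roots = sorted(set(self.roots) | {round(r, 9) for r in new})

        def cells(self):
            # open intervals between consecutive roots (cell sample
            # points omitted for brevity)
            pts = [-float('inf')] + self.roots + [float('inf')]
            return list(zip(pts, pts[1:]))

    d = IncrementalDecomposition1D()
    d.add([1, 0, -2])    # x^2 - 2: roots at +/- sqrt(2)
    d.add([1, -1])       # x - 1: refines the existing cells
    print(d.cells())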

15:00
RP: Evaluation of Equational Constraints for CAD in SMT Solving
SPEAKER: Rebecca Haehn

ABSTRACT. The cylindrical algebraic decomposition is a popular method for quantifier elimination in non-linear real algebra. We use it as a theory solver in the context of satisfiability-modulo-theories solving, to solve a sequence of related problems each consisting of a conjunction of constraints. To improve our technique in practice, we consider different optimizations in the projection phase based on equational constraints.

We review the existing ideas and propose how to modify an existing implementation. Though one might expect a significant speed-up for large problems, we discuss why our expectations are somewhat lower; in particular, satisfiable problems tend not to benefit from the modifications at all. Finally, we evaluate several different implementations and demonstrate a significant improvement due to the use of equational constraints.

15:30-16:00 Coffee Break
16:00-17:00 Session 67A: Electronic voting (CSF)
Location: Maths LT2
16:00
Alethea: A Provably Secure Random Sample Voting Protocol
SPEAKER: Lara Schmid

ABSTRACT. In random sample voting, only a randomly chosen subset of all eligible voters vote. We propose Alethea, the first random sample voting protocol that satisfies end-to-end verifiability and receipt-freeness. Our protocol makes explicit the distinction between the human voters and their devices. This allows us to make more fine-grained statements about the required capabilities and trust assumptions of each agent than is possible in previous work. Consequently, we can make more precise statements about the adversary models under which our protocol is secure.

Alethea is the first formally verified random sample voting protocol. This necessitates defining new security properties, in particular that the sample group is chosen randomly and that the chosen voters remain anonymous. We model the protocol and its properties in a symbolic model amenable to tool support. We verify most of the properties automatically and complete our analysis with traditional hand-written proofs when automatic analysis is infeasible or when we desire a more fine-grained analysis than our symbolic abstractions allow.

16:30
Machine-checked proofs for electronic voting: privacy and verifiability for Belenios

ABSTRACT. We present a machine-checked security analysis of Belenios, a deployed voting protocol that has already been used in more than 200 elections. Belenios extends Helios with an explicit registration authority to obtain eligibility guarantees.

We offer two main results. First, we build upon a recent framework for proving ballot privacy in EasyCrypt. Inspired by our application to Belenios, we adapt and extend the privacy security notions to account for protocols that include a registration phase. Our analysis identifies a trust assumption missing from the existing (pen-and-paper) analysis of Belenios: ballot privacy does not hold if the registrar misbehaves, even though the role of the registrar is seemingly only to provide eligibility guarantees. Second, we develop a novel framework for proving strong verifiability in EasyCrypt and apply it to Belenios. In the process, we clarify several aspects of the pen-and-paper proof, e.g. how to deal with revote policies.

Together, our results yield the first machine-checked analysis of both privacy and verifiability properties for a deployed electronic voting protocol. Last, but not least, we identify several issues regarding the applicability of existing definitions of privacy and verifiability to systems other than Helios. While we show how to adapt the definitions to the particular case of Belenios, our findings indicate a need for the development of less specific security notions for electronic voting protocols with registration authorities.

16:00-17:00 Session 67B: FSCD Invited talk: Grigore Rosu (FSCD)
16:00
Formal Design, Implementation and Verification of Blockchain Languages

ABSTRACT. Many of the recent cryptocurrency bugs and exploits are due to flaws or weaknesses of the underlying blockchain programming languages or virtual machines. The usual post-mortem approach to formal language semantics and verification, where the language is firstly implemented and used in production for many years before a need for formal semantics and verification tools naturally arises, simply does not work anymore. New blockchain languages or virtual machines are proposed at an alarming rate, followed by new versions of them every few weeks, together with programs (or smart contracts) in these languages that are responsible for financial transactions of potentially significant value. Formal analysis and verification tools are therefore needed immediately for such languages and virtual machines. We present recent academic and commercial results in developing blockchain languages and virtual machines that come directly equipped with formal analysis and verification tools. The main idea is to generate all these automatically, correct-by-construction from a formal specification. We demonstrate the feasibility of the proposed approach by applying it to two blockchains, Ethereum and Cardano.

16:00-18:00 Session 67C (ITP)
Location: Blavatnik LT1
16:00
Towards Formal Foundations for Game Theory (Short Paper)

ABSTRACT. Utility functions form an essential part of game theory and theoretical economics. In order to guarantee the existence of these utility functions, sufficient properties are assumed in an axiomatic manner. In this paper we discuss these axioms and a formal proof of the existence of linear utility functions. Furthermore, we consider the von Neumann–Morgenstern Utility Theorem, its importance for game theory and economics, and a formalization thereof. The formalization includes precise definitions of the underlying concepts, including continuity and independence of preferences. We make the dependencies more precise and highlight some consequences for a formalization of game theory.
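
For orientation, the textbook form of the theorem being formalized reads roughly as follows (notation here is illustrative; the paper's formal statement may differ in detail):

    % For a preference relation \succeq on the set \mathcal{L} of
    % lotteries satisfying completeness, transitivity, continuity and
    % independence, there is a utility u, unique up to positive affine
    % transformation, such that
    \forall p, q \in \mathcal{L}:\qquad
      p \succeq q
      \iff
      \sum_{x} p(x)\,u(x) \;\ge\; \sum_{x} q(x)\,u(x)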

16:20
MDP + TA = PTA: Probabilistic Timed Automata, Formalized (Short Paper)
SPEAKER: Simon Wimmer

ABSTRACT. We present a formalization of Probabilistic Timed Automata (PTA) for which we try to follow the formula ``MDP + TA = PTA'' as far as possible: our work starts from existing formalizations of Markov Decision Processes (MDP) and Timed Automata (TA) and combines them modularly. We prove the fundamental result for Probabilistic Timed Automata: the region construction that is known from Timed Automata carries over to the probabilistic setting. In particular, this allows us to prove that minimum and maximum reachability probabilities can be computed via a reduction to MDP model checking, including the case where one wants to disregard unrealizable behavior.

16:40
Towards Verified Handwritten Calculational Proofs (Short Paper)

ABSTRACT. Despite great advances in computer-assisted proof systems, writing formal proofs using a traditional computer is still challenging due to mouse-and-keyboard interaction. This leads to scientists often resorting to pen and paper to write their proofs. However, when handwriting a proof, there is no formal guarantee that the proof is correct. In this paper we address this issue and present the initial steps towards a system that allows users to handwrite proofs using a pen-based device and that communicates with an external theorem prover to support the users throughout the proof writing process. We focus on calculational proofs, whereby a theorem is proved by a chain of formulae, each transformed in some way into the next. We present the implementation of a proof-of-concept prototype that can formally verify handwritten calculational proofs without the need to learn the specific syntax of theorem provers.
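
A calculational proof, the format the system targets, is a chain of formulae linked by relations, each step justified by a hint; a standard example (not taken from the paper):

      (x + 1)^2 - (x - 1)^2
    =    { expand both squares }
      (x^2 + 2x + 1) - (x^2 - 2x + 1)
    =    { cancel x^2 and 1 }
      4x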

17:00
Formalizing Ring Theory in PVS (Short Paper)

ABSTRACT. This work presents the ongoing specification and formalization in the PVS proof assistant of some definitions and theorems of ring theory in abstract algebra, and briefly presents some of the results intended to be formalized. So far, some important theorems from ring theory have been specified and formally proved, such as the First Isomorphism Theorem, the Binomial Theorem, and the lemma establishing that every finite integral domain with cardinality greater than one is a field. The goal of the project in progress is to specify and formalize in PVS the main theorems from ring theory presented in undergraduate textbooks of abstract algebra, but in the short term the authors intend to formalize: (i) the Second and Third Isomorphism Theorems for rings; (ii) the primality of the characteristic of a ring without zero divisors; (iii) definitions of prime and maximal ideals and theorems related to those concepts. The developed formalization mainly applies a part of the NASA PVS library for abstract algebra, specified in the theory algebra.
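
For reference, the textbook statement of the first of these theorems (the PVS formulation may differ in detail):

    % First Isomorphism Theorem for rings: for a ring homomorphism
    % \varphi : R \to S,
    R / \ker\varphi \;\cong\; \operatorname{im}\varphi
    % where \ker\varphi is an ideal of R and \operatorname{im}\varphi
    % is a subring of S.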

17:20
Software Verification with ITPs Should Use Binary Code Extraction to Reduce the TCB (Short Paper)
SPEAKER: Ramana Kumar

ABSTRACT. LCF-style provers emphasise that all results are secured by logical inference, and yet their current facilities for code extraction or code generation fall short of this high standard. This paper argues that extraction mechanisms with a small trusted computing base (TCB) ought to be used instead, pointing out that the recent CakeML and Œuf projects show that this is possible in HOL and within reach in Coq.

16:00-17:00 Session 67E: MUS (SAT)
16:00
Minimal unsatisfiability and minimal strongly connected digraphs

ABSTRACT. Minimally Unsatisfiable clause-sets (MUs) are building blocks for understanding Minimally Unsatisfiable Sub-clause-sets (MUSs), and they are also interesting in their own right, as hardest unsatisfiable clause-sets. There are two important previously known but isolated characterisations for MUs, both with ingenious but complicated proofs: characterising 2-CNF MUs, and characterising MUs with deficiency 2 (two more clauses than variables). Via a novel connection to Minimal Strong Digraphs (MSDs), we give short and intuitive new proofs of these characterisations, revealing an underlying common structure. We obtain unique isomorphism-type descriptions of normal forms for both cases.

An MSD is a strongly connected digraph, where removing any arc destroys the strong connectedness. We introduce the new class DFM of special MUs, which are in close correspondence to MSDs. The known relation between 2-CNFs and digraphs is used, but in a simpler and more direct way. The non-binary clauses in DFMs can be ignored. For a binary clause (a or b) normally both arcs (-a -> b) and (-b -> a) need to be considered, but here we have a canonical choice of one of them.

The definition of DFM is indeed very simple: an MU where all clauses except two are binary, while the two non-binary clauses are the two "full monotone clauses", the disjunction of all variables and the disjunction of all negated variables. We expect this basic class of MUs to be very useful in the future. Every variable occurs at least twice positively and twice negatively ("non-singularity"), and the isomorphism problem for the class is GI-complete, while we have polytime decision, and likely good properties related to MUS-search. Via "saturations" and "marginalisations", adding resp. removing literal occurrences, as well as via "non-singular extensions", adding new variables, this class reaches out to apparently unrelated classes.
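
A smallest instance of this shape, checked by brute force (the example is constructed here for illustration, and at two variables the full monotone clauses happen to be binary as well):

    from itertools import product

    # Both full monotone clauses on {x1, x2} plus the two mixed binary
    # clauses; literals are signed integers as in DIMACS.
    clauses = [{1, 2}, {-1, -2}, {1, -2}, {-1, 2}]

    def sat(cs):
        # brute force: try all assignments a in {+1, -1}^2
        return any(all(any(l * a[abs(l) - 1] > 0 for l in c) for c in cs)
                   for a in product([1, -1], repeat=2))

    print(sat(clauses))                        # False: unsatisfiable
    print(all(sat(clauses[:i] + clauses[i + 1:])
              for i in range(len(clauses))))   # True: minimally so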

16:30
Finding all Minimal Safe Inductive Sets

ABSTRACT. Computing minimal (or even just small) certificates is a central problem in automated reasoning and, in particular, in automated formal verification. For unsatisfiable formulas in CNF such certificates take the form of Minimal Unsatisfiable Subsets (MUSes) and have a wide range of applications. As a formula can have multiple MUSes and different MUSes provide different insights on unsatisfiability, commonly studied problems include computing a smallest MUS (SMUS) or computing all MUSes (AllMUS) of a given unsatisfiable formula. In this paper, we consider certificates to safety properties in the form of Minimal Safe Inductive Invariants (MSISes), and we develop algorithms for exploring such certificates by computing a smallest MSIS (SMSIS) or computing all MSISes (AllMSIS) of a given safe inductive invariant. More precisely, we show how two well-known MUS enumeration algorithms, CAMUS and MARCO, can be adapted to MSIS enumeration.
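
The shrink-based enumeration pattern underlying such algorithms can be sketched in brute-force form (illustrative only; CAMUS and MARCO replace the exhaustive seed search with SAT-guided exploration, and the paper adapts the pattern from MUSes to MSISes):

    from itertools import chain, combinations

    def shrink(seed, is_unsat):
        # Drop elements while the set stays unsatisfiable -> minimal core.
        for c in sorted(seed):
            if is_unsat(seed - {c}):
                seed = seed - {c}
        return frozenset(seed)

    def all_minimal_cores(universe, is_unsat):
        cores = set()
        seeds = chain.from_iterable(
            combinations(universe, r) for r in range(1, len(universe) + 1))
        for s in map(frozenset, seeds):
            # skip seeds strictly containing a known (smaller) core
            if is_unsat(s) and not any(m < s for m in cores):
                cores.add(shrink(set(s), is_unsat))
        return cores

    # Toy "constraints": 1 and 2 conflict with each other, and 3 is
    # self-contradictory; exactly these two cores are found.
    is_unsat = lambda s: {1, 2} <= s or 3 in s
    print(all_minimal_cores({1, 2, 3}, is_unsat))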

16:00-17:30 Session 67F: Research papers (SCSC)
16:00
RP: Refutation of Products of Linear Polynomials
SPEAKER: Jan Horacek

ABSTRACT. In this paper we consider formulas that are conjunctions of linear clauses, i.e., of linear equations. Such formulas are very interesting because they encode CNF constraints that are typically very hard for SAT solvers. We introduce a new proof system SRES that works with linear clauses and show that SRES is implicationally and refutationally complete. Algebraically speaking, linear clauses correspond to products of linear polynomials over a ring of Boolean polynomials. That is why SRES can certify whether a product of linear polynomials lies in the ideal generated by some other such products, i.e., the SRES calculus decides the ideal membership problem. Furthermore, an algorithm for certifying inconsistent systems of the above shape is described. We also establish the connection with another combined proof system, R(lin).

16:30
RP: A Practical Polynomial Calculus for Arithmetic Circuit Verification

ABSTRACT. Generating and automatically checking proofs independently increases confidence in the results of automated reasoning tools. The use of computer algebra is an essential ingredient in recent substantial improvements to scale verification of arithmetic gate-level circuits, such as multipliers, to large bit-widths. There is also a large body of work on theoretical aspects of propositional algebraic proof systems in the proof complexity community, starting with the seminal paper introducing the polynomial calculus. We show that the polynomial calculus provides a framework in which to define a practical algebraic calculus (PAC) proof format capturing the low-level algebraic proofs needed in scalable gate-level verification of arithmetic circuits. We apply these techniques to generate proofs obtained as a by-product of verifying gate-level multipliers using state-of-the-art techniques. Our experiments show that these proofs can be checked efficiently with independent tools.

17:00
RP: Unknot Recognition Through Quantifier Elimination

ABSTRACT. Unknot recognition is one of the fundamental questions in low-dimensional topology. In this work, we show that this problem can be encoded as a validity problem in the existential fragment of the first-order theory of real closed fields. This encoding is derived using a well-known result of Kronheimer-Mrowka on SU(2) representations of knot groups. We further show that applying existential quantifier elimination to the encoding yields an unknot recognition algorithm with a complexity of the order $2^{O(n)}$, where $n$ is the number of crossings in the given knot diagram. Our algorithm is simple to describe and has the same runtime as the currently best known unknot recognition algorithms. This leads to an interesting class of problems, of interest to both SMT solving and knot theory.

16:10-17:25 Session 68A: Verification (ADHS)
16:10
Sufficient Conditions for Temporal Logic Specifications in Hybrid Dynamical Systems

ABSTRACT. In this paper, we introduce operators, semantics, and conditions that, where possible, are solution-independent and guarantee basic temporal logic specifications for hybrid dynamical systems. Employing sufficient conditions for forward invariance and finite-time attractivity of sets for such systems, we derive sufficient conditions for the satisfaction of formulas involving temporal operators and atomic propositions. Furthermore, we show how to certify formulas that have more than one operator. Academic examples illustrate the results throughout the paper.

16:35
Falsification of Temporal Logic Requirements Using Gradient Based Local Search in Space and Time

ABSTRACT. We study the problem of computing input signals that produce system behaviors falsifying requirements written in temporal logic. We provide a method to automatically search for falsifying time-varying uncertain inputs for nonlinear and possibly hybrid systems. The input to the system is parametrized using piecewise-constant signals with varying switch times. By applying small perturbations to the system input in space and time, and by using a gradient descent approach, we try to converge to the worst local system behavior. The experimental results on non-trivial benchmarks demonstrate that this local search can significantly improve the rate of finding falsifying counterexamples.
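
A minimal sketch of the approach on a toy system (the plant, requirement, and finite-difference gradient below are stand-ins, not the paper's benchmarks): the parameters are the two segment values and the switch time of a piecewise-constant input, and descent is on the robustness value of the requirement:

    import numpy as np

    def simulate(u1, u2, t_switch, T=10.0, dt=0.01):
        # toy plant x' = -x + u(t) driven by a piecewise-constant input
        x, xs = 0.0, []
        for k in range(int(T / dt)):
            u = u1 if k * dt < t_switch else u2
            x += dt * (-x + u)
            xs.append(x)
        return np.array(xs)

    def robustness(params):
        # requirement "always x < 0.9": a negative value means falsified
        u1, u2, t_switch = params
        return 0.9 - simulate(u1, u2, t_switch).max()

    def falsify(params, lr=0.5, iters=100, h=1e-4):
        params = np.array(params, dtype=float)
        for _ in range(iters):
            grad = np.array(
                [(robustness(params + h * e) - robustness(params)) / h
                 for e in np.eye(3)])
            params -= lr * grad         # descend the robustness value
            if robustness(params) < 0:
                return params           # falsifying input found
        return None

    print(falsify([0.2, 0.3, 5.0]))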

17:00
Higher-Dimensional Timed Automata

ABSTRACT. We introduce a new formalism of higher-dimensional timed automata, based on van Glabbeek’s higher-dimensional automata and Alur’s timed automata. We prove that their reachability is PSPACE-complete and can be decided using zone-based algorithms. We also show how to use tensor products to combat state-space explosion and how to extend the setting to higher-dimensional hybrid automata.

16:10-17:25 Session 68B: Stability (ADHS)
16:10
Compositional Analysis of Hybrid Systems Defined Over Finite Alphabets

ABSTRACT. We consider the stability and the input-output analysis problems of a class of large-scale hybrid systems composed of continuous dynamics coupled with discrete dynamics defined over finite alphabets, e.g., deterministic finite state machines (DFSMs). This class of hybrid systems can be used to model physical systems controlled by software. For such classes of systems, we use a method based on dissipativity theory for compositional analysis that allows us to study stability, passivity and input-output norms. We show that the certificates of the dissipativity-based method can be computed by solving a set of semi-definite programs. Nonetheless, the formulation based on semi-definite programs becomes computationally intractable for relatively large numbers of discrete and continuous states. We demonstrate that, for systems with a large number of states consisting of an interconnection of smaller hybrid systems, the accelerated alternating direction method of multipliers (ADMM) can be used to carry out the computations in a scalable and distributed manner. The proposed methodology is illustrated by an example of a system with 60 continuous states and 18 discrete states.
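
The single-subsystem kernel of such certificate searches is a semidefinite program; a sketch for one continuous mode using cvxpy (illustrative only: the paper composes such certificates across modes and distributes the computation via ADMM, and an SDP-capable solver such as SCS is assumed to be installed):

    import numpy as np
    import cvxpy as cp

    A = np.array([[0.0, 1.0], [-2.0, -3.0]])        # one stable mode
    P = cp.Variable((2, 2), symmetric=True)
    eps = 1e-3
    constraints = [P >> eps * np.eye(2),             # P positive definite
                   A.T @ P + P @ A << -eps * np.eye(2)]  # decrease condition
    cp.Problem(cp.Minimize(cp.trace(P)), constraints).solve()
    print(P.value)    # Lyapunov certificate V(x) = x' P x for this mode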

16:35
Razumikhin-type Theorems on Practical Stability of Dynamic Equations on Time Scales

ABSTRACT. In this work, we investigate Razumikhin-type criteria for the uniform global practical asymptotic stability of time-varying dynamic equations on arbitrary time domains. Using Lyapunov-type functions on time scales, we develop appropriate inequalities ensuring that trajectories decay asymptotically to a neighborhood of the trivial solution. Some numerical examples are discussed to illustrate our results.

17:00
Switch induced instabilities for stable power system DAE models

ABSTRACT. It is well known that for switched systems the overall dynamics can be unstable despite stability of all individual modes. We show that this phenomenon can indeed occur for a linearized DAE model of power grids. By making certain topological assumptions on the power grid, we can ensure stability under arbitrary switching for the linearized DAE model.

17:00-18:30 Session 69B: LICS Award Session & Martin Hofmann Memorial (LICS)
Location: Maths LT1
17:00
LICS Awards: The Kleene Award and the Test of Time Award
17:45
Remembering Martin Hofmann
19:00-21:30 FLoC banquet at Examination Schools (FLoC)

FLoC banquet at Examination Schools. Drinks and food available from 7pm (pre-booking via FLoC registration system required; guests welcome).