11:00 | SPEAKER: Charlie Jacomme ABSTRACT. Passwords are still the most widespread means for authenticating |
11:30 | ABSTRACT. We present composition theorems for security protocols, to compose a key exchange protocol and a symmetric-key protocol that uses the exchanged key. Our results rely on the computational model of cryptography and are stated in the framework of the tool CryptoVerif. They support key exchange protocols that guarantee injective or non-injective authentication. They also allow random oracles shared between the composed protocols. To our knowledge, they are the first composition theorems for key exchange stated for a computational protocol verification tool, and also the first to allow such flexibility. As a case study, we apply our composition theorems to a proof of TLS 1.3 Draft-18. This work fills a gap in a previous paper that informally claims a compositional proof of TLS 1.3, without formally justifying it. |
12:00 | SPEAKER: Giorgia Azzurra Marson ABSTRACT. Cryptographic channels aim to enable authenticated and confidential communication over the Internet. The general understanding seems to be that providing security in the sense of authenticated encryption for every (unidirectional) point-to-point link suffices to achieve this goal. As recently shown (in FSE17/ToSC17), however, even in the bidirectional case just requiring the two unidirectional links to provide security independently of each other does not lead to a secure solution in general. Informally, the reason for this is that the increased interaction in bidirectional communication may be exploited by an adversary. The same argument applies, a fortiori, in a multi-party setting where several users operate concurrently and the communication develops in more directions. In the cryptographic literature, however, the targeted goals for group communication in terms of channel security are still unexplored. Applying the methodology of provable security, we fill this gap by (i) defining exact (game-based) authenticity and confidentiality goals for broadcast communication and (ii) showing how to achieve them. Importantly, our security notions also account for the causal dependencies between exchanged messages, thus naturally extending the bidirectional case where causal relationships are automatically captured by preserving the sending order. On the constructive side we propose a modular and yet efficient protocol that, assuming only reliable point-to-point links between users, combines (non-cryptographic) broadcast with standard cryptographic primitives to build a full-fledged broadcast channel that provably meets the security notions we put forth. |
11:00 | SPEAKER: Willem Heijltjes ABSTRACT. Bi-Intuitionistic Linear Logic (BILL) is an extension of Intuitionistic Linear Logic with a par, dual to the tensor, and subtraction, dual to linear implication. It is the logic of categories with a monoidal closed and a monoidal co-closed structure that are related by linear distributivity, a strength of the tensor over the par. It conservatively extends Full Intuitionistic Linear Logic (FILL), which includes only the par. We give proof nets for the multiplicative, unit-free fragment MBILL-. Correctness is by local rewriting in the style of Danos contractibility. This rewrite relation yields sequentialization into a relational sequent calculus that extends the existing one for FILL. We give a second, geometric correctness condition via Danos-Regnier switching, and demonstrate composition both inductively and as a one-off global operation. |
11:30 | ABSTRACT. This paper establishes a bridge between linear logic and mainstream graph theory, building on previous work by Retoré (2003). We show that the problem of correctness for MLL+Mix proof nets is equivalent to the problem of uniqueness of a perfect matching. By applying matching theory, we obtain new results for MLL+Mix proof nets: a linear-time correctness criterion, a quasi-linear sequentialization algorithm, and a characterization of the sub-polynomial complexity of the correctness problem. We also use graph algorithms to compute the dependency relation of Bagnol et al. (2015) and the kingdom ordering of Bellin (1997), and relate them to the notion of blossom which is central to combinatorial maximum matching algorithms. |
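To make the target property concrete, here is a brute-force check of "unique perfect matching" for small graphs (a toy sketch of ours, exponential in general; the point of the result above is that for the graphs arising from MLL+Mix proof nets, uniqueness can be decided in linear time):

```python
# Toy brute-force check of uniqueness of a perfect matching.
def perfect_matchings(vertices, edges):
    adj = {v: set() for v in vertices}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    def rec(remaining):
        if not remaining:
            yield frozenset()
            return
        v, rest_all = remaining[0], remaining[1:]
        for u in adj[v]:
            if u in rest_all:                       # match v with u
                rest = [w for w in rest_all if w != u]
                for m in rec(rest):
                    yield m | {frozenset((u, v))}
    yield from rec(sorted(vertices))

def unique_perfect_matching(vertices, edges):
    it = perfect_matchings(vertices, edges)
    return next(it, None) is not None and next(it, None) is None

# The path 1-2-3-4 has exactly one perfect matching; the 4-cycle has two.
assert unique_perfect_matching([1, 2, 3, 4], [(1, 2), (2, 3), (3, 4)])
assert not unique_perfect_matching([1, 2, 3, 4], [(1, 2), (2, 3), (3, 4), (4, 1)])
```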
12:00 | ABSTRACT. A categorical model of the multiplicative and exponential fragments of intuitionistic linear logic (IMELL), known as a linear category, is a symmetric monoidal closed category with a monoidal coalgebra modality (also known as a linear exponential comonad). Inspired by R. Blute and P. Scott's work on categories of modules of Hopf algebras as models of linear logic, we study Eilenberg-Moore categories of monads as models of IMELL. We define an IMELL lifting monad on a linear category as a Hopf monad (in the sense of Bruguieres, Lack, and Virelizier) with a mixed distributive law over the monoidal coalgebra modality. As our main result, we show that the linear category structure lifts to Eilenberg-Moore categories of IMELL lifting monads. We explain how monoids in the Eilenberg-Moore category of the monoidal coalgebra modality can induce IMELL lifting monads and provide sources for such monoids. Along the way, we also define mixed distributive laws of bimonads over coalgebra modalities and the lifting of differential category structure to Eilenberg-Moore categories of exponential lifting monads. |
11:00 | SPEAKER: Mikołaj Bojańczyk ABSTRACT. We prove that for every positive integer k, there exists an MSO_1-transduction that given a graph of linear cliquewidth at most k outputs, nondeterministically, some clique decomposition of the graph of width bounded by a function of k. A direct corollary of this result is the equivalence of the notions of CMSO_1-definability and recognizability on graphs of bounded linear cliquewidth. |
11:20 | SPEAKER: Sebastian Siebertz ABSTRACT. We prove that for every class $C$ of graphs with effectively bounded expansion, given a first-order sentence $\varphi$ and an $n$-element structure $A$ whose Gaifman graph belongs to $C$, the question whether $\varphi$ holds in $A$ can be decided by a family of AC-circuits of size $f(\varphi)\cdot n^c$ and depth $f(\varphi)+c\log n$, where $f$ is a computable function and $c$ is a universal constant. This places the model-checking problem for classes of bounded expansion in the parameterized circuit complexity class $paraAC^1$. On the route to our result we prove that the basic decomposition toolbox for classes of bounded expansion, including orderings with bounded weak coloring numbers and low treedepth decompositions, can be computed in $paraAC^1$. |
11:40 | SPEAKER: Dror Fried ABSTRACT. The concept of decomposition in computer science and engineering is considered a fundamental component of computational thinking and is prevalent in the design of algorithms, software construction, hardware design, and more. We propose a simple and natural formalization of sequential decomposition, in which a task is decomposed into two sequential sub-tasks, with the first sub-task executed before the second. These tasks are specified by means of input/output relations. We define and study decomposition problems, which ask whether a given specification can be sequentially decomposed. Our main result is that decomposition itself is a difficult computational problem. More specifically, we study decomposition problems in three settings: where the input task is specified explicitly, by means of Boolean circuits, and by means of automatic relations. We show that in the first setting decomposition is NP-complete, in the second setting it is NEXPTIME-complete, and in the third setting there is evidence to suggest that it is undecidable. Our results indicate that the intuitive idea of decomposition as a system-design approach requires further investigation. In particular, we show that adding a human to the loop by asking for a decomposition hint lowers the complexity of decomposition problems considerably. |
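In the simplest reading of this setting (our rendering; the paper's definition may impose further requirements such as totality), a specification R ⊆ I × O decomposes sequentially when there are relations R1 ⊆ I × M and R2 ⊆ M × O over some intermediate domain M with

```latex
\[
(i, o) \in R \iff \exists m \in M.\ (i, m) \in R_1 \wedge (m, o) \in R_2 .
\]
```

For instance, "return the sorted input with duplicates removed" decomposes into R1 = "sort" followed by R2 = "remove adjacent duplicates".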
12:20 | SPEAKER: Yijia Chen ABSTRACT. The complexity of the parameterized halting problem for nondeterministic Turing machines p-Halt is known to be related to the question of whether there are logics capturing various complexity classes [Chen and Flum, 2012]. Among other results, if p-Halt is in para-AC^0, the parameterized version of the circuit complexity class AC^0, then AC^0, or equivalently, (+,×)-invariant FO, has a logic. Although it is widely believed that p-Halt ∉ para-AC^0, we show that the problem is hard to settle by establishing a connection to the question in classical complexity of whether NE ⊈ LINH. Here, LINH denotes the linear time hierarchy. On the other hand, we suggest an approach toward proving NE ⊈ LINH using bounded arithmetic. More specifically, we demonstrate that if the much celebrated MRDP (for Matiyasevich-Robinson-Davis-Putnam) theorem can be proved in a certain fragment of arithmetic, then NE ⊈ LINH. Interestingly, central to this result is a para-AC^0 lower bound for the parameterized model-checking problem for FO on arithmetical structures. |
12:20 | SPEAKER: Mikolaj Bojanczyk ABSTRACT. We define two classes of functions, called regular (respectively, first-order) list functions, which manipulate objects such as lists, lists of lists, pairs of lists, lists of pairs of lists, etc. The definition is in the style of regular expressions: the functions are constructed by starting with some basic functions (e.g. projections from pairs, or head and tail operations on lists) and putting them together using four combinators (most importantly, composition of functions). Our main results are that first-order list functions are exactly the same as first-order transductions, under a suitable encoding of the inputs; and the regular list functions are exactly the same as MSO-transductions. |
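A toy function algebra in this spirit, with illustrative basic functions and combinators (ours, not the paper's exact four):

```python
# Illustrative basic functions and combinators in the style of the list
# functions described above (names and selection are ours).
def compose(f, g):             # combinator: run g first, then f
    return lambda x: f(g(x))

def pair(f, g):                # combinator: apply f and g to the same input
    return lambda x: (f(x), g(x))

def mapf(f):                   # combinator: lift f pointwise over lists
    return lambda xs: [f(x) for x in xs]

def fst(p): return p[0]        # basic functions: projections from pairs
def snd(p): return p[1]
def head(xs): return xs[0]     # basic functions: list destructors
def tail(xs): return xs[1:]

# Example: swap the components of every pair in a list of pairs,
# and extract the second element of a list by composition.
swap_all = mapf(pair(snd, fst))
second = compose(head, tail)
assert swap_all([(1, 'a'), (2, 'b')]) == [('a', 1), ('b', 2)]
assert second([10, 20, 30]) == 20
```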
11:00 | ABSTRACT. Building on recently established enumerative connections between lambda calculus and the theory of embedded graphs (or "maps"), this paper develops an analogy between typing (of lambda terms) and coloring (of maps). Our starting point is the classical notion of an abelian group-valued "flow" on an abstract graph (Tutte, 1954). Typing a linear lambda term may be naturally seen as constructing a flow (on an embedded 3-valent graph with boundary) valued in a more general algebraic structure consisting of a preordered set equipped with an "implication" operation and unit satisfying composition, identity, and unit laws. Interesting questions and results from the theory of flows (such as the existence of nowhere-zero flows) may then be re-examined from the standpoint of lambda calculus and logic. For example, we give a characterization of when the local flow relations (across vertices) may be categorically lifted to a global flow relation (across the boundary), proving that this holds just in case the underlying map has the orientation of a lambda term. We also develop a basic theory of rewriting of flows that suggests topological meanings for classical completeness results in combinatory logic, and introduce a polarized notion of flow, which draws connections to the theory of proof-nets in linear logic and to bidirectional typing. |
11:20 | SPEAKER: Kuen-Bang Hou Favonia ABSTRACT. We present a development of cellular cohomology in homotopy type theory. Cohomology associates to each space a sequence of abelian groups capturing part of its structure, and has the advantage over homotopy groups that, for many common spaces, these abelian groups are easier to compute. Cellular cohomology is a special kind of cohomology designed for cell complexes: these are built in stages by attaching spheres of progressively higher dimension, and cellular cohomology defines the groups out of the combinatorial description of how spheres are attached. Our main result is that for finite cell complexes, a wide class of cohomology theories (including the ones defined through Eilenberg-MacLane spaces) can be calculated via cellular cohomology. This result was formalized in the Agda proof assistant. |
11:40 | SPEAKER: Nicolai Kraus ABSTRACT. Given a type A in homotopy type theory (HoTT), we define the free infinity-group on A as the higher inductive type FA with constructors [unit : FA], [cons : A -> FA -> FA], and conditions saying that every cons(a) is an auto-equivalence on FA. Assuming that A is a set (i.e. satisfies uniqueness of identity proofs), we are interested in the question whether FA is a set as well, which is closely related to an open problem in the HoTT book [Ex. 8.2]. In this paper, we prove an approximation to this statement, namely that the fundamental groups of FA are trivial. |
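One conceivable shape for such a higher inductive type, a speculative sketch only (the paper's actual conditions making each cons(a) an equivalence may well be stated differently), adds a formal inverse with two cancellation paths:

```latex
\[
\begin{array}{l}
\mathsf{unit} : FA, \qquad \mathsf{cons} : A \to FA \to FA, \qquad \mathsf{inv} : A \to FA \to FA, \\[2pt]
\mathsf{cancel}_1 : \prod_{a:A}\prod_{x:FA}\ \mathsf{cons}\ a\ (\mathsf{inv}\ a\ x) = x, \qquad
\mathsf{cancel}_2 : \prod_{a:A}\prod_{x:FA}\ \mathsf{inv}\ a\ (\mathsf{cons}\ a\ x) = x,
\end{array}
\]
```

so that inv a is a quasi-inverse of cons a, exhibiting each cons a as an equivalence.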
12:00 | SPEAKER: Floris van Doorn ABSTRACT. We present a development of the theory of higher groups, including infinity groups and connective spectra, in homotopy type theory. An infinity group is simply the loops in a pointed, connected type, where the group structure comes from the structure inherent in the identity types of Martin-Löf type theory. We investigate ordinary groups from this viewpoint, as well as higher dimensional groups and groups that can be delooped more than once. A major result is the stabilization theorem, which states that if an n-type can be delooped n+2 times, then it has the structure of an infinite loop type. Most of the results have been formalized in the Lean proof assistant. |
12:20 | ABSTRACT. A useful connective that has not previously been made to work in focused logic is the strong sum, a form of dependent sum that is eliminated by projection rather than pattern matching. This makes strong sums powerful, but it also creates a problem adapting them to focusing: The type of the right projection from a strong sum refers to the term being projected from, but due to the structure of focused logic, that term is not available. In this work we confirm that strong sums can be viewed as a negative connective in focused logic. The key is to resolve strong sums' dependencies eagerly, before projection can see them, using a notion of selfification adapted from module type systems. We validate the logic by proving cut admissibility and identity expansion. All the proofs are formalized in Coq. |
11:00 | Welcome to SAT 2018 SPEAKER: Olaf Beyersdorff |
11:05 | Dependency Quantified Boolean Formulas: An Overview of Solution Methods and Applications ABSTRACT. Dependency quantified Boolean formulas (DQBFs) as a generalization of quantified Boolean formulas (QBFs) have received considerable attention in recent years. Here we give an overview of the solution methods developed for DQBF so far. The exposition is complemented with a discussion of various applications that can be handled with DQBF solving. |
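A small textbook-style example of the syntax (not taken from the talk): each existential variable carries an explicit dependency set of universal variables,

```latex
\[
\forall x_1 \forall x_2\ \exists y_1(x_1)\ \exists y_2(x_2).\
(y_1 \leftrightarrow x_1) \wedge (y_2 \leftrightarrow x_2)
\]
```

This formula is true, witnessed by the Skolem functions y_1 = x_1 and y_2 = x_2. The dependency sets {x_1} and {x_2} are incomparable, so no linear QBF prefix induces exactly these sets; this extra freedom makes DQBF more succinct than QBF, at the price of truth checking being NEXPTIME-complete rather than PSPACE-complete.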
12:00 | SPEAKER: Denis Firsov ABSTRACT. It is common to model inductive datatypes as least fixed points of functors. We show that within the Cedille type theory we can relax functoriality constraints and generically derive an induction principle for Mendler-style lambda-encoded inductive datatypes, which arise as least fixed points of covariant schemes where the morphism lifting is defined only on identities. Additionally, we implement a destructor for these lambda-encodings that runs in constant time. As a result, we can define lambda-encoded natural numbers with an induction principle and a constant-time predecessor function so that the normal form of a numeral requires only linear space. The paper also includes several more advanced examples. |
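For intuition about constant-time destructors on lambda-encodings, here is the classic Scott encoding of the naturals in Python (our illustration; the paper's Mendler-style construction in Cedille is more refined, deriving induction as well):

```python
# Scott-encoded naturals: a numeral is a function taking a "successor case"
# and a "zero case". Unlike Church numerals, suc(n) stores n itself, so the
# predecessor is a single application: constant time.
zero = lambda s, z: z
suc = lambda n: (lambda s, z: s(n))

def pred(n):
    return n(lambda m: m, zero)      # O(1): project out the stored predecessor

def to_int(n):                       # decode, for testing only
    return n(lambda m: 1 + to_int(m), 0)

three = suc(suc(suc(zero)))
assert to_int(three) == 3
assert to_int(pred(three)) == 2
assert to_int(pred(zero)) == 0       # pred of zero defaults to zero
```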
14:00 | ABSTRACT. We present a Coq formalization of Alexandre Miquel’s implicative algebras, which aim at providing a general algebraic framework for the study of classical realizability models. We first give a self-contained presentation of the underlying implicative structures, which roughly consist of a complete lattice equipped with a binary law representing the implication. We then explain how these structures can be turned into models by adding separators, giving rise to the so-called implicative algebras. Additionally, we show how they generalize Boolean and Heyting algebras as well as the usual algebraic structures used in the analysis of classical realizability. |
14:30 | SPEAKER: Samuel Balco ABSTRACT. We present a software tool for reasoning in and about propositional sequent calculi for modal logics of actions. As an example, we implement the display calculus D.EAK of dynamic epistemic logic. The tool generates embeddings of the calculus in the theorem prover Isabelle for formalising proofs about D.EAK. Integrating propositional reasoning in D.EAK with inductive reasoning in Isabelle, we verify in Isabelle the solution of the muddy children puzzle for any number of muddy children. There is also a set of meta-tools that allows us to adapt the software for a wide variety of user-defined calculi. |
15:00 | SPEAKER: Colm Baston ABSTRACT. We study the coinductive formulation of common knowledge in type theory. We formalise both the traditional relational semantics and an operator semantics, similar in form to the epistemic system S5, but at the level of events on possible worlds rather than as a logical derivation system. We have two major new results. Firstly, the operator semantics is equivalent to the relational semantics: we discovered that this requires a new hypothesis of semantic entailment on operators, not known in previous literature. Secondly, the coinductive version of common knowledge is equivalent to the traditional transitive closure on the relational interpretation. All results are formalised in the proof assistants Agda and Coq. |
14:00 | ABSTRACT. We present a new quasi-polynomial algorithm for solving parity games. It is based on a new bisimulation-invariant measure of complexity for parity games, called the register-index, which captures the complexity of the priority assignment. For fixed parameter k, the class of games with register-index bounded by k is solvable in polynomial time. We show that the register-index of parity games of size n is bounded by O(log n) and derive a quasi-polynomial algorithm. Finally we give the first descriptive complexity account of the quasi-polynomial solvability of parity games: The winning regions of parity games with p priorities and register-index k are described by a modal μ-calculus formula whose complexity, as measured by its alternation depth, depends on k rather than on p. |
14:20 | SPEAKER: Laure Daviaud ABSTRACT. In a mean-payoff parity game, one of the two players aims both to achieve a qualitative parity objective and to minimize a quantitative long-term average of payoffs (aka. mean payoff). The game is zero-sum and hence the aim of the other player is to either foil the parity objective or to maximize the mean payoff. Our main technical result is a pseudo-quasi-polynomial algorithm for solving mean-payoff parity games. All algorithms for the problem that have been developed for over a decade have both a pseudo-polynomial and an exponential factor in their running times; in the running time of our algorithm the latter is replaced with a quasi-polynomial one. Our main conceptual contributions are the definitions of strategy decompositions for both players, and a notion of progress measures for mean-payoff parity games that generalizes both parity and energy progress measures. The former provides normal forms for and succinct representations of winning strategies, and the latter enables the application to mean-payoff parity games of the order-theoretic machinery that underpins a recent quasi-polynomial algorithm for solving parity games. |
14:40 | SPEAKER: Emmanuel Filiot ABSTRACT. In this paper, we study the rational synthesis problem for multi-player non zero-sum games played on finite graphs for omega-regular objectives. Rationality is formalized by the concept of Nash equilibrium (NE). Contrary to previous works, we consider in this work the more general and more practically relevant case where players are imperfectly informed. In sharp contrast with the perfect information case, NE are not guaranteed to exist in this more general setting. This motivates the study of the NE existence problem. We show that this problem is ExpTime-C for parity objectives in the two-player case (even if both players are imperfectly informed) and undecidable for more than 2 players. We then study the rational synthesis problem and show that the problem is also ExpTime-C for two imperfectly informed players and undecidable for more than 3 players. As the rational synthesis problem considers a system (Player 0) playing against a rational environment (composed of k players), we also consider the natural case where only Player 0 is imperfectly informed about the state of the environment (and the environment is considered as perfectly informed). In this case, we show that the ExpTime-C result holds when k is arbitrary but fixed. We also analyse the complexity when k is part of the input. |
15:00 | SPEAKER: M. Praveen ABSTRACT. We introduce two-player games which build words over infinite alphabets, and we study the problem of checking the existence of winning strategies. These games are played by two players, who take turns in choosing valuations for variables ranging over an infinite data domain, thus generating multi-attributed data words. The winner of the game is specified by formulas in the Logic of Repeating Values, which can reason about repetitions of data values in infinite data words. We prove that it is undecidable to check if one of the players has a winning strategy, even in very restrictive settings. However, we prove that if one of the players is restricted to choose valuations ranging over the Boolean domain, the games are effectively equivalent to single-sided games on vector addition systems with states (in which one of the players can change control states but cannot change counter values), known to be decidable and effectively equivalent to energy games. Previous works have shown that the satisfiability problem for various variants of the logic of repeating values is equivalent to the reachability and coverability problems in vector addition systems. Our results raise this connection to the level of games, augmenting further the associations between logics on data words and counter systems. |
15:20 | SPEAKER: Jules Hedges ABSTRACT. We introduce open games as a compositional foundation of economic game theory. A compositional approach potentially allows methods of game theory and theoretical computer science to be applied to large-scale economic models for which standard economic tools are not practical. An open game represents a game played relative to an arbitrary environment and to this end we introduce the concept of coutility, which is the utility generated by an open game and returned to its environment. Open games are the morphisms of a symmetric monoidal category and can therefore be composed by categorical composition into sequential move games and by monoidal products into simultaneous move games. Open games can be represented by string diagrams which provide an intuitive but formal visualisation of the information flows. We show that a variety of games can be faithfully represented as open games in the sense of having the same Nash equilibria and off-equilibrium best responses. |
14:00 | SPEAKER: Roberto Bruni ABSTRACT. Assigning a satisfactory truly concurrent semantics to Petri nets with confusion and distributed decisions is a long-standing problem, especially if one wants to resolve decisions by drawing from some probability distribution. Here we propose a general solution based on a recursive, static decomposition of (occurrence) nets into loci of decision, called structural branching cells (s-cells). Each s-cell exposes a set of alternatives, called transactions. Our solution transforms a given Petri net into another net whose transitions are the transactions of the s-cells and whose places are those of the original net, with some auxiliary structure for bookkeeping. The resulting net is confusion-free, and thus conflicting alternatives can be equipped with probabilistic choices, while nonintersecting alternatives are purely concurrent and their probability distributions are independent. The validity of the construction is witnessed by a tight correspondence with the recursively stopped configurations of Abbes and Benveniste. Some advantages of our approach are that: i) s-cells are defined statically and locally in a compositional way; ii) our resulting nets exhibit the complete concurrency property. |
14:20 | SPEAKER: Dan Frumin ABSTRACT. We present ReLoC: a logic for proving refinements of programs in a language with higher-order state, fine-grained concurrency, polymorphism and recursive types. The core of our logic is a judgement e ≾ e' : τ, which expresses that a program e refines a program e' at type τ. In contrast to earlier work on refinements for languages with higher-order state and concurrency, ReLoC provides type- and structure-directed rules for manipulating this judgement, whereas previously, such proofs were carried out by unfolding the judgement into its definition in the model. These more abstract proof rules make it simpler to carry out refinement proofs. Moreover, we introduce logically atomic relational specifications: a novel approach for relational specifications for compound expressions that take effect at a single instant in time. We demonstrate how to formalise and prove such relational specifications in ReLoC, allowing for more modular proofs. ReLoC is built on top of the expressive concurrent separation logic Iris, allowing us to leverage features of Iris such as invariants and ghost state. We provide a mechanisation of our logic in Coq, which does not just contain a proof of soundness, but also tactics for interactively carrying out refinement proofs. We have used these tactics to mechanise several examples, which demonstrates the practicality and modularity of our logic. |
14:40 | SPEAKER: Adrien Durier ABSTRACT. We study Milner's encoding of the call-by-value lambda-calculus in the pi-calculus. We show that, by tuning the encoding to two subcalculi of the pi-calculus (Internal pi and Asynchronous Local pi), the equivalence on lambda-terms induced by the encoding coincides with Lassen's eager normal form bisimilarity, extended to handle eta-equality. As behavioural equivalence in the pi-calculus we consider contextual equivalence and barbed congruence. We also extend the results to preorders. A crucial technical ingredient in the proofs is the recently-introduced technique of unique solutions of equations, further developed in this paper. In this respect, the paper also intends to be an extended case study on the applicability and expressiveness of the technique. |
15:00 | SPEAKER: Ki Yung Ahn ABSTRACT. Quasi-open bisimilarity is the coarsest notion of bisimilarity for the pi-calculus that is also a congruence. This work extends quasi-open bisimilarity to handle mismatch (guards with inequalities). This minimal extension of quasi-open bisimilarity allows fresh names to be manufactured to provide constructive evidence that an inequality holds. The extension of quasi-open bisimilarity is canonical and robust: it coincides with open barbed bisimilarity (an objective notion of bisimilarity congruence) and is characterised by an intuitionistic variant of an established modal logic. The more famous open bisimilarity is also considered, for which the coarsest extension for handling mismatch is identified. Applications to symbolic equivalence checking and symbolic model checking are highlighted, e.g., for verifying privacy properties. Theorems and examples are mechanised using the proof assistant Abella. |
15:20 | SPEAKER: Romain Demangeon ABSTRACT. This paper studies the complexity of pi-calculus processes with respect to the quantity of transitions caused by an incoming message. First we propose a typing system for integrating Bellantoni and Cook's characterisation of polynomially-bound recursive functions into Deng and Sangiorgi's typing system for termination. We then define computational complexity of distributed messages based on Degano and Priami's causal semantics, which identifies the dependency between interleaved transitions. Next we apply a syntactic flow analysis to typable processes to ensure the computational bound of distributed messages. We prove that our analysis is decidable for a given process; sound in the sense that it guarantees that the total number of messages causally dependent on an input request received from the outside is bounded by a polynomial in the content of this request; and complete, which means that each polynomial recursive function can be computed by a typable process. |
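For context, the standard Bellantoni-Cook discipline (textbook material, not specific to this paper) splits arguments into normal and safe positions, written f(x̄; ā), and replaces unrestricted primitive recursion by safe recursion on notation:

```latex
\[
f(0,\bar{x};\bar{a}) = g(\bar{x};\bar{a}), \qquad
f(s_i(y),\bar{x};\bar{a}) = h_i\big(y,\bar{x};\ \bar{a},\, f(y,\bar{x};\bar{a})\big), \quad i \in \{0,1\},
\]
```

where the recursive value occurs only in a safe position and can never itself be recursed on; this syntactic restriction characterizes exactly the polynomial-time functions.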
14:00 | SPEAKER: Rüdiger Ehlers ABSTRACT. The effective use of satisfiability (SAT) solvers requires problem encodings that make good use of the reasoning techniques employed in such solvers, such as unit propagation and clause learning. Propagation completeness has been proposed as a useful property for constraint encodings as it maximizes the utility of unit propagation. Experimental results on using encodings with this property in the context of satisfiability modulo theory (SMT) solving have however remained inconclusive, as such encodings are typically very large, which increases the bookkeeping work of solvers. In this paper, we introduce approximate propagation completeness and approximate conflict propagation as novel SAT encoding property notions. While approximate propagation completeness is a generalization of classical propagation completeness, (approximate) conflict propagation is a new concept for reasoning about how early conflicts can be detected by a SAT solver. Both notions together span a hierarchy of encoding quality choices, with classical propagation completeness as a special case. We show how to compute approximately propagation complete and conflict propagating constraint encodings with a minimal number of clauses using a reduction to MaxSAT. To evaluate the effect of such encodings, we give results on applying them in a case study. |
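As a reference point for the classical notion, a brute-force checker for propagation completeness of tiny CNF encodings (our sketch of the definition only; the paper computes minimal approximately propagation complete encodings via a MaxSAT reduction, a different and far more scalable construction). Clauses are lists of nonzero integers in the DIMACS convention:

```python
from itertools import combinations, product

def unit_propagate(clauses, assignment):
    """Closure of a set of literals under unit propagation; None on conflict."""
    assign = set(assignment)
    changed = True
    while changed:
        changed = False
        for c in clauses:
            if any(l in assign for l in c):
                continue                               # clause already satisfied
            pending = [l for l in c if -l not in assign]
            if not pending:
                return None                            # all literals falsified
            if len(pending) == 1:
                assign.add(pending[0])
                changed = True
    return assign

def satisfiable(clauses, assign, variables):
    """Exhaustive SAT check of the clauses together with the literals in assign."""
    for bits in product([False, True], repeat=len(variables)):
        model = {v if b else -v for v, b in zip(variables, bits)}
        if assign <= model and all(any(l in model for l in c) for c in clauses):
            return True
    return False

def propagation_complete(clauses, variables):
    """True iff, from every partial assignment, unit propagation derives
    every entailed literal (and detects every conflict)."""
    lits = [v for v in variables] + [-v for v in variables]
    for k in range(len(variables) + 1):
        for vs in combinations(variables, k):
            for signs in product([1, -1], repeat=k):
                partial = {s * v for v, s in zip(vs, signs)}
                closure = unit_propagate(clauses, partial)
                if not satisfiable(clauses, partial, variables):
                    if closure is not None:
                        return False                   # UP missed a conflict
                    continue
                for l in lits:
                    entailed = not satisfiable(clauses, partial | {-l}, variables)
                    if entailed and l not in closure:
                        return False                   # UP missed an implied literal
    return True

# A single clause is propagation complete: assigning x1=False immediately
# propagates x2=True.
assert propagation_complete([[1, 2]], [1, 2])
```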
14:30 | SPEAKER: Tobias Paxian ABSTRACT. In this paper we present a novel cardinality constraint encoding for solving the weighted MaxSAT problem with iterative SAT-based methods, based on the Polynomial Watchdog (PW) CNF encoding for Pseudo-Boolean (PB) constraints. The watchdog of the PW encoding indicates whether the bound of the PB constraint holds. In our approach, we lift this static watchdog concept to a dynamic one, allowing an incremental convergence to the optimal result. Consequently, we formulate and implement a SAT-based algorithm for our new Dynamic Polynomial Watchdog (DPW) encoding which can be applied for solving the MaxSAT problem. Furthermore, we introduce three fundamental optimizations of the PW encoding, also suited for the original version, leading to a significantly smaller encoding size. Our experimental results show that our encoding and algorithm are competitive with state-of-the-art encodings as utilized in QMaxSAT (3rd place in the last MaxSAT Evaluation 2017). Our encoding dominates two of the QMaxSAT encodings, and at the same time is able to solve unique instances. We integrated our new encoding into QMaxSAT and adapted the heuristic to choose between the only remaining encoding of QMaxSAT and our approach. This combined version solves 19 (4%) more instances in overall 30% less run time on the benchmark set of the MaxSAT Evaluation 2017. Moreover, for the instances solved by both solvers our encoding is 2X faster than all employed encodings of QMaxSAT used in the evaluation. |
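For contrast with the watchdog-based construction, a compact textbook building block for cardinality constraints, the Sinz-style sequential counter for at-most-k (shown as background; this is not the paper's DPW encoding):

```python
def at_most_k_seq(xs, k, top):
    """CNF for sum(xs) <= k via a sequential counter (Sinz 2005 style).
    xs: DIMACS variable numbers; top: first free auxiliary variable.
    Returns (clauses, next_free_variable). Assumes k >= 1."""
    n = len(xs)
    clauses = []
    # s[i][j] is true when at least j+1 of x[0..i] are true
    s = [[top + i * k + j for j in range(k)] for i in range(n)]
    for i, xi in enumerate(xs):
        clauses.append([-xi, s[i][0]])                  # xi implies count >= 1
        if i > 0:
            for j in range(k):                          # counts are monotone in i
                clauses.append([-s[i - 1][j], s[i][j]])
            for j in range(1, k):                       # xi bumps the count by one
                clauses.append([-xi, -s[i - 1][j - 1], s[i][j]])
            clauses.append([-xi, -s[i - 1][k - 1]])     # forbid reaching k+1
    return clauses, top + n * k

# Example: x1 + x2 + x3 <= 2, with auxiliary variables starting at 4.
clauses, next_free = at_most_k_seq([1, 2, 3], 2, 4)
```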
15:00 | ABSTRACT. We explore the relationships between two closely related optimization problems: MaxSAT and Optimization Modulo Bit-Vectors (OBV). Given a bit-vector or a propositional formula F and a target bit-vector T, Unweighted Partial MaxSAT maximizes the number of satisfied bits in T, while OBV maximizes the value of T. We propose a new OBV-based Unweighted Partial MaxSAT algorithm. Our resulting solver, Mrs. Beaver, outscores the state-of-the-art solvers when run with the settings of the Incomplete-60-Second-Timeout Track of MaxSAT Evaluation 2017. Mrs. Beaver is the first MaxSAT algorithm designed to be incremental in the following sense: it can be re-used across multiple invocations with different hard assumptions and target bit-vectors. We provide experimental evidence showing that enabling incrementality in MaxSAT significantly improves the performance of a MaxSAT-based Boolean Multilevel Optimization (BMO) algorithm when solving a new, critical industrial BMO application: cleaning-up weak design rule violations during the Physical Design stage of Computer-Aided-Design. |
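A two-line illustration of the difference between the two objectives (our example): with target bits T = t_2 t_1 t_0,

```latex
\[
\text{MaxSAT:}\ \max \sum_i t_i
\qquad\text{vs.}\qquad
\text{OBV:}\ \max \sum_i 2^{i}\, t_i ,
\]
```

so if the formula admits exactly the outcomes T = 011 and T = 100, Unweighted Partial MaxSAT prefers 011 (two satisfied bits) while OBV prefers 100 (value 4 > 3).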
15:00 | SPEAKER: Maxime Audinot ABSTRACT. Attack trees are a well-recognized formalism for security modeling and analysis, but in this work we tackle a problem that has not yet been addressed by the security or formal methods community, namely guided design of attack trees. The objective of the framework presented in this paper is to support a security expert in the process of designing a pertinent attack tree for a given system. In contrast to most existing approaches for attack trees, our framework contains an explicit model of the real system to be analyzed, formalized as a transition system that may contain quantitative information. The leaves of our attack trees are labeled with reachability goals in the transition system and the attack tree semantics is expressed in terms of traces of the system. The main novelty of the proposed framework is that we start with an attack tree which is not fully refined and by exhibiting paths in the system that are optimal with respect to the quantitative information, we are able to suggest to the security expert which parts of the tree contribute to optimal attacks and should therefore be developed further. Such useful parts of the tree are determined by solving a satisfiability problem in propositional logic. |
15:00 | SPEAKER: Giovanni de Felice ABSTRACT. We introduce the fermionic ZW calculus, a string-diagrammatic language for fermionic quantum computing (FQC). After defining a fermionic circuit model, we present the basic components of the calculus, together with their interpretation, and show how the main physical gates of interest in FQC can be represented in the language. We then list our axioms, and derive some additional equations. We prove that the axioms provide a complete equational axiomatisation of the monoidal category whose objects are quantum systems of finitely many local fermionic modes, with operations that preserve or reverse the parity (number of particles mod 2) of states, and the tensor product, corresponding to the composition of two systems, as monoidal product. We achieve this through a procedure that rewrites any diagram in a normal form. We conclude by showing, as an example, how the statistics of a fermionic Mach-Zehnder interferometer can be calculated in the diagrammatic language. |
Short talks by attendees.
The 5-minute talk schedule is available here.
This is a fun session in which you can describe work in progress, crazy-sounding ideas, interesting questions and challenges, research proposals, or anything else within reason! You can use 2-3 slides, or you can just speak without slides.
16:00 | SPEAKER: Alexander Knüppel ABSTRACT. As formal verification of software systems is a complex task comprising many algorithms and heuristics, modern theorem provers offer numerous parameters that are to be selected by a user to control how a piece of software is verified. Evidently, the number of parameters even increases with each new release. One challenge is that default parameters are often insufficient to close proofs automatically and are not optimal in terms of verification effort. The verification phase becomes hardly accessible to non-experts, who typically must follow a time-consuming trial-and-error strategy to choose the right parameters for even trivial pieces of software. To aid users of deductive verification, we apply machine learning techniques to empirically investigate which parameters and combinations thereof impair or improve provability and verification effort. We exemplify our procedure on the deductive verification system KeY 2.6.1 and formulate 38 hypotheses of which only two have been invalidated. We identified parameters that portray a trade-off between high provability and low verification effort, making it possible to prioritize either direction when selecting parameters. Our insights give tool builders a better understanding of their control parameters and constitute a stepping stone towards automated deductive verification and better applicability of verification tools for non-experts. |
16:30 | SPEAKER: Mariano Moscato ABSTRACT. Advances in theorem proving have enabled the emergence of a variety of formal developments that, over the years, have resulted in large corpora of formalizations. For example, the NASA PVS Library is a collection of 55 formal developments written in the Prototype Verification System (PVS) over a period of almost 30 years and containing more than 28,000 proofs. Unfortunately, the simple accumulation of formal developments does not guarantee their reusability. In fact, in formal systems with very expressive specification languages, it is often the case that a particular conceptual object is defined in different ways. This paper presents a technique to establish sound connections between formal definitions. Such connections support the possibility of (partial) borrowing of proved results from one formal description into another, improving the reusability of formal developments. The technique is described using concepts from the field of universal algebra and algebraic specification. The technique is illustrated with concrete examples taken from formalizations available in the NASA PVS Library. |
17:00 | SPEAKER: Jinxu Zhao ABSTRACT. Modern functional programming languages such as Haskell support sophisticated forms of type-inference, even in the presence of higher-order polymorphism. Central to such advanced forms of type-inference is an algorithm for polymorphic subtyping. This paper formalizes an algorithmic specification for polymorphic subtyping in the Abella theorem prover. The algorithmic specification is shown to be decidable, and sound and complete with respect to Odersky and Läufer's well-known declarative formulation of polymorphic subtyping. While the meta-theoretical results are not new, as far as we know our work is the first to mechanically formalize them. Moreover, our algorithm differs from those currently in the literature by using a novel approach based on worklist judgements. Worklist judgements simplify the propagation of information required by the unification process during subtyping. Furthermore they enable a simple formulation of the meta-theoretical properties, which can be easily encoded in theorem provers. |
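For orientation, the two quantifier rules at the heart of the declarative system, paraphrased from the standard Odersky-Läufer presentation rather than from the paper:

```latex
\[
\frac{\Gamma \vdash [\tau/a]A \le B}{\Gamma \vdash \forall a.\,A \le B}\ (\tau\ \text{a monotype})
\qquad\qquad
\frac{\Gamma,\, a \vdash A \le B}{\Gamma \vdash A \le \forall a.\,B}\ (a \notin \mathrm{fv}(A))
\]
```

The algorithmic difficulty is guessing the monotype τ in the left rule; worklist judgements let this unification-style search be interleaved with the remainder of the derivation.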
17:30 | SPEAKER: Christine Rizkallah ABSTRACT. Establishing that two programs are contextually equivalent is hard, yet essential for reasoning about semantics preserving program transformations such as compiler optimizations. We adapt Lassen's normal form bisimulations technique to establish the soundness of equational theories for both an untyped call-by-value lambda calculus and a variant of Levy's call-by-push-value language. We demonstrate that our equational theory significantly simplifies the verification of optimizations. |
16:00 | SPEAKER: Salomon Sickert ABSTRACT. We present a unified translation of LTL formulas into deterministic Rabin automata, limit-deterministic Büchi automata, and nondeterministic Büchi automata. The translations yield automata of asymptotically optimal size (double or single exponential, respectively). All three translations are derived from one single Master Theorem of purely logical nature. The Master Theorem decomposes the language of a formula into a positive boolean combination of languages that can be translated into ω-automata by elementary means. In particular, the breakpoint, Safra, and ranking constructions used in other translations are not needed. |
16:20 | SPEAKER: Joel Allred ABSTRACT. Complementation of Büchi automata is well known for being complex, as Büchi automata in general are nondeterministic. In the worst case, a state-space growth of $O((0.76n)^n)$ cannot be avoided. Experimental results suggest that complementation algorithms perform better on average when they are structurally simple. In this paper, we present a simple algorithm for complementing Büchi automata, operating directly on subsets of states, structured into state-set tuples (similar to slices), and producing a deterministic automaton. The second step in the construction is then a complementation procedure that resembles the straightforward complementation algorithm for deterministic Büchi automata, the latter algorithm actually being a special case of our construction. Finally, we prove our construction to be optimal, i.e. having an upper bound in $O((0.76n)^n)$, and furthermore calculate the $0.76$ factor in a novel exact way. |
16:40 | ABSTRACT. This paper studies the complexity of languages of finite words using automata theory. To go beyond the class of regular languages, we consider infinite automata and the notion of state complexity defined by Karp. We look at alternating automata as introduced by Chandra, Kozen and Stockmeyer: such machines run independent computations on the word and gather their answers through boolean combinations. We devise a lower bound technique relying on boundedly generated lattices of languages, and give two applications of this technique. The first is a hierarchy theorem, stating that there are languages of arbitrarily high polynomial alternating state complexity, and the second is a linear lower bound on the alternating state complexity of the prime numbers written in binary. This second result strengthens a result of Hartmanis and Shank from 1968, which implies an exponentially worse lower bound for the same model. |
17:00 | SPEAKER: Yariv Shaulian ABSTRACT. Computation Tree Logic (CTL) is widely used in formal verification; however, unlike linear temporal logic (LTL), its connection to automata over words and trees is not yet fully understood. Moreover, the long-sought connection between LTL and CTL is still missing; it is not known whether their common fragment is decidable, and there are very limited necessary conditions and sufficient conditions for checking whether an LTL formula is definable in CTL. We provide sufficient conditions and necessary conditions for LTL formulas and omega-regular languages to be expressible in CTL. The conditions are automaton-based; we first tighten the automaton characterization of CTL to the class of Hesitant Alternating Linear Tree Automata (HLT), and then derive the conditions by relating the cycles of a word automaton for a given omega-regular language to the cycles of a potentially equivalent HLT. The new conditions allow us to simplify proofs of known results on languages that are, or are not, definable in CTL, as well as to prove new results. Among other things, they allow us to refute a conjecture by Clarke and Draghicescu from 1988, regarding a condition for a CTL* formula to be expressible in CTL. |
17:20 | ABSTRACT. We introduce a flexible class of well-quasi-orderings (WQOs) on words that generalizes the ordering of (not necessarily contiguous) subwords. Each such WQO induces a class of piecewise testable languages (PTLs) as Boolean combinations of upward closed sets. In this way, a range of regular language classes arises as PTLs. Moreover, each of the WQOs guarantees regularity of all downward closed sets. We consider two problems. First, we study which (perhaps non-regular) language classes permit a decision procedure to decide whether two given languages are separable by a PTL with respect to a given WQO. Second, we want to effectively compute downward closures with respect to these WQOs. Our first main result is that for each of the WQOs, under mild assumptions, both problems reduce to the simultaneous unboundedness problem (SUP) and are thus solvable for many powerful system classes. In the second main result, we apply the framework to show decidability of separability of regular languages by $\mathcal{B}\Sigma_1[<, \mathsf{mod}]$, a fragment of first-order logic with modular predicates. |
17:40 | SPEAKER: Vrunda Dave ABSTRACT. Functional MSO transductions, deterministic two-way transducers, as well as streaming string transducers are all equivalent models for regular functions. In this paper, we show that every regular function, either on finite words or on infinite words, captured by a deterministic two-way transducer, can be described with a regular transducer expression (RTE). For infinite words, the transducer uses Muller acceptance and omega-regular look-ahead. RTEs are constructed from constant functions using the combinators if-then-else (deterministic choice), Hadamard product, and unambiguous versions of the Cauchy product, the 2-chained Kleene-iteration and the 2-chained omega-iteration. Our proof works for transformations of both finite and infinite words, extending the result on finite words of Alur et al. in LICS'14. In order to construct an RTE associated with a deterministic two-way Muller transducer with look-ahead, we introduce the notion of transition monoid for such two-way transducers where the look-ahead is captured by some backward deterministic Büchi automaton. Then, we use an unambiguous version of Imre Simon's famous forest factorization theorem in order to derive a "good" (omega-)regular expression for the domain of the two-way transducer. "Good" expressions are unambiguous and Kleene-plus as well as omega-iterations are only used on subexpressions corresponding to idempotent elements of the transition monoid. The combinator expressions are finally constructed by structural induction on the "good" (omega-)regular expression describing the domain of the transducer. |
16:00 | SPEAKER: Vladimir Zamdzhiev ABSTRACT. Linear/non-linear (LNL) models, as described by Benton, soundly model a LNL term calculus and LNL logic closely related to intuitionistic linear logic. Every such model induces a canonical enrichment that we show soundly models a LNL lambda calculus for string diagrams, introduced by Rios and Selinger (with primary application in quantum computing). Our abstract treatment of this language leads to simpler concrete models compared to those presented so far. We also extend the language with general recursion and prove soundness. Finally, we present an adequacy result for the diagram-free fragment of the language which corresponds to a modified version of Benton and Wadler's adjoint calculus with recursion. |
16:20 | SPEAKER: Giorgio Bacci ABSTRACT. Markov processes are fundamental models of probabilistic transition systems and are the underlying semantics of probabilistic programs. We give an algebraic axiomatization of Markov processes using the framework of quantitative equational reasoning introduced in LICS2016. We present the theory in a structured way using work of Hyland et al. on combining monads. We take the interpolative barycentric algebras of LICS16, which capture the Kantorovich metric, and combine them with a theory of contractive operators to give the required axiomatization of Markov processes both for discrete and continuous state spaces. This work, apart from its intrinsic interest, shows how one can extend the general notion of combining effects to the quantitative setting. |
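For reference, the metric in question, in its dual (Kantorovich-Rubinstein) form: for probability distributions μ, ν over a metric space (X, d),

```latex
\[
d_K(\mu,\nu) \;=\; \sup\Big\{ \int_X f\, d\mu \;-\; \int_X f\, d\nu \;\Big|\; f : X \to \mathbb{R}\ \text{1-Lipschitz} \Big\}.
\]
```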
16:40 | SPEAKER: Radu Mardare ABSTRACT. The ordinary untyped lambda-calculus has a set-theoretic model proposed in two related forms by Scott and Plotkin in the 1970s. Recently Scott saw how to extend such $\lambda$-calculus models using random variables in a standard way. However, to do reasoning and to add further features, it is better to interpret the construction in a higher-order Boolean-valued model theory using the standard measure algebra. In this paper we develop the semantics of an extended stochastic lambda-calculus suitable for a simple probabilistic programming language, and we exhibit a number of key equations satisfied by the terms of our example language. The terms are interpreted using a continuation-style semantics along with an additional argument, an infinite sequence of coin tosses which serves as a source of randomness. The construction of the model requires a subtle measure-theoretic analysis of the space of coin-tossing sequences. We also introduce a fixed-point operator as a new syntactic construct, as beta-reduction turns out not to be sound for all terms in our semantics. Finally, we develop a new notion of equality between terms valued by elements of the measure algebra, allowing one to reason about terms that may not be equal almost everywhere. This we hope provides a new framework for reasoning about probabilistic programs and their properties of higher type. |
17:00 | SPEAKER: Filippo Bonchi ABSTRACT. Abstract interpretation is a method to automatically find invariants of programs or pieces of code whose semantics is given via least fixed-points. Up-to techniques have been introduced as enhancements of coinduction, an abstract principle to prove properties expressed as greatest fixed-points. While abstract interpretation is always sound by definition, the soundness of up-to techniques needs some ingenuity to be proven. For completeness, the setting is switched: up-to techniques are always complete, while abstract domains are not. In this work we show that, under reasonable assumptions, there is an evident connection between sound up-to techniques and complete abstract domains. |
17:20 | ABSTRACT. Infinite types and formulas are known to have curious and unsound behaviors. For instance, they allow typing Ω, the auto-application (λx.xx) applied to itself, and thus do not ensure any form of normalization/productivity. Moreover, in most infinitary frameworks, it is not difficult to define a type R that can be assigned to every λ-term. However, these observations do not say much about what coinductive (i.e. infinitary) type grammars are able to provide: it is for instance very difficult to know what types (besides R) can be assigned to a given term in this setting. We begin with a discussion on the expressivity of different forms of infinite types. Then, using the resource-awareness of sequential intersection types (system S) and tracking, we prove that infinite types are able to characterize the order (arity) of every λ-term and that, in the infinitary extension of the relational model, every term has a “meaning”, i.e. a non-empty denotation. From the technical point of view, we must deal with the total lack of productivity guarantees for typable terms: we do so by importing methods inspired by first-order model theory. |
17:40 | ABSTRACT. We study the notion of observational equivalence in the call-by-name probabilistic lambda-calculus, where two terms are said to be observationally equivalent if, under any context, their head reductions converge with the same probability. Our goal is to generalise the separation theorem to this probabilistic setting. To do so we define probabilistic Böhm trees and probabilistic Nakajima trees, and we mix the well-known Böhm-out technique with some new techniques to manipulate and separate probability distributions. |
16:00 | SPEAKER: Jan Elffers ABSTRACT. We study cdcl-cuttingplanes, Open-WBO, and Sat4j, three successful solvers from the Pseudo-Boolean Competition 2016, and evaluate them by performing experiments on crafted benchmarks designed to be trivial for the cutting planes (CP) proof system underlying pseudo-Boolean (PB) proof search, but yet potentially tricky for PB solvers. Our results demonstrate severe shortcomings in state-of-the-art PB solving techniques. Despite the fact that our benchmarks have linear-size tree-like CP proofs, the solvers often perform quite badly even for very small instances. Our analysis is that solvers need to explore stronger methods of pseudo-Boolean reasoning within cutting planes. We make an empirical observation from the competition data that many of the easy crafted instances are also infeasible over the rational numbers, or have small strong backdoors to PB instances without rational solutions. This raises the intriguing question whether the existence of such backdoors can be correlated with easiness/hardness. However, for some of our constructed benchmark families even rationally infeasible instances are completely beyond reach. This indicates that PB solvers need to get better not only at Boolean reasoning but even at linear programming. Finally, we compare CP-based solvers with CDCL and MIP solvers. For those of our benchmarks where the natural CNF encodings admit efficient resolution proofs, we see that the CDCL-based solver Open-WBO is orders of magnitude faster than the CP-based solvers cdcl-cuttingplanes and Sat4j (though it seems very sensitive to the ordering of the input). And the MIP solver Gurobi beats all of these solvers across the board. These experimental results point to several crucial challenges in the quest for more efficient pseudo-Boolean solvers, and we also believe that a further study of our benchmarks could shed important light on the potential and limitations of current state-of-the-art PB solving. |
16:30 | SPEAKER: Jia Liang ABSTRACT. Restarts are a critically important heuristic in most modern conflict-driven clause-learning (CDCL) SAT solvers. The precise reason as to why and how restarts enable CDCL solvers to scale efficiently remains obscure. In this paper we address this question, and provide some answers that enabled us to design a new effective machine learning-based restart policy. Specifically, we provide evidence that restarts improve the quality of learnt clauses as measured by one of the best-known clause quality metrics, namely, literal block distance (LBD). More precisely, we show that more frequent restarts decrease the LBD of learnt clauses, which in turn improves solver performance. We also note that too many restarts can be harmful because of the computational overhead of rebuilding the search tree from scratch too frequently. With this tradeoff in mind, between learning better clauses and the computational overhead of rebuilding the search tree, we introduce a new machine learning-based restart policy that predicts the quality of the next learnt clause based on the history of previously learnt clauses. The restart policy erases the solver’s search tree during its run, if it predicts that the quality of the next learnt clause is below some dynamic threshold that is determined by the solver’s history on the given input. Our machine learning-based restart policy is based on two observations gleaned from our study of LBDs of learned clauses. First, we discover that high LBD percentiles can be approximated with z-scores of the normal distribution. Second, we find that LBDs, viewed as a sequence, are correlated and hence the LBDs of past learned clauses can be used to predict the LBD of future ones. With these observations in place, and techniques to exploit them, our new restart policy is shown to be effective over a large benchmark set from the SAT Competitions 2014 to 2017. |
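A minimal sketch of the idea (illustrative window size, cutoff, and predictor of our own choosing; this does not reproduce the authors' trained model or their solver integration):

```python
import math
from collections import deque

class LBDRestartPolicy:
    """Sketch of an LBD-history-based restart policy: track recent learnt-clause
    LBDs and restart when the newest clause is much worse than the recent
    average, measured as a z-score. Parameters are illustrative guesses."""
    def __init__(self, window=50, z_cutoff=1.0):
        self.recent = deque(maxlen=window)
        self.z_cutoff = z_cutoff

    def on_learnt_clause(self, lbd):
        self.recent.append(lbd)

    def should_restart(self):
        if len(self.recent) < self.recent.maxlen:
            return False                       # not enough history yet
        mean = sum(self.recent) / len(self.recent)
        var = sum((x - mean) ** 2 for x in self.recent) / len(self.recent)
        std = math.sqrt(var) or 1.0            # guard against zero variance
        # A run of clauses with much higher LBD than the window average
        # signals that the current search region yields poor clauses.
        z = (self.recent[-1] - mean) / std
        return z > self.z_cutoff
```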
17:00 | SPEAKER: Vadim Ryvchin ABSTRACT. Non-Chronological Backtracking (NCB) has been implemented in every modern CDCL SAT solver since the original CDCL solver GRASP. NCB’s importance has never been questioned. This paper argues that NCB is not always helpful. We show how one can implement the alternative to NCB, Chronological Backtracking (CB), in a modern SAT solver. We demonstrate that CB improves the performance of the winner of the latest SAT Competition, Maple-LCM-Dist, and the winner of the latest MaxSAT Evaluation, Open-WBO. |
17:30 | SPEAKER: Sima Jamali ABSTRACT. There are many reasons to think that SAT solvers should be able to exploit formula structure, but no standard techniques in modern CDCL solvers make explicit use of structure. We describe modifications to modern decision and clause-deletion heuristics that exploit formula structure by using variable centrality. We show that these improve the performance of Maple LCM Dist, the winning solver from the Main Track of the 2017 SAT Solver competition. In particular, using centrality in clause deletion results in solving 9 more formulas from the 2017 Main Track. We also look at a number of measures of solver performance and learned clause quality, to see how the changes affect solver execution. |
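One plausible way to make "centrality" concrete, a guess at an instantiation using networkx on the variable-incidence graph (the paper's exact graph construction and centrality measure may differ):

```python
import networkx as nx

def variable_centrality(clauses):
    """Betweenness centrality of variables in the variable-incidence graph,
    where two variables are adjacent iff they co-occur in some clause.
    Clauses are lists of nonzero DIMACS literals."""
    g = nx.Graph()
    for c in clauses:
        vs = [abs(l) for l in c]
        g.add_nodes_from(vs)
        g.add_edges_from((u, v) for i, u in enumerate(vs) for v in vs[i + 1:])
    return nx.betweenness_centrality(g)

def clause_score(clause, centrality):
    # Prefer keeping learnt clauses that mention central variables,
    # e.g. as a tie-breaker alongside LBD during clause deletion.
    return sum(centrality[abs(l)] for l in clause) / len(clause)

# Example: score clauses of a tiny formula by average variable centrality.
cnf = [[1, 2], [-2, 3], [-3, 4], [2, -4]]
cent = variable_centrality(cnf)
ranked = sorted(cnf, key=lambda c: clause_score(c, cent), reverse=True)
```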
FLoC reception at Ashmolean Museum. Drinks and canapés available from 7pm (pre-booking via FLoC registration system required; guests welcome).