09:00 | Resource semantics: substructural logic as a modelling technology (keynote) ABSTRACT. The development of the logic of bunched implications, BI, together with its somewhat characteristic resource semantics, led to the formulation of Separation Logic, which forms the basis of the Infer program analyser deployed in Facebook's code production. This rather successful story sits within a broader, quite systematic logical context. I will review the (family of) logics, including process logics, that are supported by resource semantics, explaining their more-or-less uniform meta-theoretic basis and illustrating their uses in a range of modelling applications, including access control, systems security, and workflow simulation. |
09:50 | Automating the Flow Framework SPEAKER: Siddharth Krishna ABSTRACT. In previous work we presented the flow framework for verifying complex concurrent data structures using separation logic (SL). The framework enables the decomposition of the overall correctness proof of the data structure into (1) a proof of an abstract template algorithm that captures the underlying synchronization protocol between data structure operations, and (2) a refinement proof of this template to the concrete data structure implementation. In this paper, we investigate the automation of proofs expressed in this framework. We use the give-up template for concurrent dictionaries, and an implementation of this template based on B+ trees as a case study. We detail the challenges in automating the framework and present a revised version geared towards automation in a standard SL-based symbolic execution engine. We have implemented such an engine in the deductive verification tool GRASShopper and used it to mechanize the proof of our case study. Our experience suggests that the flow framework can express simple correctness proofs of complex concurrent data structures and can be automated using off-the-shelf verification tools, requiring only minor extensions to standard reasoning engines for SL. |
09:00 | Proving a concurrent program correct by demonstrating it does nothing SPEAKER: Shaz Qadeer ABSTRACT. CIVL is a verifier for concurrent programs implemented as a conservative extension to the Boogie verification system. CIVL allows the proof of correctness of a concurrent program —shared-memory or message-passing— to be described as a sequence of program layers. The safety of a layer implies the safety of the layer just below, thus allowing the safety of the highest layer to transitively imply the safety of the lowest. The central theme in CIVL is reasoning about atomic actions. Different layers of a program describe its behavior using atomic actions, higher layers with coarse-grained and lower layers with fine-grained atomic actions. A proof of correctness amounts to a demonstration that the highest layer is SKIP, the atomic action that does nothing. The formal and automated verification justifying the connection between adjacent layers combines several techniques—linear variables, reduction based on movers, location invariants, and procedure-local abstraction. CIVL is available in the master branch of Boogie together with more than fifty micro-benchmarks. CIVL has also been used to refine a realistic concurrent garbage-collection algorithm from a simple high-level specification down to a highly-concurrent implementation described in terms of individual memory accesses. |
09:00 | Horn Clauses and Beyond for Relational and Temporal Program Verification ABSTRACT. In this talk, we present our recent and ongoing work on constraint solving for verification of higher-order functional programs, where we address two important classes of specifications: (1) relational specifications and (2) dependent temporal specifications. These classes impose a new challenge: validity checking of first-order formulas with least and greatest fixpoints respectively for inductive and coinductive predicates, which generalizes existing variants of constrained Horn clause (CHC) solving. The former class of relational specifications includes functional equivalence, associativity, commutativity, distributivity, monotonicity, idempotency, and non-interference, whose verification often boils down to inferring mutual invariants among inputs and outputs of multiple function calls. To this end, we present a novel CHC solving method based on inductive theorem proving: the method reduces CHC solving to validity checking of first-order formulas with least fixpoints for inductive predicates, which are then checked by induction on the derivation of the predicates. The method thus enables relational verification by expressing and checking mutual invariants as induction hypotheses. The latter class of dependent temporal specifications is used to constrain (possibly infinite) event sequences generated from the target program. We express temporal specifications as first-order formulas over finite and infinite strings that encode event sequences. The use of first-order formulas allows us to express temporal specifications that depend on program values, so that we can specify input-dependent temporal behavior of the target program. Furthermore, we use least and greatest fixpoints to respectively model finite and infinite event sequences generated from the target program. To solve such fixpoint constraints, we present a novel deductive system consisting of rules for soundly eliminating fixpoints via invariants and well-founded relations. |
10:00 | SPEAKER: Adrián Palacios ABSTRACT. Dynamically typed languages, like Erlang, allow developers to quickly write programs without explicitly providing any type information on expressions or function definitions. However, this feature makes dynamic languages less reliable, because many runtime errors would be easily caught in statically typed languages. Besides, adding concurrency to the mix leads to a programming language where some runtime errors only appear for a particular execution of the program. In this paper, we present our preliminary work on a tool that, by using the well-known techniques of metaprogramming and symbolic execution, can be used to perform bounded verification of Erlang programs. In particular, by using Constraint Logic Programming, we develop an interpreter that, given an Erlang program and a symbolic input, returns answer constraints that represent sets of concrete data for which the Erlang program will incur a runtime error. |
09:00 | Pathway Logic: Symbolic executable models for reasoning about cellular processes |
09:00 | Algorithms for Weighted and Probabilistic Context-Free Grammars and Convex Optimization |
10:00 | Learning Graph Weighted Models on Pictures SPEAKER: Philip Amortila ABSTRACT. Graph Weighted Models (GWMs) have recently been proposed as a natural generalization of weighted automata over strings and trees to arbitrary families of labeled graphs (and hypergraphs). A GWM generically associates a labeled graph with a tensor network and computes a value by successive contractions directed by its edges. In this paper, we consider the problem of learning GWMs defined over the graph family of pictures (or 2-dimensional words). As a proof of concept, we consider the simple Bars and Stripes picture language and provide an experimental study investigating whether this language can be learned in the form of a GWM from positive and negative examples using gradient-based methods. Our results suggest that this is indeed possible and that investigating the use of gradient-based methods to learn picture series and functions computed by GWMs over other families of graphs could be a fruitful direction. |
09:00 | Conditioning and quantiles in Markovian models ABSTRACT. The classical multi-objective analysis of Markov decision processes (MDP) and other stochastic models with nondeterminism seeks policies to resolve the nondeterministic choices that achieve Pareto-optimality or maximize/minimize a linear goal function, while satisfying a list of probability or expectation constraints. Other approaches to reason about the trade-off between multiple cost and reward functions rely on conditioning or quantiles. The former, for instance, can serve to synthesize policies that maximize the expected utility in energy-critical situations or to minimize the cost of repair mechanisms in exceptional error scenarios. Quantiles can, e.g., be used to identify policies with a minimal cost bound for meeting a utility constraint with sufficiently high probability. While algorithms for conditional probabilities in MDPs can rely on a fairly simple model transformation, reasoning about conditional expected accumulated costs and quantiles is computationally more difficult. The talk presents an overview of recent results in these directions. |
10:00 | SPEAKER: Arnd Hartmanns ABSTRACT. We summarize our recent TACAS paper, where we provide an efficient algorithm for multi-objective model-checking problems on Markov decision processes (MDPs) with multiple cost structures. The key problem at hand is to check whether there exists a scheduler for a given MDP such that all objectives over cost vectors are fulfilled. Reachability and expected cost objectives are covered and can be mixed. Empirical evaluation shows the algorithm’s scalability. We discuss the need for output beyond Pareto curves and exploit the available information from the algorithm to support decision makers. |
09:00 | Computing Personalised Treatments through In Silico Clinical Trials. A Case Study on Downregulation in Assisted Reproduction. SPEAKER: Toni Mancini ABSTRACT. ISCTs, i.e., clinical experimental campaigns carried out by means of computer simulations, hold the promise of decreasing time and cost for the safety and efficacy assessment of pharmacological treatments, reducing the need for animal and human testing, and enabling precision medicine. In this paper we present a case study aiming at quantifying, by means of a multi-arm ISCT supervised by intelligent search, the potential impact of precision medicine approaches on a real pharmacological treatment, namely the downregulation phase of a complex clinical protocol for assisted reproduction. |
09:22 | SPEAKER: Toni Mancini ABSTRACT. The operations of many critical infrastructures (e.g., airports) heavily depend on proper functioning of the radio communication network supporting operations. As a result, such a communication network is indeed a mission-critical communication network that needs adequate protection from external electromagnetic interferences. This is usually done through radiogoniometers. Basically, by using at least three suitably deployed radiogoniometers and a gateway gathering information from them, sources of electromagnetic emissions that are not supposed to be present in the monitored area can be localised. Typically, relay nodes are used to connect radiogoniometers to the gateway. As a result, some degree of fault-tolerance for the network of relay nodes is essential in order to offer reliable monitoring. On the other hand, deployment of relay nodes is typically quite expensive. As a result, we have two conflicting requirements: minimise costs while guaranteeing a given fault-tolerance. In this paper we address the problem of computing a deployment for relay nodes that minimises the relay node network cost while at the same time guaranteeing proper working of the network even when some of the relay nodes (up to a given maximum number) become faulty (fault-tolerance). We show that the above problem can be formulated as a MILP as well as a PB optimisation problem and present experimental results comparing the two approaches on realistic scenarios. |
09:44 | SPEAKER: Angelo Oddi ABSTRACT. This paper investigates the performance of a greedy randomized algorithm to optimize the realization of nearest-neighbor compliant quantum circuits. Current technological limitations (decoherence effect) impose that the overall duration (makespan) of the quantum circuit realization be minimized. One core contribution of this paper is a lexicographic two-key ranking function for quantum gate selection: the first key acts as a global closure metric to minimize the solution makespan; the second one is a local metric acting as "tie-breaker" for avoiding cycling. Our algorithm has been tested on a set of quantum circuit benchmark instances of increasing sizes available from the recent literature. We demonstrate that our heuristic approach outperforms the solutions obtained in previous research against the same benchmark, both from the CPU efficiency and from the solution quality standpoint. |
10:06 | Logic Programming approaches for routing fault-free and maximally-parallel Wavelength Routed Optical Networks on Chip (Application paper) SPEAKER: Marco Gavanelli ABSTRACT. One promising trend in digital system integration consists of boosting on-chip communication performance by means of silicon photonics, thus materializing the so-called Optical Networks-on-Chip (ONoCs). Among them, wavelength routing can be used to route a signal to destination by univocally associating a routing path to the wavelength of the optical carrier. Such wavelengths should be chosen so as to minimize interference among optical channels and to avoid routing faults. As a result, physical parameter selection of such networks requires the solution of complex constrained optimization problems. In previous work, published in the proceedings of the International Conference on Computer-Aided Design, we proposed and solved the problem of computing the maximum parallelism obtainable in the communication between any two end-points while avoiding misrouting of optical signals. The underlying technology, only quickly mentioned in that paper, is Answer Set Programming (ASP). In this work, we detail the ASP approach we used to solve this problem. Another important design issue is to select the wavelengths of optical carriers such that they are spread across the available spectrum, in order to reduce the likelihood that, due to imperfections in the manufacturing process, unintended routing faults arise. We show how to address this problem in Constraint Logic Programming on Finite Domains (CLP(FD)). This paper appears in Theory and Practice of Logic Programming, Volume 17, Issue 5-6 (33rd International Conference on Logic Programming). |
09:00 | Invited talk: Experiments in Universal Logical Reasoning --- How to utilise ATPs and SMT solvers for the exploration of axiom systems for category theory in free logic ABSTRACT. Classical higher-order logic, when utilized as a meta-logic in which various other (classical and non-classical) logics can be shallowly embedded, is suitable as a foundation for the development of a universal logical reasoning engine. Such an engine may be employed, as already envisioned by Leibniz, to support the rigorous formalisation and deep logical analysis of rational arguments on the computer. In my talk I will demonstrate, by way of example, how this approach can be utilised to implement "free logic" within the proof assistant Isabelle/HOL. SMT solvers and ATPs, which are integrated with Isabelle/HOL, can then be used in this framework to (indirectly) automate free logic reasoning. Free logic has many interesting applications, e.g. in natural language processing and as a logic of fiction. In mathematics, free logics are particularly suited to application domains, such as axiomatic category theory, where an adequate and elegant treatment of partiality and undefinedness is required. In my talk I will summarise the results of an experiment with the above free logic reasoner, in which a series of mutually equivalent axiom systems for category theory has been systematically explored. As a side-effect of this work a (minor) issue in a prominent category theory textbook has been revealed. The purpose of this work is not to claim any novel results in category theory, but to demonstrate an elegant way to automate reasoning in free logic, and to present illustrative experiments, which were (indirectly) supported by SMT solvers, ATPs and the model finder Nitpick. |
10:00 | SPEAKER: Daniel El Ouraoui ABSTRACT. Satisfiability modulo theories (SMT) solvers have throughout the years been able to cope with increasingly expressive formulas, from ground logics to full first-order logic modulo theories. Nevertheless, higher-order logic within SMT (HOSMT) is still little explored. In this preliminary report we discuss how to extend SMT solvers to natively support higher-order reasoning without compromising their performance on FO problems. We present a pragmatic extension to the cvc4 solver in which we generalize existing data structures and algorithms to HOSMT, thus leveraging the extensive research and implementation efforts dedicated to efficient FO solving. Our evaluation shows that the initial implementation does not add significant overhead to FO problems and its performance is on par with the encoding-based approach for HOSMT. We also discuss an alternative extension being implemented in veriT, in which new data structures and algorithms are being developed from scratch to best support HOSMT, thus avoiding the inherent difficulties of gracefully generalizing existing infrastructure not intended for higher-order reasoning. |
09:00 | Model checking and its applications |
09:45 | Computation and inference |
09:30 | SPEAKER: Mathias Fleury ABSTRACT. Based on our earlier formalization of conflict-driven clause learning (CDCL) in Isabelle/HOL, we refine the CDCL calculus to add a crucial optimization: two watched literals. We formalize the data structure and the invariants. Then we refine the calculus to obtain an executable SAT solver. Through a chain of refinements carried out using the Isabelle Refinement Framework, we target Imperative HOL and extract imperative Standard ML code. Although our solver is not competitive with the state of the art, it offers acceptable performance for some applications, and heuristics can be added to improve it further. |
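To make the two-watched-literals optimization mentioned in this abstract concrete, here is a minimal, illustrative Python sketch of watched-literal unit propagation. It is not the authors' Isabelle/HOL formalization or their extracted Standard ML solver; the clause representation (DIMACS-style integer literals, clauses assumed free of duplicate literals) and all names are assumptions made for the example.

```python
from collections import defaultdict

def unit_propagate(clauses, assignment):
    """Unit propagation with two watched literals per clause (illustrative).
    clauses: list of lists of DIMACS-style integer literals (no duplicates).
    assignment: dict mapping variable -> bool for already decided variables.
    Returns an extended assignment, or None if a conflict is found."""
    assignment = dict(assignment)
    watches = defaultdict(list)        # literal -> indices of clauses watching it
    watched = []                       # clause index -> its two watched literals
    queue = []                         # literals forced to be true

    def value(lit):
        v = assignment.get(abs(lit))
        return None if v is None else (v if lit > 0 else not v)

    for i, c in enumerate(clauses):
        if len(c) == 1:
            queue.append(c[0])
            watched.append((c[0], c[0]))
        else:
            watched.append((c[0], c[1]))
            watches[c[0]].append(i)
            watches[c[1]].append(i)

    while queue:
        lit = queue.pop()
        if value(lit) is False:
            return None                # conflicting forced literal
        assignment[abs(lit)] = lit > 0
        for i in list(watches[-lit]):  # only clauses watching the falsified literal
            c, (w1, w2) = clauses[i], watched[i]
            other = w1 if w2 == -lit else w2
            for l in c:                # try to move the watch to a non-false literal
                if l != other and l != -lit and value(l) is not False:
                    watches[-lit].remove(i)
                    watches[l].append(i)
                    watched[i] = (other, l)
                    break
            else:
                if value(other) is True:
                    continue           # clause already satisfied
                if value(other) is None:
                    queue.append(other)  # clause became unit: propagate
                else:
                    return None        # all literals false: conflict
    return assignment
```

The key invariant, which the paper formalizes in Isabelle/HOL, is that only clauses watching a newly falsified literal ever need to be visited during propagation.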
10:00 | SPEAKER: Andreas Lochbihler ABSTRACT. This paper presents the tool CodeLazy for Isabelle/HOL. It hooks into Isabelle’s code generator such that the generated code evaluates a user-specified set of type constructors lazily, even in target languages with eager evaluation. The lazy type must be algebraic, i.e., values must be built from constructors and a corresponding case operator decomposes them. Every datatype and codatatype is algebraic and thus eligible for lazification. Lazification is transparent to the user: definitions, theorems, and the reasoning in HOL need not be changed. Instead, CodeLazy transforms the code equations for functions on lazy types when code is generated. It thus makes code-generation-based Isabelle tools like evaluation and quickcheck available for codatatypes, where eager evaluation frequently causes non-termination. Moreover, as all transformations are checked by Isabelle’s kernel, they cannot introduce correctness problems. |
11:00 | SPEAKER: Koji Nakazawa ABSTRACT. We propose the strong wand and the Factor rule in a cyclic-proof system for the separation-logic entailment checking problem with general inductive predicates. The strong wand is similar to but stronger than the magic wand and is defined syntactically. The Factor rule, which uses the strong wand, is an inference rule for spatial factorization to expose an implicitly allocated cell in inductive predicates. By this rule, we can avoid getting stuck in the Unfold-Match-Remove strategy. We present a semi-decision algorithm for proof search in the cyclic-proof system with the Factor rule, together with experimental results from its prototype implementation. |
11:30 | SPEAKER: Daniel Mery ABSTRACT. Separation Logic (SL) is a logical formalism for reasoning about programs that use pointers to mutate data structures. SL has proven itself successful in the field of program verification over the past fifteen years as an assertion language to state properties about memory heaps using Hoare triples. Since the full logic is not recursively enumerable, most of the proof-systems and verification tools for SL focus on the decidable but rather restricted symbolic heaps fragment. Moreover, recent proof-systems that go beyond symbolic heaps allow either the full set of connectives, or the definition of arbitrary predicates, but not both. In this work, we present a labelled proof-system called GM SL that allows both the definition of arbitrary inductive predicates and the full set of SL connectives. |
12:00 | SPEAKER: Etienne Lozes ABSTRACT. We consider basic symbolic heaps, i.e. symbolic heaps without any inductive predicates, but within a memory model featuring permissions. We propose a complete proof system for this logic that is entirely independent of the permission model. This is ongoing work towards a complete proof system for symbolic heaps with lists, and more generally towards a proof theory of permissions in separation logics with recursive predicates and Boolean BI with abstract predicates. |
11:00 | SPEAKER: David Heath ABSTRACT. Recursion-free Constrained Horn Clauses (CHCs) are logic-programming problems that can model safety properties of programs with bounded iteration and recursion. In addition, many CHC solvers which handle recursive systems reduce their inputs to a series of recursion-free CHC systems that can each be solved efficiently. In this paper, we define a novel class of recursion-free systems, named Clause-Dependence Disjoint (CDD), that generalizes classes defined in previous work. The advantage of this class is that many CDD systems are smaller than systems which express the same constraints but are part of a different class. This advantage in size allows CDD systems to be solved more efficiently than their counterparts in other classes. Based on the class of CDD systems, we implemented a CHC solver named Shara. Shara solves arbitrary CHC systems by reducing the input to a series of CDD systems. Our evaluation indicates that Shara outperforms state-of-the-art implementations in many practical cases. |
11:30 | ABSTRACT. We present a recursive formulation of the Horn algorithm for deciding the satisfiability of propositional clauses. The usual presentations in imperative pseudo-code are informal and not suitable for simple proofs of its main properties. By defining the algorithm as a recursive function (computing a least fixed-point), we achieve: 1) a concise, yet rigorous, formalisation; 2) a clear form of visualising executions of the algorithm, step-by-step; 3) precise results, simple to state and with clean inductive proofs. |
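For readers unfamiliar with the Horn algorithm discussed above, the following is a small Python sketch of the classical least-fixpoint view of Horn-clause satisfiability. It is only an illustration of the idea, not the authors' recursive formalisation, and the clause representation is an assumption made for the example.

```python
def horn_sat(clauses):
    """Satisfiability of propositional Horn clauses via a least fixpoint.
    A clause is a pair (body, head): body is a set of atoms, head is an atom
    (definite clause; facts have an empty body) or None (goal clause)."""
    def step(atoms):
        derived = set(atoms)
        for body, head in clauses:
            if head is not None and body <= atoms:
                derived.add(head)
        return derived

    atoms = set()
    while True:                      # iterate forward chaining up to a fixpoint
        nxt = step(atoms)
        if nxt == atoms:
            break
        atoms = nxt
    # Satisfiable iff no goal clause has its entire body in the least model.
    return all(not body <= atoms for body, head in clauses if head is None)

# Example: {p, p -> q, (q and r) -> False} is satisfiable, since r is never derived.
print(horn_sat([(set(), "p"), ({"p"}, "q"), ({"q", "r"}, None)]))  # True
```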
12:00 | SPEAKER: Ekaterina Komendantskaya ABSTRACT. Coinduction occurs in two guises in Horn clause logic: in proofs of self-referencing properties and relations, and in proofs involving construction of (possibly irregular) infinite data. Both instances of coinductive reasoning appeared in the literature before, but a systematic analysis of these two kinds of proofs and of their relation was lacking. We propose a general proof-theoretic framework for handling both kinds of coinduction arising in Horn clause logic. To this aim, we propose a coinductive extension of Miller et al.'s framework of uniform proofs and prove its soundness relative to coinductive models of Horn clause logic. |
11:00 | SPEAKER: Max W. Haslbeck ABSTRACT. The LLL basis reduction algorithm was the first polynomial-time algorithm to compute a reduced basis of a given lattice, and hence also a short vector in the lattice. It thereby approximates an NP-hard problem where the approximation quality solely depends on the dimension of the lattice, but not the lattice itself. The algorithm has several applications in number theory, computer algebra and cryptography. In a recent paper, we presented the first formal soundness proof of the LLL algorithm. However, this proof did not include a formal statement of its complexity. Therefore, in this paper we provide two formal statements on the polynomial runtime. First, we formally prove a polynomial bound on the number of arithmetic operations. And second, we show that the numbers during the execution stay polynomial in size, so that each arithmetic operation can be performed in polynomial time. |
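As background for the abstract above, here is a compact, textbook-style Python sketch of LLL basis reduction using exact rational arithmetic. It is purely illustrative and unrelated to the paper's Isabelle/HOL formalization; for clarity it recomputes the Gram-Schmidt data at every step instead of updating it incrementally, so it is far less efficient than the analysed algorithm.

```python
from fractions import Fraction

def lll(basis, delta=Fraction(3, 4)):
    """Textbook LLL reduction of a list of integer row vectors (illustrative)."""
    b = [[Fraction(x) for x in row] for row in basis]
    n = len(b)

    def dot(u, v):
        return sum(ui * vi for ui, vi in zip(u, v))

    def gram_schmidt():
        bstar, mu = [], [[Fraction(0)] * n for _ in range(n)]
        for i in range(n):
            v = b[i][:]
            for j in range(i):
                mu[i][j] = dot(b[i], bstar[j]) / dot(bstar[j], bstar[j])
                v = [vi - mu[i][j] * wj for vi, wj in zip(v, bstar[j])]
            bstar.append(v)
        return bstar, mu

    k = 1
    while k < n:
        _, mu = gram_schmidt()
        for j in range(k - 1, -1, -1):          # size-reduce b_k against b_j
            q = round(mu[k][j])
            if q:
                b[k] = [x - q * y for x, y in zip(b[k], b[j])]
                _, mu = gram_schmidt()
        bstar, mu = gram_schmidt()
        # Lovász condition
        if dot(bstar[k], bstar[k]) >= (delta - mu[k][k - 1] ** 2) * dot(bstar[k - 1], bstar[k - 1]):
            k += 1
        else:
            b[k], b[k - 1] = b[k - 1], b[k]     # swap and step back
            k = max(k - 1, 1)
    return [[int(x) for x in row] for row in b]

# Example: reduce a small 3-dimensional lattice basis.
print(lll([[1, 1, 1], [-1, 0, 2], [3, 5, 6]]))
```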
11:30 | SPEAKER: Marco David ABSTRACT. Hilbert's tenth problem, posed in 1900 by David Hilbert, asks for a general algorithm to determine the solvability of any given Diophantine equation. In 1970, Yuri Matiyasevich proved the DPRM theorem which implies such an algorithm cannot exist. This paper will outline our attempt to formally state the DPRM theorem and verify Matiyasevich's proof using the proof assistant Isabelle/HOL. |
12:00 | ABSTRACT. Over 32 years, Isabelle has come a long way from a small experimental proof assistant to a versatile platform for proof document development. There has always been a challenge to keep up with the natural growth of applications, notably the Archive of Formal Proofs (AFP). Can we scale this technology further, towards really big libraries of formalized mathematics? Can the underlying Scala/JVM and Poly/ML platforms cope with the demands? Can we eventually do 10 times more and better? In this paper I will revisit these questions particularly from the perspective of: * Editing: Prover IDE infrastructure and front-ends, * Building: batch-mode tools and background services, * Browsing: HTML views and client-server applications. |
11:00 | SPEAKER: Steven Lindell ABSTRACT. In the presence of arithmetic, order-invariant definability in first-order logic captures constant parallel time, i.e. uniform AC⁰ [I]. The ordering is necessary to replicate the presentation of a structure on the input tape of a machine. What if that ordering was in fact a traversal of the structure? We show that an analogous notion of traversal-invariant definability characterizes deterministic logarithmic space (L) and that breadth-first traversals characterize nondeterministic logarithmic space (NL). The surprising feature of these characterizations is that we can work entirely within the framework of first-order logic without any extensions, other than the traversals themselves. |
11:15 | SPEAKER: Anupam Das ABSTRACT. We extend work of Lautemann, Schwentick and Stewart '96 on characterisations of the 'positive' polynomial-time predicates (posP, also called mP by Grigni and Sipser '92) to function classes. Our main result is a function algebra for the positive polynomial-time functions (posFP), obtained by imposing a simple uniformity condition on the bounded recursion operator in Cobham's characterisation of FP. |
11:30 | SPEAKER: Ján Pich ABSTRACT. We ask for feasibly constructive proofs of known circuit lower bounds for explicit functions on bit strings of length $n$. In 1995 Razborov showed that many such lower bounds can be proved in Cook's theory PV, a bounded arithmetic formalizing polynomial time reasoning. He formalized circuit lower bound statements for small $n$ of doubly logarithmic order. A more common formalization, considered in Krajíček's 1995 textbook, assumes $n$ only of logarithmic order. It is open whether PV proves known lower bounds in such succinct formalizations. We give such proofs in Jeřábek's theory of approximate counting $APC_1$, an extension of PV formalizing probabilistic polynomial time reasoning. Specifically, we prove in $APC_1$ lower bounds for the parity function and $AC^0$, for the mod $q$ counting function and $AC^0[p]$ (for some $n$ of intermediate order), and for the $k$-clique function and monotone circuits. We also formalize Razborov and Rudich's natural proof barrier. Further, we ask for feasibly constructible propositional proofs of circuit lower bounds. We discuss two ways to succinctly express circuit lower bounds by propositional formulas of polynomial size $n^{O(1)}$, or at least much smaller than the size $2^{O(n)}$ of the common formula based on the truth table of the function: one via feasible functions witnessing errors of circuits trying to compute some hard function, and one via the anticheckers of Lipton and Young 1994 or partial truth tables. Our $APC_1$ formalizations can be applied to derive a conditional upper bound on succinct propositional formulas obtained by the witnessing extracted from the $APC_1$ proofs. Namely, we show these formulas have short Extended Frege (EF) proofs from general circuit lower bounds expressed by the common "truth-table" formulas. We also show how to construct in quasipolynomial time propositional proofs of quasipolynomial size tautologies expressing $AC^0[p]$ quasipolynomial size lower bounds; these proofs are in Jeřábek's proof system WF. The last result is proved by formalizing a variant of Razborov's and Rudich's naturalization of Smolensky's proof for partial functions on the propositional level. |
11:45 | ABSTRACT. Working with a particular notion of computability over general, first-order structures, we argue that computability classes over certain locally finite structures should capture Turing machine complexity classes. We exhibit this phenomenon for some locally finite structures from algebra. |
12:00 | SPEAKER: Pedro Lopez-Garcia ABSTRACT. For some applications, standard resource analyses do not provide the information required. Such analyses estimate the total resource usage of a program (without executing it) as functions on input data sizes. However, some applications require knowing how such total resource usage is distributed over selected parts of a program. We propose a novel, general, and flexible framework for setting up cost equations/relations which can be instantiated for performing a wide range of resource usage analyses, including both static profiling and the inference of the standard notion of cost. We extend and generalize standard resource analysis techniques, so that the relations generated include additional Boolean control variables for switching on or off different terms in the relations, as required by the desired resource usage profile. We also instantiate our framework to perform static profiling of accumulated cost (also parameterized by input data sizes). Such information is much more useful to the software developer than the standard notion of cost: it identifies the parts of the program that have the greatest impact on the total program cost, and which therefore should be optimized first. We also report on an implementation of our framework within the CiaoPP system, and its instantiation for accumulated cost, and provide some experimental results. In addition to generality, our new method brings important advantages over our previous approach based on a program transformation, including support for non-deterministic programs, better and easier integration in the compiler, and higher efficiency. |
12:15 | ABSTRACT. In the mid-90's Neil Immerman and José Medina initiated the search for syntactic tools to prove NP-completeness. They conjectured that if a problem defined by the conjunction of an Existential Second Order sentence and a First Order formula is NP-complete, then the Existential Second Order formula defines an NP-complete problem. This property was called superfluity of First Order Logic with respect to NP. A few years later it was proved by Nerio Borges and Blai Bonet that superfluity holds for the universal fragment of First Order Logic, and not only with respect to NP but for a wide range of complexity classes. Our work extends that result to the Second Level of the Polynomial-Time Hierarchy. |
11:00 | Counterfactual resimulation for causal analysis of rule-based models |
11:45 | Bio-curation and rule-based modelling of protein interaction networks in KAMI |
11:00 | TBA |
12:00 | Learning Tree Distributions by Hidden Markov Models SPEAKER: Daniele Castellana ABSTRACT. Hidden tree Markov models allow learning distributions for tree structured data while being interpretable as nondeterministic automata. We provide a concise summary of the main approaches in the literature, focusing in particular on the causality assumptions introduced by the choice of a specific tree visit direction. We will then sketch a novel non-parametric generalization of the bottom-up hidden tree Markov model with its interpretation as a nondeterministic tree automaton with infinite states. |
11:00 | SPEAKER: Aline Goeminne ABSTRACT. We study n-player turn-based games played on a finite directed graph. For each play, each player receives a gain that he wants to maximise. Instead of the well-known notions of Nash equilibrium (NE) and subgame perfect equilibrium (SPE), we focus on the recent notion of weak subgame perfect equilibrium (weak SPE), a refinement of SPE. In this setting, players who deviate cannot use the full class of strategies but only a subclass with a finite number of deviation steps. We are interested in the constraint problem: given an upper and a lower thresholds x, y in {0,1}^n, we want to determine if there exists a weak SPE such that the gain of each player is componentwise between the two bounds. The goal of our ongoing research is to characterise the complexity of the constraint problem for the class of games with Boolean omega-regular objectives such as Muller, Büchi, Reachability, ... |
11:30 | SPEAKER: Quentin Hautem ABSTRACT. In recent years, two-player zero-sum games with multiple objectives have received a lot of interest as a model for the synthesis of complex reactive systems. In this framework, Player 1 wins if he can ensure that all objectives are satisfied against any behavior of Player 2. When it is not possible to satisfy all the objectives at once, an alternative is to use some preorder on the objectives according to which subset of objectives Player 1 wants to satisfy. For example, it is often natural to give more significance to one objective than another, a situation that can be modelled with lexicographically ordered objectives for instance. Inspired by recent work on concurrent games with multiple omega-regular objectives by Bouyer et al., we investigate in detail turn-based games with monotonically ordered and omega-regular objectives. We study the threshold problem which asks whether Player 1 can ensure a payoff greater than or equal to a given threshold w.r.t. a given monotonic preorder. As the number of objectives is usually much smaller than the size of the game graph, we provide a parametric complexity analysis and we show that our threshold problem is fixed parameter tractable for all monotonic preorders and all classical types of omega-regular objectives. We also provide polynomial time algorithms for Büchi, coBüchi and explicit Muller objectives for a large subclass of monotonic preorders that includes among others the lexicographic preorder. In the particular case of the lexicographic preorder, we also study the complexity of computing the values and the memory requirements of optimal strategies. |
12:00 | Comfort and Energy Optimization for Floor Heating Systems SPEAKER: Marco Muniz ABSTRACT. Controller synthesis for stochastic hybrid switched systems, like e.g. a floor heating system in a house, is a complex computational task that cannot be solved by an exhaustive search through all the control options. The state-space to be explored is in general uncountable due to the presence of continuous variables (e.g. temperature readings in the different rooms) and even after discretization, the state-space remains huge and cannot be fully explored. In previous work, we proposed an on-line synthesis methodology, where we periodically compute the controller only for the near future based on the current sensor readings. This computation is itself done by employing machine learning in the tool Uppaal Stratego. We focused on optimizing a single objective, the comfort of the users. Experiments have shown that our approach can lead to enormous improvement over the current controller. Our approach is now being implemented in a real house in Aalborg, Denmark. In this work we focus on a more ambitious multi-objective problem: optimizing the comfort of the users while reducing energy consumption. We apply our on-line synthesis methodology in this setting. We show the new challenges that arise, and propose alternatives to address these challenges. |
11:00 | ABSTRACT. Proving a theorem in intuitionistic propositional logic, with implication as its only primitive (also called minimal logic), is known as one of the simplest-to-state PSPACE-complete problems. At the same time, via the Curry-Howard isomorphism, it is instrumental in finding lambda terms that may inhabit a given type. However, as hundreds of papers witness, all starting with Gentzen's LJ calculus, conceptual simplicity has not come in this case with comparable computational counterparts. Implementing such theorem provers faces challenges related not only to soundness and completeness but also to termination and scalability problems. Can a simple solution, in the tradition of "lean theorem provers", be uncovered that matches the conceptual simplicity of the problem, while also being able to handle large randomly generated formulas? In search of an answer, starting from Dyckhoff's LJT calculus, we derive a sequence of minimalist theorem provers using declarative program transformation steps, while highlighting connections, via the Curry-Howard isomorphism, to type inference mechanisms for the simply typed lambda calculus. We chose Prolog as our meta-language. Being derived from essentially the same formalisms as those we are covering reduces the semantic gap and results in surprisingly concise and efficient declarative implementations. Our implementation is available at https://github.com/ptarau/TypesAndProofs. |
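To illustrate the kind of prover the abstract above derives, here is a minimal Python sketch (not the authors' Prolog code) of backtracking proof search in Dyckhoff's contraction-free LJT calculus restricted to the implicational fragment; the formula encoding is an assumption made for the example.

```python
# Formulas: atoms are strings; implications are tuples ('->', A, B).

def prove(gamma, goal):
    """Decide Gamma |- goal in the implicational fragment, LJT-style.
    gamma is a (frozen)set of formulas; search terminates by construction."""
    if isinstance(goal, tuple):                # right rule: Gamma |- A -> B
        _, a, b = goal
        return prove(gamma | {a}, b)
    if goal in gamma:                          # axiom
        return True
    for f in gamma:                            # try each left-implication rule
        if not isinstance(f, tuple):
            continue
        _, a, b = f
        rest = gamma - {f}
        if not isinstance(a, tuple):           # (atom -> B) with the atom present
            if a in gamma and prove(rest | {b}, goal):
                return True
        else:                                  # ((C -> D) -> B)
            _, c, d = a
            if prove(rest | {('->', d, b), c}, d) and prove(rest | {b}, goal):
                return True
    return False

# Peirce's law is classically valid but not intuitionistically provable:
peirce = ('->', ('->', ('->', 'p', 'q'), 'p'), 'p')
print(prove(frozenset(), peirce))                                  # False
print(prove(frozenset(), ('->', 'p', ('->', 'q', 'p'))))           # True
```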
11:22 | SPEAKER: Carmine Dodaro ABSTRACT. Answer Set Programming (ASP) is a well-established formalism for nonmonotonic reasoning. While incoherence, the non-existence of answer sets for some programs, is an important feature of ASP, it has frequently been criticised and indeed has some disadvantages, especially for query answering. Paracoherent semantics have been suggested as a remedy, which extend the classical notion of answer sets to draw meaningful conclusions also from incoherent programs. In this paper we present an alternative characterization of the two major paracoherent semantics in terms of (extended) externally supported models. This definition uses a transformation of ASP programs that is more parsimonious than the classic epistemic transformation used in recent implementations. A performance comparison carried out on benchmarks from ASP competitions shows that the usage of the new transformation brings about performance improvements that are independent of the underlying algorithms. |
11:44 | SPEAKER: Alfredo Milani ABSTRACT. Crossover operators are very important tools in Evolutionary Computation. Here we are interested in crossovers for the permutation representation that find applications in combinatorial optimization problems such as the permutation flowshop scheduling and the traveling salesman problem. We introduce three families of permutation crossovers based on algebraic properties of the permutation space. In particular, we exploit the group and lattice structures of the space. A total of 14 new crossovers is provided. Algebraic and semantic properties of the operators are discussed, while their performance is investigated by experimentally comparing them with known permutation crossovers on standard benchmarks from four popular permutation problems. Three different experimental scenarios are considered and the results clearly validate our proposals. |
12:06 | An Enhanced Genetic Algorithm with the BLF2G Guillotine Placement Heuristic for the Orthogonal Cutting-Stock Problem SPEAKER: Slimane Abou-Msabah ABSTRACT. The orthogonal cutting-stock problem tries to place a given set of items into a minimum number of identically sized bins. As a part of solving this problem with the guillotine constraint, the authors propose combining the new BLF2G (Bottom Left Fill 2-direction Guillotine) placement heuristic with an advanced genetic algorithm. According to the item order, the BLF2G routine creates a direct placement of items in bins to give a cutting format. The genetic algorithm exploits the search space to find the supposed optimal item order. Other methods try to guide the evolutionary process by introducing a greedy heuristic to the initial population to enhance the results. The authors propose enriching the population via qualified individuals, without disturbing the genetic phase, by introducing a new enhancement to guide the evolutionary process. The evolution of the GA process is controlled, and when no improvement is observed after some number of iterations, a qualified individual is injected into the population to avoid premature convergence to a local optimum. To enrich the evolutionary process with qualified chromosomes, a set of order-based individuals is generated. Our method is compared with other heuristics and metaheuristics found in the literature on existing data sets. |
11:00 | SPEAKER: François Bobot ABSTRACT. MCSAT is an approach to SMT-solving that uses assignments of values to first-order variables in order to try and build a model of the input formula. When different theories are combined, as formalized in the CDSAT system, equalities between variables and terms play an important role, each theory module being required to understand equalities and which values are equal to which. This paper broaches the topic of how to reason about equalities in a centralized way, so that the theory reasoners can avoid replicating equality reasoning steps, and even benefit from a centralized implementation of equivalence classes of terms, which is based on an equality graph (Egraph). This paper sketches the design of a prototype based on this architecture. |
11:30 | Proofs in conflict-driven theory combination SPEAKER: Maria Paola Bonacina ABSTRACT. Search-based satisfiability procedures try to construct a model of the input formula by simultaneously proposing candidate models and deriving new formulae implied by the input. When the formulae are satisfiable, these procedures generate a model as a witness. Dually, it is desirable to have a proof when the formulae are unsatisfiable. Conflict-driven procedures perform nontrivial inferences only when resolving conflicts between the formulae and assignments representing the candidate model. CDSAT (Conflict-Driven SATisfiability) is a method for conflict-driven reasoning in combinations of theories. It combines solvers for individual theories as theory modules within a solver for the union of the theories. In this paper we endow CDSAT with lemma learning and proof generation. For the latter, we present two techniques. The first one produces proof objects in memory: it assumes that all theory modules produce proof objects and it accommodates multiple proof formats. The second technique adapts the LCF approach to proofs from interactive theorem proving to conflict-driven SMT-solving and theory combination, by defining a small kernel of reasoning primitives that guarantees that CDSAT proofs are correct by construction. |
12:00 | Selfless Interpolation for Infinite-State Model Checking SPEAKER: Tanja Schindler ABSTRACT. We present a new method for interpolation in satisfiability modulo theories (SMT) that is aimed at applications in model-checking and invariant inference. The new method allows us to control the finite-convergence of interpolant sequences and, at the same time, provides expressive invariant-driven interpolants. It is based on a novel integration of model-driven quantifier elimination and abstract interpretation into existing SMT frameworks for interpolation. We have integrated the new approach into the Sally model checker and we include experimental evaluation showing its effectiveness. |
11:00 | Modularity in Ontologies |
11:45 | Discussion 1 ABSTRACT. Discussions on the previous talks. |
11:00 | SPEAKER: Jan Gorzny ABSTRACT. Resolution and superposition are common techniques which have seen widespread use with propositional and first-order logic in modern theorem provers. In these cases, resolution proof production is a key feature of such tools; however, the proofs that they produce are not necessarily as concise as possible. For propositional resolution proofs, there are a wide variety of proof compression techniques. There are fewer techniques for compressing first-order resolution proofs generated by automated theorem provers. This paper describes an approach to compressing first-order logic proofs based on lifting proof compression ideas used in propositional logic to first-order logic. One method for propositional proof compression is partial regularization, which removes an inference n when it is redundant in the sense that its pivot literal already occurs as the pivot of another inference in every path from n to the root of the proof. This paper describes the generalization of the partial-regularization algorithm RecyclePivotsWithIntersection [10] from propositional logic to first-order logic. An empirical evaluation of the generalized algorithm and its combinations with the previously lifted GreedyLinearFirstOrderLowerUnits algorithm [12] is also presented. |
11:30 | Adding Text-Based Interaction to a Direct-Manipulation Interface for Program Verification -- Lessons Learned SPEAKER: Sarah Grebing ABSTRACT. Interactive program verification is characterized by iterations of unfinished proof attempts. For proof construction, many interactive program verification systems offer either text-based interaction, using a proof scripting language, or a form of direct-manipulation interaction. We have combined a direct-manipulation program verification system with a text-based interface to leverage the advantages of both interaction paradigms. To give the user more insight when scripting proofs, we have adapted well-known interaction concepts from the field of software debugging. In this paper we report on our experiences in combining the direct-manipulation interface of the interactive program verification system KeY with a text-based user interface to construct program verification proofs, leveraging interaction concepts from the field of software debugging for the proof process. |
12:00 | SPEAKER: Rustam Zhumagambetov ABSTRACT. We present a chat bot interface for the Coq proof assistant system. The bot provides a new modality of interaction with Coq that functions across multiple devices and platforms. Our system is particularly suitable for mobile platforms, Android and iOS. The basic architecture of the bot software is reviewed, as are the outcomes of several rounds of beta-testing. Potential implications of the system are discussed, such as the possibility of a seamless collaborative proof environment. |
11:00 | Verification of infinite-state systems using decidable logic |
11:45 | Should I join a startup? |
14:00 | Weak memory made easy by separation logic (keynote) ABSTRACT. To adequately define the semantics of multithreaded programs, modern revisions of programming languages, such as C/C++ and Java, have adopted weak memory consistency models, which allow more program behaviours than can be observed by simply interleaving the threads of the program. Most computer scientists consider weak memory consistency as being an extremely complicated concept, and one better left only to a small circle of experts to master. Yet, we show that weak memory is not that complicated. Using separation logic triples, we can not only provide concise explanations of the various constructs of the C/C++11 memory models, but these explanations are also sufficient for verifying the most important synchronisation patterns appearing in real-world concurrent programs. |
14:50 | SPEAKER: James Brotherston ABSTRACT. We investigate the complexity consequences of adding pointer arithmetic to separation logic. Specifically, we study an extension of the points-to fragment of symbolic-heap separation logic with sets of simple "difference constraints" of the form x <= y + k, where x and y are pointer variables and k is an integer offset. This extension can be considered a practically minimal language for separation logic with pointer arithmetic. Most significantly, we find that, even for this minimal language, polynomial-time decidability is already impossible: satisfiability becomes NP-complete, while quantifier-free entailment becomes CoNP-complete and quantified entailment becomes \Pi^P_2-complete (where \Pi^P_2 is the second level of the polynomial-time hierarchy). However, the language does satisfy the small model property, meaning that any satisfiable formula A has a model of size polynomial in the size of A, whereas this property fails when richer forms of arithmetical constraints are permitted. |
15:20 | SPEAKER: Radu Iosif ABSTRACT. We first show that infinite satisfiability can be reduced to finite satisfiability for all prenex formulas of Separation Logic with k >= 1 selector fields (SL^k). Second, we show that this entails the decidability of the finite and infinite satisfiability problems for the class of prenex formulas of SL^1, by reduction to the first-order theory of one unary function symbol and unary predicate symbols. We also prove that the complexity is not elementary, by reduction from the first-order theory of one unary function symbol. Finally, we prove that the Bernays-Schönfinkel-Ramsey fragment of prenex SL^1 formulae with quantifier prefix in the language ∃*∀* is PSPACE-complete. The definition of a complete (hierarchical) classification of the complexity of prenex SL^1, according to the quantifier alternation depth, is left as an open problem. |
14:00 | Foundations and Techniques for the Static Analysis of Ethereum Smart Contracts SPEAKER: Matteo Maffei ABSTRACT. The recent growth of the blockchain technology market puts its main cryptocurrencies in the spotlight. Among them, Ethereum stands out due to its virtual machine (EVM) supporting smart contracts, i.e., distributed programs that control the flow of the digital currency Ether. Being written in a Turing complete language, Ethereum smart contracts allow for expressing a broad spectrum of financial applications. The price for this expressiveness, however, is a significant semantic complexity, which increases the risk of programming errors. Recent attacks exploiting bugs in smart contract implementations call for the design of formal verification techniques for smart contracts. This, however, requires rigorous semantic foundations, a formal characterization of the expected security properties, and dedicated abstraction techniques tailored to the specific EVM semantics. This tutorial will overview the state-of-the-art in smart contract verification, covering formal semantics, security definitions, and verification tools. We will then focus on EtherTrust, a framework for the static analysis of Ethereum smart contracts that we recently introduced, which includes the first complete small-step semantics of EVM bytecode, the first formal characterization of a large class of security properties for smart contracts, and the first static analysis for EVM bytecode that comes with a proof of soundness. |
14:00 | Tree dimension in verification of constrained Horn clauses ABSTRACT. I will introduce the notion of tree dimension and how it can be used in the verification of constrained Horn clauses. The dimension of a tree is a numerical measure of its branching complexity and the concept here applies to Horn clause derivation trees. I will show how to reason about the dimensions of derivations and how to filter out derivation trees below or above some dimension bound using clause transformations. I will then present algorithms using these constructions to decompose a constrained Horn clause verification problem. Finally I will report on implementations and experimental results. |
15:00 | SPEAKER: Giorgio Delzanno ABSTRACT. We show that Cubicle, an SMT-based infinite-state model checker, can be applied as a verification engine for GLog, a logic-based specification language for topology-sensitive distributed protocols with asynchronous communication. Existential coverability queries in GLog can be translated into verification judgements in Cubicle by encoding relational update rules as unbounded array transitions. We apply the resulting framework to automatically verify a distributed version of the Dining Philosopher mutual exclusion protocol formulated for an arbitrary number of nodes and communication buffers. |
14:00 | A logical framework for systems biology and biomedicine |
14:45 | Introducing iota, a logic for biological modelling |
14:00 | Weak and Strong learning of Probabilistic Context-Free Grammars: theoretical and empirical perspectives ABSTRACT. The problem of learning context-free grammars from a finite sample of their yields is a classic problem in grammatical inference; in this talk I will look at this problem and a modern variant: namely the problem of learning probabilistic grammars (PCFGs) from a sample of strings drawn i.i.d. from the distribution over strings defined by that grammar. We will consider both the problem of learning an accurate approximation of the distribution over strings (weak learning) and the much harder problem of learning the underlying unobserved parse trees as well (strong learning) which is related to the task of unsupervised parsing in the NLP community. I will discuss some theoretical results using distributional learning approaches and present some empirical results on both synthetic and natural examples, motivated by the problem of first language acquisition. |
15:00 | Decision problems for Clark-congruential languages SPEAKER: Tobias Kappé ABSTRACT. When trying to learn a language using the Minimally Adequate Teacher model, one assumes the presence of a teacher that can answer equivalence queries for the class of languages under study, i.e., queries of the form "is the language equivalent to L?". This raises the question: is it feasible to implement such a teacher, or, more precisely, is equivalence for this class of languages decidable? We consider the Clark-congruential languages, a class of context-free languages of particular interest to MAT-based learning efforts, and show that, for this class, equivalence as well as congruence are decidable. |
14:00 | A journey through negatively-weighted timed games: undecidability, decidability, approximability ABSTRACT. Weighted timed games are zero-sum games played by two players on a timed automaton equipped with weights, where one player wants to minimise the accumulated weight while reaching a target. Used in a reactive synthesis perspective, this quantitative extension of timed games allows one to measure the quality of controllers in real-time systems. Weighted timed games are notoriously difficult and quickly undecidable, even when restricted to non-negative weights. However, for a few years now, we explored, and we continue to explore, the world of weighted timed games with negative weights too, in order to get a more useful modelling mechanism. This gave rise to stronger undecidability results, but we also discovered new decidable fragments. In this talk, I will survey these results: decidability when limiting the number of distinct weights in the game, using corner-point abstraction techniques; decidability for a large fragment of one-clock weighted timed games, and for the so-called divergent weighted timed games, using value iteration schemes; approximability in the case of the so-called almost-divergent weighted timed games. |
15:00 | SPEAKER: Thomas Brihaye ABSTRACT. Timed automata and hybrid systems are important frameworks for the analysis of continuous-time systems. In this ongoing work, we study a stochastic extension of the latter. There are clear challenges regarding decidability: (i) the reachability problem is already quickly undecidable for non-stochastic hybrid systems; (ii) even in the simpler setting of timed automata, the finite abstraction known as the region graph does not preserve its correctness when lifting it to the stochastic setting. Our goal is to define the class of stochastic o-minimal hybrid systems (SoHSs) and to show that it ensures good properties while being reasonably rich (e.g., it encompasses continuous-time Markov chains). We look for decisiveness of SoHSs (as introduced by Abdulla et al.) and definability in our o-minimal structure. Hopefully, approximation of reachability probabilities could be obtainable in appropriate (and still quite rich) structures. |
14:00 | Replaceability for constraint satisfaction problems: Algorithms, Inferences, and Complexity Patterns ABSTRACT. Replaceability is a form of generalized substitutability whose features make it potentially of great importance for problem simplification. It differs from simple substitutability in that it only requires that a substitutable value exist for every solution containing a given value, without requiring that this substitute be the same value in every solution. This is also the most general form of substitutability that allows inferences from local to global versions of this property. Building on earlier work, this study first establishes that algorithms for localized replaceability (called consistent neighbourhood replaceability or CNR algorithms) based on all-solutions neighbourhood search outperform other replaceability algorithms by several orders of magnitude. It also examines the relative effectiveness of different forms of depth-first CNR algorithms. Secondly, it demonstrates an apparent complexity ridge, which does not occur at the same place in the problem space as the complexity areas for consistency or full search algorithms. Thirdly, it continues the study of methods for inferring replaceability in structured problems in order to improve efficiency. This includes correcting an oversight in earlier work and extending the formal analysis. It is also shown that some strategies for inferring replaceable values can be extended to disjunctive constraints in scheduling problems. |
14:22 | ABSTRACT. Algorithms based on singleton arc consistency (SAC) show considerable promise for improving backtrack search algorithms for constraint satisfaction problems (CSPs). The drawback is that even the most efficient of them is still comparatively expensive. Even when limited to preprocessing, they give overall improvement only when problems are quite difficult to solve with more typical procedures such as maintained arc consistency (MAC). The present work examines a form of partial SAC and neighbourhood SAC (NSAC) in which a subset of the variables in a CSP is chosen to be made SAC-consistent or neighbourhood-SAC-consistent. These consistencies are well-characterized in that algorithms have unique fixpoints and there are well-defined dominance relations. Heuristic strategies for choosing an effective subset of variables are described and tested, in particular a strategy of choosing by constraint weight after random probing. Experimental results justify the claim that these methods can be nearly as effective as full (N)SAC in terms of values discarded while significantly reducing the effort required. |
14:44 | SPEAKER: Neng-Fa Zhou ABSTRACT. Pseudo-Boolean constraints constitute an important class of constraints. Despite extensive studies of SAT encodings for PB constraints, there is still no generally accepted encoding. In this paper we revisit encoding PB constraints into SAT via binary adders and BDDs. For the binary adder encoding, we present an optimizing compiler that incorporates preprocessing, decomposition, constant propagation, and common subexpression elimination techniques tailored to PB constraints. For encoding via BDDs, we compare three methods for converting BDDs into SAT, namely, path encoding, 6-clause node encoding, and 2-clause node encoding. We experimentally compare these encodings on three sets of benchmarks. Our experiments revealed surprisingly good and consistent performance of the optimized adder encoder in comparison with other encoders. |
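To make the BDD route concrete, the following minimal Python sketch (our own illustration, not the encoder from the paper) builds a decision diagram for a pseudo-Boolean constraint a1·x1 + ... + an·xn <= k by branching on the variables in order and memoizing on the residual bound; each internal node of such a diagram can then be turned into clauses, which is exactly where the path, 6-clause and 2-clause node encodings compared in the paper differ.

# Sketch: decision diagram for sum(coeffs[i] * x_i) <= bound, assuming
# non-negative integer coefficients. Memoizing on (index, residual bound)
# keeps the construction polynomial in the magnitude of the bound.
def build_bdd(coeffs, bound, i=0, memo=None):
    if memo is None:
        memo = {}
    if bound < 0:
        return False                          # constraint already violated
    if bound >= sum(coeffs[i:]):
        return True                           # remaining sum cannot exceed the bound
    key = (i, bound)
    if key not in memo:
        hi = build_bdd(coeffs, bound - coeffs[i], i + 1, memo)   # branch x_i = 1
        lo = build_bdd(coeffs, bound, i + 1, memo)               # branch x_i = 0
        memo[key] = lo if lo == hi else (i, lo, hi)              # node: (var, lo, hi)
    return memo[key]

root = build_bdd([3, 2, 4], 5)                 # 3*x0 + 2*x1 + 4*x2 <= 5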
15:06 | ABSTRACT. In this work we investigate how the performance of SAT solvers can be improved by SAT formulae configuration. We introduce a fully automated approach for this configuration task that considers a number of criteria for optimising the order in which clauses, and literals within clauses, are listed in a formula expressed in CNF. Our experimental analysis, involving three state-of-the-art SAT solvers and six different benchmark sets, shows that this configuration can have a significant impact on solvers' performance. |
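As a toy illustration of the kind of configuration space involved (the concrete criteria used in the paper may differ), the sketch below reorders the clauses of a DIMACS-style CNF by length and, within each clause, lists literals of frequently occurring variables first; both orderings are illustrative assumptions only.

from collections import Counter

# cnf: list of clauses, each a list of non-zero integers in the DIMACS
# convention (variable v as v, its negation as -v).
def configure(cnf):
    freq = Counter(abs(lit) for clause in cnf for lit in clause)
    # Within each clause, put literals over frequent variables first.
    reordered = [sorted(clause, key=lambda lit: -freq[abs(lit)]) for clause in cnf]
    # List short clauses first; break ties by total variable frequency.
    reordered.sort(key=lambda cl: (len(cl), -sum(freq[abs(l)] for l in cl)))
    return reordered

print(configure([[1, -2, 3], [-1, 2], [2, 3, -4, 5]]))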
14:00 | SPEAKER: Arie Gurfinkel ABSTRACT. The logic of Constrained Horn Clauses (CHC) provides an effective logical characterization of many problems in (software) verification. For example, CHCs naturally capture inductive invariant discovery for sequential programs, compositional verification of concurrent and distributed systems, and verification of program equivalence. CHC is used as an intermediate representation by several state-of-the-art program analysis tools, including SeaHorn and JayHorn. Existing CHC solvers, such as Spacer and Eldarica, are limited to discovering quantifier-free solutions. This severely limits their applicability in software verification, where memory is modelled by arrays and quantifiers are necessary to express program properties. In this work, we extend the IC3 framework (used by the Spacer CHC solver) to discover quantified solutions to CHC. Our work addresses two major challenges: (a) finding a sufficient set of instantiations that guarantees progress, and (b) finding and generalizing quantified candidate solutions. We have implemented our algorithm in Z3 and describe experiments with it in the context of software verification of C programs. |
14:30 | SPEAKER: Giles Reger ABSTRACT. Many areas of automated reasoning are goal-oriented, i.e., the aim is to prove a goal from a set of axioms. Some methods, including the method of saturation-based reasoning explored in this paper, do not rely on having an explicit goal but can employ specific heuristics if one is present. SMT-LIB problems do not record a specific goal, meaning that we cannot straightforwardly employ goal-oriented proof search heuristics. In this work we examine methods for identifying the potential goal in an SMT-LIB problem and evaluate (using the Vampire theorem prover) whether this can help theorem provers based on saturation-based proof search. |
15:00 | SMT-COMP 2018 Report |
14:00 | Modularity in Proof Checking ABSTRACT. Modularity in Proof Checking |
14:45 | Modularity in Large Proofs ABSTRACT. Modularity in Large Proofs |
14:00 | ABSTRACT. The beginnings of the Isabelle Prover IDE framework (Isabelle/PIDE) go back to the year 2008. This is a report on the project after 10 years, with system descriptions for various PIDE front-ends, namely (1) Isabelle/jEdit, the main PIDE application and default Isabelle user-interface, (2) Isabelle/VSCode, an experimental application of the Language Server Protocol in Visual Studio Code (by Microsoft), and (3) a headless PIDE server with JSON protocol messages over TCP sockets. |
14:30 | SPEAKER: Steffen Frerix ABSTRACT. The System for Automated Deduction (SAD) by Andrei Paskevich proof-checks texts written in nearly natural mathematical language, without explicit system commands or prover commands. Natural language features like soft typing by noun phrases guide efficient text processing and proof checking. We have implemented practical improvements of the checking algorithm and extensions of the input language, so that we are now able to process considerably longer and more complicated texts. These are immediately readable by mathematicians, as demonstrated by subsequent examples. This suggests an approach to formal mathematics through collections or libraries of texts in (a controlled) natural language. |
15:00 | Drawing Trees SPEAKER: Anders Schlichtkrull ABSTRACT. We formally prove in Isabelle/HOL two properties of an algorithm for laying out trees visually. The first property states that removing layout annotations recovers the original tree. The second property states that nodes are placed at least a unit of distance apart. We have yet to formalize three additional properties: that parents are centered above their children, that drawings are symmetrical with respect to reflection, and that identical subtrees are rendered identically. |
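The properties in question can be illustrated with a much simpler, unverified layout than the one formalised in the paper: give the leaves consecutive x-coordinates and place every internal node at the mean of its children's positions. Nodes at the same depth are then at least one unit apart, and removing the annotations recovers the original tree. A minimal Python sketch:

from itertools import count

# A tree is (label, [children]); layout annotates each node with (x, depth).
def layout(tree):
    xs = count()                                   # consecutive leaf positions
    def go(node, depth):
        label, children = node
        if not children:
            return ((label, float(next(xs)), depth), [])
        drawn = [go(c, depth + 1) for c in children]
        centre = sum(n[1] for n, _ in drawn) / len(drawn)   # parent over its children
        return ((label, centre, depth), drawn)
    return go(tree, 0)

def strip(drawn):
    (label, _, _), children = drawn
    return (label, [strip(c) for c in children])

t = ("a", [("b", [("d", []), ("e", [])]), ("c", [])])
assert strip(layout(t)) == t                       # annotations can be removed again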
14:00 | How to get a paper accepted? |
14:30 | How to attend a conference? |
15:00 | Hands-on: Elevator pitch |
14:00 | Towards Word Sense Disambiguation by Reasoning SPEAKER: Javier Álvez ABSTRACT. In this proposal we describe a practical application of Vampire for Word Sense Disambiguation (WSD). In particular, we propose a method for the automatic disambiguation of the semantic relations in BLESS, which is a dataset designed to evaluate models of distributional semantics. It compiles 200 concrete nouns (100 animate and 100 inanimate nouns) from different classes, and each noun is linked to other words via the semantic relations hyperonymy, cohyponymy, meronymy, attributes, events and random. For the disambiguation of the word pairs and semantic relations of BLESS, we have to map the words to WordNet synsets. WordNet is a large lexical database of English where nouns, verbs, adjectives and adverbs are grouped into sets of synonyms (synsets). Each synset denotes a distinct concept, and synsets are interlinked by means of lexical-semantic relations such as synonymy, antonymy, hyponymy, meronymy or morphosemantic relations. Since a word can have different meanings, we need to choose which synset it belongs to, that is, the one that best fits the semantics of the word in the given context. To that end, we plan to use the knowledge in Adimen-SUMO, which is obtained by means of a suitable transformation of the knowledge in the core of SUMO into first-order logic (FOL) and enables its use by FOL automated theorem provers such as Vampire. WordNet and SUMO (and therefore Adimen-SUMO) are connected in a semantic mapping by means of three semantic relations: equivalence, subsumption and instance. By exploiting this mapping, we will automatically create a set of conjectures for each word pair by considering the semantic relations provided by BLESS. Then, these conjectures will be evaluated using Vampire. According to the outcomes, each word will be connected to a single synset and, consequently, disambiguated. Finally, we plan to compare the results provided by our proposal with those of different disambiguation systems from the literature. |
14:30 | Testing ATP folklore: a statistical analysis of Vampire proofs. SPEAKER: Michael Rawson ABSTRACT. There are a number of well-known assumptions about automated theorem proving such as "proofs usually only use a small proportion of the given axioms to prove a goal". In this talk we present a statistical analysis of a large number of Vampire proofs and use this to investigate a number of these assumptions coming from ATP folklore. |
15:00 | Foundations of SPECTRE SPEAKER: Simon Robillard ABSTRACT. ~ |
16:00 | SPEAKER: Jørgen Villadsen ABSTRACT. Substitution under quantifiers is non-trivial and may obscure a proof system for newcomers. Monk (Arch. Math. Log. Grundl. 1965) successfully eliminates substitution via identities and also uses a so-called normalization of formulas as a further simplification. We formalize the substitutionless proof system in Isabelle/HOL, spelling out its side conditions explicitly and verifying its soundness. |
16:15 | ABSTRACT. This is an introduction to the remote_build tool for transparent remote session builds. The intended workflow for a user is to locally issue a build command for some session heap images and then continue working, while the actual build runs on a remote machine and the resulting heap images are synchronized incrementally as soon as they are available. |
16:30 | SPEAKER: Yutaka Nagashima ABSTRACT. Deciding which sub-tool to use for a given proof state requires expertise specific to each ITP. To mitigate this problem, we present PaMpeR, a proof method recommendation system for Isabelle/HOL. Given a proof state, PaMpeR recommends proof methods to discharge the proof goal and provides qualitative explanations as to why it suggests these methods. PaMpeR generates these recommendations based on existing hand-written proof corpora, thus transferring experienced users' expertise to new users. Our evaluation shows that PaMpeR correctly predicts experienced users' proof method invocations, especially when it comes to special-purpose proof methods. |
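A rough picture of such a recommendation pipeline, as a hypothetical sketch rather than PaMpeR's actual implementation: a proof state is mapped to a boolean feature vector, and candidate methods are scored against a database of (feature vector, method) pairs mined from existing proof corpora.

# Hypothetical sketch; PaMpeR's feature extraction and learned models differ.
def recommend(state_features, database, top_k=3):
    scores = {}
    for features, method in database:
        # Crude similarity: number of features shared with the current state.
        overlap = sum(a and b for a, b in zip(state_features, features))
        scores[method] = scores.get(method, 0) + overlap
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

db = [([1, 0, 1], "auto"), ([1, 1, 0], "induction"), ([1, 0, 1], "simp")]
print(recommend([1, 0, 1], db))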
17:00 | SPEAKER: Burkhart Wolff ABSTRACT. We verify functional correctness of insertion sort as well as the partition function of quicksort. We use Isabelle/UTP and its denotational semantics for imperative programs as a verification framework. We propose a forward Hoare VCG for our reasoning and we discuss the different technical challenges encountered while using Isabelle/UTP. |
16:00 | My Journey through Observational Equivalence Research with Martin Hofmann |
16:30 | Memories of Martin |
16:00 | A logic modelling workflow for systems pharmacology |
16:00 | The Learnability of Symbolic Automata ABSTRACT. Symbolic automata allow transitions to carry predicates over rich alphabet theories, such as linear arithmetic, and therefore extend classic automata to operate over infinite alphabets, such as the set of rational numbers. In this paper, we study the problem of learning symbolic automata and exactly characterize under what conditions symbolic automata can be learned efficiently. First, we present an L*-style algorithm for learning symbolic automata using membership and equivalence queries, which treats the predicates appearing on transitions as their own learnable entities. The main novelty of the algorithm is that it can take as input an algorithm Λ for learning predicates in the underlying alphabet theory and it uses Λ to infer the predicates appearing on the transitions in the target automaton. Using this idea, the algorithm is able to learn automata operating over alphabet theories in which predicates are efficiently learnable using membership and equivalence queries. Furthermore, we prove that a necessary condition for efficient learnability of a symbolic finite automaton (SFA) is that predicates in the underlying algebra are also efficiently learnable using queries. Our results settle the learnability of a large class of SFA instances, leaving open only a subclass of SFAs over efficiently learnable alphabet theories. We implement the algorithm in an open-source library and show that it can efficiently learn automata that cannot be learned using existing algorithms and significantly outperform existing automata learning algorithms over large alphabets. |
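The central idea of treating transition predicates as learnable objects can be pictured with the following sketch (illustrative names and data layout, not the algorithm from the paper): the concrete characters observed between a pair of states are passed to an underlying predicate learner Λ, with characters that lead elsewhere from the same source state serving as negative examples.

# evidence maps (source_state, target_state) to the concrete characters
# observed to take the automaton from source to target.
def infer_transition_predicates(evidence, predicate_learner):
    predicates = {}
    for (src, tgt), positives in evidence.items():
        negatives = [c for (s, t), chars in evidence.items()
                     if s == src and t != tgt for c in chars]
        predicates[(src, tgt)] = predicate_learner(positives, negatives)
    return predicates

# Toy predicate learner over an integer alphabet: hypothesise an interval.
def interval_learner(pos, neg):
    lo, hi = min(pos), max(pos)
    assert not any(lo <= n <= hi for n in neg), "interval hypothesis inconsistent"
    return lambda c: lo <= c <= hi

evidence = {("q0", "q0"): [1, 2, 3], ("q0", "q1"): [10, 12]}
preds = infer_transition_predicates(evidence, interval_learner)
print(preds[("q0", "q0")](2), preds[("q0", "q1")](11))   # True True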
16:30 | Automatic clustering of a network protocol with weakly-supervised clustering ABSTRACT. Abstraction is a fundamental part of learning behavioral models of systems. Usually the process of abstraction is manually defined by domain experts. This paper presents a method to perform automatic abstraction for network protocols. In particular, a weakly supervised clustering algorithm is used to build an abstraction with a small vocabulary size for the widely used TLS protocol. To show the effectiveness of the proposed method, we compare the resultant abstract messages to a manually constructed (reference) abstraction. With a small amount of side information in the form of a few labeled examples, this method finds an abstraction that matches the reference abstraction perfectly. |
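How a handful of labels can steer an otherwise unsupervised abstraction can be illustrated by a generic seeded-clustering sketch (not the authors' algorithm): the labelled messages fix the cluster seeds, and every remaining message is mapped to the abstract symbol of the nearest seed.

# labelled: dict mapping abstract symbol -> one example message (the seed).
def seeded_abstraction(messages, labelled, distance):
    return {msg: min(labelled, key=lambda sym: distance(msg, labelled[sym]))
            for msg in messages}

def hamming(a, b):                        # toy distance on fixed-length tuples
    return sum(x != y for x, y in zip(a, b))

msgs = [(22, 1, 0), (22, 1, 1), (23, 9, 9)]
seeds = {"ClientHello": (22, 1, 0), "ApplicationData": (23, 9, 9)}
print(seeded_abstraction(msgs, seeds, hamming))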
17:00 | Toward Learning Boolean Functions from Positive Examples SPEAKER: Nicolas Basset ABSTRACT. In this article we investigate learning of DNF representations of boolean functions with two special focuses: (i) we consider that a sample with only positive examples is given to the learner (and hence no membership queries are allowed); and (ii) the functions we consider are true on a sparse set, namely one whose cardinality is far smaller than $2^n$, where $n$ is the number of variables of the problem instance. After motivating this problem, we provide a conceptual answer to how much to generalize beyond the sample, and where. We introduce an empirical greedy algorithm to address the problem under study and illustrate it on a randomly sampled DNF. |
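One concrete reading of "how much to generalize beyond the sample, and where" is the greedy sketch below (our own illustration, not necessarily the algorithm of the paper): start from one term per positive example and repeatedly drop a literal whenever the resulting DNF still covers no more than a fixed budget of assignments.

from itertools import product

# A term is a dict {variable index: required truth value}; it covers an
# assignment (tuple of booleans) if it agrees on every fixed variable.
def covers(term, assignment):
    return all(assignment[v] == b for v, b in term.items())

def dnf_size(terms, n):
    # Exact number of covered assignments (exponential in n; fine for a sketch).
    return sum(any(covers(t, a) for t in terms)
               for a in product([False, True], repeat=n))

def greedy_generalise(positives, n, budget):
    terms = [dict(enumerate(p)) for p in positives]
    changed = True
    while changed:
        changed = False
        for term in terms:
            for v in list(term):
                trial = {k: b for k, b in term.items() if k != v}      # drop a literal
                candidate = [trial if t is term else t for t in terms]
                if dnf_size(candidate, n) <= budget:
                    term.pop(v)
                    changed = True
    return terms

print(greedy_generalise([(True, True, False), (True, True, True)], n=3, budget=4))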
17:30 | Learning Several Languages from Labeled Strings: State Merging and Evolutionary Approaches ABSTRACT. The problem of learning pairwise disjoint deterministic finite automata (DFA) from positive examples has been recently addressed. In this paper, we address the problem of identifying a set of DFAs from labeled strings and come up with two methods. The first is based on state merging and a heuristic related to the size of each state merging iteration. State merging operations involving a large number of states are extracted to provide sub-DFAs. The second method is based on a multi-objective evolutionary algorithm whose fitness function takes into account the accuracy of the DFA w.r.t. the learning sample, as well as the desired number of DFAs. We evaluate our methods on a dataset originating from industry. |
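For the evolutionary variant, the interplay of the two objectives can be pictured as a single scalarised fitness (a simplification; the paper uses a genuinely multi-objective algorithm): accuracy of the candidate DFAs on the labelled strings, penalised by the distance from the desired number of automata. The names, the assumed accepts interface and the penalty weight are illustrative.

def fitness(dfas, sample, target_count, penalty=0.1):
    # dfas: list of automata, each assumed to expose accepts(string) -> bool.
    # sample: list of (string, label) pairs; label is the index of the DFA
    # whose language the string belongs to, or None for strings in no language.
    correct = 0
    for string, label in sample:
        accepting = [i for i, d in enumerate(dfas) if d.accepts(string)]
        # Correct iff exactly the string's own DFA (and no other) accepts it.
        if (label is None and not accepting) or accepting == [label]:
            correct += 1
    accuracy = correct / len(sample)
    return accuracy - penalty * abs(len(dfas) - target_count)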
16:00 | ABSTRACT. Markov decision processes (MDPs) are a well-known formalism for modeling discrete event systems. Several extensions of the model have been proposed in order to encompass additional model-level uncertainty in MDPs. Here, we introduce a multi-scenario uncertainty model proposed in the author's PhD thesis, together with possible directions for future research. |
16:30 | SPEAKER: Shibashis Guha ABSTRACT. We consider a stochastic scheduling problem with both hard and soft tasks. Each task is described by a discrete probability distribution over possible execution times, a deadline and a distribution over inter-arrival times. Our scheduling problem is non-clairvoyant in the sense that the execution and inter-arrival times are subject to uncertainty modeled by stochastic distributions. Given an instance of our problem, we want to synthesize a schedule that is safe and efficient: safety imposes that deadlines of hard tasks are never violated, while efficiency means that soft tasks should be completed as often as possible. To enforce that property, we impose a cost when a soft task deadline is violated and we are looking for a schedule that minimizes the expected value of the limit average of this cost. First, we show that the dynamics of such a system can be modeled as a finite Markov Decision Process (MDP). Second, we show that the problem of deciding the existence of a safe and efficient scheduler is PP-hard and in EXPTIME. Third, we have implemented our synthesis algorithm that partly relies on the probabilistic model-checker Storm to analyze the underlying MDP. Given a set of both hard and soft tasks, we first compute the set of safe schedules, i.e., the ones in which the hard tasks always finish their execution, then we compute the scheduler that minimizes the limit average cost among the set of all safe schedules. Finally, we compare the optimal solutions obtained with our procedure against an adaptation of the earliest deadline first (EDF) algorithm to account for the soft tasks. We show that this EDF strategy can be arbitrarily worse in terms of the expected limit average cost for missing the deadlines of the soft tasks when compared to our optimal procedure. |
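For reference, the kind of EDF adaptation mentioned at the end can be sketched as follows (an illustration of such a baseline, not the exact strategy evaluated in the paper): among the released jobs, hard jobs are always preferred over soft ones, and ties are broken by earliest absolute deadline.

def edf_with_soft_tasks(ready_jobs):
    # ready_jobs: list of dicts with keys 'hard' (bool) and 'deadline' (absolute time).
    if not ready_jobs:
        return None
    # Hard jobs first (False sorts before True), then earliest deadline first.
    return min(ready_jobs, key=lambda job: (not job["hard"], job["deadline"]))

jobs = [{"hard": False, "deadline": 3}, {"hard": True, "deadline": 7}]
print(edf_with_soft_tasks(jobs))        # picks the hard job despite its later deadline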
17:00 | SPEAKER: Mohammadhosein Hasanbeig ABSTRACT. We propose a Reinforcement Learning (RL) algorithm to synthesize policies for a Markov Decision Process (MDP), such that a linear time property is satisfied. We convert the property into a Limit-Deterministic Büchi Automaton (LDBA), then construct a product MDP between the automaton and the original MDP. A reward function is then assigned to the states of the product automaton, according to the accepting conditions of the LDBA. With this reward function, our algorithm synthesizes a policy that satisfies the linear time property: as such, the policy synthesis procedure is "constrained" by the given specification. Additionally, we show that the RL procedure sets up an online value iteration method to calculate the maximum probability of satisfying the given property, at any given state of the MDP; a convergence proof for the procedure is provided. Finally, the performance of the algorithm is evaluated via a set of numerical examples. We observe an improvement of one order of magnitude in the number of iterations required for the synthesis compared to existing approaches. |
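The reward shaping step can be pictured with the following sketch (the interface of the product MDP and the Q-learning loop are our own simplification of the construction described above): a transition earns a fixed positive reward exactly when the automaton component of the reached product state lies in the LDBA's accepting set, and a standard Q-learning update is run on top of that reward.

import random

def q_learning(product_mdp, accepting, episodes=1000, horizon=100,
               alpha=0.1, gamma=0.99, epsilon=0.1, reward_value=1.0):
    # Assumed interface: product_mdp.initial, product_mdp.actions(state) -> list,
    # product_mdp.step(state, action) -> next_state (sampled);
    # accepting(state) -> True iff the LDBA component of the state is accepting.
    q = {}
    for _ in range(episodes):
        state = product_mdp.initial
        for _ in range(horizon):
            actions = product_mdp.actions(state)
            if random.random() < epsilon:
                action = random.choice(actions)
            else:
                action = max(actions, key=lambda a: q.get((state, a), 0.0))
            nxt = product_mdp.step(state, action)
            reward = reward_value if accepting(nxt) else 0.0   # reward from acceptance
            best_next = max((q.get((nxt, a), 0.0) for a in product_mdp.actions(nxt)),
                            default=0.0)
            old = q.get((state, action), 0.0)
            q[(state, action)] = old + alpha * (reward + gamma * best_next - old)
            state = nxt
    return q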
16:00 | Modularity in Proof Assistants ABSTRACT. Modularity in Proof Assistants |
16:45 | Discussion 2 ABSTRACT. Discussions on the previous talks |
16:00 | Experimenting with Theory Instantiation in Vampire SPEAKER: Martin Riener ABSTRACT. Theory instantiation tackles the problem of theory reasoning with quantifiers in Vampire using an SMT solver. In contrast to AVATAR modulo theories, it works locally by instantiating a clause such that its pure theory part becomes inconsistent and can be deleted. We present our experiments with different clause filtering strategies. This achieves two goals: better interaction with unification with abstraction, and restricting the problem passed to the SMT solver to well-supported theories such as linear integer arithmetic. |
16:30 | Giving AVATAR Quantifiers to Play With SPEAKER: Martin Suda ABSTRACT. We investigate an extension of the AVATAR architecture which exposes first-order variables to the underlying SMT-solver. This enables (a controlled amount of) quantifier instantiation effort from within the SMT solver to complement the first-order prover's native support for quantifier reasoning. We report on experimental results with Vampire combined with the SMT solver CVC4. |
17:00 | Developments in Higher-order Theorem proving within Vampire SPEAKER: Ahmed Bhayat ABSTRACT. This talk presents ongoing work on extending Vampire to higher-order reasoning. Currently, two strategies are being explored. First, a translation from higher-order to first-order logic has been implemented, along with a mixture of heuristics and special inference rules to control the search procedure. Alternatively, we are working on extending Vampire's data structures to support higher-order features and then pragmatically introducing higher-order reasoning. |
16:10 | SPEAKER: Jens Katelaan ABSTRACT. We present Sloth, a solver for separation logic modulo theory constraints specified in the separation logic SL∗, a propositional separation logic that we recently introduced in our IJCAR'18 paper "A Separation Logic with Data: Small Models and Automation." SL∗ admits NP decision procedures despite its high expressiveness; features of the logic include support for list and tree fragments, universal data constraints about both lists and trees, and Boolean closure of spatial formulas. Sloth solves SL∗ constraints via an SMT encoding of SL∗ that is based on the logic's small-model property. We argue that, while clearly still a work in progress, Sloth already demonstrates that SL∗ lends itself to the automation of nontrivial examples. These results complement the theoretical work presented in the IJCAR'18 paper. |
16:40 | Presentation of Results from SL-COMP 2018 ABSTRACT. TBA |
Panel session preceded by an introductory talk.
16:40 | What Just Happened in AI ABSTRACT. I will discuss in this talk my perspective on recent developments in artificial intelligence, which have been triggered by the successes of deep learning. The discussion will be based on a position paper that will appear in CACM with the title “Human-Level Intelligence or Animal-Like Abilities?” My presentation will hinge on a distinction between model-based and function-based approaches to AI, and will address a spectrum of issues and considerations, while placing current developments in a historical perspective. I will also share thoughts on how best to capitalize on current progress as we move forward, touching on both educational and research needs. |
17:00 | Panel session SPEAKER: Adnan Darwiche ABSTRACT. The panel session will be moderated by:
and the panelists are:
|
Workshops dinner at Keble College. Drinks reception from 7pm, to be seated by 7:30 (pre-booking via FLoC registration system required; guests welcome).