FLOC 2018: FEDERATED LOGIC CONFERENCE 2018
PROGRAM FOR THURSDAY, JULY 19TH

09:00-10:30 Session 130A: Invited talk: Antoine Miné (AVOCS)

09:00
Experiences with thread-modular static analysis of concurrent embedded systems

ABSTRACT. More and more software systems, including in safety-critical industries, exploit concurrency and multi-core architectures. Verifying concurrent systems is challenging due to the huge number of possible executions spawned by the (inherently highly non-deterministic) scheduler. Yet, sound analyses covering all possible executions are necessary to certify software and discover subtle concurrency bugs.
In this talk, we present several static analyses of such systems that avoid considering explicitly all possible thread interleavings and instead use thread-modular techniques: we analyze each thread separately, as well as their interactions, computing a fixpoint to propagate interactions between threads. The methods are based on abstract interpretation, which ensures soundness by construction and allows a wide range of cost versus precision trade-offs, while leveraging for free all the abstractions developed to analyze sequential programs.
We present several analyses built on this model, including: an extension to concurrent programs of the Astrée analyzer to prove the absence of run-time errors in embedded critical C code, an analysis of the functional correctness of device drivers in TinyOS, and an analysis inferring numeric invariants in low-level concurrent programs in the presence of weak hardware memory consistency models.
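
As a rough illustration of the thread-modular fixpoint described above, here is a toy sketch in Python over an interval domain; the mini-language, function names and data structures are invented for this example and bear no relation to Astrée's actual implementation.

```python
# Toy thread-modular analysis: each thread is analyzed against an
# "interference" map recording the values other threads may write to
# shared variables; the analysis iterates until interferences stabilize.

def join(a, b):
    """Join of two intervals (None = bottom, i.e. no information)."""
    if a is None:
        return b
    if b is None:
        return a
    return (min(a[0], b[0]), max(a[1], b[1]))

def analyze_thread(thread, interference):
    """Abstractly execute one straight-line thread; return its writes."""
    env, writes = {}, {}
    for op, var, itv in thread:
        if op == "write":
            env[var] = itv
            writes[var] = join(writes.get(var), itv)
        else:  # "read": the value seen is the local one joined
            env[var] = join(env.get(var), interference.get(var))  # with interferences
    return writes

def thread_modular_fixpoint(threads):
    interference = {}
    while True:
        new = {}
        for t in threads:
            for v, itv in analyze_thread(t, interference).items():
                new[v] = join(new.get(v), itv)
        if new == interference:  # no new interference: fixpoint reached
            return interference
        interference = new

# Two threads writing to the shared variable x:
t1 = [("write", "x", (0, 1)), ("read", "x", None)]
t2 = [("write", "x", (5, 9))]
print(thread_modular_fixpoint([t1, t2]))  # {'x': (0, 9)}
```

Note that each thread is analyzed in isolation, so the cost grows with the number of threads rather than with the number of interleavings.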

09:00-10:30 Session 130B (EICNCL)
09:00
Title TBA - Invited talk

ABSTRACT. Abstract TBA

10:00
Effective Translations between Display and Labelled Proofs for Tense Logics
SPEAKER: Tim Lyon

ABSTRACT. We begin with display and labelled calculi for a certain class of tense logics. These tense logics are extensions of Kt with any finite number of general path axioms, which are axioms of the form Xp -> Yp, where X and Y represent sequences of black and white diamonds.
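
For concreteness, two instances of this axiom shape, written in the usual tense-logic notation with the white diamond for the future and the black diamond for the past (the particular axioms are illustrative, not taken from the talk):

```latex
% Xp -> Yp with X = <> and Y = <><>:
\Diamond p \rightarrow \Diamond\Diamond p
% Xp -> Yp with X a past-future sequence and Y a future-past sequence:
\blacklozenge\Diamond p \rightarrow \Diamond\blacklozenge p
```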

For the minimal tense logic Kt, I provide a bi-directional embedding between display calculus proofs and labelled calculus proofs. It is then shown how to translate display proofs for Kt extended with general path axioms into labelled proofs.

Providing a reverse translation--translating labelled proofs into display proofs for Kt extended with general path axioms--has proven to be much more difficult because certain labelled structural rules do not correspond well to the display variants of such rules. Nevertheless, for a certain subclass of general path axioms a translation from labelled calculus proofs to display calculus proofs can be provided. Through the addition of special rules, referred to as "propagation rules," we can prove that the problematic structural rules are admissible in the resulting calculus. Every proof of a formula in this calculus contains sequents whose structure corresponds to a tree with two types of edges. This specific structure allows us to translate these proofs into display proofs, which gives an answer to an open question of translating labelled proofs to display proofs for extensions of the minimal tense logic Kt.

09:00-10:30 Session 130C (MLP)
Location: Blavatnik LT1
09:00
Why is Software Natural? And how can Naturalness be exploited?

ABSTRACT. Sometime during the summer of 2011, several of us at UC Davis were quite puzzled and shocked to discover that software is "natural", viz., just as repetitive and predictable as natural language corpora; in fact, much more so! By now, this early experiment has been replicated many times, in many ways, and various applications of naturalness have been developed. But why is this? Is it just because of programming language syntax? Or is it due to something else, like conscious programmer choice? How can we study this question? Are there other "natural" corpora (other than software) that are similar to software?

09:45
Program Synthesis as High-Level Machine Learning

ABSTRACT. The area of deep learning has had many remarkable successes in the recent past. However, deep neural networks have some well-documented weaknesses: they are data-hungry, brittle, opaque to users, hard to analyze, and not easily constrained with axiomatic knowledge about the world. In this talk, I will argue that ideas from programming languages and program synthesis can offer a way of overcoming these weaknesses. Specifically, one can use higher-level programming languages, possibly with access to neural "subroutines", to describe statistical models normally described neurally. Such programmatic models are more easily understood by humans, can more easily accommodate axiomatic knowledge about the world, and are easier to analyze using symbolic techniques. The use of "neurosymbolic", as opposed to purely symbolic, models can lead to lower complexity of learning. The compositionality inherent in modern languages can allow transfer of knowledge across learning tasks. Discovering such programmatic models from data is a form of program synthesis, and can perhaps also be described as "high-level machine learning". Early experience with the problem suggests that the literature on language-integrated program synthesis can offer powerful tools for solving this problem. At the same time, the problem is different from traditional synthesis in key ways, and opens up many new technical challenges.

09:00-10:30 Session 130D: Welcome and Invited Talk (PAAR)
Location: Maths LT3
09:00
The CakeML Verified Compiler and Toolchain
10:00
Dynamic Strategy Priority: Empower the strong and abandon the weak

ABSTRACT. Automated theorem provers are often used in other tools as black-boxes for discharging proof obligations. One example where this has enjoyed a lot of success is within interactive theorem proving. A key usability factor in such settings is the time taken for the provers to complete. Automated theorem provers run lists of proof strategies sequentially, which can cause proofs to be found more slowly than necessary. We show that it is possible to predict which strategies are likely to succeed while they are running using an artificial neural network. We also implement a run-time strategy scheduler in the Vampire theorem prover which improves average proof search time, and hence increases usability as a black-box solver.
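
The scheduling idea can be pictured with a short Python sketch; the Strategy class and the predictor below are stand-ins invented for illustration (in the paper, the predictor is an artificial neural network running inside Vampire):

```python
import heapq

class Strategy:
    """Stand-in for a proof strategy that can be resumed in small slices."""
    def __init__(self, name, steps_needed):
        self.name, self.left = name, steps_needed
    def features(self):
        return [self.left]  # toy feature vector
    def run_for(self, slices=1):
        self.left -= slices
        return "proof" if self.left <= 0 else "running"

def predict_success(features):
    # toy stand-in for the learned success predictor
    return 1.0 / (1.0 + features[0])

def schedule(strategies, budget=100):
    """Give time slices to strategies, always resuming the most promising."""
    queue = [(-predict_success(s.features()), i, s)
             for i, s in enumerate(strategies)]
    heapq.heapify(queue)
    for _ in range(budget):
        if not queue:
            return None
        _, i, s = heapq.heappop(queue)
        if s.run_for() == "proof":
            return s.name
        heapq.heappush(queue, (-predict_success(s.features()), i, s))
    return None  # budget exhausted

print(schedule([Strategy("s1", 7), Strategy("s2", 3)]))  # -> s2
```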

09:00-10:30 Session 130E: Welcome & invited talk I (PRUV)
09:00
Welcome to PRUV

ABSTRACT. A warm welcome to everybody interested in reasoning with preferences, uncertainty and vagueness!

09:10
Quantitative Logic Reasoning -- Combining logical reasoning with probabilities and counting (invited talk)

ABSTRACT. We present a research program which investigates the intersection of deductive reasoning with explicit quantitative capabilities. These capabilities encompass probabilistic reasoning, counting and counting quantifiers, and similar systems. The need for a combined reasoning system that enables a unified way of reasoning with quantities has always been recognized in modern logic, as proposals for probabilistic logic reasoning have been present since the work of Boole [1854]. Equally ubiquitous is the need to deal with cardinality restrictions on finite sets. More recently, a well-founded probabilistic theory has been developed for non-classical settings as well, such as probabilistic reasoning over Lukasiewicz infinitely-valued logic.

We show that there is a common way to deal with these several deductive quantitative capabilities, involving a framework based on Linear Algebra and Linear Programming. The distinction between classical and non-classical reasoning on the one hand, and probabilistic and cardinality reasoning on the other hand, comes from the different family of algebras employed. The quantitative logic systems also allow for the introduction of inconsistency measurements, which quantify the degree of inconsistency of a given quantitative logic theory, following some basic principles of inconsistency measurements.

On the practical level, we aim at exploring quantitative logic systems in which the complexity of reasoning is "only NP-complete". We provide open-source implementations for solvers operating over those systems and study some notable empirical properties, such as the presence of a phase transition.

09:00-10:30 Session 130F: Opening and First Presentation Round (ReMOTE)
09:00
Welcome and Introduction
09:15
Understanding Ethical Reasoning
09:45
Human morality, robots’ moral competence, and the deepest kind of trust

ABSTRACT. As robots increasingly engage in society as educators, rescuers, and caretakers, they face the uniquely human domain of morality. Could robots ever master this domain and acquire moral competence? In the first part of this talk I offer theoretical arguments and empirical evidence to propose that moral competence consists primarily of a massive web of norms, decisions in light of these norms, judgments when such norms are violated, and a vocabulary for moral communication. Affect and emotion—so central in many other models of morality—may be important, but not in the way people commonly assume. In the second part I examine how formal system verification fits into this model of moral competence and offer an optimistic view: If verification provides justification and is properly communicated, it can provide convincing evidence of a machine’s moral competence. Finally, I propose that trust is multi-dimensional and that humans can have different kinds of trust in a robot (e.g., in its reliability or capacity) but that the deepest kind is trust in an agent’s moral integrity. Robots that provably and convincingly have moral competence would deserve such deep moral trust.

10:15
Elements of a Model of Trust in Technology

ABSTRACT. Trust is discussed in the context of other factors influencing the decision to utilize a technology and the overt and covert costs, risks and side effects incurred by that decision. We outline possible steps towards the quantification of trust in artificial autonomous systems and discuss some implications regarding the design and verification of such systems.

09:00-10:00 Session 130G: Invited talk: A. Turberfield (VEMDP)
Location: Maths L6
09:00
Controlling Assembly and Function of DNA Nanostructures and Molecular Machinery

ABSTRACT. TBA

09:00-10:30 Session 130H: Certification & Formalisation III (VSTTE)

Location: Maths LT2
09:00
A Formalization of the ABNF Notation and a Verified Parser of ABNF Grammars

ABSTRACT. Augmented Backus-Naur Form (ABNF) is a standardized formal grammar notation used in several Internet syntax specifications. This paper describes (i) a formalization of the syntax and semantics of the ABNF notation and (ii) a verified parser that turns ABNF grammar text into a formal representation usable in declarative specifications of correct parsing of ABNF-specified languages. This work has been developed in the ACL2 theorem prover.

09:00-10:30 Session 130I: Invited Speaker; Robotics I (VaVAS)

Location: Maths LT1
09:00
Reproducibility in Robotics - The Big 5 Issues

ABSTRACT. The range of difficulties that can occur when running an experimental study in robotics is broad. Subjects can behave in unforeseen ways, rendering the experimental scenario inconclusive, or misunderstand questionnaire items or instructions. The robot may exhibit seemingly random physical behaviors, or break down in the middle of the experiment. To increase the intricacy, imagine the experiment being reproduced in another lab, research field, or country by other researchers with a regular research publication as their only starting point.

Furthermore, assume that the publication is in an interdisciplinary research field, like social robotics or HRI, in which authors from different fields often carry different levels of knowledge and disciplinary interest about the social science methodologies, the technical system, and the conceptual background of the work.

Based on my experience in the field, I will introduce and discuss the "big five" issues of reproducible robotics from both a developer's and a reviewer's perspective. The discussed topics range from technical, to methodological, to social aspects of reproducible robotics and are motivated by applied "real life" projects and scenarios.

10:00
RoboTool: Modelling and Verification with RoboChart

ABSTRACT. We propose to demonstrate the application of RoboChart and its associated tool, RoboTool, for the verification and validation of robotic applications. In our demonstration, we will consider a few small examples for illustration, and the larger example of a transporter. It is part of a swarm and cooperates with other identical robots to push an object to a target location.

09:00-10:30 Session 130J: Invited talk (WST)
09:00
Towards a Unified Method for Termination

ABSTRACT. The question of how to ensure that programs terminate has for decades attracted remarkable attention from computer scientists, resulting in a great number of techniques for proving termination of term rewriting and other models of computation. Nowadays it has become hard for newcomers to come up with new termination techniques/tools, since there are so many to learn/implement before inventing a new one. In this talk, I present my past and ongoing work towards a unified method for termination, which allows one to learn/implement a single idea and obtain many well-known techniques as instances.

10:00-10:30 Session 131 (VEMDP)
Location: Maths L6
10:00
Computing properties of stable configurations of thermodynamic binding networks
SPEAKER: Keenan Breik
10:30-11:00 Coffee Break
11:00-12:30 Session 132A: AVoCS Regular Papers 3 (AVOCS)
11:00
Using SMT engine to generate Symbolic Automata

ABSTRACT. Open pNets are used to model the behaviour of open systems, both synchronous and asynchronous, expressed in various calculi or languages. They are endowed with a symbolic operational semantics in terms of so-called "Open Automata". This makes it possible to check properties of such systems in a compositional manner. We implement an algorithm computing this semantics, building predicates that express the synchronization conditions between the events of the pNet sub-systems. Checking such predicates requires symbolic reasoning over first-order logic, but also over application-specific data. We use the Z3 SMT engine to check the satisfiability of the predicates, and prune the open automaton of its unsatisfiable transitions. As an industry-oriented use case, we use so-called "architectures" for BIP systems, which have been used in the framework of an ESA project and to specify the control software of a nanosatellite at the EPFL Space Engineering Center. We use pNets to encode a BIP architecture extended with explicit data, and compute its open automaton semantics. This automaton may be used to prove behavioural properties; we give two examples, a safety and a liveness property.
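
The pruning step can be pictured in a few lines of z3py; the transition encoding below is invented for illustration and is far simpler than actual open automata:

```python
# Each symbolic transition carries a guard predicate; Z3 decides its
# satisfiability, and unsatisfiable transitions are dropped.
from z3 import Ints, Solver, And, sat

x, y = Ints("x y")
transitions = [
    ("t1", And(x > 0, y == x + 1)),  # satisfiable: kept
    ("t2", And(x > 0, x < 0)),       # unsatisfiable: pruned
]

kept = []
for name, guard in transitions:
    s = Solver()
    s.add(guard)
    if s.check() == sat:
        kept.append(name)

print(kept)  # ['t1']
```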

11:30
Climbing the Software Assurance Ladder - Practical Formal Verification for Reliable Software
SPEAKER: Yannick Moy

ABSTRACT. There is a strong link between software quality and software reliability. By decreasing the probability of imperfection in the software, we can augment its reliability guarantees. At one extreme, software with one unknown bug is not reliable. At the other extreme, perfect software is fully reliable. Formal verification with SPARK has been used for years to get as close as possible to zero-defect software. We present the well-established processes surrounding the use of SPARK at Altran UK, as well as the deployment experiments performed at Thales to fine-tune the gradual insertion of formal verification techniques in existing processes. Experience of both long-term and new users helped us define adoption and usage guidelines for SPARK based on five levels of increasing assurance that map well with industrial needs in practice.

12:00
A Framework for the Formal Verification of Networks of Railway Interlockings - Application to the Belgian Railways

ABSTRACT. Modern railway stations are controlled by computerized systems called interlockings. In fact, medium and large stations usually require several interlockings, which together form a network. Much research proposes to verify the safety properties of such systems by means of model checking. Our approach goes a step further and proposes a method to extend the verification process to interconnected interlockings. This process is known as compositional verification. Each interlocking is seen as a component of a larger system (i.e., a station) and interacts with its neighbours by means of interfaces. Our first contribution comes in the form of a catalogue of formal definitions covering all the interfaces used by the Belgian Railways. Each interface can then be bound to a formal contract allowing its verification by the ocra tool. Our second contribution is an algorithm able to split the topology controlled by a single interlocking into components. This makes it possible to reduce the size of the model to be verified, thereby reducing state space explosion, while providing guarantees on the whole interlocking.

11:00-12:30 Session 132B: Modal logics (EICNCL)
11:00
Toward intuitionistic non-normal modal logic and its calculi

ABSTRACT. We propose a minimal non-normal intuitionistic modal logic. We present it by a Hilbert axiomatisation and an equivalent cut-free calculus. We then propose a semantic characterisation of this logic in terms of bi-neighbourhood models, with respect to which the logic is sound. We finally present a cut-free labelled calculus matching the bi-neighbourhood semantics.

11:30
Proof theory for quantified monotone modal logics

ABSTRACT. This paper provides the first proof-theoretic study of quantified non-normal modal logics. It introduces labelled sequent calculi for the first-order extension, both with free and with classical quantification, of all the monotone non-normal modal logics, as well as of some interesting extensions thereof, and it studies the role of the Barcan Formulas in these calculi. It will be shown that the calculi introduced have good structural properties: they have invertibility of the rules, height-preserving admissibility of weakening and contraction, and syntactic cut elimination. It will also be shown that each of the calculi introduced is sound and complete with respect to the appropriate class of neighbourhood frames with either constant or varying domains. In particular, the completeness proof constructs a formal proof for derivable sequents and a countermodel for underivable ones, and it gives a semantic proof of the admissibility of cut. Finally, some preliminary results on the extension of our approach to the non-monotonic case are discussed.
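
For reference, the Barcan formula and its converse, whose role the calculi analyse (standard statements, included for orientation):

```latex
\forall x\, \Box A \rightarrow \Box \forall x\, A   % Barcan formula (BF)
\Box \forall x\, A \rightarrow \forall x\, \Box A   % converse Barcan formula (CBF)
```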

12:00
Sequent Calculi for Logic that Includes an Infinite Number of Modalities

ABSTRACT. Multimodal logic has been studied for various purposes. Although some studies have considered an infinite number of modalities, propositions for the quantification of modalities, such as "For all modalities [i] in an infinite set of modalities S, [i]p is true", have not been discussed in general. In this paper, a simple method for expressing these propositions is discussed and deduction systems for the new logic are established. The conditions on Kripke frames that include these notions are also discussed.

11:00-12:30 Session 132C (MLP)

Deep Learning for Code

Location: Blavatnik LT1
11:00
code2vec: Learning Distributed Representations of Code
11:45
Understanding & Generating Source Code with Graph Neural Networks

ABSTRACT. The rich structure of source code presents an interesting challenge for machine learning methods. Recently, Graph Neural Networks (GNN) have shown promising results in code understanding and code generation tasks. In this talk, I will briefly discuss two neural models that employ GNNs: one of them for catching variable misuse bugs and the other for generating code expressions. Finally, I will discuss some of the open challenges that GNNs face on many source code-related tasks.

11:00-12:30 Session 132D: Automated Reasoning I (PAAR)
Location: Maths LT3
11:00
A Verified Simple Prover for First-Order Logic

ABSTRACT. We present a simple prover for first-order logic with certified soundness and completeness in Isabelle/HOL, taking formalizations by Tom Ridge and others as the starting point, but with the aim of using the approach for teaching logic and verification to computer science students at the bachelor level. The prover is simple in the following sense: it is purely functional and can be executed with rewriting rules or via code generation to a number of functional programming languages. The prover uses no higher-order functions, that is, no function takes a function as argument or returns a function as its result. This is advantageous when students perform rewriting steps by hand. The prover uses first-order logic in negation normal form, with a term language consisting only of variables. This subset of the full syntax of first-order logic allows for a simple proof system without resorting to the much weaker propositional logic.

11:30
Proof Search Optimizations for Non-clausal Connection Calculi

ABSTRACT. The paper presents several proof search optimization techniques for non-clausal connection calculi. These techniques are implemented and integrated into the non-clausal connection prover nanoCoP. Their effectiveness is evaluated on the problems in the TPTP library. Together with a fixed strategy scheduling, these techniques are the basis of the new version of nanoCoP.

12:00
TFX: The TPTP Extended Typed First-order Form

ABSTRACT. The TPTP world is a well established infrastructure that supports research, development, and deployment of Automated Theorem Proving systems for classical logics. The TPTP language is one of the keys to the success of the TPTP world. Originally the TPTP world supported only first-order clause normal form (CNF). Over the years support for full first-order form (FOF), monomorphic typed first-order form (TF0), rank-1 polymorphic typed first-order form (TF1), monomorphic typed higher-order form (TH0), and rank-1 polymorphic typed higher-order form (TH1), have been added. TF0 and TF1 together form the TFF language family; TH0 and TH1 together form the THF language family. This paper introduces the eXtended Typed First-order form (TFX), which extends TFF to include boolean terms, tuples, conditional expressions, and let expressions.

11:00-12:30 Session 132E: PRUV regular papers (PRUV)

PRUV regular papers

11:00
Measuring Disagreement among Knowledge Bases

ABSTRACT. When combining beliefs from different sources, often not only new knowledge but also conflicts arise. In this paper, we investigate how we can measure the disagreement among sources. We start our investigation with disagreement measures that can be induced from inconsistency measures in an automated way. After discussing some problems with this approach, we propose a new measure that is inspired by the eta-inconsistency measure. Roughly speaking, it measures how well we can satisfy all sources simultaneously. We show that the new measure satisfies desirable properties and scales well with respect to the number of sources, and we illustrate its applicability in inconsistency-tolerant reasoning.
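
For reference, the eta-inconsistency measure mentioned above is usually formulated along the following lines (a sketch in standard notation, assumed rather than quoted from the paper):

```latex
% A knowledge base K is eta-consistent if some probability function pi
% assigns every formula of K probability at least eta; the measure is then
I_{\eta}(K) \;=\; 1 - \max\{\eta \in [0,1] \mid
    \exists \pi\; \forall \varphi \in K :\ \pi(\varphi) \geq \eta\}
```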

11:30
Evidential Group Decision Making Model with Belief-Based Preferences

ABSTRACT. In a large number of problems, such as product configuration, automated recommendation and combinatorial auctions, decisions stem from agents' subjective preferences and requirements in the presence of uncertainty. In this paper, we introduce a framework based on belief function theory to deal with problems in group decision settings where the preferences of the agents may be uncertain, imprecise, incomplete, conflicting and possibly distorted. A case study is conducted on product recommendation to illustrate the applicability and validity of the proposed framework.

12:00
A new logic for jointly representing hard and soft constraints
SPEAKER: Jan Maly

ABSTRACT. Soft constraints play a major role in AI, since they allow restricting the set of possible worlds (obtained from hard constraints) to a small fraction of preferred or most plausible states. Only a few formalisms fully integrate soft and hard constraints. A prominent example is Qualitative Choice Logic (QCL), where propositional logic is augmented by a dedicated connective and preferred models are discriminated via acceptance degrees determined by this connective. In this work, we follow an analogous approach in terms of syntax but propose an alternative semantics. The key idea is to assign to formulas a set of models plus a partial relation on these models. Preferred models are then obtained from this partial relation. We investigate properties of our logic which demonstrate that our semantics shows some favorable behavior compared to QCL. Moreover, we provide a partial complexity analysis of our logic.

11:00-12:30 Session 132F: Second Presentation Round (ReMOTE)
11:00
How to Free Yourself from the Curse of Bad Equilibria

ABSTRACT. A standard problem in game theory is the existence of equilibria with undesirable properties. To pick a famous example, in the Prisoner's Dilemma, the unique dominant strategy equilibrium leads to the only Pareto inefficient outcome in the game, which is strictly worse for all players than an alternative outcome. If we are interested in applying game theoretic solution concepts to the analysis of distributed and multi-agent systems, then these undesirable outcomes correspond to undesirable system properties. So, what can we do about this? In this talk, I describe the work we have initiated on this problem. I discuss three approaches to dealing with this problem in multi-agent systems: the design of norms or social laws; the use of communication to manipulate the beliefs of players; and in particular, the use of taxation schemes to incentivise desirable behaviours.
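
The taxation idea can be illustrated on the Prisoner's Dilemma itself; in the Python sketch below (payoff and tax values are made up), a tax on defection flips the dominant strategy from defect to cooperate:

```python
# Payoffs are (row player, column player); "C" = cooperate, "D" = defect.
def best_response(payoffs, opponent_action):
    return max(["C", "D"], key=lambda a: payoffs[(a, opponent_action)][0])

base = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
        ("D", "C"): (5, 0), ("D", "D"): (1, 1)}
# Tax every defecting player 3 units:
taxed = {k: (v[0] - (3 if k[0] == "D" else 0),
             v[1] - (3 if k[1] == "D" else 0))
         for k, v in base.items()}

for game, name in [(base, "untaxed"), (taxed, "taxed")]:
    print(name, {o: best_response(game, o) for o in ["C", "D"]})
# untaxed: D is the best response to everything; taxed: C is
```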

11:30
Specification and Verification for Robots that Learn
12:00
Moral Permissibility of Actions in Smart Home Systems

ABSTRACT. With this poster, we present ongoing work of how to operate a smart home via a Hybrid Ethical Reasoning Agent (HERA). This work is part of the broader scientific effort to implement ethics on computer systems known as machine ethics. We showcase an everyday example involving a mother and a child living in the smart home. Our formal theory and implementation allows us to evaluate actions proposed by the smart home from different ethical points of view, i.e. utilitarianism, Kantian ethics and the principle of double effect. We discuss how formal verification, in the form of model-checking, can be used to check that the modeling of a problem for reasoning by HERA conforms to our intuitions about ethical action.

11:00-12:30 Session 132G (VEMDP)
Location: Maths L6
11:00
Rule-Based Gillespie Simulation of Chemical Systems
11:20
Rule-based design of computational DNA devices
11:40
Noise control for molecular computing

ABSTRACT. With the advancement in nucleic-acid-based technology in general, and strand-displacement DNA computing in particular, a large class of abstract biochemical networks may be physically realized using nucleic acids. Mathematical and experimental methods for designing abstract biochemical circuits, and then physically realizing them, respectively, have been predominantly developed at the (less-detailed) deterministic level, when the circuits involve molecules in high abundance and operate in well-mixed environments. A proof-of-concept is a recent in-vitro man-made chemical oscillator, called the displacillator. However, molecular circuits involving species in low abundance, and operating in fluctuating environments, are increasingly found to play an important role in applications, such as when molecular computers are interfaced with cell-like vesicles, and when they are utilized in nanotechnology. In such circumstances, methods for controlling the intrinsic noise in the system are necessary for a successful network design at the (more-detailed) stochastic level.

To bridge the gap, the noise-control algorithm for designing biochemical networks will be presented in this talk. The algorithm structurally modifies any given reaction network under mass-action kinetics, in such a way that (i) controllable state-dependent noise is introduced into the stochastic dynamics (the chemical master equation), while (ii) the deterministic dynamics (reaction-rate equations) are preserved. The structural modification involves appropriately enlarging the input network, by adding suitable auxiliary species and chemical reactions. This allows for a hybrid approach when constructing reaction networks: the deterministic model may be used to guide the design, while the noise-control algorithm may be applied to favorably reprogram the intrinsic noise in the stochastic model. The deterministic-stochastic hybrid approach allows one to reshape the probability distributions of target chemical species, and gain control over their sample paths, while inheriting the fixed mean-field behaviors. The capabilities of the algorithm are demonstrated by redesigning test reaction systems, enriching them with stochastic phenomena, such as noise-induced multimodality/multistability (coexistence of multiple maxima in the probability distributions) and oscillations.
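
For orientation, the "stochastic level" referred to here means sample paths of the chemical master equation, typically generated with Gillespie's algorithm; a minimal Python sketch for a birth-death network (illustrative background only, not the noise-control algorithm itself):

```python
# Birth-death network: 0 -> X at rate k1, X -> 0 at rate k2 * x.
import random

def ssa(k1, k2, x0=0, t_end=50.0):
    t, x, path = 0.0, x0, [(0.0, x0)]
    while t < t_end:
        a1, a2 = k1, k2 * x          # reaction propensities
        a0 = a1 + a2
        t += random.expovariate(a0)  # time to the next reaction event
        x += 1 if random.random() < a1 / a0 else -1
        path.append((t, x))
    return path

print(ssa(10.0, 1.0)[-1])  # the state fluctuates around k1/k2 = 10
```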

12:00
Molecular Filters for Noise Reduction
SPEAKER: Luca Laurenti
11:00-12:30 Session 132H: Security (VSTTE)

Security

Location: Maths LT2
11:00
Constructing Independently Verifiable Privacy-Compliant Type Systems for Message Passing between Black-Box Components
SPEAKER: Robin Adams

ABSTRACT. Privacy by design (PbD) is the principle that privacy should be considered at every stage of the software engineering process. It is increasingly both viewed as best practice and required by law. It is therefore desirable to have formal methods that provide guarantees that certain privacy-relevant properties hold. We propose an approach that can be used to design a privacy-compliant architecture without needing to know the source code or internal structure of any individual component.

We model an architecture as a set of agents or components that pass messages to each other. We present in this paper algorithms that take as input an architecture and a set of privacy constraints, and output an extension of the original architecture that satisfies the privacy constraints.

11:30
SideTrail: Verifying Time-Balancing of Cryptosystems

ABSTRACT. Timing-based side-channel attacks are a serious security risk for modern cryptosystems. The time-balancing countermeasure used by several TLS implementations (e.g. s2n, gnuTLS) ensures that execution timing is negligibly influenced by secrets, and hence no attacker-observable timing behavior depends on secrets. These implementations can be difficult to validate, since time-balancing countermeasures depend on global properties across multiple executions. In this work we introduce the tool SideTrail, which we use to prove the correctness of time-balancing countermeasures in s2n, the open-source TLS implementation used across a range of products from AWS, including S3. SideTrail is used in s2n’s continuous integration process, and has detected three side-channel issues that the s2n team confirmed and repaired before the affected code was deployed to production systems.
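
The property at stake can be illustrated with the classic time-balanced comparison pattern (a generic Python sketch, not code from s2n or SideTrail):

```python
def constant_time_eq(a: bytes, b: bytes) -> bool:
    """Compare secrets so that timing does not reveal the first mismatch."""
    if len(a) != len(b):
        return False
    diff = 0
    for x, y in zip(a, b):
        diff |= x ^ y   # accumulate differences without an early exit
    return diff == 0
```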

12:00
Towards verification of Ethereum smart contracts: a formalization of core of Solidity

ABSTRACT. Solidity is the most popular programming language for writing smart contracts on the Ethereum platform. Given that smart contracts often manage large amounts of valuable digital assets, considerable interest has arisen in formal verification of Solidity code. Designing verification tools requires good understanding of language semantics. Acquiring such an understanding in case of Solidity is difficult as the language lacks even an informal specification.

In this work, we evaluate the feasibility of formalization of Solidity and propose a formalization of a small subset of Solidity that contains its core data model and some unique features, such as function modifiers.

11:00-11:30 Session 132I: Robotics II (VaVAS)

Contributed Talks: Robotics II

Location: Maths LT1
11:00
Policy Generation with Probabilistic Guarantees for Long-term Autonomy of a Mobile Service Robot
SPEAKER: Nick Hawes

ABSTRACT. In this demo, we will illustrate our work on integrating formal verification techniques, in particular probabilistic model checking, to enable long-term deployments of mobile service robots in everyday environments. Our framework is based on generating policies for Markov decision process (MDP) models of mobile robots, using (co-safe) linear temporal logic specifications. More specifically, we build MDP models of robot navigation and action execution where the probability of successfully navigating between two locations and the expected time to do so are learnt from experience. For a specification over these models, we maximise the probability of the robot satisfying it, and minimise the expected time to do so. The policy obtained for this objective can be seen as a robot plan with attached probabilistic performance guarantees.

Our proposal is to showcase this framework live during the workshop, deploying our robot in the workshop venue and having it perform tasks throughout the day (the robot is based in the Oxford Robotics Institute, hence it can be easily moved to the workshop venue). In conjunction with showing the live robot behaviour, we will, among other things, provide visualisation of the generated policies on a map of the environment; showcase how the robot keeps track of the performance guarantees calculated offline during policy execution; and show how these guarantees can be used for execution monitoring.
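
A toy flavour of the underlying computation, in Python: value iteration for the maximum probability of reaching a goal state in a two-state MDP. The model and numbers are invented; the actual framework additionally handles co-safe LTL specifications and expected-time objectives.

```python
def max_reach_prob(states, actions, P, goal, eps=1e-9):
    """Optimal probability of eventually reaching `goal`, per state."""
    v = {s: (1.0 if s in goal else 0.0) for s in states}
    while True:
        nv = {}
        for s in states:
            if s in goal:
                nv[s] = 1.0
                continue
            nv[s] = max(sum(p * v[t] for t, p in P[s][a].items())
                        for a in actions(s))
        if max(abs(nv[s] - v[s]) for s in states) < eps:
            return nv
        v = nv

# 'move' reaches the goal with probability 0.9 and can be retried,
# so the optimal reachability probability tends to 1.
P = {"s0": {"move": {"goal": 0.9, "s0": 0.1}, "wait": {"s0": 1.0}}}
print(max_reach_prob(["s0", "goal"], lambda s: P[s].keys(), P, {"goal"}))
```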

11:00-12:30 Session 132J: Complexity (WST)
11:00
Complexity Analysis for Bitvector Programs
SPEAKER: Jürgen Giesl

ABSTRACT. In earlier work, we developed approaches for automated termination analysis of several different programming languages, based on back-end techniques for termination proofs of term rewrite systems and integer transition systems. In the last years, we started adapting these approaches in order to analyze the complexity of programs as well. However, up to now a severe drawback was that we assumed the program variables to range over mathematical integers instead of bitvectors. This eases mathematical reasoning but is unsound in general. While we recently showed how to handle fixed-width bitvector integers in termination analysis, we now present the first technique to analyze the runtime complexity of programs with bitvector arithmetic. We implemented our contributions in the tool AProVE and evaluate its power by extensive experiments.
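
A tiny Python example of why the distinction matters (simulating 8-bit unsigned wraparound): the loop below never terminates over the mathematical integers, yet terminates for bitvectors.

```python
x = 1
steps = 0
while x != 0:
    x = (x + 1) & 0xFF  # 8-bit unsigned increment: 255 wraps to 0
    steps += 1
print(steps)  # 255: the loop exits exactly when x wraps around to 0
```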

11:30
A Perron-Frobenius Theorem for Jordan Blocks for Complexity Proving

ABSTRACT. We consider complexity proofs for rewrite systems that involve matrix interpretations. In order to certify these proofs, we have to validate polynomial bounds on the matrix growth of A^n for some non-negative real-valued square matrix A. Whereas our earlier certification criterion used algebraic number arithmetic in order to compute all maximal Jordan blocks, in this paper we present a Perron-Frobenius like theorem. Based on this theorem it suffices to just compute the Jordan Blocks of eigenvalue 1, which can easily be done. This not only helps to improve the efficiency of our certifier CeTA, but might also be used to actually synthesize suitable matrix interpretations.
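
The standard growth bound behind such criteria can be stated as follows (a textbook linear-algebra fact, included here for orientation):

```latex
% If every eigenvalue of A has modulus at most 1, and k is the size of the
% largest Jordan block for an eigenvalue of modulus exactly 1, then
\|A^n\| \in \mathcal{O}(n^{k-1}),
% which is the polynomial bound on matrix growth to be certified.
```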

12:00
Inference of Linear Upper-Bounds on the Expected Cost by Solving Cost Relations
SPEAKER: Alicia Merayo

ABSTRACT. In this extended abstract, we describe preliminary work on inferring linear upper bounds on the expected cost of control-flow graphs via cost relations, with the goal of integrating this process into the SACO tool, whose cost analyzer is likewise based on cost relations.

11:30-12:30 Session 133: Panel Session (VaVAS)

Panel session focusing on key issues relating to the verification and validation of autonomous systems. Panelists include Calin Belta (Boston University), Jérémie Guiochet  (University of Toulouse III), Florian Lier  (Bielefeld University)  and Alice Miller (University of Glasgow).

Location: Maths LT1
12:30-14:00 Lunch Break
14:00-15:30 Session 134A: Invited talk: Yannick Moy (AVOCS)

14:00
Mostly harmless - luring programmers into proof with SPARK

ABSTRACT. SPARK is a technology for mathematically proving that programs are correct, which celebrated its 30th birthday in 2017, making it one of the oldest such technologies alive! Its longevity can be explained by a stubborn focus on the development of tools usable by software engineers and applicable to industrial software. The "tools" here include a programming language, development and verification tools, and processes around these. These tools have evolved considerably in three decades, with even a complete overhaul between 2010 and 2014 to unify the logical and executable views of specifications. We are now seeing adoption progress at a faster rate, mostly because increased automation has lowered entry costs for users. Normal engineers can start proving the correctness of their programs on their personal computers. An apparent paradox is that program proving itself, understood as the full functional correctness of programs, is not much easier today. What is much easier thanks to automation, however, is the proof of simpler yet critical properties of programs, like the absence of runtime errors. An unavoidable consequence is that, as engineers try to prove more complex properties of their programs, automation is not the panacea. We tell them: Don't Panic! SPARK offers a continuum from automatic proof to interactive proof, allowing users to decide where to set the target of their journey. In this talk, we will provide the hitchhiker's guide to the SPARK proof galaxy.

14:00-15:30 Session 134B (EICNCL)
14:00
Cyclic proofs, hypersequents and Kleene algebra - Invited Talk

ABSTRACT. Logics arising from automaton theory, relational algebra and formal language theory are still not well understood in proof-theoretic terms. However, there is increasing evidence to suggest that emerging trends in structural proof theory, namely richer proof calculi and non-wellfounded proofs, possess the capacity to capture these fundamental frameworks, and even deliver important metalogical results of independent interest. In this talk I will speak about a recent line of work that uses a combination of internal calculi and cyclic proofs (arguably an external feature) to recover proof theoretic treatments for Kleene algebras (KA) and action lattices (AL).

Starting with KA, I show how a natural sound and complete non-wellfounded system can be made regular via enriching the syntax of proof lines to hypersequents. This takes advantage of several basic techniques on the cyclic proof side as well as known normal forms from automaton theory, morally corresponding to the use of hypersequents. This calculus exhibits optimal proof-search complexity and, as an application, I will briefly present work in progress on how to recover an alternative proof of the completeness of KA with respect to inclusions of rational languages. I will also speak about a proof theoretic account of AL, which is of natural interest to structural proof theorists since it extends the full Lambek calculus by a single fixed point: the Kleene star. The main result here is cut-elimination, which requires a sophisticated computational interpretation into a higher-order language. As an application we are able to recover an (optimal) complexity bound for *-continuous action lattices.

The aim of this talk is to give a concrete showcase of modern trends in structural proof theory, and moreover to leave the audience with some motivational problems for the area at large. At a high level, I also hope to explore a moral connection, in certain settings, between the richness of a proof calculus and succinct representations arising from automaton theory, which could direct a line of future research for the area.

15:00
Proof Translations in BI logic
SPEAKER: Daniel Mery

ABSTRACT. In order to study proof translations in BI logic, we first consider the bunched sequent calculus LBI and then define a new labelled sequent calculus, called GBI. We then propose a procedure that translates LBI proofs into GBI proofs and prove its soundness. Moreover, we discuss some steps towards the reverse translation of GBI proofs into LBI proofs and propose an algorithm that is illustrated by some examples. Further work will be devoted to its proof of correctness and also to the design of translations of LBI proofs to TBI (labelled-tableaux) proofs.

14:00-15:30 Session 134C (MLP)
Location: Blavatnik LT1
14:00
TBA
14:45
Program synthesis and its connections to AGI

ABSTRACT. In this talk, I will address the questions of

  1. how we specify arbitrary tasks to a learning system
  2. how we interpret its behaviour, and finally
  3. how we verify or debug it to ensure that its behaviour is consistent with the task specification.

I will also describe my initial attempts to make progress on these questions through program synthesis and verification.

14:00-15:30 Session 134D: Automated Reasoning II (PAAR)
Location: Maths LT3
14:00
Efficient translation of sequent calculus proofs into natural deduction proofs

ABSTRACT. We present a simple and efficient translation of the classical multi-succedent sequent calculus LK to natural deduction. This translation aims to produce few excluded middle inferences, and to keep the quantifier complexity of the instances of excluded middle low. We evaluate the translation on proofs imported from resolution-based first-order theorem provers, and show that it achieves these goals in an efficient manner.

14:30
Set of Support for Higher-Order Reasoning
SPEAKER: Giles Reger

ABSTRACT. Higher-order logic (HOL) is utilised in numerous domains, from program verification to the formalisation of mathematics. However, automated reasoning in the higher-order domain lags behind first-order automation. Many higher-order automated provers translate portions of HOL problems to first-order logic (FOL) and pass them to FOL provers. However, FOL provers are not optimised for dealing with these translations. One of the reasons for this is that the axioms introduced during the translation (e.g. those defining combinators) may combine with each other during proof search, deriving consequences of the axioms irrelevant to the goal. In this work we evaluate the extent to which this issue affects proof search and introduce heuristics based on the set-of-support strategy for minimising the effects. Our experiments are carried out within the Vampire theorem prover and show that limiting how the axioms introduced during translation combine can improve proof search with higher-order problems.

15:00
Evaluating Pre-Processing Techniques for the Separated Normal Form for Temporal Logics

ABSTRACT. We consider the transformation of propositional linear time temporal logic formulae into a clause normal form, called Separated Normal Form, suitable for resolution calculi. In particular, we investigate the effect of applying various pre-processing techniques on characteristics of the normal form and determine the best combination of techniques on a large collection of benchmark formulae.
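
For orientation, the usual presentation of SNF restricts clauses to three shapes (a sketch of the standard formulation, not notation taken from the paper):

```latex
\mathbf{start} \rightarrow \bigvee_j m_j                             % initial clause
\Box\bigl(\bigwedge_i l_i \rightarrow \bigcirc \bigvee_j m_j\bigr)   % step clause
\Box\bigl(\bigwedge_i l_i \rightarrow \Diamond m\bigr)               % eventuality clause
```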

14:00-15:30 Session 134E: Invited talk II & a short paper (PRUV)
14:00
An abstraction-refinement framework for automated reasoning -- a pragmatic approach (invited talk)

ABSTRACT. Abstraction-refinement is widely used in verification but, with some exceptions, is largely overlooked in state-of-the-art automated theorem proving. One of our main motivations comes from the problem of reasoning with large theories, where usually only a small part of the theory is needed for proving the conjecture. Efficient reasoning with large theories is one of the main challenges in automated theorem proving, arising in many applications including reasoning with ontologies, large mathematical theories and verification. Current methods for reasoning with large theories are mainly based on axiom selection, which we believe is limited in scope. We propose a general approach to automated reasoning based on abstraction-refinement which aims at over- and under-approximation of theories in order to speed up reasoning. In particular, in this approach axiom selection can be seen as a special case of under-approximation. In this talk we present a general abstraction-refinement framework for automated reasoning, draw connections with current approaches and discuss some practical abstractions.

This is a joint work with Julio Cesar Lopez Hernandez.

15:10
Reasoning about exceptions in ontologies: from the lexicographic closure to the skeptical closure

ABSTRACT. Reasoning about exceptions in ontologies is nowadays one of the challenges the description logics community is facing. The paper describes a preferential approach for dealing with exceptions in Description Logics, based on the rational closure. The rational closure has the merit of providing a simple and efficient approach for reasoning with exceptions, but it does not allow independent handling of the inheritance of different defeasible properties of concepts. In this work we outline a possible solution to this problem by introducing a variant of the lexicographic closure, which we call the "skeptical closure", and which requires constructing a single base. This work is based on the extended abstract presented at CILC/ICTCS 2017.

14:00-15:30 Session 134F: Third Presentation Round (ReMOTE)
14:00
TBA
14:30
The computational neuroscience of human-robot relationships
15:00
Trust, Failure, and Self-Assessment During Human-Robot Interaction
14:00-15:00 Session 134G: Invited talk: SJ Dunn (VEMDP)
Location: Maths L6
14:00
Uncovering the Biological Programs that Govern Development

ABSTRACT. TBA

14:00-15:30 Session 134H: New Applications (VSTTE)

New Applications

Location: Maths LT2
14:00
Relational Equivalence Proofs Between Imperative and MapReduce Algorithms

ABSTRACT. Distributed programming frameworks like MapReduce, Spark and Thrill are widely used for the implementation of algorithms operating on large datasets. However, implementing in these frameworks is more demanding than coming up with sequential implementations. One way to achieve correctness of an optimized implementation is by deriving it from an existing imperative sequential algorithm description through a sequence of behavior-preserving transformations.

We present a novel approach for proving equivalence between imperative and MapReduce algorithms based on partitioning the equivalence proof into a sequence of equivalence proofs between intermediate programs with smaller differences. Our approach is based on the insight that proofs are best conducted using a combination of two kinds of steps: (1) uniform context-independent rewriting transformations; and (2) context-dependent flexible transformations that can be proved using relational reasoning with coupling invariants.

We demonstrate the feasibility of our approach by evaluating it on two prototypical algorithms commonly used as examples in MapReduce frameworks: k-means and PageRank. To carry out the proofs, we use a higher-order theorem prover with partial proof automation. The results show that our approach and its prototypical implementation enable equivalence proofs of non-trivial algorithms and could be automated to a large degree.

14:30
Practical Methods for Reasoning about Java 8's Functional Programming Features
SPEAKER: David Cok

ABSTRACT. We describe new capabilities added to the Java Modeling Language and the OpenJML deductive program verification tool to support functional programming features introduced in Java 8. We also report on the application of the extensions to a secure streaming protocol library developed by Amazon Web Services and used as a foundation by services it provides. We found that the application under study used a small set of functional programming idioms; methods using these idioms could be verified by techniques that used only first-order logic and did not need all the features that might be required for full generality of functional programming.

15:00
Verification of Binarized Neural Networks via Inter-Neuron Factoring

ABSTRACT. Binarized Neural Networks (BNN) have recently been proposed as an energy-efficient alternative to more traditional learning networks. Here we study the problem of formally verifying BNNs by reducing it to a corresponding hardware verification problem. The main step in this reduction is based on factoring computations among neurons within a hidden layer of the BNN in order to make the BNN verification problem more scalable in practice. The main contributions of this paper include results on the NP-hardness and hardness of PTAS approximability of this essential optimization and factoring step, and we design polynomial-time search heuristics for generating approximate factoring solutions. With these techniques we are able to scale the verification problem to moderately-sized BNNs for embedded devices with thousands of neurons and inputs.
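
The reduction to hardware verification is natural because a binarized neuron is essentially an XNOR-and-popcount circuit; a minimal Python sketch with invented example values:

```python
def bnn_neuron(inputs, weights, threshold):
    """Inputs and weights are +/-1; a product is +1 iff signs agree (XNOR)."""
    acc = sum(1 if i == w else -1 for i, w in zip(inputs, weights))
    return 1 if acc >= threshold else -1

print(bnn_neuron([1, -1, 1], [1, 1, 1], 1))  # -> 1 (acc = 1 >= 1)
```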

14:00-15:30 Session 134I: Complexity / Applications (WST)
14:00
Control-Flow Refinement via Partial Evaluation

ABSTRACT. In this extended abstract, we explore the use of partial evaluation as a control-flow refinement technique in the context of termination and cost analysis. Our preliminary experiments show that partial evaluation can improve both analyses.

14:30
Verification of Rewriting-based Query Optimizers

ABSTRACT. We report on our ongoing work on automated verification of rewriting-based query optimizers. Rewriting-based query optimizers are widely adopted in relational database architectures; however, designing these rewrite systems remains a challenge. In this paper, we discuss automated termination analysis of optimizers whose rewrite rules are expressed in HoTTSQL. We discuss how it is not sufficient to reason about rule-specific (local) properties such as semantic equivalence, and how it is necessary to work with set-of-rules-specific (global) properties such as termination and loop-freeness to prove correctness of the optimizer. We put forward a way to translate the rules in HoTTSQL to Term Rewriting Systems, opening avenues for the use of termination tools in the analysis of rewriting-based transformations of HoTTSQL database queries.

15:00
TTT2 with Termination Templates for Teaching
SPEAKER: Jonas Schöpf

ABSTRACT. On the one hand, checking specific termination proofs by hand, say using a particular collection of matrix interpretations, can be an arduous and error-prone task. On the other hand, automation of such checks would save time and help to establish the correctness of exam solutions, examples in lecture notes, etc. To this end, we introduce a template mechanism for the termination tool TTT2 that allows us to restrict parameters of certain termination methods. In the extreme, when all parameters are fixed, such restrictions result in checks for specific proofs.

15:00-15:30 Session 135 (VEMDP)
Location: Maths L6
15:00
Bayesian Verification of Chemical Reaction Networks
15:10
Computational Approaches for the Programmed Assembly of Nanocellulose Meshes
SPEAKER: Corina Itcus
15:30-16:00 Coffee Break
16:00-18:00 Session 136A: AVoCS Regular Papers 4 (AVOCS)
16:00
Rule-Based Synthesis of Chains of Security Functions for Software-Defined Networks

ABSTRACT. Software-defined networks (SDN) offer a high degree of programmability for handling and forwarding packets. In particular, they allow network administrators to combine different security functions, such as firewalls, intrusion detection systems, and external services, into security chains designed to prevent or mitigate attacks against end user applications. Because of their complexity, the configuration of these chains can benefit from formal techniques for their automated construction and verification. We propose in this paper a rule-based system for automating the composition and configuration of chains of security functions for Android applications. Given a characterization of the network traffic generated by an application and the set of permissions it requires, our rules construct an abstract representation of a custom security chain. This representation is then translated into a concrete implementation of the chain in Pyretic, a domain-specific language for programming SDN controllers. We prove that the chains produced by our rules satisfy a number of correctness properties such as the absence of black holes or loops, and shadowing freedom, and that they are coherent with the underlying security policy.

16:30
Proof-Oriented Design of a Separation Kernel with Minimal Trusted Computing Base
SPEAKER: Narjes Jomaa

ABSTRACT. The development of provably secure OS kernels represents a fundamental step in the creation of safe and secure systems. To this aim, we propose the notion of protokernel and an implementation --- the Pip protokernel --- as a separation kernel whose trusted computing base is reduced to its bare bones, essentially providing separation of tasks in memory, on top of which non-influence can be proved. This proof-oriented design allows us to formally prove separation properties on a concrete executable model very close to its automatically extracted C implementation. Our design is shown to be realistic as it can execute isolated instances of a real-time embedded system that has moreover been modified to isolate its own processes through the Pip services.

16:00-17:30 Session 136B: Intuitionistic and belief logics (EICNCL)
16:00
Intuitionistic multi-agent subatomic natural deduction for belief and knowledge

ABSTRACT. We outline a proof-theoretic approach to the semantics of intensional operators for intuitionistic belief and knowledge, which intends neither to capture a given target modal logic, nor to import ideas from relational model-theoretic semantics (e.g., truth at indices, accessibility between indices) into the language via labelled and relational formulae. On this approach, the meaning of these intensional operators is explained exclusively by appeal to the structure of proofs which are unaided by such means. The proof-theoretic framework chosen for this purpose is subatomic natural deduction, proposed by the author in earlier work. The resulting systems normalize and enjoy the subexpression property (a refinement of the subformula property). They can be used for the logical analysis of reasoning patterns which involve complex multi-agent belief constructions (e.g., reciprocating or universal beliefs).

16:30
Hypersequent calculus for the logic of conditional belief: preliminary results

ABSTRACT. The logic of Conditional Beliefs (CDL) has been introduced by Board, Baltag and Smets to reason about knowledge and revisable beliefs in a multi-agent setting. Our aim is to develop standard internal calculi for this logic. As a preliminary result, we propose an internal hypersequent calculus for it in the single-agent case.

17:00
A Forward Calculus for Countermodel Construction in IPL

ABSTRACT. The inverse method is a saturation-based theorem proving technique; it relies on a forward proof-search strategy and can be applied to cut-free calculi enjoying the subformula property. Here we apply this method to derive the unprovability of a goal formula G in Intuitionistic Propositional Logic. To this aim we design a forward calculus FRJ(G) for Intuitionistic unprovability which is designed to constructively ascertain the unprovability of a formula G by providing a concise countermodel for it; in particular, we prove that the generated countermodels have minimal height. Moreover, we clarify the role of the saturated database obtained as the result of a failed proof-search in FRJ(G) by showing how to extract from such a database a derivation witnessing the Intuitionistic validity of the goal.

16:00-18:00 Session 136C (MLP)
Location: Blavatnik LT1
16:00
Difflog: Beyond Deductive Methods in Program Analysis

ABSTRACT. Building effective program analysis tools is a challenging endeavor: analysis designers must balance multiple competing objectives, including scalability, fraction of false alarms, and the possibility of missed bugs. Not all of these design decisions are optimal when the analysis is applied to a new program with different coding idioms, environment assumptions, and quality requirements. Furthermore, the alarms produced are typically accompanied by limited information such as their location and abstract counter-examples. We present a framework Difflog that fundamentally extends the deductive reasoning rules that underlie program analyses with numerical weights. Each alarm is now naturally accompanied by a score, indicating quantities such as the confidence that the alarm is a real bug, the anticipated severity, or expected relevance of the alarm to the programmer. To the analysis user, these techniques offer a lens by which to focus their attention on the most important alarms and a uniform method for the tool to interactively generalize from human feedback. To the analysis designer, these techniques offer novel ways to automatically synthesize analysis rules in a data-driven style. Difflog shows large reductions in false alarm rates and missed bugs in large, complex programs, and it advances the state-of-the-art in synthesizing non-trivial analyses.

16:30
Deep Learning On Code with an Unbounded Vocabulary

ABSTRACT. A major challenge when using techniques from Natural Language Processing for supervised learning on computer program source code is that many words in code are neologisms. Reasoning over such an unbounded vocabulary is not something NLP methods are typically suited for. We introduce a deep model that contends with an unbounded vocabulary (at training or test time) by embedding new words as nodes in a graph as they are encountered and processing the graph with a Graph Neural Network.
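
As a rough sketch of the mechanism (our illustration in plain numpy; the dimensions and the mean-aggregation rule are assumptions, not the paper's architecture), new words become graph nodes the first time they are seen, and a message-passing step mixes neighbour information into their embeddings:

    # New words get fresh node embeddings on the fly; one round of
    # neighbourhood averaging (the simplest GNN layer) propagates context.
    import numpy as np

    DIM = 8
    rng = np.random.default_rng(0)
    embedding = {}        # word -> vector; grows without a fixed vocabulary

    def node(word):
        if word not in embedding:             # neologism: add a new node
            embedding[word] = rng.normal(size=DIM)
        return word

    def gnn_step(edges):
        """One mean-aggregation message-passing step over (src, dst) edges."""
        msgs = {w: [] for w in embedding}
        for src, dst in edges:
            msgs[dst].append(embedding[src])
        for w, inbox in msgs.items():
            if inbox:
                embedding[w] = 0.5 * embedding[w] + 0.5 * np.mean(inbox, axis=0)

    # e.g. a token sequence from source code, with made-up identifiers:
    tokens = [node(t) for t in ["parseHttpRequest", "buffer", "parseHttpRequest"]]
    gnn_step(list(zip(tokens, tokens[1:])))   # adjacent-token edges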

17:00
Splitting source code identifiers using Bidirectional LSTM Recurrent Neural Network

ABSTRACT. Programmers make rich use of natural language in the source code they write through identifiers and comments. Source code identifiers are selected from a pool of tokens that are strongly related to the meaning, naming conventions, and context. These tokens are often combined to produce more precise and self-explanatory designations. Such multi-part identifiers account for 97% of all naming tokens in the Public Git Archive, the largest dataset of Git repositories to date. We introduce a bidirectional LSTM recurrent neural network to detect subtokens in source code identifiers. We train this network on 41.7 million distinct splittable identifiers collected from 182,014 open source projects in the Public Git Archive, and show that it outperforms several other machine learning models. The proposed network can be used to improve upstream models that are based on source code identifiers, as well as to improve the developer experience by allowing code to be written without switching keyboard case.
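
A hedged sketch of what such a tagger can look like (an assumed architecture using the Keras API; the paper's exact network, data pipeline, and hyper-parameters are not reproduced here): each character of an identifier is labelled with whether a new subtoken starts at that position.

    import tensorflow as tf

    MAX_LEN, N_CHARS = 40, 100                     # illustrative sizes

    inputs = tf.keras.Input(shape=(MAX_LEN,), dtype="int32")
    x = tf.keras.layers.Embedding(N_CHARS, 16)(inputs)
    x = tf.keras.layers.Bidirectional(
        tf.keras.layers.LSTM(32, return_sequences=True))(x)
    outputs = tf.keras.layers.Dense(1, activation="sigmoid")(x)  # per character
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="binary_crossentropy")

    # Training pairs would look like "httpresponse" -> 100010000000, a mask
    # marking where subtokens ("http", "response") begin; thresholding the
    # per-character outputs at prediction time recovers the split points.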

17:30
Open Business Meeting for Future of Machine Learning for Programming

16:00-18:00 Session 136D: PRUV regular papers (PRUV)
16:00
A Model-Theoretic View on Preferences in Declarative Specifications of Search Problems
SPEAKER: Alireza Ensan

ABSTRACT. Automated decision making often requires solving difficult, primarily NP-hard, problems. In many AI applications (e.g., planning, robotics, recommender systems), users can assist decision making by specifying their preferences over some domain of interest. To take preferences into account, we take a model-theoretic approach to both computations and preferences. Computational problems are characterized as model expansion, that is, the logical task of expanding an input structure to satisfy a specification. The uniformity of the model-theoretic approach allows us to link preferences and computations by introducing a framework of prioritized model expansion. The main technical contribution is an analysis of the impact of preferences on the computational complexity of model expansion. We also study an application of our framework to the case where some information about preferences is missing. Finally, we discuss how prioritized model expansion relates to other preference-based declarative programming paradigms.
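
As a miniature illustration (our toy encoding, not the paper's formalism), the sketch below treats model expansion as searching for an expansion of an input graph structure that satisfies a specification, and then uses a preference to select among the expansions:

    # Expand an input graph with a set S satisfying a specification
    # (here: S is a vertex cover), then keep only the expansions that
    # are best w.r.t. a preference (here: smaller S is preferred).
    from itertools import combinations

    DOMAIN = [1, 2, 3, 4]
    EDGES = [(1, 2), (2, 3), (3, 4)]           # the input structure

    def satisfies(S):                          # the specification to expand to
        return all(u in S or v in S for u, v in EDGES)

    expansions = [set(c) for r in range(len(DOMAIN) + 1)
                  for c in combinations(DOMAIN, r) if satisfies(set(c))]
    best = min(len(s) for s in expansions)     # preference: minimise |S|
    print([s for s in expansions if len(s) == best])   # e.g. {2, 3}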

16:30
A New Probabilistic Algorithm for Approximate Model Counting
SPEAKER: Cunjing Ge

ABSTRACT. Constrained counting is important in domains ranging from artificial intelligence to software analysis. There are already a few approaches for counting models over various types of constraints. Recently, hashing-based approaches have achieved success but still rely on solution enumeration. In the IJCAR version of this work, a probabilistic approximate model counter was proposed that is also a hashing-based universal framework, but requires only satisfiability queries. A dynamic stopping criterion for the algorithm was presented, which had not yet been studied in previous work on hashing-based approaches. Although the algorithm lacks a theoretical guarantee, it works well in practice. In this paper, we further extend our approach to SMT(LIA) formulas with a new encoding technique.
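
The general shape of hashing-based counting with only satisfiability queries can be sketched as follows (illustrative only: the brute-force oracle, the toy formula, and the naive stopping rule are our simplifications, not the algorithm of the paper):

    # Add m random XOR constraints; each halves the solution space in
    # expectation, so the first m at which the formula becomes
    # unsatisfiable yields the estimate 2^m.
    import random

    N = 10                                     # number of boolean variables

    def formula(a):                            # toy constraint to be counted
        return a[0] or (a[1] and a[2])

    def sat_under_xors(xors):
        """Brute-force 'SAT oracle': is there a model satisfying all XORs?"""
        for bits in range(2 ** N):
            a = [(bits >> i) & 1 for i in range(N)]
            if formula(a) and all(sum(a[i] for i in x) % 2 == p for x, p in xors):
                return True
        return False

    random.seed(1)
    m = 0
    while True:
        xors = [(random.sample(range(N), N // 2), random.randrange(2))
                for _ in range(m + 1)]
        if not sat_under_xors(xors):
            break
        m += 1
    print("estimated count ~ 2^%d = %d" % (m, 2 ** m))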

17:00
Leveraging Probabilistic Existential Rules for Adversarial Deduplication

ABSTRACT. The entity resolution problem in traditional databases, also known as deduplication, seeks to map multiple virtual objects to their corresponding real-world entities. Though the problem is challenging, it can be tackled in a variety of ways by leveraging several simplifying assumptions, such as the fact that multiple virtual objects appear as the result of name or attribute ambiguity, clerical errors in data entry or formatting, missing or changing values, or abbreviations. However, in cyber security domains the entity resolution problem takes on a whole different form, since malicious actors that operate in certain environments, like hacker forums and markets, are highly motivated to remain semi-anonymous: though they wish to keep their true identities secret from law enforcement, they also have a reputation to keep with their customers. The above simplifying assumptions cannot be made, and we therefore coin the term "adversarial deduplication" to refer to this setting. In this paper, we propose the use of probabilistic existential rules (also known as Datalog+/-) to model knowledge engineering solutions to this problem; we show that tuple-generating dependencies can be used to generate probabilistic deduplication hypotheses, and equality-generating dependencies can later be applied to leverage existing data towards grounding such hypotheses. The main advantage with respect to existing deduplication tools is that our model operates under the open-world assumption, and is thus capable of modeling hypotheses over unknown objects, which can later become known if new data becomes available.
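
As a toy illustration of the two rule kinds (the relations, probabilities, and data below are invented for the example), a TGD proposes a same-actor hypothesis from shared behaviour, and an EGD uses hard evidence to ground it:

    # Facts about two virtual hacker handles (invented data).
    FACTS = {
        ("uses_tool", "h4cker01", "zeus-kit"),
        ("uses_tool", "darkfox", "zeus-kit"),
        ("contact", "h4cker01", "x@mail.io"),
        ("contact", "darkfox", "x@mail.io"),
    }

    # TGD: uses_tool(X,T) & uses_tool(Y,T) -> same_actor(X,Y)  [p = 0.6]
    hypotheses = {(x, y): 0.6
                  for r1, x, t1 in FACTS if r1 == "uses_tool"
                  for r2, y, t2 in FACTS if r2 == "uses_tool"
                  if t1 == t2 and x < y}

    # EGD: contact(X,E) & contact(Y,E) -> X = Y  (grounds the hypothesis)
    for r1, x, e1 in FACTS:
        for r2, y, e2 in FACTS:
            if r1 == r2 == "contact" and e1 == e2 and x < y:
                hypotheses[(x, y)] = 1.0      # confirmed by shared contact

    print(hypotheses)   # {('darkfox', 'h4cker01'): 1.0}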

17:30
VolCE: An Efficient Tool for Solving #SMT(LA) Problems
SPEAKER: Cunjing Ge

ABSTRACT. We present VolCE, a tool for computing and estimating the size of the solution space of SMT(LA) constraints. VolCE supports the SMT-LIB format. It integrates SMT solving with volume computation/estimation and integer solution counting for convex polytopes. Several effective techniques are adopted, which enable the tool to deal with high-dimensional problem instances efficiently.
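
One ingredient such tools rely on can be sketched compactly (a minimal illustration, not VolCE's actual integration of SMT solving with counting): estimating the volume of a convex polytope {x : Ax <= b} by sampling a bounding box.

    import numpy as np

    A = np.array([[1.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])   # x + y <= 1,
    b = np.array([1.0, 0.0, 0.0])                          # x >= 0, y >= 0
    lo, hi = np.array([0.0, 0.0]), np.array([1.0, 1.0])    # bounding box

    rng = np.random.default_rng(0)
    pts = rng.uniform(lo, hi, size=(100_000, 2))
    inside = np.all(pts @ A.T <= b, axis=1)    # fraction of hits in the box
    vol = inside.mean() * np.prod(hi - lo)
    print(vol)                                 # ~0.5, the area of the triangle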

16:00-18:00 Session 136F (VEMDP)
Location: Maths L6
16:00
Equivalence of chemical reaction networks in a CRN-to-DNA compiler framework
SPEAKER: Stefan Badelt
16:20
Using Transitivity and Modularity in Chemical Reaction Network Bisimulation
16:40
Conservative approximations of polymers
SPEAKER: Jerome Feret

ABSTRACT. We propose a systematic approach to approximating the behaviour of models of polymer synthesis/degradation. Our technique consists in discovering time-dependent lower and upper bounds for the concentration of some patterns of interest. These bounds are obtained by approximating the state of the system by a hyper-box, with differential equations defining the evolution of the coordinates of each hyper-face. The equation of each hyper-face is obtained by pessimistically bounding the derivative with respect to the corresponding coordinate when the system state ranges over this hyper-face.

In order to synthesise these bounds, we use Kappa to describe our models of polymers. This provides symbolic equalities and inequalities which, intensionally, may be understood as algebraic constructions over patterns, and, extensionally, as sound properties about the concentration of the bio-molecular species that contain these patterns.
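
In one dimension the hyper-box construction reduces to a pair of differential inequalities, which the following sketch integrates with Euler steps (toy dynamics with an uncertain degradation rate; the Kappa machinery of the paper is not reproduced):

    # Track lower/upper bounds for a concentration whose degradation rate k
    # is only known to lie in [K_LO, K_HI]: each face of the (here 1-D)
    # hyper-box evolves under the pessimistic choice of rate.
    K_LO, K_HI, DT = 0.1, 0.3, 0.01

    lo = hi = 1.0                    # initial concentration is known exactly
    for _ in range(1000):            # Euler steps up to t = 10
        lo += DT * (-K_HI * lo)      # lower face: fastest possible decay
        hi += DT * (-K_LO * hi)      # upper face: slowest possible decay
    print(lo, hi)                    # the true trajectory stays within [lo, hi]
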
17:00
Executable Biochemical Space for Specification and Analysis of Biochemical Systems
17:20
Formal Verification of Network-Based Biocomputation Circuits
SPEAKER: Hillel Kugler
16:00-18:00 Session 136G: Off the beaten track (VSTTE)
Location: Maths LT2
16:00
The Map Equality Domain

ABSTRACT. We present a method that allows us to infer expressive invariants for programs that manipulate arrays and, more generally, data that are modeled using maps (including the program memory, which is modeled as a map over integer locations). The invariants can express, for example, that memory cells have changed their contents only at locations that have not been previously allocated by another procedure. The motivation for the new method stems from the fact that, although state-of-the-art SMT solvers are starting to be able to check the validity of more and more complex invariants, there is not much work yet on their automatic inference. We present our method as a static analysis over an abstract domain that we introduce, the map equality domain.
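
The flavour of such a domain can be sketched as follows (an assumed design for illustration; the actual map equality domain is richer): an abstract value records that two maps agree except possibly on a finite set of keys, and the transfer functions only ever grow that set.

    class MapEq:
        """Abstract fact: m1 == m2 except (possibly) at `exceptions`."""
        def __init__(self, exceptions=frozenset()):
            self.exceptions = frozenset(exceptions)

        def store(self, key):              # abstract effect of m1[key] = v
            return MapEq(self.exceptions | {key})

        def join(self, other):             # least upper bound of two facts
            return MapEq(self.exceptions | other.exceptions)

        def equal_at(self, key):           # can we prove m1[key] == m2[key]?
            return key not in self.exceptions

    fact = MapEq().store("p").join(MapEq().store("q"))
    print(fact.equal_at("r"), fact.equal_at("p"))   # True False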

16:30
Loop Detection by Logically Constrained Term Rewriting
SPEAKER: Sarah Winkler

ABSTRACT. Logically constrained rewrite systems constitute a very general rewriting formalism that can capture simplification processes in various domains as well as computation in imperative programs. In both of these contexts, nontermination is a critical source of errors. We present new criteria, implemented in the tool Ctrl, for finding loops in logically constrained rewrite systems. We illustrate the usefulness of these criteria in three example applications: to find loops in LLVM peephole optimizations, to detect looping program executions of C programs, and to establish nontermination of integer transition systems.
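
For plain (unconstrained) string rewriting, the underlying idea of loop detection can be sketched in a few lines: search for a term that reduces, in one or more steps, to a term containing it again. The paper's criteria generalise this to terms carrying logical constraints.

    RULES = [("b", "ab")]          # b -> ab  loops: b -> ab -> aab -> ...

    def successors(s):
        """All one-step rewrites of the string s."""
        for lhs, rhs in RULES:
            i = s.find(lhs)
            while i != -1:
                yield s[:i] + rhs + s[i + len(lhs):]
                i = s.find(lhs, i + 1)

    def has_loop(start, depth=5):
        """Does some reduct of `start` contain `start` again?"""
        frontier = [start]
        for _ in range(depth):
            frontier = [t for s in frontier for t in successors(s)]
            if any(start in t for t in frontier):
                return True
        return False

    print(has_loop("b"))           # True: b -> ab, and ab contains b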

17:00
Store Buffer Reduction in the Presence of Mixed-Size Accesses and Misalignment

ABSTRACT. Naive programmers believe that a multi-threaded execution of their program is some simple interleaving of steps of individual threads. To increase performance, modern Intel and AMD processors make use of store buffers, which cause unexpected behaviors that cannot be explained by the simple interleaving model.

Programs that, in the simple interleaving model, obey one of various programming disciplines do not suffer from these unexpected behaviors in the presence of store buffers. These disciplines require that the program not make use of several concrete features of modern processors, such as mixed-size/misaligned memory accesses and inter-processor interrupts. A common assumption is that this requirement is posed only to make the formal description and soundness proof of these disciplines tractable, and that the disciplines can be extended to programs that make use of these features with a lot of elbow grease and straightforward refinements of the programming discipline.

In this paper we discuss several such features for which that assumption is correct, and two for which it is not, namely mixed-size/misaligned accesses and inter-processor interrupts. We base our discussion on two programming disciplines from the literature. We present non-trivial extensions of the more efficient of the two disciplines that work for programs that use these features.

Our work is based directly on the author's 500-page PhD thesis, which includes a formal treatment of the extensions and a detailed soundness proof.
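
The classic store-buffering litmus test makes the gap between the two models concrete. The sketch below (our illustration, not taken from the thesis) exhaustively explores a TSO-like machine with per-thread FIFO store buffers; it finds the outcome r0 = r1 = 0, which no simple interleaving can produce:

    PROGS = ((("st", "x"), ("ld", "y")),   # thread 0:  x := 1;  r0 := y
             (("st", "y"), ("ld", "x")))   # thread 1:  y := 1;  r1 := x

    def runs(pcs, bufs, mem, regs, out):
        """Depth-first exploration of all executions of the TSO-like machine."""
        moved = False
        for t in (0, 1):
            if bufs[t]:                              # flush oldest buffered store
                nbufs = list(bufs)
                nbufs[t] = bufs[t][1:]
                runs(pcs, tuple(nbufs), {**mem, bufs[t][0]: 1}, regs, out)
                moved = True
            if pcs[t] < len(PROGS[t]):               # execute next instruction
                op, var = PROGS[t][pcs[t]]
                npcs = list(pcs)
                npcs[t] += 1
                if op == "st":                       # store goes into own buffer
                    nbufs = list(bufs)
                    nbufs[t] = bufs[t] + (var,)
                    runs(tuple(npcs), tuple(nbufs), mem, regs, out)
                else:                                # load: own buffer, else memory
                    nregs = list(regs)
                    nregs[t] = 1 if var in bufs[t] else mem[var]
                    runs(tuple(npcs), bufs, mem, tuple(nregs), out)
                moved = True
        if not moved:                                # program done, buffers drained
            out.add(regs)

    outcomes = set()
    runs((0, 0), ((), ()), {"x": 0, "y": 0}, (None, None), outcomes)
    print(sorted(outcomes))   # contains (0, 0), impossible under pure interleaving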

16:00-17:00 Session 136H: Higher-Order (WST)
16:00
Termination of Lambda-Pi modulo rewriting using the size-change principle

ABSTRACT. The Size-Change Principle was first introduced to study the termination of first-order functional programs. In this work, we show that it can also be used to study the termination of higher-order rewriting in a system of dependent types extending LF.
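
For first-order programs, the principle itself fits in a few lines (a sketch of the classic Lee-Jones-Ben-Amram formulation, not of the Lambda-Pi extension): take the composition closure of the size-change graphs and check every idempotent graph for a strictly decreasing self-edge.

    def compose(g, h):
        """Compose size-change graphs: edges are (src, dst, strict) triples."""
        return frozenset((i, l, s1 or s2)
                         for i, j, s1 in g for k, l, s2 in h if j == k)

    def sct_terminates(call_graphs):
        closure = set(call_graphs)               # close under composition
        while True:
            new = {compose(g, h) for g in closure for h in closure} - closure
            if not new:
                break
            closure |= new
        # every idempotent graph needs a strictly decreasing self-edge
        return all(any(i == j and strict for i, j, strict in g)
                   for g in closure if compose(g, g) == g)

    # f(x, y) -> f(y, x - 1): callee argument 0 receives y (non-strict edge
    # 1 -> 0), callee argument 1 receives x - 1 (strict edge 0 -> 1)
    g = frozenset({(1, 0, False), (0, 1, True)})
    print(sct_terminates({g}))                   # True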

16:30
Improving Static Dependency Pairs for Higher-Order Rewriting
SPEAKER: Carsten Fuhs

ABSTRACT. We revisit the static dependency pair method for termination of higher-order term rewriting. In this extended abstract, we generalise the applicability criteria for the method. Moreover, we propose a static dependency pair framework based on an extended notion of computable dependency chains that harnesses the computability-based reasoning used in the soundness proof of static dependency pairs. This allows us to propose a new termination proving technique to use in combination with static dependency pairs: the static subterm criterion.
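
The first-order core of dependency pair extraction can be sketched as follows (our simplification for illustration; the static, higher-order version in the paper handles abstraction and applied variables, which this sketch does not):

    def subterms(t):
        yield t
        if isinstance(t, tuple):
            for arg in t[1:]:
                yield from subterms(arg)

    def dependency_pairs(rules):
        """Terms are nested tuples ("f", arg, ...); variables are strings.
        Pair each marked left-hand side with every right-hand-side subterm
        rooted in a defined symbol."""
        defined = {lhs[0] for lhs, _ in rules}
        return [((lhs[0] + "#",) + lhs[1:], (s[0] + "#",) + s[1:])
                for lhs, rhs in rules
                for s in subterms(rhs)
                if isinstance(s, tuple) and s[0] in defined]

    # minus(s(x), s(y)) -> minus(x, y)
    rules = [(("minus", ("s", "x"), ("s", "y")), ("minus", "x", "y"))]
    print(dependency_pairs(rules))
    # [(('minus#', ('s', 'x'), ('s', 'y')), ('minus#', 'x', 'y'))]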