FLOC 2018: FEDERATED LOGIC CONFERENCE 2018
FM: PAPERS WITH ABSTRACTS

Editors: Jan Peleska, Klaus Havelund and Bill Roscoe

Authors, Title and Abstract | Paper | Talk

ABSTRACT. In this paper we introduce a new method for detecting deadlock in the coroutine mode of method execution of a single actor. The underlying actor-based language features asynchronous method calls and supports coroutines, which allow for the cooperative scheduling of the tasks belonging to an actor. We model the local behavior of an actor as a well-structured transition system by means of predicate abstraction and derive the decidability of the occurrence of deadlocks caused by the coroutine mode of method execution.

Jul 15 09:00

ABSTRACT. This paper concerns the analysis of information leaks in security systems. We address the problem of specifying and analyzing large systems in the (standard) channel model used in quantitative information flow (QIF). We propose several operators which match typical interactions between system components. We explore their algebraic properties with respect to the security-preserving refinement relation defined by Alvim et al. and McIver et al. We show how the algebra can be used to simplify large system specifications in order to facilitate the computation of information leakage bounds. We demonstrate our results on the specification and analysis of the Crowds Protocol. Finally, we use the algebra to justify a new algorithm to compute leakage bounds for this protocol.
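In the channel model, a system is just a stochastic matrix C whose entry C_{x,y} gives the probability of observing output y when the secret is x. One standard way of presenting the security-preserving refinement of Alvim et al. and McIver et al. is factorization through post-processing; the sketch below states that textbook formulation (the direction of the ordering symbol varies across the literature, so take the notation as assumed rather than quoted from the paper):

```latex
% C2 refines C1 iff C2 can be obtained from C1 by post-processing the
% observations with some stochastic matrix R ("cascading"):
C_1 \sqsubseteq C_2
  \quad\iff\quad
  \exists R\ \text{stochastic}.\;\; C_2 = C_1 R,
\qquad
(C_1 R)_{x,z} \;=\; \sum_{y} (C_1)_{x,y}\, R_{y,z}.
```

By the data-processing inequality, a channel obtained by post-processing never leaks more than the original, which is what makes such a relation security-preserving and algebraic simplification sound.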

Jul 15 09:30

ABSTRACT. Modeling and verifying real-world cyber-physical systems is challenging, especially for complex systems where manual modeling is infeasible. In this work, we report our experience of combining model learning and abstraction refinement to analyze a challenging system, i.e., a real-world Secure Water Treatment (SWaT) system. Given a set of safety requirements, the objective is to either show that the system is safe with a high probability (so that a system shutdown is rarely triggered due to a safety violation) or to show that it is not. As the system is too complicated to be modelled manually, we apply the latest automatic model learning techniques to construct a set of Markov chains through abstraction and refinement, based on two long system execution logs (one for training and the other for testing). For each probabilistic property, we either report that it does not hold with a certain level of probabilistic confidence, or report that it holds by showing the evidence in the form of an abstract Markov chain. The Markov chains can subsequently be implemented as runtime monitors in SWaT. To the best of our knowledge, this is the first case study of applying model learning techniques to a real-world system.

Jul 15 10:00

ABSTRACT. Inferring a minimal finite state machine (FSM) from a given set of traces is a fundamental problem in computer science. Although the problem is known to be NP-complete, it can be solved efficiently with SAT solvers when the given set of traces is relatively small. On the other hand, to infer an FSM equivalent to a machine which generates traces, the set of traces should be sufficiently representative and hence large. However, the existing SAT-based inference techniques do not scale well when the length and number of traces increase. In this paper, we propose a novel approach which processes long traces incrementally. The experimental results indicate that it scales sufficiently well and that the time it takes grows slowly with the size of the traces.
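To make the flavour of such encodings concrete, the sketch below fits a k-state DFA to labelled traces with the Z3 solver; the prefix-based constraint system is a classic one, and both Z3 and the trace format are our assumptions for illustration rather than the paper's actual incremental encoding:

```python
# Minimal sketch of SAT/SMT-based FSM inference: find a k-state DFA
# consistent with positively and negatively labelled traces. Z3 and this
# particular prefix-based encoding are illustrative assumptions; the
# paper's incremental encoding differs.
from z3 import And, Bool, Implies, Int, Not, Or, Solver, sat

def infer_dfa(pos, neg, k):
    # Collect every prefix occurring in the traces; each gets a state var.
    prefixes = {()}
    for trace in pos + neg:
        for i in range(1, len(trace) + 1):
            prefixes.add(tuple(trace[:i]))
    prefixes = sorted(prefixes)
    state = {p: Int(f"st{i}") for i, p in enumerate(prefixes)}
    accept = [Bool(f"acc{q}") for q in range(k)]

    s = Solver()
    for v in state.values():
        s.add(0 <= v, v < k)
    s.add(state[()] == 0)        # pin the initial state (symmetry breaking)

    # Determinism: prefixes reaching the same state must agree on the
    # successor state whenever they are extended by the same letter.
    for p in prefixes:
        for q in prefixes:
            if p and q and p < q and p[-1] == q[-1]:
                s.add(Implies(state[p[:-1]] == state[q[:-1]],
                              state[p] == state[q]))

    # Labels: positive traces end in accepting states, negative ones do not.
    for t in pos:
        s.add(Or([And(state[tuple(t)] == q, accept[q]) for q in range(k)]))
    for t in neg:
        s.add(Or([And(state[tuple(t)] == q, Not(accept[q])) for q in range(k)]))

    if s.check() == sat:
        m = s.model()
        return {p: m[v].as_long() for p, v in state.items()}
    return None                  # no k-state DFA fits; retry with a larger k

# Toy run: "ab" is accepted, "a" is rejected; a 2-state DFA suffices.
print(infer_dfa(pos=[("a", "b")], neg=[("a",)], k=2))
```

The incremental approach of the paper addresses exactly the weakness visible here: the number of prefix variables and determinism constraints grows quickly with the length and number of traces.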

Jul 15 12:00

ABSTRACT. In spite of the theoretical and algorithmic developments for system synthesis in recent years, little effort has been dedicated to quantifying the quality of the specifications used for synthesis. When dealing with unrealizable specifications, finding the weakest environment assumptions that would ensure realizability is typically a desirable property; in such a context the weakness of the assumptions is a major quality parameter. The question of whether one assumption is weaker than another is commonly interpreted using implication or, equivalently, language inclusion. However, this interpretation does not provide any further insight into the weakness of assumptions when implication does not hold. To our knowledge, the only measure that is capable of comparing two formulae in this case is entropy, but even it fails to provide a sufficiently refined notion of weakness in the case of GR(1) formulae, a subset of linear temporal logic formulae which is of particular interest in controller synthesis. In this paper we propose a more refined measure of weakness based on the Hausdorff dimension, a concept that captures the notion of size of the omega-language satisfying a linear temporal logic formula. We identify the conditions under which this measure is guaranteed to distinguish between weaker and stronger GR(1) formulae. We demonstrate the usefulness of the weakness measure through applications to computing GR(1) assumption refinements.
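For intuition, the Hausdorff dimension of an ω-language L ⊆ Σ^ω is closely tied to its entropy, the growth rate of its sets of finite prefixes; the formulas below are the textbook definitions (the paper's development for GR(1) is more refined):

```latex
% Entropy of L: growth rate of the number of length-n prefixes of words in L;
% the Hausdorff dimension of L (as a subset of Cantor space) is bounded by it.
H(L) \;=\; \limsup_{n \to \infty} \frac{\log_{|\Sigma|} \big|\, \mathrm{pref}_n(L) \,\big|}{n},
\qquad
\dim_{\mathcal{H}}(L) \;\le\; H(L).
```

A weaker assumption admits more satisfying behaviours, hence larger prefix sets and a dimension closer to 1, which is what lets the dimension discriminate between formulae that implication cannot order.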

Jul 15 15:00

ABSTRACT. One of the claimed advantages of model checking is its capability to provide a counter-example explaining why a property is violated by a given system. Nevertheless, branching logics such as Computation Tree Logic and its extensions have complex branching counter-examples, and standard model checkers such as NuSMV do not produce complete counter-examples and are limited to single executions. Many branching logics can be translated into the mu-calculus. To solve this problem of producing complete and complex counter-examples for branching logics, we propose a mu-calculus-based framework with rich explanations. It integrates a mu-calculus model checker that produces complete explanations, and several functionalities to translate them back to the original logic. In addition to the framework itself, we describe its implementation in Python and illustrate its applicability with Alternating Temporal Logic.

Jul 15 16:00

ABSTRACT. Imprecision in timing can sometimes be beneficial. Metric interval temporal logic (MITL), by disabling the expression of punctuality constraints, was shown to translate to timed automata, yielding an elementary decision procedure. We show how this principle extends to other forms of dense-time specification using regular expressions. By providing a clean, automaton-based formal framework for non-punctual languages, we are able to recover and extend several results in timed systems. Metric interval regular expressions (MIRE) are introduced, providing regular expressions with non-singular duration constraints. We obtain that MIRE are expressively complete relative to a class of one-clock timed automata, which can be determinized using additional clocks. Metric interval dynamic logic (MIDL) is then defined using MIRE as temporal modalities. We show that MIDL generalizes known extensions of MITL, while translating to timed automata at comparable cost.
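As a flavour of the notation (an illustrative example with syntax assumed from the abstract, not quoted from the paper), a metric interval regular expression attaches a non-singular duration interval to an ordinary regular expression:

```latex
% Behaviours matching p followed by q, with total duration in [1,2]:
\varphi \;=\; \langle\, p \cdot q \,\rangle_{[1,2]}
% The punctual variant with interval [1,1] is disallowed; banning such
% singular constraints is what keeps the translation to timed automata cheap.
```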

Jul 15 16:30

ABSTRACT. Design and verification of modern systems requires diverse models, which often come from a variety of disciplines, and it is challenging to manage their heterogeneity -- especially in the case of cyber-physical systems. To check consistency between models, recent approaches map these models to flexible static abstractions, such as architectural views. This model integration approach, however, comes at the cost of reduced expressiveness because complex behaviors of the models are abstracted away. As a result, it may be impossible to automatically verify important behavioral properties across multiple models, leaving systems vulnerable to subtle bugs. This paper introduces the Integration Property Language (IPL) that improves integration expressiveness using modular verification of properties that depend on detailed behavioral semantics while retaining the ability for static system-wide reasoning. We prove that the verification algorithm is sound and analyze its termination conditions. Furthermore, we perform a case study on a mobile robot to demonstrate that IPL is practically useful and to evaluate its performance.

Jul 15 17:00

ABSTRACT. We present an epistemic logic equipped with time-stamps in atoms and epistemic operators, which enables reasoning about the moments at which events happen and knowledge is acquired or deduced. Our logic includes both an epistemic operator K and a belief operator B, to capture the disclosure of inaccurate information. Our main motivation is to describe rich privacy policies in online social networks (OSNs). Most of today's privacy policy mechanisms in existing OSNs allow only static policies. In our logic, it is possible to express rich dynamic policies in terms of the knowledge available to the different users and the precise time of actions and deductions. Our framework can be instantiated for different OSNs, by specifying the effect of the actions in the evolution of the social network and in the knowledge disclosed to each user. We present an algorithm for deducing knowledge and propagating beliefs, which can also be instantiated with different variants of how the epistemic information is preserved through time. Policies are modelled as formulae in the logic, which are interpreted over timed traces. Finally, we show that model-checking is decidable.
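As an illustration (our own formula in a notation modelled on the abstract, not one taken from the paper), a dynamic policy such as "at any time, only users who are then friends of the owner may know the content of post o" combines time-stamped knowledge with time-stamped predicates:

```latex
% K_a^t phi reads "agent a knows phi at time t".
\forall t.\ \forall a.\;\;
  K_a^{t}\,\mathit{content}(o) \;\rightarrow\; \mathit{friend}^{t}\big(a, \mathit{owner}(o)\big).
```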

Jul 15 17:30

ABSTRACT. In this paper, we propose a novel framework for the synthesis of robust and optimal energy-aware controllers. The framework is based on energy timed automata, allowing for easy expression of timing-constraints and variable energy-rates. We prove decidability of the energy-constrained infinite-run problem in settings with both certainty and uncertainty of the energy-rates. We also consider the optimization problem of identifying the minimal upper bound that will permit existence of energy-constrained infinite runs. Our algorithms are based on quantifier elimination for linear real arithmetic. Using Mathematica and Mjollnir, we illustrate our framework through a real industrial example of a hydraulic oil pump. Compared with previous approaches our method is completely automated and provides improved results.
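The algorithmic backbone is quantifier elimination for linear real arithmetic. As a minimal reminder of the mechanism (a textbook Fourier-Motzkin step on a toy energy constraint, not one of the paper's actual formulas), eliminating the delay d below leaves an equivalent quantifier-free condition on the energy level e and the upper bound U, for a fixed rate r > 0:

```latex
\exists d.\ \big( d \ge 0 \;\wedge\; 0 \le e + r\,d \;\wedge\; e + r\,d \le U \big)
\;\;\equiv\;\;
U \ge 0 \;\wedge\; e \le U.
```

Tools such as Mjollnir automate this kind of elimination on the much larger formulas arising from the energy timed automaton.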

Jul 16 10:00

ABSTRACT. Concurrent program refinement algebra provides a suitable basis for supporting mechanised reasoning about shared-memory concurrent programs in a compositional manner; for example, it supports the rely/guarantee approach of Jones. The algebra makes use of a synchronous parallel operator motivated by Aczel’s trace model of concurrency and with similarities to Milner’s SCCS. This paper looks at defining a form of fairness within the program algebra. The encoding allows one to reason about the fair execution of a single process in isolation as well as to define fair parallel in terms of the base parallel operator, of which no fairness properties are assumed. An algebraic theory to support fairness and fair parallel is developed.

Jul 16 11:00

ABSTRACT. Modern processors deploy a variety of weak memory models, which for efficiency reasons may (appear to) execute instructions in an order different to that specified by the program text. The consequences of instruction reordering can be complex and subtle, and can impact on ensuring correctness. Previous work on the semantics of weak memory models has focussed on the behaviour of assembler-level programs. In this paper we utilise that work to extract some general principles underlying instruction reordering, and apply those principles to a wide-spectrum language encompassing abstract data types as well as low-level assembler code. The goal is to support reasoning about implementations of data structures for modern processors with respect to an abstract specification.

Specifically, we define an operational semantics, from which we derive some properties of program refinement, and encode the semantics in the rewriting engine Maude as a model checking tool. The tool is used to validate the semantics against the behaviour of a set of litmus tests (small assembler programs) reported on hardware, and also to model check implementations of data structures from the literature against their abstract specifications.

Jul 16 11:30

ABSTRACT. A multitude of weak memory models exists supporting various types of relaxations and different synchronization primitives. On the one hand, they must be lax enough to allow for hardware and compiler optimizations; on the other, the more lax the model, the harder it is to reason about programs. Though the right balance is up for debate, a memory model should provide what is known as the SC-DRF guarantee: data-race-free programs behave in a sequentially consistent manner. Memory models are often defined axiomatically, where the notion of a program is abstracted away (often as a graph with memory events as nodes) and orderings of memory events are defined via constraints. In this work we present a weak memory model for a calculus inspired by Go. Different from previous approaches, we thus focus on a memory model with buffered channel communication as the sole synchronization primitive. We formalize the memory model via an operational semantics and prove the SC-DRF guarantee using a standard simulation technique. The proof highlights meaningful invariants and gives insight into the workings of the memory model. Finally, we provide a concrete implementation in K, a rewrite-based executable semantic framework, which allows us to derive an interpreter for the proposed language.

Jul 16 12:00

ABSTRACT. This paper considers the challenge of designing and verifying control protocols for geographically distributed railway interlocking systems. It describes for a real-world case study how this can be tackled by stepwise development and model checking of state transition models in an extension of the RAISE Specification Language (RSL). This method also allows different variants of the control protocols to be explored.

Jul 16 11:00

ABSTRACT. Reliability is the most important design issue for current autonomous vehicles. Guaranteeing reliability while reducing hardware cost is key to the design of such complex control systems, which are intertwined with scenario-related multi-period timing behaviors. The paper presents a reliability- and resource-aware design framework for embedded implementation of such autonomous applications, where each scenario may have its own timing constraints. The constraints are formalized with the consideration of different redundancy-based fault-tolerant techniques and software-to-hardware allocation choices, which capture the static and various causality relations of such systems. Both exact and heuristic-based methods have been implemented to derive the lower bound of hardware usage, in terms of processors, for a given reliability requirement. The case study on a realistic autonomous vehicle controller demonstrates the effectiveness and feasibility of the framework.
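The trade-off being optimized can be seen in the standard replication formula (a generic textbook relation; the paper's constraints additionally capture allocation choices and multi-period timing): with k replicas of a task, each of reliability r and failing independently,

```latex
R_{\mathrm{sys}}(k) \;=\; 1 - (1 - r)^{k},
\qquad
k_{\min} \;=\; \min\{\, k \;:\; R_{\mathrm{sys}}(k) \ge R_{\mathrm{goal}} \,\}.
```

For instance, r = 0.99 and R_goal = 0.99999 force k = 3; a lower bound on processors then follows from packing such replicated tasks subject to the timing constraints.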

Jul 16 11:30

ABSTRACT. This paper presents our experience with formal verification of C code that is automatically generated from Simulink open-loop controller models. We apply the state-of-the-art commercial model checker BTC EmbeddedPlatform to two industrial case studies: Driveline State Request and E-Clutch Control. These case studies contain various features (decision logic, floating-point arithmetic, rate limiters and state-flow systems) implemented in discrete-time logic. The diverse features and the extensive use of floating-point variables make formal code verification more challenging. The paper reports our findings and identifies shortcomings and strengths of formal verification when adopted in an automotive setting. We also provide recommendations to tool developers and requirement engineers so as to integrate formal code verification into automotive mass-production development.

Jul 16 12:00

ABSTRACT. QFLan offers modeling and analysis of highly reconfigurable systems, like product lines, which are characterized by combinatorially many system variants (or products) that can be obtained via different combinations of installed features. The tool offers a modern integrated development environment for the probabilistic feature-oriented language of the same name. QFLan allows the specification of a family of products in terms of a feature model with quantitative attributes, which defines the valid feature combinations, and probabilistic behavior subject to quantitative constraints. The language's behavioral part enables dynamic installation, removal and replacement of features. QFLan has a discrete-time Markov chain semantics, permitting quantitative analyses. Thanks to a seamless integration with the statistical model checker MultiVeStA, it allows for analyses like the likelihood of specific behavior or the expected average value of non-functional aspects related to feature attributes.

Jul 16 16:00

ABSTRACT. Modern systems have grown in complexity, and the attack surface has increased accordingly. Even though system components are generally carefully designed, and even verified, by different groups of people, the composition of these components is often regarded with less attention. This paves the way for “architectural attacks,” a class of security vulnerabilities where the attacker is able to threaten the security of the system even if each of its components continues to act according to its specification. In this article, we introduce FreeSpec, a Coq framework built upon the key idea that components can be modelled as programs with algebraic effects to be realised by other components. FreeSpec allows for the modular modelling of a complex system, by defining idealised components connected together, and the modular verification of the properties of their composition. In doing so, we propose a novel approach for the Coq proof assistant to reason about programs with effects in a modular way.

Jul 16 16:30

ABSTRACT. Recent renewed interest in optimization and analysis of floating-point programs has led to a diverse array of new tools for numerical programs. These tools are often complementary, each focusing on a distinct aspect of numerical programming. Building reliable floating-point applications typically requires addressing several of these aspects, which makes easy composition essential. This paper describes the composition of two recent floating-point tools: Herbie, which performs accuracy optimization, and Daisy, which performs accuracy verification. We find that the combination provides numerous benefits to users, such as being able to use Daisy to check whether Herbie's unsound optimizations improved the worst-case roundoff error, as well as benefits to tool authors, including uncovering a number of bugs in both tools. The combination also allowed us to compare the different program rewriting techniques implemented by these tools for the first time. The paper lays out a road map for combining other floating-point tools and for surmounting common challenges.
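The kind of rewriting involved is easy to demonstrate: the classic example (our own demonstration, not taken from the paper and not using the tools' interfaces) is sqrt(x+1) - sqrt(x), which cancels catastrophically for large x and which Herbie-style rewriting replaces by the algebraically equal, numerically stable 1/(sqrt(x+1) + sqrt(x)):

```python
# The classic accuracy rewrite that tools like Herbie find automatically:
# sqrt(x+1) - sqrt(x) cancels catastrophically for large x, while the
# algebraically equal form 1/(sqrt(x+1) + sqrt(x)) stays accurate.
# Generic demonstration only; not the interfaces of Herbie or Daisy.
from decimal import Decimal, getcontext
from math import sqrt

def naive(x):
    return sqrt(x + 1) - sqrt(x)

def rewritten(x):
    return 1.0 / (sqrt(x + 1) + sqrt(x))

getcontext().prec = 50                   # high-precision reference value
x = 1e12
ref = (Decimal(x) + 1).sqrt() - Decimal(x).sqrt()
for f in (naive, rewritten):
    rel = abs((Decimal(f(x)) - ref) / ref)
    print(f"{f.__name__:9s} {f(x):.17e}  rel. error ~ {float(rel):.1e}")
```

A Daisy-style worst-case roundoff analysis of both forms would then confirm that the rewriting actually improved the error bound, which is precisely the check the paper automates.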

Jul 16 17:00

ABSTRACT. The Automatic Dependent Surveillance-Broadcast (ADS-B) system allows aircraft to communicate their current state, including position and velocity information, to other aircraft in their vicinity and to ground stations. The Compact Position Reporting (CPR) algorithm is the ADS-B module responsible for the encoding and decoding of aircraft positions. CPR is highly sensitive to computer arithmetic since it heavily relies on functions that are intrinsically unstable such as floor and modulo. In this paper, a formally-verified double-precision floating-point implementation of the CPR algorithm is presented. The verification proceeds in three steps. First, an alternative version of CPR, which reduces the floating-point rounding error is proposed. Then, the Prototype Verification System (PVS) is used to formally prove that the ideal real-number counterpart of the improved algorithm is mathematically equivalent to the standard CPR definition. Finally, the static analyzer Frama-C is used to verify that the double-precision implementation of the improved algorithm is correct with respect to its operational requirement. The alternative algorithm is currently being considered for inclusion in the revised version of the ADS-B standards document as the reference implementation of the CPR algorithm.
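The instability in question is easy to exhibit (a generic demonstration of floor/modulo sensitivity, not the CPR code itself): a rounding error of one unit in the last place can move floor and mod results by a whole unit, which in CPR terms would mean decoding to the wrong position bin.

```python
# floor and modulo are discontinuous: a tiny rounding error in the
# argument can shift the result by a whole unit. Generic demonstration
# of the instability CPR must guard against; not the CPR code itself.
from math import floor

x = (0.1 + 0.7) * 10      # exactly 8 in real arithmetic
print(x)                  # 7.999999999999999 in binary64
print(floor(x))           # 7 instead of 8: the "bin index" is off by one
print(x % 8)              # 7.999999999999999 instead of 0.0
```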

Jul 16 17:30

ABSTRACT. The automotive industry makes increasing use of Simulink-based software development. Typically, automotive Simulink designs are analyzed using non-formal test methods, which do not guarantee the absence of errors. In contrast, formal verification techniques aim to provide formal guarantees that the analyzed designs fulfill their requirements for all possible inputs and parameters, or counterexamples when they do not. Therefore, the safety standard ISO 26262 recommends the usage of formal methods in the software development process for safety-critical systems.

In this paper, we report on the application of formal verification to check discrete-time properties of a previously tested Simulink model for a park assistant feature using the commercial Simulink Design Verifier tool. During our evaluation, we experienced a gap between the offered functionalities and typical industrial needs, which hindered the successful application of this tool in the context of model-based development. We discuss these issues and propose solutions related to system development, requirements specification and verification tools, in order to prepare the ground for the effective integration of computer-assisted formal verification in automotive Simulink-based development.

Jul 16 16:00

ABSTRACT. Robot applications are increasingly based on teams of robots that collaborate to perform a desired mission. Such applications call for decentralized techniques that allow for tractable automated planning. Another aspect that current robot applications must consider is partial knowledge about the environment in which the robots are operating and the uncertainty associated with the outcome of the robots' actions.

Current planning techniques used for teams of robots that perform complex missions do not systematically address these challenges: they are either based on centralized solutions and hence not scalable, they consider rather simple missions, such as A-to-B travel, or they do not work in partially known environments. We present a planning solution that decomposes the team of robots into subclasses, considers high-level missions given in temporal logic, and at the same time works when only partial knowledge of the environment is available. We prove the correctness of the solution and evaluate its effectiveness on a set of realistic examples.

Jul 16 16:30

ABSTRACT. Vector Lyapunov functions are a multi-dimensional extension of the more familiar (scalar) Lyapunov functions, commonly used to prove stability properties in systems described by non-linear ordinary differential equations (ODEs). This paper explores an analogous vector extension for so-called barrier certificates used in safety verification. As with vector Lyapunov functions, the approach hinges on constructing appropriate comparison systems, i.e., related differential equation systems from which properties of the original system may be inferred. The paper presents an accessible development of the approach, demonstrates that most previous notions of barrier certificate are special cases of comparison systems, and discusses the potential applications of vector barrier certificates in safety verification and invariant synthesis.
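For readers unfamiliar with the scalar notion, one common formulation (stated here in textbook form; the paper's definitions are more general) is: for a system ẋ = f(x) with initial set X₀ and unsafe set Xᵤ, a barrier certificate B separates the two sets and is non-increasing along the flow. The vector version makes B map into ℝ^m and controls its derivative by a comparison system:

```latex
% Scalar barrier certificate for x' = f(x):
B(x) \le 0 \ \ \forall x \in X_0, \qquad
B(x) > 0 \ \ \forall x \in X_u, \qquad
\nabla B(x) \cdot f(x) \le 0 \ \ \forall x.
% Vector version: B : R^n -> R^m, with a comparison system y' = g(y)
% (e.g. a linear y' = A y with A essentially non-negative) such that
\dot{B}(x) \;\le\; g\big(B(x)\big) \quad \text{component-wise}.
```

Safety follows because the comparison system keeps the non-positive orthant invariant, so no trajectory starting in X₀ can reach Xᵤ.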

Jul 16 17:00

ABSTRACT. Vacuity is a leading sanity check in model-checking, applied when the system is found to satisfy the specification. The check detects situations where the specification passes in a trivial way, say when a specification that requires every request to be followed by a grant is satisfied in a system with no requests. Such situations typically reveal serious problems in the modelling of the system or the specification, and indeed vacuity detection is a part of most industrial model-checking tools. Existing research and tools for vacuity concern discrete-time systems and specification formalisms. We introduce real-time vacuity, which aims to detect problems with real-time modelling. Real-time logics are used for the specification and verification of systems with a continuous-time behavior. We study vacuity for the branching real-time logic TCTL, and focus on vacuity with respect to the time constraints in the specification. Specifically, the logic TCTL includes a single temporal operator U^J, which specifies real-time eventualities in real-time systems: the parameter J ⊆ ℝ≥0 is an interval with integral boundaries that bounds the time in which the eventuality should hold. We define several tightenings for the U^J operator. These tightenings require the eventuality to hold within a strict subset of J. We prove that vacuity detection for TCTL is PSPACE-complete, thus it does not increase the complexity of model-checking of TCTL. Our contribution involves an extension, termed TCTL+, of TCTL, which allows the interval J not to be continuous, and for which model-checking stays in PSPACE. Finally, we discuss ways to rank vacuity results by their significance, and extend the framework of ranking vacuity to TCTL.
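As an illustration of a tightening (our example; the paper defines a whole family of them), take a time-bounded response property with J = [0,5]. Since AF^J ψ abbreviates A(true U^J ψ), shrinking J strengthens the formula:

```latex
\varphi \;=\; \mathrm{AG}\big(\mathit{req} \rightarrow \mathrm{AF}^{[0,5]}\,\mathit{grant}\big)
\qquad\leadsto\qquad
\varphi' \;=\; \mathrm{AG}\big(\mathit{req} \rightarrow \mathrm{AF}^{[0,4]}\,\mathit{grant}\big).
```

If the system also satisfies φ', the last time unit of the bound is never exercised, and φ holds vacuously with respect to its time constraint.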

Jul 16 17:30

ABSTRACT. With the rapid development of software and distributed computing, Cyber-Physical Systems (CPS) are widely adopted in many application areas, e.g., smart grids and autonomous automobiles. It is difficult to detect defects in CPS models due to the complexities involved in the software and physical systems. To find defects in CPS models efficiently, robustness-guided falsification of CPS is introduced. Existing methods use several optimization techniques to generate counterexamples which falsify the given properties of a CPS. However, those methods may require a large number of simulation runs to find a counterexample, which is far from practical. In this work, we explore state-of-the-art Deep Reinforcement Learning (DRL) techniques to reduce the number of simulation runs required to find such counterexamples. We report our method and the preliminary evaluation results.
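A minimal sketch of robustness-guided falsification is shown below; plain random search stands in for the paper's DRL agent, and the plant and property are toy placeholders rather than a real CPS benchmark:

```python
# Robustness-guided falsification, minimal sketch: search for an input
# signal that drives the robustness of a property negative, i.e. finds a
# counterexample. Random search stands in for the paper's DRL agent.
import random

def simulate(u):
    # Toy plant: a leaky integrator driven by the input sequence u.
    x, trace = 0.0, []
    for step in u:
        x = 0.9 * x + step
        trace.append(x)
    return trace

def robustness(trace, bound=4.0):
    # Safety property "always x <= bound"; robustness is the worst-case
    # margin, and a negative value means the property is violated.
    return min(bound - x for x in trace)

def falsify(trials=2000, horizon=50):
    best_rob, best_u = float("inf"), None
    for _ in range(trials):
        u = [random.uniform(-1.0, 1.0) for _ in range(horizon)]
        rob = robustness(simulate(u))
        if rob < best_rob:
            best_rob, best_u = rob, u
        if best_rob < 0:             # counterexample found, stop early
            break
    return best_rob, best_u

rob, u = falsify()
print("minimal robustness found:", rob)
```

The point of the DRL component is to replace the blind sampling in falsify with a learned policy that proposes promising inputs, thereby cutting the number of simulate calls.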

Jul 17 09:00

ABSTRACT. The success of dynamic verification techniques for C MPI (Message Passing Interface) programs rests on their ability to address communication non-determinism. As the number of processes in the program grows, dynamic verification techniques suffer from an exponential growth in the size of the reachable state space. In this work, we provide a hybrid verification technique that combines explicit-state dynamic verification with symbolic analysis of message passing programs. The dynamic verification component deterministically replays the execution runs of the program, while the symbolic component encodes a set of interleavings of the observed run of the program in a quantifier-free first-order logic formula and checks the formula for property violations such as assertion violations and communication deadlocks. In the absence of property violations, it performs analysis to generate a different run of the program that does not fall in the set of already verified runs. We demonstrate the effectiveness of our methodology with our prototype tool Hermes. Our evaluation indicates that for non-single-path MPI programs, the verification time is reduced by at least a factor of three when Hermes is compared against state-of-the-art verification tools.

Jul 17 09:30

ABSTRACT. To combat state space explosion several compositional verification approaches have been proposed. One such approach is compositional aggregation, where a given system consisting of a number of parallel components is iteratively composed and minimised. Compositional aggregation has been shown to perform better (in the size of the largest state space in memory at one time) than classical monolithic composition in a number of cases. However, there are also cases in which compositional aggregation performs much worse. It is unclear when one should apply compositional aggregation in favor of other techniques, and how it is affected by action hiding and the scale of the model. This paper presents a descriptive analysis following the quantitative experimental approach. The experiments were conducted in a controlled test bed setup in a computer laboratory environment. A total of eight scalable models with different network topologies, considering a number of varying properties, were investigated, comprising 119 subjects. This makes it the most comprehensive study done so far on the topic. We investigate whether there is any systematic difference in the success of compositional aggregation based on the model, scaling, and action hiding. Our results indicate that both scaling up the model and hiding more behaviour have a positive influence on compositional aggregation.

Jul 17 10:00

ABSTRACT. The parameterised verification problem seeks to verify all members of some family of systems. We consider the following instance: each system is composed of an arbitrary number of similar component processes, together with a fixed number of server processes; processes communicate via message passing; in particular, each component process has an identity, which may be included in messages, and passed to third parties. We extend the technique of view abstraction of Abdulla et al. to this setting. We give an algorithm and implementation that allows such systems to be verified for an arbitrary number of components. We show how this technique can be applied to a concurrent datatype built from reference-linked nodes, such as a linked list.

Jul 17 12:00

ABSTRACT. Scalable and automatic formal verification for concurrent systems is always demanding. In this paper, we propose a verification framework to support automated compositional reasoning for concurrent programs with shared variables. Our framework models concurrent programs as succinct automata and supports the verification of multiple important properties. Safety verification and simulations of succinct automata are parallel compositional, and safety properties of succinct automata are preserved under refinements. We generate succinct automata from infinite state concurrent programs in an automated manner. Furthermore, we propose the first automated approach to checking rely-guarantee based simulations between infinite state concurrent programs. We have prototyped our algorithms and applied our tool to the verification of multiple refinements.

Jul 17 14:00

ABSTRACT. We present the new tool Lodin for statistical model checking of LLVM bitcode. Lodin implements a simulation engine for LLVM bitcode and implements classic statistical model checking algorithms on top of it. The simulation engine implements only the core of LLVM but supports extending this core through a plugin architecture. Besides the statistical model checking algorithms, Lodin also provides an interactive simulation front-end. The simulation front-end was integral to our second contribution: an integration of Lodin into Plasma-Lab. The integration with Plasma-Lab makes it possible to reason about rare properties of programs.

Jul 17 14:30

ABSTRACT. Software-Defined Networking (SDN) is a recent networking paradigm that has become increasingly popular in the last decade. It gives unprecedented control over the global behavior of the network and provides a new opportunity for formal methods. Much work has appeared in the last few years on providing bridges between SDN and verification. This paper advances this research line and provides a link between SDN and traditional work on formal methods for verification of distributed software---actor-based modelling. We show how SDN programs can be seamlessly modelled using actors, and thus existing advanced model checking techniques developed for actors can be directly applied to verify a range of properties of SDN networks, including consistency of flow tables, violation of safety policies, and forwarding loops.
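The modelling idea can be conveyed in a few lines (an illustrative Python sketch of switches as actors; the paper instead targets an existing actor modelling and model checking framework): each switch is an actor with a mailbox and a flow table, and a forwarding loop manifests as a packet revisiting a switch.

```python
# Sketch: SDN switches as actors with mailboxes and flow tables; a packet
# carrying its visited set exposes forwarding loops. Illustrative only.
from collections import deque

class Switch:
    def __init__(self, name, flow_table):
        self.name = name
        self.flow_table = flow_table   # destination -> next-hop switch name
        self.mailbox = deque()

def run(switches, dst, start):
    switches[start].mailbox.append((dst, {start}))
    pending = deque([start])
    while pending:
        sw = switches[pending.popleft()]
        while sw.mailbox:
            d, visited = sw.mailbox.popleft()
            nxt = sw.flow_table.get(d)
            if nxt is None:
                continue               # no matching rule: packet dropped
            if nxt in visited:
                return f"forwarding loop at {nxt} for destination {d}"
            switches[nxt].mailbox.append((d, visited | {nxt}))
            pending.append(nxt)
    return "no loop observed"

# Two switches that forward destination h1 to each other -> a loop.
net = {"s1": Switch("s1", {"h1": "s2"}), "s2": Switch("s2", {"h1": "s1"})}
print(run(net, "h1", "s1"))
```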

Jul 17 17:00

ABSTRACT. Model-finding tools like the Alloy Analyzer produce concrete examples of how a declarative specification can be satisfied. These formal tools are useful in a wide range of domains: software design, security, networking, and more. By producing concrete examples, they assist in exploring system behavior and can help find surprising faults.

Specifications usually have many potential candidate solutions, and yet model-finders tend to leave the choice of which examples to present entirely to the underlying solver. This paper closes that gap by exploring notions of coverage for the model-finding domain, yielding a novel, rigorous metric for output quality. These ideas are realized in the tool CompoSAT, which interposes itself between Alloy's constraint-solving and presentation stages to produce ensembles of examples that maximize coverage.

We show that high-coverage ensembles like those CompoSAT produces are useful for, among other things, detecting overconstraint---a particularly insidious form of specification error. We detail the underlying theory of CompoSAT, discuss its implementation, and evaluate it on numerous specifications.
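One simple way to realize "ensembles of examples that maximize coverage" is greedy maximum-coverage selection, sketched below; the toy coverage sets are our own, and CompoSAT's actual metric is provenance-based rather than this generic one:

```python
# Greedy maximum-coverage selection of an ensemble of examples.
# Generic sketch of the idea; not CompoSAT's provenance-based metric.
def select_ensemble(candidates, max_size=None):
    """candidates: dict mapping example id -> set of items it covers."""
    chosen, covered = [], set()
    while candidates and (max_size is None or len(chosen) < max_size):
        best = max(candidates, key=lambda e: len(candidates[e] - covered))
        gain = candidates[best] - covered
        if not gain:
            break                      # no candidate adds new coverage
        chosen.append(best)
        covered |= gain
        del candidates[best]
    return chosen, covered

examples = {"m1": {1, 2, 3}, "m2": {3, 4}, "m3": {4, 5, 6}, "m4": {1, 6}}
print(select_ensemble(dict(examples)))  # (['m1', 'm3'], {1, 2, 3, 4, 5, 6})
```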

Jul 17 16:00

ABSTRACT. We present a new partial order reduction method for reachability analysis of nondeterministic labeled transition systems over metric spaces. Nondeterminism arises from both the choice of the initial state and the choice of actions, and the number of executions to be explored grows exponentially with their length. We introduce a notion of $\varepsilon$-independence relation over actions that relates approximately commutative actions; $\varepsilon$-equivalent action sequences are obtained by swapping $\varepsilon$-independent consecutive action pairs. Our reachability algorithm generalizes individual executions to cover sets of executions that start from different, but $\delta$-close initial states, and follow different, but $\varepsilon$-independent, action sequences. The constructed over-approximations can be made arbitrarily precise by reducing the $\delta,\varepsilon$ parameters. Exploiting both the continuity of actions and their approximate independence, the algorithm can yield an exponential reduction in the number of executions explored. We illustrate this with experiments on consensus, platooning, and distributed control examples.
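The central definition can be paraphrased compactly (in a simplified deterministic form; the paper's definition also accounts for nondeterministic actions and perturbed initial states): two actions are ε-independent when swapping their order moves the state by at most ε in the metric d.

```latex
a \parallel_{\varepsilon} b
\quad\iff\quad
\forall x.\;\; d\big(a(b(x)),\, b(a(x))\big) \;\le\; \varepsilon.
```

With ε = 0 this collapses to the classical independence relation of partial order reduction; ε > 0 lets approximately commuting action sequences share one over-approximated representative.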

Jul 17 16:30

ABSTRACT. Deadlock analysis of object-oriented programs that dynamically create threads and objects is complex because these programs may have infinitely many states.

We define a simple calculus featuring recursion, threads and synchronizations that guarantee exclusive access to objects. We detect deadlocks by associating an abstract model with programs -- the lam model -- where it is decidable whether a problematic object dependency (e.g. a circularity) between threads will ever be manifested.

The analysis is lightweight because the deadlock detection problem is fully reduced to the corresponding one in lams (without using other models). The technique is intended to be an effective tool for the deadlock analysis of mainstream programming languages by defining ad-hoc extraction processes.

Jul 17 15:00

ABSTRACT. We propose a novel approach to the formal specification and verification of dynamic architectures that are at the core of adaptive systems such as critical-infrastructure protection. Key features include run-time reconfiguration based on adding and removing components and connections, resulting in systems with unbounded number of components. We provide a logic-based specification of a Dynamic Parametrized Architecture (DPA), where parameters represent the infinite-state space of possible configurations, and first-order formulas represent the sets of initial configurations and reconfiguration transitions. We encode information flow properties as reachability problems of such DPAs, define a translation into an array-based transition system, and use an SMT-based model checker to tackle a number of case studies.

Jul 17 17:30

ABSTRACT. Operational requirements of safety-critical systems are often written in restricted specification logics. These restricted logics are amenable to automated analysis techniques such as model-checking, but are not rich enough to express complex requirements of unmanned systems. This short paper advocates for the use of expressive logics, such as higher-order logic, to specify the complex operational requirements and safety properties of unmanned systems. These rich logics are less amenable to automation and, hence, require the use of interactive theorem proving techniques. However, these logics support the formal verification of complex requirements such as those involving the physical environment. Moreover, these logics enable validation techniques that increase confidence in the correctness of numerically intensive software. These features result in highly-assured software that may be easier to certify. The feasibility of this approach is illustrated with examples drawn from NASA's unmanned aircraft systems.

Jul 17 09:00

ABSTRACT. In this article we present an industrial-strength approach based on formal methods to develop and check safety-critical interlocking software for railway signaling systems. The Prover Trident approach is developed by Prover Technology to meet industry needs for reduced cost and time-to-market, by capitalizing on the inherent repetitive nature of interlocking systems, in the sense that specific systems can be created and verified efficiently as specific instances of generic principles. This enables a high degree of automation, supported by an industrial-strength toolkit for creation of design and code, with seamless integration of push-button tools for simulation and formal verification. Using this approach, safety assessment relies on formal verification, performed on the design, the software code as well as the binary code. An independent toolset for formal verification that has been developed to meet the applicable certification requirements is used to verify the revenue service code. The basic ideas of this approach have been around for some time [1,2], while the methodology and tools have matured over many industrial application projects for revenue service systems. The presentation highlights the main ingredients in this successful application of formal methods, as well as challenges in establishing this approach for production use in a conservative industry domain.

Jul 17 09:30

ABSTRACT. Model-based testing is considered state-of-the-art in the verification and validation of safety-critical systems. This paper discusses some experiences of applying the model-based testing tool RTT-MBT to the evacuation function of an aircraft cabin controller. A major challenge of this project was the parametric design of the software, which allows the software to be tailored to a particular aircraft configuration via a large number of application parameters. Application parameters thus need to be integrated into the test models. Further challenges consisted of mapping multiple detailed signals of the system under test to a single abstract model variable, and vice versa, and of handling incremental test model development during an ongoing test campaign. We discuss the solutions that we developed to successfully conduct this test campaign.

Jul 17 10:00

ABSTRACT. Static code analysis can be applied to show compliance with coding guidelines and to demonstrate the absence of critical programming errors, including runtime errors and data races. In recent years, security concerns have become more and more relevant for safety-critical systems, not least due to the increasing importance of highly-automated driving and pervasive connectivity. While in the past sound static analyzers have been primarily applied to demonstrate classical safety properties, they are also well suited to address data safety and to discover security vulnerabilities. This article gives an overview and discusses practical experience.

Jul 17 14:00

ABSTRACT. As new security problems and innovative attacks continue to be discovered, program analysis remains a burgeoning area of research. However, writing new analyses has remained a specialist task, and although declarative and logic programming has been observed to be an excellent fit for phrasing complex analyses, the goal of democratising the creation of new checks has remained elusive.

QL builds on previous work to use Datalog for this purpose, but solves some of the traditional challenges: Its object-oriented nature enables the creation of extensive libraries, and the query optimiser minimises the performance cost of the abstraction layers introduced in this way. QL enables agile security analysis, allowing security response teams to find all variants of a newly discovered vulnerability. Their work can then be leveraged to provide automated on-going checking, thus ensuring that the same mistake never makes it into the code base again.

We will introduce the principles behind QL, and show how we can perform declarative variant analysis on high-profile open-source code bases.

Jul 17 14:30

ABSTRACT. By reasoning about security as object equivalence, you can leverage familiar OO reasoning techniques to write formal, modular, human-readable, machine-checkable proofs.

Jul 17 15:00

ABSTRACT. Theorem proving has a proud history of elite academic pursuit and select industrial use. Impact, when predicated on acquiring the internals of a formalism or proof environment, is gated on skilled and idealistic adopters. In the case of the automatic theorem provers known as Satisfiability Modulo Theories (SMT) solvers, the barrier to entry is shifted to tool builders and their domains. SMT solvers typically provide convenient support for domains that are prolific in software engineering and have in the past decade found widespread use cases in both academia and industry. We describe some of the background of developing the Z3 solver, the factors that have played an important role in shaping its use, and an outlook on further development and adaptation.

Jul 17 16:00

ABSTRACT. We are proposing an integrated verification framework for developing certifiable safety- and security-critical software in an agile way. First, the framework supports integrated verification as it applies a combination of complementary formal software analysis methods. Second, the framework is evidential as verification evidences, which form the basis for certification, are automatically generated from pre-defined verification workflow patterns by chaining results from the integrated software analysis tools. Third, the framework is continuous as it is aimed at executing verification and generating corresponding evidences during each iteration of an agile development process.

Jul 17 16:30

ABSTRACT. Developing safety-critical systems is a very difficult task. Such systems require talented engineers, strong experience and dedication when designing the safety principles of these systems. Indeed, it must be demonstrated that no failure or combination of failures may lead to a catastrophic situation where people could be injured or could die because of the system. This article presents disruptive technologies that reduce the effort required to develop such systems by providing integrated building blocks that are easier to use.

Jul 17 17:00