09:00 | The Method of Coalgebra: Exercises in Coinduction |
09:45 | Coalgebra meets Convexity in Probabilistic Systems |
09:00 | Verifying Distributed Systems ABSTRACT. TBA |
10:00 | ABSTRACT. We present a small Coq library for collecting side conditions and deferring their proof. |
09:00 | Invited Talk: Approximating partial by total: fixpoint characterizations of back-and-forth equivalences ABSTRACT. Model comparison games such as Ehrenfeucht-Fraïssé, pebble and bisimulation games play a central role in finite and infinite model theory, modal logic and concurrency theory. We show how such games can be understood in terms of comonads on the category of relational structures and homomorphisms. This leads to novel syntax-free characterizations of the equivalences induced by a wide range of logics, and coalgebraic characterizations of key combinatorial parameters such as tree-width and tree-depth. This in turn leads to a novel analysis of back-and-forth equivalences. Instead of considering partial isomorphisms with back- and forth-extension properties, we show how they can be characterized in a uniform fashion in terms of sets of coKleisli morphisms for the associated comonads, satisfying a local invertibility condition. This leads in turn to a fixpoint characterization. An interesting feature of this characterization is that we are approximating partial maps by (sets of) total ones, and replacing extension properties with a greatest fixpoint computation.
09:40 | Diffeological Spaces and Semantics for Differential Programming SPEAKER: Matthijs Vákár ABSTRACT. Differential programming, particularly systems based on Automatic Differentiation, has recently become a popular method to approach high-dimensional optimisation and integration problems found in machine learning and computational statistics. Examples of systems relying on these techniques are TensorFlow and Stan. The semantics of such systems, however, still remains rather poorly understood. In particular, it is unclear how differentiation should interact with more traditional computer science concepts like recursion. For these purposes, we are exploring a rich semantic framework for differential programming based on generalised smooth spaces known as diffeological spaces. We demonstrate how to construct Fiore's axiomatic domain theory in this setting. |
10:05 | Domain Theory for Intensional Computation ABSTRACT. Intensional computation supports generic queries of internal structure as well as the extensional computation familiar from lambda-calculus and combinatory logic. This paper introduces a denotational semantics for the simplest intensional calculus, namely SF-calculus. Its domain has a basis given by the normal forms of the calculus, extended with a least element. Their functionality is revealed by a mapping from the domain to its function space that is determined by the reduction rules of the calculus. Conversely, step-functions correspond to simple pattern-matching functions. These are representable in SF-calculus, which yields a right inverse to the mapping above. However, this semantics fails for traditional combinatory logic (SK-calculus), as the corresponding mapping into the function space does not have a right inverse. This is because combinatory logic cannot represent enough pattern-matching functions, and so cannot support intensional computation. This result complements the syntactic proof that SF-calculus is more expressive than SK-calculus.
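To make the flavour of intensional computation concrete, here is a minimal interpreter sketch for SF-calculus in Haskell, assuming the standard reduction rules from the SF-calculus literature (the F operator factors "compounds", i.e. partially applied operators); all names are ours, not the paper's.

```haskell
-- A minimal sketch of SF-calculus reduction; details are our reading of
-- the standard presentation, not taken from this paper.
data Term = S | F | App Term Term deriving (Eq, Show)

-- Atoms are the operators S and F; compounds are the partially applied
-- forms O M and O M N for an operator O.
isAtom :: Term -> Bool
isAtom t = t == S || t == F

isCompound :: Term -> Bool
isCompound (App f _) = isAtom f || headAtom f
  where headAtom (App g _) = isAtom g
        headAtom _         = False
isCompound _ = False

-- One leftmost reduction step; Nothing means the term is normal.
step :: Term -> Maybe Term
step (App (App (App S m) n) x) = Just (App (App m x) (App n x))
step (App (App (App F o) m) n)
  | isAtom o                 = Just m
  | isCompound o
  , App p q <- o             = Just (App (App n p) q)
step (App f a) =
  case step f of
    Just f' -> Just (App f' a)
    Nothing -> App f <$> step a
step _ = Nothing

normalize :: Term -> Term
normalize t = maybe t normalize (step t)
```

For example, `normalize (App (App (App F (App S S)) S) F)` factors the compound `S S` into its two components, a generic query of internal structure that no SK-term can perform.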
09:00 | A Formal Approach to Cyber-Physical Attacks ABSTRACT. We apply formal methods to lay and streamline theoretical foundations to reason about Cyber-Physical Systems (CPSs) and cyber-physical attacks. We focus on integrity and DoS attacks to sensors and actuators of CPSs, and on the timing aspects of these attacks. Our contributions are threefold: (1) we define a hybrid process calculus to model both CPSs and cyber-physical attacks; (2) we define a threat model of cyber-physical attacks and provide the means to assess attack tolerance/vulnerability with respect to a given attack; (3) we formalise how to estimate the impact of a successful attack on a CPS and investigate possible quantifications of the success chances of an attack. We illustrate definitions and results by means of a non-trivial engineering application.
10:00 | SPEAKER: Luca Arnaboldi ABSTRACT. Organizations and researchers alike have widely recognised the multiple advantages of adopting Network Intrusion Detection Systems (NIDS) as the norm for monitoring against DoS attacks on their systems. Although effective, implementation within the Internet of Things (IoT) is complicated, as the setups and protocols used vary, necessitating data collection that is bespoke to an individual system. Standard approaches to training a NIDS include: 1) using a database of known attacks, or 2) testing the system to establish a "benchmark" behaviour and flagging any anomaly as a potential attack. It is not feasible to establish a benchmark behaviour in dynamic IoT systems where devices may constantly shift, new devices might join and behaviours might change; the IoT is by its very nature ubiquitous and therefore time consuming to benchmark. We therefore focus on the first approach. This approach has its own drawbacks, amplified for IoT systems: 1) collecting data unique to a system and for each attack is time consuming, and 2) some system changes can require collecting all or part of the data from scratch (e.g. interactive smart homes where devices can change frequently). To address these limitations we present a novel modelling approach, which we call Lightweight IoT System under Attack (LISA), to represent the effects of Power Drain and DDoS attacks on IoT systems. We begin with a precise formalization of properties of IoT devices using measurements from the real system and run verification on the model to assure matching behaviour. We then model specific attacks on the systems and generate synthetic datasets.
09:00 | Opening remarks and introductions |
09:10 | Intrusion Tolerance in Complex Cyber Systems ABSTRACT. We will discuss intrusion tolerance as a desirable property of cyber systems and discuss the relationship between intrusion tolerance and resilience. Intrusion-tolerant complex systems maintain certain security properties even when components of those systems are compromised. We will examine some ways to quantify intrusion tolerance using graphical models of complex cyber systems with a focus on the misuse of authentication credentials and the exploitation of trust relationships. Finally, we will provide some examples of the impact of this analysis on real-world policy decisions. |
10:10 | Disclosure Analysis of SQL Workflows SPEAKER: Luciano García-Bañuelos ABSTRACT. In the context of business process management, the implementation of data minimization requirements requires that analysts are able to assert what private data each worker is able to access, not only directly via the inputs of the tasks they perform in a business process, but also indirectly via the chain of tasks that lead to the production of these inputs. In this setting, this paper presents a technique which, given a workflow that transforms a set of input tables into a set of output tables via a set of inter-related SQL statements, determines what information from each input table is disclosed by each output table, and under what conditions this disclosure occurs. The result of this disclosure analysis is a summary representation of the possible computations leading from the inputs of the workflow to a given output thereof. |
09:00 | TBA |
09:45 | TBA |
09:00 | ABSTRACT. Species of structures were first introduced by Joyal as a unified framework for the theory of generating series in enumerative combinatorics. In 1988, Girard introduced normal functors as a model of pure lambda-calculus where terms are interpreted as infinite series with sets as coefficients (which correspond to a special case of Joyal's species). Fiore presented a generalized definition that both encompasses Joyal's species and constitutes a model of differential linear logic. Since species encode families of labelled structures, much work has been done to investigate their connection with algebraic data types. We want to explore an alternative viewpoint of seeing them as terms in an extension of lambda-calculus motivated by the relationship between species and differential models of linear logic and use the combinatorial intuition as a guide in the design of the syntax. The next step would be to study methods of resolution of differential equations in the setting of generalized species with the ultimate goal being to establish in linear logic a combinatorial interpretation of the obtained differential calculus. |
09:20 | ABSTRACT. Affine lambda-terms are lambda-terms in which each bound variable occurs at most once and linear lambda-terms are lambda-terms in which each bound variable occurs once and only once. In this paper we count the number of affine closed lambda-terms of size n, linear closed lambda-terms of size n, affine closed beta-normal forms of size n and linear closed beta-normal forms of size n, for several measures of the size of lambda-terms. From these formulas, we show how we can derive programs for generating all the terms of size n for each class. The foundation of all of this is a specific data structure, made of contexts in which one counts all the holes at each level of abstraction by lambdas.
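As a concrete companion to these counting results, here is a brute-force enumerator of closed linear lambda-terms in Haskell, in de Bruijn style; the size measure used here (1 per abstraction and application, 0 per variable) is just one of the several measures the paper considers, and is our assumption.

```haskell
import Data.List (subsequences, (\\))

-- Closed linear lambda-terms, enumerated naively for illustration.
data Tm = Var Int | Lam Tm | App Tm Tm deriving Show

-- gen ctx d n: linear terms of size n at binder depth d that use each
-- de Bruijn *level* in ctx exactly once.
gen :: [Int] -> Int -> Int -> [Tm]
gen [v] d 0 = [Var (d - 1 - v)]          -- convert level to index
gen ctx d n | n > 0 =
  [ Lam b | b <- gen (d : ctx) (d + 1) (n - 1) ] ++
  [ App f a | (cf, ca) <- splits ctx      -- linearity: split the context
            , k <- [0 .. n - 1]
            , f <- gen cf d k
            , a <- gen ca d (n - 1 - k) ]
gen _ _ _ = []

splits :: [Int] -> [([Int], [Int])]
splits xs = [ (ys, xs \\ ys) | ys <- subsequences xs ]

countLinear :: Int -> Int
countLinear = length . gen [] 0
```

For instance, `countLinear 3` yields 4, corresponding to λx.λy.x y, λx.λy.y x, λx.(λy.y) x and λx.x (λy.y) under this size measure.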
09:55 | SPEAKER: Luc Pellissier ABSTRACT. Finding lower bounds in complexity theory has proven to be an extremely difficult task. Nowadays, only one research direction is commonly acknowledged to have the ability to solve current open problems, namely Mulmuley's Geometric Complexity Theory programme. Relying on heavy techniques from algebraic geometry, it stemmed from a first lower bound result for a variant of Parallel Random Access Machines. We analyse this original proof, together with two other proofs of lower bounds, from a semantics point of view, interpreting programs as graphings -- generalizations of dynamical systems. We show that, abstracted to a more general setting that exploits the classic notion of topological entropy, these three proofs share the same structure. This reformulation recentres the proofs around dynamical aspects, relegating the use of algebraic geometry to a model-driven choice rather than a key concept of the method.
09:00 | Polynomial models of type theory ABSTRACT. Polynomials (also known as containers) represent datatypes which, like polynomial functions, can be expressed using sums and products. Extending this analogy, I will describe the category of polynomials in terms of sums and products for fibrations. This category arises from a distributive law between the pseudomonad ‘freely adding’ indexed sums to a fibration, and its dual adding indexed products. A fibration with sums and products is essentially the structure defining a categorical model of dependent type theory. I will show how the process of adding sums to such a fibration is an instance of a general 'gluing' construction for building new models from old ones. In particular we can obtain new models of type theory in categories of polynomials. Finally, I will explore the properties of other type formers in these models, and consider which logical principles are and are not preserved by the construction. |
10:00 | Some No-Go Theorems for Distributive Laws (extended abstract) SPEAKER: Maaike Zwart ABSTRACT. While papers showing examples of distributive laws are abundant, papers containing the opposite result - that certain distributive laws cannot exist - are rarely seen. In fact, the only example known to us is in 'Distributing Probability over Nondeterminism' by Varacca and Winskel, which contains a proof by Plotkin that the probability monad does not distribute over the powerset monad. In 2008, Manes and Mulry posed the question of whether the list monad distributes over itself. We solve this question, proving that both the list monad and powerset monad do not distribute over themselves, while there is precisely one distributive law for the multiset monad over itself. |
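For readers who think in Haskell, here is the type such a distributive law for list-over-list would inhabit, together with the tempting candidate (the Cartesian product of choices); the talk's result implies that no function of this type satisfies all the distributive-law axioms. This snippet is illustrative only.

```haskell
-- A would-be distributive law of the list monad over itself.
-- `sequence` is the natural candidate, but (per the talk's no-go result)
-- no candidate can satisfy all the distributive-law axioms.
distCandidate :: [[a]] -> [[a]]
distCandidate = sequence

-- ghci> distCandidate [[1,2],[3]]
-- [[1,3],[2,3]]
```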
09:00 | Lower Bound Techniques for QBF Proof Systems |
10:00 | Towards the Semantics of QBF Clauses
09:00 | Parity Games - Part 1 |
09:00 | Logic and Software Engineering: Are We Nearly There Yet? ABSTRACT. Logic has been acknowledged as potentially important to software engineering for as long as the latter field has existed. Yet, in practice, most software today is built without explicit reliance on logic, and often by people who do not think of themselves as having any knowledge of formal logic. I will claim that this is unlikely to change. Does that mean logic is not useful to software engineering? Of course not: but it means its role may be implicit more than explicit, or concealed in language and tool design. I will discuss some of the ways I have seen logic turn up in model-driven software development, in particular, and try to point out some interesting challenges for the future. |
10:00 | SPEAKER: Shufang Zhu ABSTRACT. Translating formulas of Linear Temporal Logic (LTL) over finite traces, or LTLf, to symbolic Deterministic Finite Automata (DFA) plays an important role not only in LTLf synthesis, but also in synthesis for Safety LTL formulas. The translation is enabled by using MONA, a sophisticated tool for symbolic DFA construction from logic specifications. Recent works used a first-order encoding of LTLf formulas to translate LTLf to First Order Logic (FOL), which is then fed to MONA to construct the symbolic DFA. This encoding was shown to perform well, but other encodings have not been studied. Specifically, the natural question of whether second-order (MSO) encoding, which has significantly simpler quantificational structure, can outperform first-order (FOL) encoding remained open. In this paper we meet this challenge and study MSO encoding for LTLf formulas. We introduce a specific MSO encoding, and show that this encoding and its simplicity indeed allow more potential than FOL for optimization, thus benefiting symbolic DFA construction. We then show by empirical evaluations that, surprisingly, the FOL encoding outperforms in practice the MSO encodings. We analyze the results and explain how to improve MONA in order to allow the MSO encoding to outperform the FOL encoding. |
10:15 | SPEAKER: Ornela Dardha ABSTRACT. We propose a new type system for deadlock-free session-typed π-calculus, a core concurrent programming language, by integrating for the first time, two separate lines of work. The first is the propositions-as-types approach by Caires and Pfenning, which provides a linear logic foundation for session types and guarantees deadlock-freedom. The second is Kobayashi’s approach in which types are annotated with priorities for detecting cyclic dependencies between communication operations. The outcome is a new and more expressive variant of classical linear logic with a proof assignment that gives a session type system with Kobayashi-style priorities. |
11:00 | SPEAKER: Cyril Cohen ABSTRACT. In this talk we present an ongoing effort to develop a Coq formal library about elementary real analysis, in a firmly classical setting. Almost all existing proof assistants on the market have been used to define real numbers, and to investigate the formalization of real, and sometimes also complex, analysis. A survey by Boldo et al. reviews the different approaches and the breadth of the existing developments. In particular, the Coq standard library provides an axiomatization of real numbers with a classical flavor, and the Coquelicot external library is a conservative extension thereof. At the time of writing, these libraries however cover far less material than their analogues in the HOL ecosystem, including Harrison’s HOL Light library and its translation to Isabelle/HOL. The present work is yet another attempt at providing a library for classical analysis in Coq. The motivation is twofold. First, the library relies on stronger classical axioms, so as to get closer to the logical formalism used in classical mathematics. In particular, this impacts the formalization of compactness-related facts. Second, the library is designed along the formalization methodology put into practice in the Mathematical Components libraries. The latter libraries are essentially geared towards algebra, and this work aims at providing an extension for topics in analysis. However, we incorporated a significant subset of the Coquelicot library. The main original contributions lie in the effort put into the infrastructure of the library: automation, notations, etc. The library is still rather incomplete, but it has already proved useful for a few applications.
11:30 | ABSTRACT. I propose a talk to open a discussion about this topic. My extended abstract details the motivation and the context. Briefly, oracles are a key ingredient in the success of CompCert. Such an oracle is untrusted foreign code whose outputs are checked by certified code. However, in CompCert, oracles are currently invoked through an unsafe FFI. I will illustrate some pitfalls of this FFI and propose how to overcome them. Moreover, I will conjecture that by using an adequate FFI, we can derive "theorems for free" à la Wadler in Coq from the OCaml type of polymorphic oracles, and thus discharge a part of the certification onto the OCaml typechecker. However, my proposal raises more issues than it solves: in other words, it opens a new topic of research.
12:00 | Yet Another Deep Embedding of Linear Logic in Coq ABSTRACT. We present some results and comments around the ongoing development of the Yalla library, which provides a deep embedding of linear logic in Coq relying on an explicit exchange rule.
11:00 | SPEAKER: Paola Giannini ABSTRACT. Traditionally, semantic models of imperative languages use an auxiliary structure which mimics memory. In this way, ownership and other encapsulation properties need to be reconstructed from the graph structure of such global memory. We present an alternative "syntactic" model where memory is encoded as part of the program rather than as a separate resource. This means that execution can be modelled by just rewriting source code terms, as in semantic models for functional programs. Formally, this is achieved by the block construct, introducing local variable declarations, which play the role of memory when their initializing expressions have been evaluated. In this way, we obtain a language semantics which is more abstract, and directly represents at the syntactic level constraints on aliasing, allowing simpler reasoning about related properties. We illustrate this advantage by expressing in the calculus the "capsule" property, characterizing an isolated portion of memory which cannot be reached through external references. Capsules can be safely moved to another location in the memory without introducing sharing. We state that the syntactic model can be encoded in the conventional one, hence efficiently implemented, and outline the proof that the two dynamic semantics are equivalent.
11:30 | SPEAKER: António Ravara ABSTRACT. Scoped channels, in the pi-calculus, are not nameable, as they are bound and subject to alpha-renaming. For program analysis purposes, however, to identify properties of these channels, it is necessary to talk about them. We present herein a method for uniquely identifying scoped channels. |
12:00 | SPEAKER: Joseph Razavi ABSTRACT. Based on Gandy's principles for models of computation we give category-theoretic axioms describing locally deterministic updates to finite objects. Rather than fixing a particular category of states, we describe what properties such a category should have. The computation is modelled by a functor that encodes updating the computation, and we give an abstract account of such functors. We show that every updating functor satisfying our conditions is computable. |
11:00 | Invited Talk: A domain theory for quasi-Borel spaces and statistical probabilistic programming ABSTRACT. I will describe ongoing work investigating a convenient category of pre-domains and a probabilistic powerdomain construction suitable for statistical probabilistic programming semantics, as used in statistical modelling and machine learning. Specifically, we provide (1) a cartesian closed category; (2) whose objects are (pre)-domains; and (3) a commutative monad for probabilistic choice and Bayesian conditioning. Jones and Plotkin have shown that conditions (2)--(3) hold when one restricts attention to continuous domains, and Jung and Tix have proposed to search for a suitable category of continuous domains possessing all three properties (1)--(3), a question that remains open to date. I propose an alternative direction, considering spaces with separate, but compatible, measure-theoretic and domain-theoretic structures. On the domain-theoretic side, we require posets with suprema of countably increasing chains (omega-cpos). On the measure-theoretic side, we require a quasi-Borel space (qbs) structure, a recently introduced algebraic structure suitable for modelling higher-order probability theory. There are three equivalent characterisations of this category given by: imposing an order-theoretic separatedness condition on countable-preserving omega-cpo-valued presheaves; internal omega-cpos in the quasi-topos of quasi-Borel spaces; and an essentially algebraic presentation. The category of these omega-qbses validates Fiore and Plotkin's axiomatic domain theory, yielding semantics for recursive types. To conclude, I will describe a commutative powerdomain construction given by factorising the Lebesgue integral from the space of random elements to the space of sigma-linear integration operators. |
11:40 | Abstractness of Continuation Semantics for Asynchronous Concurrency SPEAKER: Eneia Nicolae Todoran ABSTRACT. The full abstractness problem was raised by Robin Milner, and the difficulty of designing fully abstract semantics is well-known. A denotational semantics is said to be fully abstract if it is correct and complete. The correctness condition is usually easy to establish. However, it may be difficult to establish the completeness condition. In this paper we study the abstractness of denotational models designed with continuation semantics for concurrency (CSC). Due to the interplay between denotations and continuations, the full abstractness problem seems to be even more difficult in continuation semantics. Although several papers employ continuations in the denotational design of concurrent languages, we are not aware of any full abstractness result for a concurrent language designed with continuations. Therefore we introduce a new criterion that we name weak abstractness (which is specific to the CSC technique): it preserves the correctness condition, but relaxes the completeness condition of the classic full abstractness criterion. Weak abstractness may be useful for a wide class of denotational models designed with continuations in which full abstractness is difficult (or impossible) to achieve. To illustrate this approach we present a denotational model for a simple asynchronous concurrent language designed with continuations over metric spaces. We show that this denotational semantics is weakly abstract with respect to a corresponding operational model.
12:05 | Algebra semantics of recursive computation types SPEAKER: Paul Blain Levy ABSTRACT. Giving algebra semantics for recursive computation types in a language with effects has been an open problem. We explain how to solve it by expanding the category of cppo algebras to Fiore's category in which the maps are lax homomorphisms. |
11:00 | SPEAKER: Jiaming Jiang ABSTRACT. Attribute-based access control (ABAC) is a general access control (AC) model that subsumes numerous earlier AC models. Its increasing popularity stems from the intuitive generic structure of granting permissions based on domain-dependent attributes of users, subjects, objects, and other entities in the system. Multiple formal and informal languages have been developed to express policies in terms of such attributes. The utility of ABAC policy languages is potentially undermined without a properly formalized underlying model. The high-level structure in a majority of ABAC models consists of sets and sets of sets, expressions that demand that the reader unpack multiple levels of sets and tokens to determine what things mean. The resulting reduced readability potentially endangers correct expression and reduces maintainability and validation. These problems can be multiplied in models that employ nonuniform representations of actions and their governing policies. In this paper, we address these problems by recasting the high-level structure of ABAC models in a logical formalism that treats all types of actions uniformly. Our formalism uses a simple variant of description logics to model the high-level structure, and function-free first-order logic with equality to represent and reason about the policies. The use of description logics for model formalizations, including hierarchies of types of entities and attributes, promises improved usability, compared with existing ABAC models, in specifying the relationships between and requirements on domain-dependent attributes. Our formal model provides improved flexibility in supporting a variety of different requirements depending on the domain. Specifically, we will discuss how to modify the model if time plays a role in authorizing a requested action, if different policies would potentially arrive at conflicting decisions, and if default and exception rules are in application.
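As a purely illustrative example of the flavour of such policies (all predicate names are ours, not the paper's), a function-free first-order rule granting read access based on attributes might read:

```latex
\forall u\,\forall o\,\forall w.\;
  \big(\mathit{role}(u,\mathit{doctor}) \wedge \mathit{ward}(u,w) \wedge \mathit{ward}(o,w)\big)
  \rightarrow \mathit{permit}(u,\mathit{read},o)
```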
11:30 | SPEAKER: Alexandre Debant ABSTRACT. For many modern applications, such as contactless payment and keyless systems, ensuring physical proximity is a security goal of paramount importance. Formal methods have proved their usefulness when analysing standard security protocols. However, existing results and tools do not apply to, e.g., distance bounding, which aims to ensure physical proximity between two entities. This is due in particular to the fact that existing models do not represent in a faithful way the locations of the participants, and the fact that transmission of messages takes time. In this paper, we propose several reduction results: when looking for an attack, it is actually sufficient to consider a simple scenario involving at most four participants located at some specific locations. An interesting consequence of our reduction results is that they allow one to reuse ProVerif, an automated tool developed for analysing standard security protocols. As an application, we analyse several distance bounding protocols, as well as a contactless payment protocol.
12:00 | SPEAKER: Tajana Ban Kirigin ABSTRACT. Distance-bounding (DB) protocols were proposed to thwart relay attacks on proximity-based access control systems. In a DB protocol, the verifier computes an upper bound on the distance to the prover by measuring the time needed for a signal to travel to the prover and back. DB protocols are, however, vulnerable to distance fraud, in which a dishonest prover is able to manipulate the distance bound computed by an honest verifier. Despite their conceptual simplicity, devising a formal characterization of DB protocols and distance fraud attacks that is amenable to automated formal analysis is non-trivial, primarily because of their real-time and probabilistic nature. In this work, we present a framework, based on rewriting logic, for formally analyzing different forms of distance-fraud, including recently identified timing attacks. We introduce a generic, real-time and probabilistic model of DB protocols and use it to (mechanically) verify false-acceptance and false-rejection probabilities under various settings and attacker models through statistical model checking with Maude and PVeStA. Using this framework, we first accurately confirm known results and then define and quantitatively evaluate new guessing-ahead attack strategies that would otherwise be difficult to analyze manually. |
11:00 | A state machine system for insider threat detection SPEAKER: Haozhe Zhang ABSTRACT. The risk from insider threats is rising significantly, yet the majority of organizations are ill-prepared to detect and mitigate them. Research has focused on providing rule-based detection systems or anomaly detection tools which use features indicative of malicious insider activity. In this paper we propose a system complementary to the aforementioned approaches. Based on theoretical advances in describing attack patterns for insider activity, we design and validate a state-machine system that can effectively combine policies from rule-based systems and alerts from anomaly detection systems to create attack patterns that insiders follow to execute an attack. We validate the system in terms of effectiveness and scalability by applying it to ten synthetic scenarios. Our results show that the proposed system allows analysts to craft novel attack patterns and detect insider activity while requiring minimum computational time and memory.
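A minimal Haskell sketch of the idea as we read it: an attack pattern is a state machine whose transitions fire on policy violations (from rule-based systems) or alerts (from anomaly detectors). The event names and the example pattern are purely illustrative, not taken from the paper.

```haskell
-- Events combine the two input sources the paper mentions.
data Event = PolicyViolation String | Alert String deriving (Eq, Show)

type State   = Int
type Pattern = [(State, Event, State)]   -- (from, trigger, to)

-- Follow a matching transition, or stay put.
advance :: Pattern -> State -> Event -> State
advance pat s e = case [s' | (q, e', s') <- pat, q == s, e' == e] of
  (s' : _) -> s'
  []       -> s

-- An insider is flagged when the event stream drives the machine
-- into the accepting state at some point.
detected :: Pattern -> State -> [Event] -> Bool
detected pat accept = elem accept . scanl (advance pat) 0

-- A made-up three-step exfiltration pattern, for illustration.
exfiltration :: Pattern
exfiltration =
  [ (0, Alert "after-hours login",    1)
  , (1, PolicyViolation "usb device", 2)
  , (2, Alert "bulk file copy",       3) ]
```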
11:45 | Combining Bayesian Networks and Fishbone Diagrams to Distinguish between Intentional Attacks and Accidental Technical Failures SPEAKER: Sabarathinam Chockalingam ABSTRACT. Because of modern societies' dependence on industrial control systems, adequate response to system failures is essential. In order to take appropriate measures, it is crucial for operators to be able to distinguish between intentional attacks and accidental technical failures. However, adequate decision support for this matter is lacking. In this paper, we use Bayesian Networks (BNs) to distinguish between intentional attacks and accidental technical failures, based on contributory factors and observations (or test results). To facilitate knowledge elicitation, we use extended fishbone diagrams for discussions with experts, and then translate those into the BN formalism. We demonstrate the methodology using an example in a case study from the water management domain. |
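A toy numeric sketch of the core inference step (all numbers invented for illustration): Bayes' rule weighs an observation's likelihood under "intentional attack" against its likelihood under "accidental technical failure".

```haskell
-- Posterior probability of an attack given one observation, by Bayes'
-- rule over the two competing hypotheses; inputs are the attack prior
-- and the observation's likelihood under each hypothesis.
posteriorAttack :: Double -> Double -> Double -> Double
posteriorAttack prior pObsGivenAttack pObsGivenFailure =
  let num = prior * pObsGivenAttack
  in num / (num + (1 - prior) * pObsGivenFailure)

-- ghci> posteriorAttack 0.1 0.8 0.2   -- ~0.31: even a telling
-- observation leaves failure more likely under a low attack prior.
```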
11:00 | TBA |
11:45 | Report on ISR |
11:00 | ABSTRACT. Refining and extending previous work by Retoré (1994), we develop a systematic approach to intersection types via natural deduction. We show how a step of beta reduction can be seen as performing, at the level of typing derivations, Prawitz reductions in parallel. We then derive, as immediate consequences of subject reduction, the main normalization theorems for intersection types: for system D, strong normalization; for system DΩ, termination of leftmost reduction for terms typable without Ω.
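For reference, the characteristic rules of intersection types in system D, in standard natural-deduction form (the presentation details are ours):

```latex
\frac{\Gamma \vdash M : A \qquad \Gamma \vdash M : B}{\Gamma \vdash M : A \wedge B}\;(\wedge I)
\qquad
\frac{\Gamma \vdash M : A \wedge B}{\Gamma \vdash M : A}\;(\wedge E_1)
\qquad
\frac{\Gamma \vdash M : A \wedge B}{\Gamma \vdash M : B}\;(\wedge E_2)
```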
11:30 | SPEAKER: Simona Ronchi Della Rocca ABSTRACT. A new proof of strong normalization for simple type assignment for λ-calculus is obtained, through a translation from this system to a system of uniform intersection types, which is equivalent to it in typing power and whose strong normalization property can easily be proved by induction on derivations.
12:00 | ABSTRACT. We investigate the possibility of a semantic account of the execution time (i.e. the number of beta-v steps leading to the normal form, if any) for the shuffling calculus, an extension of Plotkin's untyped call-by-value lambda-calculus. For this purpose, we use a linear logic based denotational model that can be seen as a non-idempotent intersection type system: relational semantics. Our investigation is inspired by similar ones for linear logic proof-nets and untyped call-by-name lambda-calculus. We first prove a qualitative result: a (possibly open) term is normalizable for weak reduction (which does not reduce under abstractions) if and only if its interpretation is not empty. We then show that the size of type derivations can be used to measure the execution time. Finally, we show that, differently from the case of linear logic and call-by-name lambda-calculus, the quantitative information enclosed in type derivations does not lift to types (i.e. to the interpretation of terms). In order to get a genuine semantic measure of execution time in a call-by-value setting, we conjecture that a refinement of its syntax and operational semantics is needed.
11:00 | Infinitary Proof Theory: the Multiplicative Additive Case ABSTRACT. Infinitary and regular proofs are commonly used in fixed point logics. Being natural intermediate devices between semantics and traditional finitary proof systems, they are commonly found in completeness arguments, automated deduction, verification, etc. However, their proof theory is surprisingly underdeveloped. In particular, very little is known about the computational behavior of such proofs through cut elimination. Taking such aspects into account has unlocked rich developments at the intersection of proof theory and programming language theory. One would hope that extending this to infinitary calculi would lead, e.g., to a better understanding of recursion and corecursion in programming languages. Structural proof theory is notably based on two fundamental properties of a proof system: cut elimination and focalization. The first one is only known to hold for restricted (purely additive) infinitary calculi, thanks to the work of Santocanale and Fortier; the second one has never been studied in infinitary systems. In this paper, we consider the infinitary proof system µMALL∞ for multiplicative and additive linear logic extended with least and greatest fixed points, and prove these two key results. We thus establish µMALL∞ as a satisfying computational proof system in itself, rather than just an intermediate device in the study of finitary proof systems. |
12:00 | SPEAKER: Marie Kerjean ABSTRACT. In this talk we want to present the recent results of \cite{DK}. We construct several smooth classical denotational models of Linear Logic: they are smooth as non-linear proofs are interpreted as infinitely differentiable functions, and they feature an involutive linear negation. The starting point of this work consists in noticing that the multiplicative disjunction $\parr$ corresponds to the well-known Schwartz' epsilon product. Requiring its associativity then asks for a completeness notion, while the linear involutive negation is ensured by considering a good topology (the Arens topology) on the dual, ensuring that the linear involutive negation works as an orthogonality relation. |
11:00 | SPEAKER: Pierre Pradic ABSTRACT. We propose LMSO, a proof system inspired from Linear Logic, as a proof-theoretical framework to extract finite-state stream transducers from linear-constructive proofs of omega-regular specifications. We advocate LMSO as a stepping stone toward semi-automatic approaches to Church’s synthesis combining computer-assisted proofs with automatic decision procedures. LMSO is correct in the sense that it comes with an automata-based realizability model in which proofs are interpreted as finite-state stream transducers. It is moreover complete, in the sense that every solvable instance of Church’s synthesis problem leads to a linear-constructive proof of the formula specifying the synthesis problem.
11:45 | SPEAKER: Yue Li ABSTRACT. We propose a coinductive extension of Miller et al.'s framework of uniform proof as machinery for formulating and proving coinductive invariants arising from first-order Horn clause logic programming. It helps the study of coinductive logic programming.
11:00 | Size, Cost and Capacity: A Semantic Technique for Hard Random QBFs SPEAKER: Joshua Blinkhorn |
11:30 | Short Proofs in QBF Expansion |
11:50 | The Symmetry Rule for Quantified Boolean Formulas SPEAKER: Martina Seidl |
12:10 | Discussion |
11:00 | Parity Games - Part 2 |
11:00 | ABSTRACT. Our recently developed LRSX Tool implements a technique to automatically prove the correctness of program transformations in higher-order program calculi which may permit recursive let-bindings. The focused notion of correctness for program transformations is invariance with respect to the observational semantics of programs. The so-called diagram method is automated by combining unification, matching, and reasoning on alpha-renamings on the higher-order meta-language, and automating induction proofs via an encoding into termination problems of term rewrite systems. We explain the techniques, we illustrate the usage of the tool, and we report on experiments. |
11:30 | SPEAKER: Nils Dallmeyer ABSTRACT. The paper is a contribution to exploring and analyzing space-improvements in the concurrent and functional process-calculus CHF. Space-improvements are defined as a generalization from a deterministic pure functional language. The main part of the paper is a polynomial algorithm for space optimizations of parallel independent processes. Applications of this algorithm are: (i) affirmation of space-improving transformations for particular classes of program transformations; (ii) support of an interpreter-based method for refuting space-improvements; and (iii) use as a stand-alone offline optimizer for space (or similar resources) of parallel processes.
12:00 | On Transforming Functions Accessing Global Variables into Logically Constrained Term Rewriting Systems SPEAKER: Yoshiaki Kanazawa ABSTRACT. A usual idea of transforming imperative programs over the integers into rewriting systems is to transform programs into transition systems where some auxiliary function symbols are introduced to represent intermediate states. In transforming a function calling itself or others, we allow an auxiliary symbol corresponding to a function call to have an extra argument where the called function is executed, and values stored in global variables are passed to the called function as its extra arguments, and the caller receives the possibly updated values together with returned values of the called function. Such a mechanism for global variables performs well under sequential execution, but is not enough to adapt to parallel execution. In this paper, we propose a transformation of imperative programs with function calls and global variables into logically constrained term rewriting systems that represent actions of the whole execution environment with call stacks. More precisely, we prepare a function symbol for the whole environment of execution, which stores global variables and a call stack as its arguments. For a function call, we push the frame to the stack and pop it after the execution. Any running frame is executed at the top of the stack, and statements accessing global variables are represented by rewrite rules for the environment symbol. |
11:00 | SPEAKER: Ariel Kellison ABSTRACT. Brouwer developed the notion of mental constructions based on his view of mathematical truth as experienced truth. These constructions extend the traditional practice of constructive mathematics, and we believe they have the potential to provide a broader and deeper foundation for various constructive theories. We here illustrate mental constructions in two well-studied theories – computability theory and plane geometry – and discuss the resulting extended mathematical structures. Further, we demonstrate how these notions can be embedded in an implemented formal framework, namely the constructive type theory of the Nuprl proof assistant. Additionally, we point out several similarities in both the theory and implementation of the extended structures.
11:15 | Can every real proof be represented by a formal proof? The Hilbert-Gentzen thesis ABSTRACT. The thesis that any mathematical proof can be formalized was advocated by (among others) Hilbert and Gentzen around a century ago. The Hilbert-Gentzen thesis says: “Every real proof can be represented by a formal proof.” This article will go through several possible interpretations of this thesis, to be able to understand and discuss it fairly. To fully understand the reasoning that separates the different interpretations, there will be a discussion of the different arguments for why formalisation has the status it currently has within mathematics. The H-G thesis has held a firm place in the foundations and philosophy of mathematics since these thoughts were first expressed a century ago. However, the recent “informal proofs debate” gives us several arguments against the thesis, arguing that it is false and that it should be rejected. This talk will (1) define the thesis with help from the context in which it was introduced, (2) give the main arguments for accepting it, (3) give the main arguments for refuting it, as stated by M. Marfori, C. Cellucci, K. Gödel, and F. Tanswell, and then (4) evaluate the strength and scope of the arguments and counterarguments. This talk will be a summary and discussion of the current status of the Hilbert-Gentzen thesis, and the conclusion will be that the thesis overestimates the value of formalisation and that it should be rejected.
11:30 | ABSTRACT. In this work, we explore proof-theoretical connections between sequent, nested and labelled calculi. In particular, we show a semantical characterisation of intuitionistic, normal and non-normal modal logics for all these systems, via a case-by-case translation from labelled nested to labelled sequent systems.
11:45 | SPEAKER: Sonia Marin ABSTRACT. We present a labelled deduction for intuitionistic modal logic that comes with two relation symbols, one for the accessible world relation associated with the Kripke semantics for modal logics, and one for the preorder relation associated with the Kripke semantics for intuitionistic logic. Thus, our labelled system is in close correspondence to the birelational Kripke models. |
12:00 | ABSTRACT. We introduce a new inference problem for topological logics, the unifiability problem. Our main result is that, within the context of the mereotopology of all regular closed polygons of the real plane, unifiable formulas always have finite complete sets of unifiers. |
14:00 | Coalgebras and Kleisli Maps for Probability |
14:45 | Coalgebraic Logics: From Branching Time to Linear Time |
14:00 | A Coq mechanised formal semantics for real life SQL queries: Formally reconciling SQL and (extended) relational algebra. SPEAKER: Véronique Benzaken ABSTRACT. TBA
15:00 | Discussion with the Coq development team |
14:00 | ABSTRACT. Cryptographic constant-time programming is an established coding discipline used in cryptography to secure programs against timing attacks. Most, if not all, cryptography libraries try to adhere to this coding style. The C programming language is oftentimes considered a portable assembly, and is hence used by a great number of cryptography libraries. However, what is executed by the hardware is actual assembly, not C. One can thus wonder whether security properties are preserved through compilation, as even formally verified compilers only ensure preservation of observable behaviors. We present in this paper how to derive a natural framework to prove preservation of cryptographic constant-time security from simulation-based proofs of compiler correctness. We also give insights on how this could be adapted to CompCert.
14:30 | SPEAKER: Daniel Dougherty ABSTRACT. A cryptographic protocol can be deployed in a variety of environments, but existing methods of protocol analysis focus only on the protocol, without being sensitive to assumptions about these environments. We present LPA, a tool which analyzes protocols in context. LPA uses two programs, cooperating with each other: CPSA, a well-known system for protocol analysis, and Razor, a model-finder based on SMT technology. Our analysis follows the enrich-by-need paradigm, in which models of protocol execution are generated and examined. The choice of which models to generate is important, and we develop a careful motivation and evaluation of LPA's strategy of building minimal models. In fact "minimality" can be defined with respect to either of two preorders, namely the homomorphism preorder and the embedding preorder (i.e. the preorder of injective homomorphisms); we discuss the merits of each. Our main technical contributions are algorithms for building homomorphism-minimal models and for generating a set-of-support for the models of a theory, in each case by scripting interactions with an SMT solver.
15:00 | ABSTRACT. We present a new direction for the formal specification of cryptographic schemes using types. In this approach, we specify a cryptographic protocol using the tools of homotopy type theory. Homotopy type theory adds the notion of higher inductive type and univalence axiom to Martin-Löf’s intensional type theory. A higher inductive type allows us to introduce constructors for paths and higher-dimensional paths in addition to points. The paths are then identified by equivalences in the universe through univalence. A higher inductive type can act as a front-end mapped to a concrete cryptographic implementation in the universe. By having a higher inductive type front-end, we can encode domain-specific laws of the cryptographic implementation as higher-dimensional paths. Due to univalence and functoriality, the path structure will be preserved in the mapping and realized by equivalence in the universe. Using this model we can achieve various guarantees on the correctness of the cryptographic implementation. |
14:00 | Tendrils of Crime: Visualizing the Diffusion of Stolen Bitcoins SPEAKER: Mansoor Ahmed ABSTRACT. As cryptocurrencies enter the lives of ordinary people, cybercriminals have started using them to facilitate crime. Europol last year estimated that about 3-4% of crime proceeds are laundered through the use of cryptocurrencies, and that this figure is rising steadily. There are two main reasons: lack of regulation and anonymity. Recently, Anderson et al. showed that it is both computationally and legally possible to follow the 'taint' that a stolen coin leaves as it propagates through the network. In this paper we present new visualisation mechanisms for taint propagation in Bitcoin that display how cybercriminals launder money.
14:45 | Deciding the Emptiness of Attack trees SPEAKER: Maxime Audinot ABSTRACT. We define and study the decision problem of the emptiness of an attack tree. This decision problem reflects the natural question of knowing whether some attack scenario described by the tree holds in a given model of the system to defend. We establish accurate complexity bounds, ranging from NP-completeness for arbitrary trees down to NLOGSPACE-completeness for trees with no occurrence of the AND operator. Additionally, if the input system to defend has a succinct description, we show that the emptiness problem is PSPACE-complete. |
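To fix intuitions, here is the propositional shadow of the emptiness question as a hedged Haskell sketch: an attack tree over atomic goals, and a naive check of whether some scenario is realizable given which atomic goals the system model admits. The paper's actual setting (scenarios as paths in a transition system, which is what makes AND computationally hard) is richer than this simplification.

```haskell
-- Attack trees over atomic goals of type g.
data ATree g = Leaf g | And [ATree g] | Or [ATree g]

-- Naive realizability against a predicate saying which atomic goals
-- the model admits; the tree is "empty" when no scenario is realizable.
realizable :: (g -> Bool) -> ATree g -> Bool
realizable admits (Leaf g) = admits g
realizable admits (And ts) = all (realizable admits) ts
realizable admits (Or ts)  = any (realizable admits) ts
```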
14:00 | SPEAKER: Kuen-Bang Hou Favonia |
14:30 | |
15:00 |
14:00 | Informal Report on FSCD |
14:30 | Discussion on Reviewing Culture in TCS |
15:15 | Discussion on IFIP |
14:00 | Intersection Types for Unboundedness Problems (ITRS invited talk) ABSTRACT. Intersection types have been originally developed as an extension of |
15:00 | SPEAKER: Pedro Ângelo ABSTRACT. Gradual typing integrates dynamic and static types. Since its introduction, it has been successfully applied to several extensions of simple types, including subtyping, parametric polymorphism and substructural types. This work studies its application to intersection type systems. We introduce a new approach to define a gradual version of the original intersection type system of Coppo and Dezani. We then present a new operational semantics for terms typed with static and dynamic intersection types, which enables dynamic type casts and identifies the causes of type errors in a framework inspired by the blame calculus. Finally we prove several properties of our system, including a correctness criterion and soundness of the extension of the original gradual type system.
14:00 | ABSTRACT. We propose to return to the construction carried out previously, where we claimed that simply-typed approximations = intersection type derivations, in the precise sense that we built a categorical equivalence between specific type systems (that encompass all well-known intersection type systems used to characterize normalization) and simply-typed approximations, which we realize as approximation functors arising from the translation of the language into linear logic. By studying these specific functors, we claim that their main feature is that they map the exponential of linear logic to what can reasonably be called a resource modality, corresponding either to linear, affine or cartesian intersection types. So, we present the story under the slightly different, and less syntactic, slogan: intersection type system = multiplicative linear logic + resource modality. These resource modalities are linked with well-known systems. In particular, generalized species of structure can be seen as a strictification of the Kleisli category of the linear resource modality. The study of the link between these different resource modalities can shed new light on the extensional collapse, paving the way for a study of this collapse for dynamic semantics (such as the Geometry of Interaction and ordered combinatory algebras, used to account for forcing and realizability).
14:20 | SPEAKER: Elaine Pimentel ABSTRACT. Benchmarking automated theorem proving (ATP) systems using standardized problem sets is a well-established method for measuring their performance. However, the availability of such libraries for non-classical logics is very limited. In this work we seek to start a discussion of benchmarks for Girard’s linear logic and some of its variants. For some quick bootstrapping of the collection of problems, we use translations of the collection of Kleene’s intuitionistic theorems in the traditional monograph Introduction to Metamathematics. We analyze four different translations of intuitionistic logic into linear logic and compare their proofs using linear logic based provers with focusing.
14:55 | SPEAKER: Lê Thành Dũng Nguyễn ABSTRACT. We introduce the notion of coherent graphs, and show how they can be used to define dynamic semantics for Multiplicative Linear Logic. The models thus obtained are finite in several respects (finite graphs, finite generation of types) and thereby improve on previous constructions by Seiller. We also discuss how the notion of coherence can be used to introduce non-determinism.
14:00 | Ornamentation put into practice in ML ABSTRACT. Ornaments have been introduced as a means to lift functions operating on some datatype into new functions operating on an ornamentation of this datatype, i.e., a variant of this datatype carrying additional information. A typical example consists in viewing lists as an ornamentation of natural numbers and then lifting the addition into the concatenation---the user just providing the missing parts of the code. Ornaments have first been introduced in Agda and extensively studied in a categorical setting. We have been exploiting and adapting the idea of ornamentation in the context of ML. Here, we are interested in syntactic liftings where the lifted function is related to the original one as in the general setting but with a closer correspondence: on the one hand, the behavior (and ideally the computation steps) of the lifted function should coincide with the behavior of the original function, except for the additional user-provided code; on the other hand, the lifted code should be as close as possible to the code the user would have manually written. Our approach is to first synthesize a generic version of the base program that can then be specialized to any ornamentation of the base version (including the base version as a particular case), and finally simplified. While the generic version of ML base code requires some form of dependent types to typecheck, we ensure by construction, using staged computation and dependent types, that each instantiation to concrete ornaments can be simplified back into ML code. We use parametricity to show the correspondence between the base and lifted versions. We verified on (small) examples that the lifted code is often close to hand-written code. This approach to ornamentation can be extended to perform disornamentation---with new design challenges and interesting applications. Ornamentation and disornamentation are just two examples of a more general type-based approach to code refactoring and evolution. |
15:00 | Profunctor Optics and the Yoneda Lemma SPEAKER: Jeremy Gibbons ABSTRACT. Profunctor optics are a neat and composable representation of bidirectional data accessors, including lenses, and their dual, prisms. The profunctor representation exploits higher-order functions and higher-kinded type constructor classes. The relationship between the profunctor representation and the familiar representation in terms of "getter" and "setter" functions is not at all obvious. We derive the former from the latter, making the relationship clear. It turns out to be a fairly direct application of the Yoneda Lemma, arguably the most important result in category theory. |
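In standard Haskell (assuming the profunctors package), the two representations the talk relates look as follows; the easy direction, from the concrete representation to the profunctor one, is a one-liner, and the talk's contribution is deriving the correspondence systematically via the Yoneda lemma. Record names here are ours.

```haskell
{-# LANGUAGE RankNTypes #-}
import Data.Profunctor (Profunctor (dimap), Strong (first'))

-- The familiar "getter/setter" representation of a lens.
data LensGS s t a b = LensGS { view :: s -> a, update :: (b, s) -> t }

-- The profunctor representation: a transformer of Strong profunctors.
type Lens s t a b = forall p. Strong p => p a b -> p s t

-- Concrete-to-profunctor direction: pair the focus with the whole
-- source, act on the focus, then write it back.
lens :: LensGS s t a b -> Lens s t a b
lens (LensGS v u) = dimap (\s -> (v s, s)) u . first'
```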
14:00 | Preprocessing for (D)QBF SPEAKER: Ralf Wimmer |
14:20 | Initial study on the effect of different QBF solvers and preprocessors on link-information sensitive QBF encodings for Abstract Dialectical Frameworks SPEAKER: Uwe Egly |
14:40 | SPEAKER: Florent Capelli ABSTRACT. We show how knowledge compilation can be used as a tool for QBF. More precisely, we show that one can apply quantification to certain data structures used in knowledge compilation; combined with the fact that restricted classes of CNF formulas can be compiled into these data structures, this can be used to show fixed-parameter tractability results for QBF.
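A toy Haskell version of the underlying idea: on an ordered decision-diagram representation, quantifying out a variable is a cheap structural operation, since exists x. f = f[x:=0] ∨ f[x:=1]. The sketch is ours and omits the reduction and sharing that real knowledge compilers rely on.

```haskell
-- Ordered (but unreduced) decision diagrams: Node v lo hi, with
-- variables appearing in increasing order along every path.
data DD = Leaf Bool | Node Int DD DD deriving (Eq, Show)

-- Cofactor: fix variable x to the value b.
restrict :: Int -> Bool -> DD -> DD
restrict _ _ l@(Leaf _) = l
restrict x b (Node v lo hi)
  | v == x    = if b then hi else lo
  | otherwise = Node v (restrict x b lo) (restrict x b hi)

-- Disjunction by synchronized descent on the variable order.
disj :: DD -> DD -> DD
disj (Leaf b) d = if b then Leaf True else d
disj d (Leaf b) = if b then Leaf True else d
disj m@(Node v lo hi) n@(Node w lo' hi')
  | v == w    = Node v (disj lo lo') (disj hi hi')
  | v <  w    = Node v (disj lo n) (disj hi n)
  | otherwise = Node w (disj m lo') (disj m hi')

-- Existential quantification; the universal case is the dual with
-- conjunction.
exQ :: Int -> DD -> DD
exQ x f = disj (restrict x False f) (restrict x True f)
```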
15:00 | SPEAKER: Matthias van der Hallen ABSTRACT. Recent solver research has developed powerful QBF solvers. Alas, we know of few tools that provide a modelling language on a higher level, translating this to QBF. This is surprising, as in the closely related field of SAT solvers, research has gone hand in hand with the development of such systems. This extended abstract on work in progress reports on a system that allows the use of second-order logic as a high-level modelling language and that grounds (translates) models written in such a language to a QBF formula. We provide an example encoding, outline the grounding process and propose further research and experiments.
15:20 | PrideMM: QBFEVAL Benchmarks (QBFEval18 Contribution) SPEAKER: Radu Grigore |
15:25 | Chess Solving and Composing (QBFEval18 Contribution) |
14:00 | SPEAKER: Guy Avni ABSTRACT. Two-player games on graphs are widely studied in formal methods as they model the interaction between a system and its environment. The game is played by moving a token throughout a graph to produce an infinite path. There are several common modes to determine how the players move the token through the graph; e.g., in turn-based games the players alternate turns in moving the token. We study the bidding mode of moving the token, which, to the best of our knowledge, has never been studied in infinite-duration games. Both players have separate budgets, which sum up to 1. In each turn, a bidding takes place. Both players submit bids simultaneously, and a bid is legal if it does not exceed the available budget. The winner of the bidding pays his bid to the other player and moves the token. Reachability bidding games, called Richman games, have been studied in [LLPU96, LLPSU99]. There, a central question is the existence and computation of threshold budgets; namely, a value t ∈ [0,1] such that if Player 1's budget exceeds t, he can win the game, and if Player 2's budget exceeds 1-t, he can win the game. We focus on parity and mean-payoff games. We show the existence of threshold budgets and show that the complexity of finding them coincides with the NP ∩ coNP complexity of reachability bidding games. The solution for mean-payoff consists of our most technically challenging contribution, where we construct optimal strategies for the players while extending and generalizing the probabilistic connection that was known for reachability bidding games.
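To illustrate the threshold notion, here is a value-iteration sketch in Haskell for reachability bidding (Richman) games, using the local averaging property established in the cited work: t(target) = 0, t(losing sink) = 1, and otherwise t(v) is the average of the minimal and maximal thresholds among v's successors. The graph encoding and the fixed iteration count are our assumptions.

```haskell
import qualified Data.Map as M

-- Vertices mapped to their successor lists; every non-terminal vertex
-- is assumed to have at least one successor.
type Game = M.Map Int [Int]

-- One round of value iteration over the averaging equations.
step :: Game -> Int -> Int -> M.Map Int Double -> M.Map Int Double
step g target sink t = M.mapWithKey upd t
  where
    upd v _
      | v == target = 0
      | v == sink   = 1
      | otherwise   = let ts = map (t M.!) (g M.! v)
                      in (minimum ts + maximum ts) / 2

-- Approximate threshold budgets, starting from 0.5 everywhere.
thresholds :: Game -> Int -> Int -> M.Map Int Double
thresholds g target sink =
  iterate (step g target sink) (M.map (const 0.5) g) !! 1000
```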
14:40 | SPEAKER: Antonio Di Stasio ABSTRACT. In this paper we provide a broad investigation of the symbolic approach for solving Parity Games. Specifically, we implement in a fresh tool, called SymPGSolver, four symbolic algorithms to solve Parity Games and compare their performances to the corresponding explicit versions for different classes of games. By means of benchmarks, we show that for random games, even for constrained random games, explicit algorithms actually perform better than symbolic algorithms. The situation changes, however, for structured games, where symbolic algorithms seem to have the advantage. This suggests that when evaluating algorithms for parity-game solving, it would be useful to have real benchmarks and not only random benchmarks, as the common practice has been. |
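As a reference point for the explicit algorithms benchmarked here, the recurring building block is the attractor computation: the set of vertices from which a player can force the token into a target set. A small Haskell sketch (the game encoding is our illustrative choice, and every vertex is assumed to have a successor):

```haskell
import qualified Data.Set as S

data Player = Even | Odd deriving Eq

-- Least fixpoint: repeatedly add vertices from which player p can
-- force reaching the current set in one step (p needs some successor
-- inside; the opponent must have all successors inside).
attractor :: Player -> [Int] -> (Int -> Player) -> (Int -> [Int])
          -> S.Set Int -> S.Set Int
attractor p vs owner succs = go
  where
    go a
      | a' == a   = a
      | otherwise = go a'
      where
        a' = S.union a (S.fromList
               [ v | v <- vs
                   , not (S.member v a)
                   , let ss = succs v
                   , if owner v == p
                       then any (`S.member` a) ss
                       else all (`S.member` a) ss ])
```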
14:00 | SPEAKER: Sebastian Buruiana ABSTRACT. We give a language-parametric solution to the problem of total correctness by automatically reducing it to the problem of partial correctness, under the assumption that a variant that decreases with each program step in a well-founded order is provided. Our approach assumes that the programming language semantics is given as a rewrite theory. We implement a prototype on top of the RMT tool and show that it works in practice on a number of examples. |
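A minimal sketch of the reduction for a loop, assuming a variant \(v\) ranging over a well-founded order \((W, \prec)\) is provided: total correctness of \(\mathsf{while}\ b\ \mathsf{do}\ S\) with invariant \(I\) follows from the partial-correctness obligation \[ \{\, I \wedge b \wedge v = V_0 \,\}\; S\; \{\, I \wedge v \prec V_0 \,\}, \] where \(V_0\) is a fresh logical variable recording the variant's value before the iteration; well-foundedness of \(\prec\) then rules out infinite executions. The notation here is illustrative, not the RMT tool's input syntax.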
14:30 | SPEAKER: Naoki Nishida ABSTRACT. The grammar representation of a narrowing tree for a syntactically deterministic conditional term rewriting system and a pair of terms is a regular tree grammar that generates expressions for the substitutions obtained by all possible innermost-narrowing derivations that start with the pair and end with non-narrowable terms. In this paper, under a certain syntactic condition, we show how to transform the grammar representation of a narrowing tree into another regular tree grammar that generates the ranges of the substitutions generated by the grammar representation. In our previous work, such a transformation was restricted to ranges w.r.t. a single given variable, and thus its usefulness was limited. We extend the previous transformation by representing the range of a substitution as a tuple of terms, obtained by the coding of finite trees. |
15:00 | SPEAKER: Christian Herrera ABSTRACT. We propose AdaHorn, a model checker for the verification of Ada programs w.r.t. correctness properties. AdaHorn translates Ada programs together with their related correctness properties into a set of Constrained Horn Clauses, which are solved by well-known solvers such as Eldarica and PDR-Z3. We propose a set of Ada programs inspired by C programs from the competition SV-COMP, and use them to compare the effectiveness of AdaHorn and GNATProve, a well-known static analyzer for Ada programs. Our experiments show that AdaHorn outperforms GNATProve, yielding correct results in more cases. |
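As an illustration of the target format (the clauses AdaHorn actually emits may differ), a loop \(\mathsf{while}\ x < N\ \mathsf{do}\ x := x + 1\) with precondition \(x = 0 \wedge 0 \le N\) and postcondition \(x = N\) gives rise to Constrained Horn Clauses over an unknown invariant \(\mathit{Inv}\): \[ x = 0 \wedge 0 \le N \to \mathit{Inv}(x,N), \qquad \mathit{Inv}(x,N) \wedge x < N \to \mathit{Inv}(x+1,N), \qquad \mathit{Inv}(x,N) \wedge x \ge N \wedge x \ne N \to \mathit{false}, \] which a CHC solver discharges by finding, e.g., \(\mathit{Inv}(x,N) \equiv 0 \le x \le N\).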
14:00 | POPLMark Reloaded: Mechanizing Logical Relations Proofs ABSTRACT. Mechanizing formal systems, given via axioms and inference rules, together with proofs about them plays an important role in establishing trust in formal developments. Over the past decade, the POPLMark challenge popularized the use of proof assistants in mechanizing the metatheory of programming languages. Focusing on the meta-theory of System F with subtyping, it allowed the programming-languages community to survey existing techniques for representing and reasoning about syntactic structures with binders, and promoted the use of proof assistants. Today, mechanizing proofs is a stable fixture in the daily life of programming languages researchers. As a follow-up to the POPLMark Challenge, we propose a new collection of benchmarks that use proofs by logical relations. Such proofs are now used to attack problems in the theory of complex language models, with applications to issues in equivalence of programs, compiler correctness, representation independence and even more intensional properties such as non-interference, differential privacy and secure multi-language inter-operability. Yet, they remain challenging to mechanize. In this talk, we focus on one particular challenge problem, namely strong normalization of a simply-typed lambda-calculus with a proof by Kripke-style logical relations. We advocate a modern view of this well-understood problem by formulating our logical relation on well-typed terms. Using this case study, we share some of the lessons learned tackling this challenge problem in Beluga, a proof environment that supports higher-order abstract syntax encodings, first-class contexts and first-class substitutions. We also discuss and highlight similarities, strategies, and limitations in other proof assistants when tackling this challenge problem. We hope others will be motivated to submit solutions! The goal of this talk is to engage the community in discussions on what support in proof environments is needed to truly bring mechanized metatheory to the masses. This is joint work with Andreas Abel (Chalmers), Aliya Hameer (McGill), Alberto Momigliano (Univ. of Milan), Steven Schäfer (Univ. Saarbrücken), Kathrin Stark (Univ. Saarbrücken) |
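For orientation, one standard (non-Kripke) formulation of the logical relation behind strong normalization of the simply-typed lambda-calculus is \[ \mathcal{R}_{\iota} = \mathrm{SN}, \qquad \mathcal{R}_{A \to B} = \{\, t \mid \forall s \in \mathcal{R}_A.\; t\,s \in \mathcal{R}_B \,\}, \] with the usual side conditions on neutral terms omitted here; the challenge problem's Kripke-style version additionally indexes the relation by typing contexts and closes it under context extensions, which is precisely what makes the mechanization interesting.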
15:00 | ABSTRACT. Proving properties about proof systems is a necessary task to make sure that they are sound and satisfy the basic criteria for the application being considered. Nevertheless, it is a tedious and error-prone task. In this talk, I will present ongoing work on the development of a trustworthy system that helps with the verification of properties of proof calculi. This system allows for a natural specification of sequent-style calculi and visualization of rules in LaTeX. The user is also able to construct simple proof trees. Moreover, it can be used to check identity expansion, cut-elimination and permutation lemmas at the click of a button. Our goal is to free researchers from having to write down big and repetitive proofs, and to move towards the use of mechanized proofs where the meta-theory of proof systems is concerned. |
15:15 | SPEAKER: Liron Cohen ABSTRACT. Transitive closure logic is obtained by a modest addition to first-order logic that affords enormous expressive power. Most importantly, it provides a uniform way of capturing finitary inductive definitions. Thus, particular induction principles do not need to be added to the logic; instead, all induction schemes are available within a single, unified language. We here present a non-well-founded proof system for transitive closure logic which is complete for the standard semantics. This system captures implicit induction, and its soundness is underpinned by the principle of infinite descent. We also consider its subsystem of cyclic proofs. |
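Concretely, the transitive closure operator underlying the logic can be given the standard semantics \[ (TC_{x,y}\,\varphi)(s,t) \;\iff\; \exists n \ge 1\; \exists z_0,\dots,z_n.\; z_0 = s \wedge z_n = t \wedge \bigwedge_{i<n} \varphi(z_i,z_{i+1}), \] which makes visible why a single operator suffices to capture finitary inductive definitions.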
14:30 | SPEAKER: Antonina Kolokolova ABSTRACT. One of the main successes in circuit complexity is the strong lower bounds on the complexity of monotone circuits. By analogy, one might expect that studying monotone reasoning would lead to similar lower bounds in proof complexity. Yet surprisingly, Atserias, Galesi and Pudlak have given a general quasipolynomial simulation of the sequent calculus LK by its monotone fragment MLK. Moreover, their techniques give a polynomial simulation, provided properties of AKS sorting networks can be formalized inside LK. Such a formalization was obtained in 2011 by Jerabek, assuming the existence of expander graphs is provable in LK. Several major results in complexity theory, such as undirected graph reachability in logspace (Reingold, Rozenman-Vadhan) and monotone formulas for sorting (Ajtai-Komlos-Szemeredi sorting networks), are based on properties of expander graphs. But what is the complexity of such proofs? Many of the existing expander constructions rely on computationally non-trivial algebraic concepts for their analysis, such as the spectral gap, even when the constructions themselves are combinatorial. In this work, we show that the existence of expanders of arbitrary size can be proven using NC^1 reasoning. We give a fully combinatorial analysis of an iterative construction of expanders using the replacement product, powering and tensoring, and formalize this analysis in the bounded arithmetic system VNC^1. Combined with Atserias, Galesi, Pudlak'2002 and Jerabek'2011, this completes the proof that monotone LK is as powerful as LK for proving monotone sequents. Joint work with Sam Buss, Valentine Kabanets and Michal Koucky. |
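For concreteness, the combinatorial property at stake can be taken to be edge expansion: a \(d\)-regular graph \(G=(V,E)\) is an expander if there is a fixed \(\varepsilon > 0\) with \[ \frac{|E(S, V \setminus S)|}{|S|} \;\ge\; \varepsilon \qquad \text{for all } S \subseteq V \text{ with } |S| \le |V|/2; \] the point of the abstract is that an iterative construction achieving such a bound admits a fully combinatorial analysis formalizable in VNC^1.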
15:00 | ABSTRACT. We propose arithmetic theories that link systems in monotone proof complexity to classes in monotone computational complexity. In particular, we focus on the case of polynomial time, for which monotone function classes and monotone proof systems have recently been proposed. We complete the proof-complexity-theoretic account of this subject by proposing an accompanying logical theory, in the usual sense of 'bounded arithmetic'. |
16:00 | Martingale-Based Methods for Reachability Probabilities: Excitements and Afterthoughts about Coalgebras |
16:45 | Coalgebra Dreams |
16:00 | SPEAKER: Mirai Ikebuchi ABSTRACT. We present a Coq plugin for performing completion procedures on rewriting rules given by users. The result is added to a hint rewrite database that works with an automated rewriting tactic, enabling us to write proof scripts more simply. |
16:30 | SPEAKER: Cyprien Mangin ABSTRACT. We present a translation of guarded recursive functions in Coq to well-founded recursive functions using only case analysis eliminators and the eliminator of the accessibility predicate. This work in progress is a possible first step towards a formalization of Coq's guard condition. We also present an idea for extending guarded recursion to handle inductive-inductive definitions. |
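For reference, the accessibility predicate in question is the standard inductive predicate with a single constructor, \[ \frac{\forall y.\; R\,y\,x \;\to\; \mathrm{Acc}_R\,y}{\mathrm{Acc}_R\,x}, \] whose eliminator allows recursion on any element all of whose \(R\)-predecessors are accessible; this is the foundation of well-founded recursion in Coq.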
17:00 | ABSTRACT. Coq's notation system is both extremely powerful and confusingly ad-hoc. While powerful enough to pretty-print abstract syntax trees in most domain-specific languages, how to do so does not seem to be common knowledge. Typical questions arising from such an endeavor might include "How do I pick notation levels?", "Why are these notations clashing?", "Which things should be marked as symbols?", "How do I use boxes in `format`?", and "How do I get parentheses to show up (only) where I want them to?" This interactive presentation aims to serve as a guide to these questions and more, by demonstrating and explaining how to pretty-print subsets of C using only Coq's `Notation` mechanism. |
17:30 | ABSTRACT. We present some early work being done to utilize Artificial Intelligence for proof search in the Coq theorem prover. In a similar vein to the TacticToe project for HOL4, we are working on a system that finds proofs of goals on the tactic level, by learning from previous tactic scripts. Learning on the level of tactics has several advantages over more low-level approaches. First, this allows for much coarser proof steps, meaning that more complicated proofs can be found during proof search. Second, it allows for the usage of custom-built, domain-specific tactics that were previously defined and used in the development. This will allow for better performance of the system in very specialized domains. The rest of this abstract describes the required components of our system. Since a number of technical issues need to be addressed, we hope to solicit feedback from the Coq developers at the workshop. |
16:00 | Frames and frame relations SPEAKER: Imanol Mozo Carollo ABSTRACT. We follow two threads in Dana Scott's mathematics to study frames in a different light. First, injectivity is an important idea. Second, relational reasoning can get at functional behavior (via, for example, approximable maps). These will permit us to situate frames in larger ambient categories in which constructions arise from the combination of injectivity and relational reasoning. In particular, the assembly of a frame arises, up to isomorphism, as a sublocale \(\mathcal{Q}(L)\) of the frame of endo-relations on the given frame. We prove this by showing directly that \(\mathcal{Q}(L)\) has the universal property of the assembly. From independent discoveries (Bruns and Lakser \cite{BL70}; Horn and Kimura \cite{HK71}), we know that frames are precisely the injective (meet) semilattices. Simply knowing this does not get us very far in studying frames \emph{qua} frames. But the full subcategory of \(\textsf{SL}\) consisting of injective semilattices comes closer. A semilattice map \(h : M \to L\) between two frames can be viewed ``approximably'' as a relation \(R_h\subseteq L\times M\) defined by \(x\leq h(y)\). Such a relation is closed under weakening: \(x\leq x'\), \(x' \mathrel{R_h}y'\) and \(y'\leq y\) implies \(x\mathrel{R_h}y\). It is a subframe of \(L\times M\). Any such relation, called a \emph{frame relation}, determines a semilattice homomorphism. So the category \(\overline{\textsf{Frm}}\) of frames and frame relations is opposite to the full subcategory of \(\textsf{SL}\) consisting of injective semilattices. In \(\overline{\textsf{Frm}}\), an adjoint pair of morphisms, i.e., an adjoint pair of frame relations, corresponds to an adjoint pair of semilattice homomorphisms. So by adjunction, the lower adjoint determines a semilattice homomorphism preserving all joins. Thus the category \(\textsf{Map}(\textsf{Frm})\), consisting of frames and adjoint pairs of frame relations \((\check f\dashv \hat f)\), is isomorphic to the category \(\textsf{Frm}\). A subobject (a sublocale) is then an isomorphism class of extremal epimorphisms in this category. One can check that a sublocale of a frame \(L\) corresponds exactly to an idempotent, reflexive frame relation. What we have said so far is mostly known, at least in folklore. Nevertheless, this points toward studying frames directly via frame relations, as they are (dual to) the natural morphisms of injective semilattices. To illustrate the point, we construct the assembly of a frame via frame relations, showing that its familiar universal property comes about naturally from this approach. Between any two posets \(P\) and \(Q\), the weakening relations from \(P\) to \(Q\) obviously form a completely distributive lattice, hence a frame. If \(P\) is a meet semilattice and \(Q\) is a join semilattice, then the Heyting structure is especially easy to define: \(x\mathrel{(\Phi\to\Psi)}y\) if and only if for all \(w\) and \(z\), \(w\mathrel{\Phi}z\) implies \(w\wedge x\mathrel{\Psi} y\vee z\). One uses this to show that if \(L\) and \(M\) are frames, then the collection of all frame relations \(\overline{\textsf{Frm}}(L,M)\) is in fact a sublocale of the frame of all weakening relations. But composition of frame relations preserves only finite meets, and not arbitrary joins. So it is natural to consider the homsets as injective semilattices and not as frames, per se. 
Though we continue to call them frames, this suggests that we are actually studying the injective semilattice enriched category of injective semilattices. For a frame \(L\), the frame \(\overline{\textsf{Frm}}(L,L)\) includes the reflexive, idempotent frame relations. These, as we have noted, correspond to sublocales of \(L\). Clearly, they are closed under arbitrary intersections. And a simple calculation using the Heyting arrow shows that they constitute a sublocale of \(\overline{\textsf{Frm}}(L,L)\), denoted by \(\mathcal{Q}(L)\). Moreover, \(L\) embeds in \(\mathcal{Q}(L)\) via the map sending \(a\in L\) to the frame relation \(x\mathrel{\Gamma_a}y\) if and only if \(x\leq y\wedge a\). Note that \(\Gamma_a\) corresponds to the closed sublocale determined by \(a\). The relation corresponding to the open sublocale determined by \(a\) is \(x\mathrel{\Upsilon_a}y\) if and only if \(a\vee x\leq y\). Any adjoint pair \(\check f\dashv \hat f\) of frame relations between \(L\) and \(M\) extends by composition with the embedding of \(L\) in \(\mathcal{Q}(L)\) to a pair of frame relations \(f^*,f_*\) between \(\mathcal{Q}(L)\) and \(M\). These \emph{extend} \(\check f\) and \(\hat f\) in a natural way, and furthermore \(f^*;f_* \leq 1_M\). It is natural to ask when these are in fact adjoint. The frame \(M\) has a \emph{center} relation: \(a\prec^*_M b\) if and only if there is some complemented \(w\) so that \(a\leq w\leq b\). Now if \(f=(\check f\dashv \hat f)\) is an adjoint pair of frame relations so that \(\check f;\hat f\subseteq \mathord{\prec^*_M}\), then \(f^*\dashv f_*\) is the unique adjoint pair extending \(f\) along \(\Gamma\). Conversely, using the injectivity of \(M\) and \(\mathcal{Q}(L)\), if \(f^*\) is adjoint to \(f_*\), then \(\check f;\hat f\subseteq \mathord{\prec^*_M}\). Hence, \(\mathcal{Q}(L)\) has the universal property that any adjoint pair of frame relations from \(L\) to \(M\) that sends \(L\) to the center of \(M\) factors uniquely through \(\Gamma\). \begin{references}{99} \bibitem{BL70} Bruns, G. and Lakser, H. Injective hulls of semilattices. \emph{Canadian Mathematical Bulletin}, \textbf{13} (1970), 115--118. \bibitem{HK71} Horn, A. and Kimura, N. The category of semilattices. \emph{Algebra Universalis}, \textbf{1} (1971), 26--38. \end{references} |
16:25 | Extending Stone Duality to Relations SPEAKER: M. Andrew Moshier ABSTRACT. The importance of, on the one hand, categories of relations and, on the other hand, Stone Duality does not seem to be in need of much justification in a workshop on domain theory. The purpose of this presentation is to explain how the two can be combined: given a concrete duality in which the arrows are structure-preserving functions, we extend it to a duality of the corresponding categories of relations (Thm 1). We use this approach to give a reconstruction of domain logics for non-zero-dimensional spaces (Thm 2), based on Scott's idea of moving from algebraic to continuous lattices by splitting certain idempotents. |
16:50 | Applications of entailments: de Groot duality ABSTRACT. We study interaction between de Groot duality and the semantic domains of various choices (i.e. powerlocale constructions) on stably compact locales. We give a simple point-free account of the duality on several powerlocale constructions. This is done via the representation of a stably compact locale by means of Scott’s entailment relation with an idempotent relation due to Coquand and Zhang [Theoret. Comput. Sci., 305(1-3):77–84, 2003]. |
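As a reminder (a standard definition, recalled for convenience), an entailment relation in Scott's sense is a relation \(\vdash\) between finite subsets of a set satisfying \[ \frac{A \cap B \neq \emptyset}{A \vdash B}\;(\mathrm{R}) \qquad \frac{A \vdash B}{A, A' \vdash B, B'}\;(\mathrm{W}) \qquad \frac{A \vdash B, c \quad\; A, c \vdash B}{A \vdash B}\;(\mathrm{Cut}); \] the idempotent relation of Coquand and Zhang is an additional datum on top of such a structure.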
16:00 | Continuous probability distributions in concurrent games SPEAKER: Hugo Paquet ABSTRACT. Probabilistic strategies were first introduced by Danos and Harmer in a probabilistic extension of Hyland-Ong games. They work particularly well for modelling probabilistic programs with state: the model is fully abstract for a probabilistic variant of Idealised Algol. Recently, concurrent games were introduced as an alternative framework for game semantics, based on event structures, a fundamental model for concurrent processes. Concurrent games can also be enriched with probability, using a notion of probabilistic event structure, and this makes possible an analysis of Probabilistic PCF (including an intensional full abstraction result). However, none of the above readily supports continuous distributions, making those models unsatisfactory for practical probabilistic languages, in which such distributions are essential. Vákár and Ong have recently announced a generalisation of the Danos-Harmer model which supports continuous distributions, and they describe an application to a stateful probabilistic language. Here we present a new probabilistic concurrent games model in which strategies are equipped to support arbitrary distributions on the real numbers. We rely on methods of measure theory and introduce measurable event structures, which generalise event structures and form the basis for a model of measurable concurrent games. One may then adjoin probability in the form of a family of stochastic kernels. We illustrate the model by giving semantics to a higher-order, affine version of Probabilistic PCF with continuous distributions on the reals. |
16:30 | A definable game semantics for the linear quantum lambda-calculus SPEAKER: Marc De Visme ABSTRACT. In previous work, presented at GaLoP 2018, we defined an adequate game semantics for the linear quantum lambda-calculus. To do this we drew inspiration from two lines of work. On the one hand, our account of the dynamics of computation came from the concurrent games on event structures of Winskel et al., and their extension with probabilistic annotations (which we extend conservatively). On the other hand, our formulation of quantum states was inspired by the work of Pagani, Selinger and Valiron on static denotational semantics for the quantum λ-calculus. For games we used event structures annotated by Hilbert spaces, and we annotated the strategies with operators on those Hilbert spaces. In this work, we present a notion of (sequential) quantum innocence, leading to a definability result with respect to the linear quantum lambda-calculus. Quantum innocence imposes a compatibility between quantum annotations and the causal structure of strategies, ensuring that they can be reproduced by terms. We obtain a fully abstract model for the linear quantum λ-calculus. This is not the first fully abstract model in the linear case; see [8] by Selinger and Valiron. However, there is not yet any fully abstract model for the non-linear case. Following the work on event structures with symmetry by Castellan, Clairambault, Paquet and Winskel for probabilistic PCF, we expect to be able to extend our model into a fully abstract model for the (full) quantum λ-calculus. Interestingly, the work on definability led us to a more abstract definition of quantum games and strategies, relying only weakly on the properties of quantum annotations. Though this is work in progress, it seems to lead to a fully abstract games model for a linear λ-calculus parametrised by a symmetric monoidal category. |
17:00 | Resource-Tracking Concurrent Games SPEAKER: Aurore Alcolei ABSTRACT. In the study of programming languages, denotational semantics is used to abstract away from (some) intermediate steps of computation, in particular by hiding or internally handling the process of composition of λ-terms. Although these abstractions are very useful for reasoning about program behaviours such as termination, they are often too strong to reflect quantitative information, or meta-data, such as permissions, time, or energy consumption, that can be of interest for program analysis and optimisation. In this work we present a framework of annotated concurrent games based on event structures that allows for keeping track of meta-data in general, that is, information whose value may vary with the execution flow but cannot modify it. More precisely, for a given equational theory, we build a compact closed category of concurrent games and strategies in which Player moves come equipped with expressions that may only depend on variables representing Opponent moves from their causal history. These expressions may indicate resources required to reach these events, for instance in terms of permissions or time. The free variables corresponding to Opponent moves in these expressions reflect the fact that resource usage may depend on what Opponent already requires; they are instantiated while performing the composition of the strategy with its environment. In a second step, we use this new framework to give a denotational semantics to an affine IPA-style language with explicit time consumption (or any resource that fits in a concurrent monoid (R, ⋆, ;, 0, ≤) with distributive law a;(b ⋆ c) = (a;b) ⋆ (a;c)) in the spirit of Ghica's slot games. We show that this interpretation is sound with respect to the usual interleaving-based operational semantics, and (unlike slot games) with respect to its extension allowing non-interfering parallel computation. Even with this extension, however, adequacy does not hold: in essence, our semantics predicts resource usage for the most parallel execution possible, with no scheduling or allocation constraints. |
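A concrete instance of the required resource structure, consistent with the time-consumption reading above: take \(R = (\mathbb{R}_{\ge 0}, \max, +, 0, \le)\), with parallel composition \(\star = \max\) and sequential composition \(; = +\). The distributive law then reads \[ a + \max(b, c) \;=\; \max(a + b,\; a + c), \] matching the intuition that a computation scheduled after a parallel pair waits for the slower branch.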
17:30 | Automata Theory and Game Semantics ABSTRACT. I will give a survey of various classes of automata that have been used to capture the game semantics of higher-order programs and, consequently, obtain decidability results for contextual equivalence. |
16:00 | The Attacker Does not Always Hold the Initiative: Attack Trees with External Refinement SPEAKER: Ross Horne ABSTRACT. Attack trees provide structure to an attack scenario, where disjunctions represent choices decomposing the attacker's goals into smaller subgoals. This paper investigates the nature of choices in attack trees: for some choices the attacker holds the initiative, while for others either the environment or an active defender decides. A semantics for attack trees combining both types of choice is expressed in linear logic, and connections with extensive-form games are highlighted. The linear logic semantics defines a specialisation preorder enabling trees, not necessarily equal, to be compared in such a way that all strategies are preserved. |
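As a rough sketch of the two kinds of choice (our notation, not necessarily the paper's exact semantics): attacker-resolved disjunction can be rendered as the additive \(\oplus\) of linear logic and externally resolved disjunction as the additive \(\&\), giving a grammar of the shape \[ t \;::=\; a \;\mid\; t \oplus t \;\;(\text{attacker chooses}) \;\mid\; t \mathbin{\&} t \;\;(\text{environment/defender chooses}) \;\mid\; \dots \] so that a specialisation preorder can compare trees while preserving the strategies available under either kind of choice.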
16:45 | On Linear Logic, Functional Programming, and Attack Trees SPEAKER: Harley Eades III ABSTRACT. This paper has two main contributions. The first is a new linear logical semantics of causal attack trees in four-valued truth tables. Our semantics is very simple and expressive, supports specializations, and combines in an interesting way the ideal and filter semantics of causal attack trees. Our second contribution is Lina, a new domain-specific functional programming language, embedded in Haskell, for conducting threat analysis using attack trees. Lina has many benefits over existing tools. For example, Lina allows one to specify attack trees very abstractly, which makes it possible to develop libraries of attack trees. Lina is also compositional, allowing one to break complex attack trees into smaller ones that can be reasoned about and analyzed incrementally. Furthermore, Lina supports automatically proving properties of attack trees, such as equivalences and specializations, using Maude and the semantics introduced in this paper. |
17:30 | Closing remarks and discussions |
16:00 | SPEAKER: Jonathan Weinberger |
16:30 |
16:00 | Panel Discussion ABSTRACT. Nick Benton, Marco Gaboardi, Cynthia Kop, and Alexandra Silva will participate in the discussion. |
16:00 | A shared memory semantics for session types ABSTRACT. [joint work with Klaas Pruiksma] |
17:00 | SPEAKER: Lê Thành Dũng Nguyễn ABSTRACT. Semantic evaluation has proven to be a valuable tool for proving computational complexity results on monomorphic type systems. Can we apply it in the presence of impredicative polymorphism? The linearity and stratification at work in light logics might make it possible. |
17:20 | SPEAKER: Sharjeel Khan ABSTRACT. We present a declarative and modular specification of an automated trading system (ATS) in the concurrent linear framework CLF. We implemented it in Celf, a CLF type checker which also supports executing CLF specifications. We outline the verification of a representative property of trading systems using generative grammars, an approach to reasoning about CLF specifications. |
16:00 | SPEAKER: Reuben Rowe ABSTRACT. Transitive closure logic is a known extension of first-order logic obtained by introducing a transitive closure operator. While other extensions of first-order logic with inductive definitions are a priori parametrized by a set of inductive definitions, the addition of the transitive closure operator uniformly captures all finitary inductive definitions. We present an infinitary proof system for transitive closure logic which is an infinite descent-style counterpart to the existing (explicit induction) proof system for the logic. We show that, as for similar systems for first-order logic with inductive definitions, our infinitary system is complete for the standard semantics and subsumes the explicit system. Moreover, the uniformity of the transitive closure operator allows semantically meaningful complete restrictions to be defined using simple syntactic criteria. |
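The infinitary system can be organized around the standard unfolding of the operator, \[ (TC_{x,y}\,\varphi)(s,t) \;\iff\; \varphi(s,t) \;\vee\; \exists z\,\big(\varphi(s,z) \wedge (TC_{x,y}\,\varphi)(z,t)\big), \] with the infinite-descent (global trace) condition ruling out proofs that unfold forever; this is a sketch of the shape of the rules, not the paper's exact formulation.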
16:45 | ABSTRACT. We study the logical complexity of proofs in cyclic arithmetic (CA), as introduced in Simpson '17, in terms of the quantifier alternations of the formulae occurring in them. Writing CΣ_n for (the logical consequences of) cyclic proofs containing only Σ_n formulae, our main result is that IΣ_{n+1} and CΣ_n prove the same Π_{n+1} theorems, for n > 0. Furthermore, due to the 'uniformity' of our method, we also show that CA and PA proofs differ only elementarily in size. The inclusion IΣ_{n+1} ⊆ CΣ_n is obtained by proof-theoretic techniques, relying on normal forms and structural manipulations of Peano arithmetic (PA) proofs. It improves upon the natural result that IΣ_n ⊆ CΣ_n. The inclusion CΣ_n ⊆ IΣ_{n+1} is obtained by calibrating the approach of Simpson '17 with recent results on the reverse mathematics of Büchi's theorem (Kolodziejczyk, Michalewski, Pradic & Skrzypczak '16), and specialising to the case of cyclic proofs. These results improve upon the bounds on proof complexity and logical complexity implicit in Simpson '17 and Berardi & Tatsuta '17. This talk will be based on the following work: http://www.anupamdas.com/wp/log-comp-cyc-arith/ |
16:00 | A Comparison of Algebraic and Semi-Algebraic Proof Systems |
17:00 | SPEAKER: Adam Kurpisz ABSTRACT. Various key problems from theoretical computer science can be expressed as polynomial optimization problems over the boolean hypercube. One particularly successful way to prove complexity bounds for these types of problems is based on sums of squares (SOS) as nonnegativity certificates. In this article, we initiate the study of optimization problems over the boolean hypercube via a recent, alternative certificate called sums of nonnegative circuit polynomials (SONC). We show that key results for SOS-based certificates remain valid: First, for polynomials that are nonnegative over the $n$-variate boolean hypercube with constraints of degree $d$, there exists a SONC certificate of degree at most $n+d$. Second, if there exists a degree-$d$ SONC certificate for nonnegativity of a polynomial over the boolean hypercube, then there also exists a short degree-$d$ SONC certificate that includes at most $n^{O(d)}$ nonnegative circuit polynomials. Finally, we show certain differences between the SOS and SONC cones: we prove that, in contrast to SOS, the SONC cone is not closed under affine transformations of variables, and that for SONC there exists no equivalent of Putinar's Positivstellensatz. We discuss these results from both an algebraic and an optimization perspective. |
17:30 | SPEAKER: Joao Marques-Silva ABSTRACT. This work overviews recent results on dual-rail-based MaxSAT solving, including polynomial upper bounds on the refutation of PHP and 2PHP formulae with core-guided MaxSAT solvers and MaxSAT resolution, as well as their relative efficiency compared to general resolution and cutting planes. |
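For orientation, in one common formulation of the dual-rail encoding each variable \(x\) is replaced by two fresh variables \(p_x\) and \(n_x\) standing for the literals \(x\) and \(\neg x\); the encoded clauses together with \((\neg p_x \vee \neg n_x)\) are hard, and the unit clauses \((p_x), (n_x)\) are soft, so that a formula over \(n\) variables is satisfiable iff some assignment satisfying the hard clauses satisfies \(n\) of the \(2n\) soft clauses. Details vary across the works surveyed in the talk.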
16:00 | Understanding and Extending Incremental Determinization for 2QBF SPEAKER: Markus Rabe |
16:20 | SPEAKER: Tomáš Peitl ABSTRACT. Quantified Boolean Formulas (QBFs) are a generalization of propositional formulae that admits succinct encodings of verification and synthesis problems. Given that modern QBF solvers are based on different architectures with complementary performance characteristics, a portfolio-based approach to QBF solving is particularly promising. We define a natural set of features of circuit QBFs and show that they can be used to construct portfolio-based algorithm selectors of state-of-the-art circuit QBF solvers that are close to the virtual best solver. We further demonstrate that most of this performance can be achieved using surprisingly small subsets of cheaply computable and intuitive features. |
16:40 | DepQBF (QBFEval'18 Contribution) |
16:45 | RAReQS, QFUN, QESTO (QBFEval'18 Contribution) |
16:50 | Qute (QBFEval'18 Contribution) |
16:55 | QBFEval 2018 |
17:20 | Discussion SPEAKER: Hubie Chen |
16:00 | SPEAKER: Bastien Maubert ABSTRACT. Program synthesis automatically constructs programs from specifications. Strategy Logic is a powerful specification language whose goal is to give theoretical foundations for program synthesis in a multi-agent setting. One limitation of Strategy Logic is that it is purely qualitative. For instance it cannot specify quantitative properties of executions such as "every request is quickly granted", or quantitative properties of trees such as "most executions of the system terminate". In this work, we extend Strategy Logic to include quantitative aspects in a way that can express bounds on "how quickly" and "how many". We define Prompt Strategy Logic, which encompasses Prompt LTL (itself an extension of LTL with a prompt eventuality temporal operator), and we define Bounded-Outcome Strategy Logic which has a bounded quantifier on paths. We supply a general technique, based on the study of automata with counters, that solves the model-checking problems for both these logics. |
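For readers unfamiliar with Prompt LTL: the prompt eventuality operator bounds how long an eventuality may take, \[ \pi \models \mathbf{F}_{\le k}\,\varphi \;\iff\; \exists i \le k.\; \pi[i..] \models \varphi, \] and promptness asks for a single bound \(k\) that works uniformly over the executions being quantified; the logics defined in the abstract lift this idea to strategic settings.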
16:40 | SPEAKER: Bastien Maubert ABSTRACT. We introduce an extension of Strategy logic for the imperfect-information setting, called SLii, and study its model-checking problem. As this logic naturally captures multi-player games with imperfect information, the problem turns out to be undecidable. We introduce a syntactical class of "hierarchical instances" for which, intuitively, as one goes down the syntactic tree of the formula, strategy quantifications are concerned with finer observations of the model. We prove that model-checking SLii restricted to hierarchical instances is decidable. This result, because it allows for complex patterns of existential and universal quantification on strategies, greatly generalises previous ones, such as decidability of multi-player games with imperfect information and hierarchical observations, and decidability of distributed synthesis for hierarchical systems. |
17:20 | SPEAKER: Nicolas Markey ABSTRACT. Strategy Logic (SL) is a very expressive temporal logic for specifying and verifying properties of multi-agent systems: in SL, one can quantify over strategies, assign them to agents, and express LTL properties of the resulting plays. Such a powerful framework has two drawbacks: first, model checking SL has non-elementary complexity; second, the exact semantics of SL is rather intricate, and may not correspond to what is expected. In this paper, we focus on strategy dependences in SL, by tracking how existentially-quantified strategies in a formula may (or may not) depend on other strategies selected in the formula, revisiting the approach of [Mogavero et al., Reasoning about strategies: On the model-checking problem, 2014]. We explain why elementary dependences, as defined by Mogavero et al., do not exactly capture the intended concept of behavioral strategies. We address this discrepancy by introducing timeline dependences, and exhibit a large fragment of SL for which model checking can be performed in 2EXPTIME under this new semantics. |
16:00 | SPEAKER: Cláudia Nalon ABSTRACT. In recent work we have proposed a calculus for the multimodal logic K, where clauses are labelled by the modal level at which they occur. The calculus has been implemented, and the evaluation shows that the prover performs well if the propositional symbols are uniformly distributed over the modal levels. However, by construction, the set of propositional clauses is always satisfiable. We are investigating the combination of SAT solvers with our prover, in order to extract relevant information that could help reduce the time currently spent during saturation. |
16:15 | SPEAKER: Hanna Lachnitt ABSTRACT. Intuitionistic logics have been shown to constitute the foundation for many practical applications. For example, they serve as the basis for state-of-the-art proof assistants like Coq. Nevertheless, they have also been discussed controversially. David Hilbert wrote: "Taking the principle of excluded middle from the mathematician would be the same, say, as proscribing the telescope to the astronomer or to the boxer the use of his fists." [4] Indeed, this property of intuitionistic logic gives rise to plenty of new challenges. When embedding intuitionistic logics in higher-order logic, monotonicity has to be taken into account. Furthermore, it is desirable to prove a set of frame conditions equivalent to the axioms used to build different intuitionistic logics. This talk will give a short overview of the challenges arising from the absence of the principle of excluded middle and discuss possible solutions. |
16:30 | SPEAKER: Trisha Nowland ABSTRACT. Modelling in psychometrics has become increasingly reliant on computer software; at the same time, many offline decisions that a researcher makes remain unrecorded and, perhaps, unreconciled to anything more than the researcher's intuition or best guess. The aim of this paper is to set out a logic that guides, and accounts for, decision procedures in psychometric research practice. Such a logic is informed by the integration of three systematic viewpoints under a view of constraints-based practice: i) bounded rationality (Gigerenzer & Goldstein, 1996); ii) axiomatic measurement theory, with specific focus on handling measurement uncertainty (Suppes, 2002, 2006, 2016); and iii) a constructive approach to mathematical set theory (Ferreiros, 2016). The integration of these three viewpoints under an overall perspective characterised by inference from the best systematisation (Rescher, 2016) is reviewed and compared to current researcher practices, with particular reference to the problems for psychometric and biometric sciences revealed in the Reproducibility Project (Open Science Collaboration, 2012) outcomes. Our conclusion for the overall system is that the constraints characterised by constructive mathematics offer unique tools to researcher communities in accounting for their decision procedures, and a proposal for a software tool that handles the decision protocol is presented. In characterising the decision procedures, we also explore the way that rough set theory (Pawlak, 1998) is integrated into decision procedures to provide insight into database fields or variables that hold some import but may otherwise remain hidden in research outcome reporting. |
Round table panel discussion, where attendees will be joined by leading figures in the FLoC community to discuss "Priorities for Diversity in Logic for Computer Science".
Women in Logic reception and buffet supper at Wadham College (contact Ursula.Martin@maths.ox.ac.uk for more details).