FLOC 2018: FEDERATED LOGIC CONFERENCE 2018
PROGRAM FOR SATURDAY, JULY 7TH

09:00-10:30 Session 23A (CL&C)
09:00
Sheaf models of classical logic extended by independence relations

ABSTRACT. Reasoning involving notions of dependence and independence occurs in a variety of situations, and it is of interest to study dependence and independence as basic logical concepts in their own right. Indeed, this desideratum has been one motivation for the development of the "independence friendly logic" of Hintikka, and for the "dependence logic" programme of Väänänen. In the talk I will show that certain sheaf models naturally support an extension of classical logic with independence relations, leading to logical principles for reasoning about independence. No prior knowledge of sheaf theory will be assumed.

10:00
Towards a Dualized Sequent Calculus with Canonicity

ABSTRACT. In pursuit of a canonical logic composed of dualized proof rules, we introduce a sequent calculus system, 2Intx, inspired by Wansing's bi-intuitionistic propositional logic 2Int. Though 2Int has canonicity and duality, it defines only natural deduction proof rules and employs an unintuitive Kripke semantics that allows atomic formulas to be both true and false. In addition to defining the sequent calculus rules of 2Intx, we also define a Kripke semantics that only admits models in which atomic formulas are either true or false but not both. Finally, we prove soundness of 2Intx.

09:00-10:30 Session 23B (Domains13)
Location: Maths LT1
09:00
Invited Talk: Looking Backward; Looking Forward
10:00
A Brouwerian proof of the Fan Theorem

ABSTRACT. Brouwer’s induction theorems – the Fan Theorem and Bar Induction – which underpin his theory of the continuum, are often presented as postulates or axioms. For Brouwer they were foundational theorems. When proofs are given, modern presentations are generally proof-theoretic in character, whereas Brouwer presented his proofs, somewhat obscurely, as analyses of mathematical constructions. The induction theorems are constructively valid in suitable sheaf models. We argue that particular monoid models – based on the monoids of open endomorphisms of the formal Baire and Cantor spaces – provide an interpretation for intuitionistic quantification over spreads, non-classical mathematical entities introduced by Brouwer’s ‘second act of intuitionism’. We focus here on the Fan Theorem and Cantor space, C; we anticipate that a similar analysis should apply to Bar Induction for Baire space. A topological model over T represents a choice sequence α as a continuous function α : T → C; a choice sequence corresponds to the species of its finite prefixes. An open map σ : C → T may be presented as an internal formal space that corresponds to a Brouwerian spread, and the spread is substantial iff the map is a surjection. Monoid models differ subtly from topological models: topological models interpret quantification over sequences presented as species, whereas we interpret the Joyal-Lawvere sheaf-theoretic interpretation of universal quantification in the monoid model of open endomorphisms of C as subsuming sequences introduced by substantial spreads. We outline a Brouwerian presentation of this model, and argue that this provides a Brouwerian proof – an argument that should have been acceptable to Brouwer – of the Fan Theorem.
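
As background (a standard formulation, not taken from the abstract), the Fan Theorem says that every bar of Cantor space is uniform: writing α|n for the length-n initial segment of α ∈ C, for a bar B of finite binary sequences,

    ∀α ∈ C ∃n (α|n ∈ B)  →  ∃N ∀α ∈ C ∃n ≤ N (α|n ∈ B).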

09:00-10:30 Session 23C (GS)
Location: Maths LT3
09:00
TBA
10:00
"Operational'' Game Semantics

ABSTRACT. In this survey talk, we revisit the framework of operational game semantics initiated by Soren Lassen.

In contrast to traditional game semantics of programming languages, which exposes the constraints on the inhabitants of types, operational game semantics provides little or no information on the type structure. However, it comes equipped with a proof principle, namely open bisimulation, for studying the equality of programs.

We will examine the application of operational game semantics in two non-traditional settings: aspect-oriented languages and layout randomization.

Based on joint work with Corin Pitcher, Julian Rathke and James Riely.

09:00-10:30 Session 23D: Invited talk: Emily Riehl (HDRA)
Location: Blavatnik LT2
09:00
Homotopy coherent adjunctions and other structures

ABSTRACT. Naturally occurring diagrams in algebraic topology are commutative up to homotopy, but not on the nose. It was quickly realized that very little can be done with mere “homotopy commutativity." Homotopy coherent category theory arose out of a desire to catalog the higher homotopical information required to restore constructibility (or more precisely, functoriality) in such "up to homotopy" settings. In this talk we’ll focus on the syntactic categories that index homotopy coherent diagrams, which must satisfy a “freeness” or “cofibrancy” property that we axiomatize under the name “simplicial computads.” We then present as many explicit examples as time permits, including Dwyer-Kan free resolutions (which define the shape of a homotopy coherent diagram indexed by a 1-category), the homotopy coherent simplices (which are special cases), and the homotopy coherent realization of a simplicial set (a common generalization of these notions). A final example is given by the free homotopy coherent adjunction, a simplicial computad supported by a convenient graphical calculus developed in joint work with Dominic Verity.

10:00
Merge-bicategories: towards semi-strictification of higher categories

ABSTRACT. A 2-polygraph is regular if its 2-cells have non-degenerate, “interval-shaped” input and output boundaries. A merge-bicategory is a regular 2-polygraph with an algebraic composition of 2-cells, where the composable diagrams are the “disk-shaped” ones. A merge-bicategory is representable if it contains enough “divisible” 1-cells and 2-cells, satisfying certain universal properties.

I will show that representable merge-bicategories and morphisms that preserve divisible cells are equivalent to bicategories and (pseudo)functors; through a natural monoidal biclosed structure on merge-bicategories, one can also recover higher morphisms. I will use this to develop a semi-strictification argument for bicategories, combining the explicit combinatorial aspects of string-diagram-based coherence proofs with the main features of Hermida's abstract proof based on representable multicategories. All the constructions can be generalised to higher dimensions: I will sketch how this could lead to semi-strictification (excluding weak units) for higher categories, where it is still an open problem.

09:00-10:00 Session 23E: Invited talk: Naoki Kobayashi (HOR)
09:00
On Two Kinds of Higher-Order Model Checking and Higher-Order Program Verification
09:00-10:30 Session 23F: Invited Talk and Higher Dimensional Rewriting (IWC)
09:00
Confluence in Constraint Handling Rules: An overview from early, fundamental results to recent extensions including state invariants and confluence modulo equivalence.

ABSTRACT. Constraint Handling Rules, CHR, is a nondeterministic programming language whose programs consist of rewrite rules over program states; being able to show confluence may be an important part of a program correctness proof. In fact, most CHR implementations are deterministic, yet confluence is still useful, as the programmer does not need to consider the details of the underlying execution strategy.

The initial results on proving confluence in CHR, developed during the 1990s, were formulated with respect to a logic-based semantics. This choice gave elegant proofs based on the Critical Pair Theorem and Newman's lemma, but it can describe only a limited class of programs, namely logical (and terminating) ones.

It can be argued that invariants and confluence modulo equivalence are important from a practical point of view. Most programs are developed with a particular set of initial queries in mind, which reduces the set of reachable states. Therefore, taking the induced invariant into account makes a much larger class of programs confluent. Confluence modulo equivalence is a generalization where forked states must reach equivalent states, rather than a common state. For instance, a program may produce redundant data structures such as representing sets as lists, and the equivalence states that the order of the elements does not matter. Both the invariant and the equivalence relation may be tailored for the individual program.
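
To make the sets-as-lists example concrete (our illustration in standard CHR syntax, not taken verbatim from the talk), consider the one-rule program

    set(L), item(X) <=> set([X|L]).

From the state {set([]), item(a), item(b)}, the two possible application orders fork into the final states set([a,b]) and set([b,a]). These are not joinable, so the program is not confluent; it is, however, confluent modulo the equivalence identifying set/1 states whose lists are permutations of each other.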

In our recent work, we have extended previous methods for proving confluence to include invariants and confluence modulo equivalence. We have focused on a more realistic semantics that reflects the de facto standard implementations of CHR on top of Prolog, including a correct treatment of Prolog's non-logical devices (e.g., var/1, nonvar/1, is/2) and runtime errors. As part of this, we developed a notion of abstract simulations that may also be of interest for other kinds of transition and rewriting systems. The classical critical pair constructions are generalized, having the transition system of interest simulated by another meta-level system (rather than reusing the system itself). This is useful especially for invariants and state equivalences, which - by nature - are meta-level statements that typically cannot be expressed in the original system.

10:00
Coherence modulo relations

ABSTRACT. The computation of minimal convergent presentations for monoids, categories or higher-dimensional categories appears in low-dimensional combinatorial problems on these structures, such as coherence problems. A method for computing coherent presentations using convergent string rewriting systems was developed following the work of Squier. In this approach, coherence results are formulated in terms of confluence diagrams of critical pairs. This work proposes an extension of these methods to string rewriting systems modulo.

09:00-10:30 Session 23G: Type Theory and Proof Assistants (LFMTP)
Location: Maths LT2
09:00
Invited talk: Cubical Computational Type Theory and RedPRL

ABSTRACT. (Joint work with Carlo Angiuli, Evan Cavallo, Robert Harper and Jonathan Sterling)

Recent research has revealed the deep connection between type theory and homotopy theory, which has inspired a series of new type theories. The characteristic new features are univalence and higher inductive types, which have led to a fruitful development of synthetic homotopy theory. Unfortunately, those new features were originally introduced as axioms, disrupting the computational content of type theory, which affects their practicality in mechanizing proofs.

To date, the most promising approach to enjoy new features inspired by homotopy theory while retaining computational content is through cubical structure. Cohen et al. constructed a type theory based on De Morgan cubes with both univalence and higher inductive types. Influenced by their work, we also built a type-theoretic semantics with all the features, but instead based on Cartesian cubes. In addition to using a different cubical structure, our semantics is built from computational content directly, following a computation-first methodology which itself is interesting.

RedPRL is our first proof assistant based on the new semantics, inheriting the PRL style pioneered by the Nuprl proof assistant, which is also based on the computation-first methodology (but without cubical structure). We are also developing different proof assistants with distinct proof theories, in the hope of creating the best tools for mechanizing homotopy theory and other mathematics.

In this talk, I will describe how the computation-first methodology works, compare two cubical type theories and discuss RedPRL.

10:00
Sharing a library between proof assistants: reaching out to the HOL family

ABSTRACT. We observe today a large diversity of proof systems. This diversity has the negative consequence that a lot of theorems are proved many times. Unlike programming languages, it is difficult for these systems to cooperate because they do not implement the same logic. Logical frameworks are a class of theorem provers that overcome this issue through their capacity to implement various logics. In this work, we study the STT∀βδ logic, an extension of Simple Type Theory that has been encoded in the logical framework Dedukti. We present a translation from this logic to OpenTheory, a proof system and interoperability tool between provers of the HOL family. We have used this translation to export an arithmetic library containing Fermat's little theorem to OpenTheory and to two other proof systems, Coq and Matita.

09:00-10:30 Session 23H: Amiga (LOLA)
09:00
Asynchronous games fifteen years later

ABSTRACT. The notion of asynchronous game was introduced by the author fifteen years ago in order to investigate the causal structure of innocent strategies in game semantics. Shifting from traditional sequential games to asynchronous games was the critical step behind the discovery of a number of important structural properties of games and logic. This includes a truly concurrent formulation of innocence based on local confluence diagrams, and a positionality theorem for innocent strategies. Asynchronous games also played a central role in the emergence of tensorial logic as the logic of dialogue games, in the same way as linear logic is the logic of proof nets. Since its origins, one guideline of the theory has been the observation that asynchronous games and asynchronous strategies are causal structures of the very same nature: in other words, asynchronous strategies are asynchronous games themselves, appropriately mapped into other asynchronous games. In this paper, we develop this simple idea one step further, and explain how to recover and to compare a number of game semantics with different flavors (alternating and non alternating) from the same categorical combinatorics, performed in the category of small categories.

09:30
Algebraic Theories and Control Effects, Back and Forth

ABSTRACT. The seminal work of Moggi [6] identified the abstract structure of computational effects as that given by the categorical concept of a monad. Plotkin and Power [7] considered monads induced by algebraic theories from this computational viewpoint and developed a theory of algebraic effects. The second-order algebraic theories of Fiore et al [2, 3] are a conservative extension of the algebraic theories of universal algebra that provide algebraic foundations for simple type theories with variable binding. The theme of this talk is to consider and study them from the perspective of computational effects. Fiore and Staton [5] gave a computational interpretation of the second-order algebraic theory of the substitution algebras of Fiore et al [4] as jump effects. I will introduce more general second-order algebraic theories of inception algebras and equip them with computational interpretations as control effects. These will be justified by a CPS semantics and then related to de Groote's computational interpretation of negation elimination and the excluded middle [1] and to Thielecke's CPS calculus [8]. Computational interpretations of second-order algebraic theories of the lambda calculus will also be considered. In particular, left lambda algebras will be shown to correspond to a coroutine mechanism.

References

[1] P. de Groote. A Simple Calculus of Exception Handling. In Proc. TLCA'95, LNCS 902, 1995.

[2] M. Fiore and C.-K. Hur. Second-order equational logic. In Proc. CSL 2010, LNCS 6247, 2010.

[3] M. Fiore and O. Mahmoud. Second-order algebraic theories. In Proc. MFCS 2010, LNCS 6281, 2010.

[4] M. Fiore, G. Plotkin and D. Turi. Abstract syntax and variable binding. In Proc. LICS'99, 1999.

[5] M. Fiore and S. Staton. Substitution, Jumps, and Algebraic Effects. In Proc. CSL-LICS'14, 2014.

[6] E. Moggi. Notions of computation and monads. Information and Computation 93, 1991.

[7] G. Plotkin and A.J. Power. Notions of computation determine monads. In Proc. FOSSACS'02, LNCS 2303, 2002.

[8] H. Thielecke. Categorical Structure of Continuation Passing Style. PhD thesis, University of Edinburgh, 1997.

09:00-10:30 Session 23I (Linearity/TLLA)
09:00
Substructural Calculi with Dependent Types

ABSTRACT. In this paper, we investigate how to introduce dependent types into substructural calculi such as the Lambek calculus and linear logic. The motivations for such a move include facilitating a closer correspondence between syntax and semantics in natural language analysis and developing promising applications such as the application to concurrency through dependent session types.

We shall present two substructural calculi with dependent types: the first containing dependent Lambek types and the second dependent linear types. Technically, the former adheres to the usual assumption that types do not depend on substructural variables (in this case, the Lambek variables), which makes the technical development easier, while the latter allows type dependency on linear variables, which makes the development more challenging as well as more interesting in applications.

09:20
On the Lambek Calculus with an Exchange Modality
SPEAKER: Jiaming Jiang

ABSTRACT. In this paper we introduce Commutative/Non-Commutative Logic (CNC logic) and two categorical models for it. This work abstracts Benton's Linear/Non-Linear Logic by removing the exchange structural rule. One should view this logic as composed of two logics, one sitting to the left of the other. On the left is intuitionistic linear logic, and on the right is a mixed commutative/non-commutative formalization of the Lambek calculus. These two logics are connected via a pair of monoidal adjoint functors. An exchange modality is then derivable within the logic using the adjunction between the two sides; thus, the adjoint functors allow one to pull the exchange structural rule from the left side to the right side. We then give a categorical model in terms of a monoidal adjunction, and a concrete model in terms of dialectica Lambek spaces.

09:55
The Bang Calculus and the Two Girard Translations

ABSTRACT. We study Girard's two translations of intuitionistic implication into linear logic by exploiting the Bang Calculus, a paradigmatic functional language with an explicit box-operator into which both the call-by-name and call-by-value lambda-calculi can be encoded. We investigate how the bang calculus subsumes both the call-by-name and call-by-value lambda-calculi, from both a syntactic and a semantic point of view.
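
For orientation (the standard definitions in one common presentation, not a contribution of the paper), the two translations act on implication as follows:

    (A → B)^n  =  !A^n ⊸ B^n          (call-by-name)
    (A → B)^v  =  !(A^v ⊸ B^v)        (call-by-value)

so call-by-name places the exponential on the argument, while call-by-value boxes the whole function space.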

09:00-10:30 Session 23J: Invited and Contributed (NLCS)
09:00
Is compositionality all it's cracked up to be?

ABSTRACT. Compositionality is taken to be a fundamental principle for formal languages, including programming languages. The practical importance of compositionality is to allow programs to be made modular and code to be reused. Apparent violations of compositionality can be handled formally by enriching the logic used to define the language. The semantics of natural languages is often handled similarly within the formal semantics tradition: apparent cases of non-compositionality (such as the idiom used in the title) can be analysed differently to give a compositional semantics. This approach is controversial, however, since such accounts may have many ramifications. Yet if natural language is not fundamentally compositional, it is unclear how an indefinitely large number of sentences can be generated and understood by native speakers: this has analogies to the practical considerations in programming. Approaches to computational linguistics which follow the formal semantics tradition assume some form of compositionality, but with broad-coverage syntax-based grammars this has interesting consequences, some of which I will outline. I will further argue that many recent approaches in computational linguistics ignore compositionality in the sense used in the formal tradition. Compositional distributional semantics has been an important area of research, but I will argue that it primarily involves modeling behaviour which is non-compositional or semi-compositional, because of the types of evaluation which are used. In many applications of neural networks, the move away from modularity in favour of end-to-end models removes one of the basic practical arguments for compositionality. Some recent results suggest that commonly used neural models may not always capture even very simple forms of compositionality, which in turn suggests a danger that such models will fail in unexpected ways.

10:00
Anti-Unification and Natural Language Processing
SPEAKER: Temur Kutsia

ABSTRACT. Anti-unification is a well-known method to compute generalizations in logic. Given two objects, the goal of anti-unification is to reflect commonalities between these objects in the computed generalizations, and highlight differences between them.

Anti-unification appears to be useful for various tasks in natural language processing. Semantic classification of sentences based on their syntactic parse trees, grounded language learning, semantic text similarity, insight grammar learning, metaphor modeling: this is an incomplete list of topics where generalization computation has been used in one form or another. The major anti-unification technique in these applications is the original method for first-order terms over fixed-arity alphabets, introduced by Plotkin and Reynolds in the 1970s, and some of its adaptations.
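
To make the first-order method concrete, here is a minimal sketch of Plotkin-Reynolds least general generalization (our illustration, with a made-up term representation; it is not code from the surveyed systems):

    # Least general generalization (anti-unification) of first-order terms.
    # A term is a tuple (symbol, arg1, ..., argn); a generalization variable
    # is a string. Repeated disagreement pairs reuse the same variable, so
    # lgg(f(a,a), f(b,b)) = f(X1,X1) rather than f(X1,X2).
    def lgg(s, t, table, counter):
        if (isinstance(s, tuple) and isinstance(t, tuple)
                and s[0] == t[0] and len(s) == len(t)):
            # Same head symbol and arity: descend into the arguments.
            return (s[0],) + tuple(lgg(a, b, table, counter)
                                   for a, b in zip(s[1:], t[1:]))
        # Disagreement: generalize by a variable, reused per pair.
        if (s, t) not in table:
            counter[0] += 1
            table[(s, t)] = "X%d" % counter[0]
        return table[(s, t)]

    def generalize(s, t):
        return lgg(s, t, {}, [0])

    # f(g(a), a) and f(g(b), b) generalize to f(g(X1), X1):
    print(generalize(("f", ("g", ("a",)), ("a",)),
                     ("f", ("g", ("b",)), ("b",))))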

The goal of this paper is to give a brief overview of existing linguistic applications of anti-unification, discuss a couple of powerful and flexible generalization computation algorithms developed recently, and argue for their potential use in natural language processing tasks.

09:00-10:30 Session 23K: Invited tutorial (first part) (PARIS)
Location: Maths L4
09:00
Workshop introduction
SPEAKER: David Baelde
09:15
An Introduction to Cyclic Proofs

ABSTRACT. Cyclic proofs have recently been gaining in popularity, both as a proof-theoretic tool for studying logical systems, and as a paradigm for automatic proof search.  Over two hour-long talks at the PARIS workshop I hope to give an introduction to cyclic proofs for the uninitiated, to explore some of their applications, to outline their relationships to other techniques and, perhaps, to clear up some common misconceptions.

09:00-10:30 Session 23L: Tuning solvers and applications (POS)
Location: Maths L5
09:00
Tuning Parallel SAT Solvers

ABSTRACT. In this paper we present new implementation details and benchmarking results for our parallel portfolio solver \topo. In particular, we discuss ideas and implementation details for the exchange of learned clauses in a massively parallel SAT solver designed to run more than 1,000 solver threads in parallel. Furthermore, we go back to the roots of portfolio SAT solving and discuss the impact of diversifying the solver by using different restart, branching, and clause-database-management heuristics. We show that these techniques can be used to tune the solver towards different problems. However, in a case study on formulas derived from Bounded Model Checking problems, we see the best performance when using a rather simple clause exchange strategy. We show details of these tests and discuss possible explanations for this phenomenon.

As computing time on massively parallel clusters is expensive, we consider it especially valuable to share this kind of experimental result.

09:30
CryptoMiniSat Parameter-Optimization for Solving Cryptographic Instances

ABSTRACT. Performing hundreds of test runs and a source-code analysis, we empirically identified improved parameter configurations for CryptoMiniSat (CMS) 5 for solving cryptographic CNF instances originating from algebraic known-plaintext attacks on 3-round encryption of the Small AES-64 model cipher SR(3, 4, 4, 4). We eventually became able to reconstruct 64-bit-long keys in under an hour of real time, which, to our knowledge, had never been achieved before - in particular, not without assumptions or previous knowledge of key bits (for instance in the form of side-channels, as in Mohamed et al., Improved Algebraic Side-Channel Attack on AES, 2012). A statistical analysis of the non-deterministic solver runtimes was carried out, and command-line parameter combinations were defined to yield the best runtimes, which initially ranged from under an hour to a few hours in median. We then used an Automatic Algorithm Configuration (AAC) tool to systematically extend the search for even better solver configurations, succeeding in delivering even shorter solving times. In this work we elaborate on the systematic approach we followed to reach our results in a traceable and reproducible way. The ultimate focus of our investigations is to find out whether CMS, when appropriately tuned, is indeed capable of attacking even bigger and harder problems than the ones solved here. For the domain of cryptographic research, the duration of the solving time plays an inferior role compared to the practical feasibility of finding a solution to the problem. The prospective scalability of the results presented here is the object of further investigations.

10:00
Stedman and Erin Triples encoded as a SAT Problem

ABSTRACT. A very old quest in campanology is the search for peals, which can be considered as constrained searches for Hamiltonian cycles of a Cayley graph. Two particularly hard problems are finding bobs-only peals of Stedman Triples and of Erin Triples. We show how to efficiently reduce them to Boolean satisfiability and use a SAT solver to help find bobs-only peals of Stedman Triples, and we express the unsolved problem of bobs-only Erin Triples as an unsolved SAT problem. This approach is based on the author's very efficient general reduction of the Hamiltonian Cycle Problem (HCP) to Boolean Satisfiability (SAT), converting any HCP instance with n vertices and m directed edges to a SAT problem with approximately n·log2(m) variables and 2m·(log2(n)+1) clauses.
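
To give a rough sense of scale (the graph size below is hypothetical, chosen only to match the 5040 rows of a triples peal; it is not data from the paper), the stated formula can be evaluated directly:

    import math

    # Approximate size of the SAT encoding of an HCP instance, per the
    # reduction described above: n*log2(m) variables and 2m*(log2(n)+1)
    # clauses. The edge count m is a made-up example.
    n, m = 5040, 20160
    print("~%d variables" % round(n * math.log2(m)))          # about 72,000
    print("~%d clauses" % round(2 * m * (math.log2(n) + 1)))  # about 536,000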

09:00-10:30 Session 23M: SR Invited tutorial: Edith Elkind (SR)
09:00
Strategic behavior in social choice - Part 1

ABSTRACT. Applications of voting systems range from political elections to ranking universities and making funding decisions. In this tutorial, we will discuss various aspects of strategic behavior in voting, with a focus on the analysis of voting equilibria.

09:00-10:30 Session 23N: switch (TYDI)
Location: Maths Boardroom
09:00
An introduction to deep inference

ABSTRACT. This will be a tutorial on deep inference for general computer scientists. It will connect with the simply-typed lambda-calculus, and with classical logic. No further background knowledge is expected.

10:00
Deep-Inference Intersection Types
SPEAKER: Joseph Paulus

ABSTRACT. We explore the formulation of non-idempotent intersection types for the lambda-calculus in deep-inference using the open-deduction formalism.

09:00-10:30 Session 23P: Invited Talk, and orders and sets of E-unifiers (UNIF)
09:00
Handling substitutions via duality

ABSTRACT. Many interesting problems concerning intuitionistic and intermediate propositional logics are related to properties of substitutions: among them, besides unification, we have rule admissibility, characterization of projective formulae, definability of maximum and minimum fixpoints, finite periodicity theorems, etc. Since all these questions can be stated in purely category-theoretic terms, they are all amenable to an approach via duality techniques. The available duality for finitely presented Heyting algebras involves both sheaves (giving the appropriate geometric framework) and bounded bisimulations (handling the combinatorics of definability aspects): we show how to use this duality to attack and solve the above problems in a uniform way. [Recent and new results come from joint and ongoing work with Luigi Santocanale]

10:00
Unification based on generalized embedding

ABSTRACT. At this time we submit only the abstract, which has been uploaded in place of the paper.

09:30-10:30 Session 24A (HoTT/UF)
Location: Blavatnik LT1
09:30
Axiomatizing Cubical Sets Models of Univalent Foundations

ABSTRACT. The constructive model of Homotopy Type Theory introduced by Cohen-Coquand-Huber-Mörtberg in 2015 (building on the work of Bezem-Coquand-Huber in 2013) uses a particular presheaf topos of cubical sets. It is arguably one of the most significant contributions to constructive mathematics since Martin-Löf's work in the 1970s. Since its introduction, much effort has been expended to analyse, simplify and generalize what makes this model of the univalence axiom tick, using a variety of techniques. In this talk I will describe an axiomatic approach -- the isolation of a few elementary axioms about an interval and a universe of fillable shapes that allow a univalent universe to be constructed. Not all the axioms are visible in the original work; and the axioms can be satisfied in toposes other than the original presheaf topos of cubical sets. The axioms and constructions can almost be expressed in Extensional Martin-Löf Type Theory, except that the global nature of some of them leads to the use of a modal extension of that language. This is joint work with Ian Orton, Dan Licata and Bas Spitters.

09:30-10:30 Session 24B: Invited Talk (TERMGRAPH)
09:30
Critical Pairs in Graph Transformation Systems

ABSTRACT. Critical pairs are a minimal representation of two conflicting transformations of a given structure. Their origin goes back to 1970, when Knuth and Bendix introduced them as a tool for proving local confluence of term rewriting systems. In graph transformation, the first notion of critical pairs was presented in the early 1990s by Plump. Since then, and especially in the last few years, other notions of critical pairs have been presented. In this talk, I will review these various notions, together with their associated results.
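
To recall the basic notion in the term rewriting setting (a textbook example, not taken from the talk): with the single rule f(f(x)) → g(x), the term f(f(f(x))) can be rewritten both at the root and at its inner subterm, yielding the critical pair

    ⟨ g(f(x)), f(g(x)) ⟩.

Both sides are distinct normal forms, so this one-rule system is not locally confluent; joinability of all critical pairs is exactly what local confluence requires.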

10:00-10:30 Session 25 (HOR)
10:00
Combinatorics of explicit substitutions (extended abstract)

ABSTRACT. lambda-upsilon is an extension of the lambda-calculus which internalises the calculus of substitutions. In the current paper, we investigate the combinatorial properties of lambda-upsilon, focusing on the quantitative aspects of substitution resolution. We exhibit an unexpected correspondence between the counting sequence for lambda-upsilon terms and the famous Catalan numbers. As a by-product, we establish effective sampling schemes for random lambda-upsilon terms. We show that typical lambda-upsilon terms represent, in a strong sense, non-strict computations in the classic lambda-calculus. Moreover, typically almost all substitutions are in fact suspended, i.e., unevaluated, under closures. Consequently, we argue that lambda-upsilon is an intrinsically non-strict calculus of explicit substitutions. Finally, we investigate the distribution of the various redexes governing substitution resolution in lambda-upsilon and the quantitative contribution of the various substitution primitives.
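
For reference (standard facts, not specific to the abstract), the Catalan numbers are given by

    C_n = (1/(n+1)) · C(2n, n) = 1, 1, 2, 5, 14, 42, 132, ...   (n = 0, 1, 2, ...),

the sequence counting, among many other structures, binary trees with n internal nodes.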

10:30-11:00 Coffee Break
11:00-12:30 Session 26A (CL&C)
11:00
Fast cut-elimination using proof terms: an empirical study

ABSTRACT. Urban and Bierman introduced a calculus of proof terms for the sequent calculus LK with a strongly normalizing reduction relation. We extend this calculus to simply-typed higher-order logic with inferences for induction and equality, albeit without strong normalization. We implement this calculus in GAPT, our library for proof transformations. Evaluating the normalization on both artificial and real-world benchmarks, we show that this algorithm is typically several orders of magnitude faster than the existing Gentzen-like cut-reduction, and an order of magnitude faster than any other cut-elimination procedure implemented in GAPT.

11:45
Validating Back-links of FOL-ID Cyclic Pre-proofs

ABSTRACT. Cyclic pre-proofs can be represented as sets of finite tree derivations with back-links. In the setting of first-order logic with inductive definitions (FOL-ID), the nodes of the tree derivations are labelled by sequents, and the back-links connect particular terminal nodes, referred to as buds, to other nodes labelled by the same sequent. However, only some back-links can constitute sound pre-proofs. Previously, it has been shown that special ordering and derivability conditions, defined along the minimal cycles of the digraphs representing particular normal forms of cyclic pre-proofs, are sufficient for validating the back-links. In that approach, the same constraint could be checked several times when processing different minimal cycles, hence additional recording mechanisms may be required to avoid redundant computation and bring the time complexity down to polynomial.

We present a new approach that does not need to process minimal cycles. It is based on a normal form that allows the validation conditions to be defined by taking into account only the root-bud paths from the non-singleton strongly connected components existing in the digraphs.

11:00-12:30 Session 26B (Domains13)
Location: Maths LT1
11:00
Invited Talk: A Brief History of Denotational Semantics

ABSTRACT. Fifty years ago, Dana Scott and Christopher Strachey introduced denotational semantics as a way to specify program behavior, using domain theory and fixed-point principles to make sense of recursive definitions. These ideas have had enormous influence and have been developed extensively and applied to a wide variety of programming languages. We recall the foundational principles that underpin the denotational approach, and give a somewhat revisionist account of some of the landmark developments, with emphasis on concurrent programming languages. We introduce a denotational framework for relaxed memory concurrency, an area in which it is notoriously difficult to formalize program behavior, and we suggest some directions for further development.

11:30
First-order differential programming
12:00
Games and domains
11:00-12:30 Session 26C (GS)
Location: Maths LT3
11:00
Some reflections on algorithmic game semantics

ABSTRACT. Besides its origins in logic, game semantics also has a well-known operational aspect. We outline some of the early operational intuitions behind full abstraction for PCF, some follow-up algorithmic ideas, and some reflections on possible future developments.

11:30
Strategies as sheaves on plays

ABSTRACT. Although the HO/N games are fully abstract for PCF, the traditional notion of innocence, which underpins these games, is not satisfactory for language features such as nondeterminism and probabilistic branching. Based on a category of P-visible plays with a notion of embedding as morphisms, we propose a natural generalisation by viewing innocent strategies as sheaves over (a site of) plays, echoing a slogan of Hirschowitz and Pous. Our approach gives rise to fully complete game models in a variety of cases, including deterministic, nondeterministic and probabilistic branching.

This is based on joint work with Takeshi Tsukada.

11:00-12:30 Session 26D (HDRA)
Location: Blavatnik LT2
11:00
The equivalence between opetopic sets and many-to-one polygraphs

ABSTRACT. From the polynomial approach to the definition of opetopes of Kock et al. [8], we derive a category O of opetopes, and show that its Set-valued presheaves, or opetopic sets, are equivalent to many-to-one polygraphs.

As an immediate corollary, we establish that opetopic sets are equivalent to multitopic sets, introduced and studied by Makkai et al. [3, 4].

11:30
Minimal models for monomial algebras

ABSTRACT. Using combinatorics of chains going back to works of Anick, Green, Happel and Zacharia, we give, for any monomial algebra A, an explicit description of its minimal model. This also provides us with formulas for a canonical A-infinity-structure on the Ext-algebra of the trivial A-module.

12:00
Termination in linear (2,2)-categories with braidings and duals

ABSTRACT. This work is part of a research project aiming at developing rewriting methods to study diagrammatic algebras. In particular, we are interested here in diagrammatic algebras which can be seen as linear (2,2)-categories with additional structure, for instance given by braidings, adjunctions or duals.

We present new termination heuristics for the linear (3,2)-polygraphs presenting these linear (2,2)-categories, based on the definition of an order similar to a monomial order but not required to be total. This order is defined by counting the generators in the diagrams and finding characteristics that are stable under contexts. We illustrate these methods by proving termination of a particular linear (3,2)-polygraph presenting a candidate 2-category for categorifying a quantum group.

11:00-12:00 Session 26E: Invited talk: Pierre Vial (HOR)
11:00
Some applications of quantitative types inside and outside type theory
11:00-12:30 Session 26F (HoTT/UF)
Location: Blavatnik LT1
11:00
Internalizing Presheaf Semantics: Charting the Design Space
SPEAKER: Andreas Nuyts

ABSTRACT. Presheaf semantics are an excellent tool for modelling relational preservation properties of (dependent) type theory. They have been applied to parametricity (which is about preservation of relations), univalent type theory (which is about preservation of equivalences), and directed type theory (which is about preservation of morphisms). Of course, after going through the endeavour of constructing a presheaf model of type theory, we want type-theoretic profit, i.e. we want internal operations that allow us to write cheap proofs of the 'free' theorems that follow from the preservation property concerned. There is currently no general theory of how we should craft such operations. Cohen et al. introduced the final type extension operation Glue, using which one can prove the univalence axiom and hence also the 'free' equivalence theorems it entails. In previous work with Vezzosi, we showed that Glue and its dual, the initial type extension operation Weld, can be used to internalize parametricity to some extent. Earlier, Bernardy, Coquand and Moulin had introduced completely different 'boundary-filling' operations to internalize parametricity. Each of these operations has, to our knowledge, only been used in concrete models, and hence their expressivity has not been compared. We have done some work to fill this hiatus: we consider and compare the various operators in more general presheaf categories. In this first step, we do not consider fibrancy requirements.

11:30
Cubical Assemblies and the Independence of the Propositional Resizing Axiom
12:00
Dependent Right Adjoint Types
SPEAKER: Bas Spitters
11:00-12:30 Session 26G: Algebraic Structures and Coherence (IWC)
11:00
Coherence of monoids by insertions
SPEAKER: Nohra Hage

ABSTRACT. We introduce string data structures as combinatorial descriptions of structured words on totally ordered alphabets. The data can be described by words through a reading map and can be constructed by using an insertion algorithm. The insertion map defines a product on data. We show that the associativity of this product, the cross-section property of the data structure, and the confluence of the rewriting system defined by the insertion map are equivalent properties. We make explicit a coherent presentation of the monoid presented by the data structure, made of generators, rewriting rules describing the insertion of letters into words, and relations among the insertion algorithms.
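
A classical instance of such a data structure (our illustration; the abstract treats the general framework) is Schensted row insertion into Young tableaux, whose insertion map presents the plactic monoid:

    # Schensted row insertion: a concrete 'insertion algorithm' on a string
    # data structure. A tableau is a list of weakly increasing rows.
    def row_insert(tableau, x):
        rows = [list(r) for r in tableau]
        for row in rows:
            for i, y in enumerate(row):
                if y > x:                 # bump the first entry exceeding x
                    row[i], x = x, y
                    break
            else:
                row.append(x)             # x fits at the end of this row
                return rows
        rows.append([x])                  # the bumped letter opens a new row
        return rows

    def insert_word(word):
        t = []
        for c in word:
            t = row_insert(t, c)
        return t

    # "baca" and "bcaa" are equal in the plactic monoid (one Knuth relation),
    # and indeed both insert to the tableau [['a', 'a'], ['b', 'c']]:
    print(insert_word("baca"), insert_word("bcaa"))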

11:30
Critical pairs for Gray categories

ABSTRACT. Higher categories are a generalization of standard categories in which there are not only 1-cells between 0-cells but, more generally, (n+1)-cells between n-cells. They are increasingly used in mathematics, physics and computer science; notably, they can be used to represent algebraic structures. There are several variants, ranging from weak categories, the most general formalism but also the hardest to manipulate, to strict categories, simpler but less general. One usually wants both the expressive power of weak categories and the simplicity of strict categories. Semi-strict categories, such as Gray categories in dimension 3, are the in-between formalism used in this work. Here, we are interested in proving coherence of certain algebraic structures in dimension 3 using rewriting, where 'coherence' is the property that there is at most one 3-cell between two parallel 2-cells. It amounts to computing the critical pairs of a rewriting system and using a variant of Newman's lemma. In this setting, an algorithm exists to compute these critical pairs.

12:00
The diamond lemma for free modules

ABSTRACT. We study rewriting systems over free modules, that is, linear combinations of free generators with noninvertible coefficients. We provide a sufficient condition, in terms of local confluence restricted to generators, for the global rewrite relation to be confluent; this condition is formulated in terms of syzygies. When the coefficients belong to a domain, we equip the set of syzygies with a module structure, which provides a finer criterion: local confluence has to be checked only over a subset of syzygies, namely a generating set for the module structure.

11:00-12:40 Session 26H: Verification and Testing (LFMTP)
Location: Maths LT2
11:00
Invited talk: Why and How Does K work? The Logical Infrastructure Behind It

ABSTRACT. The K framework was born from our firm belief that an ideal language framework is possible. Specifically, programming languages must have formal definitions, and tools for a given language, such as interpreters, compilers, state-space explorers, model checkers, deductive program verifiers, etc., should be derived from just one reference formal definition of the language, correct-by-construction and at no additional cost specific to that language. No other semantics for the same language should be needed. Several real languages have had their semantics formalized in K, including C, Java, JavaScript, PHP, Python, Rust, and more recently the Ethereum VM (EVM), and the generic K tools have been instantiated with these languages. In particular, the EVM semantics is used commercially by Runtime Verification to formally verify smart contracts on the Ethereum blockchain. But what is behind K? Why and how does it work? This talk will discuss the logical formalism underlying K: matching logic, a first-order logic (FOL) variant for specifying and reasoning about structure by means of patterns and pattern matching. Matching logic generalizes several logical frameworks important for program analysis, such as propositional logic, algebraic specification, FOL with equality, (polyadic) modal logics, temporal logics, separation logic, as well as dynamic and Hoare logics. Binders and fixed-points can also be defined in matching logic, and thus the variety of lambda/mu-calculi and substitution-based semantics. Patterns can specify both structural requirements, including separation requirements at any level in any program configuration (not only in the "heap"), and logical requirements and constraints. The various languages defined in K, regardless of their size and complexity, become nothing but matching logic theories, and the various tools provided by the K framework, such as interpretation, symbolic execution, search and model checking, as well as full-fledged deductive program verification, become nothing but proof search heuristics in matching logic.

12:00
Computation-as-deduction in Abella: work in progress

ABSTRACT. The Abella theorem prover is based on a logic in which relations, and not functions, are defined by induction (and coinduction). Of course, many relations do, in fact, define functions and there is real value in separating functional computation (marked by determinism) from more general deduction (marked by nondeterminism and backtracking). Recent work on focused proof systems for the logic underlying Abella is used in this paper to motivate the design of various extensions to the Abella system. With these extensions to the system (which do not extend the logic), it is possible to fully automate functional computations within the relational setting as soon as a proof is provided that a given relation does, in fact, capture a total function. In this way, we can use Abella to compute functions on data structures that contain bindings.

12:20
Property-Based Testing of Abstract Machines: an Experience Report

ABSTRACT. Contrary to Dijkstra's diktat, testing, and more generally validation, has found an increasing niche in formal verification, prior to, or even as an alternative to, theorem proving. Validation, and in particular property-based testing (PBT), is quite effective in mechanized meta-theory of programming languages, where theorems have shallow but tedious proofs that may go wrong for fairly banal mistakes in specifications. In this report, we abandon the comfort of high-level object languages and address the validation of abstract machines and typed assembly languages. We concentrate on Appel and Leroy's List-machine benchmark (JAR, 2012), which we tackle both with alphaCheck, the simple model-checker on top of the nominal logic programming language alphaProlog, and with the PBT library FsCheck for F#. This allows us to compare the relative merits of exhaustive PBT in a logic programming style versus the more usual randomized functional setting. We uncover one major bug in the published version of the paper, plus several typos and ambiguities. This is particularly striking, as the paper is accompanied by two full formalizations, in Coq and Twelf. Finally, we do a bit of mutation testing on the given model, to assess further the trade-off between exhaustive and randomized data generation. Spoiler alert: the former performs better.

11:00-12:30 Session 26I: Bellmac (LOLA)
11:00
A more precise, more correct stack and register model for CompCert

ABSTRACT. The proposed talk describes ongoing work on modeling subregister aliasing in the CompCert formally verified C compiler. We discuss some possible ways of modeling paired subregisters and their trade-offs. Exploring the design space uncovered long-standing fundamental type errors in CompCert's semantic models; implementing the eventual solution uncovered some more. We are close to finishing a new semantic model in which registers and stack slots are no longer treated as containing symbolic values but tuples of bytes.

11:30
Towards a library of formalised undecidable problems in Coq: The undecidability of intuitionistic linear logic


12:00
A graph-rewriting perspective of the beta-law
SPEAKER: Koko Muroya

ABSTRACT. This preliminary report studies a graphical version of Plotkin's call-by-value equational theory, in particular its soundness with respect to operational semantics. Although an equational theory is useful for safe program transformations such as compiler optimisation, proving its soundness is not trivial, because it involves analysing the interaction between evaluation flow and a particular sub-program of interest. We observe that soundness can be proved in a direct and generic way in the graphical setting, using small-step semantics given by a graph-rewriting abstract machine previously built for evaluation-cost analysis. This would open up opportunities to develop a cost-sensitive equational theory for compiler optimisation, and to prove contextual equivalence directly in the presence of language extensions.

11:00-12:35 Session 26J (Linearity/TLLA)
11:00
Termination of lambda-calculus linearization methods
12:00
Taking Linear Logic Apart
SPEAKER: Wen Kokke

ABSTRACT. Process calculi based on logic, such as πDILL and CP, provide a foundation for deadlock-free concurrent programming. However, there is a mismatch between the structure of the operators used as proof terms in previous work and the term constructs of the standard π-calculus. We introduce Hypersequent Classical Processes (HCP), which addresses this mismatch. The key insight is to register parallelism in the typing judgements using hypersequents, a technique from logic which generalises judgements from one sequent to many. This allows us to take apart the term constructs used in Classical Processes (CP) so that they more closely match those of the standard π-calculus. We prove that HCP enjoys subject reduction and progress, and we prove several properties relating it back to CP.
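
In symbols (our paraphrase of the HCP judgement forms; notation approximate), a process is typed against a hypersequent with one sequent per independent parallel component, and CP's single Cut rule splits into separate rules for parallel composition and name restriction:

    P ⊢ G     Q ⊢ H                P ⊢ G | Γ, x:A | Δ, x:A⊥
    ----------------- (|)          -------------------------- (ν)
     P | Q ⊢ G | H                    (νx)P ⊢ G | Γ, Δ

In CP, by contrast, ν and | occur only fused, in the single proof term (νx)(P | Q) of the Cut rule.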

11:00-12:30 Session 26K: Contributed papers (NLCS)
11:00
Propositional Attitude Operators in Homotopy Type Theory

ABSTRACT. See attachment.

11:30
Propositional Forms of Judgemental Interpretations
SPEAKER: Zhaohui Luo

ABSTRACT. In type-theoretical semantics, sentences may often be interpreted as judgements rather than propositions. When interpreting composite sentences such as those involving negations and conditionals, one may want to turn a judgemental interpretation into a proposition in order to obtain the intended semantics. In this paper, we propose a new negation operator NOT for constructing propositional forms of judgemental interpretations. NOT is introduced axiomatically, with five axiomatised laws governing its behaviour, and several examples are given to illustrate its use in semantic interpretation. In order to justify NOT, we employ a heterogeneous equality to prove its laws; since the addition of heterogeneous equality to type theories is consistent, so is our introduction of the NOT operator. Also discussed is how to use the negation operator in event semantics.

12:00
Paychecks, Presupposition, and Dependent Types
SPEAKER: Ribeka Tanaka

ABSTRACT. This paper proposes an analysis of paycheck sentences in the framework of Dependent Type Semantics. We account for the anaphora resolution of paycheck pronouns by using dependent function types in dependent type theory. We argue that the presupposition of the possessive NP provides a function that contributes to the paycheck reading. The proposed analysis provides a uniform treatment of paycheck pronouns and standard referential pronouns, without introducing additional formal mechanisms to the system.

11:00-12:30 Session 26L: Contributed talks (PARIS)
Location: Maths L4
11:00
Local Validity for Circular Proofs in Linear Logic with Fixed Points

ABSTRACT. Circular (i.e. non-wellfounded but regular) proofs have received increasing interest in recent years with the simultaneous development of their applications and meta-theory: infinitary proof theory is now well-established in several proof-theoretical frameworks, such as Martin-Löf's inductive predicates, linear logic with fixed points, etc.

In the setting of non-wellfounded proofs, a validity criterion is necessary to distinguish, among all infinite derivation trees (a.k.a. pre-proofs), those which are logically valid proofs. A standard approach is to consider a pre-proof to be valid if every infinite branch is supported by a progressing thread.

This work focuses on circular proofs for MALL with fixed points. Among all representations of valid circular proofs, a new fragment is described, based on a stronger validity criterion.

This new criterion is based on a labelling of formulas and proofs, whose validity is purely local.

This allows this fragment to be easily handled, while being expressive enough to still contain all circular embeddings of Baelde's muMALL finite proofs with (co)inductive invariants: in particular deciding validity and computing a certifying labelling can be done efficiently. Moreover the Brotherston-Simpson conjecture holds for this fragment: every labelled representation of a circular proof in the fragment is translated into a standard finitary proof.

Finally we explore how to extend these results to a bigger fragment, by relaxing the labelling discipline while retaining (i) the ability to locally certify the validity and (ii) to some extent, the ability to finitize circular proofs.

11:45
Collapses of Fixpoint Alternation Hierarchies in Low Type-Levels of Higher-Order Fixpoint Logic
SPEAKER: Florian Bruse

ABSTRACT. Higher-Order Fixpoint Logic (HFL) is an extension of the modal mu-calculus by a typed lambda calculus. As in the mu-calculus, whether the nesting of least and greatest fixpoints increases expressive power is an important question. It is known that at low type-theoretic levels, the fixpoint alternation hierarchy is strict. We present classes of structures over which the alternation hierarchy of HFL formulas at low type levels collapses into the alternation-free fragment, albeit at the cost of an increase in type level by one.

11:00-12:30 Session 26M: Pseudo-Boolean and Proofs (POS)
Location: Maths L5
11:00
Two flavors of DRAT

ABSTRACT. DRAT proofs have become the de facto standard for certifying SAT solvers' results. State-of-the-art DRAT checkers are able to efficiently establish the unsatisfiability of a formula. However, DRAT checking requires unit propagation, and so it is computationally non-trivial. Due to design decisions in the development of early DRAT checkers, the class of proofs accepted by state-of-the-art DRAT checkers differs from the class of proofs accepted by the original definition. In this paper, we formalize the operational definition of DRAT proofs, and discuss practical implications of this difference for generating as well as checking DRAT proofs. We also show that these theoretical differences have the potential to affect whether some proofs generated in practice by SAT solvers are correct or not.
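
For readers unfamiliar with the format (a toy example of ours, not from the paper): a DRAT proof is a sequence of clause additions and deletions in DIMACS-style notation, each added clause being a resolution asymmetric tautology (RAT) with respect to the formula accumulated so far. For the unsatisfiable CNF with clauses (1 2), (1 -2), (-1 2), (-1 -2), the two-line proof

    1 0
    0

is accepted: the unit clause 1 is RAT on its single literal, after which the empty clause follows by unit propagation.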

11:30
Competitive Sorter-based Encoding of PB-Constraints into SAT

ABSTRACT. A Pseudo-Boolean (PB) constraint is a linear inequality constraint over Boolean variables. A popular approach to solving PB constraints is to transform them to CNF (via BDDs, adders or sorting networks) and process the result using increasingly powerful state-of-the-art SAT solvers. Recent research has favored the approach using Binary Decision Diagrams (BDDs), as evidenced by several new constructions and optimizations. We show that encodings based on comparator networks can still be very competitive. We present a system description of a PB solver based on MiniSat+, which we extended by adding a new construction of a selection network called the 4-Way Merge Selection Network, with a few optimizations based on other solvers. Experiments show that on many instances of popular benchmarks our technique outperforms other state-of-the-art PB solvers.
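
To illustrate the comparator-network idea in miniature (our sketch; the paper's 4-Way Merge Selection Network and its optimizations are considerably more refined), a cardinality constraint x1 + ... + xn <= k can be clausified with a quadratic bubble sorting network:

    # CNF encoding of x1 + ... + xn <= k via a sorting network (DIMACS-style
    # integer literals). Illustrative sketch only: a bubble sort network with
    # O(n^2) comparators instead of an odd-even merge/selection network.
    class CNF:
        def __init__(self, nvars):
            self.nvars, self.clauses = nvars, []
        def fresh(self):
            self.nvars += 1
            return self.nvars
        def comparator(self, a, b):
            # Outputs hi ~ max(a,b) and lo ~ min(a,b), clausified in the
            # monotone direction only, which suffices for "at most k".
            hi, lo = self.fresh(), self.fresh()
            self.clauses += [[-a, hi], [-b, hi], [-a, -b, lo]]
            return hi, lo

    def at_most_k(cnf, xs, k):
        ys = list(xs)
        n = len(ys)
        for i in range(n - 1):            # bubble sort network: after all
            for j in range(n - 1 - i):    # passes, ys is sorted, "truest" first
                ys[j], ys[j + 1] = cnf.comparator(ys[j], ys[j + 1])
        if k < n:
            cnf.clauses.append([-ys[k]])  # forbid the (k+1)-th sorted output

    cnf = CNF(4)
    at_most_k(cnf, [1, 2, 3, 4], 2)       # encode x1 + x2 + x3 + x4 <= 2
    print(len(cnf.clauses), "clauses,", cnf.nvars, "variables")

Asserting the negation of the (k+1)-th sorted output makes unit propagation detect any assignment with more than k true inputs.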

12:00
Divide and Conquer: Towards Faster Pseudo-Boolean Solving
SPEAKER: Jan Elffers

ABSTRACT. The last 20 years have seen dramatic improvements in the performance of algorithms for Boolean satisfiability (so-called SAT solvers), and today conflict-driven clause learning (CDCL) solvers are routinely used for real-world problems in a wide range of application areas. One serious shortcoming of CDCL, however, is that the underlying method of reasoning is quite weak, which can lead to exponential running times for many simple combinatorial problems. A tantalizing solution is to instead use stronger pseudo-Boolean (PB) reasoning, but so far the promise of exponential gains in performance has failed to materialize: the increased theoretical strength seems hard to harness algorithmically, and in many applications CDCL-based methods are still superior.

We propose a modified approach to pseudo-Boolean solving, using division instead of the saturation rule in [Chai and Kuehlmann '05] and other PB solvers. In addition to resulting in a stronger conflict analysis, this also keeps integer coefficient sizes down, which further improves performance. Together with some other optimizations, this yields a solver that performed very well in the Pseudo-Boolean Competitions 2015 and 2016, and that has speed approaching that of CDCL-based methods measured in terms of number of conflicts per second.
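
A small worked example of the difference (ours, using the standard rules; not taken from the paper): consider the constraint 3x + 3y + z >= 3 over 0/1 variables. The saturation rule replaces each coefficient a_i by min(a_i, 3) and thus leaves this constraint unchanged, whereas dividing by 3 and rounding up gives

    ⌈3/3⌉x + ⌈3/3⌉y + ⌈1/3⌉z >= ⌈3/3⌉,   i.e.   x + y + z >= 1,

a sound consequence with strictly smaller coefficients.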

This is a presentation-only submission of a paper to be published at IJCAI '18.

11:00-12:30 Session 26N: SR Invited tutorial: Edith Elkind (SR)
11:00
Strategic behavior in social choice - Part 2

ABSTRACT. Applications of voting systems range from political elections to ranking universities and making funding decisions. In this tutorial, we will discuss various aspects of strategic behavior in voting, with a focus on the analysis of voting equilibria.

11:00-12:30 Session 26P: Frameworks (TERMGRAPH)
11:00
Drags: an algebraic framework for graph rewriting

ABSTRACT. Rewriting with graphs enjoys a long history in computer science, graphs being used to represent not only data structures, but also program structures, and even computational models. Termination and confluence techniques have been elaborated for various generalizations of trees, such as rational trees, directed acyclic graphs, lambda terms and lambda graphs. But the design of tools for rewriting arbitrary graphs has made little progress beyond various categorical definitions dating from the mid seventies and the study of the particular families of graphs listed above. This paper describes the generalization of term rewriting techniques to a general class of multigraphs that we call drags, viz. finite, directed, ordered (multi-)graphs. To this end, we develop a rich algebra of drags which allows us to view graph rewriting in exactly the same way as we view term rewriting.

11:30
A framework for rewriting families of string diagrams

ABSTRACT. We describe a mathematical framework for equational reasoning about infinite families of string diagrams which is amenable to computer automation. The framework is based on context-free families of string diagrams which we represent using context-free graph grammars. We model equations between infinite families of diagrams using rewrite rules between context-free grammars. Our framework represents equational reasoning about concrete string diagrams and context-free families of string diagrams using double-pushout rewriting on graphs and context-free graph grammars respectively. We prove that our representation is sound by showing that it respects the concrete semantics of string diagrammatic reasoning and we show that our framework is appropriate for software implementation by proving important decidability properties.

12:00
On quasi ordinal diagram systems

ABSTRACT. We generalize the ordinal diagram system (a strong non-monotonic ordinal notation system) of Takeuti into a quasi-well-ordering form. We sketch two ways of showing well-quasi-orderedness: one by means of Dershowitz and Tzameret's version of the tree-embedding theorem with gap condition (on path-comparable trees), the other by means of the generalized system of transfinite inductive definitions. Although ordinal diagrams (and other strong ordinal notation systems) could be a good tool for termination proofs of certain pattern-based rewrite-rule programs, there are, at first sight, some difficulties in applying them: the traditional termination-proof paradigm of term rewriting theory uses the monotonicity property and the substitution property, but the strong ordering systems do not have them in general. In this paper, we note some possible uses of the strong well-quasi-ordinal systems for termination proofs, taking as an example pattern-based rewrite rules from the hydra game (of Buchholz).

11:00-12:30 Session 26Q: promotion (TYDI)
Location: Maths Boardroom
11:00
Truly Concurrent Processes in the Calculus of Structures

ABSTRACT. I discuss recent results enabling expressive process calculi to be embedded in extensions of BV. I also discuss perspectives on understanding the true concurrency of these process embeddings.

11:30
Sequentialising nested systems

ABSTRACT. In this work, we investigate the proof theoretic connections between sequent and nested proof calculi. Specifically, we identify general conditions under which a nested calculus can be transformed into a sequent calculus by restructuring the nested sequent derivation (proof) and shedding extraneous information to obtain a derivation of the same formula in the sequent calculus. These results are formulated generally so that they apply to calculi for intuitionistic, normal and non-normal modal logics.

12:00
Nested sequents for modal logics and beyond

ABSTRACT. In this talk, we will review the different ways nested sequents have been used to give cut-free deductive systems for various logics, in particular many that cannot be handled in ordinary (Gentzen) calculi, as well as other applications such as interpolation results, realisation theorems for justification logics, etc.

11:00-12:30 Session 26R: Unification, protocol analysis, and logics (UNIF)
11:00
Knowledge Problems in Equational Extensions of Subterm Convergent Theories

ABSTRACT. We study decision procedures for two knowledge problems critical to the verification of security protocols, namely the intruder deduction and the static equivalence problems. These problems can be related to particular forms of context matching and context unification. Both problems are defined with respect to an equational theory and are known to be decidable when the equational theory is given by a subterm convergent term rewrite system. In this note we extend this to consider a subterm convergent equational term rewrite system defined modulo an equational theory, like Commutativity or Associativity-Commutativity. We show that for certain classes of such equational theories, namely the shallow classes, the two knowledge problems remain decidable.
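For readers outside security-protocol verification, a standard example (ours, not taken from the paper) of a subterm convergent system is the usual model of symmetric encryption and pairing, in which every right-hand side is a subterm of its left-hand side:

    dec(enc(x, y), y) -> x
    fst(pair(x, y))   -> x
    snd(pair(x, y))   -> y

The extension studied here takes such rules modulo an equational theory, e.g. a commutative or associative-commutative operator occurring in the terms.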

11:30
Bounded ACh Unification

ABSTRACT. We consider the problem of unification modulo an equational theory ACh, which consists of a function symbol h that is homomorphic over an associative-commutative operator +. Unification modulo ACh is undecidable, so we define a bounded ACh unification problem. In this bounded version of ACh unification we essentially bound the number of times h can be recursively applied to a term, and only allow solutions that satisfy this bound. There is no bound on the number of occurrences of h in a term, and the + symbol can be applied an unlimited number of times. We give inference rules for solving bounded ACh unification, and we prove that the rules are sound, complete and terminating. We have implemented the algorithm in Maude and give experimental results. We argue that this algorithm is useful in cryptographic protocol analysis.
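Concretely, ACh consists of the following axioms (our rendering):

    (x + y) + z = x + (y + z)
    x + y = y + x
    h(x + y) = h(x) + h(y)

Bounding amounts to fixing some $N$ and keeping only solutions in which $h$ is recursively applied at most $N$ times, so a hypothetical problem such as $h(x) + y =^? h(a) + h(h(b))$ is searched only within that bound.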

12:00
About the unification type of topological logics over Euclidean spaces

ABSTRACT. Topological logics are formalisms for reasoning about topological relations between regions. In this paper, we introduce a new inference problem for topological logics, the unifiability problem, which extends the validity problem by allowing one to replace variables by terms before testing for validity. Our main result is that, within the context of the mereotopology of all regular closed polygons of the real plane, unifiable formulas always have finite complete sets of unifiers.

12:00-12:30 Session 27 (HOR)
12:00
Orthogonality and sequentiality in substructural Linear HRSs

ABSTRACT. Linear HRSs are higher-order rewriting systems having the substructural linear lambda-calculus as a substitution calculus. We explore the properties of orthogonality and sequentiality, focusing on multiplicative and additive conjunction, and on how their behaviour differs from that of `intuitionistic' HRSs.

12:30-14:00Lunch Break
14:00-15:30 Session 28A (CL&C)
14:00
On Natural Deduction for Herbrand Constructive Logics III: The Strange Case of the Intuitionistic Logic of Constant Domains

ABSTRACT. The logic of constant domains is intuitionistic logic extended with the so-called forall-shift axiom, a classically valid statement which implies the excluded middle over decidable formulas. Surprisingly, this logic is constructive and so far this has been proved by cut-elimination for ad-hoc sequent calculi. Here we use the methods of natural deduction and Curry-Howard correspondence to provide a simple computational interpretation of the logic.
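For reference, the forall-shift (constant domain) axiom scheme is usually rendered as $\forall x (A(x) \lor B) \to (\forall x A(x)) \lor B$, where $x$ does not occur free in $B$.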

14:45
Herbrand's Theorem as Higher-Order Recursion

ABSTRACT. Herbrand's Theorem, a fundamental result of classical logic, states that for every valid existential formula $ \exists x F(x) $ there is a finite set of terms $ T $ such that the quantifier-free disjunction $ \bigvee_{t\in T} F(t) $ is a tautology. Herbrand disjunctions can be computed in a number of ways, such as via cut-elimination, re-interpretations of classical logic in intuitionistic logic and term calculi for classical natural deduction.
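A small worked example (ours): the valid formula $\exists x (P(x) \to P(f(x)))$ admits the Herbrand disjunction over $T = \{c, f(c)\}$

    $(P(c) \to P(f(c))) \lor (P(f(c)) \to P(f(f(c))))$,

a propositional tautology: if the first disjunct fails then $P(f(c))$ is false, so the second disjunct holds vacuously.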

In this talk I will outline a language-theoretic representation of Herbrand's Theorem. Specifically, I will introduce a class of higher order recursion schemes (HORS) for computing Herbrand disjunctions from classical sequent calculus proofs. HORS are a generalisation of regular tree grammars to finite types which equate to the simply-typed $\lambda Y$-calculus with non-deterministic reductions. The representation interprets inference rules of proofs as non-terminals whose production rules operate compositionally to extract the term instantiations that result from reductive cut-elimination. Current limitations of the approach, as well as future directions, will be discussed.

14:00-15:30 Session 28B (Domains13)
Location: Maths LT1
14:00
Invited Talk: Scott Domains for Denotational Semantics and Program Extraction

ABSTRACT. This talk will highlight the most important results in domain theory from the perspective of program semantics and will explain why they play a crucial role in program extraction from proofs, a topic which, at first glance, seems to have little connection to domain theory. Surprisingly, Scott-domains suffice for the purpose of program extraction, even for the extraction of nondeterministic and concurrent programs.

14:40
Robust computability notions for types arising in classical analysis

ABSTRACT. A recurring question in computability theory has been whether one can identify a `canonical' notion of computable operation on data of some given kind. If the data are natural numbers (or equivalently finite strings over a finite alphabet), then the existence of such a canonical notion is of course what is claimed by the classical Church-Turing thesis. If the data are themselves of a higher-order nature, then the situation is much more complex, but work of Normann (2000) and Longley (2007) has shown that even widely divergent concepts of higher-order computability will often converge in the class of `hereditarily total' functionals they give rise to.

I will outline some recent extensions of these results that imply the existence of a similarly robust computability notion for many kinds of data arising in traditional mathematical practice, particularly complex analysis. To construct (for example) the space of analytic functions on a given complex domain, or some type of higher-order operators acting on this space, one typically needs recourse not just to function spaces, but also to subset and quotient types. These constitute a non-trivial extension from the point of view of computability theory: the computable functions on a subset S of T are not immediately determined by those on T, as there may well be computable functions on S with no computable extension to T. Nevertheless, our new results show that one still obtains a highly robust and `canonical' computability notion for a wide range of mathematical types arising in this way. For many types of interest, this accords with the notion arising in Type Two Effectivity, but our contribution is to show robustness with respect to variations in the choice of the underlying computability model.

Whilst we believe that our computability notion has good credentials, it is still not the only such notion on the market for the types we have in mind. Drawing on a theorem of Schroeder (2010), we shall advance the conjecture that it is in fact one of just two natural computability notions arising in the arena of (higher-order) classical analysis.

A draft paper covering much of the material to be discussed is available at:

http://homepages.inf.ed.ac.uk/jrl/Research/ubiquity-reloaded3.pdf

15:05
T^\omega-representations of compact sets through dyadic subbases
SPEAKER: Hideki Tsuiki

ABSTRACT. We explore representing the compact subsets of a given represented space by infinite sequences over Plotkin's \T. We show that computably compact computable metric spaces admit a representation of their compact subsets in such a way that compact sets are essentially underspecified points. We can even ensure that a name of an n-element compact set contains n-1 occurrences of \bot. We carry out this study effectively and show that such a T^\omega-representation is effectively obtained from the structures of computably compact computable metric spaces.

Along the way, we introduce the notion of a computable dyadic subbase, and prove that every computably compact computable metric space admits a proper computable dyadic subbase. As an application, we prove some statements about the Weihrauch degree of closed choice for finite subsets of computably compact computable metric spaces.

14:00-15:30 Session 28C (GS)
Location: Maths LT3
14:00
Game semantics: from call-by-push-value and CPS to coalgebra

ABSTRACT. This talk presents the following aspects of game semantics.

Game models of higher-order languages can be seen as reflecting CPS, where a move represents a function call. More specifically, the CPS transform from call-by-push-value translates two kinds of computation into a function call: forcing a thunk, and returning a value. These give rise to the two kinds of move: question and answer. But how should such a correspondence be formulated? In recent years, many authors have given transition systems to describe the operational behaviour that the game semantics is supposed to capture. We consider two questions concerning these. Firstly, transition systems may be more precisely described as coalgebras; so we consider what the functors are. And secondly, whereas the transition system usually studied in this context is open (normal form), there are other forms of bisimulation used for operational reasoning, viz. applicative and environmental, so we consider how these too might be related to game semantics.

14:30
Incomplete information and uniformity in game semantics

ABSTRACT. The strategies which represent programs or proofs in game semantics typically do not have access to the full history of play when making their choices. Game theorists recognise this as the concept of games of imperfect information. An alternative notion of partial information, known as incomplete information, is much less commonplace in game semantics. In this talk we consider the notion of incomplete information, that is, a lack of information about the game itself, in the setting of game semantics of linear logic. Exploiting Harsanyi's reduction of incomplete information to partial information about the payoff structure of games, we show that this incompleteness constrains strategies to exhibit a kind of uniformity in their choice of moves, which entails a full completeness result for multiplicative linear logic, replaying Abramsky and Jagadeesan's result of 25 years ago.

15:00
Asynchronous games fifteen years later

ABSTRACT. The notion of asynchronous game was introduced fifteen years ago in order to investigate the causal structure of innocent strategies in game semantics. Shifting from traditional sequential games to asynchronous games was the critical step behind the discovery of a number of structural properties of games and logic. This includes a truly concurrent formulation of innocence based on local confluence diagrams, and a positionality theorem for innocent strategies. Asynchronous games also played a central role in the emergence of tensorial logic as the logic of dialogue games, in the same way as linear logic is the logic of proof nets. Since its origins, one guideline of the theory has been the observation that asynchronous games and asynchronous strategies are causal structures of the very same nature: in other words, asynchronous strategies are asynchronous games themselves, appropriately mapped into other asynchronous games. In this talk, I will develop this simple idea one step further, and explain how to recover and to compare a number of game semantics with different flavors (alternating and non alternating) from the same categorical combinatorics, performed in the category of small categories.

14:00-15:30 Session 28F: Term Rewriting (IWC)
14:00
Convergence of Simultaneously and Sequentially Unraveled TRSs for Normal Conditional TRSs
SPEAKER: Naoki Nishida

ABSTRACT. Unravelings, which are transformations of a conditional term rewriting system (CTRS, for short) into an unconditional term rewriting system (TRS, for short),  are useful to prove confluence and operational termination of some CTRSs. A simultaneous unraveling has been proposed for normal 1-CTRSs and a sequential one has been proposed for deterministic 3-CTRSs, the class of which includes normal 1-CTRSs. In this paper, we first show that for a normal 1-CTRS, the simultaneously unraveled TRS is orthogonal iff so is the sequentially unraveled one. Then, we show that for a normal 1-CTRS, if the simultaneously unraveled TRS is terminating, then so is the sequentially unraveled one. Finally, we show that for a normal 1-CTRS with termination of the unraveled TRS, the simultaneously unraveled TRS is locally confluent iff so is the sequentially unraveled one.
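To fix ideas, one common form of unraveling (our example, not taken from the paper) replaces a conditional rule by unconditional ones using a fresh symbol U:

    f(x) -> g(x) <= p(x) ->* true        (normal 1-CTRS rule)

becomes

    f(x) -> U(p(x), x)
    U(true, x) -> g(x)

With several conditions, the simultaneous unraveling evaluates them all in one step, while the sequential one chains a fresh symbol per condition; this is where the two transformations diverge.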

14:30
Towards a Verified Decision Procedure for Confluence of Ground Term Rewrite Systems in Isabelle/HOL

ABSTRACT. Confluence is a decidable property of ground rewrite systems. We present a formalization effort in Isabelle/HOL of the decision procedure based on ground tree transducers.

15:00
Certified Ordered Completion
SPEAKER: Sarah Winkler

ABSTRACT. On the one hand, ordered completion is a fundamental technique in equational theorem proving that is employed by automated tools. On the other hand, the complexity of such tools makes them inherently error prone. As a remedy to this situation we give an Isabelle/HOL formalization of ordered rewriting and completion that comes with a formally verified certifier for ordered completion proofs. By validating generated proof certificates, our certifier increases the reliability of ordered completion tools.

14:00-15:30 Session 28G: Formalization (LFMTP)
Location: Maths LT2
14:00
Formalization in Constructive Type Theory of the Standardization Theorem
SPEAKER: Martín Copes

ABSTRACT. We present a full formalization in Martin-L\"of's Constructive Type Theory of the Standardization Theorem for the Lambda Calculus using first-order syntax with one sort of names for both free and bound variables and Stoughton's multiple substitution. Our version is based on a proof by Ryo Kashima, in which a notion of $\beta$-reducibility with a standard sequence is captured by an inductive relation. The proof uses only structural induction over the syntax and the relations defined, which is possible due to the specific formulation of substitution that we employ. The whole development has been machine-checked using the system Agda.

14:30
Formalisation of Barendregt's Variable Convention for Generic Structures with Binders

ABSTRACT. We introduce a universe of regular datatypes with variable binding information, for which we define generic formation, elimination, and induction operators. We then define a generic $\alpha$-equivalence relation over the types of the universe based on name-swapping, and derive iteration and induction principles which work modulo $\alpha$-conversion capturing Barendregt's Variable Convention. We instantiate the resulting framework so as to obtain the $\lambda$-Calculus and System F, for which we derive substitution operations and substitution lemmas for $\alpha$-conversion and substitution composition. The whole work is carried out in Constructive Type Theory and machine-checked by the system Agda.

15:00
What Does This Notation Mean Anyway? BNF-style notation as it is actually used
SPEAKER: David Feller

ABSTRACT. Following the introduction of BNF notation by Backus for the Algol 60 report and subsequent notational variants, a metalanguage involving formal “grammars” has developed for discussing structured objects in Computer Science and Mathematical Logic. We refer to this offspring of BNF as Math-BNF or MBNF, to the original BNF and its notational variants just as BNF, and to aspects common to both as BNF-style. MBNF is sometimes called abstract syntax, but we avoid that name because MBNF is in fact a concrete form and has a more abstract form. What all BNF-style notations share is the use of production rules roughly of this form:

    ◯ ⩴ □₁ | ⋯ | □ₙ

Normally, such a rule says “every instance of □ᵢ for i ∈ {1, ..., n} is also an instance of ◯”.

MBNF is distinct from BNF in the entities and operations it allows. Instead of strings, MBNF builds arrangements of symbols that we call math-text. Sometimes “syntax” is defined by interleaving MBNF production rules and other mathematical definitions that can contain chunks of math-text.

There is no clear definition of MBNF. Readers do not have a document which tells them how MBNF is to be read and must learn MBNF through a process of cultural initiation. To the extent that MBNF is defined, it is largely through examples scattered throughout the literature.

This paper gives MBNF examples illustrating some of the differences between MBNF and BNF. We propose a definition of syntactic math text (SMT) which handles many (but far from all) uses of math-text and MBNF in the wild. We aim to balance the goal of being accessible and not requiring too much prerequisite knowledge with the conflicting goal of providing a rich mathematical structure that already supports many uses and has possibilities to be extended to support more challenging cases.

14:00-15:30 Session 28H (Linearity/TLLA)
14:00
The structure of non decomposable connectives of linear logic

ABSTRACT. This paper studies the so-called generalized multiplicative connectives of linear logic, focusing on the question of finding the ``non-decomposable'' ones, i.e., those that cannot be expressed as combinations of the default binary connectives of multiplicative linear logic, Tensor and Par. In particular, we concentrate on generalized connectives of a surprisingly simple form, called ``entangled connectives'', and prove a characterization theorem giving a criterion for identifying the decomposable (resp. non-decomposable) entangled connectives.

14:20
On the Taylor expansion of λ-terms and the groupoid structure of their rigid approximants

ABSTRACT. We show that the normal form of the Taylor expansion of a λ-term is isomorphic to its Böhm tree, improving Ehrhard and Regnier’s original proof along three independent directions.

First, we simplify the final step of the proof by following the left reduction strategy directly in the resource calculus, avoiding the introduction of an ad-hoc abstract machine.

We also introduce a groupoid of permutations of copies of arguments in a rigid variant of the resource calculus, and relate the coefficients of Taylor expansion with this structure, while Ehrhard and Regnier worked with groups of permutations of occurrences of variables.

Finally, we extend all the results to a non-deterministic setting: by contrast with previous attempts, we show that the uniformity property that was crucial in Ehrhard and Regnier’s approach can be preserved in this setting.

14:55
From Linear Logic to Cyclic Sharing

ABSTRACT. We present a translation from Multiplicative Exponential Linear Logic to a simply-typed lambda calculus with cyclic sharing. This translation is derived from a simple observation on the Int-construction on traced monoidal categories. It turns out that the translation is a mixture of the call-by-name CPS translation and the Geometry of Interaction-based interpretation.

14:00-15:30 Session 28I: Contributed papers (NLCS)
14:00
Speakers in vats: simulating model-theoretic alignment with distributional semantics

ABSTRACT. One long-standing puzzle in semantics is the ability of speakers to refer successfully in spite of holding different models of the world. This puzzle is famously illustrated by the cup/mug example: if two speakers disagree on whether a specific entity is a cup or a mug (i.e. if their interpretation functions differ), how can they align so that the entity can still be talked about?

Another puzzle, coming to us through lexical and distributional semantics, is that word meaning seems to be infinitely flexible, indeed much more so than the traditional notion of sense would have it. This makes the alignment process between speakers even more unpredictable.

In this talk, I will report on a series of experiments aiming at investigating what differences in language use can tell us about the ability of speakers to align at a model-theoretic level. Since speaker-dependent data is extremely hard to obtain, I propose a new methodology to 'spawn' speakers from a reference distributional semantic space, corresponding to different types of variations in language use. I show how and where alignment is disturbed, and give a theoretical account of how such perturbations relate to potentially catastrophic differences in world representations.

15:00
Graph Knowledge Representations for SICK

ABSTRACT. Determining semantic relationships between sentences is essential for machines that understand and reason with natural language. Despite the big successes of neural networks, end-to-end neural architectures still fail to achieve acceptable performance for textual inference, perhaps due to a lack of adequate datasets for learning. Recently, large datasets have been constructed, e.g. SICK, SNLI and MultiNLI, but it is not clear how trustworthy these datasets are. This paper describes work on an expressive open-source semantic parser, GKR, that creates graphical representations of sentences used for further semantic processing, e.g. for natural language inference, reasoning and semantic similarity. GKR is inspired by the Abstract Knowledge Representation (AKR), which separates out the conceptual and contextual levels of representation that deal respectively with the subject matter of a sentence and its existential commitments. We recall work investigating SICK and its problematic annotations, and propose to use GKR as a better representation for the semantics of SICK sentences.

14:00-15:30 Session 28J: Contributed talks (PARIS)
Location: Maths L4
14:00
Useless Explicit Induction Reasoning

ABSTRACT. In the setting of classical first-order logic with inductive predicates, we give as an example a theory for which conjectures that cannot be proved without induction reasoning remain unprovable even when non-trivial explicit induction reasoning is added.

14:45
Towards the Automatic Construction of Schematic Proofs
SPEAKER: David Cerna

ABSTRACT. In recent years \emph{schematic representations} of proofs by induction have been studied for their interesting proof-theoretic properties, i.e.\ allowing extensions of Herbrand's theorem to certain types of inductive proofs. Most of the work concerning these proof-theoretic properties presented schematic proofs as sets of proofs connected by \emph{links}, together with a global soundness condition. Recently, the $\mathcal{S}\mathit{i}\mathbf{LK}$-calculus was introduced, which provides inferences for expanding the sets of proofs within a schematic proof as well as for introducing links without violating the soundness condition. In this work we discuss a simplification of the $\mathcal{S}\mathit{i}\mathbf{LK}$-calculus which isolates the essential mechanisms and provides a path towards the automated construction of schematic proofs.

14:00-15:30 Session 28K (PC)
Location: Maths L6
14:00
Space Proof Complexity beyond Resolution

ABSTRACT. Starting from results and techniques to prove lower and upper bounds for the space complexity of resolution refutations, we will show some results about the state of the art of space bounds for the complexity of proofs in proof systems beyond resolution, such as polynomial calculus, cutting planes and Frege systems, underlining open problems and their importance.

15:00
Branching Program Complexity of Canonical Search Problems and Proof Complexity of Formulas

ABSTRACT. It is well-known that there is an equivalence between ordered resolution and ordered binary decision diagrams (OBDD); i.e., for any unsatisfiable formula phi, the size of the smallest ordered resolution refutation of phi is equal to the size of the smallest OBDD for the canonical search problem corresponding to phi. But there is no such equivalence between resolution and branching programs (BP).

In this project, we study different proof systems equivalent to classes of branching programs between BP and OBDD. These proof systems are similar to roABP-IPS, an algebraic proof system defined by Forbes et al. and based on the ideal proof system introduced by Grochow and Pitassi.

We show that proof systems equivalent to b-OBDD are not comparable with resolution and cutting planes (for b > 1). We also prove exponential lower bounds for these proof systems on Tseitin formulas. Additionally, we show that proof systems equivalent to (1,+b)-BP are strictly stronger than regular resolution for b > 0; moreover, resolution does not p-simulate (1,+b)-BP for b > 5.

14:00-15:30 Session 28L: Determinacy, compositionality (SR)
14:00
Compositional Game Theory
SPEAKER: Neil Ghani

ABSTRACT. We introduce open games as a compositional foundation of economic game theory. A compositional approach potentially allows methods of game theory and theoretical computer science to be applied to large-scale economic models for which standard economic tools are not practical. An open game represents a game played relative to an arbitrary environment and to this end we introduce the concept of coutility, which is the utility generated by an open game and returned to its environment. Open games are the morphisms of a symmetric monoidal category and can therefore be composed by categorical composition into sequential move games and by monoidal products into simultaneous move games. Open games can be represented by string diagrams which provide an intuitive but formal visualisation of the information flows. We show that a variety of games can be faithfully represented as open games in the sense of having the same Nash equilibria and off-equilibrium best responses.

14:40
Concurrent games and semi-random determinacy

ABSTRACT. Consider concurrent, infinite duration, two-player win/lose games played on graphs. If the winning condition satisfies some simple requirement, the existence of Player 1 winning (finite-memory) strategies is equivalent to the existence of winning (finite-memory) strategies in finitely many derived one-player games. Several classical winning conditions satisfy this simple requirement.

Under an additional requirement on the winning condition, the non-existence of Player 1 winning strategies from all vertices is equivalent to the existence of Player 2 stochastic strategies winning almost surely from all vertices. Only few classical winning conditions satisfy this additional requirement, but a fairness variant of o

14:00-15:30 Session 28M: Invited Talk, and Semantics (TERMGRAPH)
14:00
Modeling terms by graphs with structure constraints (two illustrations)

ABSTRACT. Highlighting the usefulness of graph techniques for problems that have been approached predominantly as questions about terms, I want to explain two examples from my past and current work: increasing sharing in functional programs, and tackling problems about Milner's process interpretation of regular expressions. The unifying element consists in modeling terms by term graphs or transition graphs with structure constraints (higher-order features or labelings with added conditions), and in being able to go back and forth between terms and graphs.

The first example concerns the definition, and efficient implementation, of maximal sharing for the higher-order terms in the lambda calculus with letrec. Jan Rochel and I showed that higher-order terms can be represented by, appropriately defined, higher-order term graphs, that those can be encoded as first-order term graphs, and that these can in turn be represented as deterministic finite-state automata (DFAs). Via these correspondences and DFA minimization, maximal shared forms of higher-order terms can be computed.

The setting for the second illustration is the process interpretation of regular expressions, which yields nondeterministic finite-state automata (NFAs) whose equality is studied under bisimulation. Unlike for the standard language interpretation, not every NFA can be expressed by a regular expression. I will explain Milner's recognizability and axiomatization problems, and some of the results/partial results that have been obtained. Finally I will explain my current approach in work together with Wan Fokkink: inspired by Milner's notion of `loop' we use labelings of process graphs that witness direct expressibility by a regular expression.

15:00
Semantics-Preserving DPO-Based Term Graph Rewriting
SPEAKER: Wolfram Kahl

ABSTRACT. Term graph rewriting is important as ``conceptual implementation'' of the execution of functional programs, and of data-flow optimisations in compilers. One way to define term graph transformation rule application is via the well-established and intuitively accessible double-pushout (DPO) approach; we present a new result proving semantics preservation for such DPO-based term graph rewriting.

14:00-15:30 Session 28N: Medial (TYDI)
Location: Maths Boardroom
14:00
Two Unifying Structural Principles

ABSTRACT. It is possible to design proof systems for a vast range of logics, including all the common ones, by adopting two general proof constructors:

(1) the subatomic shape recently introduced by Andrea Aler Tubella, and

(2) a very general notion of formal substitution that is currently being explored by Benjamin Ralph and that subsumes standard quantification.

The two constructors could be considered two design principles, or, in other words, two ways of looking at proofs. In this talk, I will show how, by adopting them, we get a good starting point for designing proof systems that naturally yield canonical proofs of minimal size and that allow for a unified normalisation theory.

15:00
Deep Inference, Herbrand's Theorem and Expansion Proofs

ABSTRACT. The reduction of undecidable first-order logic to decidable propositional logic via Herbrand's theorem has long been of interest to theoretical computer science, with the notion of a Herbrand proof motivating the definition of expansion proofs, a compositional formalism that expresses the strictly first-order content of a proof. Recently, a simple deep inference system for first-order logic, KSh2, has been constructed based around the notion of expansion proofs, as a starting point to developing a rich proof theory around this foundation. This abstract summarises the author's recent paper, with a slight change of focus due to the nature of the workshop.

14:00-15:30 Session 28P: Invited Talk, and Unification Modulo (UNIF)
14:00
Compressed Term Unification: Results, uses, open problems, and hopes

ABSTRACT. Already in the classic first-order unification problem, the choice of a suitable formalism for term representation has a significant impact in computational efficiency. One can find other instances of this situation in some variants of second-order unification, where representing partial solutions efficiently leads to better algorithms.

In this talk I will present some compression schemes for terms and discuss computational complexity results for variants of first- and second-order unification on compressed terms. I'll show how these results build on each other, and discuss open problems in compressed term unification as well as potential approaches to solutions.

15:00
ACUI Unification modulo Ground Theories

ABSTRACT. It is well-known that the unification problem for a binary associative-commutative-idempotent function symbol with a unit (ACUI-unification) is polynomial for unification with constants and in NP for general unification. We show that the same is true if we add a finite set of ground identities.
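For reference, ACUI is the theory of a binary symbol $+$ with unit $0$ given by the axioms (our rendering) $(x + y) + z = x + (y + z)$, $x + y = y + x$, $x + 0 = x$ and $x + x = x$. Modulo ACUI a term behaves like the set of atoms it sums, so for instance $x + a =^? a + b$ has among its unifiers $\{x \mapsto b\}$ and $\{x \mapsto a + b\}$.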

14:30-15:30 Session 29A: Commodore (LOLA)
14:30
Invited talk: Semantic Equivalence Checking for HHVM Bytecode

ABSTRACT. We describe a semantic differencing tool used to compare the bytecodes generated by two different compilers for Hack/PHP at Facebook. The tool is a prover for a simple relational Hoare logic for low-level code and is used in testing, allowing the developers to focus on semantically significant differences between the outputs of the two compilers.

14:30-15:30 Session 29B: MAXSAT & local search (POS)
Location: Maths L5
14:30
Local-Style Search in the Linear MaxSAT Algorithm: A Computational Study of Solution-Based Phase Saving

ABSTRACT. We study solution-based phase saving in the linear maxSAT algorithm. Contrary to conventional phase saving, which assigns values to variables based on their most recent assignment during the search, solution-based phasing selects the values according to the best solution found so far. Thus, the algorithm focuses its efforts on the region ``close'' to the current best solution. Such an approach resembles techniques used in local search algorithms, where small incremental perturbations are performed on the best known solution. Although the algorithm has appeared in the literature before, it has not been used in the annual maxSAT evaluations and thus its position among modern solvers is not clear. We implemented solution-based phase saving in Open-WBO and evaluated its performance on benchmarks from the incomplete track of the latest maxSAT evaluation 2017 and the international timetabling competition 2011. The experimental results demonstrate that the approach is highly effective when compared to the baseline linear maxSAT algorithm, outperforming the rank-one solvers from the 60- and 300-second unweighted competition tracks. When compared to the winner of the weighted categories, maxroster, our analysis reveals that the approach provides substantially better results for a number of applications, even competing against specialized timetabling algorithms, although maxroster achieves a higher overall competition score. Hence, we argue solution-based phase saving should be part of the standard machinery for maxSAT.
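A minimal sketch of the difference (ours; the names are hypothetical, and real solvers integrate this into the CDCL decision loop):

    saved_phase = {}      # last value assigned to each variable during search
    best_solution = {}    # best model found so far by the linear maxSAT loop

    def pick_phase(v, solution_based=True):
        # Solution-based phase saving follows the best solution found so far,
        # keeping the search "close" to it, in local-search style.
        if solution_based and v in best_solution:
            return best_solution[v]
        # Conventional phase saving falls back to the most recent assignment.
        return saved_phase.get(v, False)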

15:00
MLIC: A MaxSAT-Based framework for learning interpretable classification rules

ABSTRACT. The wide adoption of machine learning approaches in industry, government, medicine and science has renewed the interest in interpretable machine learning: many decisions are too important to be delegated to black-box techniques such as deep neural networks or kernel SVMs. Historically, problems of learning interpretable classifiers, including classification rules or decision trees, have been approached by greedy heuristic methods, as essentially all the exact optimization formulations are NP-hard. Our primary contribution is a MaxSAT-based framework, called MLIC, which allows principled search for interpretable classification rules expressible in propositional logic. Our approach benefits from the revolutionary advances in the constraint satisfaction community to solve large-scale instances of such problems. In experimental evaluations over a collection of benchmarks arising from practical scenarios we demonstrate its effectiveness: we show that the formulation can solve large classification problems with tens or hundreds of thousands of examples and thousands of features, and can provide a tunable balance of accuracy vs. interpretability. Furthermore, we show that in many problems interpretability can be obtained at only a minor cost in accuracy.
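To illustrate the objective being optimised (this is a toy brute-force stand-in, not the paper's MaxSAT encoding; the data and penalty weight are invented):

    from itertools import combinations

    X = [(1, 0, 1), (0, 1, 1), (1, 1, 0), (0, 0, 1)]   # Boolean feature vectors
    y = [1, 1, 0, 0]                                    # class labels
    lam = 0.5                                           # sparsity penalty

    def clause_predicts(clause, x):
        return int(any(x[i] for i in clause))           # OR over chosen features

    # Minimise misclassifications plus lam * (rule size): the tunable
    # accuracy-vs-interpretability trade-off mentioned above.
    best = min(
        (sum(clause_predicts(c, x) != t for x, t in zip(X, y)) + lam * len(c), c)
        for k in range(1, 4) for c in combinations(range(3), k)
    )
    print("cost and clause:", best)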

The primary objective of the paper is to show that recent advances in the MaxSAT literature make it realistic to find optimal (or very high quality near-optimal) solutions to large-scale classification problems. We also hope to encourage researchers in both interpretable classification and in the constraint programming community to take it further and develop richer formulations, and bespoke solvers attuned to the problem of interpretable ML.

[We have submitted the paper concurrently to CP-18 and have therefore marked it in the "presentation-only" category.]

15:00-15:30 Session 30A (HDRA)
Location: Blavatnik LT2
15:00
Towards a fully formalized definition of syllepsis in weak higher categories

ABSTRACT. The aim of this presentation is to illustrate the use of formal methods in order to reason in the theory of weak omega-categories. The formalism considered here is based on the type theory CaTT introduced by Finster and Mimram, extended with some metatheory on top of it, together with an implementation.

We first present how the system works and can be used, and then develop some ``real-world'' examples, such as the definition of the braidings in k-tuply monoidal omega-categories (following the terminology of Baez), k >= 2, with the aim of showing that we have a syllepsis in the case k <= 3. These developments have motivated metatheoretical improvements to the proof assistant, which we also present and discuss here, in order to handle and partly automate large proofs.

15:00-15:30 Session 30B (HOR)
15:00
Refining Properties of Filter Models: Sensibility, Approximability and Reducibility

ABSTRACT. In this paper, we study the delicate link between the properties of sensibility and approximability of models of the untyped λ-calculus. Approximability is known to be a slightly, but strictly, stronger property than sensibility. However, we will see that, so far, every (filter) model that has been proven sensible is in fact approximable. We explain this result as a weakness of the only known approach to sensibility: Tait's reducibility candidates and their realizability variants.

In fact, we reduce the approximability of a filter model D for the λ-calculus to the sensibility of D, but for an extension of the λ-calculus that we call the λ-calculus with D-tests. Then we show that traditional proofs of sensibility of D for the λ-calculus extend smoothly to this λ-calculus with D-tests.

15:00-15:30 Session 30C (HoTT/UF)
Location: Blavatnik LT1
15:00
Geometric realization of truncated semi-simplicial sets meta-constructed within HoTT
15:30-16:00Coffee Break
16:00-18:00 Session 31A (CL&C)
16:00
Admissible tools in the kitchen of intuitionistic logic

ABSTRACT. The usual reading of logical implication φ → ψ as “if φ then ψ” fails in intuitionistic logic: there are formulas φ and ψ such that φ → ψ is not provable, even though ψ is provable whenever φ is provable. Intuitionistic rules apparently cannot derive interesting meta-properties of the logic and, from a computational perspective, the programs corresponding to intuitionistic proofs are not powerful enough. We propose a term assignment corresponding to the admissible rules of intuitionistic propositional logic, and employ it to study the admissible principles. As an example we provide a study of Kreisel-Putnam logic KP: we give a Curry-Howard correspondence for this system, proving the disjunction property and all the good constructive properties. This is a first step towards understanding what the programs of admissible rules look like.

16:30
The “duality of computation” from a fibrational perspective

ABSTRACT. We revisit the symmetric mu-mutilde-framework introduced by Curien and Herbelin (later baptised System L by Munch-Maccagnoni) from the perspective of fibred category theory, i.e. using fibred spans and cospans and their relation to (CAT-valued) distributors. The mu-mutilde-framework provides direct computational interpretations for proofs in variants of Gentzen’s classical (multi-conclusioned) sequent calculus LK, exhibiting a duality between call-by-name and call-by-value evaluation. Not globally fixing an evaluation strategy however gives rise to a non-confluent critical pair. This problem has been further analyzed by Munch-Maccagnoni via polarized linear logic and the notion of duploid, where the critical pair is exposed as a failure of associativity of composition. One important motivation to revisit duploids and the mu-mutilde-framework from the perspective of fibred category theory is to understand these in a setting where usually dependent type theories are interpreted and to consider what kind of dependencies might be appropriate for such a symmetric system beyond a mere combination with ”usual” asymmetric intuitionistic dependent types. This question leads in a natural way to consider co- and contravariant reindexing, thereby providing for example a fresh perspective on the mu-mutilde-critical pair as choice between the two notions of reindexing, left and right sequent calculus rules as left and right actions (like for module structures in algebra) and polarity shifts as related to (op)cartesian liftings. It also suggests a refinement of the notions of “call-by-name” and “call-by-value”. This is still work in progress, and feedback will be appreciated.

17:00
A Denotational Model of the Typed Lambda-Mu Calculus

ABSTRACT. This paper gives a denotational model of the typed lambda-mu calculus. In this model, the absurdity type $\bot$ is interpreted as the type of truth values, and each type is divided into infinitely many ranks, each of which forms a Boolean algebra. The double-negation elimination, the signature deduction rule of classical logic, is interpreted as a move from a member of domain $D$ of type $\neg \neg A (= (A \to \bot) \to \bot)$, rank $\leq n$, to a member of domain $D'$ of type $A$, rank $\leq n+1$. $\mu$-variables are interpreted as variables ranging over a certain restricted domain.

17:30
Some ideas on cut-elimination for cyclic arithmetic proofs

ABSTRACT. Cyclic arithmetic, proposed by Simpson '17, is a deduction system in the language of arithmetic where proofs may be non-wellfounded, as opposed to usual approaches to infinitary proof theory via an omega-rule. Naturally, some form of correctness condition must be imposed to ensure sound reasoning, and this is implemented by a 'trace' condition at the level of the 'flow graph' of the proof (cf. Buss '91). 'Cyclic arithmetic' (CA) itself consists of such proofs that are regular, i.e. that have only finitely many distinct subtrees, and so may be expressed as a finite directed graph (with cycles). It was independently shown by Simpson '17 and Berardi & Tatsuta '17 that CA and PA are equivalent, and recently that logical complexity in the two theories is similar (Das '18).

We consider the issue of cut-elimination for CA. Such a procedure cannot preserve regularity of proofs, so the issue is to show that cut-elimination is 'productive'. To this end, 'continuous' cut-elimination procedures have long been studied in the proof theory of arithmetic, originating from Mints '78. However, the difficulties arising from the 'repetition' rule, ensuring continuity, and the need to preserve trace conditions seem to suggest that an alternative approach is pertinent.

In this work-in-progress, we show how cut-elimination can be similarly achieved by a certain reduction to finitary cut-elimination. We compute certain 'runs' through a non-wellfounded proof which must be finite thanks to the trace condition, and show that these are preserved in the image of cut-elimination. Productivity follows since cut-free runs must be non-empty, and validity follows by the finiteness of runs.

The computation of runs, naively, makes use of a semantic oracle, though we believe that this can be replaced by purely syntactic concepts via a 'geometry of interaction' approach to cut-elimination, cf. Girard '89. This would yield a novel proof of the consistency of PA indirectly via CA.

REFERENCES

Berardi & Tatsuta '17. Equivalence of inductive definitions and cyclic proofs under arithmetic. Proceedings of LICS ’17.

Buss '91. The undecidability of k-provability. Annals of Pure and Applied Logic.

Das '18. On the logical complexity of cyclic arithmetic. Preprint.

Girard '89. Towards a geometry of interaction. Proceedings of Categories in Computer Science and Logic.

Mints '78. Finite investigations of transfinite derivations. Journal of Soviet Mathematics.

Simpson '17. Cyclic Arithmetic is Equivalent to Peano Arithmetic. Proceedings of FoSSaCS ’17.

16:00-16:50 Session 31B (Domains13)
Location: Maths LT1
16:00
Higher-dimensional categories: recursion on extensivity

ABSTRACT. Bicategories have been used for many years to model computational phenomena such as concurrency and binders. The collectivity, Bicat, of bicategories has the structure of a tricategory; tricategories have occasionally appeared explicitly, and more often implicitly, in the programming semantics literature. But what is a weak n-category in general? Strict n-categories for arbitrary n have been used to model concurrency and in connection with rewriting. So it seems only a matter of time until weak n-categories for arbitrary n prove to be of value for programming semantics too. Here, we explore, enrich, and otherwise mildly generalise a prominent definition by Batanin, as refined by Leinster, together with the surrounding theory: they assumed knowledge of sophisticated mathematics, glossed over the inherent recursion, and did not consider the concerns of programming semantics.

16:25
A logical view of complex analytic maps

ABSTRACT. We introduce the localic derivative of complex analytic functions in the sense of program logic, giving rise to a Stone duality result for these functions. This extends the related work on real-valued functions on real finite dimensional Euclidean spaces to functions of complex variables.

16:00-18:00 Session 31C (GS)
Chair:
Location: Maths LT3
16:00
Games and the Lambda Cube

ABSTRACT. Type theory and game semantics are both foundational and practically applicable theories of meaning for programming languages and logics. Indeed, types are at the heart of game semantics, which typically identifies them with games themselves, although moving beyond simple type theory can be challenging. I will give an overview of research into the game semantics of richer typing systems, such as those characterised by Barendregt's "lambda-cube", including:

* Polymorphism: generic models of System F and their extension with subtype polymorphism and bounded quantification.

* Dependent Types: recent work developing semantics of Martin-Löf dependent type theory, and extending it with "semantic universes" by relating intensional and extensional models.

* Type-operators: some steps towards the semantics of higher-order polymorphism (i.e. quantification over type-operators), which are used in interpreting modules, datatype generic programming and generalised algebraic datatypes.

16:30
Games, Geometry of Interaction, Reversible Computations, Lambda Calculus

ABSTRACT. In [Abr05], S. Abramsky introduced various linear/affine combinatory algebras consisting of partial involutions over a suitable formal language, in order to discuss reversible computation in a game-theoretic setting. These algebras arise as instances of the general paradigm explored by E. Haghverdi, called "Abramsky's Programme" [Hag00,AHS02], which amounts to defining a lambda-algebra starting from a GoI Situation in a traced symmetric monoidal category.

In this paper, we investigate this construction from the point of view of the model theory of lambda-calculus. We focus on the strictly linear and the strictly affine parts of Abramsky's Affine Combinatory Algebras. Finally, we briefly outline how to encompass the full algebra.

The gist of our approach is that the GoI interpretation of a term based on involutions is "dual" to the principal type of the term, with respect to the simple types discipline for a linear/affine lambda-calculus. In the general case the type discipline and the calculus need to be extended, respectively, with intersection and !-types, and !-abstractions.

Our analysis unveils three conceptually independent, but ultimately equivalent, accounts of application in the lambda-calculus: beta-reduction, the GoI application of involutions based on symmetric feedback (Girard's Execution Formula), and unification of principal types.

Our result provides an answer, in the strictly affine case, to the question raised in [Abr05] of characterising the partial involutions arising from bi-orthogonal pattern matching automata, which are denotations of affine combinators, and it points to the answer to the full question. Furthermore, we prove that the strictly linear combinatory algebra of partial involutions is a strictly linear lambda-algebra, albeit not a combinatory model, while both the strictly affine combinatory algebra and the full affine combinatory algebra are not. In order to check all the necessary equations involved in the definition of an affine lambda-algebra, we implement the application of involutions in Erlang.

[Abr05] S. Abramsky, "A Structural Approach to Reversible Computation", Theoretical Computer Science, vol. 347 (2005), no. 3, pp. 441-464.

[AHS02] S. Abramsky, E. Haghverdi, P. Scott, "Geometry of Interaction and linear combinatory algebras", Mathematical Structures in Computer Science, vol. 12 (2002), no. 5, pp. 625-665.

[Hag00] E. Haghverdi, "A Categorical Approach to Linear Logic, Geometry of Proofs and Full Completeness", PhD Thesis, University of Ottawa, 2000.

Joint work with Alberto Ciaffaglione, Furio Honsell, Ivan Scagnetto (University of Udine, Italy)

17:00
Characterizing observational equivalence for PCF terms within richer languages

ABSTRACT. I will explore the idea that if L is a sublanguage of L', it may happen that the observational equivalence relations of L and L' are both hard to analyse, but that L'-observational equivalence for L-terms is relatively tame. For example, we know from game semantics that if L is PCF, then in fact many different extensions L' all yield the same well-understood equivalence relation on L, namely equality of innocent strategies, or equivalently of PCF Böhm trees.

I will discuss various ways of analysing and axiomatizing this equivalence, and present a general theorem showing that any language L' satisfying some mild conditions will give rise to this equivalence on PCF terms.

16:00-18:00 Session 31D (HDRA)
Location: Blavatnik LT2
16:00
Minimal Backus FP is Turing Complete

ABSTRACT. In his 1977 Turing Award address, John Backus introduced the model of functional programming called "FP". FP is a descendant of the Herbrand-Gödel notion of recursive definability and the ancestor of the programming language Haskell. One reason that FP is attractive is that it provides "an algebra of functional programs". However, Backus did not believe that basic FP was powerful enough: "FP systems have a number of limitations... If the set of primitive functions and functional forms is weak, it may not be able to express every computable function." And he moved on to stronger systems. It turns out that, in this respect, Backus was mistaken. Here we shall show that FP can compute every partial recursive function. Indeed we shall show that FP can simulate the behavior of an arbitrary Turing machine. Our method for doing this is the following. We first observe that the equational theory of Cartesian monoids is a fragment of FP. Cartesian monoids are rather simple algebraic structures of which you know many examples. They also travel under many assumed names, such as Cantor algebras, Jonsson-Tarski algebras, and Freyd-Heller monoids. This theory, together with fixed points for all Cartesian monoid polynomials, is contained in FP. Now Cartesian monoids with fixed points can be studied using rewrite techniques, learned from lambda calculus, including confluence and standardization. Turing machines can then be simulated: transitions correspond to fixed points, and computations correspond to standard reductions to normal form.
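For reference, in one standard presentation a Cartesian monoid is a monoid $(M, \circ, I)$ with constants $L, R$ and a pairing $\langle -, - \rangle$ satisfying $L \circ \langle x, y \rangle = x$, $R \circ \langle x, y \rangle = y$, $\langle x, y \rangle \circ z = \langle x \circ z, y \circ z \rangle$ and $\langle L, R \rangle = I$.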

16:30
String data structures for Chinese monoids

ABSTRACT. The structure of the Chinese monoid appeared in the classification of monoids whose growth function coincides with that of the plactic monoid. In this work, we deal with presentations of the Chinese monoid from the rewriting theory perspective, using the notion of string data structures. We define a string data structure associated to the Chinese monoid using the insertion algorithm on Chinese staircases. As a consequence, we construct a finite semi-quadratic convergent presentation of the Chinese monoid and we extend it into a finite coherent presentation of this monoid.

17:00
Generalizations of the associative operad and convergent rewrite systems

ABSTRACT. The associative operad is the quotient of the magmatic operad by the operad congruence identifying the two binary trees of degree $2$. We introduce here a generalization of the associative operad depending on a nonnegative integer $d$, called $d$-comb associative operad, as the quotient of the magmatic operad by the operad congruence identifying the left and the right comb binary trees of degree $d$. We study the case $d = 3$ and provide an orientation of its space of relations by using rewrite systems on trees and the Buchberger algorithm for operads to obtain a convergent rewrite system.

17:30
Normal forms for planar connected string diagrams

ABSTRACT. In the graphical calculus of planar string diagrams, equality is generated by the left and right exchange moves, which swap the heights of adjacent vertices. We show that for connected diagrams the left- and right-handed exchanges each give strongly normalizing rewrite strategies. We show that these strategies terminate in O(n^3) steps, where n is the number of vertices. We also give an algorithm to directly construct the normal form, and hence determine isotopy, in O(n∙m) time, where m is the number of edges.

16:00-17:00 Session 31E (HOR)
16:00
Normalization and Taylor expansion of lambda-terms

ABSTRACT. The aim of this work is to characterize three fundamental normalization properties in lambda-calculus through the Taylor expansion. The general proof strategy consists in stating the dependence of ordinary reduction strategies on their resource counterparts and in finding a convenient resource term in the support of the Taylor expansion that behaves well under the considered kind of reduction.

16:30
The next 700 (type-theoretical) denotational models ?

ABSTRACT. The use of intersection type systems in the study of denotational semantics of the untyped λ-calculus has been a fruitful line of research over the past thirty-five years. In this extended abstract we present a research project aiming to broaden the boundaries of such a type-theoretical approach to semantics, possibly to other idealized programming languages. The key idea is to exploit a more abstract view of what a type system is. This view finds its roots in higher-dimensional category theory.

16:00-18:00 Session 31F (HoTT/UF)
Location: Blavatnik LT1
16:00
Univalent Foundations and the Constructive View of Theories
16:30
Algebraic models of dependent type theory
17:00
First-order homotopical logic and Grothendieck fibrations
16:00-18:00 Session 31G: Applications and Business Meeting (IWC)
16:00
Complete Axiom System of Cluster Algebra
SPEAKER: Kousuke Fukui

ABSTRACT. Top trees with DAG representation can be used to compress huge ordered-tree data such as XML documents. However, one ordered tree can be represented by several top trees, so for a higher compression rate it is necessary to decide efficiently whether two top trees represent the same tree. In this paper, we give a complete axiom system for the equational theory of top trees, called the cluster algebra. In order to prove completeness, we introduce a reduction system on the cluster algebra and show strong normalization and the unique normal form property.

16:30
IWC Business Meeting
ABSTRACT.
  • report by program chairs
  • organization of IWC and CoCo
  • bylaws
  • next IWC
16:00-18:00 Session 31H: Implementation (LFMTP)
Location: Maths LT2
16:00
Invited talk: A fresh view of call-by-need

ABSTRACT. Call-by-need is a lazy evaluation strategy which overwrites an argument with its value the first time it is evaluated, thus avoiding a costly re-evaluation mechanism. It is used in functional programming languages like Haskell and Miranda. In this talk we present a fresh view of call-by-need in three different aspects:

  • We revisit the completeness theorem relating (weak) call-by-need with standard (weak) call-by-name.
  • We relate the syntactical notion of (weak) call-by-need with the corresponding semantical notion of (weak) neededness, based on the theory of residuals.
  • We extend the weak call-by-need strategy, which only computes weak head normal forms of closed terms, to a strong call-by-need strategy which computes strong normal forms of open terms, a notion of reduction used in proof assistants like Coq and Agda.

We conclude our talk by proposing some future research directions.
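The memoizing behaviour the abstract refers to can be observed directly through OCaml's built-in lazy values; a minimal sketch of ours, not the talk's calculus:

    let () =
      let x = lazy (print_endline "evaluated"; 3 * 7) in
      (* The thunk runs once and is overwritten with its value 21; the
         second force reuses the stored value, so "evaluated" is printed
         a single time.  Under call-by-name it would print twice. *)
      Printf.printf "%d\n" (Lazy.force x + Lazy.force x)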

17:00
Abstract Representation of Binders in OCaml using the Bindlib Library

ABSTRACT. The Bindlib library for OCaml provides a set of tools for the manipulation of data structures with variable binding. It is very well suited for the representation of abstract syntax trees, and has already been used for the implementation of half a dozen languages and proof assistants (including a new version of the logical framework Dedukti). Bindlib is optimised for fast substitution, and it supports variable renaming. Since the representation of binders is based on higher-order abstract syntax, variable capture cannot arise during substitution. As a consequence, variable names are not updated at substitution time. They can however be explicitly recomputed to avoid “visual capture” (i.e., distinct variables with the same apparent name) when a data structure is displayed.
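As a flavour of the programming style this enables, here is a minimal sketch assuming Bindlib's documented core types and its subst function; the names are recalled from the library's documentation and may differ slightly across versions.

    (* Lambda-terms with Bindlib-managed binders (assumed API). *)
    type term =
      | Var of term Bindlib.var
      | App of term * term
      | Abs of (term, term) Bindlib.binder

    (* One beta step: substitution through a binder is capture-avoiding
       by construction, so no renaming code is needed here. *)
    let beta_step : term -> term option = function
      | App (Abs b, u) -> Some (Bindlib.subst b u)
      | _ -> None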

17:30
Functional programming with λ-tree syntax: a progress report

ABSTRACT. In this progress report, we highlight the design of the functional programming language MLTS which we have recently proposed elsewhere. This language uses the λ-tree syntax approach to encoding data structures that contain bindings. In this setting, bound variables never become free nor escape their scope: instead, binders in data structures are permitted to move into binding sites within programs. The concrete syntax of MLTS is based on the one for OCaml but includes additional binders within programs that directly support the mobility of bindings. The natural semantics of MLTS can be viewed as a logical theory within the logic G, which forms the basis of the Abella proof system and which includes nominal abstractions and the ∇-quantifier. Here, we provide several examples of MLTS programs. We also illustrate how many Abella relational specifications that are known to specify functions can be rewritten as functions in MLTS.

16:00-18:00 Session 31I: DECsystem (LOLA)
16:00
Denotational Event structure for relaxed memory

ABSTRACT. We present a new methodology to model relaxed memory, interpreting a program on a given architecture by an event structure, expressing the causality between instructions, according to the optimisations performed by the hardware. The methodology uses standard tools of event structure theory, in particular product of event structures and the correspondence between event-based and execution-based representations.

In the talk, we will demonstrate the methodology on a simple weak memory specification: TSO (corresponding to Intel x86 processors). We will then present two applications of this model: one theoretical, showing a strong DRF guarantee, and another, more practical, showing how the model can be tuned to generate smaller program representations than the state of the art.

16:30
A resource modality for RAII

ABSTRACT. We model exceptions in a linear, effectful setting by relaxing the notion of monadic strength to contexts that are discardable, in the spirit of C++ destructors. We find a resource modality in linear call-by-push-value that recalls unique_ptr/Box and move semantics in C++11 and Rust. This resource modality gives rise to an ordered logic in which symmetry (move) and weakening (drop) are available as effectful operations. We explore consequences in language design for resource management in functional programming languages. (2-page abstract)

17:00
Experimenting with graded monads: certified grading-based program transformations
SPEAKER: Tarmo Uustalu

ABSTRACT. In this work, we formalize in Agda graded monads and type-and-grade inference for a computational lambda calculus with a graded monad, instantiate this for three kinds of effect, and prove correct a number of program equivalences that depend on grading, in order to try out the use of graded monads for reasoning about programs in practice.

(We submitted the same abstract to TYPES 2018, which is a similar informal workshop with no publication.)
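For orientation, the data of a monad graded by a monoid $(E, \cdot, 1)$, as standard in the literature (this is background, not the authors' Agda development):

\[
  T : E \to [\mathbf{C}, \mathbf{C}], \qquad
  \eta_A : A \to T_1 A, \qquad
  \mu_{e,e'} : T_e\, T_{e'} A \to T_{e \cdot e'} A,
\]

subject to unit and associativity laws mirroring those of the monoid.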

17:30
Heaps denote finitely partitioned trees

ABSTRACT. We give a denotational semantics of a first-order language with references, in which a computation denotes a dependent partial function. Heaps are represented as finitely partitioned trees.

16:00-18:10 Session 31J (Linearity/TLLA)
16:00
Applying linear logic semantics to probabilistic programming
17:00
Proof nets, coends and the Yoneda isomorphism

ABSTRACT. Proof nets provide permutation-independent representations of proofs and are used to investigate coherence problems for monoidal categories. We investigate a coherence problem concerning Second Order Multiplicative Linear Logic MLL2, namely that of characterizing the equivalence over proofs generated by the interpretation of quantifiers by means of ends and coends. By adapting the "rewiring approach" used in the proof net characterization of the free *-autonomous category, we provide a compact representation of proof nets for a fragment of MLL2 related to the Yoneda isomorphism. We prove that the equivalence generated by coends over proofs in this fragment is fully characterized by the rewiring equivalence over proof nets.

17:35
Quantum programming made easy
SPEAKER: Luca Paolini

ABSTRACT. We introduce the functional language IQu which, under the paradigm "quantum data & classical control" and in accordance with the QRAM model, allows one to define and manipulate quantum circuits and quantum states on which partial measurement can be executed. IQu borrows many ideas from the design of Idealized Algol (roughly, PCF extended with local stores and assignment) and its side-effect management. These ideas play a crucial role in the language design: each quantum co-processor is formalized by means of a quantum register (storing a quantum state) that can be modified by quantum directives (lists of unitary gates). The linearity of quantum states is assured by a one-to-one correspondence between quantum states and quantum registers. We adapt the type system of Idealized Algol for typing both quantum registers and quantum directives. The types for quantum registers are parametric in the number of qubits, and their linearity is granted for free. IQu operates on quantum circuits as if they were classical data, so no restriction exists on their duplication.

16:00-18:00 Session 31K: Contributed talks (NLCS)
16:00
Automated Reasoning from Polarized Parse Trees
SPEAKER: Larry Moss

ABSTRACT. This paper contributes to symbolic inference from text, including naturally occurring text. The idea is to take sentences in a framework like CCG, and then run a polarizing algorithm like the one in Hu and Moss 2018 to determine inferential polarity markings of all the constituents. From this, it is just a small step to obtain an inference engine which is both simple to describe and implement and at the same time surprisingly powerful. We have implemented the basic inference step. This paper is work in progress; it also goes into detail on our projected next steps. The overall goal is a working symbolic inference system which covers "in-practice" inference and is also correct and efficient.

16:30
Do different syntactic trees yield different logical readings? Some remarks on head variables in typed lambda calculus.
SPEAKER: Davide Catta

ABSTRACT. A natural question in categorial grammar is the relation between a syntactic analysis s and the logical form, i.e. the logical formula obtained from this syntactic analysis once provided with semantic lambda terms. More precisely, do different syntactic analyses, fed with equal semantic terms, lead to equal logical forms? We shall show that when this question is formulated too simply, the answer is "no", while with some constraints on semantic lambda terms the answer is "yes".

17:00
Automatic test suite generation for PMCFG grammars

ABSTRACT. We present a method for finding errors in formalized natural language grammars, by automatically and systematically generating test cases that are intended to be judged by a human oracle. The method works on a per-construction basis; given a construction from the grammar, it generates a finite but complete set of test sentences (typically tens or hundreds), where that construction is used in all possible ways. Our method is an alternative to using a corpus or a treebank, where no such completeness guarantees can be made. The method is language-independent and is implemented for the grammar formalism PMCFG, but also works for weaker grammar formalisms. We evaluate the method on a number of different grammars for different natural languages, with sizes ranging from toy examples to real-world grammars.

17:30
OpenWordNet-PT: Taking Stock

ABSTRACT. This note discusses work on lexical resources for Portuguese centered around OpenWordNet-PT, an open source wordnet-like resource for Portuguese. We discuss the initial developments, the sister project Nomlex-PT and mostly the applications that were developed in the quest for being able to do reasoning with Portuguese texts.

16:00-18:00 Session 31L: Invited talk + contributed talk (PARIS)
Location: Maths L4
16:00
The size-change principle and circular proofs: checking totality of (co)recursive definitions
17:15
What makes guarded types tick?
SPEAKER: Patrick Bahr

ABSTRACT. We give an overview of the syntax and semantics of Clocked Type Theory (CloTT), a dependent type theory for guarded recursion with many clocks, in which one can encode coinductive types and capture the notion of productivity in types. The main novelty of CloTT is the notion of ticks, which allows one to open the delay type modality, and, e.g., define a dependent form of applicative functor action, which can be used for reasoning about coinductive data. In the talk we will give examples of programming and reasoning about guarded recursive and coinductive data in CloTT, and we will present the main syntactic results: strong normalisation, canonicity and decidability of type checking. If time permits, we will also sketch the main ideas of the denotational semantics of CloTT.

16:00-18:10 Session 31M (PC)
Location: Maths L6
16:00
On OBDD Proofs
17:00
Resolution and the binary encoding of combinatorial principles

ABSTRACT. We give lower bounds in Resolution-with-bounded-conjunction, Res$(s)$, for families of contradictions where witnesses are given in an unusual binary encoding. The two families we focus on are the $k$-Clique Formulas and those associated with the (weak) Pigeonhole Principle. If one could give lower bounds in Res$(\log)$ for such families under the binary encoding, then these would translate to lower bounds for the more typical unary encoding in Resolution, Res$(1)$. Such a lower bound is not possible for certain very weak Pigeonhole Principles, but might be dreamt of for the $k$-Clique Formulas.

17:20
Resolution with Counting: Different Moduli and Tree-Like Lower Bounds
SPEAKER: Fedor Part

ABSTRACT. We study the complexity of resolution extended with the ability to count over different characteristics and rings. These systems capture integer and modular counting, and in particular admit short tree-like refutations for unsolvable sets of linear equations. For this purpose, we consider the system ResLin(R), in which proof-lines are disjunctions of linear equations over a ring R. (We focus on the case where the variables are Boolean, i.e., we add the Boolean axioms (x=0)\/(x=1) for all variables x.) Extending the work of Itsykson and Sokolov, we obtain new lower bounds and separations, as follows:

Finite fields:

Exponential-size lower bounds for tree-like ResLin(Fp) refutations of Tseitin mod q formulas, for every pair of distinct primes p,q. As a corollary we obtain an exponential-size separation between tree-like ResLin(Fp) and tree-like ResLin(Fq).

Exponential-size lower bounds for tree-like ResLin(Fp) refutations of random k-CNF formulas, for every prime p and constant k.

Exponential-size lower bounds for tree-like ResLin(F) refutations of the pigeonhole principle, for every field F.

All the above hard instances are encoded as CNF formulas. The lower bounds are proved using extensions and modifications of the Prover-Delayer game technique and size-width relations.

Characteristic zero fields:

Separation of tree-like ResLin(F) and (dag-like) ResLin(F), for characteristic zero fields F. The separating instances are the pigeonhole principle and the Subset Sum principle. The latter is the formula $a_1 x_1 + \dots + a_n x_n = b$, for some $b$ not in the image of the linear form. The lower bound for the Subset Sum principle employs the notion of immunity from Alekhnovich and Razborov.

17:40
Proof systems for #SAT based on restricted circuits

ABSTRACT. In this work, we extend a well-known connection between linear resolution refutations and read-once branching programs by constructing proof systems based on syntactically restricted circuits studied in the field of Knowledge Compilation. While our approach only yields proof systems that are weaker than resolution, they may be used to define proof systems for problems such as #SAT or MaxSAT. This is work in progress.

16:00-18:00 Session 31N: SAT pragmatics (POS)
Location: Maths L5
16:00
A Problem Meta-Data Library for Research in SAT
SPEAKER: Markus Iser

ABSTRACT. Experimental data and benchmarks play a crucial role in developing new algorithms and implementations of SAT solvers. Besides comparing and evaluating solvers, they provide the basis for all kinds of experiments, for setting up hypotheses and for testing them. Currently -- even though some initiatives for setting up benchmark databases have been undertaken, and the SAT Competitions provide a ``standardized'' collection of instances -- it is hard to assemble benchmark sets with prescribed properties. Moreover, the origin of SAT instances is often not clear, and benchmark collections might contain duplicates. In this paper we propose a framework for storing meta-data about SAT instances and for collecting, assessing and distributing benchmark metadata.

16:30
The Effect of Scrambling CNFs
SPEAKER: Armin Biere

ABSTRACT. It has been an ongoing debate for a long time how SAT solvers, and in general different or new algorithms, should be evaluated and compared, both in competitions and, more importantly, in papers. Evaluations are usually performed on existing benchmarks. Cross-validation and other means to avoid over-fitting are rarely used. In this paper we revisit the old idea of scrambling benchmarks, also used in early competitions. Scrambling has the goal of making the results of such evaluations more robust. We present a new method for scrambling CNFs which allows one to gradually increase the effect of scrambling, from keeping the scrambled CNF close to the original CNF to a completely random permutation of variables, clauses, and phases of literals. We used this method to scramble benchmarks from the last two SAT competitions and solved them with the best solvers in the main track of the last SAT competition. As expected, our experimental results suggest that scrambling has a substantial effect on the performance of individual solvers, but surprisingly it has little effect on rankings among solvers. As a consequence we argue that our method of scrambling alone is not enough to increase the robustness of competitions and evaluations in general.

17:00
Seeking Practical CDCL Insights from Theoretical SAT Benchmarks
SPEAKER: Stephan Gocht

ABSTRACT. Over the last decades Boolean satisfiability (SAT) solvers based on conflict-driven clause learning (CDCL) have developed to the point where they can handle formulas with millions of variables. Yet a deeper understanding of how these solvers can be so successful has remained elusive.

In this work we try to shed light on CDCL performance by using theoretical benchmarks. While these are crafted instances, they have the attractive features of being a) scalable, b) extremal with respect to different proof search parameters, and c) theoretically *easy* in the sense of having short proofs in the resolution proof system underlying CDCL. This allows for a systematic study of solver heuristics and how successfully they implement efficient proof search.

We report results from extensive experiments with different CDCL parameter configurations on a wide range of benchmarks. Our findings include several examples where theory predicts and explains CDCL behaviour, but also raises a number of intriguing questions for further study.

This is a presentation-only submission of a paper that (we hope and believe) will be published at IJCAI '18.

17:30
On the use of solvers with docker technology

ABSTRACT. A convenient way to deploy services on a computer is to use a lightweight virtualization approach such as Docker.

This talk will present some observations made in the context of deploying SAT solvers in the cloud.

17:45
Questions and Answers

ABSTRACT. Questions and answers with the audience

16:00-18:00 Session 31P: Multiple-player games (SR)
16:00
Verification of Multi-agent Systems with Imperfect Information and Public Actions

ABSTRACT. We analyse the verification problem for synchronous, perfect recall multi-agent systems with imperfect information against a specification language that includes strategic and epistemic operators. While the verification problem is undecidable, we show that if the agents' actions are public, then verification is 2EXPTIME-complete. To illustrate the formal framework, we consider two epistemic and strategic puzzles with imperfect information and public actions: the muddy children puzzle and the classic game of battleships. This paper has been accepted for publication at AAMAS2017.

16:40
Local Equilibria in Logic-Based Multi-Player Games

ABSTRACT. Game theory provides a well-established framework for the analysis of multi-agent systems. Typically, the analysis of a multi-agent system involves computing the set of equilibria in the associated multi-player game representing the behaviour of the system. As systems grow larger, it becomes increasingly harder to find equilibria in the game -- which represent the rationally stable behaviours of the multi-agent system (the solutions of the game). To address this issue, in this paper we study the concept of local equilibria, which are defined with respect to (maximal) stable coalitions of agents for which an equilibrium can be found. We focus on the solutions given by the Nash equilibria of Boolean games and iterated Boolean games, two logic-based models for multi-agent systems in which the players' goals are given by formulae of propositional logic and LTL, respectively.

17:20
Beyond admissibility: Dominance between chains of strategies
SPEAKER: Arno Pauly

ABSTRACT. Admissible strategies, i.e. those that are not dominated by any other strategy, are a typical rationality notion in game theory. In many classes of games this is justified by results showing that any strategy is admissible or dominated by an admissible strategy. However, in games played on finite graphs with quantitative objectives (as used for reactive synthesis), this is not the case.

We consider increasing chains of strategies instead to recover a satisfactory rationality notion based on dominance in such games. We start with some order-theoretic considerations establishing sufficient criteria for this to work. We then turn our attention to generalised safety/reachability games as a particular application. We propose the notion of maximal uniform chain as the desired dominance-based rationality concept in these games. Decidability of some fundamental questions about uniform chains is established.

16:00-18:00 Session 31Q: Applications (TERMGRAPH)
16:00
SLD-Resolution Reduction of Second-Order Horn Fragments - Extended Abstract

ABSTRACT. We introduce the derivation reduction problem, the undecidable problem of finding a finite subset of a set of clauses such that the whole set can be derived using SLD-resolution. We also study the derivation reducibility of various fragments of second-order Horn logic using a graph encoding technique, with particular applications in the field of Inductive Logic Programming.

16:30
A Port-graph Model for Finance

ABSTRACT. We examine the process involved in the design and implementation of a port-graph model to be used for the analysis of an agent-based "rational negligence" model. Rational negligence is the term used to describe the phenomenon, observed during the financial crisis of 2008, whereby investors chose to trade asset-backed securities without performing independent evaluations of the underlying assets; it heightened the need for more effective and transparent tools in the modelling of capital markets.

Graph Transformation Systems (GTS) are natural verification and validation tools: they provide visual, intuitive representations of complex systems while specifying the dynamic behaviour of the system in a formal way. In this paper we propose to use a declarative language, based on strategic port-graph rewriting, a particular kind of graph transformation system where port-graph rewriting rules are controlled by user-defined strategies, as a visual modelling tool to analyse a credit derivative market.

17:00
Finding the Transitive Closure of Functional Dependencies using Strategic Port Graph Rewriting

ABSTRACT. We present a new approach to the logical design of relational databases, based on strategic port graph rewriting. We show how to model relational schemata as attributed port graphs and provide port graph rewriting rules to perform computations on functional dependencies. Using these rules we present a strategic graph program to find the transitive closure of a set of functional dependencies. This program is sound, complete and terminating, given the restriction that there are no cyclical dependencies in the schema.
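The core computation here is the classical attribute-closure saturation from database theory; the OCaml sketch below is our own illustration of that fixpoint, not the paper's port-graph program. A functional dependency lhs → rhs is represented as a pair of attribute lists.

    module S = Set.Make (String)

    (* Saturate a set of attributes under the given functional
       dependencies until a fixpoint is reached. *)
    let closure attrs fds =
      let rec fix xs =
        let xs' =
          List.fold_left
            (fun acc (lhs, rhs) ->
              if S.subset (S.of_list lhs) acc then S.union acc (S.of_list rhs)
              else acc)
            xs fds
        in
        if S.equal xs xs' then xs else fix xs'
      in
      fix (S.of_list attrs)

    (* With A -> B and B -> C, the closure of {A} is {A; B; C},
       witnessing the derived dependency A -> C. *)
    let _ = closure [ "A" ] [ ([ "A" ], [ "B" ]); ([ "B" ], [ "C" ]) ]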

17:30
Programming by Term Rewriting
SPEAKER: Lee Barnett

ABSTRACT. Term rewriting systems have a simple syntax and semantics and allow for straightforward proofs of correctness. However, they lack the efficiency of imperative programming languages. We define a term rewriting programming style that is designed to be translatable into efficient imperative languages, to obtain proofs of correctness together with efficient execution. This language is designed to facilitate translations into correct programs in imperative languages with assignment statements, iteration, recursion, arrays, pointers, and side effects. Though it is not included in the extended abstract, versions of the language with and without destructive operations such as array assignment are developed, and the relationships between them are sketched. General properties of a translation that suffice to prove correctness of translated programs from correctness of rewrite programs are presented.
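As a toy illustration of the intended style (ours, not the paper's language), a program written as rewrite rules is shown next to the imperative loop a translation could target:

    (* Rewrite rules:  gcd(x, 0) -> x
                       gcd(x, y) -> gcd(y, x mod y)   when y <> 0  *)
    let rec gcd_rewrite x y =            (* direct, rule-by-rule reading  *)
      if y = 0 then x else gcd_rewrite y (x mod y)

    let gcd_imperative x y =             (* the loop the rules compile to *)
      let a = ref x and b = ref y in
      while !b <> 0 do
        let r = !a mod !b in
        a := !b;
        b := r
      done;
      !a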

16:00-18:00 Session 31R: Contraction (TYDI)
Location: Maths Boardroom
16:00
Some incoherent musings on deep inference and combinatorial proofs

ABSTRACT. In this talk I shall make every effort to be provocative and controversial, in an attempt to fuel introspection at the 20-year mark of deep inference. The talk has three parts:

(a) Is deep inference “incoherent categorical logic”? Here “incoherent” refers to the omission of proof equality, covered by the coherence laws of categorical logic [1]. Is deep inference in danger of isolating itself as a community by not preserving the original language of categorical logic? Hilbert’s 24th problem is on proof equality [2,3]; does deep inference short-change itself by not retaining proof equality as a first-class citizen, as it was in categorical logic?

(b) Linear distributivity (originally known as weak distributivity) [4] is core to the categorical approach to linear logic, hence also deep inference, where it has become known as switch. Classical logic follows with contraction and weakening. Combinatorial proofs [5] are an irredundant abstraction [3] of classical syntactic proofs, yet there is no natural interpretation of switch on combinatorial proofs, while there *is* a natural interpretation of the sequent calculus conjunction rule. Could there be a deeper computational meaning to this, which reflects back on the standard switch-based formulations of deep inference for linear and classical logic?

(c) Extensions of combinatorial proofs. In part (b) I will have shown how easily one may translate a classical propositional syntactic proof into a combinatorial proof (literally a 30-second definition - I promise!). I will then talk about two extensions, where the translation from syntactic proofs is also very easy to explain to a lay audience: (a) intuitionistic combinatorial proofs (joint work with Heijltjes and Strassburger), and (b) classical first-order combinatorial proofs [6].

References and background reading:

[1] Dominic J. D. Hughes. Deep inference proof theory equals categorical proof theory minus coherence. Technical report, October 2004. http://boole.stanford.edu/~dominic/papers/di/di.pdf

[2] Rüdiger Thiele. Hilbert’s Twenty-fourth Problem. American Mathematical Monthly, January 2003. https://www.maa.org/sites/default/files/pdf/upload_library/22/Ford/Thiele1-24.pdf

[3] Dominic J. D. Hughes. Towards Hilbert’s 24th problem: combinatorial proof invariants. Proc. WoLLIC’06, ENTCS 165, 2006. http://boole.stanford.edu/~dominic/papers/invar/invar.pdf

[4] Robin Cockett and Robert Seely. Weakly Distributive Categories. Journal of Pure and Applied Algebra 114:2, 1997. http://www.math.mcgill.ca/rags/linear/wdc.ps.gz

[5] Dominic J. D. Hughes. Proofs without syntax. Annals of Mathematics 164, 2006. http://annals.math.princeton.edu/wp-content/uploads/annals-v164-n3-p09.pdf or http://arxiv.org/pdf/math/0408282.pdf

[6] Dominic J. D. Hughes. First-order Proofs Without Syntax. Slides for the Berkeley Logic Colloquium, 17 October 2014. http://boole.stanford.edu/~dominic/papers/fopws-blc-2014

17:00
Proof complexity of deep inference: a survey

ABSTRACT. In this talk I will survey the current state of research on the complexity of deep inference proofs in classical propositional logic, and introduce a recent programme of research to further develop this subject. Deep inference calculi, due to Guglielmi and others, differ from traditional systems by allowing proof rules to operate on *any* connective occurring in a formula, as opposed to just the main connective. Consequently such calculi are more fine-grained and admit smaller proofs of benchmark classes of tautologies. Besides purely theoretical motivations, this is advantageous from the point of view of proof communication, allowing for compact and easy-to-check proof certificates. I present a graph-based formalism called 'atomic flows', due to Guglielmi and Gundersen and building on previous work by Buss, and a rewriting system on them that models proof normalisation while preserving complexity properties. I show how such an abstraction has been influential in work up to now, yielding surprisingly strong upper bounds and simulations in analytic deep inference. The biggest open question in the area is the relative complexity between the minimal system KS and a version that allows a form of sharing, KS+. To this end I will consider systems of bounded arithmetic to serve as uniform counterparts to these nonuniform propositional systems. Due to the peculiarities of deep inference calculi, the search for corresponding arithmetic theories takes us into the largely unexplored territories of intuitionistic and substructural arithmetics, requiring novel interplays of tools at the interface of logic and complexity.

16:00-18:10 Session 31S: Nominal Unification and Formalisations (UNIF)
16:00
Proximity-Based Generalization
SPEAKER: Cleo Pau

ABSTRACT. Proximity relations are reflexive and symmetric fuzzy binary relations. They generalize similarity relations (similarity being itself a generalization of equivalence in the fuzzy setting) and were introduced to deal with certain limitations of the latter, related to the incorrect representation of fuzzy information in some cases.

Following [1], we consider signatures where some function symbols are allowed to be in a proximity relation with each other. In our opinion, such a representation is more adequate than similarity for dealing with possible mismatches between the names of symbols. Our terms are first-order terms, and the proximity relation is extended to them. In [1], a unification algorithm was introduced for such terms. The problem we deal with in this paper is a dual one: we are looking for generalizations, which, roughly, means that for two terms t1 and t2 we want to find a term r such that there exist substitution instances of r which are 'close enough to' (i.e., are in the given proximity relation with) t1 and t2.

Interestingly, the problem of computing a minimal complete set of generalizations with respect to a given proximity relation requires computing all possible maximal vertex-clique partitions in an undirected graph. We develop an algorithm for the all-maximal-clique-partitions problem which is optimal in the sense that, first, it computes each maximal clique partition only once and, second, it avoids generating and discarding false answers. Based on this method, we show that proximity-based anti-unification is of finitary type, and we develop a terminating, sound, and complete algorithm for computing a minimal complete set of generalizations.

[1] P. Julián-Iranzo and C. Rubio-Manzano. Proximity-based unification theory. Fuzzy Sets and Systems 262 (2015) 21–43.
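For contrast, the crisp (non-fuzzy) special case of this generalization problem is classical syntactic anti-unification, which always yields a single least general generalization. The OCaml sketch below, our own illustration rather than the authors' proximity-based algorithm, computes Plotkin-style lgg by mapping each pair of disagreeing subterms to a shared fresh variable:

    type term = Var of string | Fn of string * term list

    let lgg t1 t2 =
      let table = Hashtbl.create 16 in   (* disagreement pair -> variable *)
      let counter = ref 0 in
      let rec go s t =
        match s, t with
        | Fn (f, ss), Fn (g, ts)
          when f = g && List.length ss = List.length ts ->
            Fn (f, List.map2 go ss ts)
        | _ ->
            (match Hashtbl.find_opt table (s, t) with
             | Some x -> Var x
             | None ->
                 incr counter;
                 let x = Printf.sprintf "X%d" !counter in
                 Hashtbl.add table (s, t) x;
                 Var x)
      in
      go t1 t2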

16:30
Towards Generalization Methods for Purely Idempotent Equational Theories
SPEAKER: David Cerna

ABSTRACT. In “Generalisation de termes en theorie equationnelle. Cas associatif-commutatif”, a pair of terms was presented over the language {f(·,·), g(·,·), a, b}, where f and g are interpreted over an idempotent equational theory, i.e. g(x, x) = x and f(x, x) = x, resulting in an infinite set of generalizations. While this result provides an answer to the complexity of the idempotent generalization problem for arbitrary idempotent equational theories (theories with two or more idempotent functions), the complexity of generalization for equational theories with a single idempotent function symbol was left open. We show that the two-idempotent-function-symbols example can be encoded using a single idempotent function and two uninterpreted constants, thus proving that idempotent generalization, even with a single idempotent function symbol, can result in an infinite set of generalizations. Based on this result we discuss approaches to handling generalization within idempotent equational theories.

17:00
Rewriting with Generalized Nominal Unification

ABSTRACT. We consider rewriting, critical pairs and confluence tests on rewrite rules using nominal notation. Computing critical pairs is done using nominal unification, and rewriting using nominal matching. The advance over previous work is that we permit atom variables in the notation and in the unification algorithm, which generalizes earlier approaches based on ordinary nominal unification.

17:30
Formalization of First-Order Syntactic Unification

ABSTRACT. We present a new formalization in the Isabelle proof assistant of first-order syntactic unification, including a proof of termination. Our formalization follows, almost down to the letter, the ML-code from Baader and Nipkow's book "Term Rewriting and All That" (1998). Correctness is implied by the formalization's similarity to Baader and Nipkow's ML-code, but we have yet to formalize the correctness of the unification algorithm.
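For readers without the book at hand, the algorithm in question is the textbook solve/elim unification procedure; the following OCaml transcription is our sketch of that classical algorithm, not the authors' Isabelle formalization:

    type term = V of string | T of string * term list

    let rec occurs x = function
      | V y -> x = y
      | T (_, ts) -> List.exists (occurs x) ts

    (* Apply a substitution, represented as an association list. *)
    let rec subst s = function
      | V x -> (try List.assoc x s with Not_found -> V x)
      | T (f, ts) -> T (f, List.map (subst s) ts)

    let rec solve pairs s =
      match pairs with
      | [] -> Some s
      | (V x, t) :: rest | (t, V x) :: rest -> elim x t rest s
      | (T (f, ts), T (g, us)) :: rest ->
          if f = g && List.length ts = List.length us
          then solve (List.combine ts us @ rest) s
          else None                                   (* clash *)

    and elim x t rest s =
      if t = V x then solve rest s
      else if occurs x t then None                    (* occurs check *)
      else
        let s' = [ (x, t) ] in
        solve
          (List.map (fun (a, b) -> (subst s' a, subst s' b)) rest)
          ((x, t) :: List.map (fun (y, u) -> (y, subst s' u)) s)

    let unify t1 t2 = solve [ (t1, t2) ] []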

17:50
Efficiency of a good but not linear nominal unification algorithm
SPEAKER: Weixi Ma

ABSTRACT. We present a nominal unification algorithm that runs in $O(n \times \log(n) \times G(n))$ time, where $G$ is the functional inverse of Ackermann's function. Nominal unification generates, if one exists, a set of variable assignments that makes terms involving binding operations $\alpha$-equivalent. We preserve names while using special representations of de Bruijn numbers to enable efficient name management. We use Martelli-Montanari style multi-equation reduction to generate these name management problems from arbitrary unification terms.

 

18:10-18:40 Session 33: UNIF 2018 Wrap Up (UNIF)

Discussion of UNIF 2018 issues and the future of UNIF.

19:00-21:00 UNIF Dinner (UNIF)

This is a dinner in a restaurant, still to be decided, separate from the FLoC workshops dinner at Balliol College.

19:00-21:00 LFMTP Dinner (LFMTP)

This is a dinner in a restaurant, still to be decided, separate from the FLoC workshops dinner at Balliol College.

To register, please send a mail to lfmtp18@easychair.org before July 1st.

19:45-22:00 Workshops dinner at Balliol College (FLoC)

Workshops dinner at Balliol College. Drinks reception from 7.45pm, to be seated by 8:15 (pre-booking via FLoC registration system required; guests welcome).

Location: Balliol College