FLOC 2018: FEDERATED LOGIC CONFERENCE 2018
FSCD PRESENTED PAPERS: PAPERS WITH ABSTRACTS

Editor: Helene Kirchner

Authors, Title and Abstract | Paper | Talk

ABSTRACT. In this paper we define a framework to address different kinds of problems related to type inhabitation, such as type checking, the emptiness problem, generation of inhabitants and counting, in a uniform way. Our framework uses an alternative representation for types, called the pre-grammar of the type, on which different methods for these problems are based. Furthermore, we define a scheme for a decision algorithm that, for particular instantiations of the parameters, can be used to show different inhabitation related problems to be in PSPACE.
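For context on the emptiness problem mentioned above, the minimal Haskell sketch below decides inhabitation of a simple type by proof search for a long normal form, pruning a branch whenever a (context, goal) pair repeats. This is the textbook decision procedure, not the pre-grammar method of the paper, and the type and function names (Ty, inhabited) are illustrative.

import qualified Data.Set as Set

-- Simple types: base types and arrows.
data Ty = Base String | Ty :-> Ty deriving (Eq, Ord, Show)
infixr 5 :->

-- Emptiness problem: is the closed type t inhabited?  We search for a
-- long normal form and prune when a (context, goal) pair repeats on the
-- current branch, which guarantees termination.
inhabited :: Ty -> Bool
inhabited = go Set.empty Set.empty
  where
    go seen ctx goal
      | Set.member (ctx, goal) seen = False
      | otherwise =
          let seen' = Set.insert (ctx, goal) seen
          in case goal of
               a :-> b -> go seen' (Set.insert a ctx) b
               Base _  -> any (apply seen' ctx goal) (Set.toList ctx)
    -- A hypothesis a1 -> ... -> an -> r proves an atomic goal r when
    -- every argument type ai is itself inhabited in the same context.
    apply seen ctx goal hyp =
      let (args, r) = flatten hyp
      in r == goal && all (go seen ctx) args
    flatten (a :-> b) = let (as, r) = flatten b in (a : as, r)
    flatten a         = ([], a)

main :: IO ()
main = do
  let a = Base "a"; b = Base "b"
  print (inhabited (a :-> b :-> a))            -- True  (the type of K)
  print (inhabited (((a :-> b) :-> a) :-> a))  -- False (Peirce's law)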

Jul 10 11:00

ABSTRACT. Prefix-constrained rewriting is a strict extension of context-sensitive rewriting. We study the confluence of prefix-constrained rewrite systems, which are composed of rules of the form L:l -> r where L is a regular string language that defines the allowed rewritable positions. The usual notion of critical pair due to Knuth and Bendix needs to be extended using regular string languages, and the convergence of all critical pairs is not enough to ensure local confluence. Thanks to an additional restriction we get local confluence, and then confluence for terminating systems, which makes the word problem decidable. Moreover, we present an extended Knuth-Bendix completion procedure that transforms a non-confluent prefix-constrained rewrite system into a confluent one.

Jul 12 17:00

ABSTRACT. We propose a new axiomatisation of the alpha-equivalence relation for nominal terms, based on a primitive notion of fixed-point constraint. We show that the standard freshness relation between atoms and terms can be derived from the more primitive notion of permutation fixed-point, and use this result to prove the correctness of the new alpha-equivalence axiomatisation. This gives rise to a new notion of nominal unification, where solutions for unification problems are pairs of a fixed-point context and a substitution. This notion of nominal unifier behaves better than the standard notion based on freshness contexts: nominal unification remains finitary in the presence of equational theories such as commutativity, whereas it becomes infinitary when unifiers are expressed using freshness contexts.

Jul 12 11:00

ABSTRACT. The infinitary lambda calculi pioneered by Kennaway et al. extend the basic lambda calculus by metric completion to infinite terms and reductions. Depending on the chosen metric, the resulting infinitary calculi exhibit different notions of strictness. To obtain infinitary normalisation and infinitary confluence properties for these calculi, Kennaway et al. extend β-reduction with infinitely many ‘⊥-rules’, which contract meaningless terms directly to ⊥. Three of the resulting Böhm reduction calculi have unique infinitary normal forms corresponding to Böhm-like trees.

In this paper we develop a corresponding theory of infinitary lambda calculi based on ideal completion instead of metric completion. We show that each of our calculi conservatively extends the corresponding metric-based calculus. Three of our calculi are infinitary normalising and confluent; their unique infinitary normal forms are exactly the Böhm-like trees of the corresponding metric-based calculi. Our calculi dispense with the infinitely many ⊥-rules of the metric-based calculi. The fully non-strict calculus (111) consists of only β-reduction, while the other two calculi (001 and 101) require two additional rules that precisely state their strictness properties: λx.⊥ → ⊥ and ⊥ M → ⊥.

Jul 11 10:00

ABSTRACT. We study anti-unification for possibly cyclic, unranked term-graphs and develop an algorithm that computes a minimal complete set of least general generalizations for them. For bisimilar graphs the algorithm computes the join in the lattice generated by a functional bisimulation. In addition, we consider the case where the graph edges are not ordered (modeled by commutativity).

These results generalize anti-unification for ranked and unranked terms to the corresponding term-graphs, and also solve anti-unification problems for rational terms and dags. Our results open the way to extend anti-unification-based code clone detection techniques from a tree representation to a graph representation of the code.
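For readers unfamiliar with anti-unification, the following minimal Haskell sketch computes a Plotkin-style least general generalization of two first-order terms; it covers only the classical tree case, not the term-graph, unranked, or commutative algorithms of the paper, and all names (Term, lgg) are illustrative.

import Control.Monad (zipWithM)
import Control.Monad.State
import qualified Data.Map as Map

data Term = Var String | App String [Term] deriving (Eq, Ord, Show)

-- Least general generalization of two first-order terms: equal function
-- symbols are kept, and each distinct disagreement pair is replaced by a
-- shared fresh variable.
lgg :: Term -> Term -> Term
lgg s t = evalState (go s t) Map.empty
  where
    go (App f ss) (App g ts)
      | f == g && length ss == length ts = App f <$> zipWithM go ss ts
    go u w
      | u == w    = pure u
      | otherwise = do
          table <- get
          case Map.lookup (u, w) table of
            Just v  -> pure (Var v)   -- reuse the variable for a repeated disagreement pair
            Nothing -> do
              let v = "x" ++ show (Map.size table)
              put (Map.insert (u, w) v table)
              pure (Var v)

-- lgg of f(g(a), a) and f(g(b), b) is f(g(x0), x0).
main :: IO ()
main = print (lgg (App "f" [App "g" [App "a" []], App "a" []])
                  (App "f" [App "g" [App "b" []], App "b" []]))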

Jul 12 10:00

ABSTRACT. Bi-Intuitionistic Linear Logic (BILL) is an extension of Intuitionistic Linear Logic with a par, dual to the tensor, and subtraction, dual to linear implication. It is the logic of categories with a monoidal closed and a monoidal co-closed structure that are related by linear distributivity, a strength of the tensor over the par. It conservatively extends Full Intuitionistic Linear Logic (FILL), which includes only the par.

We give proof nets for the multiplicative, unit-free fragment MBILL-. Correctness is by local rewriting in the style of Danos contractibility. This rewrite relation yields sequentialization into a relational sequent calculus that extends the existing one for FILL. We give a second, geometric correctness condition via Danos-Regnier switching, and demonstrate composition both inductively and as a one-off global operation.

Jul 09 11:00

ABSTRACT. Environments and closures are two of the main ingredients of evaluation in lambda-calculus. A closure is a pair consisting of a lambda-term and an environment, whereas an environment is a list of lambda-terms assigned to free variables. In this paper we investigate some dynamic aspects of evaluation in lambda-calculus considering the quantitative, combinatorial properties of environments and closures. Focusing on two classes of environments and closures, namely the so-called plain and closed ones, we consider the problem of their asymptotic counting and effective random generation. We provide an asymptotic approximation of the number of both plain environments and closures of size n. Using the associated generating functions, we construct effective samplers for both classes of combinatorial structures. Finally, we discuss the related problem of asymptotic counting and random generation of closed environments and closures.

Jul 10 15:40

ABSTRACT. We consider anti-unification for simply typed lambda terms in associative, commutative, and associative-commutative theories and develop a sound and complete algorithm which takes two lambda terms and computes their generalizations in the form of higher-order patterns. The problem is finitary: the minimal complete set of generalizations contains finitely many elements. We define the notion of optimal solution and investigate special fragments of the problem for which the optimal solution can be computed in linear or polynomial time.

Jul 12 11:30

ABSTRACT. We show that LOGSPACE is characterised by finite orthogonal tail-recursive cons-free constructor term rewriting systems. This result is non-trivial, because in contrast to previous work on characterising LOGSPACE by tail-recursive cons-free programs we do not impose any fixed evaluation strategy. We provide a LOGSPACE algorithm which computes constructor normal forms. We then use this algorithm in the proof of our main result: that simple stream term rewriting systems characterise LOGSPACE-computable stream functions as defined by Ramyaa and Leivant. This result concerns characterising logarithmic-space computation on infinite streams by means of infinitary rewriting.
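To give a flavour of the programming discipline involved (rendered in Haskell rather than as a constructor term rewriting system, and with illustrative names), here is a tail-recursive, cons-free style function: the input list is only destructed, the accumulator ranges over constants, and the recursive call is in tail position.

-- Parity of a bit string, written in tail-recursive, cons-free style:
-- the list argument is only pattern-matched, never rebuilt, and the
-- accumulator takes only the constant values True and False.
parity :: [Bool] -> Bool -> Bool
parity []       acc = acc
parity (b : bs) acc = parity bs (b /= acc)

main :: IO ()
main = print (parity [True, True, False] False)  -- False: an even number of True bits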

Jul 10 16:10

ABSTRACT. Like termination, confluence is a central property of rewrite systems. Unlike termination, however, confluence has no known complexity hierarchy. In this paper we investigate whether the decreasing diagrams technique can be used to obtain such a hierarchy. The decreasing diagrams technique is one of the strongest and most versatile methods for proving confluence of abstract reduction systems: it is complete for countable systems, and it has many well-known confluence criteria as corollaries.

So what makes decreasing diagrams so powerful? In contrast to other confluence techniques, the decreasing diagrams technique labels the steps $\to$ with elements of a well-founded order in order to conclude confluence of the underlying unlabelled relation. Hence it is natural to ask how the size of the label set influences the strength of the technique. In particular, what class of abstract reduction systems can be proven confluent using decreasing diagrams restricted to $1$~label, $2$ labels, $3$ labels, and so on?

Surprisingly, we find that two labels suffice for proving confluence for every abstract rewrite system having the cofinality property, thus in particular for every confluent, countable system. We also show that this result stands in sharp contrast to the situation for commutation of rewrite relations, where the hierarchy does not collapse.

Finally, as a background theme, we discuss the logical issue of first-order definability of the notion of confluence.

Jul 12 16:30

ABSTRACT. Over recent years, the theory of rewriting has been extended in order to provide systematic techniques to show coherence results for strict higher categories. Here, we investigate a further generalization to low-dimensional weak categories, and consider in detail the first non-trivial case: presentations of tricategories. By a general result, those are equivalent to the stricter Gray categories, for which we introduce a notion of rewriting system, as well as associated tools: Tietze transformations, critical pairs, termination orders, etc. We show that a finite rewriting system admits a finite number of critical pairs and, as a variant of Newman's lemma in our context, that a convergent rewriting system is coherent, meaning that two parallel 3-cells are necessarily equal. This is illustrated with rewriting systems corresponding to various well-known structures in the context of tricategories (monoids, adjunctions, Frobenius monoids). Finally, we discuss generalizations in arbitrary dimension.

Jul 12 17:30

ABSTRACT. We consider rewriting of a regular language with a left-linear term rewriting system. We show two completeness theorems on equational tree automata completion. The first one shows that, if the set of reachable terms is regular, then completion can compute it. This was known to be true for some classes of term rewriting systems that preserve regularity, but was still an open question in the general case. The proof is not constructive, because it depends on the regularity of the set of reachable terms, which is undecidable. The second theorem states that, if there exists a regular over-approximation of the set of reachable terms, then completion can compute it (or safely under-approximate it). To carry out those proofs we generalize and improve two results of completion: the Termination and the Upper-Bound theorems. These theoretical results provide an algorithmic way to safely explore regular approximations with completion. This has been implemented in Timbuk and used to verify safety properties, automatically and efficiently, on first-order and higher-order functional programs.

Jul 11 12:00

ABSTRACT. We introduce the fermionic ZW calculus, a string-diagrammatic language for fermionic quantum computing (FQC). After defining a fermionic circuit model, we present the basic components of the calculus, together with their interpretation, and show how the main physical gates of interest in FQC can be represented in the language. We then list our axioms, and derive some additional equations. We prove that the axioms provide a complete equational axiomatisation of the monoidal category whose objects are quantum systems of finitely many local fermionic modes, with operations that preserve or reverse the parity (number of particles mod 2) of states, and the tensor product, corresponding to the composition of two systems, as monoidal product. We achieve this through a procedure that rewrites any diagram in a normal form. We conclude by showing, as an example, how the statistics of a fermionic Mach-Zehnder interferometer can be calculated in the diagrammatic language.

Jul 09 15:00

ABSTRACT. B-terms are built from the B combinator alone, defined by B f g x ≡ f (g x), which is well known as a function composition operator. This paper investigates an interesting property of B-terms, namely whether repetitive right applications of a B-term circulate or not. We discuss conditions under which B-terms do or do not have this property, through a sound and complete equational axiomatization. Specifically, we give examples of B-terms which have the property and show that there are infinitely many B-terms which do not have it. We also introduce a canonical representation of B-terms that is useful to detect cycles, or equivalently, to prove the property, with an efficient algorithm.
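A naive, bounded experiment along these lines is easy to set up; the Haskell sketch below (with illustrative names, and not the canonical-representation algorithm of the paper) normalizes terms with the B rule and looks for a repetition among the successive right applications. Since B is linear, every B-step shrinks the term, so normal forms always exist.

import qualified Data.Set as Set

-- Applicative terms over the single combinator B, with rule B x y z -> x (y z).
data T = B | T :@ T deriving (Eq, Ord, Show)
infixl 9 :@

-- Normal form under the B rule.  B is linear in x, y and z, so every step
-- strictly shrinks the term and the recursion terminates.
nf :: T -> T
nf B        = B
nf (l :@ r) =
  case nf l of
    (B :@ x) :@ y -> nf (x :@ (y :@ nf r))
    l'            -> l' :@ nf r

-- Does the sequence e, e e, (e e) e, ... (each term normalized) revisit an
-- earlier term within n steps?  A crude bounded test of the cycling property.
cyclesWithin :: Int -> T -> Bool
cyclesWithin n e = go Set.empty (nf e) n
  where
    go _    _ 0 = False
    go seen t k
      | Set.member t seen = True
      | otherwise         = go (Set.insert t seen) (nf (t :@ e)) (k - 1)

main :: IO ()
main = print (cyclesWithin 20 (B :@ B))  -- bounded check for the B-term B B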

Jul 11 09:00

ABSTRACT. We present Tores, a language for logical reasoning which utilizes indexed types and flexible (co)recursion principles to allow the encoding of metatheoretic proofs. We particularly target the encoding of proofs using the technique of logical relations. The novel features we introduce are well-founded Mendler-style (co)recursion over indexed data types together with a form of recursion over objects in the index language to build new types. The latter, which we call index-stratified types, are analogous to the concept of large elimination in dependently typed languages. These features combined allow us to encode sophisticated case studies such as normalization for lambda calculi and normalization by evaluation. We prove the soundness of Tores as a programming and proof language via the key theorems of subject reduction and termination.

Jul 10 12:00

ABSTRACT. Higher inductive-inductive types (HIITs) generalise inductive types of dependent type theories in two directions. On the one hand they allow the simultaneous definition of multiple sorts that can be indexed over each other. On the other hand they support equality constructors, thus generalising higher inductive types of homotopy type theory. An example which makes use of both features is the well-typed syntax of type theory where conversion rules are given as equality constructors. In this paper we propose a general definition of HIITs using a domain-specific type theory. A context in this small type theory encodes a HIIT by listing the type formation rules and constructors. The type of the elimination principle and its beta-rules are computed from the context using a variant of the syntactic logical relation translation. We show that for indexed W-types and various examples of HIITs the computed elimination principles are the expected ones. Showing that the thus specified HIITs exist is left as future work. The type theory specifying HIITs was formalised in Agda together with the syntactic translations. A Haskell implementation converts the types of sorts and constructors into valid Agda code which postulates the elimination principles and computation rules.

Jul 10 15:00

ABSTRACT. A categorical model of the multiplicative and exponential fragments of intuitionistic linear logic (MELL), known as a linear category, is a symmetric monoidal closed category with a monoidal coalgebra modality (also known as a linear exponential comonad). Inspired by R. Blute and P. Scott's work on categories of modules of Hopf algebras as models of linear logic, we study Eilenberg-Moore categories of monads as models of MELL. We define a MELL lifting monad on a linear category as a Hopf monad (in the sense of Bruguieres, Lack, and Virelizier) with a mixed distributive law over the monoidal coalgebra modality. As our main result, we show that the linear category structure lifts to Eilenberg-Moore categories of MELL lifting monads. We explain how monoids in the co-Eilenberg-Moore category of the monoidal coalgebra modality can induce MELL lifting monads, and we provide sources for such monoids. Along the way, we also define mixed distributive laws of bimonads over coalgebra modalities and lift differential category structure to Eilenberg-Moore categories of exponential lifting monads.

Jul 09 12:00

ABSTRACT. We show that universes of fibrations in various models of homotopy type theory have an essentially global character: they cannot be described in the internal language of the presheaf topos from which the model is constructed. We get around this problem by extending the internal language with a modal operator for expressing properties of global elements. In this setting we show how to construct a universe that classifies the Cohen-Coquand-Huber-Mörtberg (CCHM) notion of fibration from their cubical sets model, starting from the assumption that the interval is tiny, a property that the interval in cubical sets does indeed have. This leads to a completely internal development of models of homotopy type theory within what we call crisp type theory.

Jul 10 10:00

ABSTRACT. Clocked Type Theory (CloTT) is a type theory for guarded recursion useful for programming with coinductive types, encoding productivity in types, and for reasoning about advanced programming language features using an abstract form of step-indexing. CloTT has previously been shown to enjoy a number of syntactic properties including strong normalisation, canonicity and decidability of type checking. In this paper we present a denotational semantics for CloTT useful, e.g., for studying future extensions of CloTT with constructions such as path types.

The main challenge for constructing this model is to model the notion of ticks used in CloTT for coinductive reasoning about coinductive types. We build on a category previously used to model guarded recursion, but in this category there is no object of ticks, so tick-assumptions in a context cannot be modelled using standard tools. Instead we show how ticks can be modelled using adjoint functors, and how to model the tick constant using a semantic substitution.

Jul 10 09:30

ABSTRACT. Gradually typed languages, including Typed Racket, TypeScript, Thorn and Reticulated Python, facilitate interoperability between statically and dynamically typed code by checking static types when available and applying dynamic type checks when not. However, almost all existing research studies gradually typed languages using operational semantics designed in an ad hoc manner. Furthermore, in the operational setting, questions of program equivalence and other relational properties are difficult to study and are for the most part ignored.

In this paper, we propose a type-theoretic and category-theoretic semantics for gradual typing, in the form of gradual type theory, a logic and type theory for (call-by-name) gradual typing. To define the central constructions of gradual typing (the dynamic type, type casts and type error) in a type-theoretic fashion, we extend the theory of types and terms to include gradual type and term precision, internalizing notions of "more dynamic" into the type theory and then using these to characterize the constructions of gradual typing uniquely. This includes a novel specification for casts in terms of type and term precision. Combined with the ordinary extensionality ($\eta$) principles that type theory provides, we show that most of the standard operational behavior of casts in a gradually typed language is in fact uniquely determined by our design constraints. This provides a semantic justification for the definitions of casts and also shows that non-standard definitions of casts must violate these principles. We explore a call-by-name type theory in this paper because it is a simple setting with the necessary extensionality principles, leaving call-by-value to future work.

On the category-theoretic side, we show that our type theory is an internal language of a certain class of double categories called equipments (in fact, we will use only preorder categories, which are double categories where one direction is posetal), which provides the right algebraic structure with which to interpret precision and casts in gradual typing. We apply this categorical semantics to give a general construction of models of gradual typing, which provides a semantic analogue of Findler and Felleisen's definitions of contracts, and generalizes Dana Scott's domain-theoretic models of dynamic typing.
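To make the cast constructions concrete in a very small setting, the following toy Haskell sketch shows a dynamic type with embedding-projection pairs and a contract-style function cast in the spirit of Findler and Felleisen; it is only an illustration with made-up names (Dyn, wrapFun), not the call-by-name gradual type theory developed in the paper.

-- A toy dynamic type together with casts into and out of it.
data Dyn = DNum Integer | DFun (Dyn -> Dyn)

data CastError = CastError deriving Show

-- Upcasts (embeddings) always succeed.
toDynNum :: Integer -> Dyn
toDynNum = DNum

-- Downcasts (projections) may fail with a type error.
fromDynNum :: Dyn -> Either CastError Integer
fromDynNum (DNum n) = Right n
fromDynNum _        = Left CastError

-- A contract-style function cast: a typed function is used at the dynamic
-- type by projecting its argument and embedding its result.
wrapFun :: (Integer -> Integer) -> Dyn
wrapFun f = DFun (\d -> either (\_ -> error "type error") (DNum . f) (fromDynNum d))

main :: IO ()
main = case wrapFun (+ 1) of
         DFun f -> print (fromDynNum (f (DNum 41)))  -- Right 42
         _      -> pure ()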

Jul 10 09:00

ABSTRACT. This paper establishes a bridge between linear logic and mainstream graph theory, building on previous work by Retoré (2003). We show that the problem of correctness for MLL+Mix proof nets is equivalent to the problem of uniqueness of a perfect matching. By applying matching theory, we obtain new results for MLL+Mix proof nets: a linear-time correctness criterion, a quasi-linear sequentialization algorithm, and a characterization of the sub-polynomial complexity of the correctness problem. We also use graph algorithms to compute the dependency relation of Bagnol et al. (2015) and the kingdom ordering of Bellin (1997), and relate them to the notion of blossom which is central to combinatorial maximum matching algorithms.
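For intuition about the matching-theoretic condition, the Haskell sketch below counts the perfect matchings of a tiny undirected graph by brute force, so uniqueness amounts to the count being 1. The encoding of proof structures as graphs is not shown, the paper relies on efficient matching algorithms rather than enumeration, and the names here are illustrative.

import Data.List (delete)

-- Number of perfect matchings of an undirected graph, by brute force:
-- match the first vertex with each of its remaining neighbours and recurse.
perfectMatchings :: [Int] -> [(Int, Int)] -> Int
perfectMatchings []       _     = 1
perfectMatchings (v : vs) edges =
  sum [ perfectMatchings (delete u vs) edges
      | u <- vs
      , (v, u) `elem` edges || (u, v) `elem` edges ]

main :: IO ()
main = print (perfectMatchings [1, 2, 3, 4] [(1, 2), (2, 3), (3, 4), (4, 1)])  -- a 4-cycle has 2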

Jul 09 11:30

ABSTRACT. Given a constructor term rewriting system, a narrowing tree for a pair of terms is a finite representation for the space of all possible innermost-narrowing derivations that start with the pair and end with non-narrowable terms. Narrowing trees have grammar representations that can be considered regular tree grammars. Innermost narrowing is a counterpart of constructor-based rewriting, and thus, narrowing trees can be used in analyzing constructor-based rewriting to normal forms. In this paper, using grammar representations, we extend narrowing trees to syntactically deterministic conditional term rewriting systems (SDCTRS) that are constructor systems. We do not directly extend narrowing trees to conditional systems, but we convert a constructor SDCTRS to an equivalent unconditional constructor system that may have extra variables. Narrowing trees for the converted constructor system can work for the original SDCTRS. As an application of narrowing trees, we show that narrowing trees are useful to prove two properties of a normal CTRS: (1) infeasibility of conditional critical pairs and (2) quasi-reductivity.

Jul 11 11:00

ABSTRACT. We consider higher-order recursion schemes as generators of infinite trees. A sort (simple type) is called homogeneous when all arguments of higher order are taken before any arguments of lower order. We prove that every scheme can be converted into an equivalent one (i.e., generating the same tree) that is homogeneous, that is, uses only homogeneous sorts. Then, we prove the same for safe schemes: every safe scheme can be converted into an equivalent safe homogeneous scheme. Furthermore, we compare two definitions of safe schemes: the original definition of Damm, and the modern one. Finally, we prove a lemma which illustrates the usefulness of the homogeneity assumption. The results are known, but we prove them in a novel way: by directly manipulating the schemes under consideration.

Jul 11 09:30

ABSTRACT. Automated deduction in higher-order program calculi, where properties of transformation rules are demanded, or confluence or other equational properties are requested, can often be done by syntactically computing overlaps (critical pairs) of reduction rules and transformation rules. Since higher-order calculi have alpha-equivalence as their fundamental equivalence, the reasoning procedure must deal with it. We define ASD1-unification problems, which are higher-order equational unification problems employing variables for atoms, expressions and contexts, with additional distinct-variable constraints, and which have to be solved w.r.t. alpha-equivalence. Our proposal is to extend nominal unification to solve these unification problems. We succeeded in constructing the nominal unification algorithm NomUnifyASC. We show that NomUnifyASC is sound and complete for this class of problems, and outputs a set of unifiers with constraints in nondeterministic polynomial time if the final constraints are satisfiable. We also show that solvability of the output constraints can be decided in NEXPTIME, and, for a fixed number of context variables, in NP. For terms without context variables and atom variables, NomUnifyASC runs in polynomial time, is unitary, and extends the classical problem by permitting distinct-variable constraints.

Jul 12 12:00

ABSTRACT. In order to avoid the well-known paradoxes associated with self-referential definitions, higher-order dependent type theories stratify the theory using a countably infinite hierarchy of universes (also known as sorts), Type{0} : Type{1} : .... Such type systems are called cumulative if for any type A we have that A : Type{i} implies A : Type{i+1}. The Predicative Calculus of Inductive Constructions (pCIC), which forms the basis of the Coq proof assistant, is one such system.

In this paper we present the Predicative Calculus of Cumulative Inductive Constructions (pCuIC) which extends the cumulativity relation to inductive types. We discuss cumulative inductive types as present in Coq 8.7 and their application to definitional translations.

Jul 10 11:30

ABSTRACT. We propose an abstract completion procedure for logically constrained term rewrite systems (LCTRSs). This procedure can be instantiated to both standard Knuth-Bendix completion and ordered completion for LCTRSs, and we present a succinct and uniform correctness proof. A prototype implementation illustrates the viability of the new completion procedure.

Jul 11 11:30

ABSTRACT. Proof terms are a useful concept for reasoning about computations in term rewriting. Human calculation with proof terms is tedious and error-prone. We present ProTeM, a new tool that offers support for manipulating proof terms that represent multisteps in left-linear rewrite systems.

Jul 12 15:00