09:00 | Sheaf models of classical logic extended by independence relations ABSTRACT. Reasoning involving notions of dependence and independence occurs in a variety of situations, and it is of interest to study dependence and independence as basic logical concepts in their own right. Indeed, this desideratum has been one motivation for the development of the "independence friendly logic" of Hintikka, and for the "dependence logic" programme of Väänänen. In the talk I will show that certain sheaf models naturally support an extension of classical logic with independence relations, leading to logical principles for reasoning about independence. No prior knowledge of sheaf theory will be assumed. |
10:00 | SPEAKER: Anthony Cantor ABSTRACT. In pursuit of a canonistic logic composed of dualized proof rules, we introduce a sequent calculus system, 2Intx, that is inspired by Wansing's bi-intuitionistic propositional logic 2Int. Though 2Int has canonicity and duality, it defines only natural deduction proof rules and employs an unintuitive Kripke semantics that allows atomic formulas to be both true and false. In addition to defining the sequent calculus rules of 2Intx, we also define a Kripke semantics that only admits models in which atomic formulas are either true or false but not both. Finally, we prove soundness of 2Intx. |
09:00 | On Two Kinds of Higher-Order Model Checking and Higher-Order Program Verification |
09:00 | Confluence in Constraint Handling Rules: An overview from early, fundamental results to recent extensions including state invariants and confluence modulo equivalence. SPEAKER: Henning Christiansen ABSTRACT. Constraint Handling Rules, CHR, is a nondeterministic programming language whose programs consist of rewrite rules over program states, and being able to show confluence may be an important part of a program correctness proof. In fact, most CHR implementations are deterministic, yet confluence is useful as the programmer does not need to consider the details of the underlying execution strategy. The initial results on proving confluence in CHR, developed during the 1990s, were formulated with respect to a logic-based semantics. This choice gave elegant proofs based on the Critical Pair Theorem and Newman's lemma but can only describe a limited class of programs, namely logical (and terminating) programs. It can be argued that invariants and confluence modulo equivalence are important from a practical point of view. Most programs are developed with a particular set of initial queries in mind, which reduces the set of reachable states. Therefore, taking the induced invariant into account makes a much larger class of programs confluent. Confluence modulo equivalence is a generalization where forked states must reach equivalent states, rather than a common state. For instance, a program may produce redundant data structures such as representing sets as lists, and the equivalence states that the order of the elements does not matter. Both the invariant and the equivalence relation may be tailored for the individual program. In our recent work, we have extended previous methods for proving confluence to include invariants and confluence modulo equivalence. We have focused on a more realistic semantics that reflects the de-facto standard implementations of CHR upon Prolog, including a correct treatment of Prolog's non-logical devices (e.g., var/1, nonvar/1, is/2) and runtime errors. As part of this, we developed a notion of abstract simulations that may be interesting also for other kinds of transition and rewriting systems. The classical critical pair constructions are generalized, having the transition system of interest simulated by another meta-level system (rather than reusing the system itself). This is useful especially for invariants and state equivalences that - by nature - are meta-level statements that typically cannot be expressed in the original system. |
10:00 | SPEAKER: Benjamin Dupont ABSTRACT. The computation of minimal convergent presentations for monoids, categories or higher-dimensional categories appears in low-dimensional combinatorial problems on these structures, such as coherence problems. A method to compute coherent presentations using convergent string rewriting systems was developed following works of Squier. In this approach, coherence results are formulated in terms of confluence diagrams of critical pairs. This work proposes an extension of these methods to string rewriting systems modulo. |
09:00 | Asynchronous games fifteen years later ABSTRACT. The notion of asynchronous game was introduced by the author fifteen years ago in order to investigate the causal structure of innocent strategies in game semantics. Shifting from traditional sequential games to asynchronous games was the critical step behind the discovery of a number of important structural properties of games and logic. This includes a truly concurrent formulation of innocence based on local confluence diagrams, and a positionality theorem for innocent strategies. Asynchronous games also played a central role in the emergence of tensorial logic as the logic of dialogue games, in the same way as linear logic is the logic of proof nets. Since its origins, one guideline of the theory has been the observation that asynchronous games and asynchronous strategies are causal structures of the very same nature: in other words, asynchronous strategies are asynchronous games themselves, appropriately mapped into other asynchronous games. In this paper, we develop this simple idea one step further, and explain how to recover and to compare a number of game semantics with different flavors (alternating and non-alternating) from the same categorical combinatorics, performed in the category of small categories. |
09:30 | Algebraic Theories and Control Effects, Back and Forth ABSTRACT. The seminal work of Moggi [6] identified the abstract structure of computational effects as that given by the categorical concept of a monad. Plotkin and Power [7] considered monads induced by algebraic theories from this computational viewpoint and developed a theory of algebraic effects. The second-order algebraic theories of Fiore et al [2, 3] are a conservative extension of the algebraic theories of universal algebra that provide algebraic foundations for simple type theories with variable binding. The theme of this talk is to consider and study them from the perspective of computational effects. Fiore and Staton [5] gave a computational interpretation of the second-order algebraic theory of the substitution algebras of Fiore et al [4] as jump effects. I will introduce more general second-order algebraic theories of inception algebras and equip them with computational interpretations as control effects. These will be justified by a CPS semantics and then related to de Groote's computational interpretation of negation elimination and the excluded middle [1] and to Thielecke's CPS calculus [8]. Computational interpretations of second-order algebraic theories of the lambda calculus will also be considered. In particular, left lambda algebras will be shown to correspond to a coroutine mechanism. References [1] P. de Groote. A Simple Calculus of Exception Handling. In Proc. TLCA'95, LNCS 902, 1995. [2] M. Fiore and C.-K. Hur. Second-order equational logic. In Proc. CSL 2010, LNCS 6247, 2010. [3] M. Fiore and O. Mahmoud. Second-order algebraic theories. In Proc. MFCS 2010, LNCS 6281, 2010. [4] M. Fiore, G. Plotkin and D. Turi. Abstract syntax and variable binding. In Proc. LICS'99, 1999. [5] M. Fiore and S. Staton. Substitution, Jumps, and Algebraic Effects. In Proc. CSL-LICS'14, 2014. [6] E. Moggi. Notions of computation and monads. Information and Computation 93, 1991. [7] G. Plotkin and A.J. Power. Notions of computation determine monads. In Proc. FOSSACS'02, LNCS 2303, 2002. [8] H. Thielecke. Categorical Structure of Continuation Passing Style. PhD thesis, University of Edinburgh, 1997. |
09:00 | ABSTRACT. In this paper, we investigate how to introduce dependent types into substructural calculi such as the Lambek calculus and linear logic. The motivations for such a move include facilitating a closer correspondence between syntax and semantics in natural language analysis and developing promising applications such as that to concurrency through dependent session types. We shall present two substructural calculi with dependent types: the first containing dependent Lambek types and the second dependent linear types. Technically, the former adheres to the usual assumption that types do not depend on substructural variables (in this case, the Lambek variables), which makes the technical development easier, while the latter allows type dependency on linear variables, which makes the development more challenging as well as more interesting in applications. |
09:20 | SPEAKER: Jiaming Jiang ABSTRACT. In this paper we introduce Commutative/Non-Commutative Logic (CNC logic) and two categorical models for CNC logic. This work abstracts Benton's Linear/Non-Linear Logic by removing the exchange structural rule. One should view this logic as composed of two logics; one sitting to the left of the other. On the left, there is intuitionistic linear logic, and on the right is a mixed commutative/non-commutative formalization of the Lambek calculus. Then both of these logics are connected via a pair of monoidal adjoint functors. An exchange modality is then derivable within the logic using the adjunction between both sides. Thus, the adjoint functors allow one to pull the exchange structural rule from the left side to the right side. We then give a categorical model in terms of a monoidal adjunction, followed by a concrete model in terms of dialectica Lambek spaces. |
09:55 | SPEAKER: Giulio Guerrieri ABSTRACT. We study Girard's two translations of intuitionistic implication into linear logic by exploiting the Bang Calculus, a paradigmatic functional language with an explicit box-operator into which both the call-by-name and call-by-value lambda-calculi can be encoded. We investigate how the bang calculus subsumes both the call-by-name and call-by-value lambda-calculi from both a syntactic and a semantic point of view. |
09:00 | Is compositionality all it's cracked up to be? ABSTRACT. Compositionality is taken to be a fundamental principle for formal languages, including programming languages. The practical importance of compositionality is to allow programs to be made modular and code to be reused. Apparent violations of compositionality can be handled formally by enriching the logic used to define the language. The semantics of natural languages is often handled similarly within the formal semantics tradition: apparent cases of non-compositionality (such as the idiom used in the title) can be analysed differently to give a compositional semantics. This approach is controversial, however, since such accounts may have many ramifications. On the other hand, if natural language is not fundamentally compositional, it is unclear how it is possible for an indefinitely large number of sentences to be generated/understood by native speakers: this has analogies to the practical considerations in programming. Approaches to computational linguistics which follow the formal semantics tradition assume some form of compositionality, but with broad-coverage syntax-based grammars, this has interesting consequences, some of which I will outline. I will further argue that many recent approaches in computational linguistics ignore compositionality in the sense used in the formal tradition. Compositional distributional semantics has been an important area of research, but I will argue that this primarily involves modeling behaviour which is non-compositional or semi-compositional, because of the types of evaluation which are used. In many applications of neural networks, the move away from modularity in favour of end-to-end models removes one of the basic practical arguments for compositionality. Some recent results suggest that commonly used neural models may not always model even very simple forms of compositionality. This in turn suggests there is a danger that such models will fail in unexpected ways. |
10:00 | SPEAKER: Temur Kutsia ABSTRACT. Anti-unification is a well-known method to compute generalizations in logic. Given two objects, the goal of anti-unification is to reflect commonalities between these objects in the computed generalizations, and highlight differences between them. Anti-unification appears to be useful for various tasks in natural language processing. Semantic classification of sentences based on their syntactic parse trees, grounded language learning, semantic text similarity, insight grammar learning, metaphor modeling: this is an incomplete list of topics where generalization computation has been used in one form or another. The major anti-unification technique in these applications is the original method for first-order terms over fixed-arity alphabets, introduced by Plotkin and Reynolds in the 1970s, and some of its adaptations. The goal of this paper is to give a brief overview of existing linguistic applications of anti-unification, discuss a couple of powerful and flexible generalization computation algorithms developed recently, and argue for their potential use in natural language processing tasks. |
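To make the core operation concrete, here is a minimal sketch (ours, not from the paper) of Plotkin-style anti-unification for first-order terms; the Python encoding of terms as nested tuples and all names are illustrative assumptions:

    # First-order anti-unification (least general generalization), in the
    # style of Plotkin/Reynolds. A term is a variable (a plain string) or
    # an application encoded as a tuple (symbol, arg1, ..., argn).
    def lgg(s, t, store):
        # Same head symbol and arity: generalize argument-wise.
        if (isinstance(s, tuple) and isinstance(t, tuple)
                and s[0] == t[0] and len(s) == len(t)):
            return (s[0],) + tuple(lgg(a, b, store) for a, b in zip(s[1:], t[1:]))
        if s == t:
            return s
        # Disagreement pair: reuse the variable already introduced for it;
        # this sharing is what makes the generalization *least* general.
        if (s, t) not in store:
            store[(s, t)] = "x%d" % len(store)
        return store[(s, t)]

    # The lgg of f(g(a), a) and f(g(b), b) is f(g(x0), x0):
    print(lgg(("f", ("g", ("a",)), ("a",)),
              ("f", ("g", ("b",)), ("b",)), {}))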
09:00 | Workshop introduction SPEAKER: David Baelde |
09:15 | An Introduction to Cyclic Proofs ABSTRACT. Cyclic proofs have recently been gaining in popularity, both as a proof-theoretic tool for studying logical systems, and as a paradigm for automatic proof search. Over two hour-long talks at the PARIS workshop I hope to give an introduction to cyclic proofs for the uninitiated, to explore some of their applications, to outline their relationships to other techniques and, perhaps, to clear up some common misconceptions. |
09:00 | SPEAKER: Thorsten Ehlers ABSTRACT. In this paper we present new implementation details and benchmarking results for our parallel portfolio solver \topo. In particular, we discuss ideas and implementation details for the exchange of learned clauses in a massively-parallel SAT solver which is designed to run more than 1,000 solver threads in parallel. Furthermore, we go back to the roots of portfolio SAT solving, and discuss the impact of diversifying the solver by using different restart, branching, and clause-database management heuristics. We show that these techniques can be used to tune the solver towards different problems. However, in a case study on formulas derived from Bounded Model Checking problems we see the best performance when using a rather simple clause exchange strategy. We show details of these tests and discuss possible explanations for this phenomenon. As computing times on massively-parallel clusters are expensive, we consider it especially interesting to share these kinds of experimental results. |
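The paper's concrete strategies are not reproduced here; as a rough illustration of what a "rather simple clause exchange strategy" can look like in a portfolio solver, a Python sketch in which all names, thresholds, and the interface are hypothetical:

    # Illustrative only: size/LBD-bounded export of learned clauses.
    def should_export(clause, lbd, max_size=8, max_lbd=4):
        # Share only short, low-LBD clauses to bound communication cost.
        return len(clause) <= max_size and lbd <= max_lbd

    def on_clause_learned(clause, lbd, outbox):
        if should_export(clause, lbd):
            outbox.append(clause)  # later imported by other solver threads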
09:30 | SPEAKER: Anastasia Leventi-Peetz ABSTRACT. Performing hundreds of test runs and a source-code analysis, we empirically identified improved parameter configurations for CryptoMiniSat (CMS) 5 for solving cryptographic CNF instances originating from algebraic known-plaintext attacks on three-round encryption of the Small AES-64 model cipher SR(3, 4, 4, 4). We were ultimately able to reconstruct 64-bit long keys in under an hour of real time, which, to our knowledge, had never been achieved before, especially not without any assumptions or previous knowledge of key bits (for instance in the form of side channels, as in Mohamed et al., Improved Algebraic Side-Channel Attack on AES, 2012). A statistical analysis of the non-deterministic solver runtimes was carried out, and command-line parameter combinations were defined to yield the best runtimes, whose medians initially ranged from under an hour to a few hours. We then used an Automatic Algorithm Configuration (AAC) tool to systematically extend the search for even better solver configurations, successfully delivering even shorter solving times. In this work we elaborate on the systematics we followed to reach our results in a traceable and reproducible way. The ultimate focus of our investigations is to find out whether CMS, when appropriately tuned, is indeed capable of attacking even bigger and harder problems than those solved here. For the domain of cryptographic research, the duration of the solving time plays an inferior role compared to the practical feasibility of finding a solution to the problem. The prospective scalability of the results presented here is the object of further investigations. |
10:00 | Stedman and Erin Triples encoded as a SAT Problem ABSTRACT. A very old quest in campanology is the search for peals, which can be considered as constrained searches for Hamiltonian cycles of a Cayley graph. Two particularly hard problems are finding bobs-only peals of Stedman Triples and Erin Triples. We show how to efficiently reduce them to Boolean satisfiability and use a SAT solver to help find bobs-only peals of Stedman Triples, and express the unsolved problem of bobs-only Erin Triples as an unsolved SAT problem. This approach is based on the author's very efficient general reduction of the Hamiltonian Cycle Problem (HCP) to Boolean Satisfiability (SAT), converting any Hamiltonian Cycle problem with n vertices and m directed edges to a SAT problem with approximately $n\log_2(m)$ variables and $2m(\log_2(n)+1)$ clauses. |
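To give a feel for the advertised bounds (arithmetic ours): an HCP instance with $n = 5000$ vertices and $m = 40000$ directed edges translates to roughly $5000 \cdot \log_2(40000) \approx 76{,}500$ variables and $2 \cdot 40000 \cdot (\log_2(5000) + 1) \approx 1.06$ million clauses.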
09:00 | ABSTRACT. This will be a tutorial on deep inference for general computer scientists. It will connect with the simply-typed lambda-calculus, and with classical logic. No further background knowledge is expected. |
10:00 | SPEAKER: Joseph Paulus ABSTRACT. We explore the formulation of non-idempotent intersection types for the lambda-calculus in deep-inference using the open-deduction formalism. |
09:00 | Handling substitutions via duality ABSTRACT. Many interesting problems concerning intuitionistic and intermediate propositional logics are related to properties of substitutions: among them, besides unification, we have rule admissibility, characterization of projective formulae, definability of maximum and minimum fixpoints, finite periodicity theorems, etc. Since all these questions can be stated in purely category-theoretic terms, they are all amenable to an approach via duality techniques. The available duality for finitely presented Heyting algebras involves both sheaves (giving the appropriate geometric framework) and bounded bisimulations (handling the combinatorics of definability aspects): we show how to use such duality to attack and solve the above problems in a uniform way. [Recent and new results come from joint and ongoing work with Luigi Santocanale] |
10:00 | SPEAKER: Joerg Siekmann ABSTRACT. At this time we only submit the abstract, which I uploaded under paper |
10:00 | SPEAKER: Maciej Bendkowski ABSTRACT. lambda-upsilon is an extension of the lambda-calculus which internalises the calculus of substitutions. In the current paper, we investigate the combinatorial properties of lambda-upsilon, focusing on the quantitative aspects of substitution resolution. We exhibit an unexpected correspondence between the counting sequence for lambda-upsilon terms and the famous Catalan numbers. As a by-product, we establish effective sampling schemes for random lambda-upsilon terms. We show that typical lambda-upsilon terms represent, in a strong sense, non-strict computations in the classic lambda-calculus. Moreover, typically almost all substitutions are in fact suspended, i.e., unevaluated, under closures. Consequently, we argue that lambda-upsilon is an intrinsically non-strict calculus of explicit substitutions. Finally, we investigate the distribution of various redexes governing the substitution resolution in lambda-upsilon and investigate the quantitative contribution of various substitution primitives. |
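For reference, the Catalan numbers satisfy $C_n = \frac{1}{n+1}\binom{2n}{n}$; a minimal Python sketch (ours, not from the paper) of the sequence in question:

    # Catalan numbers, which the abstract relates to the counting
    # sequence of lambda-upsilon terms.
    from math import comb

    def catalan(n):
        return comb(2 * n, n) // (n + 1)

    print([catalan(n) for n in range(8)])  # [1, 1, 2, 5, 14, 42, 132, 429]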
11:00 | Some applications of quantitative types inside and outside type theory |
11:00 | SPEAKER: Andreas Nuyts ABSTRACT. Presheaf semantics are an excellent tool for modelling relational preservation properties of (dependent) type theory. They have been applied to parametricity (which is about preservation of relations), univalent type theory (which is about preservation of equivalences), and directed type theory (which is about preservation of morphisms). Of course after going through the endeavour of constructing a presheaf model of type theory, we want type-theoretic profit, i.e. we want internal operations that allow us to write cheap proofs of the 'free' theorems that follow from the preservation property concerned. There is currently no general theory of how we should craft such operations. Cohen et al. introduced the final type extension operation Glue, using which one can prove the univalence axiom and hence also the 'free' equivalence theorems it entails. In previous work with Vezzosi, we showed that Glue and its dual, the initial type extension operation Weld, can be used to internalize parametricity to some extent. Earlier, Bernardy, Coquand and Moulin had introduced completely different 'boundary-filling' operations to internalize parametricity. Each of these operations has to our knowledge only been used in concrete models and hence their expressivity has not been compared. We have done some work to fill the hiatus: we consider and compare the various operators in more general presheaf categories. In this first step, we do not consider fibrancy requirements. |
11:30 | |
12:00 | SPEAKER: Bas Spitters |
11:00 | SPEAKER: Nohra Hage ABSTRACT. We introduce string data structures as combinatorial descriptions of structured words on totally ordered alphabets. The data can be described by words through a reading map and can be constructed by using an insertion algorithm. The insertion map defines a product on the data. We show that the associativity of this product, the cross-section property of the data structure, and the confluence of the rewriting system defined by the insertion map are equivalent properties. We make explicit a coherent presentation of the monoid presented by the data structure, made of generators, rewriting rules describing the insertion of letters in words, and relations among the insertion algorithms. |
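As a toy instance of these definitions (our example, not the paper's): sorted words over a totally ordered alphabet form a string data structure whose insertion algorithm is a step of insertion sort, and the induced product is associative, as the abstract's equivalence predicts:

    # Toy string data structure (our example): sorted words with
    # insertion-sort insertion; the induced product is associative.
    import bisect

    def insert(word, letter):          # the insertion algorithm
        w = list(word)
        bisect.insort(w, letter)
        return "".join(w)

    def product(p, q):                 # product induced by inserting q into p
        for letter in q:
            p = insert(p, letter)
        return p

    assert product(product("ab", "c"), "a") == product("ab", product("c", "a"))
    print(product("ab", "ac"))         # prints "aabc"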
11:30 | ABSTRACT. Higher categories are a generalization of standard categories where there are not only $1$-cells between $0$-cells but more generally $n{+}1$-cells between $n$-cells. They are more and more used in mathematics, physics and computer science. They can notably be used to represent algebraic structures. There are several variants going from weak categories, which are the most general formalism but also the hardest to manipulate, to strict categories, simpler but less general. One usually wants both the expressive power of weak categories and the simplicity of strict categories. Semi-strict categories, such as Gray categories in dimension $3$, are an in-between formalism that is used in this work. Here, we are interested in proving coherence of certain algebraic structures in dimension $3$ using rewriting, where "coherence" is the property that there is at most one $3$-cell between two $2$-cells. It amounts to computing critical pairs of a rewriting system and using a variant of Newman's lemma. In this setting, an algorithm exists to compute these critical pairs. |
12:00 | ABSTRACT. We are studying rewriting systems over free modules, that is, linear combinations of free generators with noninvertible coefficients. We provide a sufficient condition in terms of local confluence restricted to generators for the global rewrite relation to be confluent: this condition is formulated in terms of syzygies. When the coefficients belong to a domain, we equip the set of syzygies with a module structure, which provides a finer criterion: the local confluence has to be checked over a subset of syzygies, namely a generating set for the module structure. |
11:00 | Invited talk: Why and How Does K work? The Logical Infrastructure Behind It ABSTRACT. The K framework was born from our firm belief that an ideal language framework is possible. Specifically, that programming languages must have formal definitions, and that tools for a given language, such as interpreters, compilers, state-space explorers, model checkers, deductive program verifiers, etc., are derived from just one reference formal definition of the language, correct-by-construction and at no additional cost specific to that language. No other semantics for the same language should be needed. The semantics of several real languages have been formalized in K, including C, Java, JavaScript, PHP, Python, Rust, and more recently the Ethereum VM (EVM), and the generic K tools have been instantiated with these languages. In particular, the EVM semantics is used commercially by Runtime Verification to formally verify smart contracts on the Ethereum blockchain. But what is behind K? Why and how does it work? This talk will discuss the logical formalism underlying K, matching logic, a first-order logic (FOL) variant for specifying and reasoning about structure by means of patterns and pattern matching. Matching logic generalizes several logical frameworks important for program analysis, such as: propositional logic, algebraic specification, FOL with equality, (polyadic) modal logics, temporal logics, separation logic, as well as dynamic and Hoare logics. Binders and fixed-points can also be defined in matching logic, and thus the variety of lambda/mu-calculi and substitution-based semantics. Patterns can specify both structural requirements, including separation requirements at any level in any program configuration (not only in the "heap"), as well as logical requirements and constraints. The various languages defined in K, regardless of their size and complexity, become nothing but matching logic theories, and the various tools provided by the K framework, such as interpretation, symbolic execution, search and model checking, as well as full-fledged deductive program verification, become nothing but proof search heuristics in matching logic. |
12:00 | SPEAKER: Ulysse Gérard ABSTRACT. The Abella theorem prover is based on a logic in which relations, and not functions, are defined by induction (and coinduction). Of course, many relations do, in fact, define functions and there is real value in separating functional computation (marked by determinism) from more general deduction (marked by nondeterminism and backtracking). Recent work on focused proof systems for the logic underlying Abella is used in this paper to motivate the design of various extensions to the Abella system. With these extensions to the system (which do not extend the logic), it is possible to fully automate functional computations within the relational setting as soon as a proof is provided that a given relation does, in fact, capture a total function. In this way, we can use Abella to compute functions on data structures that contain bindings. |
12:20 | SPEAKER: Alberto Momigliano ABSTRACT. Contrary to Dijkstra's diktat, testing, and more generally, validation, has found an increasing niche in formal verification, prior to, or even as an alternative to, theorem proving. Validation, and in particular property-based testing (PBT), is quite effective in mechanized meta-theory of programming languages, where theorems have shallow but tedious proofs that may go wrong for fairly banal mistakes in specifications. In this report, we abandon the comfort of high-level object languages and address the validation of abstract machines and typed assembly languages. We concentrate on Appel and Leroy's List-machine benchmark (JAR, 2012), which we tackle both with alphaCheck, the simple model-checker on top of the nominal logic programming language alphaProlog, and the PBT library FSCheck for F#. This allows us to compare the relative merits of exhaustive PBT in a logic programming style versus the more usual randomized functional setting. We uncover one major bug in the published version of the paper, plus several typos and ambiguities thereof. This is particularly striking, as the paper is accompanied by two full formalizations, in Coq and Twelf. Finally, we do a bit of mutation testing on the given model, to further assess the trade-off between exhaustive and randomized data generation. Spoiler alert: the former performs better. |
11:00 | A more precise, more correct stack and register model for CompCert ABSTRACT. The proposed talk describes ongoing work on modeling subregister aliasing in the CompCert formally verified C compiler. We discuss some possible ways of modeling paired subregisters and their trade-offs. Exploring the design space uncovered long-standing fundamental type errors in CompCert's semantic models; implementing the eventual solution uncovered some more. We are close to finishing a new semantic model in which registers and stack slots are no longer treated as containing symbolic values but tuples of bytes. |
11:30 | Towards a library of formalised undecidable problems in Coq: The undecidability of intuitionistic linear logic SPEAKER: Yannick Forster |
12:00 | A graph-rewriting perspective of the beta-law SPEAKER: Koko Muroya ABSTRACT. This preliminary report studies a graphical version of Plotkin's call-by-value equational theory, in particular its soundness with respect to operational semantics. Although an equational theory is useful in safe program transformation like compiler optimisation, proving its soundness is not trivial, because it involves analysis of interaction between evaluation flow and a particular sub-program of interest. We observe that soundness can be proved in a direct and generic way in the graphical setting, using small-step semantics given by a graph-rewriting abstract machine previously built for evaluation-cost analysis. This would open up opportunities to think of a cost-sensitive equational theory for compiler optimisation, and to prove contextual equivalence directly in the presence of language extensions. |
11:00 | Termination of lambda-calculus linearization methods |
12:00 | SPEAKER: Wen Kokke ABSTRACT. Process calculi based on logic, such as πDILL and CP, provide a foundation for deadlock-free concurrent programming. However, there is a mismatch between the structures of operators used as proof terms in previous work and the term constructs of the standard π-calculus. We introduce Hypersequent Classical Processes (HCP), which addresses this mismatch. The key insight is to register parallelism in the typing judgements using hypersequents, a technique from logic which generalises judgements from one sequent to many. This allows us to take apart the term constructs used in Classical Processes (CP) to more closely match those of the standard π-calculus. We prove that HCP enjoys subject reduction and progress, and prove several properties relating it back to CP. |
11:00 | ABSTRACT. See attachment. |
11:30 | SPEAKER: Zhaohui Luo ABSTRACT. In type-theoretical semantics, sentences may often be interpreted as judgements, rather than propositions. When interpreting composite sentences such as those involving negations and conditionals, one may want to turn a judgemental interpretation into a proposition in order to obtain an intended semantics. In this paper, we propose a new negation operator NOT for constructing propositional forms of judgemental interpretations. NOT is introduced axiomatically, with five axiomatised laws to govern its behaviour, and several examples are given to illustrate its use in semantic interpretation. In order to justify NOT, we employ a heterogeneous equality to prove its laws and, since the addition of heterogeneous equality to type theories is consistent, so is our introduction of the NOT operator. Also discussed is how to use the negation operator in event semantics. |
12:00 | SPEAKER: Ribeka Tanaka ABSTRACT. This paper proposes an analysis of paycheck sentences in the framework of Dependent Type Semantics. We account for the anaphora resolution of paycheck pronouns by using dependent function types in dependent type theory. We argue that the presupposition of the possessive NP provides a function that contributes to the paycheck reading. The proposed analysis provides a uniform treatment of paycheck pronouns and standard referential pronouns, without introducing additional formal mechanisms to the system. |
11:00 | ABSTRACT. Circular (i.e., non-wellfounded but regular) proofs have received increasing interest in recent years with the simultaneous development of their applications and meta-theory: infinitary proof theory is now well-established in several proof-theoretical frameworks such as Martin-Löf's inductive predicates, linear logic with fixed points, etc. In the setting of non-wellfounded proofs, a validity criterion is necessary to distinguish, among all infinite derivation trees (a.k.a. pre-proofs), those which are logically valid proofs. A standard approach is to consider a pre-proof to be valid if every infinite branch is supported by a progressing thread. This work focuses on circular proofs for MALL with fixed points. Among all representations of valid circular proofs, a new fragment is described, based on a stronger validity criterion. This new criterion is based on a labelling of formulas and proofs, whose validity is purely local. This allows this fragment to be easily handled, while being expressive enough to still contain all circular embeddings of Baelde's muMALL finite proofs with (co)inductive invariants: in particular, deciding validity and computing a certifying labelling can be done efficiently. Moreover, the Brotherston-Simpson conjecture holds for this fragment: every labelled representation of a circular proof in the fragment is translated into a standard finitary proof. Finally, we explore how to extend these results to a bigger fragment, by relaxing the labelling discipline while retaining (i) the ability to locally certify the validity and (ii) to some extent, the ability to finitize circular proofs. |
11:45 | SPEAKER: Florian Bruse ABSTRACT. Higher-Order Fixpoint Logic (HFL) is an extension of the modal mu-calculus by a typed lambda calculus. As in the mu-calculus, whether the nesting of least and greatest fixpoints increases expressive power is an important question. It is known that at low type-theoretic levels, the fixpoint alternation hierarchy is strict. We present classes of structures over which the alternation hierarchy of HFL-formulas at low type level collapses into the alternation-free fragment, albeit at the cost of an increase in type level by one. |
11:00 | SPEAKER: Adrian Rebola Pardo ABSTRACT. DRAT proofs have become the de facto standard for certifying SAT solvers' results. State-of-the-art DRAT checkers are able to efficiently establish the unsatisfiability of a formula. However, DRAT checking requires unit propagation, and so it is computationally non-trivial. Due to design decisions in the development of early DRAT checkers, the class of proofs accepted by state-of-the-art DRAT checkers differs from the class of proofs accepted by the original definition. In this paper, we formalize the operational definition of DRAT proofs, and discuss practical implications of this difference for generating as well as checking DRAT proofs. We also show that these theoretical differences have the potential to affect whether some proofs generated in practice by SAT solvers are correct or not. |
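For readers unfamiliar with the format, a tiny illustration (example ours): for the unsatisfiable DIMACS CNF below, a DRAT refutation is a sequence of clause additions, each checkable by unit propagation (the AT/RAT criterion), ending with the empty clause.

    c an unsatisfiable CNF over variables 1 and 2
    p cnf 2 4
     1  2 0
     1 -2 0
    -1  2 0
    -1 -2 0

    c DRAT proof: add the unit clause (1), then the empty clause;
    c each addition follows by unit propagation from the clauses above.
    1 0
    0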
11:30 | SPEAKER: Michał Karpiński ABSTRACT. A Pseudo-Boolean (PB) constraint is a linear inequality constraint over Boolean variables. A popular idea to solve PB-constraints is to transform them to CNFs (via BDDs, adders and sorting networks) and process them using ever-improving state-of-the-art SAT-solvers. Recent research has favored the approach that uses Binary Decision Diagrams (BDDs), as evidenced by several new constructions and optimizations. We show that encodings based on comparator networks can still be very competitive. We present a system description of a PB-solver based on MiniSat+ which we extended by adding a new construction of a selection network called the 4-Way Merge Selection Network, with a few optimizations based on other solvers. Experiments show that on many instances of popular benchmarks our technique outperforms other state-of-the-art PB-solvers. |
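As a small reminder of the setting (example ours): the PB constraint $x_1 + x_2 + x_3 \le 1$ can be translated to the CNF clauses $(\lnot x_1 \lor \lnot x_2)$, $(\lnot x_1 \lor \lnot x_3)$ and $(\lnot x_2 \lor \lnot x_3)$; comparator-network encodings instead feed the inputs through a network of comparators and constrain the sorted (unary) outputs, which scales to larger right-hand sides.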
12:00 | Divide and Conquer: Towards Faster Pseudo-Boolean Solving SPEAKER: Jan Elffers ABSTRACT. The last 20 years have seen dramatic improvements in the performance of algorithms for Boolean satisfiability (so-called SAT solvers), and today conflict-driven clause learning (CDCL) solvers are routinely used for real-world problems in a wide range of application areas. One serious shortcoming of CDCL, however, is that the underlying method of reasoning is quite weak, which can lead to exponential running times for many simple combinatorial problems. A tantalizing solution is to instead use stronger pseudo-Boolean (PB) reasoning, but so far the promise of exponential gains in performance has failed to materialize: the increased theoretical strength seems hard to harness algorithmically, and in many applications CDCL-based methods are still superior. We propose a modified approach to pseudo-Boolean solving, using division instead of the saturation rule in [Chai and Kuehlmann '05] and other PB solvers. In addition to resulting in a stronger conflict analysis, this also keeps integer coefficient sizes down, which further improves performance. Together with some other optimizations, this yields a solver that performed very well in the Pseudo-Boolean Competitions 2015 and 2016, and that has speed approaching that of CDCL-based methods measured in terms of the number of conflicts per second. This is a presentation-only submission of a paper to be published at IJCAI '18. |
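To illustrate the modified rule (example ours, using the general cutting-planes division rule): from $2x + 2y + 2z \ge 3$, dividing by $2$ and rounding up derives $x + y + z \ge \lceil 3/2 \rceil = 2$, which has smaller coefficients and is strictly stronger than what saturation yields here, since every coefficient is already below the degree $3$ and saturation leaves the constraint unchanged.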
11:00 | SPEAKER: Jean-Pierre Jouannaud ABSTRACT. Rewriting with graphs enjoys a long history in computer science, graphs being used to represent not only data structures, but also program structures, and even computational models. Termination and confluence techniques have been elaborated for various generalizations of trees, such as rational trees, directed acyclic graphs, lambda terms and lambda graphs. But the design of tools for rewriting arbitrary graphs has made little progress beyond various categorical definitions dating from the mid seventies and the study of the particular families of graphs listed above. This paper describes the generalization of term rewriting techniques to a general class of multigraphs that we call drags, viz. finite, directed, ordered (multi-)graphs. To this end, we develop a rich algebra of drags which allows us to view graph rewriting in exactly the same way as we view term rewriting. |
11:30 | ABSTRACT. We describe a mathematical framework for equational reasoning about infinite families of string diagrams which is amenable to computer automation. The framework is based on context-free families of string diagrams which we represent using context-free graph grammars. We model equations between infinite families of diagrams using rewrite rules between context-free grammars. Our framework represents equational reasoning about concrete string diagrams and context-free families of string diagrams using double-pushout rewriting on graphs and context-free graph grammars respectively. We prove that our representation is sound by showing that it respects the concrete semantics of string diagrammatic reasoning and we show that our framework is appropriate for software implementation by proving important decidability properties. |
12:00 | SPEAKER: Yuta Takahashi ABSTRACT. We generalize the ordinal diagram system (a strong non-monotonic ordinal notation system) of Takeuti in a quasi-well-ordering form. We sketch two ways of showing well-quasi-orderedness; one by means of Dershowitz-Tzameret's version of the tree-embedding theorem with gap condition (on the path-comparable trees), the other by means of the generalized system of transfinite inductive definition. Although ordinal diagrams (and other strong ordinal notation systems) could be a good tool for termination proofs for certain pattern-based rewrite-rule programs, there are some difficulties in applying them at first glance; the traditional termination proof paradigm of term rewriting theory uses the monotonicity property and the substitution property, but the strong ordering systems do not have them in general. In this paper, we note some possible uses of the strong well-quasi-ordinal systems for termination proofs by taking an example of pattern-based rewrite rules from the hydra game (of Buchholz). |
11:00 | ABSTRACT. I discuss recent results enabling expressive process calculi to be embedded in extensions of BV. I also discuss perspectives on understanding the true concurrency of these process embeddings. |
11:30 | SPEAKER: Elaine Pimentel ABSTRACT. In this work, we investigate the proof-theoretic connections between sequent and nested proof calculi. Specifically, we identify general conditions under which a nested calculus can be transformed into a sequent calculus by restructuring the nested sequent derivation (proof) and shedding extraneous information to obtain a derivation of the same formula in the sequent calculus. These results are formulated generally so that they apply to calculi for intuitionistic, normal and non-normal modal logics. |
12:00 | ABSTRACT. In this talk, we will review the different ways nested sequents have been used to give cut-free deductive systems for various logics, in particular many that cannot be handled in ordinary (Gentzen) calculi, and other applications such as interpolation results, realisation theorems for justification logics, etc. |
11:00 | SPEAKER: Andrew M. Marshall ABSTRACT. We study decision procedures for two knowledge problems critical to the verification of security protocols, namely the intruder deduction and the static equivalence problems. These problems can be related to particular forms of context matching and context unification. Both problems are defined with respect to an equational theory and are known to be decidable when the equational theory is given by a subterm convergent term rewrite system. In this note we extend this to consider a subterm convergent equational term rewrite system defined modulo an equational theory, like Commutativity or Associativity-Commutativity. We show that for certain classes of such equational theories, namely the shallow classes, the two knowledge problems remain decidable. |
11:30 | SPEAKER: Ajay Kumar Eeralla ABSTRACT. We consider the problem of unification modulo an equational theory ACh, which consists of a function symbol h that is homomorphic over an associative-commutative operator +. Unification modulo ACh is undecidable, so we define a bounded ACh unification problem. In this bounded version of ACh unification we essentially bound the number of times h can be recursively applied to a term, and only allow solutions that satisfy this bound. There is no bound on the number of occurrences of h in a term, and the + symbol can be applied an unlimited number of times. We give inference rules for solving bounded ACh unification, and we prove that the rules are sound, complete and terminating. We have implemented the algorithm in Maude and give experimental results. We argue that this algorithm is useful in cryptographic protocol analysis. |
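A small worked instance (ours): in ACh, where $h(x+y) \approx h(x) + h(y)$, the problem $h(x) =^? h(a) + h(b)$ has the solution $\{x \mapsto a + b\}$, since $h(a+b) = h(a) + h(b)$; the bound of the paper caps how deeply $h$ may be nested in such solutions.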
12:00 | ABSTRACT. Topological logics are formalisms for reasoning about topological relations between regions. In this paper, we introduce a new inference problem for topological logics, the unifiability problem, which extends the validity problem by allowing one to replace variables by terms before testing for validity. Our main result is that, within the context of the mereotopology of all regular closed polygons of the real plane, unifiable formulas always have finite complete sets of unifiers. |
14:00 | Invited Talk: Scott Domains for Denotational Semantics and Program Extraction ABSTRACT. This talk will highlight the most important results in domain theory from the perspective of program semantics and will explain why they play a crucial role in program extraction from proofs, a topic which, at first glance, seems to have little connection to domain theory. Surprisingly, Scott-domains suffice for the purpose of program extraction, even for the extraction of nondeterministic and concurrent programs. |
14:40 | Robust computability notions for types arising in classical analysis ABSTRACT. A recurring question in computability theory has been whether one can identify a `canonical' notion of computable operation on data of some given kind. If the data are natural numbers (or equivalently finite strings over a finite alphabet), then the existence of such a canonical notion is of course what is claimed by the classical Church-Turing thesis. If the data are themselves of a higher-order nature, then the situation is much more complex, but work of Normann (2000) and Longley (2007) has shown that even widely divergent concepts of higher-order computability will often converge in the class of `hereditarily total' functionals they give rise to. I will outline some recent extensions of these results that imply the existence of a similarly robust computability notion for many kinds of data arising in traditional mathematical practice, particularly complex analysis. To construct (for example) the space of analytic functions on a given complex domain, or some type of higher-order operators acting on this space, one typically needs recourse not just to function spaces, but also to subset and quotient types. These constitute a non-trivial extension from the point of view of computability theory: the computable functions on a subset S of T are not immediately determined by those on T, as there may well be computable functions on S with no computable extension to T. Nevertheless, our new results show that one still obtains a highly robust and `canonical' computability notion for a wide range of mathematical types arising in this way. For many types of interest, this accords with the notion arising in Type Two Effectivity, but our contribution is to show robustness with respect to variations in the choice of the underlying computability model. Whilst we believe that our computability notion has good credentials, it is still not the only such notion on the market for the types we have in mind. Drawing on a theorem of Schroeder (2010), we shall advance the conjecture that it is in fact one of just two natural computability notions arising in the arena of (higher-order) classical analysis. A draft paper covering much of the material to be discussed is available at: http://homepages.inf.ed.ac.uk/jrl/Research/ubiquity-reloaded3.pdf |
15:05 | T^\omega-representations of compact sets through dyadic subbases SPEAKER: Hideki Tsuiki ABSTRACT. We explore representing the compact subsets of a given represented space by infinite sequences over Plotkin's T. We show that computably compact computable metric spaces admit a representation of their compact subsets in such a way that compact sets are essentially underspecified points. We can even ensure that a name of an n-element compact set contains n-1 occurrences of $\bot$. We undergo this study effectively and show that such a T^\omega-representation is effectively obtained from structures of computably compact computable metric spaces. Along the way, we introduce the notion of a computable dyadic subbase, and prove that every computably compact computable metric space admits a proper computable dyadic subbase. As an application, we prove some statements about the Weihrauch degree of closed choice for finite subsets of computably compact computable metric spaces. |
14:00 | A homotopy type theory for directed homotopy theory |
14:00 | Wanda: a higher-order termination tool |
14:00 | SPEAKER: Naoki Nishida ABSTRACT. Unravelings, which are transformations of a conditional term rewriting system (CTRS, for short) into an unconditional term rewriting system (TRS, for short), are useful to prove confluence and operational termination of some CTRSs. A simultaneous unraveling has been proposed for normal 1-CTRSs and a sequential one has been proposed for deterministic 3-CTRSs, the class of which includes normal 1-CTRSs. In this paper, we first show that for a normal 1-CTRS, the simultaneously unraveled TRS is orthogonal iff so is the sequentially unraveled one. Then, we show that for a normal 1-CTRS, if the simultaneously unraveled TRS is terminating, then so is the sequentially unraveled one. Finally, we show that for a normal 1-CTRS with termination of the unraveled TRS, the simultaneously unraveled TRS is locally confluent iff so is the sequentially unraveled one. |
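As a standard illustration (notation ours): a conditional rule such as $f(x) \to g(x) \Leftarrow h(x) \to^* \mathsf{true}$ is unraveled into the two unconditional rules $f(x) \to U(h(x), x)$ and $U(\mathsf{true}, x) \to g(x)$, where the fresh symbol $U$ suspends the right-hand side until the condition has been evaluated; with several conditions, the simultaneous unraveling evaluates them all in one $U$-step, while the sequential one introduces one $U_i$ per condition.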
14:30 | SPEAKER: T. V. H. Prathamesh ABSTRACT. Confluence is a decidable property of ground rewrite systems. We present a formalization effort in Isabelle/HOL of the decision procedure based on ground tree transducers. |
15:00 | SPEAKER: Sarah Winkler ABSTRACT. On the one hand, ordered completion is a fundamental technique in equational theorem proving that is employed by automated tools. On the other hand, the complexity of such tools makes them inherently error-prone. As a remedy to this situation we give an Isabelle/HOL formalization of ordered rewriting and completion that comes with a formally verified certifier for ordered completion proofs. By validating generated proof certificates, our certifier increases the reliability of ordered completion tools. |
14:00 | SPEAKER: Martín Copes ABSTRACT. We present a full formalization in Martin-Löf's Constructive Type Theory of the Standardization Theorem for the Lambda Calculus using first-order syntax with one sort of names for both free and bound variables and Stoughton's multiple substitution. Our version is based on a proof by Ryo Kashima, in which a notion of $\beta$-reducibility with a standard sequence is captured by an inductive relation. The proof uses only structural induction over the syntax and the relations defined, which is possible due to the specific formulation of substitution that we employ. The whole development has been machine-checked using the system Agda. |
14:30 | SPEAKER: Ernesto Copello ABSTRACT. We introduce a universe of regular datatypes with variable binding information, for which we define generic formation, elimination, and induction operators. We then define a generic $\alpha$-equivalence relation over the types of the universe based on name-swapping, and derive iteration and induction principles which work modulo $\alpha$-conversion capturing Barendregt's Variable Convention. We instantiate the resulting framework so as to obtain the $\lambda$-Calculus and System F, for which we derive substitution operations and substitution lemmas for $\alpha$-conversion and substitution composition. The whole work is carried out in Constructive Type Theory and machine-checked by the system Agda. |
15:00 | SPEAKER: David Feller ABSTRACT. Following the introduction of BNF notation by Backus for the Algol 60 report and subsequent notational variants, a metalanguage involving formal “grammars” has developed for discussing structured objects in Computer Science and Mathematical Logic. We refer to this offspring of BNF as Math-BNF or MBNF, to the original BNF and its notational variants just as BNF, and to aspects common to both as BNF-style. MBNF is sometimes called abstract syntax, but we avoid that name because MBNF is in fact a concrete form and has a more abstract form. What all BNF-style notations share is the use of production rules roughly of this form: ◯ ⩴ □₁ | ⋯ | □ₙ Normally, such a rule says “every instance of □ᵢ for i ∈ {1, ..., n} is also an instance of ◯”. MBNF is distinct from BNF in the entities and operations it allows. Instead of strings, MBNF builds arrangements of symbols that we call math-text. Sometimes “syntax” is defined by interleaving MBNF production rules and other mathematical definitions that can contain chunks of math-text. There is no clear definition of MBNF. Readers do not have a document which tells them how MBNF is to be read and must learn MBNF through a process of cultural initiation. To the extent that MBNF is defined, it is largely through examples scattered throughout the literature. This paper gives MBNF examples illustrating some of the differences between MBNF and BNF. We propose a definition of syntactic math text (SMT) which handles many (but far from all) uses of math-text and MBNF in the wild. We aim to balance the goal of being accessible and not requiring too much prerequisite knowledge with the conflicting goal of providing a rich mathematical structure that already supports many uses and has possibilities to be extended to support more challenging cases. |
14:00 | ABSTRACT. This paper studies the so-called generalized multiplicative connectives of linear logic, focusing on the question of finding the "non-decomposable" ones, i.e., those that may not be expressed as combinations of the default binary connectives of multiplicative linear logic, Tensor and Par. In particular, we concentrate on generalized connectives of a surprisingly simple form, called "entangled connectives", and prove a characterization theorem giving a criterion for identifying the decomposable (resp., non-decomposable) entangled connectives. |
14:20 | SPEAKER: Federico Olimpieri ABSTRACT. We show that the normal form of the Taylor expansion of a λ-term is isomorphic to its Böhm tree, improving Ehrhard and Regnier's original proof along three independent directions. First, we simplify the final step of the proof by following the left reduction strategy directly in the resource calculus, avoiding the introduction of an ad-hoc abstract machine. We also introduce a groupoid of permutations of copies of arguments in a rigid variant of the resource calculus, and relate the coefficients of Taylor expansion with this structure, while Ehrhard and Regnier worked with groups of permutations of occurrences of variables. Finally, we extend all the results to a non-deterministic setting: by contrast with previous attempts, we show that the uniformity property that was crucial in Ehrhard and Regnier's approach can be preserved in this setting. |
14:55 | ABSTRACT. We present a translation from Multiplicative Exponential Linear Logic to a simply-typed lambda calculus with cyclic sharing. This translation is derived from a simple observation on the Int-construction on traced monoidal categories. It turns out that the translation is a mixture of the call-by-name CPS translation and the Geometry of Interaction-based interpretation. |
14:00 | Speakers in vats: simulating model-theoretic alignment with distributional semantics ABSTRACT. One long-standing puzzle in semantics is the ability of speakers to refer successfully in spite of holding different models of the world. This puzzle is famously illustrated by the cup/mug example: if two speakers disagree on whether a specific entity is a cup or a mug (i.e. if their interpretation functions differ), how can they align so that the entity can still be talked about? |
15:00 | SPEAKER: Richard Crouch ABSTRACT. Determining semantic relationships between sentences is essential for machines that understand and reason with natural language. Despite neural networks' big successes, end-to-end neural architectures still fail to get acceptable performance for textual inference, possibly due to a lack of adequate datasets for learning. Recently, large datasets have been constructed, e.g., SICK, SNLI, and MultiNLI, but it is not clear how trustworthy these datasets are. This paper describes work on an expressive open-source semantic parser, GKR, that creates graphical representations of sentences used for further semantic processing, e.g., for natural language inference, reasoning and semantic similarity. GKR is inspired by the Abstract Knowledge Representation (AKR), which separates out conceptual and contextual levels of representation that deal respectively with the subject matter of a sentence and its existential commitments. We recall work investigating SICK and its problematic annotations and propose to use GKR as a better representation for the semantics of SICK sentences. |
14:00 | ABSTRACT. In the setting of classical first-order logic with inductive predicates, we give as an example a theory for which the conjectures that cannot be proved without induction reasoning remain unprovable even when non-trivial explicit induction reasoning is added. |
14:45 | SPEAKER: David Cerna ABSTRACT. In recent years, schematic representations of proofs by induction have been studied for their interesting proof-theoretic properties, i.e., allowing extensions of Herbrand's theorem to certain types of inductive proofs. Most of the work concerning these proof-theoretic properties presented schematic proofs as sets of proofs connected by links, together with a global soundness condition. Recently, the $\mathcal{S}\mathit{i}\mathbf{LK}$-calculus was introduced, which provides inferences for expanding the sets of proofs within a schematic proof as well as introducing links without violating the soundness condition. In this work we discuss a simplification of the $\mathcal{S}\mathit{i}\mathbf{LK}$-calculus which isolates the essential mechanisms and provides a path towards the automated construction of schematic proofs. |
14:00 | SPEAKER: Neil Ghani ABSTRACT. We introduce open games as a compositional foundation of economic game theory. A compositional approach potentially allows methods of game theory and theoretical computer science to be applied to large-scale economic models for which standard economic tools are not practical. An open game represents a game played relative to an arbitrary environment; to this end we introduce the concept of coutility, which is the utility generated by an open game and returned to its environment. Open games are the morphisms of a symmetric monoidal category and can therefore be composed by categorical composition into sequential move games and by monoidal products into simultaneous move games. Open games can be represented by string diagrams which provide an intuitive but formal visualisation of the information flows. We show that a variety of games can be faithfully represented as open games in the sense of having the same Nash equilibria and off-equilibrium best responses.
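To fix intuitions, here is a minimal OCaml sketch (ours, with simplifications: the strategy set is a type parameter, and the equilibrium predicate and monoidal product are elided) of the play/coutility part of an open game and its sequential composition:

(* An open game from (x, s) to (y, r) with strategy profiles 'sigma. *)
type ('x, 's, 'y, 'r, 'sigma) open_game = {
  play   : 'sigma -> 'x -> 'y;          (* forward: observation to choice *)
  coplay : 'sigma -> 'x -> 'r -> 's;    (* backward: coutility to the environment *)
}

(* Sequential (categorical) composition: strategy profiles pair up,
   information flows forward through g then h, coutility flows backward. *)
let compose g h = {
  play   = (fun (s1, s2) x -> h.play s2 (g.play s1 x));
  coplay = (fun (s1, s2) x r ->
    g.coplay s1 x (h.coplay s2 (g.play s1 x) r));
}

The backward flow in coplay is the coutility mentioned above: the composite returns to its environment whatever g returns, after h has passed its own coutility back to g.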
14:40 | ABSTRACT. Consider concurrent, infinite-duration, two-player win/lose games played on graphs. If the winning condition satisfies some simple requirement, the existence of Player 1 winning (finite-memory) strategies is equivalent to the existence of winning (finite-memory) strategies in finitely many derived one-player games. Several classical winning conditions satisfy this simple requirement. Under an additional requirement on the winning condition, the non-existence of Player 1 winning strategies from all vertices is equivalent to the existence of Player 2 stochastic strategies winning almost surely from all vertices. Only a few classical winning conditions satisfy this additional requirement, but a fairness variant of o
14:00 | Modeling terms by graphs with structure constraints (two illustrations) ABSTRACT. Highlighting the usefulness of graph techniques for problems that have |
15:00 | SPEAKER: Wolfram Kahl ABSTRACT. Term graph rewriting is important as a ``conceptual implementation'' of the execution of functional programs, and of data-flow optimisations in compilers. One way to define term graph transformation rule application is via the well-established and intuitively accessible double-pushout (DPO) approach; we present a new result proving semantics preservation for such DPO-based term graph rewriting.
14:00 | Compressed Term Unification: Results, uses, open problems, and hopes ABSTRACT. Already in the classic first-order unification problem, the choice of a suitable formalism for term representation has a significant impact on computational efficiency. One can find other instances of this situation in some variants of second-order unification, where representing partial solutions efficiently leads to better algorithms.
15:00 | SPEAKER: Pavlos Marantidis ABSTRACT. It is well-known that the unification problem for a binary associative-commutative-idempotent function symbol with a unit (ACUI-unification) is polynomial for unification with constants and in NP for general unification. We show that the same is true if we add a finite set of ground identities. |
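For reference, ACUI-unification is unification modulo the following standard equational theory for a symbol $f$ with unit $e$ (restated here for the reader; not part of the abstract):

\[
f(x, f(y, z)) = f(f(x, y), z), \qquad
f(x, y) = f(y, x), \qquad
f(x, e) = x, \qquad
f(x, x) = x.
\]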
14:30 | Local-Style Search in the Linear MaxSAT Algorithm: A Computational Study of Solution-Based Phase Saving SPEAKER: Emir Demirović ABSTRACT. We study solution-based phase saving in the linear maxSAT algorithm. Contrary to conventional phase saving, which assigns values to variables based on their most recent assignment during the search, solution-based phasing selects the values according to the best solution found so far. Thus, the algorithm focuses its efforts on the region ``close'' to the current best solution. Such an approach resembles techniques used in local search algorithms, where small incremental perturbations are performed on the best known solution. Although the algorithm has appeared in the literature before, it has not been used in the annual maxSAT evaluations and thus its position among modern solvers is not clear. We implemented solution-based phase saving in Open-WBO and evaluated its performance on benchmarks from the incomplete track of the latest maxSAT evaluation 2017 and the international timetabling competition 2011. The experimental results demonstrate that the approach is highly effective when compared to the baseline linear maxSAT algorithm, outperforming the rank-one solvers from the 60- and 300-second unweighted competition tracks. When compared to the winner of the weighted categories, maxroster, our analysis reveals that the approach provides substantially better results for a number of applications, even competing against specialized timetabling algorithms, although maxroster achieves a higher overall competition score. Hence, we argue that solution-based phase saving should be part of the standard machinery for maxSAT.
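A minimal OCaml sketch (ours, not Open-WBO's actual code) of the polarity choice the abstract describes, assuming arrays best_solution and saved_phase indexed by variables:

let choose_polarity ~best_solution ~saved_phase v =
  (* Solution-based phasing: branch towards the best model found so far. *)
  match best_solution.(v) with
  | Some value -> value
  | None -> saved_phase.(v)   (* fall back to conventional phase saving *)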
15:00 | MLIC: A MaxSAT-Based framework for learning interpretable classification rules SPEAKER: Kuldeep S. Meel ABSTRACT. The wide adoption of machine learning approaches in industry, government, medicine and science has renewed interest in interpretable machine learning: many decisions are too important to be delegated to black-box techniques such as deep neural networks or kernel SVMs. Historically, problems of learning interpretable classifiers, including classification rules or decision trees, have been approached by greedy heuristic methods, as essentially all the exact optimization formulations are NP-hard. Our primary contribution is a MaxSAT-based framework, called MLIC, which allows principled search for interpretable classification rules expressible in propositional logic. Our approach benefits from the revolutionary advances in the constraint satisfaction community to solve large-scale instances of such problems. In experimental evaluations over a collection of benchmarks arising from practical scenarios, we demonstrate its effectiveness: we show that the formulation can solve large classification problems with tens or hundreds of thousands of examples and thousands of features, and provides a tunable balance of accuracy vs. interpretability. Furthermore, we show that in many problems interpretability can be obtained at only a minor cost in accuracy. The primary objective of the paper is to show that recent advances in the MaxSAT literature make it realistic to find optimal (or very high quality near-optimal) solutions to large-scale classification problems. We also hope to encourage researchers in both interpretable classification and the constraint programming community to take it further and develop richer formulations, and bespoke solvers attuned to the problem of interpretable ML. [We have submitted the paper concurrently to CP-18 and therefore marked the paper in the "presentation-only" category.]
16:00 | SPEAKER: Matteo Manighetti ABSTRACT. The usual reading of logical implication φ → ψ as “if φ then ψ” fails in intuitionistic logic: there are formulas φ and ψ such that φ → ψ is not provable, even though ψ is provable whenever φ is provable. Intuitionistic rules apparently cannot derive interesting meta-properties of the logic and, from a computational perspective, the programs corresponding to intuitionistic proofs are not powerful enough. We propose a term assignment corresponding to the admissible rules of intuitionistic propositional logic, and employ it to study the admissible principles. As an example we provide a study of Kreisel-Putnam logic KP: we give a Curry-Howard correspondence for this system, proving the disjunction property and all the good constructive properties. This is a first step towards understanding what the programs corresponding to admissible rules look like.
16:30 | ABSTRACT. We revisit the symmetric mu-mutilde-framework introduced by Curien and Herbelin (later baptised System L by Munch-Maccagnoni) from the perspective of fibred category theory, i.e. using fibred spans and cospans and their relation to (CAT-valued) distributors. The mu-mutilde-framework provides direct computational interpretations for proofs in variants of Gentzen’s classical (multi-conclusioned) sequent calculus LK, exhibiting a duality between call-by-name and call-by-value evaluation. Not globally fixing an evaluation strategy, however, gives rise to a non-confluent critical pair. This problem has been further analyzed by Munch-Maccagnoni via polarized linear logic and the notion of duploid, where the critical pair is exposed as a failure of associativity of composition. One important motivation to revisit duploids and the mu-mutilde-framework from the perspective of fibred category theory is to understand them in a setting where dependent type theories are usually interpreted, and to consider what kind of dependencies might be appropriate for such a symmetric system beyond a mere combination with “usual” asymmetric intuitionistic dependent types. This question leads in a natural way to consider co- and contravariant reindexing, thereby providing, for example, a fresh perspective on the mu-mutilde-critical pair as a choice between the two notions of reindexing, on left and right sequent calculus rules as left and right actions (as for module structures in algebra), and on polarity shifts as related to (op)cartesian liftings. It also suggests a refinement of the notions of “call-by-name” and “call-by-value”. This is still work in progress, and feedback will be appreciated.
17:00 | A Denotational Model of the Typed Lambda-Mu Calculus ABSTRACT. This paper gives a denotational model of the typed lambda-mu calculus. In this model, the absurdity type $\bot$ is interpreted as the type of truth values, and each type is divided into infinitely many ranks, each of which forms a Boolean algebra. Double-negation elimination, the signature deduction rule of classical logic, is interpreted as a move from a member of domain $D$ of type $\neg \neg A (= (A \to \bot) \to \bot)$, rank $\leq n$, to a member of domain $D'$ of type $A$, rank $\leq n+1$. $\mu$-variables are interpreted as variables ranging over a certain restricted domain.
17:30 | ABSTRACT. Cyclic arithmetic, proposed by Simpson '17, is a deduction system based on the language of arithmetic in which proofs may be non-wellfounded, as opposed to usual approaches to infinitary proof theory via an omega-rule. Naturally, some form of correctness condition must be imposed to ensure sound reasoning, and this is implemented by a 'trace' condition at the level of the 'flow graph' of the proof (cf. Buss '91). 'Cyclic arithmetic' (CA) itself consists of such proofs that are regular, i.e. that have only finitely many distinct subtrees, and so may be expressed as finite directed graphs (with cycles). It was shown independently by Simpson '17 and Berardi & Tatsuta '17 that CA and PA are equivalent, and recently that logical complexity in the two theories is similar (Das '18). We consider the issue of cut-elimination for CA. Such a procedure cannot preserve regularity of proofs, so the issue is to show that cut-elimination is 'productive'. To this end, 'continuous' cut-elimination procedures have long been studied in the proof theory of arithmetic, originating with Mints '78. However, the difficulties arising from the 'repetition' rule, ensuring continuity, and the need to preserve trace conditions seem to suggest that an alternative approach is pertinent. In this work-in-progress, we show how cut-elimination can be similarly achieved by a certain reduction to finitary cut-elimination. We compute certain 'runs' through a non-wellfounded proof, which must be finite thanks to the trace condition, and show that these are preserved in the image of cut-elimination. Productivity follows since cut-free runs must be non-empty, and validity follows from the finiteness of runs. The computation of runs, naively, makes use of a semantic oracle, though we believe that this can be replaced by purely syntactic concepts via a 'geometry of interaction' approach to cut-elimination, cf. Girard '89. This would yield a novel proof of the consistency of PA, indirectly via CA. REFERENCES: Berardi & Tatsuta '17. Equivalence of inductive definitions and cyclic proofs under arithmetic. Proceedings of LICS '17. Buss '91. The undecidability of k-provability. Annals of Pure and Applied Logic. Das '18. On the logical complexity of cyclic arithmetic. Preprint. Girard '89. Towards a geometry of interaction. Proceedings of Categories in Computer Science and Logic. Mints '78. Finite investigations of transfinite derivations. Journal of Soviet Mathematics. Simpson '17. Cyclic Arithmetic is Equivalent to Peano Arithmetic. Proceedings of FoSSaCS '17.
16:00 | Higher-dimensional categories: recursion on extensivity SPEAKER: Thomas Cottrell ABSTRACT. Bicategories have been used for many years to model computational phenomena such as concurrency and binders. The collectivity, Bicat, of bicategories has the structure of a tricategory, a notion which has occasionally appeared explicitly and more often implicitly in the programming semantics literature. But what is a weak n-category in general? Strict n-categories for arbitrary n have been used to model concurrency and in connection with rewriting. So it seems only a matter of time until weak n-categories for arbitrary n prove to be of value for programming semantics too. Here, we explore, enrich, and otherwise mildly generalise a prominent definition by Batanin, as refined by Leinster, together with the surrounding theory: they assumed knowledge of sophisticated mathematics, glossed over the inherent recursion, and did not consider the concerns of programming semantics.
16:25 | A logical view of complex analytic maps SPEAKER: Mehrdad Maleki ABSTRACT. We introduce the localic derivative of complex analytic functions in the sense of program logic, giving rise to a Stone duality result for these functions. This extends the related work on real-valued functions on real finite dimensional Euclidean spaces to functions of complex variables. |
16:00 | ABSTRACT. In his 1977 Turing Award address, John Backus introduced the model of functional programming called "FP". FP is a descendant of the Herbrand-Godel notion of recursive definability and the ancestor of the programming language Haskell. One reason that FP is attractive is that it provides "an algebra of functional programs". However, Backus did not believe that basic FP was powerful enough: "FP systems have a number of limitations..... If the set of primitive functions and functional forms is weak, it may not be able to express every computable function." and he moved on to stronger systems. It turns out that, in this respect, Backus was mistaken. Here we shall show that FP can compute every partial recursive function. Indeed, we shall show that FP can simulate the behavior of an arbitrary Turing machine. Our method is the following. We first observe that the equational theory of Cartesian monoids is a fragment of FP. Cartesian monoids are rather simple algebraic structures of which you know many examples. They also travel under many assumed names, such as Cantor algebras, Jonsson-Tarski algebras, and Freyd-Heller monoids. This theory, together with fixed points for all Cartesian monoid polynomials, is contained in FP. Now Cartesian monoids with fixed points can be studied using rewrite techniques, learned from lambda calculus, including confluence and standardization. Turing machines can then be simulated, with transitions corresponding to fixed points and computations corresponding to standard reductions to normal form.
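For reference, one common axiomatization of Cartesian monoids (presentation ours, not from the abstract): a monoid with composition $\circ$ and identity $I$, enriched with pairing $\langle -, - \rangle$ and projection constants $F$ and $S$ satisfying

\[
F \circ \langle a, b \rangle = a, \qquad
S \circ \langle a, b \rangle = b, \qquad
\langle a, b \rangle \circ c = \langle a \circ c,\ b \circ c \rangle, \qquad
\langle F, S \rangle = I.
\]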
16:30 | ABSTRACT. The structure of the Chinese monoid appeared in the classification of monoids whose growth function coincides with that of the plactic monoid. In this work, we deal with presentations of the Chinese monoid from the rewriting theory perspective, using the notion of string data structures. We define a string data structure associated to the Chinese monoid using the insertion algorithm on Chinese staircases. As a consequence, we construct a finite semi-quadratic convergent presentation of the Chinese monoid and we extend it into a finite coherent presentation of this monoid.
17:00 | SPEAKER: Christophe Cordero ABSTRACT. The associative operad is the quotient of the magmatic operad by the operad congruence identifying the two binary trees of degree $2$. We introduce here a generalization of the associative operad depending on a nonnegative integer $d$, called $d$-comb associative operad, as the quotient of the magmatic operad by the operad congruence identifying the left and the right comb binary trees of degree $d$. We study the case $d = 3$ and provide an orientation of its space of relations by using rewrite systems on trees and the Buchberger algorithm for operads to obtain a convergent rewrite system. |
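Concretely, the identified combs can be written as follows: degree $2$ gives ordinary associativity, while the degree-$3$ case studied here identifies the two $3$-combs (notation ours):

\[
(x_1 x_2)\, x_3 = x_1\, (x_2 x_3) \quad (d = 2), \qquad
((x_1 x_2)\, x_3)\, x_4 = x_1\, (x_2\, (x_3 x_4)) \quad (d = 3).
\]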
17:30 | SPEAKER: Antonin Delpeuch ABSTRACT. In the graphical calculus of planar string diagrams, equality is generated by the left and right exchange moves, which swap the heights of adjacent vertices. We show that for connected diagrams the left- and right-handed exchanges each give strongly normalizing rewrite strategies. We show that these strategies terminate in O(n^3) steps, where n is the number of vertices. We also give an algorithm to directly construct the normal form, and hence determine isotopy, in O(n∙m) time, where m is the number of edges.
16:00 | ABSTRACT. The aim of this work is to characterize three fundamental normalization properties in the lambda-calculus through the Taylor expansion. The general proof strategy consists in stating the dependence of ordinary reduction strategies on their resource counterparts and in finding a convenient resource term in the support of the Taylor expansion that behaves well under the considered kind of reduction.
16:30 | SPEAKER: Domenico Ruoppolo ABSTRACT. The use of intersection type systems in the study of denotational semantics of the untyped λ-calculus has been a fruitful line of research over the past thirty-five years. In this extended abstract we present a research project aiming to broaden the boundaries of this type-theoretical approach to semantics, possibly to other idealized programming languages. The key idea is to exploit a more abstract view of what a type system is. This view finds its roots in higher-dimensional category theory.
16:00 | Univalent Foundations and the Constructive View of Theories |
16:30 | |
17:00 |
16:00 | SPEAKER: Kousuke Fukui ABSTRACT. Top trees with DAG representation can be used to compress huge ordered-tree data such as XML documents. However, one ordered tree can be represented by several top trees, so for a higher compression rate it is necessary to efficiently decide which top trees represent the same tree. In this paper, we give a complete axiom system for the equational theory of top trees, called the cluster algebra. In order to prove completeness, we introduce a reduction system on the cluster algebra, and show strong normalization and the unique normal form property.
16:30 | IWC Business Meeting SPEAKER: Aart Middeldorp
|
16:00 | Invited talk: A fresh view of call-by-need ABSTRACT. Call-by-need is a lazy evaluation strategy which overwrites an argument with its value the first time it is evaluated, thus avoiding a costly re-evaluation mechanism. It is used in functional programming languages like Haskell and Miranda. In this talk we present a fresh view of call-by-need in three different aspects. First, we revisit the completeness theorem relating (weak) call-by-need with standard (weak) call-by-name. Second, we relate the syntactic notion of (weak) call-by-need with the corresponding semantic notion of (weak) neededness, based on the theory of residuals. Third, we extend the weak call-by-need strategy, which only computes weak head normal forms of closed terms, to a strong call-by-need strategy which computes strong normal forms of open terms, a notion of reduction used in proof assistants like Coq and Agda. We conclude by proposing some future research directions.
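As a concrete illustration (ours, not from the talk), OCaml's built-in Lazy module implements precisely this overwrite-on-first-evaluation behaviour:

let arg = lazy (print_endline "evaluating"; 6 * 7)

let () =
  Printf.printf "%d\n" (Lazy.force arg);  (* prints "evaluating", then 42 *)
  Printf.printf "%d\n" (Lazy.force arg)   (* prints only 42: the suspension
                                             was overwritten with its value *)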
17:00 | SPEAKER: Rodolphe Lepigre ABSTRACT. The Bindlib library for OCaml provides a set of tools for the manipulation of data structures with variable binding. It is very well suited for the representation of abstract syntax trees, and has already been used for the implementation of half a dozen languages and proof assistants (including a new version of the logical framework Dedukti). Bindlib is optimised for fast substitution, and it supports variable renaming. Since the representation of binders is based on higher-order abstract syntax, variable capture cannot arise during substitution. As a consequence, variable names are not updated at substitution time. They can however be explicitly recomputed to avoid “visual capture” (i.e., distinct variables with the same apparent name) when a data structure is displayed. |
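A hedged sketch of the classic Bindlib encoding of pure lambda-terms (adapted from the style of the library's documentation; the exact function names, e.g. Bindlib.subst, should be checked against the released API):

type term =
  | Var of term Bindlib.var
  | App of term * term
  | Abs of (term, term) Bindlib.binder

(* Injection of free variables into terms, needed when creating fresh
   variables. *)
let mkfree : term Bindlib.var -> term = fun x -> Var x

(* Head beta-reduction: substitution is capture-avoiding by construction,
   and variable names are not updated at substitution time. *)
let beta : term -> term option = function
  | App (Abs b, u) -> Some (Bindlib.subst b u)
  | _ -> None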
17:30 | SPEAKER: Ulysse Gérard ABSTRACT. In this progress report, we highlight the design of the functional programming language MLTS which we have recently proposed elsewhere. This language uses the λ-tree syntax approach to encoding data structures that contain bindings. In this setting, bound variables never become free nor escape their scope: instead, binders in data structures are permitted to move into binding sites within programs. The concrete syntax of MLTS is based on the one for OCaml but includes additional binders within programs that directly support the mobility of bindings. The natural semantics of MLTS can be viewed as a logical theory within the logic G, which forms the basis of the Abella proof system and which includes nominal abstractions and the ∇-quantifier. Here, we provide several examples of MLTS programs. We also illustrate how many Abella relational specifications that are known to specify functions can be rewritten as functions in MLTS. |
16:00 | Denotational event structures for relaxed memory SPEAKER: Simon Castellan ABSTRACT. We present a new methodology to model relaxed memory, interpreting a program on a given architecture by an event structure expressing the causality between instructions, according to the optimisations performed by the hardware. The methodology uses standard tools of event structure theory, in particular the product of event structures and the correspondence between event-based and execution-based representations. In the talk, we will demonstrate the methodology on a simple weak memory specification: TSO (corresponding to Intel x86 processors). We will then present two applications of this model: one theoretical, showing a strong DRF guarantee, and another, more practical, showing how the model allows tinkering to generate smaller program representations than the state of the art.
16:30 | A resource modality for RAII SPEAKER: Guillaume Munch-Maccagnoni ABSTRACT. We model exceptions in a linear, effectful setting by relaxing the notion of monadic strength to contexts that are discardable, in the spirit of C++ destructors. We find a resource modality in linear call-by-push-value that recalls unique_ptr/Box and move semantics in C++11 and Rust. This resource modality gives rise to an ordered logic in which symmetry (move) and weakening (drop) are available as effectful operations. We explore consequences in language design for resource management in functional programming languages. (2-page abstract)
17:00 | Experimenting with graded monads: certified grading-based program transformations SPEAKER: Tarmo Uustalu ABSTRACT. In this work, we formalize in Agda graded monads and type-and-grade inference for a computational lambda calculus with a graded monad, instantiate this for three kinds of effects, and prove correct a number of program equivalences that depend on grading, in order to try out the use of graded monads for reasoning about programs in practice. (We submitted the same abstract to TYPES 2018, which is a similar informal workshop with no publication.)
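As a rough, value-level illustration (ours, not the paper's Agda development): a graded monad refines a monad by grades drawn from a monoid, with return graded by the unit and bind multiplying grades. A writer-style OCaml approximation:

module CounterGraded = struct
  type grade = int                      (* grades: e.g. a bound on ticks used *)
  let one = 0                           (* unit of the grading monoid *)
  let mul = ( + )                       (* monoid multiplication *)

  type 'a t = { value : 'a; grade : grade }

  let return v = { value = v; grade = one }
  let bind m f =
    let m' = f m.value in
    { value = m'.value; grade = mul m.grade m'.grade }

  let tick = { value = (); grade = 1 }  (* a basic graded effect *)
end

In a dependently typed setting such as the paper's, the grade lives in the type of the computation rather than in a runtime field, which is what makes grading-dependent program equivalences statically checkable.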
17:30 | Heaps denote finitely partitioned trees ABSTRACT. We give a denotational semantics of a first-order language with references, in which a computation denotes a dependent partial function. Heaps are represented as finitely partitioned trees. |
16:00 | Applying linear logic semantics to probabilistic programming |
17:00 | ABSTRACT. Proof nets provide permutation-independent representations of proofs and are used to investigate coherence problems for monoidal categories. We investigate a coherence problem concerning Second Order Multiplicative Linear Logic MLL2, namely that of characterizing the equivalence over proofs generated by the interpretation of quantifiers by means of ends and coends. By adapting the "rewiring approach" used in the proof net characterization of the free *-autonomous category, we provide a compact representation of proof nets for a fragment of MLL2 related to the Yoneda isomorphism. We prove that the equivalence generated by coends over proofs in this fragment is fully characterized by the rewiring equivalence over proof nets.
17:35 | SPEAKER: Luca Paolini ABSTRACT. We introduce the functional language IQu which, under the paradigm “quantum data & classical control” and in accordance with the QRAM model, allows one to define and manipulate quantum circuits and quantum states on which we can execute partial measurement. IQu borrows many ideas from the design of Idealized Algol (roughly, PCF extended with local stores and assignment) and its side-effect management. These ideas play a crucial role in the language design: each quantum co-processor is formalized by means of a quantum register (storing a quantum state) that can be modified by quantum directives (lists of unitary gates). The linearity of quantum states is assured by a one-to-one correspondence between quantum states and quantum registers. We adapt the type system of Idealized Algol for typing both quantum registers and quantum directives. The types for quantum registers are parametric in the number of qubits, and their linearity is granted for free. IQu operates on quantum circuits as if they were classical data, so no restriction exists on their duplication.
16:00 | SPEAKER: Larry Moss ABSTRACT. This paper contributes to symbolic inference from text, including naturally occurring text. The idea is to take sentences in a framework like CCG, and then run a polarizing algorithm like the one in Hu and Moss 2018 to determine inferential polarity markings of all the constituents. From this, it is just a small step to obtain an inference engine which is both simple to describe and implement and at the same time surprisingly powerful. We have implemented the basic inference step. This paper is work in progress; it also goes into detail on our projected next steps. The overall goal is a working symbolic inference system which covers "in-practice" inference and is also correct and efficient.
16:30 | Do different syntactic trees yield different logical readings? Some remarks on head variables in typed lambda calculus. SPEAKER: Davide Catta ABSTRACT. A natural question in categorial grammar is the relation between a syntactic analysis and the logical form, i.e. the logical formula obtained from this syntactic analysis once provided with semantic lambda terms. More precisely, do different syntactic analyses fed with equal semantic terms lead to equal logical forms? We shall show that when the question is formulated too simply the answer is "no", while with some constraints on the semantic lambda terms the answer is "yes".
17:00 | SPEAKER: Inari Listenmaa ABSTRACT. We present a method for finding errors in formalized natural language grammars, by automatically and systematically generating test cases that are intended to be judged by a human oracle. The method works on a per-construction basis; given a construction from the grammar, it generates a finite but complete set of test sentences (typically tens or hundreds), where that construction is used in all possible ways. Our method is an alternative to using a corpus or a treebank, where no such completeness guarantees can be made. The method is language-independent and is implemented for the grammar formalism PMCFG, but also works for weaker grammar formalisms. We evaluate the method on a number of different grammars for different natural languages, with sizes ranging from toy examples to real-world grammars. |
17:30 | SPEAKER: Alexandre Rademaker ABSTRACT. This note discusses work on lexical resources for Portuguese centered around OpenWordNet-PT, an open-source wordnet-like resource for Portuguese. We discuss the initial developments, the sister project Nomlex-PT, and above all the applications that were developed in the quest to be able to reason with Portuguese texts.
16:00 | The size-change principle and circular proofs: checking totality of (co)recursive definitions |
17:15 | SPEAKER: Patrick Bahr ABSTRACT. We give an overview of the syntax and semantics of Clocked Type Theory (CloTT), a dependent type theory for guarded recursion with many clocks, in which one can encode coinductive types and capture the notion of productivity in types. The main novelty of CloTT is the notion of ticks, which allows one to open the delay type modality, and, e.g., define a dependent form of applicative functor action, which can be used for reasoning about coinductive data. In the talk we will give examples of programming and reasoning about guarded recursive and coinductive data in CloTT, and we will present the main syntactic results: Strong normalisation, canonicity and decidability of type checking. If time permits, we will also sketch the main ideas of the denotational semantics for CloTT. |
16:00 | On OBDD Proofs
17:00 | SPEAKER: Barnaby Martin ABSTRACT. We give lower bounds in Resolution-with-bounded-conjunction, Res$(s)$, for families of contradictions where witnesses are given in the unusual binary encodings. The two families we focus on are the $k$-Clique Formulas and those associated with the (weak) Pigeonhole Principle. If one could give lower bounds in Res$(\log)$ for such families under the binary encoding, then these would translate to lower bounds for the more typical unary encoding in Resolution, Res$(1)$. Such a lower bound is not possible for certain very weak Pigeonhole Principles, but might be dreamt of for the $k$-Clique Formulas.
17:20 | SPEAKER: Fedor Part ABSTRACT. We study the complexity of resolution extended with the ability to count over different characteristics and rings. These systems capture integer and moduli counting, and in particular admit short tree-like refutations for unsolvable sets of linear equations. For this purpose, we consider the system ResLin(R), in which proof-lines are disjunctions of linear equations over a ring R. (We focus on the case where the variables are Boolean, i.e., we add the Boolean axioms (x=0)\/(x=1) for all variables x.) Extending the work of Itsykson and Sokolov, we obtain new lower bounds and separations, as follows. Over finite fields: exponential-size lower bounds for tree-like ResLin(Fp) refutations of Tseitin mod q formulas, for every pair of distinct primes p, q (as a corollary, an exponential-size separation between tree-like ResLin(Fp) and tree-like ResLin(Fq)); exponential-size lower bounds for tree-like ResLin(Fp) refutations of random k-CNF formulas, for every prime p and constant k; and exponential-size lower bounds for tree-like ResLin(F) refutations of the pigeonhole principle, for every field F. All the above hard instances are encoded as CNF formulas, and the lower bounds are proved using extensions and modifications of the Prover-Delayer game technique and size-width relations. Over characteristic zero fields: a separation of tree-like ResLin(F) and (dag-like) ResLin(F), for characteristic zero fields F. The separating instances are the pigeonhole principle and the Subset Sum principle; the latter is the formula a1 x1 + ... + an xn = b, for some b not in the image of the linear form. The lower bound for the Subset Sum principle employs the notion of immunity from Alekhnovich and Razborov.
17:40 | ABSTRACT. In this work, we extend a well-known connection between linear resolution refutations and read-once branching programs by constructing proof systems based on the syntactically restricted circuits studied in the field of Knowledge Compilation. While our approach only yields proof systems that are weaker than resolution, they may be used to define proof systems for problems such as #SAT or MaxSAT. This is work in progress.
16:00 | SPEAKER: Markus Iser ABSTRACT. Experimental data and benchmarks play a crucial role in developing new algorithms and implementations of SAT solvers. Besides comparing and evaluating solvers, they provide the basis for all kinds of experiments, for setting up hypotheses and for testing them. Currently -- even though some initiatives for setting up benchmark databases have been undertaken, and the SAT Competitions provide a ``standardized'' collection of instances -- it is hard to assemble benchmark sets with prescribed properties. Moreover, the origin of SAT instances is often not clear, and benchmark collections might contain duplicates. In this paper we suggest a framework for storing metadata about SAT instances, and for collecting, assessing and distributing benchmark metadata.
16:30 | SPEAKER: Armin Biere ABSTRACT. It has long been debated how SAT solvers, and in general different or new algorithms, should be evaluated and compared, both in competitions and, more importantly, in papers. Evaluations are usually performed on existing benchmarks. Cross-validation and other means to avoid over-fitting are rarely used. In this paper we revisit the old idea of scrambling benchmarks, also used in early competitions. Scrambling has the goal of making the results of such evaluations more robust. We present a new method for scrambling CNFs which allows one to gradually increase the effect of scrambling, from keeping the scrambled CNF close to the original CNF to a completely random permutation of variables, clauses, and phases of literals. We used this method to scramble benchmarks from the last two SAT competitions and solved them with the best solvers from the main track of the last SAT competition. As expected, our experimental results suggest that scrambling has a substantial effect on the performance of individual solvers, but surprisingly it has little effect on rankings among solvers. As a consequence, we argue that our method of scrambling alone is not enough to increase the robustness of competitions and evaluations in general.
17:00 | Seeking Practical CDCL Insights from Theoretical SAT Benchmarks SPEAKER: Stephan Gocht ABSTRACT. Over the last decades, Boolean satisfiability (SAT) solvers based on conflict-driven clause learning (CDCL) have developed to the point where they can handle formulas with millions of variables. Yet a deeper understanding of how these solvers can be so successful has remained elusive. In this work we try to shed light on CDCL performance by using theoretical benchmarks. While these are crafted instances, they have the attractive features of being a) scalable, b) extremal with respect to different proof search parameters, and c) theoretically *easy* in the sense of having short proofs in the resolution proof system underlying CDCL. This allows for a systematic study of solver heuristics and of how successfully they implement efficient proof search. We report results from extensive experiments with different CDCL parameter configurations on a wide range of benchmarks. Our findings include several examples where theory predicts and explains CDCL behaviour, but they also raise a number of intriguing questions for further study. This is a presentation-only submission of a paper that (we hope and believe) will be published at IJCAI '18.
17:30 | On the use of solvers with Docker technology ABSTRACT. A convenient way to deploy services on a computer is to use a lightweight virtualization approach like Docker. This talk will present some observations made in the context of deploying SAT solvers on the cloud.
17:45 | Questions and Answers ABSTRACT. Questions and answers with the audience |
16:00 | SPEAKER: Aniello Murano ABSTRACT. We analyse the verification problem for synchronous, perfect-recall multi-agent systems with imperfect information against a specification language that includes strategic and epistemic operators. While the verification problem is undecidable in general, we show that if the agents' actions are public, then verification is 2EXPTIME-complete. To illustrate the formal framework we consider two epistemic and strategic puzzles with imperfect information and public actions: the muddy children puzzle and the classic game of battleships. This paper has been accepted for publication at AAMAS 2017.
16:40 | SPEAKER: Michael Wooldridge ABSTRACT. Game theory provides a well-established framework for the analysis of multi-agent systems. Typically, the analysis of a multi-agent system involves computing the set of equilibria in the associated multi-player game representing the behaviour of the system. As systems grow larger, it becomes increasingly harder to find equilibria in the game -- which represent the rationally stable behaviours of the multi-agent system (the solutions of the game). To address this issue, in this paper, we study the concept of local equilibria, which are defined with respect to (maximal) stable coalitions of agents for which an equilibrium can be found. We focus on the solutions given by the Nash equilibria of Boolean games and iterated Boolean games, two logic-based models for multi-agent systems, in which the players' goals are given by formulae of propositional logic and LTL, respectively.
17:20 | SPEAKER: Arno Pauly ABSTRACT. Admissible strategies, i.e. those that are not dominated by any other strategy, are a typical rationality notion in game theory. In many classes of games this is justified by results showing that any strategy is admissible or dominated by an admissible strategy. However, in games played on finite graphs with quantitative objectives (as used for reactive synthesis), this is not the case. We consider increasing chains of strategies instead to recover a satisfactory rationality notion based on dominance in such games. We start with some order-theoretic considerations establishing sufficient criteria for this to work. We then turn our attention to generalised safety/reachability games as a particular application. We propose the notion of maximal uniform chain as the desired dominance-based rationality concept in these games. Decidability of some fundamental questions about uniform chains is established. |
16:00 | SPEAKER: Sophie Tourret ABSTRACT. We introduce the derivation reduction problem, the undecidable problem of finding a finite subset of a set of clauses such that the whole set can be derived using SLD-resolution. We also study the derivation reducibility of various fragments of second-order Horn logic using a graph encoding technique, with particular applications in the field of Inductive Logic Programming. |
16:30 | ABSTRACT. We examine the process involved in the design and implementation of a port-graph model to be used for the analysis of an agent-based "rational negligence" model. Rational negligence is the term used to describe the phenomenon, observed during the financial crisis of 2008, whereby investors chose to trade asset-backed securities without performing independent evaluations of the underlying assets, thus heightening the need for more effective and transparent tools for modelling the capital markets. Graph Transformation Systems (GTS) are natural verification and validation tools: they provide visual, intuitive representations of complex systems while specifying the dynamic behaviour of the system in a formal way. In this paper we propose to use a declarative language based on strategic port-graph rewriting, a particular kind of graph transformation system in which port-graph rewriting rules are controlled by user-defined strategies, as a visual modelling tool to analyse a credit derivative market.
17:00 | ABSTRACT. We present a new approach to the logical design of relational databases, based on strategic port graph rewriting. We show how to model relational schemata as attributed port graphs and provide port graph rewriting rules to perform computations on functional dependencies. Using these rules we present a strategic graph program to find the transitive closure of a set of functional dependencies. This program is sound, complete and terminating, given the restriction that there are no cyclical dependencies in the schema. |
17:30 | SPEAKER: Lee Barnett ABSTRACT. Term rewriting systems have a simple syntax and semantics and allow for straightforward proofs of correctness. However, they lack the efficiency of imperative programming languages. We define a term rewriting programming style that is designed to be translatable into efficient imperative languages, to obtain proofs of correctness together with efficient execution. This language is designed to facilitate translations into correct programs in imperative languages with assignment statements, iteration, recursion, arrays, pointers, and side effects. Though it is not included in the extended abstract, versions of the language with and without destructive operations such as array assignment are developed, and the relationships between them are sketched. General properties of a translation that suffice to prove correctness of translated programs from correctness of rewrite programs are presented. |
16:00 | SPEAKER: Cleo Pau ABSTRACT. Proximity relations are reflexive and symmetric fuzzy binary relations. They generalize similarity relations (similarity, in itself, is a generalization of equivalence in the fuzzy setting) and have been introduced to deal with certain limitations of the latter, related to the incorrect representation of fuzzy information in some cases. Following [1], we consider signatures where some function symbols are allowed to be in a proximity relation with each other. In our opinion, such a representation is more adequate than similarity for dealing with possible mismatches between the names of symbols. Our terms are first-order terms, and the proximity relation is extended to them. In [1], a unification algorithm was introduced for such terms. The problem we deal with in this paper is a dual one: we are looking for generalizations, which, roughly, means that for two terms t1 and t2 we want to find a term r such that there exist substitution instances of r which are 'close enough to' (i.e., are in the given proximity relation with) t1 and t2. Interestingly, computing a minimal complete set of generalizations with respect to a given proximity relation requires computing all possible maximal vertex-clique partitions in an undirected graph. We develop an algorithm for the all-maximal-clique-partitions problem, which is optimal in the sense that, first, it computes each maximal clique partition only once and, second, it avoids generating and discarding false answers. Based on this method, we show that proximity-based anti-unification has the finitary type, and develop a terminating, sound, and complete algorithm for computing a minimal complete set of generalizations. [1] P. Julián-Iranzo and C. Rubio-Manzano. Proximity-based unification theory. Fuzzy Sets and Systems 262 (2015) 21–43.
16:30 | SPEAKER: David Cerna ABSTRACT. In “Generalisation de termes en theorie equationnelle. Cas associatif-commutatif”, a pair of terms was presented over the language { f(·,·), g(·,·), a, b }, where f and g are interpreted over an idempotent equational theory, i.e. g(x,x) = x and f(x,x) = x, resulting in an infinite set of generalizations. While this result settles the complexity of the idempotent generalization problem for equational theories with two or more idempotent functions, the complexity of generalization for equational theories with a single idempotent function symbol was left unsolved. We show that the two-idempotent-function-symbols example can be encoded using a single idempotent function and two uninterpreted constants, thus proving that idempotent generalization, even with a single idempotent function symbol, can result in an infinite set of generalizations. Based on this result, we discuss approaches to handling generalization within idempotent equational theories.
17:00 | SPEAKER: Yunus David Kerem Kutz ABSTRACT. We consider rewriting, critical pairs and confluence tests on rewrite rules using nominal notation. Computing critical pairs is done using nominal unification, and rewriting using nominal matching. The advance is that we permit atom variables in the notation and in the unification algorithm, which generalizes previous approaches based on ordinary nominal unification.
17:30 | SPEAKER: Anders Schlichtkrull ABSTRACT. We present a new formalization in the Isabelle proof assistant of first-order syntactic unification, including a proof of termination. Our formalization follows, almost down to the letter, the ML-code from Baader and Nipkow's book "Term Rewriting and All That" (1998). Correctness is implied by the formalization's similarity to Baader and Nipkow's ML-code, but we have yet to formalize the correctness of the unification algorithm. |
17:50 | SPEAKER: Weixi Ma ABSTRACT. We present a nominal unification algorithm that runs in $O(n \cdot \log(n) \cdot G(n))$ time, where $G$ is the functional inverse of Ackermann's function. Nominal unification computes a set of variable assignments, if one exists, that makes terms involving binding operations $\alpha$-equivalent. We preserve names while using special representations of de Bruijn numbers to enable efficient name management. We use Martelli-Montanari style multi-equation reduction to generate these name management problems from arbitrary unification terms.
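The inverse-Ackermann factor $G(n)$ in bounds of this shape typically comes from union-find with path compression and union by rank; a standard self-contained OCaml sketch (ours, not the paper's code):

let n = 1000                                   (* example universe size *)
let parent = Array.init n (fun i -> i)
let rank = Array.make n 0

let rec find i =
  if parent.(i) = i then i
  else begin
    let r = find parent.(i) in
    parent.(i) <- r;                           (* path compression *)
    r
  end

let union i j =
  let ri = find i and rj = find j in
  if ri <> rj then begin
    if rank.(ri) < rank.(rj) then parent.(ri) <- rj
    else if rank.(ri) > rank.(rj) then parent.(rj) <- ri
    else begin parent.(rj) <- ri; rank.(ri) <- rank.(ri) + 1 end
  end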
|
Discussions about UNIF 2018 issues and future of UNIF
This is a dinner in a restaurant still to be decided, different from the FLoC dinner at Balliol College.
To register, please send an email to lfmtp18@easychair.org before July 1st.
Workshops dinner at Balliol College. Drinks reception from 7.45pm, to be seated by 8:15 (pre-booking via FLoC registration system required; guests welcome).