FLOC 2018: FEDERATED LOGIC CONFERENCE 2018
NLCS ON SATURDAY, JULY 7TH


09:00-10:30 Session 23J: Invited and Contributed
09:00
Is compositionality all it's cracked up to be?

ABSTRACT. Compositionality is taken to be a fundamental principle for formal languages, including programming languages. The practical importance of compositionality is to allow programs to be made modular and code to be reused. Apparent violations of compositionality can be handled formally by enriching the logic used to define the language. The semantics of natural languages is often handled similarly within the formal semantics tradition: apparent cases of non-compositionality (such as the idiom used in the title) can be analysed differently to give a compositional semantics. This approach is controversial, however, since such accounts may have many ramifications. Yet if natural language is not fundamentally compositional, it is unclear how it is possible for an indefinitely large number of sentences to be generated/understood by native speakers: this has analogies to the practical considerations in programming. Approaches to computational linguistics which follow the formal semantics tradition assume some form of compositionality, but with broad-coverage, syntax-based grammars this has interesting consequences, some of which I will outline. I will further argue that many recent approaches in computational linguistics ignore compositionality in the sense used in the formal tradition. Compositional distributional semantics has been an important area of research, but I will argue that this primarily involves modelling behaviour which is non-compositional or semi-compositional, because of the types of evaluation which are used. In many applications of neural networks, the move away from modularity in favour of end-to-end models removes one of the basic practical arguments for compositionality. Some recent results suggest that commonly used neural models may not always model even very simple forms of compositionality. This in turn suggests there is a danger that such models will fail in unexpected ways.

10:00
Anti-Unification and Natural Language Processing
SPEAKER: Temur Kutsia

ABSTRACT. Anti-unification is a well-known method to compute generalizations in logic. Given two objects, the goal of anti-unification is to reflect commonalities between these objects in the computed generalizations, and highlight differences between them.

Anti-unification appears to be useful for various tasks in natural language processing. Semantic classification of sentences based on their syntactic parse trees, grounded language learning, semantic text similarity, insight grammar learning, metaphor modeling: this is an incomplete list of topics where generalization computation has been used in one form or another. The major anti-unification technique in these applications is the original method for first-order terms over fixed-arity alphabets, introduced by Plotkin and Reynolds in the 1970s, together with some of its adaptations.

The goal of this paper is to give a brief overview of existing linguistic applications of anti-unification, discuss a couple of powerful and flexible generalization computation algorithms developed recently, and argue for their potential use in natural language processing tasks.
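
As a rough illustration of the kind of generalization computation discussed above, the sketch below implements Plotkin/Reynolds-style anti-unification for first-order terms. The term encoding (nested tuples) and variable naming are choices made here for illustration, not details from the paper.

```python
# Minimal sketch of first-order anti-unification (Plotkin/Reynolds style).
# Terms are strings (constants) or tuples (functor, arg1, ..., argn).
# Disagreeing subterms are replaced by fresh variables, reusing the same
# variable for repeated disagreement pairs.
from itertools import count

def anti_unify(t1, t2, store=None, fresh=None):
    store = {} if store is None else store      # maps (s, t) pairs to shared variables
    fresh = count() if fresh is None else fresh
    # Same functor and arity: descend into the arguments.
    if isinstance(t1, tuple) and isinstance(t2, tuple) \
            and t1[0] == t2[0] and len(t1) == len(t2):
        return (t1[0],) + tuple(anti_unify(a, b, store, fresh)
                                for a, b in zip(t1[1:], t2[1:]))
    if t1 == t2:                                # identical atoms
        return t1
    # Disagreement: introduce (or reuse) a generalization variable.
    if (t1, t2) not in store:
        store[(t1, t2)] = f"X{next(fresh)}"
    return store[(t1, t2)]

# f(a, g(a)) and f(b, g(b)) generalize to f(X0, g(X0)).
print(anti_unify(("f", "a", ("g", "a")), ("f", "b", ("g", "b"))))
```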

10:30-11:00 Coffee Break
11:00-12:30 Session 26K: Contributed papers
11:00
Propositional Attitude Operators in Homotopy Type Theory

ABSTRACT. See attachment.

11:30
Propositional Forms of Judgemental Interpretations
SPEAKER: Zhaohui Luo

ABSTRACT. In type-theoretical semantics, sentences may often be interpreted as judgements, rather than propositions. When interpreting composite sentences such as those involving negations and conditionals, one may want to turn a judgemental interpretation into a proposition in order to obtain an intended semantics. In this paper, we propose a new negation operator NOT for constructing propositional forms of judgemental interpretations. NOT is introduced axiomatically, with five axiomatised laws to govern its behaviour, and several examples are given to illustrate its use in semantic interpretation. In order to justify NOT, we employ a heterogeneous equality to prove its laws and, since the addition of heterogeneous equality to type theories is consistent, so is our introduction of the NOT operator. Also discussed is how to use the negation operator in event semantics.
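
The abstract does not list the five laws, so the Lean sketch below only illustrates one possible shape such an operator could take; the signature of NOT and the sample law are assumptions made here for illustration, not the authors' definitions.

```lean
-- Hypothetical sketch, not the paper's actual axioms: a judgemental
-- interpretation such as "John is a man", i.e. the judgement `john : Man`,
-- is not itself a proposition, so it cannot be negated directly.
axiom Man  : Type
axiom john : Man

-- One assumed shape for a propositional-form operator: NOT takes a type A
-- and an object a : A and yields a proposition.  The paper axiomatises such
-- an operator with five laws (not reproduced here).
universe u
axiom NOT : (A : Sort u) → A → Prop

-- A plausible (assumed) sample law: on genuine propositions, NOT coincides
-- with ordinary negation.
axiom NOT_prop : ∀ (P : Prop) (p : P), NOT P p ↔ ¬ P

#check NOT Man john   -- the propositional form of the judgement `john : Man`
```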

12:00
Paychecks, Presupposition, and Dependent Types
SPEAKER: Ribeka Tanaka

ABSTRACT. This paper proposes an analysis of paycheck sentences in the framework of Dependent Type Semantics. We account for the anaphora resolution of paycheck pronouns by using dependent function types in dependent type theory. We argue that the presupposition of the possessive NP provides a function that contributes to the paycheck reading. The proposed analysis provides a uniform treatment of paycheck pronouns and standard referential pronouns, without introducing additional formal mechanisms to the system.
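
As a hedged illustration of how a dependent function type can supply a "paycheck function", here is a small Lean sketch; the names (Person, Paycheck, paycheckOf) are invented for illustration, and the paper's actual analysis is carried out within Dependent Type Semantics, which is not reproduced here.

```lean
-- Hypothetical sketch: the possessive "his paycheck" presupposes a function
-- assigning a paycheck to each person; a dependent function type makes this
-- explicit.
axiom Person   : Type
axiom Paycheck : Person → Type              -- Paycheck x : the type of x's paychecks

-- The presupposed "paycheck function": for every person x, an object of type
-- Paycheck x.  A paycheck pronoun is resolved by applying this function to a
-- new antecedent (e.g. Bill) rather than to the original one (John).
axiom paycheckOf : (x : Person) → Paycheck x

axiom john : Person
axiom bill : Person

#check paycheckOf john   -- John's paycheck
#check paycheckOf bill   -- the "paycheck reading": Bill's own paycheck
```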

12:30-14:00 Lunch Break
14:00-15:30 Session 28I: Contributed papers
14:00
Speakers in vats: simulating model-theoretic alignment with distributional semantics

ABSTRACT. One long-standing puzzle in semantics is the ability of speakers to refer successfully in spite of holding different models of the world. This puzzle is famously illustrated by the cup/mug example: if two speakers disagree on whether a specific entity is a cup or a mug (i.e. if their interpretation functions differ), how can they align so that the entity can still be talked about?

Another puzzle, coming to us through lexical and distributional semantics, is that word meaning seems to be infinitely flexible, indeed much more so than the traditional notion of sense would have it. This makes the alignment process between speakers even more unpredictable.

In this talk, I will report on a series of experiments investigating what differences in language use can tell us about the ability of speakers to align at a model-theoretic level. Since speaker-dependent data is extremely hard to obtain, I propose a new methodology to 'spawn' speakers from a reference distributional semantic space, corresponding to different types of variation in language use. I show how and where alignment is disturbed, and give a theoretical account of how such perturbations relate to potentially catastrophic differences in world representations.
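
A toy Python sketch of the general idea of 'spawning' speakers from a reference space, not the talk's actual experimental setup: two speaker-specific spaces are derived by perturbing a shared set of word vectors, and alignment is approximated by nearest-neighbour agreement. The vocabulary, noise model and agreement measure are all assumptions made for illustration.

```python
# 'Spawn' two speakers from a shared reference distributional space by adding
# idiolectal noise, then check how well their neighbour structure still aligns.
import numpy as np

rng = np.random.default_rng(0)
vocab = ["cup", "mug", "bowl", "plate", "table", "chair"]
reference = {w: rng.normal(size=50) for w in vocab}        # toy reference space

def spawn_speaker(space, noise=0.3):
    """Return a speaker-specific space: the reference plus per-word noise."""
    return {w: v + rng.normal(scale=noise, size=v.shape) for w, v in space.items()}

def nearest(space, word):
    """Nearest neighbour of `word` by cosine similarity."""
    target = space[word]
    cos = lambda a, b: a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return max((w for w in space if w != word), key=lambda w: cos(space[w], target))

speaker_a, speaker_b = spawn_speaker(reference), spawn_speaker(reference)
agreement = sum(nearest(speaker_a, w) == nearest(speaker_b, w) for w in vocab) / len(vocab)
print(f"neighbour agreement between the two speakers: {agreement:.2f}")
```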

15:00
Graph Knowledge Representations for SICK

ABSTRACT. Determining semantic relationships between sentences is essential for machines that understand and reason with natural language. Despite the big successes of neural networks, end-to-end neural architectures still fail to reach acceptable performance for textual inference, perhaps due to a lack of adequate datasets for learning. Large datasets such as SICK, SNLI and MultiNLI have recently been constructed, but it is not clear how trustworthy these datasets are. This paper describes work on GKR, an expressive open-source semantic parser that creates graphical representations of sentences used for further semantic processing, e.g. for natural language inference, reasoning and semantic similarity. GKR is inspired by the Abstract Knowledge Representation (AKR), which separates out conceptual and contextual levels of representation that deal, respectively, with the subject matter of a sentence and its existential commitments. We recall work investigating SICK and its problematic annotations, and propose to use GKR as a better representation for the semantics of SICK sentences.
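
As a rough, hypothetical illustration of the conceptual/contextual split described above (not the actual GKR output format), the Python sketch below encodes "The dog did not bark" as a small labelled graph using networkx.

```python
# Illustrative graphical meaning representation separating a conceptual layer
# (predicate-argument structure) from a contextual layer (existential
# commitments), for the sentence "The dog did not bark".
import networkx as nx

g = nx.DiGraph()

# Conceptual level: the subject matter of the sentence.
g.add_edge("bark", "dog", layer="conceptual", role="sem_subj")

# Contextual level: negation introduces a context in which "bark" is
# instantiable, while in the top context it is not.
g.add_edge("ctx_top", "ctx_not", layer="contextual", relation="not")
g.add_edge("ctx_not", "bark", layer="contextual", relation="instantiable")
g.add_edge("ctx_top", "bark", layer="contextual", relation="not_instantiable")

for head, dep, attrs in g.edges(data=True):
    print(f"{head:8s} -> {dep:8s}  {attrs}")
```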

15:30-16:00 Coffee Break
16:00-18:00 Session 31K: Contributed talks
16:00
Automated Reasoning from Polarized Parse Trees
SPEAKER: Larry Moss

ABSTRACT. This paper contributes to symbolic inference from text, including naturally occurring text. The idea is to take sentences parsed in a framework like CCG and then run a polarizing algorithm, like the one in Hu and Moss 2018, to determine inferential polarity markings for all the constituents. From this, it is just a small step to an inference engine which is simple to describe and implement, yet surprisingly powerful. We have implemented the basic inference step. This paper reports work in progress and goes into detail on our projected next steps. The overall goal is a working symbolic inference system which covers "in-practice" inference and is also correct and efficient.
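
The basic inference step can be pictured roughly as follows: a word in an upward-marked position may be replaced by a more general one, and a word in a downward-marked position by a more specific one, according to a background ordering. The Python sketch below only illustrates that idea, with made-up polarity marks and a made-up ordering; it is not the authors' implementation.

```python
# Monotonicity-based inference sketch over polarity-marked words.
ORDER = {("poodle", "dog"), ("dog", "animal"), ("barks", "makes_noise")}

def leq(a, b):
    """a <= b in the reflexive-transitive closure of ORDER (tiny KB, brute force)."""
    if a == b:
        return True
    return any(x == a and leq(y, b) for x, y in ORDER)

def one_step_inferences(polarized_sentence):
    """All sentences obtained by a single monotonicity replacement."""
    results = set()
    for i, (word, pol) in enumerate(polarized_sentence):
        for x, y in ORDER:
            if pol == "up" and leq(word, y) and word != y:
                new = y                    # upward position: generalize
            elif pol == "down" and leq(x, word) and word != x:
                new = x                    # downward position: specialize
            else:
                continue
            out = [w for w, _ in polarized_sentence]
            out[i] = new
            results.add(" ".join(out))
    return results

# "every dog barks": 'dog' sits in a downward position, 'barks' in an upward one
# (polarity marks assumed here for illustration).
sentence = [("every", "up"), ("dog", "down"), ("barks", "up")]
print(one_step_inferences(sentence))   # e.g. "every poodle barks", "every dog makes_noise"
```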

16:30
Do different syntactic trees yield different logical readings? Some remarks on head variables in typed lambda calculus
SPEAKER: Davide Catta

ABSTRACT. A natural question in categorial grammar is the relation between a syntactic analysis s and the logical form, i.e. the logical formula obtained from s once it is provided with semantic lambda terms. More precisely, do different syntactic analyses, fed with equal semantic terms, lead to equal logical forms? We shall show that when this question is formulated too simply, the answer is "no", while with some constraints on the semantic lambda terms the answer is "yes".
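
A standard instance of the "no" case is quantifier scope: the same semantic lambda terms, combined along two different analyses of "every man loves a woman", yield two different logical forms. The Python sketch below illustrates this general point with lambdas standing in for semantic terms; the example is not taken from the paper.

```python
# Two syntactic analyses, one set of semantic terms, two logical forms.
every = lambda restr: lambda scope: f"∀x.({restr('x')} → {scope('x')})"
a     = lambda restr: lambda scope: f"∃y.({restr('y')} ∧ {scope('y')})"
man, woman = (lambda x: f"man({x})"), (lambda y: f"woman({y})")
loves = lambda subj, obj: f"loves({subj},{obj})"

# Analysis 1: the subject quantifier takes widest scope.
reading1 = every(man)(lambda x: a(woman)(lambda y: loves(x, y)))
# Analysis 2: the object quantifier takes widest scope.
reading2 = a(woman)(lambda y: every(man)(lambda x: loves(x, y)))

print(reading1)   # ∀x.(man(x) → ∃y.(woman(y) ∧ loves(x,y)))
print(reading2)   # ∃y.(woman(y) ∧ ∀x.(man(x) → loves(x,y)))
```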

17:00
Automatic test suite generation for PMCFG grammars

ABSTRACT. We present a method for finding errors in formalized natural language grammars, by automatically and systematically generating test cases that are intended to be judged by a human oracle. The method works on a per-construction basis; given a construction from the grammar, it generates a finite but complete set of test sentences (typically tens or hundreds), where that construction is used in all possible ways. Our method is an alternative to using a corpus or a treebank, where no such completeness guarantees can be made. The method is language-independent and is implemented for the grammar formalism PMCFG, but also works for weaker grammar formalisms. We evaluate the method on a number of different grammars for different natural languages, with sizes ranging from toy examples to real-world grammars.
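
As a much-simplified illustration of per-construction test generation (a toy acyclic context-free grammar in Python rather than PMCFG, with made-up categories and words), the sketch below enumerates every sentence in which a chosen construction is realized.

```python
# Exhaustively realize one construction in all possible ways within a toy grammar.
from itertools import product

GRAMMAR = {
    "NP": [["the", "N"]],
    "N":  [["dog"], ["cat"]],
    "V":  [["sees"], ["chases"]],
}

def expand(cat):
    """All phrases of category `cat` (finite, since this toy grammar is acyclic)."""
    if cat not in GRAMMAR:                       # terminal word
        return [[cat]]
    phrases = []
    for rhs in GRAMMAR[cat]:
        for parts in product(*(expand(sym) for sym in rhs)):
            phrases.append([w for part in parts for w in part])
    return phrases

def test_suite(construction):
    """Every sentence using the given construction (an RHS like ['V', 'NP']),
    here plugged in as the predicate of a simple subject."""
    predicates = [sum(parts, []) for parts in product(*(expand(s) for s in construction))]
    return [" ".join(subj + pred) for subj in expand("NP") for pred in predicates]

for sentence in test_suite(["V", "NP"]):         # the transitive-verb construction
    print(sentence)                              # 8 sentences, covering all combinations
```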

17:30
OpenWordNet-PT: Taking Stock

ABSTRACT. This note discusses work on lexical resources for Portuguese centered around OpenWordNet-PT, an open-source wordnet-like resource for Portuguese. We discuss the initial developments, the sister project Nomlex-PT and, above all, the applications developed in the quest to be able to reason with Portuguese texts.

19:45-22:00 Workshops dinner at Balliol College

Drinks reception from 7:45 pm; to be seated by 8:15 pm (pre-booking via the FLoC registration system required; guests welcome).

Location: Balliol College