09:00 | SPEAKER: Alwen Tiu ABSTRACT. We introduce a novel type system for enforcing secure information flow in an imperative language. Our work is motivated by the problem of statically checking potential information leakage in Android applications. To this end, we design a lightweight type system featuring the Android permission model, where permissions are statically assigned to applications and are used to enforce access control within them. We take inspiration from a type system by Banerjee and Naumann (BN) to allow security types to depend on the permissions of the applications. A novel feature of our type system is a typing rule for conditional branching induced by permission testing, which introduces a merging operator on security types, allowing more precise security policies to be enforced. The soundness of our type system is proved with respect to non-interference. In addition, a type inference algorithm is presented for the underlying security type system, which reduces the inference problem to a constraint-solving problem in the lattice of security types. |
09:30 | SPEAKER: Vineet Rajani ABSTRACT. Language-based information flow control (IFC) tracks dependencies within a program using sensitivity labels and prohibits public outputs from depending on secret inputs. In particular, the literature has proposed several type systems for tracking these dependencies. On one extreme, there are fine-grained type systems (like Flow Caml) that track dependence at the level of individual values, by labeling all values individually. On the other extreme are coarse-grained type systems (like SLIO) that track dependence coarsely, by associating a single label with an entire computation context and not labeling all values individually. In this paper, we show that, despite their glaring differences, both these styles are, in fact, equally expressive. To do this, we show a semantics- and type-preserving translation from a coarse-grained type system to a fine-grained one and vice versa. The forward translation isn't surprising, but the backward translation is: it requires a construct to arbitrarily limit the scope of a context label in the coarse-grained type system (e.g., HLIO's "toLabeled" construct), as well as a small change to how labels are interpreted in coarse-grained type systems. Along the way, we show how to build logical relation models of both fine-grained and coarse-grained type systems (with full higher-order state). We use these relations to prove the two type systems and our translations sound. |
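As a toy illustration of the fine-grained style contrasted above (not code from the paper): every value carries its own label, and labels join whenever values are combined. The two-point lattice, the Labeled wrapper, and the output_public sink below are all invented for the sketch; the paper's systems enforce this statically rather than at runtime.

```python
# Toy fine-grained IFC: every value carries a label; labels join on use.
from dataclasses import dataclass

LATTICE = {"public": 0, "secret": 1}  # two-point label lattice

def join(l1: str, l2: str) -> str:
    return l1 if LATTICE[l1] >= LATTICE[l2] else l2

@dataclass(frozen=True)
class Labeled:
    value: int
    label: str

def add(x: Labeled, y: Labeled) -> Labeled:
    # The result depends on both inputs, so its label is the join.
    return Labeled(x.value + y.value, join(x.label, y.label))

def output_public(x: Labeled) -> None:
    # A public sink must reject secret-labeled data (no flow secret -> public).
    if LATTICE[x.label] > LATTICE["public"]:
        raise RuntimeError("information-flow violation")
    print(x.value)

salary = Labeled(70_000, "secret")
bonus = Labeled(5_000, "public")
output_public(bonus)                 # fine: public -> public
# output_public(add(salary, bonus))  # would raise: the sum is secret
```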
10:00 | SPEAKER: Christian Müller ABSTRACT. Our goal is to certify absence of information leaks in multi-agent workflows, such as conference management systems like EasyChair. These workflows can be executed by any number of agents, some of which may form coalitions against the system. Therefore, checking noninterference is a challenging problem. Our paper offers two main contributions: First, a technique is provided to translate noninterference (in the presence of various agent capabilities and declassification conditions) into universally quantified invariants of a new, instrumented workflow program. Second, general techniques are developed for checking and inferring universally quantified inductive invariants for workflow programs. In particular, a large class of workflows is identified where inductiveness of invariants is decidable, as well as a smaller, still useful class of workflows where the weakest inductive universal invariant implying the desired invariant is effectively computable. The new algorithms are implemented and applied to certify noninterference for workflows arising from conference management systems. |
09:00 | SPEAKER: Keisuke Nakano ABSTRACT. B-terms are built from the B combinator alone, defined by B f g x ≡ f (g x), which is well known as a function composition operator. This paper investigates an interesting property of B-terms, namely whether repetitive right applications of a B-term eventually cycle. We discuss conditions under which B-terms do and do not have this property, through a sound and complete equational axiomatization. Specifically, we give examples of B-terms that have the property and show that there are infinitely many B-terms that do not. We also introduce a canonical representation of B-terms that is useful for detecting cycles, or equivalently for proving the property, with an efficient algorithm. |
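The B combinator itself is easy to state concretely. A minimal sketch (mine, not the paper's: the paper studies B-terms syntactically, modulo the equational theory of B, rather than as runtime functions):

```python
# The B combinator as a curried Python function: B f g x = f (g x).
B = lambda f: lambda g: lambda x: f(g(x))

inc = lambda n: n + 1
dbl = lambda n: 2 * n

# B inc dbl 10  ==  inc (dbl 10)  ==  21
assert B(inc)(dbl)(10) == 21

# "Repetitive right application" is the sequence t, t t, (t t) t, ...;
# the paper asks for which B-terms this sequence eventually cycles
# (up to the axiomatization), e.g. starting from B itself: B, B B, (B B) B, ...
```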
09:30 | ABSTRACT. We consider higher-order recursion schemes as generators of infinite trees. A sort (simple type) is called homogeneous when all arguments of higher order are taken before any arguments of lower order. We prove that every scheme can be converted into an equivalent one (i.e., generating the same tree) that is homogeneous, that is, uses only homogeneous sorts. Then, we prove the same for safe schemes: every safe scheme can be converted into an equivalent safe homogeneous scheme. Furthermore, we compare two definitions of safe schemes: the original definition of Damm, and the modern one. Finally, we prove a lemma which illustrates the usefulness of the homogeneity assumption. The results are known, but we prove them in a novel way: by directly manipulating the schemes under consideration. |
10:00 | ABSTRACT. The infinitary lambda calculi pioneered by Kennaway et al. extend the basic lambda calculus by metric completion to infinite terms and reductions. Depending on the chosen metric, the resulting infinitary calculi exhibit different notions of strictness. To obtain infinitary normalisation and infinitary confluence properties for these calculi, Kennaway et al. extend β-reduction with infinitely many ‘⊥-rules’, which contract meaningless terms directly to ⊥. Three of the resulting Böhm reduction calculi have unique infinitary normal forms corresponding to Böhm-like trees. In this paper we develop a corresponding theory of infinitary lambda calculi based on ideal completion instead of metric completion. We show that each of our calculi conservatively extends the corresponding metric-based calculus. Three of our calculi are infinitary normalising and confluent; their unique infinitary normal forms are exactly the Böhm-like trees of the corresponding metric-based calculi. Our calculi dispense with the infinitely many ⊥-rules of the metric-based calculi. The fully non-strict calculus (111) consists of only β-reduction, while the other two calculi (001 and 101) require two additional rules that precisely state their strictness properties: λx.⊥ → ⊥ and ⊥ M → ⊥. |
09:00 | SPEAKER: Hira Syeda ABSTRACT. Operating system (OS) kernels achieve isolation between user-level processes using multi-level page tables and translation lookaside buffers (TLBs). Controlling the TLB correctly is a fundamental security property --- yet all large-scale formal OS verification projects leave the TLB as an assumption. We present a logic for reasoning about low-level programs in the presence of TLB address translation. We extract invariants and necessary conditions for correct TLB operation that mirror the informal reasoning of OS engineers. Our program logic reduces to a standard logic for user-level reasoning, reduces to side-condition checks for kernel-level reasoning, and can handle typical OS kernel tasks such as context switching and page table manipulations. |
09:30 | SPEAKER: Lukas Humbel ABSTRACT. Modern computing platforms are inherently complex and diverse: a heterogeneous collection of cores, interconnects, programmable memory translation units, and devices means that there is no single physical address space, and each core or DMA device may see other devices at different physical addresses. This is a problem because correct operation of system software relies on correct configuration of these interconnects, and current operating systems (and associated formal specifications) make assumptions about global physical addresses which do not hold. We present a formal model in Isabelle/HOL to express this complex addressing hardware that captures the intricacies of different real platforms or Systems-on-Chip (SoCs), and demonstrate its expressivity by showing, as an example, the impossibility of correctly configuring a MIPS R4600 TLB as specified in its documentation. Such a model not only facilitates proofs about hardware, but is used to generate correct code at compile time and device configuration at runtime in the Barrelfish research OS. |
10:00 | SPEAKER: João Paulo Pizani Flor ABSTRACT. Modelling digital circuits has been a fertile ground for functional programming and theorem proving. There have been numerous fruitful efforts to describe, simulate, and verify circuits using functional languages, and such languages have also often been used as hosts for embedded hardware DSLs. When dependently-typed languages are used as the host, we can have the hardware models simulated (with an executable specification) and verified in the same language, and properties can be proven not only of specific circuits, but of circuit generators describing entire (infinite) families of circuits. In this paper we describe a deep embedding of a typed DSL for hardware description and verification, called λπ-Ware, using the dependently-typed language Agda as host. We define several common recursion schemes with the primitives of this DSL, in both combinational and sequential versions, and show how known circuits can be expressed in terms of these recursion patterns. Finally, we make precise a notion of equivalence between circuits with different levels of parallelism, and prove the core equivalence property between the parallel and sequential versions of our vector iteration primitive. Circuits defined using the common recursion schemes can then easily be implemented in more or less parallel architectures with the guarantee that their behaviour will be equivalent up to timing. |
09:00 | SPEAKER: Ralf Rothenberger ABSTRACT. We study non-uniform random k-SAT on n variables with an arbitrary probability distribution p on the variable occurrences. The number t = t(n) of randomly drawn clauses at which random formulas go from asymptotically almost surely (a.a.s.) satisfiable to a.a.s. unsatisfiable is called the satisfiability threshold. Such a threshold is called sharp if it approaches a step function as n increases. We show that a threshold t(n) for random k-SAT with an ensemble (p_n)_{n∈N} of arbitrary probability distributions on the variable occurrences is sharp if ||p||_2 = O_n(t^{-2/k}) and ||p||_∞ = o_n(t^{-k/(2k-1)} · log^{-(k-1)/(2k-1)}(t)). This result generalizes Friedgut's sharpness result from uniform to non-uniform random k-SAT and implies sharpness for thresholds of a wide range of random k-SAT models with heterogeneous probability distributions, for example models where the variable probabilities follow a power-law distribution. |
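For intuition only, a quick numeric check of the two norm conditions; the power-law exponent beta, the width k, and the candidate threshold t below are arbitrary illustrative values, not from the paper:

```python
# Numeric sanity check (not from the paper): compute ||p||_2 and ||p||_inf
# for a hypothetical power-law variable distribution and compare against
# the two bounding expressions from the sharpness condition.
import math

def power_law(n: int, beta: float) -> list[float]:
    w = [i ** (-beta) for i in range(1, n + 1)]
    s = sum(w)
    return [x / s for x in w]

n, k, beta = 10_000, 3, 2.5
t = 4.27 * n                      # illustrative clause count near a threshold
p = power_law(n, beta)

l2 = math.sqrt(sum(x * x for x in p))
linf = max(p)

# Conditions: ||p||_2 = O(t^{-2/k}),
#             ||p||_inf = o(t^{-k/(2k-1)} * log^{-(k-1)/(2k-1)}(t))
print(l2, t ** (-2 / k))
print(linf, t ** (-k / (2 * k - 1)) * math.log(t) ** (-(k - 1) / (2 * k - 1)))
```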
09:30 | SPEAKER: Stephan Gocht ABSTRACT. We initiate a proof complexity theoretic study of subsystems of cutting planes (CP) modelling proof search in conflict-driven pseudo-Boolean (PB) solvers. These subsystems combine restrictions such as requiring that addition of constraints always cancels a variable, and/or using so-called saturation instead of division. It is known that on CNF inputs cutting planes with cancelling addition and saturation is essentially just resolution. We show that even if general addition is allowed, this proof system is still polynomially simulated by resolution with respect to proof size, as long as coefficients are polynomially bounded. As a further way of delineating the proof power of subsystems of CP, we propose to study a number of easy (but tricky) instances of problems in NP. Most of the formulas we consider have short and simple tree-like proofs in general CP, but the restricted subsystems seem to reveal a much more varied landscape. Although we are not able to formally establish separations between different subsystems of CP (which would require major technical breakthroughs in proof complexity), these formulas appear to be good candidates for obtaining such separations. We believe that a closer study of these benchmarks is a promising approach for shedding more light on the reasoning power of pseudo-Boolean solvers. |
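The two rules named above are easy to state concretely. A minimal sketch of the standard saturation and division rules on a pseudo-Boolean constraint sum(a_i * l_i) >= A with positive coefficients over literals (my encoding, not the paper's):

```python
# Saturation clips each coefficient at the degree A; division divides
# all coefficients and the degree by d, rounding up.  Both are sound for
# PB constraints in normalized form (positive coefficients over literals).
import math

def saturate(coeffs: dict[str, int], degree: int):
    return {lit: min(a, degree) for lit, a in coeffs.items()}, degree

def divide(coeffs: dict[str, int], degree: int, d: int):
    return ({lit: math.ceil(a / d) for lit, a in coeffs.items()},
            math.ceil(degree / d))

# 3x + 3y + z >= 3: saturation leaves this one unchanged, while dividing
# by 3 yields the clause-like constraint x + y + z >= 1.
c, A = {"x": 3, "y": 3, "z": 1}, 3
print(saturate(c, A))   # ({'x': 3, 'y': 3, 'z': 1}, 3)
print(divide(c, A, 3))  # ({'x': 1, 'y': 1, 'z': 1}, 1)
```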
10:00 | SPEAKER: Jacobo Toran ABSTRACT. We characterize several complexity measures for the resolution of Tseitin formulas in terms of a two-person cop-robber game. Our game is a slight variation of the one Seymour and Thomas used in order to characterize the tree-width parameter. For any undirected graph, by counting the number of cops needed in our game in order to catch a robber in it, we are able to exactly characterize the width, variable space and depth measures for the resolution of the Tseitin formula corresponding to that graph. We also give an exact game characterization of resolution variable space for any formula. We show that our game can be played in a monotone way. This implies that the corresponding resolution measures on Tseitin formulas correspond to those under the restriction of regular resolution. Using our characterizations we improve the existing complexity bounds for Tseitin formulas, showing that resolution width, depth and variable space coincide up to a logarithmic factor, and that variable space is bounded by the clause space times a logarithmic factor. |
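The Tseitin formula of a graph, the object all of these measures are taken on, can be generated directly: one variable per edge, and one parity constraint per vertex. A sketch of the standard construction (not the paper's code):

```python
# Tseitin CNF of a graph: for each vertex, the XOR of its incident edge
# variables must equal its charge; the CNF forbids every wrong-parity
# assignment, giving 2^(d-1) clauses per vertex of degree d.
from itertools import combinations

def tseitin_cnf(edges: list[tuple[int, int]], charge: dict[int, int]):
    var = {e: i + 1 for i, e in enumerate(edges)}      # DIMACS-style vars
    incident: dict[int, list[int]] = {}
    for e in edges:
        for v in e:
            incident.setdefault(v, []).append(var[e])
    cnf = []
    for v, vs in incident.items():
        # XOR(vs) = charge[v]: forbid every assignment of the wrong parity.
        for r in range(len(vs) + 1):
            if r % 2 == (1 - charge.get(v, 0)) % 2:
                for neg in combinations(vs, r):
                    cnf.append([-x if x in neg else x for x in vs])
    return cnf

# Triangle with a single odd charge: unsatisfiable, since charges sum to 1.
edges = [(0, 1), (1, 2), (0, 2)]
print(tseitin_cnf(edges, {0: 1, 1: 0, 2: 0}))
```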
10:00 | ABSTRACT. In a recent paper, Herbelin developed a calculus dPAω in which constructive proofs for the axioms of countable and dependent choice could be derived via the encoding of a proof of countable universal quantification as a stream of its components. However, the property of normalization (and therefore the one of soundness) was only conjectured. The difficulty for the proof of normalization is due to the simultaneous presence of dependent types (for the constructive part of the choice), of control operators (for classical logic), of coinductive objects (to encode functions of type N → A into streams (a₀, a₁, ...)) and of lazy evaluation with sharing (for these coinductive objects). Building on previous works, we introduce in this paper a variant of dPAω presented as a sequent calculus. On the one hand, we take advantage of a variant of Krivine classical realizability that we developed to prove the normalization of classical call-by-need. On the other hand, we benefit from dL, a classical sequent calculus with dependent types in which type safety is ensured using delimited continuations together with a syntactic restriction. By combining the techniques developed in these papers, we manage to define a realizability interpretation à la Krivine of our calculus that allows us to prove normalization and soundness. |
10:20 | SPEAKER: Anders Mörtberg ABSTRACT. Cubical type theory provides a constructive justification to certain aspects of homotopy type theory such as Voevodsky's univalence axiom. This makes many extensionality principles, like function and propositional extensionality, directly provable in the theory. This paper describes a constructive semantics, expressed in a presheaf topos with suitable structure inspired by cubical sets, of some higher inductive types. It also extends cubical type theory by a syntax for the higher inductive types of spheres, torus, suspensions, truncations and pushouts. All of these types are justified by the semantics and have judgmental computation rules for all constructors, including the higher dimensional ones, and the universes are closed under these type formers. |
10:05 | SPEAKER: Andreas Eggers ABSTRACT. One does not need the gift of clairvoyance to predict that in the near future autonomously driving cars will occupy a significant place in our everyday life. In fact, in designated and even public test-drive areas it is reality even now. Autonomous driving comes with the ambition to make road traffic safer, more efficient, more economic, and more comfortable -- and thus to "make the world a bit better". Recent accidents with automated cars resulting in severe injuries and death, however, spark a debate on the safety validation of autonomous driving in general. The biggest challenge for autonomous driving to become a reality is thus most likely not the actual development of intelligent vehicles themselves but their rigorous validation that would justify the necessary level of confidence. It is common sense that classical test approaches are far from feasible in this emerging area of autonomous driving, as these would induce billions of kilometers of real-world driving in each release cycle. To cope with the tremendous complexity of traffic situations that a self-driving vehicle must deal with -- without doing any harm to any other traffic participants -- a promising approach to safety validation is virtual simulation, i.e., driving the huge amount of test kilometers in a virtual but realistic simulation environment. A particular interest here is in the validation of the behavior of the autonomous car in rather critical traffic scenarios. In this position paper, we concentrate on one important aspect of virtual safety validation of autonomous driving, namely the specification and concretization of critical traffic scenarios. This aspect gives rise to complex and challenging constraint systems to be solved, to which we believe the SC^2 community may give essential contributions through its rich variety of diverse methods and techniques. |
11:00 | SPEAKER: Naoki Nishida ABSTRACT. A narrowing tree for a constructor term rewriting system and a pair of terms is a finite representation for the space of all possible innermost-narrowing derivations that start with the pair and end with non-narrowable terms. Narrowing trees have grammar representations that can be considered regular tree grammars. Innermost narrowing is a counterpart of constructor-based rewriting, and thus, narrowing trees can be used in analyzing constructor-based rewriting to normal forms. In this paper, using grammar representations, we extend narrowing trees to syntactically deterministic conditional term rewriting systems that are constructor systems. We show that narrowing trees are useful to prove two properties of a normal conditional term rewriting system: one is infeasibility of conditional critical pairs and the other is quasi-reducibility. |
11:30 | SPEAKER: Sarah Winkler ABSTRACT. We propose an abstract completion procedure for logically constrained term rewrite systems (LCTRSs). This procedure can be instantiated to both standard Knuth-Bendix completion and ordered completion for LCTRSs, and we present a succinct and uniform correctness proof. A prototype implementation illustrates the viability of the new completion procedure. |
12:00 | ABSTRACT. We consider rewriting of a regular language with a left-linear term rewriting system. We show two completeness theorems on equational tree automata completion. The first one shows that, if the set of reachable terms is regular, then completion can compute it. This was known to be true for some term rewriting system classes preserving regularity, but was still an open question in the general case. The proof is not constructive because it depends on the regularity of the set of reachable terms, which is undecidable. The second theorem states that, if there exists a regular over-approximation of the set of reachable terms then completion can compute it (or safely under-approximate it). To carry out those proofs we generalize and improve two results of completion: the Termination and the Upper-Bound theorems. Those theoretical results provide an algorithmic way to safely explore regular approximations with completion. This has been implemented in Timbuk and used to verify safety properties, automatically and efficiently, on first-order and higher-order functional programs. |
11:00 | SPEAKER: Joshua Schneider ABSTRACT. Bounded natural functors (BNFs) provide a modular framework for the construction of (co)datatypes in higher-order logic. Their functorial operations, the mapper and relator, are restricted to a subset of the parameters, namely those where recursion can take place. For certain applications, such as free theorems, data refinement, quotients, and generalised rewriting, it is desirable that these operations do not ignore the other parameters. In this paper, we generalise BNFs such that the mapper and relator act on both covariant and contravariant parameters. Our generalisation, BNFCC, is closed under functor composition and least and greatest fixpoints. In particular, every (co)datatype is a BNFCC. We prove that subtypes inherit the BNFCC structure under conditions that generalise those for the BNF case. We also identify sufficient conditions under which a BNFCC preserves quotients. Our development is formalised abstractly in Isabelle/HOL in such a way that it integrates seamlessly with the existing parametricity infrastructure. |
11:30 | SPEAKER: William Farmer ABSTRACT. We are interested in algorithms that manipulate mathematical expressions in mathematically meaningful ways. Expressions are syntactic, but most logics do not allow one to discuss syntax. CTT_qe is a version of Church's type theory that includes quotation and evaluation operators, akin to quote and eval in the Lisp programming language. Since the HOL logic is also a version of Church's type theory, we decided to add quotation and evaluation to HOL Light to demonstrate the implementability of CTT_qe and the benefits of having quotation and evaluation in a proof assistant. The resulting system is called HOL Light QE. Here we document the design of HOL Light QE and the challenges that needed to be overcome. The resulting implementation is freely available. |
12:00 | ABSTRACT. Tactics are often featured in proof assistants to simplify the interactive development of proofs by allowing domain-specific automation. Moreover, tactics are also helpful to check the output of automatic theorem provers because the provers often omit details. We use meta-programming to define a tactic language for the Dedukti logical framework which can be used both for checking certificates produced by automatic provers and for developing Dedukti proofs interactively. More precisely, we propose a dependently-typed tactic language for first-order logic in Meta Dedukti and an untyped certificate language built on top of the tactic language. We show the expressivity of these languages on three examples: a decision procedure for minimal logic, a transfer tactic, and a certifying resolution certificate checker. |
11:00 | SPEAKER: Antti Kuusisto ABSTRACT. It was recently shown by Van den Broeck et al. that the symmetric weighted first-order model counting problem (WFOMC) for sentences of two-variable logic FO2 is in polynomial time, while it is #P_1-complete for FO3 sentences. We extend the result for FO2 in two independent directions: to sentences of the form phi ∧ ∀x∃^{=1}y psi with phi and psi in FO2, and to sentences formulated in the uniform one-dimensional fragment of FO, a recently introduced extension of two-variable logic with the capacity to deal with relation symbols of all arities. Note that the former generalizes the extension of FO2 with a functional relation symbol. We also identify a complete classification of first-order prefix classes according to whether WFOMC is in polynomial time or #P_1-complete. |
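Symmetric WFOMC itself is simple to state by brute force, which is exponential in the domain size; the paper's point is that FO2 admits polynomial-time algorithms instead. A sketch for one hypothetical FO2 sentence over a single binary predicate R (the weights and the sentence are my illustrative choices):

```python
# Brute-force symmetric WFOMC: sum, over all interpretations of R on a
# domain of size n, the product of per-atom weights (w for true atoms,
# wbar for false ones), restricted to models of the sentence.
from itertools import product

def wfomc_symmetric_R(n: int, w: float, wbar: float) -> float:
    atoms = [(a, b) for a in range(n) for b in range(n)]
    total = 0.0
    for bits in product([False, True], repeat=len(atoms)):
        R = dict(zip(atoms, bits))
        # Example sentence: forall x y. R(x,y) -> R(y,x)   (symmetry of R)
        if all(not R[(a, b)] or R[(b, a)] for a in range(n) for b in range(n)):
            weight = 1.0
            for t in bits:
                weight *= w if t else wbar
            total += weight
    return total

print(wfomc_symmetric_R(2, 2.0, 1.0))  # tiny domain, 2^(n^2) interpretations
```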
11:20 | SPEAKER: Antoine Mottet ABSTRACT. The logic MMSNP is a restricted fragment of existential second-order logic which allows one to express many interesting queries in graph theory and finite model theory. The logic was introduced by Feder and Vardi, who showed that every MMSNP sentence is computationally equivalent to a finite-domain constraint satisfaction problem (CSP); the involved probabilistic reductions were derandomized by Kun using explicit constructions of expander structures. We present a new proof of the reduction to finite-domain CSPs which does not rely on the results of Kun. Our approach uses the fact that every MMSNP sentence describes a finite union of CSPs for countably infinite ω-categorical structures; moreover, by a recent result of Hubicka and Nesetril, these structures can be expanded to homogeneous structures with finite relational signature and the Ramsey property. This allows us to use the universal-algebraic approach to study the computational complexity of MMSNP. In particular, we verify the more general Bodirsky-Pinsker dichotomy conjecture for CSPs in MMSNP. |
11:40 | SPEAKER: Emanuel Kieronski ABSTRACT. We consider an extension of the unary negation fragment of first-order logic in which arbitrarily many binary symbols may be required to be interpreted as equivalence relations. We show that this extension has the finite model property. More specifically, we show that every satisfiable formula has a model of at most doubly exponential size. We argue that the satisfiability (= finite satisfiability) problem for this logic is 2ExpTime-complete. We also transfer our results to a restricted variant of the guarded negation fragment with equivalence relations. |
12:00 | SPEAKER: Jacek Krzaczkowski ABSTRACT. Satisfiability of Boolean circuits is NP-complete in general but becomes polynomial time when restricted either to monotone gates or linear gates. We go outside the Boolean realm and consider circuits built of any fixed set of gates on an arbitrarily large finite domain. From the complexity point of view this is connected with solving equations over finite algebras. We want to characterize finite algebras A with a polynomial time algorithm deciding if an equation over A has a solution. We are also looking for polynomial time algorithms deciding if two circuits over a finite algebra compute the same function. Although we have not managed to solve these problems in the most general setting, we have obtained such a characterization (in terms of nice structural algebraic properties) for a very broad class of algebras from congruence modular varieties, including groups, rings, lattices and their extensions. |
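The baseline against which such characterizations are measured is brute-force search over all assignments, exponential in the number of variables. A sketch (the algebra Z3 and the equation are my illustrative choices, not from the paper):

```python
# Brute-force equation solving over a finite algebra given by its
# operations -- the naive baseline the paper's polynomial-time
# characterizations are meant to beat for well-behaved algebras.
from itertools import product

# Example algebra: Z3 with addition (a group, hence congruence modular).
DOM = range(3)
add = lambda a, b: (a + b) % 3

def solve(lhs, rhs, nvars: int):
    """Find an assignment with lhs(assign) == rhs(assign), or None."""
    for assign in product(DOM, repeat=nvars):
        if lhs(assign) == rhs(assign):
            return assign
    return None

# Equation x + x = y + 1 over Z3 (variables assign[0]=x, assign[1]=y):
print(solve(lambda a: add(a[0], a[0]), lambda a: add(a[1], 1), 2))
```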
12:20 | SPEAKER: Grzegorz Głuch ABSTRACT. For a given set of queries (which are formulae in some query language) {Q_1, Q_2, ..., Q_k} and for another query Q, we say that {Q_1, Q_2, ..., Q_k} determines Q if -- informally speaking -- for every database D, the information contained in the views Q_1(D), Q_2(D), ..., Q_k(D) is sufficient to compute Q(D). The Query Determinacy Problem is the problem of deciding, for given {Q_1, Q_2, ..., Q_k} and Q, whether {Q_1, Q_2, ..., Q_k} determines Q. Many versions of this problem, for different query languages, have been studied in database theory. In this paper we solve a problem stated in [CGLV02], showing that the Query Determinacy Problem is undecidable for Regular Path Queries -- the paradigmatic query language of graph databases. --------- [CGLV02] D. Calvanese, G. De Giacomo, M. Lenzerini, and M. Y. Vardi. Lossless regular views. In Proc. of the 21st ACM SIGMOD-SIGACT-SIGART Symposium on Principles of Database Systems (PODS 2002). |
11:00 | SPEAKER: Sean Moss ABSTRACT. We present two Dialectica-like constructions for models of intensional Martin-Löf type theory based on Gödel's original Dialectica interpretation and the Diller-Nahm variant, bringing dependent types to categorical proof theory. We set both constructions within a logical predicates style theory for display map categories where we show that 'quasifibred' versions of dependent products and universes suffice to construct their standard counterparts. To support the logic required for dependent products in the first construction, we propose a new semantic notion of finite sum for dependent types, generalizing finitely-complete extensive categories. The second avoids extensivity assumptions using biproducts in a Kleisli category for a fibred additive monad. |
11:20 | SPEAKER: James Laird ABSTRACT. We describe a dependent type theory, and a denotational model for it, that incorporates both intensional and extensional semantic universes. In the former, terms and types are interpreted as strategies on certain graph games, which are concrete data structures of a generalized form, and in the latter as stable functions on event domains. The concrete data structures themselves form an event domain, with which we may interpret an (extensional) universe type of (intensional) types. A dependent game corresponds to a stable function into this domain; we use its trace to define dependent product and sum constructions as it captures precisely how unfolding moves combine with the dependency to shape the possible interaction in the game. Since each strategy computes a stable function on CDS states, we can lift typing judgements from the intensional to the extensional setting, giving an expressive type theory with recursively defined type families and type operators. We define an operational semantics for intensional terms, giving a functional programming language based on our type theory, and prove that our semantics for it is computationally adequate. By extending it with a simple non-local control operator on intensional terms, we can precisely characterize behaviour in the intensional model. We demonstrate this by proving full abstraction and full completeness results. |
11:40 | SPEAKER: Hiroshi Unno ABSTRACT. Existing approaches to temporal verification of higher-order functional programs have either sacrificed compositionality in favor of achieving automation or vice versa. In this paper we present a dependent-refinement type and effect system to ensure that well-typed programs satisfy given temporal properties, and also give an algorithmic approach, based on deductive reasoning over a fixpoint logic, to typing in this system. The first contribution is a novel type-and-effect system capable of expressing dependent temporal effects, which are fixpoint logic predicates on event sequences and program values, extending beyond the (non-dependent) temporal effects used in recent proposals. Temporal effects facilitate compositional reasoning whereby the temporal behavior of program parts is summarized as effects and combined to form that of the larger parts. As a second contribution, we show that type checking and typability for the type system can be reduced to solving first-order fixpoint logic constraints. Finally, we present a novel deductive system for solving such constraints. The deductive system consists of rules for reasoning via invariants and well-founded relations, and is able to reduce formulas containing both least and greatest fixpoints to predicate-based reasoning. |
12:00 | SPEAKER: Vincent Rahli ABSTRACT. Church-Turing computability was extended by Brouwer, who considered non-lawlike computability in the form of free choice sequences. Those are essentially unbounded sequences whose elements are chosen freely, i.e., not subject to any law. In this work we develop a new type theory BITT, an extension of the type theory of the Nuprl proof assistant, that embeds the notion of choice sequences. Supporting the evolving, non-deterministic nature of these objects required major modifications to the underlying type theory. Though the construction of a choice sequence is non-deterministic, once certain choices have been made, they must remain consistent. To ensure this, BITT uses the underlying library as state and stores choices as they are created. Another salient feature of BITT is that it uses a Beth-like semantics to account for the dynamic nature of choice sequences. We formally define BITT and use it to interpret and validate essential axioms governing choice sequences. These results provide a foundation for a fully intuitionistic version of Nuprl. |
12:20 | ABSTRACT. We present Quantitative Type Theory, a Type Theory that records usage information for each variable in a judgement, based on a previous system by McBride. The usage information is used to give a realizability semantics using a variant of Linear Combinatory Algebras, refining the usual realizability semantics of Type Theory by accurately tracking resource behaviour. We define the semantics in terms of Quantitative Categories with Families, a novel extension of Categories with Families for modelling resource sensitive type theories. |
11:00 | SPEAKER: Pascal Fontaine ABSTRACT. We report on a prototypical tool for Satisfiability Modulo Theory solving for quantifier-free formulas in Non-linear Real Arithmetic or, more precisely, real closed fields, which uses a computer algebra system as the main component. This is complemented with two heuristic techniques, also stemming from computer algebra, viz. interval constraint propagation and subtropical satisfiability. Our key idea is to make optimal use of existing knowledge and work in the symbolic computation community, reusing available methods and implementations to the greatest possible extent. Experimental results show that our approach is surprisingly efficient in practice. |
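Of the two heuristics named, interval constraint propagation is the easier to sketch: evaluate a polynomial over a box using interval arithmetic, and prune the box if the result excludes zero. A minimal sketch, not the tool's code (the polynomial and box are invented):

```python
# Interval-arithmetic core of interval constraint propagation.
def iadd(a, b):
    return (a[0] + b[0], a[1] + b[1])

def imul(a, b):
    ps = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return (min(ps), max(ps))

def eval_poly(x):
    # p(x) = x^2 - 2, evaluated over the interval x
    return iadd(imul(x, x), (-2, -2))

# If p's interval evaluation over a box excludes 0, the constraint p = 0
# has no solution there and the box can be pruned from the search.
box = (1.5, 2.0)
lo, hi = eval_poly(box)
print((lo, hi), "prunable" if lo > 0 or hi < 0 else "keep")  # (0.25, 2.0)
```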
11:25 | SPEAKER: John Abbott ABSTRACT. CoCoALib is a C++ software library offering operations on polynomials, ideals of polynomials, and related objects. The principal developers of CoCoALib are members of the SC-square project. We give an overview of the latest developments of the library, especially those relating to the SC-square project. The CoCoA software suite also includes the programmable, interactive system CoCoA-5. Most of the operations in CoCoALib are also accessible via CoCoA-5. The programmability of CoCoA-5, together with its interactivity, helps in fast prototyping and testing conjectures. |
11:50 | ABSTRACT. The recognition that Symbolic Computation tools could benefit from techniques from the world of Satisfiability Checking was a primary motivation for the founding of the SC2 community. These benefits could be further demonstrated by the existence of "SMT-like" queries in legacy computer algebra systems; that is, computations which seek to decide satisfiability or identify a satisfying witness. The Maple CAS has been under continuous development since the 1980s and its core symbolic routines incorporate many heuristics, some of which are "SMT-like". We describe ongoing work to compose an inventory of such "SMT-like" queries extracted from the built-in Maple library, most of which were added long before the inclusion in Maple of explicit software links to SAT/SMT tools. Some of these queries are expressible in the SMT-LIB format using existing logics, and it is hoped that an analysis of those that cannot be so expressed could help inform future development of the SMT-LIB standard. |
12:15 | ABSTRACT. Combining methods from satisfiability checking with methods from symbolic computation promises to solve challenging problems in various areas of theory and application. We look at the basically equivalent problem of proving statements directly in a non-clausal setting, when additional information on the underlying domain is available in the form of specific properties and algorithms. We demonstrate on a concrete example several heuristic techniques for the automation of natural-style proving of statements from elementary analysis. The purpose of this work in progress is to generate proofs similar to those produced by humans, by combining automated reasoning methods with techniques from computer algebra. Our techniques include: the S-decomposition method for formulae with alternating quantifiers, quantifier elimination by cylindrical algebraic decomposition, analysis of the behaviour of terms at zero, bounding the epsilon-bounds, rewriting of expressions involving absolute value, algebraic manipulations, and identification of equal terms under unknown functions. These techniques are being implemented in the Theorema system and are able to construct automatically natural-style proofs for numerous examples including: convergence of sequences, limits and continuity of functions, uniform continuity, and others. |
12:00 | SPEAKER: Samuel Yeom ABSTRACT. Machine learning algorithms, when applied to sensitive data, pose a distinct threat to privacy. A growing body of prior work demonstrates that models produced by these algorithms may leak specific private information in the training data to an attacker, either through the models' structure or their observable behavior. However, the underlying cause of this privacy risk is not well understood beyond a handful of anecdotal accounts that suggest overfitting and influence might play a role. This paper examines the effect that overfitting and influence have on the ability of an attacker to learn information about the training data from machine learning models, either through training set membership inference or attribute inference attacks. Using both formal and empirical analyses, we illustrate a clear relationship between these factors and the privacy risk that arises in several popular machine learning algorithms. We find that overfitting is sufficient to allow an attacker to perform membership inference and, when the target attribute meets certain conditions about its influence, attribute inference attacks. Interestingly, our formal analysis also shows that overfitting is not necessary for these attacks and begins to shed light on what other factors may be in play. Finally, we explore the connection between membership inference and attribute inference, showing that there are deep connections between the two that lead to effective new attacks. |
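A common concrete form of membership inference, consistent with the overfitting intuition above, thresholds the model's loss on a record: overfit models show unusually low loss on training members. A minimal sketch, not claimed to be the paper's exact attack; the confidences and threshold below are invented:

```python
# Loss-threshold membership inference: flag a record as a training member
# if the model's loss on it falls below a threshold tau.
import math

def cross_entropy(p_true_class: float) -> float:
    return -math.log(max(p_true_class, 1e-12))

def infer_membership(p_true_class: float, tau: float) -> bool:
    return cross_entropy(p_true_class) < tau

# An overfit model: near-certain on training points, hesitant on fresh ones.
train_conf, test_conf = 0.999, 0.62
tau = 0.05
print(infer_membership(train_conf, tau))  # True  -> guessed "member"
print(infer_membership(test_conf, tau))   # False -> guessed "non-member"
```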
14:00 | SPEAKER: Matthew England ABSTRACT. We consider problems originating in economics that may be solved automatically using mathematical software. We present and make freely available a new benchmark set of such problems. The problems have been shown to fall within the framework of non-linear real arithmetic, and so are in theory soluble via Quantifier Elimination (QE) technology as usually implemented in computer algebra systems. Further, they all can be phrased in prenex normal form with only existential quantifiers, and so are also admissible to those Satisfiability Modulo Theories (SMT) solvers that support the QF_NRA logic. There is a great body of work considering QE and SMT applications in science and engineering, but we demonstrate here that there is potential for this technology also in the social sciences. |
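For a sense of what such an existential QF_NRA query looks like in practice, here is a toy problem posed through Z3's Python API (pip install z3-solver); the constraint is invented for illustration and is not one of the benchmark problems:

```python
# One existential non-linear real arithmetic query: does the region
# p > 0, q > 0, p*q = 10, p + q <= 7 contain a point?
from z3 import Reals, Solver, sat

p, q = Reals('p q')                  # say, a price and a quantity
s = Solver()
s.add(p > 0, q > 0,                  # economically meaningful region
      p * q == 10,                   # the non-linear (QF_NRA) constraint
      p + q <= 7)
if s.check() == sat:
    print(s.model())                 # a satisfying witness, e.g. p=2, q=5
```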
14:30 | SPEAKER: Alexander Cowen-Rivers ABSTRACT. Cylindrical Algebraic Decomposition (CAD) is an important tool within computational real algebraic geometry, capable of solving many problems for polynomial systems over the reals. It has long been studied by the Symbolic Computation community and has found recent interest in the Satisfiability Checking community. The present report describes a proof-of-concept implementation of an incremental CAD algorithm in Maple, where CADs are built and then refined as additional polynomial constraints are added. The aim is to make CAD suitable for use as a theory solver for SMT tools, which search for solutions by constantly reformulating logical formulae and querying whether a logical solution is admissible. We describe experiments for the proof of concept, which clearly display the computational advantages compared to iterated re-computation. In addition, the project implemented this work under the recently verified Lazard projection scheme (with corresponding Lazard valuation). |
15:00 | SPEAKER: Rebecca Haehn ABSTRACT. The cylindrical algebraic decomposition is a popular method for quantifier elimination for non-linear real algebra. We use it as a theory solver in the context of satisfiability-modulo-theories solving to solve a sequence of related problems consisting of a conjunction of constraints. To improve our technique in practice, we consider different optimizations in the projection phase based on equational constraints. We review the existing ideas and propose how to modify an existing implementation. Though one could expect a significant speed-up for large problems, we discuss why our expectations are somewhat lower, in particular, because satisfiable problems tend not to benefit from the modifications at all. Finally, we evaluate several different implementations and demonstrate a significant improvement due to the usage of equational constraints. |
16:00 | SPEAKER: Lara Schmid ABSTRACT. In random sample voting, only a randomly chosen subset of all eligible voters vote. We propose Alethea, the first random sample voting protocol that satisfies end-to-end verifiability and receipt-freeness. Our protocol makes explicit the distinction between the human voters and their devices. This allows us to make more fine-grained statements about the required capabilities and trust assumptions of each agent than is possible in previous work. Consequently, we can make more precise statements about the adversary models under which our protocol is secure. Alethea is the first formally verified random sample voting protocol. This necessitates defining new security properties, in particular that the sample group is chosen randomly and that the chosen voters remain anonymous. We model the protocol and its properties in a symbolic model amenable to tool support. We verify most of the properties automatically and complete our analysis using traditional hand-written proofs when automatic analysis is infeasible or when we desire a more fine-grained analysis than our symbolic abstractions allow. |
16:30 | SPEAKER: François Dupressoir ABSTRACT. We present a machine-checked security analysis of Belenios, a deployed voting protocol which has already been used in more than 200 elections. Belenios extends Helios with an explicit registration authority to obtain eligibility guarantees. We offer two main results. First, we build upon a recent framework for proving ballot privacy in EasyCrypt. Inspired by our application to Belenios, we adapt and extend the privacy security notions to account for protocols that include a registration phase. Our analysis identifies a trust assumption which is missing in the existing (pen-and-paper) analysis of Belenios: ballot privacy does not hold if the registrar misbehaves, even if the role of the registrar is seemingly to provide eligibility guarantees. Second, we develop a novel framework for proving strong verifiability in EasyCrypt and apply it to Belenios. In the process, we clarify several aspects of the pen-and-paper proof, e.g. how to deal with revote policies. Together, our results yield the first machine-checked analysis of both privacy and verifiability properties for a deployed electronic voting protocol. Last, but not least, we identify several issues regarding the applicability of existing definitions of privacy and verifiability to systems other than Helios. While we show how to adapt the definitions to the particular case of Belenios, our findings indicate a need for the development of less specific security notions for electronic voting protocols with registration authorities. |
16:00 | SPEAKER: Julian Parsert ABSTRACT. Utility functions form an essential part of game theory and theoretical economics. In order to guarantee the existence of these utility functions, sufficient properties are assumed in an axiomatic manner. In this paper we discuss these axioms and a formal proof of the existence of linear utility functions. Furthermore, we consider the von Neumann-Morgenstern Utility Theorem, its importance for game theory and economics, and a formalization thereof. The formalization includes precise definitions of the underlying concepts, including continuity and independence of preferences. We make the dependencies more precise and highlight some consequences for a formalization of game theory. |
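The conclusion of the von Neumann-Morgenstern theorem is that preferences satisfying the axioms are represented by expected utility. A toy sketch of that representation, with invented utilities and lotteries (not from the formalization):

```python
# Expected utility of a lottery: a list of (probability, outcome) pairs
# scored against a utility function u on outcomes.
def expected_utility(lottery: list[tuple[float, str]],
                     u: dict[str, float]) -> float:
    assert abs(sum(p for p, _ in lottery) - 1.0) < 1e-9  # probabilities sum to 1
    return sum(p * u[o] for p, o in lottery)

u = {"win": 1.0, "draw": 0.4, "lose": 0.0}       # hypothetical utilities
a = [(0.5, "win"), (0.5, "lose")]                # a fair gamble
b = [(1.0, "draw")]                              # a sure thing

# The agent prefers a to b iff a's expected utility is higher:
print(expected_utility(a, u), expected_utility(b, u))  # 0.5 vs 0.4
```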
16:20 | SPEAKER: Simon Wimmer ABSTRACT. We present a formalization of Probabilistic Timed Automata (PTA) for which we try to follow the formula "MDP + TA = PTA" as far as possible: our work starts from existing formalizations of Markov Decision Processes (MDP) and Timed Automata (TA) and combines them modularly. We prove the fundamental result for Probabilistic Timed Automata: the region construction that is known from Timed Automata carries over to the probabilistic setting. In particular, this allows us to prove that minimum and maximum reachability probabilities can be computed via a reduction to MDP model checking, including the case where one wants to disregard unrealizable behavior. |
16:40 | SPEAKER: Alexandra Mendes ABSTRACT. Despite great advances in computer-assisted proof systems, writing formal proofs using a traditional computer is still challenging due to mouse-and-keyboard interaction. This leads to scientists often resorting to pen and paper to write their proofs. However, when handwriting a proof, there is no formal guarantee that the proof is correct. In this paper we address this issue and present the initial steps towards a system that allows users to handwrite proofs using a pen-based device and that communicates with an external theorem prover to support the users throughout the proof writing process. We focus on calculational proofs, whereby a theorem is proved by a chain of formulae, each transformed in some way into the next. We present the implementation of a proof-of-concept prototype that can formally verify handwritten calculational proofs without the need to learn the specific syntax of theorem provers. |
17:00 | SPEAKER: Andréia B. Avelar ABSTRACT. This work presents the ongoing specification and formalization in the PVS proof assistant of some definitions and theorems of ring theory in abstract algebra, and briefly presents some of the results intended to be formalized. So far, some important theorems from ring theory have been specified and formally proved, like the First Isomorphism Theorem, the Binomial Theorem and the lemma establishing that every finite integral domain with cardinality greater than one is a field. The goal of the project in progress is to specify and formalize in PVS the main theorems from ring theory presented in undergraduate textbooks of abstract algebra; in the short term the authors intend to formalize: (i) the Second and the Third Isomorphism Theorems for rings; (ii) the primality of the characteristic of a ring without zero divisors; (iii) definitions of prime and maximal ideals and theorems related to those concepts. The developed formalization mainly applies a part of the NASA PVS library for abstract algebra, specified in the theory algebra. |
17:20 | SPEAKER: Ramana Kumar ABSTRACT. LCF-style provers emphasise that all results are secured by logical inference, and yet their current facilities for code extraction or code generation fall short of this high standard. This paper argues that extraction mechanisms with a small trusted computing base (TCB) ought to be used instead, pointing out that the recent CakeML and Œuf projects show that this is possible in HOL and within reach in Coq. |
16:00 | SPEAKER: Hoda Abbasizanjani ABSTRACT. Minimally Unsatisfiable clause-sets (MUs) are building blocks for understanding Minimally Unsatisfiable Sub-clause-sets (MUSs), and they are also interesting in their own right, as hardest unsatisfiable clause-sets. There are two important previously known but isolated characterisations for MUs, both with ingenious but complicated proofs: characterising 2-CNF MUs, and characterising MUs with deficiency 2 (two more clauses than variables). Via a novel connection to Minimal Strong Digraphs (MSDs), we give short and intuitive new proofs of these characterisations, revealing an underlying common structure. We obtain unique isomorphism-type descriptions of normal forms, for both cases. An MSD is a strongly connected digraph where removing any arc destroys the strong connectedness. We introduce the new class DFM of special MUs, which are in close correspondence to MSDs. The known relation between 2-CNFs and digraphs is used, but in a simpler and more direct way. The non-binary clauses in DFMs can be ignored. For a binary clause (a or b), normally both arcs (-a -> b) and (-b -> a) need to be considered, but here we have a canonical choice of one of them. The definition of DFM is indeed very simple: an MU where all clauses except two are binary, while the two non-binary clauses are the two "full monotone clauses": the disjunction of all variables and the disjunction of all negated variables. We expect this basic class of MUs to be very useful in the future. Every variable occurs at least twice positively and twice negatively ("non-singularity"), and the isomorphism problem for the class is GI-complete, while decision is in polynomial time, and the class likely has good properties related to MUS-search. Via "saturations" and "marginalisations" (adding resp. removing literal occurrences), as well as via "non-singular extensions" (adding new variables), this class reaches out to apparently unrelated classes. |
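The classical 2-CNF-to-digraph translation that the abstract builds on is a two-line mapping; the sketch below (mine, not the paper's) produces both implication arcs per clause, whereas the DFM correspondence fixes a canonical choice of one of them:

```python
# 2-CNF / digraph correspondence: a binary clause (a OR b) yields the
# implication arcs (-a -> b) and (-b -> a).  Literals are signed ints,
# DIMACS-style (negative = negated variable).
def implication_arcs(clauses: list[tuple[int, int]]):
    arcs = []
    for a, b in clauses:
        arcs.append((-a, b))
        arcs.append((-b, a))
    return arcs

# (x1 or x2), (not x1 or x2):
print(implication_arcs([(1, 2), (-1, 2)]))
# [(-1, 2), (-2, 1), (1, 2), (-2, -1)]
```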
16:30 | SPEAKER: Ryan Berryhill ABSTRACT. Computing minimal (or even just small) certificates is a central problem in automated reasoning and, in particular, in automated formal verification. For unsatisfiable formulas in CNF such certificates take the form of Minimal Unsatisfiable Subsets (MUSes) and have a wide range of applications. As a formula can have multiple MUSes, and different MUSes provide different insights on unsatisfiability, commonly studied problems include computing a smallest MUS (SMUS) or computing all MUSes (AllMUS) of a given unsatisfiable formula. In this paper, we consider certificates to safety properties in the form of Minimal Safe Inductive Invariants (MSISes), and we develop algorithms for exploring such certificates by computing a smallest MSIS (SMSIS) or computing all MSISes (AllMSIS) of a given safe inductive invariant. More precisely, we show how the two well-known algorithms CAMUS and MARCO for MUS enumeration can be adapted to MSIS enumeration as well. |
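The basic subroutine behind such enumeration is shrinking a certificate to a minimal one by trial deletion. A generic deletion-based sketch with a stub oracle (mine, not the paper's algorithms; in practice the oracle would be a SAT solver for MUSes, or an inductiveness-plus-safety check in the MSIS setting):

```python
# Deletion-based shrinking: drop each element in turn and keep the drop
# whenever the remaining set still satisfies the oracle.
def shrink(elements: list, holds) -> list:
    """Return a subset satisfying `holds` that is minimal w.r.t. removal."""
    core = list(elements)
    i = 0
    while i < len(core):
        candidate = core[:i] + core[i + 1:]
        if holds(candidate):
            core = candidate         # element i was not needed
        else:
            i += 1                   # element i is critical; keep it
    return core

# Toy oracle: a set is "unsat" iff it contains both 1 and -1 (x and not-x).
toy_unsat = lambda cs: 1 in cs and -1 in cs
print(shrink([2, 1, -2, -1], toy_unsat))   # -> [1, -1]
```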
16:00 | SPEAKER: Jan Horacek ABSTRACT. In this paper we consider formulas that are conjunctions of linear clauses, i.e., of linear equations. Such formulas are very interesting because they encode CNF constraints that are typically very hard for SAT solvers. We introduce a new proof system SRES that works with linear clauses and show that SRES is implicationally and refutationally complete. Algebraically speaking, linear clauses correspond to products of linear polynomials over a ring of Boolean polynomials. That is why SRES can certify whether a product of linear polynomials lies in the ideal generated by some other such products, i.e., the SRES calculus decides the ideal membership problem. Furthermore, an algorithm for certifying inconsistent systems of the above shape is described. We also establish the connection with another combined proof system, R(lin). |
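The algebraic reading is concrete: a linear clause becomes a linear polynomial over GF(2), and conjunction becomes multiplication of Boolean polynomials (with x^2 = x). A minimal sparse-polynomial sketch, my encoding rather than the paper's:

```python
# Boolean polynomials as sets of monomials, each monomial a frozenset of
# variables; coefficients live in GF(2), so adding a duplicate monomial
# cancels it (set symmetric difference).
def pmul(p: set, q: set) -> set:
    """Multiply two Boolean polynomials (x^2 = x, coefficients mod 2)."""
    out: set = set()
    for m1 in p:
        for m2 in q:
            out ^= {m1 | m2}        # union of variables encodes x^2 = x
    return out

# Linear polynomials x + y + 1 and y + z (each "linear clause" = 0):
p = {frozenset({'x'}), frozenset({'y'}), frozenset()}
q = {frozenset({'y'}), frozenset({'z'})}
print(pmul(p, q))
# {xy, xz, yz, z}: y*y = y and 1*y = y cancel each other mod 2.
```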
16:30 | SPEAKER: Daniela Ritirc ABSTRACT. Generating and automatically checking proofs independently increases confidence in the results of automated reasoning tools. The use of computer algebra is an essential ingredient in recent substantial improvements to scale verification of arithmetic gate-level circuits, such as multipliers, to large bit-widths. There is also a large body of work on theoretical aspects of propositional algebraic proof systems in the proof complexity community, starting with the seminal paper introducing the polynomial calculus. We show that the polynomial calculus provides a framework to define a practical algebraic calculus (PAC) proof format to capture low-level algebraic proofs needed in scalable gate-level verification of arithmetic circuits. We apply these techniques to generate proofs obtained as a by-product of verifying gate-level multipliers using state-of-the-art techniques. Our experiments show that these proofs can be checked efficiently with independent tools. |
17:00 | SPEAKER: Prathamesh T. V. H. ABSTRACT. Unknot recognition is one of the fundamental questions in low dimensional topology. In this work, we show that this problem can be encoded as a validity problem in the existential fragment of the first-order theory of real closed fields. This encoding is derived using a well-known result on SU(2) representations of knot groups by Kronheimer-Mrowka. We further show that applying existential quantifier elimination to the encoding enables an unknot recognition algorithm with a complexity of the order 2^{O(n)}, where n is the number of crossings in the given knot diagram. Our algorithm is simple to describe and has the same runtime as the currently best known unknot recognition algorithms. This leads to an interesting class of problems, of interest to both SMT solving and knot theory. |
17:00 | LICS Awards: The Kleene Award and the Test of Time Award |
17:45 | Remembering Martin Hofmann |
FLoC banquet at Examination Schools. Drinks and food available from 7pm (pre-booking via FLoC registration system required; guests welcome).