09:00 | SPEAKER: Alwen Tiu ABSTRACT. We introduce a novel type system for enforcing secure information flow in an imperative language. Our work is motivated by the problem of statically checking potential information leakage in Android applications. To this end, we design a lightweight type system featuring the Android permission model, where permissions are statically assigned to applications and are used to enforce access control within them. We take inspiration from a type system by Banerjee and Naumann (BN) to allow security types to depend on the permissions of the applications. A novel feature of our type system is a typing rule for conditional branching induced by permission testing, which introduces a merging operator on security types, allowing more precise security policies to be enforced. The soundness of our type system is proved with respect to non-interference. In addition, a type inference algorithm is presented for the underlying security type system, by reducing the inference problem to a constraint-solving problem in the lattice of security types.
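The merging idea is easy to see in miniature. In the sketch below, a security type is modeled as a function from an app's granted permission set to a label in a two-point lattice, and `merge` combines the branch types of a permission test. All names here (`SecType`, `merge`, the LOW/HIGH lattice) are illustrative assumptions, not the paper's formal definitions.

```python
from typing import Callable, FrozenSet

LOW, HIGH = "LOW", "HIGH"  # two-point security lattice: LOW may flow to HIGH

# A permission-dependent security type: a map from the set of
# permissions granted to an app to a label in the lattice.
SecType = Callable[[FrozenSet[str]], str]

def const(label: str) -> SecType:
    """A type that ignores permissions entirely."""
    return lambda perms: label

def merge(perm: str, t_then: SecType, t_else: SecType) -> SecType:
    """Merge the branch types of `if checkPermission(perm): ... else: ...`:
    callers holding `perm` get the then-branch type, all others the
    else-branch type, instead of one conservative join over both branches."""
    return lambda perms: t_then(perms) if perm in perms else t_else(perms)

# Example: a value that is public only for apps granted READ_CONTACTS.
t = merge("READ_CONTACTS", const(LOW), const(HIGH))
assert t(frozenset({"READ_CONTACTS"})) == LOW
assert t(frozenset()) == HIGH
```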
09:30 | SPEAKER: Vineet Rajani ABSTRACT. Language-based information flow control (IFC) tracks dependencies within a program using sensitivity labels and prohibits public outputs from depending on secret inputs. In particular, the literature has proposed several type systems for tracking these dependencies. On one extreme are fine-grained type systems (like Flow Caml) that track dependence at the level of individual values, by labeling all values individually. On the other extreme are coarse-grained type systems (like HLIO) that track dependence coarsely, by associating a single label with an entire computation context and not labeling all values individually. In this paper, we show that, despite their glaring differences, both these styles are, in fact, equally expressive. To do this, we show a semantics- and type-preserving translation from a coarse-grained type system to a fine-grained one and vice versa. The forward translation isn't surprising, but the backward translation is: it requires a construct to arbitrarily limit the scope of a context label in the coarse-grained type system (e.g., HLIO's "toLabeled" construct), as well as a small change to how labels are interpreted in coarse-grained type systems. Along the way, we show how to build logical relation models of both fine-grained and coarse-grained type systems (with full higher-order state). We use these relations to prove the two type systems and our translations sound.
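The role of the scope-limiting construct is easiest to see operationally. The sketch below is a dynamic monitor, not the paper's static type systems: `Labeled` boxes play the fine-grained role, the single `pc` label plays the coarse-grained role, and `to_labeled` (modeled loosely on HLIO's `toLabeled`) restores the outer context label after a sub-computation, which is the capability the backward translation relies on.

```python
LOW, HIGH = 0, 1  # two-point lattice, LOW <= HIGH

class Labeled:
    """A value boxed with an explicit label (fine-grained style)."""
    def __init__(self, label, value):
        self.label, self.value = label, value

class Context:
    """Coarse-grained style: one 'program counter' label for the whole
    computation, raised whenever a labeled value is unboxed."""
    def __init__(self):
        self.pc = LOW

    def unlabel(self, lv: Labeled):
        self.pc = max(self.pc, lv.label)  # taint the whole context
        return lv.value

    def to_labeled(self, thunk):
        """Run `thunk` as a sub-computation, box its result at the
        sub-computation's final pc, and restore the outer pc. This
        scoping is what lets the coarse system simulate per-value labels."""
        saved = self.pc
        result = thunk()
        out = Labeled(self.pc, result)
        self.pc = saved
        return out

ctx = Context()
secret = Labeled(HIGH, 42)
boxed = ctx.to_labeled(lambda: ctx.unlabel(secret) + 1)
assert ctx.pc == LOW        # the outer context stays public
assert boxed.label == HIGH  # only the boxed result is tainted
```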
10:00 | SPEAKER: Christian Müller ABSTRACT. Our goal is to certify the absence of information leaks in multi-agent workflows, such as conference management systems like EasyChair. These workflows can be executed by any number of agents, some of which may form coalitions against the system, so checking noninterference is a challenging problem. Our paper offers two main contributions. First, we provide a technique to translate noninterference (in the presence of various agent capabilities and declassification conditions) into universally quantified invariants of a new, instrumented workflow program. Second, we develop general techniques for checking and inferring universally quantified inductive invariants for workflow programs. In particular, we identify a large class of workflows for which inductiveness of invariants is decidable, as well as a smaller, still useful class for which the weakest inductive universal invariant implying the desired invariant is effectively computable. The new algorithms are implemented and applied to certify noninterference for workflows arising from conference management systems.
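For intuition, inductiveness of a universally quantified invariant reduces to checking the validity of Inv(s) ∧ Trans(s, s′) ⇒ Inv(s′). The toy check below feeds that query to Z3 (`pip install z3-solver`) for an invented one-step workflow (an eligible agent becomes informed) and an invented invariant (only eligible agents are informed); the paper's decidable workflow classes and inference algorithms are far more general than this single example.

```python
from z3 import (Solver, DeclareSort, Function, BoolSort, Const,
                ForAll, Implies, And, Or, Not, unsat)

Agent = DeclareSort("Agent")
eligible = Function("eligible", Agent, BoolSort())
informed = Function("informed", Agent, BoolSort())     # pre-state
informed1 = Function("informed1", Agent, BoolSort())   # post-state

x = Const("x", Agent)
a = Const("a", Agent)  # the agent picked by the transition

inv_pre = ForAll([x], Implies(informed(x), eligible(x)))
inv_post = ForAll([x], Implies(informed1(x), eligible(x)))

# Transition: some eligible agent a becomes informed; everyone else
# is unchanged (the frame condition).
trans = And(eligible(a),
            ForAll([x], informed1(x) == Or(informed(x), x == a)))

# The invariant is inductive iff Inv ∧ Trans ∧ ¬Inv' is unsatisfiable.
s = Solver()
s.add(inv_pre, trans, Not(inv_post))
print("inductive" if s.check() == unsat else "not inductive")
```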
12:00 | SPEAKER: Samuel Yeom ABSTRACT. Machine learning algorithms, when applied to sensitive data, pose a distinct threat to privacy. A growing body of prior work demonstrates that models produced by these algorithms may leak specific private information in the training data to an attacker, either through the models' structure or their observable behavior. However, the underlying cause of this privacy risk is not well understood beyond a handful of anecdotal accounts that suggest overfitting and influence might play a role. This paper examines the effect that overfitting and influence have on the ability of an attacker to learn information about the training data from machine learning models, either through training set membership inference or attribute inference attacks. Using both formal and empirical analyses, we illustrate a clear relationship between these factors and the privacy risk that arises in several popular machine learning algorithms. We find that overfitting is sufficient to allow an attacker to perform membership inference attacks and, when the target attribute meets certain conditions about its influence, attribute inference attacks. Interestingly, our formal analysis also shows that overfitting is not necessary for these attacks and begins to shed light on what other factors may be in play. Finally, we explore the connection between membership inference and attribute inference, showing that there are deep connections between the two that lead to effective new attacks.
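The overfitting-to-membership-inference connection can be seen in a few lines: the standard loss-threshold attack from this line of work guesses "member" whenever the model's per-example loss falls below a threshold calibrated to the average training loss, and its advantage grows with the train/test loss gap. The dataset, model, and threshold choice below are illustrative, not the paper's experimental setup.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))
y = (X[:, 0] + 0.1 * rng.normal(size=200) > 0).astype(int)
X_train, y_train = X[:100], y[:100]   # members
X_out, y_out = X[100:], y[100:]       # non-members

# Weak regularization encourages overfitting, which is what the attack exploits.
model = LogisticRegression(C=1e4).fit(X_train, y_train)

def per_example_loss(model, X, y):
    """Cross-entropy loss of each record under the trained model."""
    p = model.predict_proba(X)
    return -np.log(np.clip(p[np.arange(len(y)), y], 1e-12, None))

tau = per_example_loss(model, X_train, y_train).mean()  # calibrated threshold

guess_in = per_example_loss(model, X_train, y_train) <= tau
guess_out = per_example_loss(model, X_out, y_out) <= tau
# Membership advantage = true-positive rate minus false-positive rate.
print(f"membership advantage ~ {guess_in.mean() - guess_out.mean():.2f}")
```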
16:00 | SPEAKER: Lara Schmid ABSTRACT. In random sample voting, only a randomly chosen subset of all eligible voters vote. We propose Alethea, the first random sample voting protocol that satisfies end-to-end verifiability and receipt-freeness. Our protocol makes explicit the distinction between the human voters and their devices. This allows us to make more fine-grained statements about the required capabilities and trust assumptions of each agent than is possible in previous work. Consequently, we can make more precise statements about the adversary models under which our protocol is secure. Alethea is the first formally verified random sample voting protocol. This necessitates defining new security properties, in particular that the sample group is chosen randomly and that the chosen voters remain anonymous. We model the protocol and its properties in a symbolic model amenable to tool support. We verify most of the properties automatically and complete our analysis with traditional hand-written proofs when automatic analysis is infeasible or when we desire a more fine-grained analysis than our symbolic abstractions allow.
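For context, the generic way to make a random sample publicly verifiable is sortition from a public seed: hash each voter identifier together with beacon randomness and take the k smallest digests, so anyone can recompute the sample. The sketch below shows only that generic idiom; it is not Alethea's mechanism, which additionally has to keep the chosen voters anonymous, a property this naive scheme does not provide.

```python
import hashlib

def sample_group(voter_ids, public_seed: bytes, k: int):
    """Deterministically select the k voters whose seeded digests are
    smallest; any observer can rerun this and verify the sample."""
    def score(vid: str) -> bytes:
        return hashlib.sha256(public_seed + vid.encode()).digest()
    return sorted(voter_ids, key=score)[:k]

voters = [f"voter-{i:04d}" for i in range(1000)]
chosen = sample_group(voters, b"beacon-output-example", k=50)
print(len(chosen), chosen[:3])
```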
16:30 | SPEAKER: François Dupressoir ABSTRACT. We present a machine-checked security analysis of Belenios, a deployed voting protocol which has already been used in more than 200 elections. Belenios extends Helios with an explicit registration authority to obtain eligibility guarantees. We offer two main results. First, we build upon a recent framework for proving ballot privacy in EasyCrypt. Inspired by our application to Belenios, we adapt and extend the privacy security notions to account for protocols that include a registration phase. Our analysis identifies a trust assumption which is missing in the existing (pen-and-paper) analysis of Belenios: ballot privacy does not hold if the registrar misbehaves, even if the role of the registrar is seemingly only to provide eligibility guarantees. Second, we develop a novel framework for proving strong verifiability in EasyCrypt and apply it to Belenios. In the process, we clarify several aspects of the pen-and-paper proof, e.g., how to deal with revote policies. Together, our results yield the first machine-checked analysis of both privacy and verifiability properties for a deployed electronic voting protocol. Last but not least, we identify several issues regarding the applicability of existing definitions of privacy and verifiability to systems other than Helios. While we show how to adapt the definitions to the particular case of Belenios, our findings indicate a need for the development of less specific security notions for electronic voting protocols with registration authorities.
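As background for the privacy result, Belenios inherits Helios's homomorphic-tally core. The toy sketch below shows exponential-ElGamal addition of yes/no ballots with deliberately tiny, insecure parameters; the deployed system adds zero-knowledge proofs of ballot well-formedness, distributed decryption, and the registrar-issued credentials whose trust assumptions this paper analyzes.

```python
import random

p, q, g = 2039, 1019, 4   # tiny safe-prime group: p = 2q + 1, g generates
                          # the order-q subgroup (demo-only, insecure sizes)

x = random.randrange(1, q)   # trustee secret key
h = pow(g, x, p)             # election public key

def encrypt(vote):           # vote in {0, 1}, encoded in the exponent
    r = random.randrange(1, q)
    return pow(g, r, p), (pow(h, r, p) * pow(g, vote, p)) % p

def add(c1, c2):             # multiplying ciphertexts adds plaintexts
    return (c1[0] * c2[0]) % p, (c1[1] * c2[1]) % p

def decrypt_sum(c, max_sum):
    a, b = c
    m = (b * pow(a, q - x, p)) % p   # b / a^x = g^(total yes votes)
    for t in range(max_sum + 1):     # brute-force the small discrete log
        if pow(g, t, p) == m:
            return t

ballots = [encrypt(v) for v in (1, 0, 1, 1, 0)]
tally = ballots[0]
for c in ballots[1:]:
    tally = add(tally, c)
print("yes votes:", decrypt_sum(tally, len(ballots)))   # -> 3
```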
FLoC banquet at Examination Schools. Drinks and food available from 7pm (pre-booking via FLoC registration system required; guests welcome).