Publications 
Elette Boyle, Rio LaVigne, and Vinod Vaikuntanathan
Adversarially Robust Property-Preserving Hash Functions
ITCS 2019 – Innovations in Theoretical Computer Science.
[Abstract]
Property-preserving hashing is a method of compressing a large input x into a short hash h(x) in such a way that given h(x) and h(y), one can compute a property P(x, y) of the original inputs. The idea of property-preserving hash functions underlies sketching, compressed sensing and locality-sensitive hashing.
Property-preserving hash functions are usually probabilistic: they use the random choice of a hash function from a family to achieve compression, and as a consequence, err on some inputs. Traditionally, the notion of correctness for these hash functions requires that for every two inputs x and y, the probability that h(x) and h(y) mislead us into a wrong prediction of P(x, y) is negligible. As observed in many recent works (incl. Mironov, Naor and Segev, STOC 2008; Hardt and Woodruff, STOC 2013; Naor and Yogev, CRYPTO 2015), such a correctness guarantee assumes that the adversary (who produces the offending inputs) has no information about the hash function, and is too weak in many scenarios.
We initiate the study of adversarial robustness for property-preserving hash functions, provide definitions, derive broad lower bounds due to a simple connection with communication complexity, and show the necessity of computational assumptions to construct such functions. Our main positive results are two candidate constructions of property-preserving hash functions (achieving different parameters) for the (promise) gap-Hamming property, which checks if x and y are "too far" or "too close." Our first construction relies on generic collision-resistant hash functions, and our second on a variant of the syndrome decoding assumption on low-density parity check codes.
Elette Boyle, Geoffroy Couteau, Niv Gilboa, and Yuval Ishai
Compressing Vector OLE
CCS 2018 – ACM Conference on Computer and Communications Security.
[Abstract]
Oblivious linear-function evaluation (OLE) is a secure two-party protocol allowing a receiver to learn any linear combination of a pair of field elements held by a sender. OLE serves as a common building block for secure computation of arithmetic circuits, analogously to the role of oblivious transfer (OT) for boolean circuits.
A useful extension of OLE is vector OLE (VOLE), allowing the receiver to learn any linear combination of two vectors held by the sender. In several applications of OLE, one can replace a large number of instances of OLE by a small number of instances of VOLE. This motivates the goal of amortizing the cost of generating long instances of VOLE.
We suggest a new approach for fast generation of pseudorandom instances of VOLE via a deterministic local expansion of a pair of short correlated seeds and no interaction. This provides the first example of compressing a non-trivial and useful cryptographic correlation with good concrete efficiency. Our VOLE generator can be used as a general-purpose tool to enhance the efficiency of a host of cryptographic applications. These include secure arithmetic computation and non-interactive zero-knowledge proofs with reusable preprocessing.
Our VOLE generators are based on a novel combination of function secret sharing (FSS) for multi-point functions and noisy linear codes. Their security can be based on variants of the learning parity with noise (LPN) assumption over large fields, which resist known attacks with good concrete parameters. We provide several different constructions that offer trade-offs between different efficiency measures and the underlying intractability assumptions.
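The target correlation can be illustrated with a small, assumption-free sketch (not the paper's seed-expansion construction): the sender holds vectors u and v, the receiver holds a scalar x, and the receiver's VOLE output is w = u*x + v over a finite field, one OLE instance per coordinate. The field modulus and vector length below are arbitrary choices for illustration.

```python
# Toy illustration of the vector-OLE correlation itself (not the
# paper's generator): receiver output w satisfies w = u*x + v over
# a prime field, for sender vectors u, v and receiver scalar x.
import random

P = 2**31 - 1  # a prime modulus, standing in for a large field

def vole_output(u, v, x):
    """Receiver's output in a length-n VOLE: w[i] = u[i]*x + v[i] mod P."""
    return [(ui * x + vi) % P for ui, vi in zip(u, v)]

n = 8
u = [random.randrange(P) for _ in range(n)]
v = [random.randrange(P) for _ in range(n)]
x = random.randrange(P)

w = vole_output(u, v, x)
# Each coordinate is one OLE instance sharing the same scalar x,
# which is why n OLEs on a common x collapse to a single VOLE.
assert all(w[i] == (u[i] * x + v[i]) % P for i in range(n))
```

The paper's contribution is generating long pseudorandom instances of exactly this correlation from short correlated seeds, rather than transmitting u, v, or w directly.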
Elette Boyle, Ran Cohen, Deepesh Data, and Pavel Hubáček
Must the Communication Graph of MPC Protocols be an Expander?
CRYPTO 2018 – International Cryptology Conference.
[Abstract]
[PDF]
Secure multiparty computation (MPC) on incomplete communication networks has been studied within two primary models: (1) where a partial network is fixed a priori, and thus corruptions can occur dependent on its structure, and (2) where edges in the communication graph are determined dynamically as part of the protocol. Whereas a rich literature has succeeded in mapping out the feasibility and limitations of graph structures supporting secure computation in the fixed-graph model (including strong classical lower bounds), these bounds do not apply in the latter dynamic-graph setting, which has recently seen exciting new results but remains relatively unexplored.
In this work, we initiate a similar foundational study of MPC within the dynamic-graph model. As a first step, we investigate the property of graph expansion. All existing protocols (implicitly or explicitly) yield communication graphs which are expanders, but it is not clear whether this is inherent.
Our results consist of two types (for a constant fraction of corruptions):
- Upper bounds: We demonstrate secure protocols whose induced communication graphs are not expander graphs, within a wide range of settings (computational, information-theoretic, with low locality, even with low locality and adaptive security), each assuming some form of input-independent setup.
- Lower bounds: In the setting without setup and with adaptive corruptions, we demonstrate that for certain functionalities, no protocol can maintain a non-expanding communication graph against all adversarial strategies. Our lower bound relies only on protocol correctness (not privacy), and requires a surprisingly delicate argument.
More generally, we provide a formal framework for analyzing the evolving communication graph of MPC protocols, giving a starting point for studying the relation between secure computation and further, more general graph properties.
Elette Boyle, Yuval Ishai, and Antigoni Polychroniadou
Limits of Practical Sublinear Secure Computation
CRYPTO 2018 – International Cryptology Conference.
[Abstract]
[PDF]
Secure computations on big data call for protocols that have sublinear communication complexity in the input length. While fully homomorphic encryption (FHE) provides a general solution to the problem, employing it on a large scale is currently quite far from being practical. This is also the case for secure computation tasks that reduce to weaker forms of FHE such as "somewhat homomorphic encryption" or single-server private information retrieval (PIR).
Quite unexpectedly, Aggarwal, Mishra, and Pinkas (Eurocrypt 2004), Brickell and Shmatikov (Asiacrypt 2005), and shelat and Venkitasubramaniam (Asiacrypt 2015) have shown that in several natural instances of secure computation on big data, there are practical sublinear communication protocols that only require sublinear local computation and minimize the use of expensive publickey operations. This raises the question of whether similar protocols exist for other natural problems.
In this paper we put forward a framework for separating "practical" sublinear protocols from "impractical" ones, and establish a methodology for identifying "provably hard" big-data problems that do not admit practical protocols. This is akin to the use of NP-completeness to separate hard algorithmic problems from easy ones. We show that while the previous protocols of Aggarwal et al., Brickell and Shmatikov, and shelat and Venkitasubramaniam are indeed classified as being "practical" in this framework, slight variations of the problems they solve and other natural computational problems on big data are hard.
Our negative results are established by showing that the problem at hand is "PIR-hard" in the sense that any secure protocol for the problem implies PIR on a large database. This imposes a barrier on the local computational cost of secure protocols for the problem. We also identify a new natural relaxation of PIR that we call semi-PIR, which is useful for establishing "intermediate hardness" of several practically motivated secure computation tasks. We show that semi-PIR implies slightly sublinear PIR via an adaptive black-box reduction, and that ruling out a stronger black-box reduction would imply a major breakthrough in complexity theory. We also establish information-theoretic separations between semi-PIR and PIR, showing that some problems that we prove to be semi-PIR-hard are not PIR-hard.
Elette Boyle, Saleet Klein, Alon Rosen, and Gil Segev
Securing Abe's Mix-net Against Malicious Verifiers via Witness Indistinguishability
SCN 2018 – Conference on Security and Cryptography for Networks.
[Abstract]
[PDF]
We show that the simple and appealing unconditionally sound mix-net due to Abe (Asiacrypt '99) can be augmented to further guarantee anonymity against malicious verifiers. This additional guarantee implies, in particular, that when applying the Fiat-Shamir transform to the mix-net's underlying sub-protocols, anonymity is provably guaranteed for any hash function.
As our main contribution, we demonstrate how anonymity can be attained even if most sub-protocols of a mix-net are merely witness indistinguishable (WI). We instantiate our framework with two variants of Abe's mix-net. In the first variant, ElGamal ciphertexts are replaced by an alternative, yet equally efficient, "lossy" encryption scheme. In the second variant, new "dummy" vote ciphertexts are injected prior to the mixing process, and then removed.
Our techniques center on new methods to introduce additional witnesses to the sub-protocols within the proof of security. This, in turn, enables us to leverage the WI guarantees against malicious verifiers. In our first instantiation, these witnesses follow somewhat naturally from the lossiness of the encryption scheme, whereas in our second instantiation they follow from leveraging combinatorial properties of the Beneš network. These approaches may be of independent interest.
Finally, we demonstrate cases in Abe's original mix-net (without modification) where only one witness exists, such that if the WI proof leaks information on the (single) witness in these cases, then the system will not be anonymous against malicious verifiers.
Elette Boyle, Abhishek Jain, Manoj Prabhakaran, and Ching-Hua Yu
The Bottleneck Complexity of Secure Multiparty Computation
ICALP 2018 – International Colloquium on Automata, Languages, and Programming.
[Abstract]
[PDF]
In this work, we initiate the study of bottleneck complexity as a new communication efficiency measure for secure multiparty computation (MPC). Roughly, the bottleneck complexity of an MPC protocol is defined as the maximum communication complexity required by any party within the protocol execution.
We observe that even without security, bottleneck communication complexity is an interesting measure of communication complexity for (distributed) functions, and propose it as a fundamental area to explore. While achieving O(n) bottleneck complexity (where n is the number of parties) is straightforward, we show that: (1) achieving sublinear bottleneck complexity is not always possible, even when no security is required; (2) on the other hand, several useful classes of functions do have o(n) bottleneck complexity, when no security is required.
Our main positive result is a compiler that transforms any (possibly insecure) efficient protocol with a fixed communication pattern for computing any functionality into a secure MPC protocol, while preserving the bottleneck complexity of the underlying protocol (up to security-parameter overhead). Given our compiler, an efficient protocol for any function f with sublinear bottleneck complexity can be transformed into an MPC protocol for f with the same bottleneck complexity.
Along the way, we build cryptographic primitives (incremental fully homomorphic encryption, succinct non-interactive arguments of knowledge with an ID-based simulation-extractability property, and verifiable protocol execution) that may be of independent interest.
Marshall Ball, Elette Boyle, Tal Malkin, and Tal Moran
Exploring the Boundaries of Topology-Hiding Computation
Advances in Cryptology – EUROCRYPT 2018.
[Abstract]
[PDF]
Topology-hiding computation (THC) is a form of multiparty computation over an incomplete communication graph that maintains the privacy of the underlying graph topology. In a line of recent works [Moran, Orlov & Richelson, TCC '15; Hirt et al., CRYPTO '16; Akavia & Moran, EUROCRYPT '17; Akavia et al., CRYPTO '17], THC protocols for securely computing any function in the semi-honest setting have been constructed. In addition, it was shown by Moran et al. that in the fail-stop setting, THC with negligible leakage on the topology is impossible.
In this paper, we further explore the feasibility boundaries of THC.
- We show that even against semi-honest adversaries, topology-hiding broadcast on a small (4-node) graph implies oblivious transfer; in contrast, trivial broadcast protocols exist unconditionally if the topology can be revealed.
- We strengthen the lower bound of Moran et al., identifying and extending a relation between the amount of leakage on the underlying graph topology that must be revealed in the fail-stop setting, as a function of the number of parties and the communication round complexity: any n-party protocol leaking delta bits for 0 < delta < 1 must have Omega(n/delta) rounds.
We then present THC protocols providing close-to-optimal leakage rates, for unrestricted graphs on n nodes against a fail-stop adversary controlling a dishonest majority of the n players. These constitute the first general fail-stop THC protocols. Specifically, for this setting we show:
- A THC protocol that leaks at most one bit and requires O(n^2) rounds.
- A THC protocol that leaks at most delta bits, for arbitrarily small non-negligible delta, and requires O(n^3/delta) rounds.
Elette Boyle, Niv Gilboa, Yuval Ishai, Huijia Rachel Lin, and Stefano Tessaro
Foundations of Homomorphic Secret Sharing
ITCS 2018 – Innovations in Theoretical Computer Science.
[Abstract]
[PDF]
Homomorphic secret sharing (HSS) is the secret sharing analogue of homomorphic encryption. An HSS scheme supports a local evaluation of functions on shares of one or more secret inputs, such that the resulting shares of the output are short. Some applications require the stronger notion of additive HSS, where the shares of the output add up to the output over some finite Abelian group. While some strong positive results for HSS are known under specific cryptographic assumptions, many natural questions remain open.
We initiate a systematic study of HSS, making the following contributions.
- A definitional framework. We present a general framework for defining HSS schemes that unifies and extends several previous notions from the literature, and cast known results within this framework.
- Limitations. We establish limitations on information-theoretic multi-input HSS with short output shares via a relation with communication complexity. We also show that additive HSS for non-trivial functions, even the AND of two input bits, implies non-interactive key exchange, and is therefore unlikely to be implied by public-key encryption or even oblivious transfer.
- Applications. We present two types of applications of HSS. First, we construct 2-round protocols for secure multiparty computation from a simple constant-size instance of HSS. As a corollary, we obtain 2-round protocols with attractive asymptotic efficiency features under the Decision Diffie-Hellman (DDH) assumption. Second, we use HSS to obtain nearly optimal worst-case to average-case reductions in P. This in turn has applications to fine-grained average-case hardness and verifiable computation.
Elette Boyle
Recent Advances in Function & Homomorphic Secret Sharing (Invited Talk)
Indocrypt 2017 – International Conference on Cryptology in India.
[Abstract]
[PDF]
Function Secret Sharing (FSS) and Homomorphic Secret Sharing (HSS) are two extensions of standard secret sharing, which support rich forms of homomorphism on secret shared values.
- An m-party FSS scheme for a given function family F enables splitting a function f: {0,1}^n -> G from F (for an Abelian group G) into m succinctly described functions f_1,...,f_m such that strict subsets of the f_i hide f, and f(x) = f_1(x) + ... + f_m(x) for every input x.
- An m-party HSS is a dual notion, where an input x is split into shares x^1,...,x^m, such that strict subsets of the x^i hide x, and one can recover the evaluation P(x) of a program P on x given the homomorphically evaluated share values Eval(x^1,P),...,Eval(x^m,P).
In the last few years, many new constructions and applications of FSS and HSS have been discovered, yielding implications ranging from efficient private database manipulation and secure computation protocols to worst-case to average-case reductions.
In this treatise, we introduce the reader to the background required to understand these developments, and give a roadmap of recent advances (up to October 2017).
Elette Boyle, Yuval Ishai, Rafael Pass, and Mary Wootters
Can We Access a Database Both Locally and Privately?
TCC 2017 – Theory of Cryptography Conference.
[Abstract]
[PDF]
We consider the following strong variant of private information retrieval (PIR). There is a large database x that we want to make publicly available. To this end, we post an encoding X of x together with a short public key pk in a publicly accessible repository. The goal is to allow any client who comes along to retrieve a chosen bit x_i by reading a small number of bits from X, whose positions may be randomly chosen based on i and pk, such that even an adversary who can fully observe the access to X does not learn information about i.
Towards solving the above problem, we study a weaker secret-key variant where the data is encoded and accessed by the same party. This primitive, which we call an oblivious locally decodable code (OLDC), is independently motivated by applications such as searchable symmetric encryption. We reduce the public-key variant of PIR to OLDC using an ideal form of obfuscation that can be instantiated heuristically with existing indistinguishability obfuscation candidates, or alternatively implemented with small and stateless tamper-proof hardware.
Finally, a central contribution of our work is the first proposal of an OLDC candidate. Our candidate is based on a secretly permuted Reed-Muller code. We analyze the security of this candidate against several natural attacks and leave its further study to future work.
Elette Boyle, Geoffroy Couteau, Niv Gilboa, Yuval Ishai, and Michele Orrù
Homomorphic Secret Sharing: Optimizations and Applications
CCS 2017 – ACM Conference on Computer and Communications Security.
[Abstract]
[PDF]
We continue the study of Homomorphic Secret Sharing (HSS), recently introduced by Boyle et al. (Crypto 2016, Eurocrypt 2017). A (2-party) HSS scheme splits an input x into shares (x0, x1) such that (1) each share computationally hides x, and (2) there exists an efficient homomorphic evaluation algorithm Eval such that for any function (or "program") P from a given class it holds that Eval(x0,P) + Eval(x1,P) = P(x). Boyle et al. show how to construct an HSS scheme for branching programs, with an inverse polynomial error, using discrete-log-type assumptions such as DDH.
We make two types of contributions.
- Optimizations. We introduce new optimizations that speed up the previous optimized implementation of Boyle et al. by more than a factor of 30, significantly reduce the share size, and reduce the rate of leakage induced by selective failure.
- Applications. Our optimizations are motivated by the observation that there are natural application scenarios in which HSS is useful even when applied to simple computations on short inputs. We demonstrate the practical feasibility of our HSS implementation in the context of such applications.
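As a toy illustration of the HSS syntax only (not the DDH-based scheme of Boyle et al.): plain additive secret sharing already satisfies Eval(x0,P) + Eval(x1,P) = P(x) when P is restricted to linear functions, with no error and no cryptographic assumption. The modulus and inputs below are arbitrary choices for the example.

```python
# Toy sketch of the HSS syntax: additive shares (x0, x1) of an input
# vector, and a local Eval whose two outputs sum to P(x). This works
# unconditionally for *linear* P; the scheme of Boyle et al. achieves
# it for branching programs under DDH, which this sketch does not.
import random

Q = 2**61 - 1  # toy modulus for the share space

def share(x):
    """Split vector x into two additive shares modulo Q."""
    x0 = [random.randrange(Q) for _ in x]
    x1 = [(xi - s) % Q for xi, s in zip(x, x0)]
    return x0, x1

def eval_share(share_vec, coeffs):
    """Locally evaluate the linear program P(x) = sum_i coeffs[i]*x[i]."""
    return sum(c * s for c, s in zip(coeffs, share_vec)) % Q

x = [3, 1, 4, 1, 5]
coeffs = [2, 7, 1, 8, 2]
x0, x1 = share(x)
y = (eval_share(x0, coeffs) + eval_share(x1, coeffs)) % Q
assert y == sum(c * xi for c, xi in zip(coeffs, x)) % Q
```

Each share on its own is uniformly random, so it reveals nothing about x; the interesting (and cryptographic) part of HSS is extending local evaluation beyond linear functions.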
Elette Boyle, Niv Gilboa, and Yuval Ishai
Group-Based Secure Computation: Optimizing Rounds, Communication, and Computation
Advances in Cryptology – EUROCRYPT 2017.
[Abstract]
[PDF]
A recent work of Boyle et al. (Crypto 2016) suggests that "group-based" cryptographic protocols, namely ones that only rely on a cryptographically hard (Abelian) group, can be surprisingly powerful. In particular, they present succinct two-party protocols for securely computing branching programs and NC1 circuits under the DDH assumption, providing the first alternative to fully homomorphic encryption.
In this work we further explore the power of group-based secure computation protocols, improving both their asymptotic and concrete efficiency. We obtain the following results.
- Black-box use of the group. We modify the succinct protocols of Boyle et al. so that they only make a black-box use of the underlying group, eliminating an expensive non-black-box setup phase.
- Round complexity. For any constant number of parties, we obtain 2-round MPC protocols based on a PKI setup under the DDH assumption. Prior to our work, such protocols were only known using fully homomorphic encryption or indistinguishability obfuscation.
- Communication complexity. Under DDH, we present a secure 2-party protocol for any NC1 or log-space computation with n input bits and m output bits using n + (1 + o(1))m + poly(\lambda) bits of communication, where \lambda is a security parameter. In particular, our protocol can generate n instances of bit oblivious transfer using (4 + o(1))n bits of communication. This gives the first constant-rate OT protocol under DDH.
- Computation complexity. We present several techniques for improving the computational cost of the share conversion procedure of Boyle et al., improving the concrete efficiency of group-based protocols by several orders of magnitude.
Elette Boyle, Niv Gilboa, and Yuval Ishai
Function Secret Sharing: Improvements and Extensions
CCS 2016 – ACM Conference on Computer and Communications Security.
[Abstract]
[PDF]
Function Secret Sharing (FSS), introduced by Boyle et al. (Eurocrypt 2015), provides a way for additively secret-sharing a function from a given function family F. More concretely, an m-party FSS scheme splits a function f: {0,1}^n -> G, for some Abelian group G, into functions f_1,...,f_m, described by keys k_1,...,k_m, such that f = f_1 + ... + f_m and every strict subset of the keys hides f. A Distributed Point Function (DPF) is a special case where F is the family of point functions, namely functions f_{a,b} that evaluate to b on the input a and to 0 on all other inputs.
FSS schemes are useful for applications that involve privately reading from or writing to distributed databases while minimizing the amount of communication. These include different flavors of private information retrieval (PIR), as well as a recent application of DPFs to large-scale anonymous messaging.
We improve and extend previous results in several ways:
- Simplified FSS constructions. We introduce a tensoring operation for FSS which is used to obtain a conceptually simpler derivation of previous constructions and to present our new constructions.
- Improved 2-party DPF. We reduce the key size of the PRG-based DPF scheme of Boyle et al. roughly by a factor of 4 and optimize its computational cost. The optimized DPF significantly improves the concrete costs of 2-server PIR and related primitives.
- FSS for new function families. We present an efficient PRG-based 2-party FSS scheme for the family of decision trees, leaking only the topology of the tree and the internal node labels. We apply this towards FSS for multi-dimensional intervals. We also present a general technique for extending FSS schemes by increasing the number of parties.
- Verifiable FSS. We present efficient protocols for verifying that keys (k_1,...,k_m), obtained from a potentially malicious user, are consistent with some f \in F. Such a verification may be critical for applications that involve private writing or voting by many users.
Elette Boyle, Niv Gilboa, and Yuval Ishai
Breaking the Circuit-Size Barrier for Secure Computation Under DDH
CRYPTO 2016 – International Cryptology Conference.
Best Paper Award
[Abstract]
[PDF]
Under the Decisional Diffie-Hellman (DDH) assumption, we present a 2-out-of-2 secret sharing scheme that supports a compact evaluation of branching programs on the shares. More concretely, there is an evaluation algorithm Eval with a single bit of output, such that if an input w \in {0,1}^n is shared into (w^0,w^1), then for any deterministic branching program P of size S we have that Eval(P,w^0) xor Eval(P,w^1) = P(w), except with at most \delta failure probability. The running time of the sharing algorithm is polynomial in n and the security parameter \lambda, and that of Eval is polynomial in S, \lambda, and 1/\delta. This applies as a special case to boolean formulas of size S or boolean circuits of depth log S. We also present a public-key variant that enables homomorphic computation on inputs contributed by multiple clients. The above result implies the following DDH-based applications:
- A secure 2-party computation protocol for evaluating any branching program of size S, where the communication complexity is linear in the input size and only the running time grows with S.
- A secure 2-party computation protocol for evaluating any layered boolean circuit of size S and m outputs with communication complexity O(S/log S) + m*poly(\lambda).
- A 2-party function secret sharing scheme, as defined by Boyle et al. (Eurocrypt 2015), for general branching programs (with inverse polynomial error probability).
- A 1-round 2-server private information retrieval scheme supporting general searches expressed by branching programs.
Elette Boyle and Moni Naor
Is There an Oblivious RAM Lower Bound?
ITCS 2016 – Innovations in Theoretical Computer Science.
[Abstract]
[PDF]
An Oblivious RAM (ORAM), introduced by Goldreich and Ostrovsky (JACM 1996), is a (probabilistic) RAM that hides its access pattern, i.e. for every input the observed locations accessed are similarly distributed. Great progress has been made in recent years in minimizing the overhead of ORAM constructions, with the goal of obtaining the smallest overhead possible.
We revisit the lower bound on the overhead required to obliviously simulate programs, due to Goldreich and Ostrovsky. While the lower bound is fairly general, including the offline case, when the simulator is given the reads and writes ahead of time, it does assume that the simulator behaves in a "balls and bins" fashion. That is, the simulator must act by shuffling data items around, and is not allowed to have sophisticated encoding of the data.
We prove that for the offline case, showing a lower bound without the above restriction is related to the size of the circuits for sorting. Our proof is constructive, and uses a bit-slicing approach which manipulates the bit representations of data in the simulation. This implies that without obtaining yet unknown superlinear lower bounds on the size of such circuits, we cannot hope to get lower bounds on offline (unrestricted) ORAMs.
Elette Boyle, Kai-Min Chung, and Rafael Pass
Oblivious Parallel RAM
TCC 2016 – Theory of Cryptography Conference.
[Abstract]
[PDF]
A machine is said to be oblivious if the sequences of memory accesses made by the machine for two inputs of the same running time are identically (or close to identically) distributed. Oblivious RAM (ORAM) compilers – compilers that turn any RAM program $\Pi$ into an oblivious RAM $\Pi'$, while only incurring a "small", polylogarithmic slowdown – have been extensively studied since the work of Goldreich and Ostrovsky (JACM 1996) and have numerous fundamental applications. These compilers, however, do not leverage parallelism: even if $\Pi$ can be highly parallelized, $\Pi'$ will be inherently sequential.
In this work we present the first Oblivious Parallel RAM (OPRAM) compiler, which compiles any PRAM into an oblivious PRAM while only incurring a polylogarithmic slowdown.
Elette Boyle, Niv Gilboa, and Yuval Ishai
Function Secret Sharing
Advances in Cryptology – EUROCRYPT 2015.
[Abstract]
[PDF]
Motivated by the goal of securely searching and updating distributed data, we introduce and study the notion of function secret sharing (FSS). This new notion is a natural generalization of distributed point functions (DPF), a primitive that was recently introduced by Gilboa and Ishai (Eurocrypt 2014). Given a positive integer $p \ge 2$ and a class $\mathcal{F}$ of functions $f: \{0,1\}^n \to \mathbb{G}$, where $\mathbb{G}$ is an Abelian group, a $p$-party FSS scheme for $\mathcal{F}$ allows one to split each $f \in \mathcal{F}$ into $p$ succinctly described functions $f_i: \{0,1\}^n \to \mathbb{G}$, $1 \le i \le p$, such that: (1) $\sum_{i=1}^p f_i = f$, and (2) any strict subset of the $f_i$ hides $f$. Thus, an FSS for $\mathcal{F}$ can be thought of as a method for succinctly performing an "additive secret sharing" of functions from $\mathcal{F}$. The original definition of DPF coincides with a two-party FSS for the class of point functions, namely the class of functions that have a nonzero output on at most one input.
We present two types of results. First, we obtain efficiency improvements and extensions of the original DPF construction. Then, we initiate a systematic study of general FSS, providing some constructions and establishing relations with other cryptographic primitives. More concretely, we obtain the following main results:
1. Improved DPF. We present an improved (twoparty) DPF construction from a pseudorandom generator (PRG), reducing the length of the key describing each $f_i$ from $O(\lambda \cdot n^{\log_2(3)})$ to $O(\lambda n)$, where $\lambda$ is the PRG seed length.
2. Multiparty DPF. We present the first non-trivial construction of a $p$-party DPF for $p \ge 3$, obtaining a near-quadratic improvement over a naive construction that additively shares the truth-table of $f$. This construction too can be based on any PRG.
3. FSS for simple functions. We present efficient PRG-based FSS constructions for natural function classes that extend point functions, including interval functions and partial matching functions.
4. A study of general FSS. We show several relations between general FSS and other cryptographic primitives. These include a construction of general FSS via obfuscation, an indication for the implausibility of constructing general FSS from weak cryptographic assumptions such as the existence of one-way functions, a completeness result, and a relation with pseudorandom functions.
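The naive multi-party DPF baseline mentioned in item 2 can be sketched directly: additively share the 2^n-entry truth table of the point function f_{a,b}, so each key has size exponential in n. This is exactly the baseline the paper's PRG-based constructions improve on, but it cleanly exhibits the FSS syntax; the group size and parameters below are arbitrary choices for illustration.

```python
# Naive p-party DPF: additively secret-share the truth table of the
# point function f_{a,b} (value b at input a, 0 elsewhere) over Z_G.
# Any p-1 keys are uniformly random, hence hide f; all p keys sum
# pointwise to f. Key size is O(2^n), the cost the paper improves.
import random

G = 2**16  # toy Abelian group Z_G

def gen_naive_dpf(n, a, b, p):
    table = [b if x == a else 0 for x in range(2**n)]
    keys = [[random.randrange(G) for _ in range(2**n)] for _ in range(p - 1)]
    # Last key is chosen so that all p keys sum to the truth table.
    last = [(table[x] - sum(k[x] for k in keys)) % G for x in range(2**n)]
    keys.append(last)
    return keys  # key i describes the share function f_i(x) = keys[i][x]

def eval_dpf(key, x):
    return key[x]

n, a, b, p = 4, 11, 7, 3
keys = gen_naive_dpf(n, a, b, p)
for x in range(2**n):
    assert sum(eval_dpf(k, x) for k in keys) % G == (b if x == a else 0)
```

The multi-party construction in the paper (item 2) achieves a near-quadratic improvement over this exponential key size, and the PRG-based two-party DPF (item 1) brings it down to O(\lambda n).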
Elette Boyle, Kai-Min Chung, and Rafael Pass
Large-Scale Secure Computation
CRYPTO 2015 – International Cryptology Conference.
[Abstract]
[PDF]
We are interested in secure computation protocols in settings where the number of parties is huge and their data even larger. Assuming the existence of a single-use broadcast channel (per player), we demonstrate statistically secure computation protocols for computing (multiple) arbitrary dynamic RAM programs over parties' inputs, handling a (1/3 - eps) fraction of static corruptions, while preserving up to polylogarithmic factors the computation and memory complexities of the RAM program. Additionally, our protocol is load-balanced and has polylogarithmic communication locality.
Elette Boyle and Rafael Pass
Limits of Extractability Assumptions with Distributional Auxiliary Input
Asiacrypt 2015 – International Conference on the Theory and Applications of Cryptology and Information Security.
[Abstract]
[PDF]
Extractability, or "knowledge," assumptions have recently gained popularity in the cryptographic community, leading to the study of primitives such as extractable one-way functions, extractable hash functions, succinct non-interactive arguments of knowledge (SNARKs), and (public-coin) differing-inputs obfuscation ((PC-)diO), and spurring the development of a wide spectrum of new applications relying on these primitives. For most of these applications, it is required that the extractability assumption holds even in the presence of attackers receiving some auxiliary information that is sampled from some fixed efficiently computable distribution Z.
We show that, assuming the existence of public-coin collision-resistant hash functions, there exists an efficient distribution Z such that either
- PC-diO for Turing machines does not exist, or
- extractable one-way functions w.r.t. auxiliary input Z do not exist.
A corollary of this result shows that, additionally assuming the existence of fully homomorphic encryption with decryption in NC1, there exists an efficient distribution Z such that either
- SNARKs for NP w.r.t. auxiliary input Z do not exist, or
- PC-diO for NC1 circuits does not exist.
To achieve our results, we develop a "succinct punctured program" technique, mirroring the powerful punctured program technique of Sahai and Waters (STOC '14), and present several other applications of this new technique. In particular, we construct succinct perfect zero-knowledge SNARGs and give a universal instantiation of random oracles in full-domain hash applications, based on PC-diO.
As a final contribution, we demonstrate that even in the absence of auxiliary input, care must be taken when making use of extractability assumptions. We show that (standard) diO w.r.t. any distribution D over programs and bounded-length auxiliary input is directly implied by any obfuscator that satisfies the weaker indistinguishability obfuscation (iO) security notion and diO for a slightly modified distribution D' of programs (of slightly greater size) and no auxiliary input. As a consequence, we directly obtain negative results for (standard) diO in the absence of auxiliary input.
Elette Boyle,
Kai-Min Chung, and Rafael Pass
On Extractability (a.k.a. DifferingInputs) Obfuscation
TCC 2014 – Theory of Cryptography Conference. [Abstract] [PDF]
We initiate the study of extractability obfuscation, a notion first suggested by Barak et al. (JACM 2012): An extractability obfuscator eO for a class of algorithms M guarantees that if an efficient attacker A can distinguish between obfuscations eO(M_1), eO(M_2) of two algorithms M_1,M_2 \in M, then A can efficiently recover (given M_1 and M_2) an input on which M_1 and M_2 provide different outputs.
- We rely on the recent candidate virtual black-box obfuscation constructions to provide candidate constructions of extractability obfuscators for NC^1; next, following the blueprint of Garg et al. (FOCS 2013), we show how to bootstrap the obfuscator for NC^1 to an obfuscator for all non-uniform polynomial-time Turing machines. In contrast to the construction of Garg et al., which relies on indistinguishability obfuscation for NC^1, our construction enables succinctly obfuscating non-uniform Turing machines (as opposed to circuits), without turning running time into description size.
- We introduce a new notion of functional witness encryption, which enables encrypting a message m with respect to an instance x, language L, and function f, such that anyone (and only those) who holds a witness w for x \in L can compute f(m,w) on the message and the particular known witness. We show that functional witness encryption is, in fact, equivalent to extractability obfuscation.
- We demonstrate other applications of extractability obfuscation, including the first construction of fully (adaptive-message) indistinguishability-secure functional encryption for an unbounded number of key queries and unbounded message spaces.
- We finally relate indistinguishability obfuscation and extractability obfuscation and show special cases when indistinguishability obfuscation can be turned into extractability obfuscation.
Elette Boyle,
Shafi Goldwasser, and Ioana Ivan
Functional Signatures and Pseudorandom Functions
PKC 2014 – International Conference on Practice and Theory of Public-Key Cryptography. [Abstract] [PDF]
In this paper, we introduce two new cryptographic primitives: functional digital signatures and functional pseudorandom functions.
In a functional signature scheme, in addition to a master signing key that can be used to sign any message, there are signing keys for a function f, which allow one to sign any message in the range of f. As a special case, this implies the ability to generate keys for predicates P, which allow one to sign any message m for which P(m) = 1.
We show applications of functional signatures to constructing succinct non-interactive arguments and delegation schemes. We give several general constructions for this primitive based on different computational hardness assumptions, and describe the tradeoffs between them in terms of the assumptions they require and the size of the signatures.
In a functional pseudorandom function, in addition to a master secret key that can be used to evaluate the pseudorandom function F on any point in the domain, there are additional secret keys for a function f, which allow one to evaluate F on any y for which there exists an x such that f(x)=y. As a special case, this implies pseudorandom functions with selective access, where one can delegate the ability to evaluate the pseudorandom function on inputs y for which a predicate P(y)=1 holds. We define and provide a sample construction of a functional pseudorandom function family for prefix-fixing functions.
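The prefix-fixing special case admits a well-known GGM-style instantiation: the key for a prefix p is simply the internal node of the GGM tree at path p. A minimal Python sketch, using SHA-256 as a stand-in for a length-doubling PRG (illustrative only, not necessarily the paper's exact construction):

```python
import hashlib

def _step(seed: bytes, bit: str) -> bytes:
    # One PRG step; in the real construction this is half the output of a
    # length-doubling PRG G(seed) = (G_0(seed), G_1(seed)). SHA-256 stands in.
    return hashlib.sha256(seed + bit.encode()).digest()

def prf(master_seed: bytes, x: str) -> bytes:
    """GGM-style PRF: walk the tree from the root along the bits of x."""
    s = master_seed
    for b in x:
        s = _step(s, b)
    return s

def constrain(master_seed: bytes, prefix: str) -> bytes:
    """Key for the prefix-fixing function: the GGM node at path `prefix`."""
    return prf(master_seed, prefix)

def eval_constrained(prefix_key: bytes, prefix: str, x: str) -> bytes:
    """Evaluate F on x using only the prefix key; works iff x extends prefix."""
    if not x.startswith(prefix):
        raise ValueError("input outside the constrained set")
    s = prefix_key
    for b in x[len(prefix):]:
        s = _step(s, b)
    return s
```

A holder of `constrain(k, "01")` can evaluate the PRF on any input beginning with "01", and on nothing else.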
Elette Boyle,
Sanjam Garg, Abhishek Jain, Yael Tauman Kalai, and Amit Sahai
Secure Computation Against Adaptive Auxiliary Information
CRYPTO 2013 – International Conference on Cryptology. [Abstract] [PDF]
We study the problem of secure two-party and multiparty computation (MPC) in a setting where a cheating polynomial-time adversary can corrupt an arbitrary subset of parties and, in addition, learn arbitrary auxiliary information on the entire states of all honest parties (including their inputs and random coins), in an adaptive manner, throughout the protocol execution. We formalize a definition of multiparty computation secure against adaptive auxiliary information (AAI-MPC), that intuitively guarantees that such an adversary learns no more than the function output and the adaptive auxiliary information. In particular, if the auxiliary information contains only partial, "noisy," or computationally invertible information on secret inputs, then only such information should be revealed.
We construct a universally composable AAI two-party and multiparty computation protocol that realizes any (efficiently computable) functionality against malicious adversaries in the common reference string model, based on the linear assumption over bilinear groups and the n-th residuosity assumption. Apart from theoretical interest, our result has interesting applications to the regime of leakage-resilient cryptography.
At the heart of our construction is a new two-round oblivious transfer protocol secure against malicious adversaries who may receive adaptive auxiliary information. This may be of independent interest.
Elette Boyle,
Shafi Goldwasser, and Stefano Tessaro
Communication Locality in Secure Multiparty Computation:
How to Run Sublinear Algorithms in a Distributed Setting
TCC 2013 – Theory of Cryptography Conference. [Abstract] [PDF]
We devise multiparty computation protocols for general secure function evaluation with the property that each party is only required to communicate with a small number of dynamically chosen parties. More explicitly, starting with n parties connected via a complete and synchronous network, our protocol requires each party to send messages to (and process messages from) at most polylog(n) other parties using polylog(n) rounds. It achieves secure computation of any polynomial-time computable randomized function f under cryptographic assumptions, and tolerates up to (1/3 - \eps)n statically scheduled Byzantine faults.
We then focus on the particularly interesting setting in which the function to be computed is a sublinear algorithm: an evaluation of f depends on the inputs of at most q = o(n) of the parties, where the identity of these parties can be chosen randomly and possibly adaptively. Typically, q = polylog(n). While the sublinear query complexity of f makes it possible in principle to dramatically reduce the communication complexity of our general protocol, the challenge is to achieve this while maintaining security: in particular, while keeping the identities of the selected inputs completely hidden. We solve this challenge, and we provide a protocol for securely computing such sublinear f that runs in polylog(n) + O(q) rounds, has each party communicating with at most q*polylog(n) other parties, and supports message sizes polylog(n) * (l + n), where l is the parties' input size.
Our optimized protocols rely on a multisignature scheme, fully homomorphic encryption (FHE), and simulation-sound adaptive NIZK arguments. However, we remark that multisignatures and FHE are used only to obtain our bounds on message size and round complexity. Assuming only standard digital signatures and public-key encryption, one can still obtain the property that each party only communicates with polylog(n) other parties. We emphasize that the scheduling of faults can depend on the initial PKI setup of digital signatures and the NIZK parameters.
Elette Boyle,
Shafi Goldwasser, Abhishek Jain, and Yael Tauman Kalai
Multi-Party Computation Secure Against Continual Memory Leakage
STOC 2012 – ACM Symposium on Theory of Computing. [Abstract] [PDF]
We construct a multiparty computation (MPC) protocol that is secure even if a malicious adversary, in addition to corrupting a 1 - \eps fraction of all parties for an arbitrarily small constant \eps > 0, can leak information about the secret state of each honest party. This leakage can be continuous for an unbounded number of executions of the MPC protocol, computing different functions on the same or different set of inputs. We assume a (necessary) "leak-free" preprocessing stage.
We emphasize that we achieve leakage resilience without weakening the security guarantee of classical MPC. Namely, an adversary who is given leakage on honest parties' states is guaranteed to learn nothing beyond the input and output values of corrupted parties. This is in contrast with previous works on leakage in the multiparty protocol setting, which weaken the security notion and only guarantee that a protocol which leaks l bits about the parties' secret states yields at most l bits of leakage on the parties' private inputs. For some functions, such as voting, such leakage can be detrimental.
Our result relies on standard cryptographic assumptions, and our security parameter is polynomially related to the number of parties.
Elette Boyle,
Shafi Goldwasser, and Yael Tauman Kalai
Leakage-Resilient Coin Tossing
DISC 2011 – The International Symposium on Distributed Computing.
Invited to Distributed Computing. [Abstract] [PDF]
The ability to collectively toss a common coin among n parties in the presence of faults is an important primitive in the arsenal of randomized distributed protocols. In the case of dishonest majority, it was shown to be impossible to achieve less than 1/r bias in O(r) rounds (Cleve STOC '86). In the case of honest majority, in contrast, unconditionally secure O(1)-round protocols for generating common perfectly unbiased coins follow from general completeness theorems on multiparty secure protocols in the perfectly secure channels model (e.g., BGW, CCD STOC '88).
However, in multiparty protocols with a faulty minority, parties need to generate and hold local secret values which are assumed to be perfectly hidden from malicious parties: an assumption which is crucial to proving that the resulting common coin is unbiased. This assumption unfortunately does not seem to hold in practice, as attackers can launch side-channel attacks on the local state of honest parties and leak information on their secrets.
In this work, we present an O(1)-round protocol for collectively generating an unbiased common coin, in the presence of leakage on the local state of the honest parties. We tolerate t <= (1/3 - \eps)n computationally unbounded Byzantine faults and, in addition, an \Omega(1) fraction of leakage on each (honest) party's secret state. Our results hold in the memory leakage model (of Akavia, Goldwasser, Vaikuntanathan '08) adapted to the distributed setting.
Another contribution of our work is a tool we use to achieve collective coin flipping: leakage-resilient verifiable secret sharing. Informally, this is a variant of ordinary VSS in which secrecy guarantees are maintained even if information is leaked on individual shares of the secret.
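For context, the individual shares in question come from the polynomial secret sharing that underlies standard VSS protocols. A minimal Shamir sharing and reconstruction sketch in Python (illustrative only; the paper's leakage-resilient VSS adds verification and leakage protection not shown here):

```python
import random

P = 2**61 - 1  # a Mersenne prime, serving as the field modulus for this sketch

def share(secret: int, t: int, n: int, rng=random.Random(0)):
    """Split `secret` into n shares; any t+1 of them reconstruct it,
    while any t of them reveal nothing (information-theoretically)."""
    coeffs = [secret] + [rng.randrange(P) for _ in range(t)]  # degree-t polynomial
    return [(i, sum(c * pow(i, j, P) for j, c in enumerate(coeffs)) % P)
            for i in range(1, n + 1)]

def reconstruct(shares):
    """Recover the secret by Lagrange interpolation at x = 0."""
    s = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        s = (s + yi * num * pow(den, P - 2, P)) % P  # Fermat inverse of den
    return s
```

Leakage on individual shares corresponds to partial information on the points (i, p(i)); the leakage-resilient VSS of the paper keeps the secret hidden even then.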
Elette Boyle,
Gil Segev, and Daniel Wichs
Fully Leakage-Resilient Signatures
Advances in Cryptology – EUROCRYPT 2011.
Invited to Journal of Cryptology. [Abstract] [PDF]
A signature scheme is fully leakage resilient (Katz and Vaikuntanathan, ASIACRYPT '09) if it is existentially unforgeable under an adaptive chosen-message attack even in a setting where an adversary may obtain bounded (yet arbitrary) leakage information on all intermediate values that are used throughout the lifetime of the system. This is a strong and meaningful notion of security that captures a wide range of side-channel attacks.
One of the main challenges in constructing fully leakage-resilient signature schemes is dealing with leakage that may depend on the random bits used by the signing algorithm, and constructions of such schemes are known only in the random-oracle model. Moreover, even in the random-oracle model, known schemes are only resilient to leakage of less than half the length of their signing key.
In this paper we construct the first fully leakage-resilient signature schemes without random oracles. We present a scheme that is resilient to any leakage of length (1 - o(1))L bits, where L is the length of the signing key. Our approach relies on generic cryptographic primitives, and at the same time admits rather efficient instantiations based on specific number-theoretic assumptions. In addition, we show that our approach extends to the continual-leakage model, recently introduced by Dodis, Haralambiev, Lopez-Alt and Wichs (FOCS '10), and by Brakerski, Tauman Kalai, Katz and Vaikuntanathan (FOCS '10). In this model the signing key is allowed to be refreshed, while its corresponding verification key remains fixed, and the amount of leakage is assumed to be bounded only in between any two successive key refreshes.
Elette Boyle
and Federico Echenique
Sequential Entry in Many-to-One Matching Markets
Social Choice and Welfare, Springer, Vol 33(1), June 2009: pp 87-99. [Abstract] [PDF]
We study sequential bargaining in many-to-one matching markets. We show that there is an advantage to entering late in the market, and that the last agent to enter the market will receive his or her best partner in a stable matching, extending the results of Blum and Rothblum (J Econ Theory 103(2):429-443, 2002) and Cechlárová (Randomized matching mechanism revisited. Mimeo, Safarik University, 2002) for the marriage model. We also discuss the relation between sequential bargaining and a possible alternative formulation based on the NTU Shapley value.
Elette Boyle
and Robert McEliece
Asymptotic Weight Enumerators of Randomly Punctured, Expurgated, and Shortened Code Ensembles
46th Annual Allerton Conference on Communication, Control, and Computing, 2008. [Abstract] [PDF]
In this paper, we examine the effect of random puncturing, expurgating, and shortening on the asymptotic weight enumerator of certain linear code ensembles. We begin by discussing the actions of the three alteration methods on individual codes. We derive expressions for the average resulting code weight enumerator under each alteration. We then extend these results to the spectral shape of linear code ensembles whose original spectral shape is known, and demonstrate our findings on two specific code ensembles: the Shannon ensemble and the regular (j, k) Gallager ensemble.
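For a toy illustration of the objects involved, the weight enumerator of a small binary linear code, and the effect of puncturing a coordinate, can be computed by exhaustive enumeration (the paper itself works with asymptotic ensemble averages, not exhaustive counts):

```python
from itertools import product

def weight_enumerator(G, n):
    """A[w] = number of codewords of Hamming weight w in the GF(2) code
    generated by the k x n generator matrix G (rows as 0/1 lists)."""
    A = [0] * (n + 1)
    for msg in product([0, 1], repeat=len(G)):
        cw = [0] * n
        for m, row in zip(msg, G):
            if m:
                cw = [a ^ b for a, b in zip(cw, row)]  # add row over GF(2)
        A[sum(cw)] += 1
    return A

def puncture(G, i):
    """Puncturing: delete coordinate i from every row of the generator matrix."""
    return [row[:i] + row[i + 1:] for row in G]
```

For a [7,4] Hamming code this recovers the familiar weight distribution 1 + 7x^3 + 7x^4 + x^7, and puncturing shows how the distribution shifts toward lower weights.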
Elette Boyle
Navigation on Mars: Validation of the Mars Science Laboratory Rover Hazard Avoidance Algorithm
Caltech Undergraduate Research Journal Vol 5, 2006: pp 21-25.
[Abstract]
The Mars Science Laboratory (MSL) is a long-range, long-duration roving laboratory planned for launch in 2009. While humans can issue commands to the rover on Mars, the delay in transmission means navigation functions requiring immediate action, such as recognizing hazards and locating pathways of safety, must be controlled autonomously. The autonomous navigation software currently used in Mars rovers is designed for use in terrains with minimal inclines and few obstacles. However, for MSL, NASA is developing a more advanced version of the Mars Exploration Rover (MER) GESTALT hazard detection and avoidance software. Ensuring that GESTALT functions correctly and effectively is a crucial task, since a malfunction could result in failure of the entire mission. Before use on Mars, GESTALT performance must be verified in a cross-section of potential terrains, including regions with rocks, craters, and slopes. The purpose of this summer research project was to test GESTALT in terrains with a range of rock hazard densities.
