Abstract:

Say Alice and Bob hold private inputs x and y, and wish to compute a function f(x,y) privately in the information-theoretic sense; that is, each party should learn nothing beyond f(x,y). However, the communication channel available to them is noisy and might introduce errors in the transmission between the two parties. Moreover, assume the channel is adversarial in the sense that it knows the protocol that Alice and Bob are running, and maliciously introduces errors to disrupt the communication, subject to some bound on the total number of errors. A fundamental question in this setting is to design a protocol that remains private in the presence of a large number of errors.

If Alice and Bob are only interested in computing f(x,y) correctly, and not privately, then quite robust protocols are known that can tolerate a constant fraction of errors. However, none of these solutions is applicable in the setting of privacy, as they inherently leak information about the parties' inputs.

In this talk we show that privacy and error-resilience are contradictory goals. In particular, we show that for every constant c > 0, there exists a function f(x,y) which is privately computable in the errorless setting, but for which no private and correct protocol is resilient against a c-fraction of errors.
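The impossibility claim above can be written as a formal statement; the following is only an illustrative sketch, and the exact quantifiers, privacy definition, and error model are assumptions filled in here rather than taken from the abstract.

```latex
% Informal sketch of the stated result; notation is illustrative,
% not copied from the paper.
\begin{theorem}[informal]
For every constant $c > 0$ there exists a two-party function $f(x,y)$
such that:
(i) $f$ is computable with information-theoretic privacy over a
noiseless channel, yet
(ii) no protocol that computes $f$ both correctly and privately remains
secure when an adversarial channel may corrupt up to a $c$-fraction of
the transmitted symbols.
\end{theorem}
```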

Joint work with Amit Sahai and Akshay Wadia, http://eprint.iacr.org/2013/259.