Our research aims to formalize the intuitive notion of privacy.
We model a privacy-jeopardizing mechanism as a process that publishes a signal according to a probability distribution determined by the value of a secret. Given any two privacy-jeopardizing mechanisms, it is natural to ask which is preferable from a privacy standpoint. Following a decision-theoretic approach, we capture the natural preference in such situations by a list of ordinal axioms. These axioms define a model of the preference relation, and we prove that this relation is represented by the well-known family of f-divergences.
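As a minimal sketch, in notation of our own choosing (the abstract itself fixes none): a mechanism is a family \{P_s\}_{s \in S} of distributions over a signal set X, one distribution per secret s, and for a convex function f with f(1) = 0 the f-divergence between two signal distributions P and Q over a discrete X is

    D_f(P \,\|\, Q) = \sum_{x \in X} Q(x)\, f\!\Big(\frac{P(x)}{Q(x)}\Big).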
We also follow the reverse direction of the axiomatization approach in order to characterize differential privacy, an ad hoc standard in the Computer Science literature. The preference relation represented by differential privacy exhibits some properties that we do not find natural.
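For reference, the standard differential-privacy guarantee, stated in the local form that matches the secret-to-signal mechanisms above (the notation P_s is ours, from the sketch): a mechanism is \varepsilon-differentially private if, for every pair of secrets s, s' and every set of signals E \subseteq X,

    P_s(E) \le e^{\varepsilon}\, P_{s'}(E).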
Our study leads to a recommendation to measure privacy loss by f-divergences, such as the KL divergence or the Hellinger distance.
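Both named examples are instances of D_f above; under one common normalization convention, their generating functions are

    f_{\mathrm{KL}}(t) = t \log t, \quad \text{giving} \quad D_{f_{\mathrm{KL}}}(P \,\|\, Q) = \sum_{x} P(x) \log\frac{P(x)}{Q(x)},

    f_{\mathrm{H}}(t) = \tfrac{1}{2}\big(\sqrt{t} - 1\big)^2, \quad \text{giving} \quad D_{f_{\mathrm{H}}}(P \,\|\, Q) = \tfrac{1}{2}\sum_{x} \big(\sqrt{P(x)} - \sqrt{Q(x)}\big)^2.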
Joint work with Prof. Rann Smorodinsky and Prof. Kobbi Nissim.