
The IDC CS Colloquium

Gail Gilboa Freedman: "On Characterization of Privacy Loss"

Our research aims to formalize the conceptual notion of privacy.

We think of a privacy-jeopardizing mechanism as a process that publishes a signal according to a probability distribution determined by the value of a secret. Given any two privacy-jeopardizing mechanisms, it is natural to ask which is preferable in the context of privacy. We follow a decision-theoretic approach and capture the natural preference in such situations by listing ordinal axioms. These axioms define a preference relation model. We prove that this relation is represented by f-divergence.

We also follow the axiomatization approach in the reverse direction in order to characterize differential privacy, the de facto standard in the computer science literature. The preference relation represented by differential privacy exhibits some properties that we find unnatural.

Our study leads to a recommendation to measure privacy loss by f-divergence functions, such as KL-divergence or Hellinger distance.

Joint work with Prof. Rann Smorodinsky and Prof. Kobbi Nissim.
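As a rough illustration of the recommended measures (not the talk's construction), the sketch below computes the KL-divergence and Hellinger distance between the signal distributions that a hypothetical mechanism induces under two values of the secret; the distributions are made up for the example.

```python
import numpy as np

def kl_divergence(p, q):
    """KL-divergence D(p || q) between discrete distributions (assumes q > 0 wherever p > 0)."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def hellinger_distance(p, q):
    """Hellinger distance between discrete distributions."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return float(np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2)))

# Hypothetical mechanism: distribution of the published signal under two secret values.
signal_given_secret0 = [0.7, 0.2, 0.1]
signal_given_secret1 = [0.4, 0.3, 0.3]

print("KL       :", kl_divergence(signal_given_secret0, signal_given_secret1))
print("Hellinger:", hellinger_distance(signal_given_secret0, signal_given_secret1))
```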

31/03/2016 - 13:30

Jessica Cauchard: "On body and Out of body Interactions"

Mobile devices have become ubiquitous over the last decade, changing the way we interact with technology and with one another. Mobile devices were at first personal devices carried in our hands or pockets. They are now changing form to fit our lifestyles and the increasing amount and diversity of information they must display. My research focuses on the design, development, and evaluation of novel interaction techniques with mobile devices using a human-centered approach.

07/04/2016 - 13:30

Erez Kantor: "Replication and Erasure Codes in Modern Protocols"

Modern information networks often require maintaining multiple copies of the same data items in order to cope with failures, provide robustness to errors, and allow fast access. The traditional approach to designing data multiplication protocols relies on replication. A more recent and powerful tool, which facilitates low storage as well as low communication costs, is erasure coding: a message of k symbols is transformed into a codeword of n symbols such that the original message can be recovered from a subset of the n symbols.
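As a rough illustration of the coding idea (a generic Reed-Solomon-style construction, not necessarily the protocols discussed in the talk), the sketch below encodes k message symbols into n codeword symbols by evaluating the message polynomial over the prime field GF(257); any k surviving symbols suffice to recover the message, in contrast to storing n full copies under replication.

```python
# A minimal sketch, assuming byte-valued message symbols (0..255) and n < 257.
P = 257  # prime modulus

def encode(message, n):
    """Encode k message symbols as n codeword symbols: evaluate the
    message polynomial at the points x = 1..n."""
    return [sum(m * pow(x, i, P) for i, m in enumerate(message)) % P
            for x in range(1, n + 1)]

def decode(points, k):
    """Recover the k message symbols from any k surviving (x, value)
    pairs by Lagrange interpolation over GF(P)."""
    xs, ys = zip(*points)
    coeffs = [0] * k
    for j in range(k):
        # Lagrange basis polynomial L_j(X) = prod_{m != j} (X - x_m) / (x_j - x_m)
        numer, denom = [1], 1
        for m in range(k):
            if m == j:
                continue
            numer = [(prev - xs[m] * cur) % P            # multiply by (X - x_m)
                     for cur, prev in zip(numer + [0], [0] + numer)]
            denom = (denom * (xs[j] - xs[m])) % P
        inv_denom = pow(denom, P - 2, P)                  # modular inverse
        for i, c in enumerate(numer):
            coeffs[i] = (coeffs[i] + ys[j] * c * inv_denom) % P
    return coeffs

message = [72, 105, 33]                      # k = 3 message symbols
codeword = encode(message, n=5)              # n = 5 codeword symbols
# Lose two symbols; any 3 surviving (x, value) pairs still recover the message.
survivors = [(1, codeword[0]), (3, codeword[2]), (5, codeword[4])]
assert decode(survivors, k=3) == message
```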

26/01/2016 - 13:30

Dvir Netanely: "Unsupervised analysis of high-throughput genomic data for the identification of breast cancer subtypes"

A major theme in current bioinformatics is the computational analysis of biological data produced by modern high-throughput measurement technologies. These technologies enable the examination of a tissue sample from different biological aspects. Each technology produces a high-resolution profile composed of hundreds to thousands of features that describe the sample at a certain biological level.
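As a rough illustration of such unsupervised analysis (synthetic data, not the talk's pipeline), the sketch below clusters hypothetical high-dimensional sample profiles into putative subtypes.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Hypothetical expression matrix: 90 samples x 2000 features, drawn from
# three shifted Gaussians standing in for three subtypes.
profiles = np.vstack([rng.normal(loc=mu, size=(30, 2000)) for mu in (-1.0, 0.0, 1.0)])

reduced = PCA(n_components=10).fit_transform(profiles)            # compress / denoise
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(reduced)
print("samples per putative subtype:", np.bincount(labels))
```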

14/01/2016 - 13:30

Zohar Yakhini: "Computer science and high throughput measurement in molecular biology"

Modern molecular biology measurement techniques such as microarrays, sequencing, and mass spectrometry produce large amounts of data and are often applied to large sets of samples. To interpret these data, scientists apply statistical and data mining techniques that are tuned to identify certain structures in the data and to assess their statistical significance. Personalized medicine is largely driven by findings that stem from a combination of high-throughput measurement and effective data analysis.
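As a rough illustration of one such significance assessment (illustrative numbers only, not results from the talk), the sketch below runs a hypergeometric enrichment test asking whether a gene set is over-represented among the genes flagged by a high-throughput experiment.

```python
from scipy.stats import hypergeom

N = 20000   # genes measured
K = 150     # genes annotated to a pathway of interest
n = 400     # genes flagged as differentially expressed
k = 12      # flagged genes that are also in the pathway

# P(X >= k) under random draws, i.e. the enrichment p-value.
p_value = hypergeom.sf(k - 1, N, K, n)
print(f"enrichment p-value: {p_value:.3g}")
```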

17/03/2016 - 13:30

Dana Fisman: "Towards Synthesis in Real Life"

System synthesis refers to the task of automatically generating an executable component of a system (e.g. a software or hardware component) from a specification of the component's behavior. The traditional formalization of the problem assumes the specification is given in a logical formalism, and the resulting computational problem is typically intractable. Recent approaches to system synthesis relax the problem definition and consider a variety of inputs, including logical requirements, incomplete programs, and example behaviors.
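As a toy illustration of synthesis from example behaviors (not any specific system from the talk), the sketch below enumerates small arithmetic expressions until one is consistent with a set of hypothetical input-output pairs.

```python
from itertools import product

EXAMPLES = [(0, 1), (1, 3), (2, 5), (3, 7)]   # hypothetical (input, output) pairs

def synthesize(examples, max_const=5):
    # Candidate programs of the form x * a + b, with small integer constants.
    for a, b in product(range(-max_const, max_const + 1), repeat=2):
        if all(x * a + b == y for x, y in examples):
            return f"lambda x: x * {a} + {b}"
    return None

print(synthesize(EXAMPLES))   # e.g. "lambda x: x * 2 + 1"
```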

31/12/2015 - 13:30

Oren Freifeld: "From representation to inference: respecting and exploiting mathematical structures in computer vision and machine learning"

Stochastic analysis of real-world signals consists of three main parts: mathematical representation, probabilistic modeling, and statistical inference. For it to be effective, we need mathematically principled and practical computational tools that take into consideration not only each of these components by itself but also their interplay.
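As a toy illustration of why representation matters for inference (not an example from the talk), the sketch below contrasts a naive arithmetic mean of angles with the circular mean that respects their wrap-around structure.

```python
import numpy as np

angles_deg = np.array([350.0, 355.0, 5.0, 10.0])    # all near 0 degrees

naive_mean = angles_deg.mean()                       # 180.0 -- ignores circular structure
radians = np.deg2rad(angles_deg)
circular_mean = np.rad2deg(np.arctan2(np.sin(radians).mean(),
                                      np.cos(radians).mean())) % 360
print(naive_mean, circular_mean)                     # 180.0 vs. 0.0
```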

07/01/2016 - 13:30

Aharon Bar Hillel: "Large scale feature selection for visual representation learning"

Training accurate visual classifiers from large data sets critically depends on learning the right representation for the problem. I will discuss a representation learning framework based on an iterative interaction between two components: a feature generator suggesting candidate features, and a feature selector choosing among them.
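As a rough illustration of such a loop (not the speaker's system), the sketch below alternates between a generator proposing random candidate features and a greedy selector keeping those that improve cross-validated accuracy.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=300, n_features=50, n_informative=5, random_state=0)

def generate_candidates(n_candidates, rng):
    # Each candidate feature is a random linear projection of the raw inputs.
    return [rng.normal(size=X.shape[1]) for _ in range(n_candidates)]

rng = np.random.default_rng(0)
selected, best = [], 0.0
for _ in range(5):                                   # a few generate/select rounds
    for w in generate_candidates(20, rng):
        candidate = selected + [w]
        feats = X @ np.array(candidate).T
        score = cross_val_score(LogisticRegression(max_iter=1000), feats, y, cv=3).mean()
        if score > best:                             # greedy selection
            selected, best = candidate, score
print(f"{len(selected)} features selected, CV accuracy {best:.3f}")
```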

24/12/2015 - 13:30