Colloquium (C)

Uri Shaham - Deep Learning for Representation Learning

In this talk I will present two deep learning-based algorithms for representation learning.
In the first half of the talk I will present SpectralNet, a deep learning approach to spectral clustering that is scalable and allows for straightforward out-of-sample extension.
In the second half of the talk I will present a deep learning approach for recovering a single independent component of interest, given another component as a condition.
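
A minimal sketch of the SpectralNet idea (the architecture, affinity bandwidth, and training loop below are illustrative assumptions, not the paper's implementation): a network is trained so that its orthogonalized outputs minimize the spectral clustering objective, after which new points are embedded by a single forward pass.

```python
import torch

def gaussian_affinity(x, sigma=1.0):
    # Pairwise Gaussian affinities W_ij = exp(-||x_i - x_j||^2 / (2 sigma^2))
    d2 = torch.cdist(x, x) ** 2
    return torch.exp(-d2 / (2 * sigma ** 2))

class SpectralNetSketch(torch.nn.Module):
    def __init__(self, in_dim, k):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(in_dim, 64), torch.nn.ReLU(),
            torch.nn.Linear(64, k),
        )

    def forward(self, x):
        y = self.net(x)
        # Orthogonalize the batch outputs (QR here stands in for the
        # paper's orthogonalization layer).
        q, _ = torch.linalg.qr(y)
        return q * y.shape[0] ** 0.5   # scaled so that (1/n) Y^T Y = I

x = torch.randn(256, 2)                # toy data
w = gaussian_affinity(x)
model = SpectralNetSketch(2, k=4)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):
    y = model(x)
    # Spectral clustering objective: sum_ij W_ij ||y_i - y_j||^2
    loss = (w * torch.cdist(y, y) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
# Out-of-sample extension: embed new points with a forward pass,
# then cluster the embeddings (e.g., with k-means).
```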

02/12/2021 - 13:30

Yehuda Dar - Generalization in Overparameterized Machine Learning

Deep neural networks are highly overparameterized models, i.e., they are highly complex, typically with many more parameters than training examples. Such overparameterized models are usually trained to fit their training data perfectly; yet, in sharp contrast to conventional machine learning guidelines, they still generalize extremely well to inputs outside their training dataset.
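
A toy illustration of the phenomenon, under assumptions of my own choosing (a spiked linear-regression design, not anything from the talk): a minimum-norm least-squares fit with ten times more parameters than samples interpolates its training data exactly, yet its test error stays far below that of the null predictor.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 50, 500                          # many more parameters than samples
scales = np.ones(p); scales[:5] = 10.0  # a few high-variance directions
beta = np.zeros(p); beta[:5] = 1.0      # the signal lives in those directions
X = rng.standard_normal((n, p)) * scales
y = X @ beta + 0.5 * rng.standard_normal(n)

beta_hat = np.linalg.pinv(X) @ y        # minimum-norm interpolating solution
print(np.allclose(X @ beta_hat, y))     # True: zero training error

X_test = rng.standard_normal((10_000, p)) * scales
y_test = X_test @ beta
print(np.mean((X_test @ beta_hat - y_test) ** 2),  # small test error...
      np.mean(y_test ** 2))                        # ...vs. predicting zero
```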

18/11/2021 - 13:30

Jonathan Mosheiff - Derandomization of elementary random codes

An error-correcting code should ideally 1) have a high rate, 2) be noise tolerant, and 3) be efficiently decodable. Elementary probabilistic constructions, such as random linear codes, achieve an excellent trade-off between the first two objectives, but unfortunately, decoding them is believed to be algorithmically hard. This motivates us to derandomize these constructions in a way that preserves noise tolerance, while adding structure that can be used for algorithmic purposes.
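
A toy experiment (my own, not from the talk) illustrating the first claim: a random linear code over GF(2) typically has a good rate/distance trade-off, even though no efficient decoder for it is known. The parameters below are illustrative.

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)
k, n = 5, 15                              # rate k/n = 1/3
G = rng.integers(0, 2, size=(k, n))       # random generator matrix

def min_distance(G):
    # Minimum Hamming weight over all nonzero codewords mG (mod 2).
    # (A rank-deficient G, which is rare, would report distance 0.)
    k = G.shape[0]
    weights = (np.mod(np.array(m) @ G, 2).sum()
               for m in itertools.product([0, 1], repeat=k) if any(m))
    return int(min(weights))

print(min_distance(G))  # typically near the Gilbert-Varshamov trade-off
```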

23/12/2021 - 11:30

Tal Golan - Bridging visual object recognition and deep neural network models by means of model-driven experimentation

Deep neural networks (DNNs) provide the leading stimulus-computable model of biological visual object recognition, but their power and flexibility come at a price. Due to their capacity to absorb massive data, distinct DNN models often make very similar predictions when tested on stimuli sampled from their training distribution. To enable continual refinement and improvement of DNNs as scientific hypotheses about biological vision, we must be able to compare alternative models efficiently.
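
One natural way to make such comparisons efficient, sketched here under assumptions of my own (modelA, modelB, and all hyperparameters are hypothetical stand-ins for two differentiable classifiers), is to synthesize stimuli on which the models disagree, so that observed responses can adjudicate between them.

```python
import torch

def controversial_stimulus(modelA, modelB, class_a, class_b,
                           shape=(1, 3, 64, 64), steps=200, lr=0.05):
    # Optimize an image so that modelA favors class_a while modelB
    # favors class_b; such stimuli maximally separate the two models.
    x = torch.rand(shape, requires_grad=True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        loss = (-modelA(x).log_softmax(-1)[0, class_a]
                - modelB(x).log_softmax(-1)[0, class_b])
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            x.clamp_(0, 1)   # keep the stimulus a valid image
    return x.detach()
```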

25/11/2021 - 13:30

Ofir Lindenbaum - Machine Learning for Scientific Discovery

The growth of computational resources in the natural sciences motivates the use of machine learning for automated scientific discovery. However, unstructured empirical datasets are often high-dimensional, unlabeled, and imbalanced. Therefore, discarding irrelevant (i.e., noisy and information-poor) features is essential for the automated discovery of governing parameters in scientific environments. To address this challenge, I will present Stochastic Gates (STG), which rely on a Gaussian-based probabilistic relaxation of the L0 norm, i.e., of the number of selected features.
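
A minimal sketch of the stochastic-gates mechanism (the module structure and hyperparameters are illustrative, not the authors' implementation): each feature d gets a gate z_d = clip(mu_d + eps_d, 0, 1) with eps_d ~ N(0, sigma^2), and the expected number of open gates, sum_d Phi(mu_d / sigma), serves as a differentiable surrogate for the L0 penalty.

```python
import math
import torch

class StochasticGates(torch.nn.Module):
    def __init__(self, n_features, sigma=0.5):
        super().__init__()
        self.mu = torch.nn.Parameter(0.5 * torch.ones(n_features))
        self.sigma = sigma

    def forward(self, x):
        # Noisy gates at train time, deterministic gates at test time.
        noise = self.sigma * torch.randn_like(self.mu) if self.training else 0.0
        z = torch.clamp(self.mu + noise, 0.0, 1.0)   # gate values in [0, 1]
        return x * z

    def l0_penalty(self):
        # E[number of active gates] = sum_d P(mu_d + eps_d > 0)
        return torch.sum(0.5 * (1 + torch.erf(self.mu / (self.sigma * math.sqrt(2)))))
```

The gates are trained jointly with a downstream predictor by minimizing the task loss plus lam * l0_penalty(), which drives the gates of uninformative features to zero.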

21/01/2021 - 13:30

Alon Cohen - Between Online Learning and Reinforcement Learning

In this talk I will describe some of my work on Online Learning and Reinforcement Learning.
Online Learning is a classic sub-domain of Machine Learning that has made numerous contributions to fields such as Statistical Learning, Optimization, Decision Making, and others.
Unlike Reinforcement Learning, which focuses on planning long-term decisions in a non-adversarial environment, Online Learning focuses on making short-term decisions in the face of an adversary - and doing so efficiently.
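
A canonical instance of this setting (the textbook Hedge / multiplicative-weights algorithm, a standard example rather than anything specific to the talk): at each round the learner plays a distribution over actions, the adversary reveals losses, and the learner's regret to the best fixed action grows only sublinearly.

```python
import numpy as np

def hedge(loss_rounds, eta=0.1):
    # loss_rounds: a T x K array; row t holds the adversary's losses
    # in [0, 1] for each of the K actions at round t.
    loss_rounds = np.asarray(loss_rounds)
    K = loss_rounds.shape[1]
    w = np.ones(K)
    total = 0.0
    for losses in loss_rounds:
        p = w / w.sum()              # play a distribution over actions
        total += p @ losses          # suffer the expected loss
        w *= np.exp(-eta * losses)   # downweight actions that did badly
    return total

rng = np.random.default_rng(2)
losses = rng.uniform(size=(1000, 4))   # an (oblivious) adversary's losses
print(hedge(losses))   # regret to the best fixed action is O(sqrt(T log K))
```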

14/01/2021 - 13:30

Alon Kipnis - Two-sample problem for large, sparse, high-dimensional distributions under rare/weak perturbations

Consider two samples, each obtained by independent draws from two possibly different distributions over the same finite and large alphabet (features). We would like to test whether the two distributions are identical or not. We propose a method to perform a two-sample test of this form by computing feature-by-feature p-values based on a binomial allocation model and combining them using Higher Criticism. Performance on real-world data (e.g.
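
A sketch of the proposed pipeline as described in the abstract (the exact binomial test and HC details below are my reading, not the authors' code): each feature's count in sample 1 is tested against a binomial allocation null, and the per-feature p-values are combined with the Donoho-Jin Higher Criticism statistic.

```python
import numpy as np
from scipy.stats import binomtest   # SciPy >= 1.7

def hc_two_sample(counts1, counts2):
    n1, n2 = counts1.sum(), counts2.sum()
    pvals = []
    for c1, c2 in zip(counts1, counts2):
        if c1 + c2 == 0:
            continue  # feature unseen in both samples
        # Under H0, c1 ~ Binomial(c1 + c2, n1 / (n1 + n2)) for this feature.
        pvals.append(binomtest(int(c1), int(c1 + c2), n1 / (n1 + n2)).pvalue)
    p = np.clip(np.sort(pvals), 1e-12, 1 - 1e-12)
    m = len(p)
    i = np.arange(1, m + 1)
    hc = np.sqrt(m) * (i / m - p) / np.sqrt(p * (1 - p))
    return hc[: m // 2].max()   # large values are evidence against H0

counts1 = np.array([30, 5, 0, 12, 9])   # toy per-feature counts
counts2 = np.array([28, 6, 1, 25, 8])
print(hc_two_sample(counts1, counts2))
```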

18/01/2021 - 19:00

Long-Tail Entity Representation Learning via Variational Bayesian Networks

Entity representation learning is an active research field. In the last decade, both the NLP and recommender-systems communities have introduced many methods for mapping words and items to vectors in a latent space. The vast majority of these methods utilize implicit relations (e.g., co-occurrence of words in text or co-consumption of items by users) for learning the latent entity vectors.
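
A minimal illustration of the implicit-relation approach (my own toy example in the spirit of count-based word embeddings, not the talk's method): embed entities by factorizing a PPMI-weighted co-occurrence matrix with truncated SVD.

```python
import numpy as np

def cooccurrence_embeddings(pairs, n_entities, dim):
    # pairs: iterable of (i, j) co-occurrence events between entity ids
    C = np.zeros((n_entities, n_entities))
    for i, j in pairs:
        C[i, j] += 1
        C[j, i] += 1
    # Positive PMI weighting, standard for count-based embeddings
    total = C.sum()
    row = C.sum(1, keepdims=True)
    with np.errstate(divide="ignore", invalid="ignore"):
        pmi = np.log(C * total / (row * row.T))
    ppmi = np.nan_to_num(np.maximum(pmi, 0.0))
    U, S, _ = np.linalg.svd(ppmi)
    return U[:, :dim] * np.sqrt(S[:dim])   # one row per entity

vecs = cooccurrence_embeddings([(0, 1), (1, 2), (0, 1), (3, 4)], 5, dim=2)
```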

07/01/2021 - 13:30

Kira Goldner - Mechanism Design for Social Good

Society is run by algorithms, and in many cases, these algorithms interact with participants who have a stake in the outcome. The participants may behave strategically in an attempt to "game the system," resulting in unexpected or suboptimal outcomes. In order to accurately predict an algorithm's outcome and quality, we must design it to be robust to strategic manipulation. This is the subject of algorithmic mechanism design, which borrows ideas from game theory and economics to design robust algorithms.
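
A textbook example of such robustness (a standard illustration, not from the talk): the second-price (Vickrey) auction, where truthful bidding is a dominant strategy, so strategic participants cannot gain by gaming the system.

```python
def second_price_auction(bids):
    # bids: dict mapping bidder -> bid amount
    ranked = sorted(bids, key=bids.get, reverse=True)
    winner = ranked[0]
    # The winner pays the second-highest bid, which makes truthful
    # bidding a dominant strategy for every participant.
    price = bids[ranked[1]] if len(ranked) > 1 else 0.0
    return winner, price

print(second_price_auction({"a": 10, "b": 7, "c": 3}))  # ('a', 7)
```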

31/12/2020 - 13:30