Colloquium (C)

Ariel Kulik - Constrained Resource Allocation via Iterative Randomized Rounding

Constrained resource allocation problems can often be modeled as variants of the classic Bin Packing and Knapsack problems. The study of these problems has had a great impact on the development of algorithmic tools, ranging from simple dynamic programming to involved linear programs and rounding techniques. I will present a new and simple algorithmic approach for obtaining efficient approximations for such problems, based on iterative randomized rounding of Configuration LPs.
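To illustrate the generic idea of randomized rounding of an LP relaxation (a textbook sketch, not the speaker's specific algorithm for Configuration LPs), each coordinate of a fractional solution can be rounded to 1 independently with probability equal to its LP value, so the expected objective value is preserved by linearity of expectation:

```python
import random

def randomized_round(fractional, seed=0):
    """Round a fractional LP solution to a 0/1 vector: each item is
    selected independently with probability equal to its LP value."""
    rng = random.Random(seed)
    return [1 if rng.random() < x else 0 for x in fractional]

# A hypothetical fractional solution, e.g. from an LP relaxation.
x = [0.9, 0.1, 0.5, 1.0, 0.0]
rounded = randomized_round(x)
# Coordinates fixed at 1.0 are always selected; those at 0.0 never are.
```

An iterative variant repeats this step, re-solving the LP on the residual instance after each round.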

15/02/2024 - 11:30

Bracha Laufer - Leveraging Structures in Complex Spaces for Robust, Reliable and Efficient Learning

With the great promise of the data-science revolution, a major open question is whether the underlying models are efficient and trustworthy, especially when deployed in complex real-world settings. In this talk I will show how these challenges can be approached from a data-driven perspective, relying on the geometry and the statistical patterns hidden in the data to establish strong notions of robustness and reliability.

12/01/2023 - 11:30

Moshe Babaioff - Complexity-Performance Tradeoffs in Mechanism Design

Online computational platforms that directly engage with users must account for the strategic behavior of self-interested individuals. The goal of mechanism design is to optimize an objective, such as efficiency or revenue, in such scenarios, i.e., when the agents that participate in the mechanisms act strategically. In many fundamental computational settings, the theoretically optimal mechanisms are highly complex and thus impractical.

05/01/2023 - 13:30

Amit Levi - On structure and performance in the era of (really) big data

The influx of data witnessed over the last decade has given rise to groundbreaking applications in data science and machine learning. However, due to hardware constraints, the volume of data grows much faster than the available computational resources. This modern setting poses new challenges for algorithm design, as more efficient methods are needed. One way to obtain such methods is to exploit the underlying structure of the data.

26/01/2023 - 11:30

Amos Korman - An Algorithmic Perspective to Collective Behavior

In this talk, I will present a new interdisciplinary approach that I have been developing in recent years, aiming to build a bridge between the fields of algorithm theory and collective (animal) behavior. Ideally, an algorithmic perspective on biological phenomena can provide a level of fundamental understanding that is difficult to achieve using typical computational tools employed in this area of research (e.g., differential equations or computer simulations).

26/01/2023 - 13:30

Itay Safran - The Interconnection Between Approximation, Optimization and Generalization in Deep Learning Theory

The modern study of deep learning theory can be crudely partitioned into three major aspects: approximation, which is concerned with the ability of a given neural network architecture to approximate various objective functions; optimization, which deals with when we can or cannot guarantee that a given optimization algorithm will converge to a network with small empirical loss; and generalization, which asks how well the trained network generalizes to previously unseen examples.

19/01/2023 - 13:30

Or Zamir - Algorithmic Applications of Hypergraph and Partition Containers

We present a general method to convert algorithms into faster algorithms for almost-regular input instances. Informally, an almost-regular input is an input in which the maximum degree is larger than the average degree by at most a constant factor. This family of inputs vastly generalizes several families of inputs for which we commonly have improved algorithms, including bounded-degree inputs and random inputs.
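The informal definition above (maximum degree at most a constant factor above the average degree) can be checked directly; the following is an illustrative sketch with a hypothetical constant factor `c`, not part of the talk itself:

```python
from collections import Counter

def is_almost_regular(edges, c=2):
    """Check whether a graph's maximum degree is at most c times
    its average degree (the informal definition of almost-regular)."""
    deg = Counter()
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    avg = sum(deg.values()) / len(deg)
    return max(deg.values()) <= c * avg

# A 4-cycle is 2-regular, hence almost-regular for any c >= 1.
cycle = [(0, 1), (1, 2), (2, 3), (3, 0)]
# A star on 8 vertices has max degree 7 but average degree 1.75.
star = [(0, i) for i in range(1, 8)]
```

Bounded-degree and random graphs satisfy this condition, which is why the family of almost-regular inputs generalizes both.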

29/12/2022 - 13:30

Rotem Dror - Experiment Design and Evaluation of Empirical Models in Natural Language Processing

The research field of Natural Language Processing (NLP) puts a strong emphasis on empirical results. Models seem to reach state-of-the-art and even “super-human” performance on language understanding tasks on a daily basis, thanks to large datasets and powerful models with billions of parameters. However, existing evaluation methodologies lack rigor, leaving the field susceptible to erroneous claims. In this talk, I will describe efforts to build a solid framework for evaluation and experiment design for diverse NLP tasks.

22/12/2022 - 13:30

Michal Moshkovitz - Building the Foundations of Explainable and Interpretable Machine Learning

Machine learning (ML) is integrated into our society: it is present in the judicial, health, transportation, and financial systems. As this integration increases, so does the need for ML transparency. The fields of explainable and interpretable ML attempt to add transparency to ML, either by adding explanations to a given black-box ML model or by building a model that is interpretable and self-explanatory.

15/12/2022 - 13:30