The modern study of deep learning theory can be crudely partitioned into three major aspects: approximation, which concerns the ability of a given neural network architecture to approximate various objective functions; optimization, which deals with when we can or cannot guarantee that a certain optimization algorithm will converge to a network with small empirical loss; and generalization, which asks how well the trained network generalizes to previously unseen examples. Although these aspects are mostly studied independently, we will demonstrate how considering them simultaneously gives rise to end-to-end learnability results, establishing a rich interconnection between the three. This highlights the importance of studying the individual pieces as parts of a whole to better understand the bigger picture, and to improve our theoretical understanding of the unprecedented practical success of deep learning.
Itay Safran is a Postdoctoral Research Associate at Purdue University and a former Postdoctoral Research Fellow at Princeton University. He received his MSc and PhD from the Weizmann Institute of Science, and a dual BSc in mathematics and computer science from Ben-Gurion University of the Negev. Itay's research focuses on theoretical machine learning, with an emphasis on the theory of deep learning. For his achievements, he was awarded the Dan David Scholarship Prize for outstanding doctoral students of exceptional promise in the field of artificial intelligence, as well as an excellence postdoctoral scholarship given by the Council for Higher Education in Israel to highly distinguished PhD graduates in data science fields.