Tal Golan - Bridging visual object recognition and deep neural network models by means of model-driven experimentation

Deep neural networks (DNNs) provide the leading stimulus-computable models of biological visual object recognition, but their power and flexibility come at a price. Because of their capacity to absorb massive amounts of training data, distinct DNN models often make very similar predictions when tested on stimuli sampled from their training distribution. To enable continual refinement and improvement of DNNs as scientific hypotheses about biological vision, we must be able to compare alternative models efficiently. I suggest taking a model-driven approach to experimental design to achieve this aim. I developed a methodology for synthesizing controversial stimuli: inputs (e.g., images) for which two or more models make conflicting predictions. Empirically measured responses to such stimuli are guaranteed to provide evidence against at least one of the models. Controversial stimuli therefore allow us to contrast the validity of alternative computational models efficiently. I will present results from experiments employing this approach to compare models of human visual categorization; the translation of this method to the domain of natural language; and the application of stimulus optimization to comparing DNN models of neural representation, using the representation of human faces as a test case. I will outline how stimuli driven by DNN models can form a third category of experimental stimuli in cognitive and systems neuroscience, complementing and bridging parameterized artificial stimuli (e.g., Gabor patches and random-dot stimuli) and complex natural images and videos.
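
For readers unfamiliar with the technique, the sketch below illustrates how a controversial stimulus might be synthesized for a pair of image classifiers: an image is optimized by gradient ascent so that the two models assign high probability to different classes. The particular models (resnet18, vgg11), target class indices, and optimization settings here are illustrative assumptions, not details from the talk or the speaker's actual implementation.

```python
# Hypothetical sketch: synthesize an image on which two pretrained
# ImageNet classifiers make conflicting predictions.
import torch
import torch.nn.functional as F
import torchvision.models as models

model_a = models.resnet18(weights="DEFAULT").eval()
model_b = models.vgg11(weights="DEFAULT").eval()
for model in (model_a, model_b):
    for p in model.parameters():
        p.requires_grad_(False)  # only the image is optimized

# Two arbitrary, distinct target classes (ImageNet indices; illustrative).
class_a, class_b = 207, 294

# Standard ImageNet normalization, applied inside the loop so the
# optimized variable x stays in [0, 1] pixel space.
mean = torch.tensor([0.485, 0.456, 0.406]).view(1, 3, 1, 1)
std = torch.tensor([0.229, 0.224, 0.225]).view(1, 3, 1, 1)

x = torch.rand(1, 3, 224, 224, requires_grad=True)  # start from noise
optimizer = torch.optim.Adam([x], lr=1e-2)

for step in range(300):
    optimizer.zero_grad()
    x_norm = (x - mean) / std
    log_p_a = F.log_softmax(model_a(x_norm), dim=1)[0, class_a]
    log_p_b = F.log_softmax(model_b(x_norm), dim=1)[0, class_b]
    # Push the models toward disagreement: model_a should favor
    # class_a while model_b favors class_b.
    loss = -(log_p_a + log_p_b)
    loss.backward()
    optimizer.step()
    with torch.no_grad():
        x.clamp_(0.0, 1.0)  # keep a valid image
```

Presenting the resulting image to human observers and recording which category they report would then, as the abstract notes, provide evidence against at least one of the two models.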

Date and Time: 
Thursday, November 25, 2021 - 13:30 to 14:30
Speaker: 
Tal Golan
Location: 
C110
Speaker Bio: 

Tal Golan is a Postdoctoral Research Scientist at Columbia University's Zuckerman Mind Brain Behavior Institute, New York. He received his Ph.D. from the Edmond and Lily Safra Center for Brain Sciences at the Hebrew University in 2018 after studying cortical representation during eye movements using human intracranial recordings and fMRI. His current work at Columbia focuses on building interfaces between cognitive neuroscience and AI.