Recent years have seen the adoption of new software trends such as machine learning, programmable computer networks, and blockchain frameworks. Several software bugs and attacks have demonstrated the importance of bringing formal guarantees to such systems. Techniques from verification, automated reasoning, and program synthesis have proven successful in providing such guarantees for traditional software systems. My research extends these techniques to these emerging domains.
In this talk, I will focus on deep learning and present two novel, complementary methods, based on abstract interpretation and logical constraints, for making deep learning models more reliable and interpretable.
First, I will present AI2, a certifier for formally proving that a neural network satisfies a given property (e.g., robustness). The key idea is to leverage abstract interpretation, which has proven successful in program analysis, to soundly over-approximate the network's behavior. AI2 enables, for the first time, the analysis of large convolutional networks. Then, I will present DL2, a method that bridges logic and differentiable reasoning, allowing one both to pose queries on deep models in order to interpret their behavior and to train them to satisfy formal properties that capture domain knowledge.
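To make the AI2 idea concrete, here is a minimal sketch of sound over-approximation, using the simplest abstract domain (interval boxes) for a fully connected ReLU network. AI2 itself propagates richer abstract domains (e.g., zonotopes) and handles convolutional layers; all function names here are hypothetical, not AI2's actual API.

```python
import numpy as np

def affine_box(W, b, l, u):
    """Propagate the box [l, u] through the affine layer x -> W x + b."""
    c, r = (l + u) / 2.0, (u - l) / 2.0   # center and radius of the box
    c_out = W @ c + b
    r_out = np.abs(W) @ r                  # radius grows by |W|
    return c_out - r_out, c_out + r_out

def relu_box(l, u):
    """ReLU is monotone, so it maps the box endpoint-wise."""
    return np.maximum(l, 0.0), np.maximum(u, 0.0)

def certify_robustness(layers, x, eps, label):
    """Soundly check that every input within L-inf distance eps of x is
    classified as `label`. Over-approximation means the check may fail
    on a robust input, but it never claims robustness falsely."""
    l, u = x - eps, x + eps
    for W, b in layers[:-1]:
        l, u = relu_box(*affine_box(W, b, l, u))
    l, u = affine_box(*layers[-1], l, u)
    # robust if the label's lower bound beats every other class's upper bound
    return l[label] > np.delete(u, label).max()

# toy usage: a random 2-layer network on a 3-D input (hypothetical)
rng = np.random.default_rng(0)
layers = [(rng.normal(size=(4, 3)), np.zeros(4)),
          (rng.normal(size=(2, 4)), np.zeros(2))]
print(certify_robustness(layers, np.array([0.5, -0.2, 0.1]), 0.01, 0))
```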
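In the same spirit, the heart of DL2 is translating logical constraints into non-negative, almost-everywhere differentiable losses that vanish exactly when the constraint holds, so that standard gradient descent can be used for both querying and training. The PyTorch sketch below shows a few representative translation rules and a toy property; the model, the sampled perturbation, and all names are illustrative simplifications (DL2, for instance, searches for worst-case violations of universally quantified variables rather than sampling).

```python
import torch

# Translation rules: each returns a non-negative loss that is zero
# exactly when the constraint is satisfied.
def leq(a, b):      # a <= b
    return torch.clamp(a - b, min=0.0)

def land(p, q):     # conjunction: both losses must vanish
    return p + q

def lor(p, q):      # disjunction: it suffices that one loss vanishes
    return p * q

# Example property: on an input perturbed within eps of x, the network's
# probability of the true class stays above 0.9 (hypothetical names).
def constraint_loss(model, x, eps, label):
    z = x + eps * (2 * torch.rand_like(x) - 1)   # a sample near x
    p = torch.softmax(model(z), dim=-1)[..., label]
    return leq(torch.tensor(0.9), p)             # encodes 0.9 <= p

# usage with a toy model (hypothetical)
net = torch.nn.Sequential(torch.nn.Linear(4, 3))
loss = constraint_loss(net, torch.randn(4), eps=0.1, label=0)
loss.backward()   # gradients flow into the network's weights
```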
Finally, I will briefly discuss a few promising research directions in which automated reasoning can help address key challenges in other application domains (e.g., programmable networks and blockchains), as well as in reasoning about security (e.g., differential privacy).
Dana Drachsler-Cohen is an ETH Postdoctoral Fellow in the Department of Computer Science at ETH Zurich. Her research interests span verification, automated reasoning, and program synthesis, and their application to domains such as machine learning, security, and computer networks. She obtained her PhD from the Computer Science Department at the Technion in 2017.