Presentation: Interpretable Machine Learning Products
Abstract
Interpretable models are easier to improve. Regulators and society can better trust them to be safe and nondiscriminatory. They can also offer insights that can be used to change real-world outcomes for the better. But because there is a central tension between accuracy and interpretability, interpretability can be hard to ensure.
I'll explore both the product case for interpretability and the academic research that is starting to make the inner workings of black-box models such as deep neural networks easier to understand. In particular, I'll look at the application of a new open-source tool called LIME to customer churn, image classification, and black-box NLP models.
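As a taste of the kind of workflow the talk covers, here is a minimal sketch of LIME explaining a single prediction from a tabular churn classifier. The dataset, feature names, and model are hypothetical stand-ins; any sklearn-style black box with a `predict_proba` method works the same way.

```python
# A minimal sketch: LIME explaining one prediction of a tabular
# "churn" classifier. The synthetic data and feature names below
# are illustrative assumptions, not from the talk itself.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(0)
feature_names = ["tenure_months", "monthly_spend", "support_tickets"]
X = rng.normal(size=(500, 3))
# Synthetic label: short tenure plus many tickets -> more churn.
y = (X[:, 0] - X[:, 2] + rng.normal(scale=0.5, size=500) < 0).astype(int)

# Any black-box classifier; a random forest stands in here.
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

explainer = LimeTabularExplainer(
    X,
    feature_names=feature_names,
    class_names=["stays", "churns"],
    mode="classification",
)
# LIME perturbs the instance, queries the black box on the
# perturbations, and fits a local linear model whose weights
# serve as per-feature explanations for this one prediction.
exp = explainer.explain_instance(X[0], model.predict_proba, num_features=3)
print(exp.as_list())  # (feature condition, weight) pairs
```

The same `explain_instance` pattern carries over to the image and NLP cases: only the explainer class (`LimeImageExplainer`, `LimeTextExplainer`) and the perturbation scheme change, while the black box is still queried only through its prediction function.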