Track: Deep Learning in Practice

Location: Embarcadero

Day of week: Tuesday

Recent advances in hardware, the sheer amount of available data, and algorithmic innovations have made deep learning one of the most active areas of machine learning research.

In "Deep Learning in Practice" we'll learn where those breakthroughs are making contact with the real world and production workloads. We'll explore deep learning applied to rich, structured data for search at Airbnb, see distributed deep learning applied to private data, and learn how reinforcement learning is not just for robots and video games.

We'll also hear a personal perspective on the relationship between production machine learning and mainstream software engineering, based on a career at Cloudera and Adobe. And we'll acquire an arsenal of tools for debugging deep neural networks in use at Cardiogram.

There's a lot of hype around deep learning, and it's a rapidly evolving field. The Deep Learning in Practice track is a hand-picked selection of talks from industry experts about putting these ideas into practice in the real world.

Track Host: Mike Lee Williams

Research engineer @Cloudera Fast Forward Labs

Mike Lee Williams does applied research into computer science, statistics and machine learning at Cloudera Fast Forward Labs. While getting his PhD in astrophysics he spent 2% of his time observing the heavens in beautiful far west Texas, and the other 98% trying to figure out how to fit straight lines to data. He once did a postdoc at the Max Planck Institute for Extraterrestrial Physics, which, amazingly, is a real place.

10:40am - 11:20am

Applying Deep Learning To Airbnb Search

Searching for homes is the primary mechanism guests use to find the place they want to book at Airbnb. The goal of search ranking is to find guests the best possible options while rewarding the most deserving hosts. Ranking at Airbnb is a quest to understand the needs of the guests and the quality of the hosts to strike the best match possible. Applying machine learning to this challenge is one of the biggest success stories at Airbnb. Many of the initial gains were driven by a gradient boosted decision tree model. The gains, however, plateaued over time. This talk discusses the work done in applying neural networks in an attempt to break out of that plateau. The talk focuses on the elements we found useful in applying neural networks to a real-life product. To other teams embarking on similar journeys, we hope this account of our struggles and triumphs will provide some useful pointers. Bon voyage!

Malay Haldar, Machine Learning Engineer @Airbnb

11:40am - 12:20pm

Machine Learning Engineering - A New Yet Not So New Paradigm

ML engineering is a relatively newly defined role at organizations. It often refers to a specific subset of a broad spectrum of skills aimed at either building ML-driven products or enabling ML capabilities across products.
Interestingly, although it is a new role, it leverages expertise from various parallel domains. For example, principles and tools from distributed and high-performance computing are used to optimize training and inference pipelines. Ideas and tools that are commonplace in the scientific high-performance computing domain, like vector instructions, cuDNN, GPU profiling, prefetching, and compiler optimizations, are finding their way into industry. Similarly, best practices from the data-intensive distributed computing space, like leveraging Spark for preprocessing, are being adopted. Microservice architectures and design principles are being leveraged to deploy atomic ML capabilities in a scalable and robust way.
Unlike traditional software engineering, where a system or product is designed from user requirements and heuristics in a deterministic way, the experimental and non-deterministic nature of building machine learning capabilities brings a very new set of challenges and opportunities. Infrastructure for rapid experimentation and friction-free iteration between experimentation and deployment are two of the critical aspects. On the deployment front, divergence between the tools and tech stacks used in experimentation and deployment is a challenge. This is especially acute for on-device deployments of ML models: the compute, memory, storage, and binary-size requirements of running ML models on device, coupled with the difference in pace of iteration between ML frameworks like TensorFlow/PyTorch and device software, make things even more interesting. Lastly, testing of ML pipelines and validation of ML models also require a new way of thinking about bringing A/B testing closer to the experimentation cycle.

Sravya Tirukkovalur, Senior Machine Learning Engineer @Adobe

1:20pm - 2:00pm

Reinforcement Learning for Software Engineers

Presentation details will follow soon.

Jibin Liu, Software Engineer @eBay

2:20pm - 3:00pm

Practical Fairness

Presentation details will follow soon.

Alex Beutel, Senior Research Scientist @Google

3:20pm - 4:00pm

Debuggable Deep Learning

Deep Learning is often called a black box, so how can we diagnose and fix problems in a Deep Neural Network (DNN)? Engineers at Cardiogram explain how they systematically debugged DeepHeart, a DNN that detects cardiovascular disease from heart rate data. You'll leave this talk with an arsenal of tools for debugging DNNs, including Jacobian analysis, TensorBoard, and "DNN Unit Tests".

Mantas Matelis, Software Engineer @Cardiogram
Avesh Singh, Software Engineer & Technical Lead @Cardiogram

4:20pm - 5:00pm

Federated Learning: Private Distributed Machine Learning

Presentation details to follow.

Eric Tramel, Federated Learning R&D Lead @OWKIN
