
Presentation: Scaling Emerging AI Applications with Ray

Track: Papers in Production: Modern CS in the Real World

Location: Cyril Magnin III

Time: 1:20pm - 2:00pm




The next generation of AI applications will continuously interact with the environment and learn from these interactions. To develop these applications, data scientists and engineers will need to seamlessly scale their work from running interactively to production clusters. In this talk, I’ll cover some major open source AI + Data Science libraries my collaborators and I at the RISELab have been working on.


At a high level, I’ll talk about my work on the following: Ray, a distributed execution framework for emerging AI applications; Tune, a scalable hyperparameter optimization framework for reinforcement learning and deep learning; RLlib, an open-source library for reinforcement learning that offers both a collection of reference algorithms and scalable primitives for composing new ones; and Modin, an open-source dataframe library for scaling pandas workflows by changing one line of code.

Speaker: Peter Schafhalter

Research Assistant @UC Berkeley RISELab

Peter is a researcher in UC Berkeley's RISELab working in distributed systems and machine learning. He has worked with Ray, RLlib, Tune, and Modin for 1.5 years. He is interested in building high-performance scalable systems that enable AI applications.
