
Presentation: Scaling Emerging AI Applications with Ray

Track: Papers in Production: Modern CS in the Real World

Location: Cyril Magnin III

Time: 1:20pm - 2:00pm

Day of week: Tuesday




The next generation of AI applications will continuously interact with the environment and learn from these interactions. To develop these applications, data scientists and engineers will need to seamlessly scale their work from running interactively to production clusters. In this talk, I’ll cover some major open source AI + Data Science libraries my collaborators and I at the RISELab have been working on.


At a high level, I’ll talk about my work on the following: Ray, a distributed execution framework for emerging AI applications; Tune, a scalable hyperparameter optimization framework for reinforcement learning and deep learning; RLlib, an open-source library for reinforcement learning that offers both a collection of reference algorithms and scalable primitives for composing new ones; and Modin, an open-source dataframe library for scaling pandas workflows by changing one line of code.

Speaker: Peter Schafhalter

Research Assistant @UC Berkeley RISELab

Peter is a researcher in UC Berkeley's RISELab working in distributed systems and machine learning. He has worked with Ray, RLlib, Tune, and Modin for 1.5 years. He is interested in building high-performance scalable systems that enable AI applications.


2019 Tracks

  • Sequential Data: Natural Language, Time Series, and Sound

    Techniques, practices, and approaches for working with time series and sequential data. Expect topics including image recognition, NLP/NLU, and the preprocessing and computation behind related algorithms.

  • ML in Action

    An applied track demonstrating how to train, score, and handle common machine learning use cases, with a heavy concentration on security and fraud.

  • Deep Learning in Practice

    Deep learning use cases around edge computing, deep learning for search, explainability, fairness, and perception.