Presentation: Scaling Emerging AI Applications with Ray

Track: Papers in Production: Modern CS in the Real World

Location: Cyril Magnin III

Time: 11:40am - 12:20pm

Day of week: Tuesday

Abstract

The next generation of AI applications will continuously interact with their environment and learn from those interactions. To develop these applications, data scientists and engineers will need to scale their work seamlessly from interactive prototyping to production clusters. In this talk, I’ll cover several major open-source AI and data science libraries that my collaborators and I at the RISELab have been building.
 
At a high level, I’ll talk about my work on the following:

  • Ray, a distributed execution framework for emerging AI applications

  • Tune, a scalable hyperparameter optimization framework for reinforcement learning and deep learning

  • RLlib, an open-source reinforcement learning library that offers both a collection of reference algorithms and scalable primitives for composing new ones

  • Modin, an open-source dataframe library that scales pandas workflows by changing one line of code
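To make these concrete, here is a minimal sketch of Ray's core task API; the `square` function is a toy example of mine, not from the talk. A decorator turns a plain Python function into a remote task whose invocations run in parallel and return futures.

```python
import ray

ray.init()  # start Ray on the local machine; the same code can connect to a cluster

@ray.remote
def square(x):
    # An ordinary Python function, turned into a parallel task by @ray.remote.
    return x * x

# Each .remote() call returns a future immediately instead of blocking.
futures = [square.remote(i) for i in range(4)]
print(ray.get(futures))  # ray.get waits for the results: [0, 1, 4, 9]
```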
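RLlib's reference algorithms are driven through a trainer object built on Ray. The sketch below assumes an RLlib release contemporary with this talk, where PPO is exposed as `ray.rllib.agents.ppo.PPOTrainer` (newer releases reorganized this under `ray.rllib.algorithms`); the environment and worker count are illustrative choices.

```python
import ray
from ray.rllib.agents.ppo import PPOTrainer  # PPO, one of RLlib's reference algorithms

ray.init()

# Scale experience collection across 2 rollout workers.
trainer = PPOTrainer(env="CartPole-v0", config={"num_workers": 2})

for _ in range(3):
    result = trainer.train()  # runs one training iteration
    print(result["episode_reward_mean"])
```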
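Modin's "one line of code" claim is literal: you swap the pandas import and keep the rest of the workflow unchanged. A minimal sketch; the file and column names below are placeholders.

```python
# Before: import pandas as pd
import modin.pandas as pd  # after: the one-line change

# Everything below is unchanged pandas code; Modin partitions the dataframe
# and runs operations in parallel, using Ray as its default execution engine.
df = pd.read_csv("trips.csv")                 # placeholder file name
print(df.groupby("passenger_count").mean())   # placeholder column name
```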

Speaker: Peter Schafhalter

Research Assistant @ucbrise

Peter is a researcher in UC Berkeley's RISELab working on distributed systems and machine learning. He has worked with Ray, RLlib, Tune, and Modin for 1.5 years. He is interested in building high-performance, scalable systems that enable AI applications.
