Presentation: Very Large Datasets With the GPU Data Frame

Track: Hands-on Codelabs & Speakers Office Hours

Location: Mission

Duration: 10:35am - 10:45am

Day of week: Wednesday

Use of the humble GPU has spiked over the past couple of years as machine learning and data analytics workloads have been optimized to take advantage of the GPU's parallelism and memory bandwidth. Even though these operations (the steps of the machine learning pipeline) could all run on the same GPUs, they were typically isolated, and much slower than they needed to be, because data was serialized and deserialized between steps over the PCIe bus.

That inefficiency was recently addressed by the formation of the GPU Open Analytics Initiative (GOAI), an industry consortium founded by MapD and Anaconda. This group created the GPU data frame (GDF), based on Apache Arrow, for passing data between processes while keeping it all in GPU memory. In this talk we will explain how the GDF technology works, show how it is enabling a diverse set of GPU workloads, and demonstrate how to take advantage of it from a Jupyter Notebook. Using a very large dataset, we'll demonstrate how to run a full machine learning pipeline with minimal data exchange overhead between MapD's SQL engine and H2O's generalized linear model (GLM) library.
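The core idea behind the GDF — handing the next pipeline stage a reference to the same columnar buffer instead of serializing and copying it — can be illustrated in plain Python. This is a CPU-side sketch only: the real GDF shares Arrow-layout columns in GPU device memory (via CUDA IPC handles), but the copy-versus-view contrast is the same.

```python
from array import array

# A columnar buffer of float64 values, standing in for one
# Arrow-layout column that lives in GPU memory.
column = array("d", [1.0, 2.0, 3.0, 4.0])

# Copy-based handoff: serialize to bytes and rebuild on the other
# side -- the round trip each pipeline step paid over PCIe.
serialized = column.tobytes()
rebuilt = array("d")
rebuilt.frombytes(serialized)
assert rebuilt.tolist() == [1.0, 2.0, 3.0, 4.0]

# Zero-copy handoff: the consumer gets a view over the *same* buffer,
# which is what GDF enables between GPU processes.
view = memoryview(column)
assert view[2] == 3.0

# Writes through the view are visible to the producer -- one shared
# buffer, no serialization step.
view[2] = 30.0
assert column[2] == 30.0
```

The second handoff does no byte copying at all, which is why keeping every pipeline stage on the same GPU memory removes the serialization bottleneck described above.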

Speaker: Veda Shankar

Senior Developer Advocate @MapD

Veda Shankar is a Developer Advocate at MapD, working actively to help the user community take advantage of MapD's open source analytics platform. He is a customer-oriented IT specialist with a unique combination of experience in product development, marketing, and sales engineering. Prior to MapD, Veda worked on various open source software-defined data center products at Red Hat.

2019 Tracks

  • ML in Action

    Applied track demonstrating how to train, score, and handle common machine learning use cases, with a heavy concentration on security and fraud.

  • Deep Learning in Practice

    Deep learning use cases around edge computing, deep learning for search, explainability, fairness, and perception.

  • Handling Sequential Data Like an Expert / ML Applied to Operations

    Two half-day tracks: one discussing the complexities of time and sequential data, the other machine learning in the data center, exploring topics from HyperLogLog to predictive auto-scaling.