Presentation: Deep Learning with Audio Signals: Prepare, Process, Design, Expect

Track: Sequential Data: Natural Language, Time Series, and Sound

Location: Cyril Magnin I

Duration: 1:40pm - 2:20pm

Day of week: Wednesday

This presentation is now available to view on InfoQ.com

Abstract

Is deep learning alchemy? No! But it heavily relies on tips and tricks: a set of common wisdom that probably works for similar problems. In this talk, I'll introduce what the audio/music research communities have discovered while playing with deep learning for audio classification and regression -- how to prepare and preprocess the audio data, how to design the networks (or choose which one to steal from), and what we can expect as a result.
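The preprocessing the abstract refers to usually means converting raw waveforms into a time-frequency representation (a spectrogram or mel-spectrogram) before feeding them to a network. As a minimal, illustrative NumPy sketch (the framing parameters here are common defaults, not values from the talk):

```python
import numpy as np

def log_spectrogram(y, n_fft=512, hop=256):
    """Frame a waveform, apply a Hann window, and return a log-magnitude STFT."""
    window = np.hanning(n_fft)
    n_frames = 1 + (len(y) - n_fft) // hop
    frames = np.stack([y[i * hop:i * hop + n_fft] * window
                       for i in range(n_frames)])
    spec = np.abs(np.fft.rfft(frames, axis=1))  # shape: (n_frames, n_fft // 2 + 1)
    return np.log(spec + 1e-10)  # log compression, common before a neural network

# One second of a 440 Hz sine at a 16 kHz sample rate.
sr = 16000
t = np.arange(sr) / sr
y = np.sin(2 * np.pi * 440 * t)
S = log_spectrogram(y)
print(S.shape)  # time frames x frequency bins
```

The resulting 2-D array can then be treated much like an image by a convolutional network, which is one reason the talk's themes overlap with image-style architectures.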

Speaker: Keunwoo Choi

Research Scientist @Spotify

Keunwoo Choi is currently a Research Scientist at Spotify working on deep learning. Before Spotify, he worked at Naver Labs Corp. and the Electronics and Telecommunications Research Institute. He has worked on music signal processing and deep learning, music information retrieval, technical translation, and various digital audio processing projects. Keunwoo received his Master of Science in Electrical Engineering and Computer Science from Seoul National University and his PhD from Queen Mary University of London.

2019 Tracks

  • Sequential Data: Natural Language, Time Series, and Sound

    Techniques, practices, and approaches around time series and sequential data. Expect topics including image recognition, NLP/NLU, preprocessing, and the crunching of related algorithms.

  • ML in Action

    An applied track demonstrating how to train, score, and handle common machine learning use cases, with a heavy concentration on security and fraud.

  • Deep Learning in Practice

    Deep learning use cases around edge computing, deep learning for search, explainability, fairness, and perception.