Keynote: Analyzing & Preventing Unconscious Bias in Machine Learning

Location: Cyril Magnin Ballroom

Time: 4:00pm - 5:00pm

Day of week: Wednesday

Abstract

Increasingly, AI is finding its way into nearly every product we use (from photo-sharing apps to criminal justice decision algorithms), but various types of bias are often buried in the underlying data and models. This can have a damaging impact on both individuals and society. Through the lens of three case studies, we will look at how to diagnose bias, identify some of its sources, and take steps to avoid it.
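
The talk itself is case-study driven, but as a minimal sketch of what diagnosing bias can look like in practice, the example below computes per-group positive-prediction rates and the gap between them (a simple demographic-parity check). The DataFrame, the `group` and `predicted_positive` column names, and the toy data are hypothetical illustrations, not material from the talk.

```python
# Illustrative sketch (not from the talk): a minimal demographic-parity check.
# Assumes a pandas DataFrame with hypothetical columns "group" (a protected
# attribute) and "predicted_positive" (the model's binary decision).
import pandas as pd


def positive_rate_by_group(df: pd.DataFrame,
                           group_col: str = "group",
                           pred_col: str = "predicted_positive") -> pd.Series:
    """Return the fraction of positive predictions for each group."""
    return df.groupby(group_col)[pred_col].mean()


if __name__ == "__main__":
    # Toy data standing in for real model output.
    df = pd.DataFrame({
        "group": ["A", "A", "A", "B", "B", "B"],
        "predicted_positive": [1, 1, 0, 1, 0, 0],
    })
    rates = positive_rate_by_group(df)
    print(rates)
    # A large gap between groups is one signal that the model (or its
    # training data) may encode unwanted bias and deserves closer inspection.
    print("Demographic parity gap:", rates.max() - rates.min())
```

A gap like this is only a starting point for investigation; it does not by itself establish the source of the bias, which is why the case studies also look at the underlying data and modeling choices.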

Speaker: Rachel Thomas

fast.ai founder & USF assistant professor

Rachel Thomas has a math PhD from Duke and was selected by Forbes as one of “20 Incredible Women Advancing AI Research.” She is co-founder of fast.ai and a researcher-in-residence at the University of San Francisco Data Institute, where she teaches in the Master's in Data Science program. Her background includes working as a quant in energy trading, a data scientist and backend engineer at Uber, and a full-stack software instructor at Hackbright.

Tracks

  • Deep Learning Applications & Practices

    Deep learning lessons using tooling such as TensorFlow & PyTorch, across domains like large-scale cloud-native apps and fintech, and tackling concerns around interpretability of ML models.

  • Predictive Data Pipelines & Architectures

    Best practices for building real-world data pipelines that power predictions, recommender systems, fraud prevention, ranking systems, and more.

  • ML in Action

    Applied track demonstrating how to train, score, and handle common machine learning use cases, with a heavy concentration on security and fraud.

  • Real-world Data Engineering

    Showcasing data engineering technologies and highlighting the strengths of each in real-world applications.