Artificial intelligence and machine learning algorithms have become an essential part of many online services. Lately, many companies have started bringing AI into the physical world. Mobile phone apps run more and more machine learning on-board, cars are becoming self-driving, and drones process their images on the fly!
In this track we explore use cases of machine learning that touch consumers directly. We will look at algorithms that run on drones and cars, algorithms that keep a human in the loop, algorithms that take where you live as input, and even algorithms that augment our reality. We will cover both the amazing applications AI is being put to and the best practices and tools you should use to bring your own application from the virtual into the physical world.
Track: AI Meets the Physical World
Location: Cyril Magnin II

Track Host: Roland Meertens
Roland Meertens is a Machine Learning Engineer at Autonomous Intelligent Driving. He works on the machine learning side of the perception software stack that will be deployed to the autonomous vehicles that will soon roam urban environments in Germany.
9:00am - 9:40am
Deep Learning on Microcontrollers
Learn why deep learning is a great fit for tiny, cheap devices, what you can build with it, and how to get started.
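The abstract doesn't include code, but as a hedged illustration of one common "getting started" path (assuming a TensorFlow/Keras workflow, which the talk may or may not use), the sketch below converts a deliberately tiny Keras model into a fully int8-quantized TensorFlow Lite flatbuffer, the format typically deployed with TensorFlow Lite for Microcontrollers:

# Minimal sketch (assumption: TensorFlow/Keras; the talk may use different tools).
# Converts a tiny Keras model to an int8-quantized .tflite flatbuffer, the format
# consumed by TensorFlow Lite for Microcontrollers on small, cheap devices.
import numpy as np
import tensorflow as tf

# Keep the model tiny: microcontrollers often have only tens of KB of RAM.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(32,)),
    tf.keras.layers.Dense(4, activation="softmax"),
])

def representative_data():
    # A few representative inputs so the converter can calibrate int8 ranges.
    for _ in range(100):
        yield [np.random.rand(1, 32).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

with open("model.tflite", "wb") as f:
    f.write(converter.convert())  # Flash this alongside the TFLite Micro runtime.

For a model this small the resulting flatbuffer is only a few kilobytes, which is what makes flash-constrained devices feasible targets.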
10:00am - 10:40am
From Robot Simulation to the Real-World
Simulation is one of the most powerful tools in the robot developer's tool belt. Besides allowing quicker, safer and cheaper iterations, it can be used to prototype before building, run continuous integration, train machine learning algorithms, etc. One popular open source robotics simulator is Gazebo, maintained by Open Robotics, the same foundation that maintains ROS, the Robot Operating System. Combined, ROS and Gazebo are used by an increasing number of developers around the world.
Gazebo's development started over 17 years ago, but its most current form started taking shape in 2012, when DARPA sponsored Open Robotics to run the Virtual Robotics Challenge, the first stage of its Robotics Challenge. A total of 26 teams competed in the virtual competition, controlling an Atlas robot from Boston Dynamics in a simulated disaster scenario. As a result of the virtual competition, the top 7 teams received funding to compete in the final competition with the same robot, but this time a physical one in a physical scenario.
Ever since, Gazebo has continued to be developed and improved to better support various types of robots spanning ground, water, and air, and it is increasingly used in academia, in industry and, sure enough, in other competitions. In this talk, Louise will give an overview of Gazebo's architecture and go over examples of projects using Gazebo in which Open Robotics has been involved, describing how they bridged virtual robots to their physical counterparts.
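To give a concrete flavor of the ROS side of that bridge, here is a minimal, hedged sketch (assuming ROS 1 with rospy and a robot, simulated in Gazebo or physical, that listens on /cmd_vel; the talk itself may use different interfaces). The point is that the same node can drive both the virtual and the physical robot:

# Minimal sketch (assumption: ROS 1 with rospy and a robot or Gazebo simulation
# subscribing to /cmd_vel, e.g. a TurtleBot in a Gazebo world). The same node
# drives the simulated and the physical robot, which is what makes sim-to-real
# iteration with ROS + Gazebo cheap.
import rospy
from geometry_msgs.msg import Twist

def drive_forward():
    rospy.init_node("simple_driver")
    pub = rospy.Publisher("/cmd_vel", Twist, queue_size=10)
    rate = rospy.Rate(10)  # 10 Hz command loop

    cmd = Twist()
    cmd.linear.x = 0.2   # m/s forward
    cmd.angular.z = 0.0  # no rotation

    while not rospy.is_shutdown():
        pub.publish(cmd)
        rate.sleep()

if __name__ == "__main__":
    try:
        drive_forward()
    except rospy.ROSInterruptException:
        pass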
11:00am - 11:40am
The Road to Artificial Intelligence: An Ethical Minefield
There is no doubt that developments in artificial intelligence offer significant benefits to humanity. However, are we properly considering some of the negative externalities that could accrue to society? This presentation offers a robust look into the complex ethical issues faced by today's top engineers and poses open-ended questions for the consideration of attendees. It places special focus on the rise of autonomous vehicles and their potential susceptibility to attacks by malicious agents, while also covering adversarial intrusions into machine learning engines more broadly.
12:00pm - 12:40pm
DeepRacer and DeepLens, Machine Learning for Fun! (and Profit?)
In 2017, Amazon announced the DeepLens, a machine-learning-enabled camera, which they released in 2018. In 2018 they announced the DeepRacer, a 1/18th-scale model race car that is basically a DeepLens on wheels, which they will release in July. In this talk, you'll hear about the speaker's attempts to do cool things with these machine learning "toys", learn some of the basics of machine learning necessary to understand what's happening in the devices, and, assuming the device isn't being finicky, see a DeepRacer in action! We will also have a torn-down DeepRacer and DeepLens for you to look at and (gently) play with, and we'll go over some possible real-life use cases for the devices. Come join us for machine learning fun!
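For context on how DeepRacer training works in practice: the AWS console has the developer supply a Python reward function that the simulator calls on every step of reinforcement learning. The sketch below is a minimal, hedged example using parameter names from the public DeepRacer documentation, not necessarily what the speaker uses:

# Minimal sketch of an AWS DeepRacer reward function (assumption: the standard
# console interface, where params is a dict the simulator passes in each step).
def reward_function(params):
    """Reward staying near the center line and keeping all wheels on the track."""
    track_width = params["track_width"]
    distance_from_center = params["distance_from_center"]
    all_wheels_on_track = params["all_wheels_on_track"]

    if not all_wheels_on_track:
        return 1e-3  # Nearly zero reward if the car leaves the track.

    # Reward decreases as the car drifts away from the center line.
    if distance_from_center <= 0.1 * track_width:
        return 1.0
    elif distance_from_center <= 0.25 * track_width:
        return 0.5
    elif distance_from_center <= 0.5 * track_width:
        return 0.1
    return 1e-3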
1:40pm - 2:20pm
Evoking Magic Realism with Augmented Reality Technology
2:40pm - 3:20pm
Advanced Topics in Autonomous Driving using Deep Learning
Autonomous vehicles need to perceive their surroundings and analyze them in order to make decisions and act in their environment. More specifically, an autonomous vehicle detects objects on the road and maneuvers through traffic using smart functional modules. In recent years, artificial intelligence, and deep neural networks in particular, has been widely used to build these smart functional modules.
While object detection (putting bounding boxes around objects) and semantic segmentation (labeling each pixel in an image) have been the focus of many researchers in autonomous driving, these methods may fall short when it comes to forming a better social understanding of pedestrian intent. In this talk, we present our approach to pedestrian intent prediction and communication, which leverages more complex computer vision algorithms that estimate human pose rather than bounding boxes or pixel labels.
Increasingly sophisticated models, like the pose estimation networks we describe, show tremendous promise, as they prove robust at approximating complex, non-linear mappings from images to outputs. However, these models are typically large, with a huge number of parameters, resulting in steep resource requirements at both training and inference time. This makes such networks challenging to use on resource- and power-constrained embedded systems. In this talk, we also show that compressing neural networks yields smaller models that make faster predictions.
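The abstract doesn't say which compression technique the speakers use, so as a hedged illustration of the general idea, the sketch below applies post-training dynamic quantization in PyTorch to a toy stand-in for a pose-keypoints-to-intent head, converting its Linear-layer weights from 32-bit floats to 8-bit integers:

# Minimal sketch (assumption: PyTorch; the talk's actual compression method is not
# specified in the abstract). Post-training dynamic quantization converts Linear-layer
# weights from float32 to int8, shrinking the model and speeding up CPU inference.
import torch
import torch.nn as nn

class TinyIntentHead(nn.Module):
    """Toy stand-in for a head mapping pose keypoints to a pedestrian-intent score."""
    def __init__(self, num_keypoints=17):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_keypoints * 2, 64),
            nn.ReLU(),
            nn.Linear(64, 2),  # e.g. "will cross" vs. "will not cross"
        )

    def forward(self, x):
        return self.net(x)

model = TinyIntentHead().eval()

# Quantize all Linear layers to int8 after training.
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

x = torch.randn(1, 34)  # 17 (x, y) keypoints, flattened
print(model(x), quantized(x))

Real compression pipelines for convolutional pose networks usually combine pruning, quantization, and distillation, but the size-versus-speed trade-off shown here is the same in spirit.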
2019 Tracks
-
Solving Software Engineering Problems with Machine Learning
Interesting machine learning use cases changing how we develop software today, including planned topics touching on infrastructure optimization, developer experience, security, and more.
-
Predictive Architectures in the Real World
Case-study-focused look at end-to-end predictive pipelines from places like Salesforce, Uber, LinkedIn, & Netflix.
-
Predictive Data Pipelines & Architectures
Case-study-focused look at end-to-end predictive pipelines from places like Salesforce, Uber, LinkedIn, & Netflix.
-
Sequential Data: Natural Language, Time Series, and Sound
Techniques, practices, and approaches around time series and sequential data. Expect topics including image recognition, NLP/NLU, preprocessing, and the crunching of related algorithms.
-
ML in Action
Applied track demonstrating how to train, score, and handle common machine learning use cases, with a heavy concentration on security and fraud.
-
Deep Learning in Practice
Deep learning use cases around edge computing, deep learning for search, explainability, fairness, and perception.
-
Handling Sequential Data Like an Expert / ML Applied to Operations
Discussing the complexities of time (half track) and machine learning in the data center (half track), exploring topics from HyperLogLog to predictive auto-scaling in each of two half-day tracks.
Half-day tracks -
AI Meets the Physical World
Where AI touches the physical world: think drones, ROS, NVIDIA, TPUs, and more.
-
Hands-on Codelabs & Speakers Office Hours
Codelabs are self-guided tutorials of a product, API, or toolkit, followed by an Office Hours period with the lab's creator.
-
Papers in Production: Modern CS in the Real World
Groundbreaking papers make real-world impact.