
Track: AI Meets the Physical World

Location: Cyril Magnin II

Day of week: Wednesday

Artificial intelligence and machine learning algorithms have become an essential part of many online services. Lately, many companies have started bringing AI to the physical world: mobile phone apps rely on more and more on-board machine learning, cars are becoming self-driving, and drones process their images on the fly!

In this track we explore use cases of machine learning that touch consumers directly: algorithms that run on drones and cars, algorithms that keep a human in the loop, that use where you live as input, or that even augment our reality. We will look both at the amazing applications AI is being put to and at the best practices and tools you should use to bring your own application from the virtual into the physical world.

Track Host: Roland Meertens

Machine Learning Engineer @Autonomous Intelligent Driving

Roland Meertens is a Machine Learning Engineer at Autonomous Intelligent Driving. He works on the machine learning side of the perception software stack that will be deployed to autonomous vehicles soon roaming urban environments in Germany.

9:00am - 9:40am

Deep Learning on Microcontrollers

Learn why deep learning is a great fit for tiny, cheap devices, what you can build with it, and how to get started.

Pete Warden, Technical Lead of TensorFlow Mobile @Google

10:00am - 10:40am

From Robot Simulation to the Real-World

Simulation is one of the most powerful tools in the robot developer's tool belt. Besides allowing quicker, safer and cheaper iterations, it can be used to prototype before building, run continuous integration, train machine learning algorithms, etc. One popular open source robotics simulator is Gazebo, maintained by Open Robotics, the same foundation that maintains ROS, the Robot Operating System. Combined, ROS and Gazebo are used by an increasing number of developers around the world.

 

Gazebo's development started over 17 years ago, but its current form began taking shape in 2012, when DARPA sponsored Open Robotics to run the Virtual Robotics Challenge, the first stage of its Robotics Challenge. A total of 26 teams competed in the virtual competition, controlling an Atlas robot from Boston Dynamics in a simulated disaster scenario. The top 7 teams then received funding to compete in the final competition with the same robot, this time a physical one in a physical scenario.

 

Ever since, Gazebo has continued to be developed and improved to better support various types of robots, spanning ground, water and air, and it is increasingly used by academia, industry and, sure enough, other competitions. In this talk, Louise will give an overview of Gazebo's architecture and go over examples of projects Open Robotics has been involved with, describing how they bridged virtual robots to their physical counterparts.
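A big part of what makes that bridge possible is that ROS abstracts the robot behind topics and messages: a control node neither knows nor cares whether its sensor data comes from Gazebo or from real hardware. As a rough illustration (the node name, topic names and stopping distance below are assumptions that depend on the actual robot configuration), a minimal rospy node like this can drive either a simulated or a physical robot unchanged:

```python
#!/usr/bin/env python
# Minimal sketch: the same node works against Gazebo or a real robot,
# as long as both expose a LaserScan on /scan and accept Twist on /cmd_vel.
import rospy
from geometry_msgs.msg import Twist
from sensor_msgs.msg import LaserScan

cmd_pub = None

def on_scan(scan):
    """Drive forward; stop if anything is closer than 1 m straight ahead."""
    cmd = Twist()
    ahead = scan.ranges[len(scan.ranges) // 2]  # reading straight ahead
    cmd.linear.x = 0.0 if ahead < 1.0 else 0.3  # forward speed in m/s
    cmd_pub.publish(cmd)

if __name__ == "__main__":
    rospy.init_node("simple_driver")
    cmd_pub = rospy.Publisher("/cmd_vel", Twist, queue_size=1)
    rospy.Subscriber("/scan", LaserScan, on_scan)
    rospy.spin()  # callbacks do the work until shutdown
```

Whether those topics are backed by Gazebo's simulated sensors or by real drivers is then largely a question of which other nodes you launch.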

Louise Poubel, Software Engineer @OpenRoboticsOrg

11:00am - 11:40am

The Road to Artificial Intelligence: An Ethical Minefield

There is no doubt that developments in artificial intelligence offer significant benefits to humanity. However, are we properly considering some of the negative externalities that could accrue to society? This presentation offers a robust look into the complex ethical issues faced by today's top engineers and poses open-ended questions for the consideration of attendees. It places special focus on the rise of autonomous vehicles and their potential susceptibility to attacks by malicious agents, while also covering adversarial intrusions into machine learning engines more broadly.

Lloyd Danzig, Chairman & Founder of @ICED(AI)

12:00pm - 12:40pm

DeepRacer and DeepLens, Machine Learning for Fun! (and Profit?)

In 2017, Amazon announced the DeepLens, a machine-learning-enabled camera, which they released in 2018. In 2018 they announced the DeepRacer, a 1/18th-scale model race car that is basically a DeepLens on wheels, which they will release in July. In this talk, you'll hear about the speaker's attempts to do cool things with these machine learning "toys", learn some of the machine learning basics needed to understand what's happening in the devices, and, assuming the device isn't being finicky, see a DeepRacer in action! We will also have a torn-down DeepRacer and DeepLens for you to look at and (gently) play with, and we'll go over some possible real-life use cases for the devices. Come join us for machine learning fun!

Jeremy Edberg, Cofounder @CloudNative

1:40pm - 2:20pm

Evoking Magic Realism with Augmented Reality Technology

At Niantic one of our missions is building experiences that are shared and social. We’ve seen how playing together has made an enormous impact on engagement in our games. Our players tell us that besides having fun, they have found benefits in making friends and building communities. For that to work, the AR interaction has to feel natural to our senses: the digital world must obey rules similar to the physical world's in order to create the suspension of disbelief in our brains. When this balance is achieved, players are immersed in a magic realism where they can have frictionless fun (check out Codename: Neon, one of our prototypes created to demo this). The technology just works as expected, obeying the laws of physics. For example, players in Codename: Neon can harvest energy from the white pellets on the ground, and those are a shared resource, so if one player gets them, the other players can't!
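The pellet mechanic is a good example of why shared AR needs authoritative shared state: every player's device has to agree on who harvested what. A toy sketch of first-claim-wins semantics (the class and names here are hypothetical illustrations, not Niantic's actual service) could look like this:

```python
import threading

class SharedPellets:
    """Toy authoritative store: the first player to claim a pellet gets it."""
    def __init__(self, pellet_ids):
        self._owners = {pid: None for pid in pellet_ids}
        self._lock = threading.Lock()

    def claim(self, pellet_id, player_id):
        # Atomic check-and-set so two players can't both harvest the same pellet.
        with self._lock:
            if self._owners.get(pellet_id) is None:
                self._owners[pellet_id] = player_id
                return True
            return False

pellets = SharedPellets(["p1", "p2"])
print(pellets.claim("p1", "alice"))  # True: alice harvests it
print(pellets.claim("p1", "bob"))    # False: already taken
```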
 
In this talk, we’ll explore how building a real-world AR system is in large part a software engineering art: it requires making choices among a set of trade-offs. On one hand, we have complex computer vision and machine learning algorithms that burn many CPU cycles; on the other, we have the conflicting goal of keeping the system as lean as possible, since AR runs on resource-limited hardware such as wearables and mobile devices.

Diana Hu, Director of Engineering & AR Platform @NianticLabs

2:40pm - 3:20pm

Advanced Topics in Autonomous Driving using Deep Learning

Autonomous vehicles need to perceive their surroundings and analyze them in order to make decisions and act in an environment. More specifically, an autonomous vehicle detects objects on the road and maneuvers through traffic using smart functional modules. In recent years, artificial intelligence, and in particular deep neural networks, has been widely used to build these smart functional modules.

While object detection (putting bounding boxes around objects) and semantic segmentation (labeling each pixel in an image) have been the focus of many researchers in autonomous driving, these methods may fall short when it comes to forming a deeper social understanding of pedestrian intent. In this talk, we present our approach to pedestrian intent prediction and communication, which leverages more complex computer vision algorithms that estimate human pose rather than bounding boxes or pixel labels.
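To make the contrast concrete, here is a minimal sketch (in PyTorch, with made-up shapes) of the kind of model that consumes pose keypoints instead of raw boxes or pixel masks: a short window of 2D keypoints per pedestrian is flattened and classified into crossing / not-crossing intent. The shapes, layer sizes and labels are illustrative assumptions, not the speaker's actual architecture.

```python
import torch
import torch.nn as nn

# Assumed input: 10 frames x 17 keypoints x (x, y) per pedestrian.
FRAMES, KEYPOINTS, COORDS = 10, 17, 2

intent_classifier = nn.Sequential(
    nn.Flatten(),                                  # (batch, 10*17*2)
    nn.Linear(FRAMES * KEYPOINTS * COORDS, 128),
    nn.ReLU(),
    nn.Linear(128, 2),                             # crossing vs. not crossing
)

# The keypoints would come from an upstream pose-estimation network;
# random data stands in for them here.
poses = torch.randn(4, FRAMES, KEYPOINTS, COORDS)  # batch of 4 pedestrians
logits = intent_classifier(poses)
print(logits.softmax(dim=-1))                      # per-class intent scores
```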

Increasingly sophisticated models, like the pose estimation networks we describe, show tremendous promise because they prove robust at approximating complex, non-linear mappings from images to outputs. However, these models are typically large, with a huge number of parameters, which makes them costly in training time and inference-time resources and therefore challenging to run on resource- and power-constrained embedded systems. In this talk, we also show how compressing these networks yields smaller models and faster predictions.
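Compression can mean several things (pruning, quantization, distillation). As one hedged illustration, PyTorch's dynamic quantization converts a model's linear-layer weights to 8 bits, typically shrinking the model and speeding up CPU inference; the stand-in model below is an assumption for illustration, not the approach used at Volkswagen.

```python
import torch
import torch.nn as nn

# Stand-in model; in practice this would be the trained perception network.
model = nn.Sequential(nn.Linear(680, 128), nn.ReLU(), nn.Linear(128, 2))

# Convert Linear weights to int8; activations are quantized on the fly.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 680)
print(quantized(x))  # same interface, smaller and faster on CPU
```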

Nasim Souly, Senior Engineer & Machine Learning Researcher @Volkswagen
