Currently, self-driving cars are powered by machine learning algorithms that require vast amounts of driving data to function safely - data that companies often keep private to maintain a competitive edge. But, say the researchers, if self-driving cars could learn to drive the way babies learn to walk - by watching and mimicking others around them - they would require far less compiled driving data.
This thought led the researchers to develop a new way for autonomous vehicles to learn safe driving techniques - by watching other cars on the road, predicting how they will respond to their environment, and using that information to make their own driving decisions. Such a training paradigm, say the researchers, could also increase data sharing and cooperation among researchers in the field.
"Each company goes through the same process of taking cars, putting sensors on them, paying drivers to drive the vehicles, collecting data, and teaching the cars to drive," says Ohn-Bar, a BU College of Engineering assistant professor of electrical and computer engineering and a junior faculty fellow at BU’s Rafik B. Hariri Institute for Computing and Computational Science & Engineering.
Sharing that driving data could help companies create safe autonomous vehicles faster, allowing everyone in society to benefit from the cooperation. Artificially intelligent driving systems require so much data to work well, says Ohn-Bar, that no single company will be able to solve this problem on its own.
"Billions of miles [of data collected on the road] are just a drop in an ocean of real-world events and diversity," says Ohn-Bar. "Yet, a missing data sample could lead to unsafe behavior and a potential crash."
The researchers’ proposed machine learning algorithm works by estimating the viewpoints and blind spots of other nearby cars to create a bird's-eye-view map of the surrounding environment. These maps help self-driving cars to detect obstacles - such as other cars or pedestrians - and to