Leveraging billions of images in the company's connected vehicle network, the new visual localization approach, says the company, greatly improves GPS location accuracy in urban areas and offers a scalable solution for navigation and autonomous vehicle applications. To evaluate the method, the company is also releasing a dataset and benchmark based on anonymized dash cam and GPS information from its network, aimed at advancing visual localization research for safety applications.
"This new localization method makes it possible for Nexar to deliver on our founding promise, which is to help rid the world of collisions," says Nexar co-founder and Chief Technology Officer Bruno Fernandez-Ruiz. "And the benefits will go far beyond our network – this approach could one day allow autonomous vehicles to reliably navigate cities."
"It's just as accurate and far less expensive than structure-based techniques such as lidar, which are limited in scale and expensive to compute," says Fernandez-Ruiz. "So the potential is really tremendous."
By pairing Nexar-powered dash cameras with the company's app, drivers join Nexar's global safe-driving network, where every vehicle is alerted to what's happening on the road ahead with the help of other vehicles around it. To deliver these critical alerts, says the company, it needs an efficient and accurate way of knowing in real time exactly where vehicles are on the road – a challenge for solutions like GPS in dense urban environments.
The AI-powered image retrieval algorithm, says the company, promises to dramatically improve localization in cities, solving a problem that has long plagued rideshare operators, navigation apps, and autonomous vehicle manufacturers alike. Its analysis of crowd-sourced data from over 250,000 driving hours in New York City found that at least 40% of rides suffered GPS errors of 10 meters or more due to the "urban canyon" effect.
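The article does not detail the algorithm, but the general idea behind GPS-assisted image retrieval localization can be sketched as follows. In this hypothetical illustration (not Nexar's actual implementation), a noisy GPS fix first prunes a database of geo-tagged street images to nearby candidates, and visual embedding similarity then selects the best match; the database, embedding dimensions, and search radius are all invented for the example.

```python
import numpy as np

# Hypothetical database of geo-tagged street images: each entry has a
# unit-normalized visual embedding and a known position (meters, local frame).
rng = np.random.default_rng(0)
db_embeddings = rng.standard_normal((1000, 128))
db_embeddings /= np.linalg.norm(db_embeddings, axis=1, keepdims=True)
db_positions = rng.uniform(0, 1000, (1000, 2))

def localize(query_emb, noisy_gps, radius=50.0):
    """Coarse-to-fine: GPS prunes candidates, visual similarity picks the match."""
    # Coarse stage: keep database images within `radius` meters of the noisy fix.
    near = np.linalg.norm(db_positions - noisy_gps, axis=1) <= radius
    cand_idx = np.flatnonzero(near)
    if cand_idx.size == 0:
        return noisy_gps  # no nearby imagery: fall back to the raw GPS fix
    # Fine stage: cosine similarity between the query and candidate embeddings.
    sims = db_embeddings[cand_idx] @ query_emb
    best = cand_idx[np.argmax(sims)]
    return db_positions[best]  # position of the best-matching database image
```

The coarse GPS filter keeps the visual search cheap even with a very large image database, which is one reason retrieval-based localization scales better than structure-based techniques.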
Using a hybrid coarse-to-fine approach that leverages visual and GPS location cues, the new method for