Autonomous navigation using visual terrain recognition gets AI boost

Researchers at the California Institute of Technology (Caltech) say they have developed a new algorithm that allows autonomous systems to recognize where they are simply by looking at the terrain around them.
By Rich Pell

For the first time, say the researchers, their algorithm enables visual terrain-relative navigation (VTRN) technology to work regardless of seasonal changes to that terrain. Current VTRN technology, which compares nearby terrain to high-resolution satellite images, requires that the terrain it is viewing closely matches the images in its database.

Anything that alters or obscures the terrain – such as snow cover or fallen leaves – can cause the images not to match up and prevent the system from working. So, say the researchers, unless there is a database of landscape images under every conceivable condition, VTRN systems can easily be confused.

The new algorithm addresses this by using deep learning and artificial intelligence (AI) to remove seasonal content that hinders current VTRN systems.
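
In rough terms, the matching step in VTRN is template matching: the onboard camera view is slid across a georeferenced satellite map, and the position with the strongest correlation is taken as the vehicle’s location. The sketch below is a minimal illustration of that idea, not the researchers’ implementation; the seasonal_transform function is a hypothetical stand-in for the learned deep transform that would strip season-dependent appearance from both images before they are compared.

import numpy as np

def seasonal_transform(image):
    # Hypothetical stand-in for the learned seasonally invariant transform;
    # here it only normalizes intensity, whereas the real system uses a deep network.
    return (image - image.mean()) / (image.std() + 1e-8)

def locate(reference, patch):
    # Slide patch over reference and return the (row, col) offset with the
    # highest normalized cross-correlation score.
    rh, rw = reference.shape
    ph, pw = patch.shape
    p = (patch - patch.mean()) / (patch.std() + 1e-8)
    best_score, best_offset = -np.inf, (0, 0)
    for r in range(rh - ph + 1):
        for c in range(rw - pw + 1):
            window = reference[r:r + ph, c:c + pw]
            w = (window - window.mean()) / (window.std() + 1e-8)
            score = float((w * p).mean())
            if score > best_score:
                best_score, best_offset = score, (r, c)
    return best_offset, best_score

# Toy data: the "camera view" is cut directly from the satellite map.
satellite_map = np.random.rand(128, 128)
camera_view = satellite_map[40:72, 60:92].copy()

offset, score = locate(seasonal_transform(satellite_map),
                       seasonal_transform(camera_view))
print(offset, round(score, 3))   # expect an offset near (40, 60)

In a real system the seasonal mismatch would make the raw correlation peak unreliable, which is exactly the failure mode the learned transform is meant to remove before this matching step runs.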

“The rule of thumb is that both images – the one from the satellite and the one from the autonomous vehicle – have to have identical content for current techniques to work,” says Anthony Fragoso (MS ’14, PhD ’18), lecturer and staff scientist, and lead author of a paper on the research. “The differences that they can handle are about what can be accomplished with an Instagram filter that changes an image’s hues. In real systems, however, things change drastically based on season because the images no longer contain the same objects and cannot be directly compared.”

The researchers’ process uses self-supervised learning. Most computer-vision strategies rely on human annotators to label large data sets in order to teach an algorithm how to recognize what it is seeing; here, the algorithm teaches itself, looking for patterns in images by teasing out details and features that would likely be missed by humans.
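
The published paper calls this a seasonally invariant deep transform; the exact architecture and objective are not spelled out in this article, so the PyTorch sketch below is only one plausible illustration of the self-supervised idea. Co-registered images of the same terrain taken in different seasons act as each other’s training signal: the network is pushed to produce the same output for both, with no human labels involved. A real objective would also need a term that prevents the trivial solution of mapping every input to the same constant image.

import torch
import torch.nn as nn

# Hypothetical transform network (the actual Caltech architecture may differ):
# maps a single-channel terrain image to a season-invariant representation.
transform = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1),
)
optimizer = torch.optim.Adam(transform.parameters(), lr=1e-3)

def training_step(summer_batch, winter_batch):
    # Self-supervised step: the "label" for the summer image is simply the
    # transformed winter image of the same location, and vice versa.
    out_summer = transform(summer_batch)
    out_winter = transform(winter_batch)
    loss = nn.functional.mse_loss(out_summer, out_winter)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy co-registered seasonal pairs: (batch, channels, height, width)
summer = torch.rand(8, 1, 64, 64)
winter = torch.rand(8, 1, 64, 64)
print(training_step(summer, winter))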

Supplementing the current generation of VTRN with the new system, say the researchers, yields more accurate localization: in one experiment, the team attempted to localize images of summer foliage against winter leaf-off imagery using a standard correlation-based VTRN technique. They found that performance was no better than a coin flip, with 50 percent of attempts resulting in navigation failures.

In contrast, inserting the new algorithm into the VTRN pipeline worked far better: 92 percent of attempts were correctly matched, and the remaining 8 percent could be identified as problematic in advance and then managed with other established navigation techniques.
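
That last point matters for reliability: a match that is going to fail can be flagged before it is trusted. As a hedged illustration (the scores and the confidence threshold below are invented, not taken from the paper), the bookkeeping might look like this, with low-scoring attempts handed off to other navigation aids instead of being used.

# Each localization attempt yields a peak correlation score and whether the
# resulting position fix was actually correct (toy values for illustration).
attempts = [(0.91, True), (0.85, True), (0.22, False),
            (0.78, True), (0.31, False), (0.88, True)]

CONFIDENCE_THRESHOLD = 0.4   # invented value, not from the paper

confident = [ok for score, ok in attempts if score >= CONFIDENCE_THRESHOLD]
flagged = [ok for score, ok in attempts if score < CONFIDENCE_THRESHOLD]

print(f"correct among confident matches: {sum(confident)}/{len(confident)}")
print(f"flagged in advance and handed to other navigation aids: {len(flagged)}")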

“Computers can find obscure patterns that our eyes can’t see and can pick up even the smallest trend,” says graduate student Connor Lee (BS ’17, MS ’19), one of the algorithm’s developers.

VTRN was in danger of turning into an infeasible technology in common but challenging environments, says Lee. “We rescued decades of work in solving this problem.”

Beyond the utility for autonomous drones on Earth, say the researchers, the system also has applications for space missions. The entry, descent, and landing (EDL) system on JPL’s Mars 2020 Perseverance rover mission, for example, used VTRN for the first time on the Red Planet to land at Jezero Crater – a site previously considered too hazardous for a safe entry.

With rovers such as Perseverance, “a certain amount of autonomous driving is necessary,” says Soon-Jo Chung, Bren Professor of Aerospace and Control and Dynamical Systems and research scientist at JPL, “since transmissions could take 20 minutes to travel between Earth and Mars, and there is no GPS on Mars.”
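
The delay Chung mentions follows directly from the Earth–Mars distance and the speed of light; a quick back-of-the-envelope check, using approximate separations since the true distance varies with both planets’ orbital positions:

# One-way light travel time between Earth and Mars at approximate extremes.
SPEED_OF_LIGHT_KM_S = 299_792.458
separations_km = {"closest approach": 55e6, "near maximum separation": 400e6}

for label, d in separations_km.items():
    minutes = d / SPEED_OF_LIGHT_KM_S / 60
    print(f"{label}: ~{minutes:.0f} minutes one-way")
# roughly 3 minutes at closest approach, about 22 minutes near maximum separation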

The researchers also considered the Martian polar regions, which undergo intense seasonal changes similar to those on Earth; there, the new system could allow for improved navigation in support of scientific objectives, including the search for water. Next, say the researchers, they will expand the technology to account for changes in the weather as well – fog, rain, snow, and so on – which, if successful, could help improve navigation systems for driverless cars.

For more, see “A seasonally invariant deep transform for visual terrain-relative navigation.”

Related articles:
Vision processing – its role in the future of autonomous driving
Robot navigation SDK for autonomous systems researchers
Autonomous drone maps 3D models of dense urban environments
Mapless navigation lets autonomous cars drive on unknown roads
Precision navigation partnership looks to quantum sensing
