For the first time, say the researchers, their algorithm enables visual terrain-relative navigation (VTRN) technology to work regardless of seasonal changes to that terrain. Current VTRN technology, which compares nearby terrain to high-resolution satellite images, requires that the terrain it is viewing closely matches the images in its database.
Anything that alters or obscures the terrain - such as snow cover or fallen leaves - can prevent the images from matching and stop the system from working. So, say the researchers, unless there is a database of landscape images under every conceivable condition, VTRN systems are easily confused.
The new algorithm addresses this by using deep learning and artificial intelligence (AI) to remove seasonal content that hinders current VTRN systems.
"The rule of thumb is that both images - the one from the satellite and the one from the autonomous vehicle - have to have identical content for current techniques to work," says Anthony Fragoso (MS '14, PhD '18), lecturer and staff scientist, and lead author of a paper on the research. "The differences that they can handle are about what can be accomplished with an Instagram filter that changes an image's hues. In real systems, however, things change drastically based on season because the images no longer contain the same objects and cannot be directly compared."
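The direct comparison Fragoso describes is, in standard correlation-based techniques, essentially template matching: the vehicle's view is slid over the satellite map and scored by how strongly the pixels correlate. A toy sketch of that idea (hypothetical code, not the researchers' implementation; `localize` and the image sizes are made up for illustration):

```python
import numpy as np

def normalized_cross_correlation(patch, template):
    """Zero-mean normalized cross-correlation between two equal-size images (1.0 = identical content)."""
    a = patch - patch.mean()
    b = template - template.mean()
    denom = np.sqrt((a ** 2).sum() * (b ** 2).sum())
    return float((a * b).sum() / denom) if denom else 0.0

def localize(view, satellite_map, size):
    """Slide the vehicle's view over the satellite map; return the best-matching offset and its score."""
    best, best_pos = -np.inf, (0, 0)
    h, w = satellite_map.shape
    for i in range(h - size + 1):
        for j in range(w - size + 1):
            score = normalized_cross_correlation(view, satellite_map[i:i + size, j:j + size])
            if score > best:
                best, best_pos = score, (i, j)
    return best_pos, best

# Toy demo: when the view's content matches the map, localization succeeds.
rng = np.random.default_rng(0)
sat = rng.random((32, 32))            # stand-in for a satellite image
view = sat[10:18, 14:22].copy()       # the vehicle "sees" a patch of the same terrain
pos, score = localize(view, sat, 8)   # recovers offset (10, 14) with score ~1.0
```

If snow or leaf fall changes what is actually in the view, the pixel-level correlation peak degrades or disappears, which is the failure mode the new algorithm targets.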
The researchers' process uses self-supervised learning. Most computer-vision strategies rely on human annotators to label large data sets so that an algorithm learns to recognize what it is seeing; self-supervised learning instead lets the algorithm teach itself. The AI looks for patterns in images by teasing out details and features that humans would likely miss.
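The key point of self-supervision is that the training signal comes from the data itself rather than from human labels. A deliberately tiny, hypothetical illustration of that idea (a closed-form linear model undoing a synthetic "snow" corruption - nothing like the researchers' deep-learning system in scale, but the same labeling principle):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical, invertible "seasonal" corruption standing in for snow cover.
def add_snow(x):
    return 0.5 * x + 0.45

# Self-supervision: the "labels" are just the original images, obtained for
# free by corrupting them ourselves -- no human annotation anywhere.
originals = rng.random((1000, 16))      # toy 16-pixel "images"
corrupted = add_snow(originals)

# Fit an affine model corrupted -> original by least squares.
X = np.column_stack([corrupted.ravel(), np.ones(corrupted.size)])
y = originals.ravel()
w, b = np.linalg.lstsq(X, y, rcond=None)[0]

# The learned model removes the "seasonal" content from unseen images.
restored = w * add_snow(rng.random((5, 16))) + b
```

Here the model recovers the exact inverse of the corruption (w = 2, b = -0.9); a deep network trained the same way can learn to strip far more complex seasonal appearance changes before the imagery is compared.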
Supplementing the current generation of VTRN with the new system, say the researchers, yields more accurate localization: in one experiment, the researchers attempted to localize images of summer foliage against winter leaf-off imagery using a standard correlation-based VTRN technique. They found that