LIDAR for Autonomous System Design: Object Classification or Object Detection?

May 13, 2021 // By Sarven Ipek and Ron Kapusta, Analog Devices
Perceiving the world around us is challenging. Understanding the design requirements for object detection and classification will help achieve a safe and cost-effective solution.

The promise of a fully autonomous tomorrow no longer seems like a pipe dream. Today, the questions around autonomy center on the underlying technologies and the advancements needed to make autonomy a reality. Light detection and ranging (LIDAR) has become one of the most discussed technologies supporting the shift to autonomous applications, but many questions remain.

LIDAR systems with ranges greater than 100 m and 0.1° of angular resolution continue to dominate autonomous driving technology headlines. However, not all autonomous applications require this level of performance; valet park assist and street sweeping are two such examples. Plenty of depth sensing technologies can enable these applications, including radio detection and ranging (radar), stereo vision, ultrasonic detection and ranging, and LIDAR. Each of these sensors, however, involves a unique trade-off between performance, form factor, and cost.

Ultrasonic devices are the most affordable but are limited in range, resolution, and dependability. Radar offers much better range and dependability but has limited angular resolution, while stereo vision can carry a large computational overhead and suffer accuracy limitations if not calibrated properly. Thoughtful LIDAR system design helps bridge these gaps with precision depth sensing, fine angular resolution, and low complexity processing, even at long ranges. However, LIDAR systems are typically viewed as bulky and costly, which needn't be the case.

LIDAR system design begins with identifying the smallest object the system needs to detect, the reflectivity of that object, and how far away that object is positioned. This defines the system's required angular resolution. From that, the minimum signal-to-noise ratio (SNR) needed to detect the object can be calculated; the SNR determines whether the system can meet its true/false positive and negative detection criteria.
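As a rough sketch of the first step, the angular extent of the smallest target at its maximum range sets an upper bound on the usable angular resolution. The helper below is an illustration, not a formula from the article; the object size, range, and the requirement of multiple returns per object are assumed example values.

```python
import math

def required_angular_resolution_deg(object_size_m: float,
                                    distance_m: float,
                                    returns_on_object: int = 1) -> float:
    """Upper bound on angular resolution (degrees) so that the target
    subtends at least `returns_on_object` measurement points.

    Uses the small-angle approximation: angle ~= size / distance (radians).
    """
    angular_extent_rad = object_size_m / distance_m
    return math.degrees(angular_extent_rad / returns_on_object)

# Illustrative numbers: a 0.1 m object at 100 m subtends about 0.057 degrees,
# consistent with the ~0.1 degree class of long-range automotive LIDAR.
single_return = required_angular_resolution_deg(0.1, 100.0)

# Requiring, say, two returns on the object (an assumed design margin)
# halves the allowable beam spacing.
two_returns = required_angular_resolution_deg(0.1, 100.0, returns_on_object=2)
```

A short-range application such as valet park assist, with targets a few meters away, can tolerate a far coarser angular resolution for the same object size, which is one reason its LIDAR can be simpler and cheaper.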

Understanding the perception environment and amount of information necessary to make the appropriate design trade-offs enables the development of the optimal solution relative to both cost and performance. For example, consider an autonomous automobile traveling down a
