Sensor fusion: A critical step on the road to autonomous vehicles

By Rich Pell

These numbers will continue to increase as new laws are passed (for example, mandatory rear-view cameras in the United States). Also, insurance discounts and car safety ratings from agencies like the National Highway Traffic Safety Administration (NHTSA) and the European New Car Assessment Programme (Euro NCAP) are making some systems mandatory or increasing customer demand for them.

Autonomous car features like valet parking, highway cruise control and automated emergency braking also rely heavily on sensors. It is not just the number or type of sensors that matters, but how they are used.

Most ADAS installed in cars on the road today operate independently, meaning they hardly exchange information with each other. (Yes, some high-end cars have very advanced autonomous functions, although this is not yet the norm.) Rear-view cameras, surround-view systems, radar and front cameras each have their individual purpose.

By adding these independent systems to a car, you can give more information to the driver and realize some autonomous functions.  However, you can also hit a limit on what can realistically be done – see figure 1.

Figure 1: ADAS added as individual, independent functions to a car.

Sensor fusion
The individual shortcomings of each sensor type cannot be overcome by just using the same sensor type multiple times. Instead, it requires combining information coming from different types of sensors. A camera CMOS chip working in the visible spectrum has trouble in dense fog, rain, sun glare and the absence of light. Radar lacks the high resolution of today's imaging sensors, and so on. Similar strengths and weaknesses can be found for each sensor type.

The core idea of sensor fusion is to take the inputs of different sensors and sensor types and use the combined information to perceive the environment more accurately. That results in better and safer decisions than independent systems could make.

Radar might not have the resolution of light-based sensors, but it is great for measuring distances and piercing through rain, snow and fog. Those conditions, or the absence of light, blind a camera, but a camera can see color (think street signs and road markings) and has excellent resolution. One-megapixel image sensors are on the road today; over the next few years the trend will move to two and even four megapixels.

Radar and camera are examples of how two sensor technologies can complement each other very well. In this way a fused system can do more than the sum of its independent systems could.
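As an illustration of how complementary measurements combine, the sketch below fuses two independent range estimates by inverse-variance weighting. The accuracy figures are made-up assumptions for illustration, not real sensor specifications.

```python
# Minimal sensor-fusion sketch: combine two independent distance
# estimates by weighting each with the inverse of its variance.
# All numbers below are illustrative assumptions.

def fuse(measurements):
    """measurements: list of (value, variance) pairs from independent sensors."""
    weights = [1.0 / var for _, var in measurements]
    total = sum(weights)
    value = sum(w * v for w, (v, _) in zip(weights, measurements)) / total
    return value, 1.0 / total  # fused variance is below every input variance

radar_range = (52.0, 0.25)   # metres; radar: tight range accuracy
camera_range = (50.0, 4.0)   # metres; camera: coarser range estimate
dist, var = fuse([radar_range, camera_range])
```

The fused estimate leans toward the more precise radar reading, and its variance is smaller than either input's, which is the mathematical sense in which the fused system does "more than the sum" of its parts.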

Using different sensor types also offers a certain level of redundancy against environmental conditions that could cause all sensors of one type to fail. Such a failure or malfunction can be caused by natural phenomena (such as a dense fog bank) or man-made ones (for instance, spoofing or jamming a camera or radar).

Such a sensor-fused system could maintain some basic or emergency functionality even if it lost a sensor. For purely warning functions, or when the driver is always ready and able to take over control, a system failure might not be as critical. However, highly and fully autonomous functions must allow adequate time to hand control back to the driver, and a minimum level of control needs to be maintained by the system during that time span.

Sensor fusion system examples
Sensor fusion can happen at different levels of complexity and with different types of data. Two basic examples of sensor fusion are: a) rear view camera plus ultrasonic distance measuring; and b) front camera plus multimode front radar – see figure 2. This can be achieved now with minor changes to existing systems and/or by adding a separate sensor fusion control unit.

Figure 2: Fusing front radar with front camera for adaptive cruise control plus lane-keep assist or rear-view camera with ultrasonic distance warning for self-parking.
  • Rear view camera + ultrasonic distance measuring

Ultrasonic park assist, which has reached wide acceptance and maturity in the automotive market, gives an acoustic or visual warning of nearby objects while parking. As mentioned earlier, rear-view cameras will be legally required in all new cars in the U.S. by 2018. Combining information from both allows the introduction of advanced park assist features, which would not be possible with either system alone. The camera gives the driver a clear view of what is behind the car, and machine vision algorithms can detect objects as well as the curb and markings on the street. Supplemented with the capabilities of the ultrasonic sensors, the distance to the identified objects can be accurately determined, and basic proximity warning in low light or even full darkness is ensured.

  • Front camera + multimode front radar

Another powerful combination is fusing a front camera with front radar. Front radar can measure the speed and distance of objects up to 150 meters away in all weather conditions. The camera excels at detecting and differentiating objects, including reading street signs and road markings. By using multiple camera sensors with different fields of view (FoV) and different optics, both pedestrians and bikes passing in front of the car and objects 150 meters or more ahead can be identified. Features like automated emergency braking and city stop-and-go cruise control can then be reliably implemented.
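A hedged sketch of the association step such a camera-plus-radar system needs: camera detections and radar tracks are matched by bearing, after which the radar supplies range and speed while the camera supplies the object class. The function name, data layout and threshold are all hypothetical.

```python
# Illustrative association of camera detections with radar tracks.
# Bearings in degrees; max_bearing_err is an assumed gating threshold.

def associate(camera_objs, radar_tracks, max_bearing_err=2.0):
    fused = []
    for cam in camera_objs:
        # Pick the radar track closest in bearing to this camera detection.
        best = min(radar_tracks,
                   key=lambda r: abs(r["bearing"] - cam["bearing"]),
                   default=None)
        if best and abs(best["bearing"] - cam["bearing"]) <= max_bearing_err:
            fused.append({"class": cam["class"],        # from the camera
                          "range_m": best["range_m"],   # from the radar
                          "speed_mps": best["speed_mps"]})
    return fused
```

Production systems use far more elaborate gating and tracking, but the division of labor is the same: classification from the camera, kinematics from the radar.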

Performing ADAS functions under certain well-defined conditions can in many cases be achieved with single sensor types or individual systems. However, this can be insufficient for reliable operation given the unpredictable conditions found on our streets. Sensor fusion – in addition to enabling more complex and autonomous features – can reduce false positives and false negatives in existing features. This will be critical to convincing customers and lawmakers to trust "a machine" to drive a car autonomously.

Sensor fusion system partitioning
Instead of each system independently performing its own warning or control function, in a fused system the final decision on what action to take is made centrally by a single entity. A key question then becomes where the data processing is done and how to get the data from the sensors to the central ECU.

When fusing multiple sensors that are not co-located but distributed all over the car, the connections and cables between the sensors and the centralized fusion ECU deserve special consideration. The same is true for the location of the data processing, as it will also impact the implementation of the whole system. Let us look at the two extreme ends of a possible system partitioning.

Centralized processing
At the centralized end of the spectrum, all data processing and decision-making is done in a single location, with raw data coming from the various sensors – see figure 3.

Figure 3: Centralized processing with “dumb” satellite sensor modules.

Sensor module – Sensor modules are small, low cost and low power, as only sensing and data transmission is required. Sensors have a flexible choice of mounting locations and require little mounting space. The replacement cost is low. Typically, sensor modules have lower functional safety requirements because no processing or decision-making is done.

Processing ECU – A central processing ECU has all data available as no data is lost due to pre-processing or compression in the sensor module. More sensors can be deployed because they are low cost with a small form factor.

Sensor module – Wide-bandwidth communication is needed to handle the amount of sensor data in real time (up to multiple Gbit/s), which raises the possibility of higher electromagnetic interference (EMI).

Processing ECU – The central ECU needs high-processing power and speed to handle all incoming data. This means higher power requirements and heat generation, with many high bandwidth I/O and high-end application processors. Adding sensors will significantly increase the performance needs on the central ECU. Some drawbacks can be overcome by using interfaces (such as FPD-Link III) that allow sending sensor data as well as power, control and configuration data (bi-directional back-channel) over a single coaxial cable. This can significantly reduce the system’s wiring requirements.
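To see why multi-Gbit/s links and serializer interfaces come into play, a quick back-of-the-envelope calculation helps. The resolution, bit depth and frame rate below are illustrative assumptions, not any particular sensor's specifications.

```python
# Raw data rate of a single "dumb" camera module (assumed parameters).
width, height = 1280, 800   # roughly a one-megapixel imager
bits_per_pixel = 12         # raw sensor output before any processing
fps = 30

raw_bps = width * height * bits_per_pixel * fps
raw_gbps = raw_bps / 1e9    # roughly 0.37 Gbit/s for one camera

# Four such cameras plus radar already push the aggregate bandwidth
# to the central ECU well past 1 Gbit/s.
aggregate_gbps = 4 * raw_gbps
```

Higher resolutions, higher frame rates and multiple HDR exposures scale these numbers up further, which is why raw-data architectures lean on dedicated high-speed links rather than conventional automotive buses.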

Fully-distributed system
On the other end of the spectrum is the fully-distributed system. It does a high level of data processing and, to a certain extent, decision-making locally in the sensor modules. A fully-distributed system only sends object data or meta-data (describes object characteristics and/or identifies objects) back to a central fusion ECU. Here data is combined and the final decision on how to act or react is made – see figure 4.

Figure 4: Distributed system with sensor data processing in the sensor modules and decision making in a central ECU.

A fully-distributed system has both benefits and drawbacks.

Sensor module – A lower bandwidth, simpler and cheaper interface between the sensor modules and the central ECU can be used. In many cases a CAN bus of less than 1 Mbit/s is sufficient.

Processing ECU – The central ECU only fuses object data, so it requires lower processing power. An advanced safety microcontroller can be sufficient for some systems. Being a smaller module it requires less power. Adding sensors does not drastically increase the performance needs of the central ECU as much of the processing is done in the sensor itself.

Sensor module – Sensor modules require an application processor, become larger, pricier and require more power. Functional safety requirements are higher in the sensor module due to local processing and decision making. Of course, adding more sensors can add significant cost.

Processing ECU – A central decision-making ECU only has object data available with no access to the actual sensor data. “Zooming” into areas of interest is difficult to realize.
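To illustrate why object data fits such a low-bandwidth link, the sketch below packs a hypothetical object description (ID, class, range, speed) into a single classic 8-byte CAN payload. The field layout and scaling are assumptions for illustration, not any standard message format.

```python
import struct

def pack_object(obj_id, cls_id, range_m, speed_mps):
    """Pack hypothetical object meta-data into one classic CAN data field."""
    payload = struct.pack(
        ">BBHh2x",
        obj_id & 0xFF,        # object identifier
        cls_id & 0xFF,        # object class (e.g. 1 = pedestrian)
        int(range_m * 10),    # range in decimetres (0 .. 6553.5 m)
        int(speed_mps * 10),  # signed speed in 0.1 m/s steps
    )
    return payload            # 8 bytes: fits a single classic CAN frame
```

At one frame per tracked object and a few dozen objects per processing cycle, the total traffic stays comfortably below 1 Mbit/s, in contrast to the hundreds of Mbit/s a raw video stream would need.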

Finding the golden middle
Depending on the number and type of sensors used in a system, the scalability requirements for different car types, and upgrade options, a mix of the two topologies can lead to an optimized scenario. Today many fusion systems use sensors with local processing for radar, LIDAR, and the front camera for machine vision.

Such a mixed system can combine existing smart sensor modules with an object-data fusion ECU, while "dumb" sensor modules for systems like surround view and rear-view cameras make video available to the driver – see figure 5. Many more ADAS functions can be integrated into a fusion system, such as driver monitoring or a camera-monitoring system, but the principle of sensor fusion remains the same.

Figure 5: Finding the perfect mix of distributed and centralized processing.

Platform management, targeted car segments, flexibility and scalability are important economic factors that also play an important role when partitioning and designing a fusion system. The resulting system might not be the best case scenario for any given variant, but could be best when looked at from a platform and fleet perspective.

Who is the “viewer” of all this sensor data?
There are two aspects of ADAS that we have not yet discussed: informational ADAS versus functional ADAS. The first extends the senses of the driver while he or she is still in full control of the car (for example, surround view or night vision). The second is machine vision, which allows the car to perceive its environment and make its own decisions and take its own actions (automated emergency braking, lane-keep assist). Sensor fusion naturally allows those two worlds to converge.

With that comes the possibility of using the same sensor for different purposes, but at the price of limiting the choices for inter-module communication and the location of processing. Take surround view as an example: it was originally designed to give the driver a 360-degree field of view (FoV) through video feeds to a central display. Why not use the same cameras and apply machine vision to them? The rear camera can then be used for back-over protection or automated parking, and the side cameras for blind-spot detection/warning as well as automated parking.

Machine vision used alone does local processing in the sensor module and then sends object data, or even commands, over a simple low-bandwidth connection like CAN. That connection is insufficient for a full video stream, however. Compressing the video can certainly reduce the needed bandwidth, but not enough to get into the single-megabit range, and compression comes with its own challenges.
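A rough calculation (again with assumed, illustrative numbers) shows why: even aggressive compression leaves the video stream above what a classic CAN link can carry.

```python
# Illustrative numbers: a ~1 MP camera at 30 fps, 12 bits per pixel.
raw_bps = 1280 * 800 * 12 * 30   # about 369 Mbit/s uncompressed

can_bps = 1_000_000              # classic CAN link, ~1 Mbit/s
for ratio in (50, 100, 200):
    compressed = raw_bps / ratio
    print(f"{ratio}:1 compression -> {compressed / 1e6:.1f} Mbit/s")
# Even at an optimistic 200:1, roughly 1.8 Mbit/s still exceeds the link.
```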

With increasing resolutions, frame rates and numbers of exposures for high dynamic range (HDR), this becomes much more difficult. A high-bandwidth connection with no data processing in the camera module solves the problem for the video, but processing then needs to be added to the central ECU to run machine vision there. Lack of central processing power or thermal limitations can become the bottleneck for this solution.

Using both processing in the sensor module and a high-bandwidth connection at the same time is not technically impossible, but it might not be beneficial from an overall system cost, power and mounting-space perspective.

Reliable operation of sensor fusion configurations
As many fusion systems are capable of performing autonomous control of certain car functions (examples include steering, braking, accelerating) without the driver, functional safety considerations need to be included to ensure the safe and reliable system operation under various conditions and over the lifetime of the car. As soon as a decision is made and followed up by an autonomous action, the functional safety requirements increase significantly.

With a distributed approach, each module processing critical data or making decisions will have to meet those increased standards. This adds bill of materials (BOM) cost, size, power and software compared to a module that is just gathering and sending sensor information. In an environment where mounting space is scarce, cooling is difficult, and the risk of damage and needed replacement is high (a simple fender bender could mean replacing the bumper and all attached sensors), this can offset the benefits of a distributed system with many sensor modules.

Even a "dumb" sensor module needs to perform self-diagnostics and error reporting to allow safe operation of the whole system, although not to the extent of a smart sensor module.

While pure driver information systems can shut down in case their function is compromised and notify the driver, highly autonomous functions do not have that freedom. Imagine a car performing an emergency braking maneuver and suddenly disengaging and releasing the brakes. Or imagine the full system shutting down on a highway while the driver is sleeping on “full auto pilot” (a potential future scenario). A minimum time will be needed during which the system has to continue working until a driver can safely take back control, which can last several seconds up to half a minute.
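One way to think about that requirement is as a small fail-degraded state machine. The sketch below is purely illustrative: the state names, transitions and the 30-second window are assumptions, not a specification from any standard or production system.

```python
TAKEOVER_WINDOW_S = 30.0   # assumed upper bound for a driver take-over

def next_state(state, fault, driver_has_control, elapsed_s):
    """Advance a hypothetical degraded-mode supervisor by one step."""
    if state == "NOMINAL" and fault:
        return "DEGRADED"          # keep minimum control on remaining sensors
    if state == "DEGRADED":
        if driver_has_control:
            return "MANUAL"        # driver took over within the window
        if elapsed_s > TAKEOVER_WINDOW_S:
            return "MINIMAL_RISK"  # e.g. a controlled stop
    return state
```

The key property is that a fault never transitions the system directly to "off": there is always a degraded state that keeps minimum control alive until the driver takes over or a minimal-risk maneuver completes.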

There does not yet seem to be a clear industry consensus on the extent to which the system has to remain operational, or on how to ensure operation under fault conditions. Airplanes with autopilot features typically use redundant systems; while these are generally considered safe, they are expensive and space-consuming solutions.

Sensor fusion is a critical step towards the goal of turning on the auto pilot and leaning back to enjoy the ride.

About the author:
Hannes Estl is General Manager, Automotive ADAS Sector at Texas Instruments.

Related articles:
IoT: sensor fusion or confusion?
Sensor fusion and radar to drive Advanced Driver Assistance Systems
Audience, InvenSense buy up sensor fusion software firms
MEMS’ new battleground: Hardware-agnostic sensor fusion?
Multi-sensor data fusion processing unit targets drones

