Ensuring ADAS safety with multi-sensor fusion
If you’ve driven around the Metro Phoenix area of the United States recently, you may have shared the streets and freeways with a driverless vehicle. Jaguar’s I-PACE forms the base vehicle for Waymo’s premium electric self-driving fleet which, even without the exterior branding, is recognisable by the ‘ski-box’ sized sensor housing mounted on the roof. The availability of this and other self-driving services responds to a growing demand for car sharing and a falling interest in car ownership, especially among those whose driving is limited to local commutes and grocery shopping. It also aims to provide a cost-effective transport solution for the three million Americans aged 40 and over who are blind or have low vision, as well as for senior citizens living in car-dependent communities [Reference 1].
Advanced Driver-Assistance Systems (ADAS) have become a standard feature of many modern vehicles, helping the driver to handle certain traffic situations. Many automotive OEMs and their suppliers are currently focusing on Level 3 automation, where ‘hands-off’ and ‘eyes-off’ driving is possible but the driver has to intervene under certain conditions. Market disruptors looking to offer driverless car-sharing services, such as Waymo, are already operating in the Level 4 space.
Reviewing the constituent parts of automation
When broken down into its constituent parts, the autonomous driving platform is essentially made up of three blocks: sensing, computation and actuation. All current autonomous driving solutions combine a significant number of different sensing technologies to perceive the environment around them. These include short-range sensing, such as ultrasound and short-range radar, as well as long-range radar, LiDAR, and 360° cameras.
The challenges of processing all this sensing data include the huge variation in data volume, coupled with the broad spectrum of sampling rates employed. A LiDAR can generate more than a million data points per second, while other sensors may deliver only tens of thousands of samples per second. And, while the lower SAE levels can rely upon a handful of sensor inputs, implementing the higher SAE levels demands significantly more sensor data; without it, the environment around the entire vehicle cannot be accurately perceived.
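To make the rate-mismatch problem concrete, the sketch below shows one common way a fusion stage can cope with streams arriving at very different rates: buffer each sensor independently and, on a fixed fusion tick, take the freshest sample each sensor has produced. This is an illustrative minimal example, not SigmaFusion’s actual mechanism; the class and stream names are invented for the sketch, and timestamps are integer milliseconds.

```python
from collections import deque

class SensorBuffer:
    """Holds timestamped samples from one sensor; each sensor runs at its own rate."""
    def __init__(self, name):
        self.name = name
        self.samples = deque()  # (timestamp_ms, data), in arrival order

    def push(self, t_ms, data):
        self.samples.append((t_ms, data))

    def latest_before(self, t_ms):
        """Consume and return the most recent sample at or before t_ms, or None."""
        result = None
        while self.samples and self.samples[0][0] <= t_ms:
            result = self.samples.popleft()
        return result

# Simulated streams: a 10 Hz LiDAR frame rate against a 50 Hz ultrasound rate
lidar = SensorBuffer("lidar")
ultra = SensorBuffer("ultrasound")
for i in range(10):
    lidar.push(i * 100, f"scan{i}")    # every 100 ms
for i in range(50):
    ultra.push(i * 20, f"ping{i}")     # every 20 ms

# Fuse on a fixed 5 Hz tick: each cycle sees the freshest data per sensor
for tick in range(1, 4):
    t = tick * 200
    snapshot = {s.name: s.latest_before(t) for s in (lidar, ultra)}
    print(t, snapshot)
```

Intermediate ultrasound pings are deliberately dropped here; a real fusion stage would instead integrate every measurement, but the buffering-per-sensor structure is the same.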
Fusing of disparate data
To achieve an overview of the mass of data being collected, it is typically fused into a single comprehensive data stream in a process termed multi-sensor fusion. Through this process, incoming sensor data of disparate sampling rates and volumes is harmonised into a model of the environment that can be used by higher levels of software in the system. This is the approach taken by the SigmaFusion solution from Leti, a technology research institute at the French Alternative Energies and Atomic Energy Commission (CEA). It starts by taking into account that the outputs of range sensors, as used on vehicles, carry some uncertainty. To ensure accuracy in the output, and the safety of the choices made based upon that data, Leti characterises the sensors being used. The source data is fed into the SigmaFusion software, which processes it and offers up an occupancy grid to the application layers above. This grid of cells of known dimensions maps the free space and obstacles around the vehicle, with each cell containing the probability of its being occupied by an obstacle.
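SigmaFusion’s internals are proprietary, but the classic textbook way to build such a probabilistic occupancy grid is a Bayesian log-odds update: each range measurement adds evidence of free space along the ray and evidence of occupancy at the return. The sketch below illustrates this on a 1-D grid with invented confidence weights; real systems use 2-D grids around the vehicle and weights derived from the characterised sensor model.

```python
import math

class OccupancyGrid:
    """Minimal 1-D occupancy grid updated with log-odds Bayes rule."""

    def __init__(self, n_cells, cell_size_m):
        self.cell_size = cell_size_m
        # Log-odds 0.0 corresponds to probability 0.5 (unknown)
        self.log_odds = [0.0] * n_cells

    def update(self, range_m, l_occ=0.85, l_free=-0.4):
        """Fuse one range measurement: cells before the return are evidence
        of free space, the cell at the return is evidence of an obstacle.
        l_occ / l_free would come from the characterised sensor confidence."""
        hit = int(range_m / self.cell_size)
        for i in range(min(hit, len(self.log_odds))):
            self.log_odds[i] += l_free
        if hit < len(self.log_odds):
            self.log_odds[hit] += l_occ

    def probabilities(self):
        """Convert log-odds back to occupancy probabilities."""
        return [1.0 - 1.0 / (1.0 + math.exp(l)) for l in self.log_odds]

# Three noisy radar-like returns around 2.0 m reinforce one another
grid = OccupancyGrid(n_cells=8, cell_size_m=0.5)
for r in (2.0, 2.1, 1.9):
    grid.update(r)
probs = grid.probabilities()
# Cells in front of the obstacle trend towards free (< 0.5),
# the cell around 2.0 m trends towards occupied (> 0.5),
# cells beyond the return stay at the unknown prior (0.5)
```

Because the update is additive in log-odds, measurements from several characterised sensors can be fused into the same grid simply by applying each sensor’s update in turn, which is what makes the representation attractive for multi-sensor fusion.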
Within the safety-critical system of the vehicle, this solution can operate stand-alone to provide fail-operational perception of the vehicle’s environment for ADAS functions. Alternatively, it is equally appropriate as a safety companion to an automated driving decision system (Figure 1).
Selection of a safe processing platform
Of course, to fulfil the demands of software within a safety-critical system, a processing platform with an automotive pedigree is also required. Here, the best-in-class Aurix microcontrollers from Infineon, with their advanced signal processing capabilities, lockstep cores, and automotive interfaces, form an ideal basis, whether in a stand-alone ADAS system or in a companion role to an automated driving AI platform. As a result, the Aurix was selected as the basis for a new demonstrator that embeds a range of sensors into the plastic bumper of a vehicle to show how multi-sensor fusion can be implemented, and how the SigmaFusion software in combination with the Aurix presents the results.
The demonstrator draws upon a range of sensing solutions. The 77 GHz radar is based upon the RXS8160, a mmWave MMIC transceiver featuring a programmable waveform generator with fast chirp modulation. This is coupled with the radar processor of the scalable Aurix family, the TC397XA, whose advanced hardware signal processing unit (SPU) delivers high radar processing performance in a single chip, with 4 MB of on-chip SRAM to store the radar image. It connects to the radar sensor via a high-speed LVDS radar interface (RIF) that can operate at up to 3.2 Gbps. Connectivity to other systems within the vehicle is implemented via the TLE9251V CAN-FD transceiver, which includes robust wake-up pattern (WUP) detection, supports worldwide wake-up filter timing, and is approved for use without external ESD protection. The complete radar module is supplied by a single OPTIREG PMIC, the TLF30682.
At the core of the demonstrator is a further member of the scalable Aurix safety microcontroller family, the TC397XX. This contains 4 lockstep and 2 non-lockstep cores operating at up to 300 MHz, and provides the latest connectivity options via gigabit Ethernet, FlexRay, CAN-FD and LIN. To ensure power is optimally supplied to the solution, the board includes an OPTIREG PMIC TLF30682. This device requires a minimal number of support components, resulting in a space-saving power solution with integrated monitoring and supervision functions. The system is complemented by further commercial off-the-shelf (COTS) sensors, such as a LiDAR and a camera, which also feed into the SigmaFusion software (Figure 2).
The demonstrator highlights how SigmaFusion can be used to increase the safety of AI driving algorithms by delivering a free-space assessment of the environment around the vehicle. The solution can equally target more cost-sensitive ADAS implementations, since it is compatible with any kind of range sensor. And with vehicles moving to pure electric energy sources, the solution proves especially energy efficient, drawing less than 3 watts in operation.
As established players and market disruptors jostle for supremacy in the space of self-driving vehicles, it is essential that automotive-qualifiable solutions form the basis of those platforms for the safety of road users and pedestrians alike. With the automotive industry moving ever closer to pure electric drivetrains, these solutions will additionally need to be electrically efficient. Industry proven processing platforms with a pedigree in safe automotive electronics, coupled with intelligent multi-sensor fusion middleware, are key to delivering ADAS features. The combination of SigmaFusion, coupled with the Infineon Aurix and RXS8160 radar solution, demonstrates that certifiably safe automotive-capable multi-sensor fusion solutions are possible.
About the authors:
Thomas Schneid is Senior Director Software, Partnership & Ecosystem Management, Infineon Technologies AG.
Marie-Sophie Masselot is Senior Partnership Manager at the Leti Technology Research Institute.
Notes and references
1. Waymo Mission: https://waymo.com/mission/