June 07, 2019 //By Christoph Hammerschmidt
"Centralisation of sensor data fusion will continue"
Automated driving requires an immense amount of software in the vehicle, above all in the area of sensor data fusion. The software company Baselabs has gained a strong position in this field. eeNews Europe talked with Baselabs co-founder and Director of Customer Relations Eric Richter about the software requirements and the role artificial intelligence will play in cars.

eeNews Europe: Talking about radar. I assume that the radar sensor of a company X provides a differently structured point cloud than that of a company Y. With cameras, the differences are perhaps even more pronounced, since part of the preprocessing is already carried out in the camera. Developers then have to deal with completely different data.

Richter: Exactly. This is an important issue. There are different data levels for each sensor; even the terms are not precisely defined. Many speak of raw data or feature-level data, detection-level data and object-level data - these are the three to four levels usually distinguished, though the exact definitions differ slightly from manufacturer to manufacturer. For us it is important to look closely at which level a sensor delivers. The two highest levels - object level and detection level - have existed the longest; this is where we have made the most progress with our product range. Newer approaches that we are also developing at Baselabs, such as the Dynamic Grid, a new algorithmic procedure, primarily address the lower levels, i.e. feature level and raw data.
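The data levels Richter describes can be sketched as a simple taxonomy. This is an illustrative sketch only: the level names follow the interview's terminology, and the `fusion_entry_point` function and its strategy names are hypothetical, not Baselabs' actual API.

```python
from enum import Enum, auto

class SensorDataLevel(Enum):
    """The three to four abstraction levels mentioned in the interview."""
    RAW = auto()        # unprocessed measurements, e.g. a full lidar point cloud
    FEATURE = auto()    # extracted features such as edges or clusters
    DETECTION = auto()  # individual detections, e.g. radar targets
    OBJECT = auto()     # tracked objects with position, extent and velocity

def fusion_entry_point(level: SensorDataLevel) -> str:
    """Hypothetical dispatch: pick a fusion strategy based on the sensor's output level."""
    if level in (SensorDataLevel.OBJECT, SensorDataLevel.DETECTION):
        # The two highest levels have existed the longest and feed
        # classic object-level tracking pipelines.
        return "object-level tracking"
    # Lower levels (feature, raw) are the domain of grid-based approaches.
    return "grid-based fusion"

print(fusion_entry_point(SensorDataLevel.RAW))     # grid-based fusion
print(fusion_entry_point(SensorDataLevel.OBJECT))  # object-level tracking
```

The point of the sketch is only that the level a sensor delivers determines which fusion algorithms can consume its output.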

eeNews Europe: Dynamic Grid? Please explain.

Richter: This is our term for this group of methods. The background: to calculate the trajectory you want to drive, you have to reliably determine the free space around the vehicle. So far, occupancy grids have mainly been used for this. However, these methods have a decisive disadvantage: they cannot distinguish between static and dynamic objects. At higher SAE automation levels - from Level 3 upwards, i.e. functions such as a motorway pilot - this causes difficulties. This is where the new group of methods we call Dynamic Grid comes in. For each grid cell, it determines not only whether the cell is occupied by another vehicle, but also in which direction the occupying object is moving and at what speed. The method thus distinguishes between dynamic and static objects and can directly process point clouds from lidar sensors or HD radar images.
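The core idea - augmenting each occupancy cell with a velocity estimate so that static and dynamic objects can be told apart - can be sketched in a few lines. This is a hedged illustration, not Baselabs' Dynamic Grid implementation: the field names, the 0.5 probability cut-off and the speed threshold are assumptions chosen for the example.

```python
from dataclasses import dataclass

@dataclass
class GridCell:
    """One cell of a dynamic grid: occupancy plus an estimated velocity."""
    occupancy: float  # probability in [0, 1] that the cell is occupied
    vx: float         # estimated velocity of the occupying mass, m/s (longitudinal)
    vy: float         # estimated velocity of the occupying mass, m/s (lateral)

    def is_occupied(self, p_threshold: float = 0.5) -> bool:
        return self.occupancy > p_threshold

    def is_dynamic(self, speed_threshold: float = 0.5) -> bool:
        """A cell counts as dynamic if it is occupied and its estimated
        speed exceeds the (assumed) threshold in m/s."""
        speed = (self.vx ** 2 + self.vy ** 2) ** 0.5
        return self.is_occupied() and speed > speed_threshold

# A classic occupancy grid sees both cells below as simply "occupied";
# the per-cell velocity is what separates the parked car from the moving one.
parked_car = GridCell(occupancy=0.9, vx=0.0, vy=0.0)
moving_car = GridCell(occupancy=0.9, vx=8.0, vy=0.1)
print(parked_car.is_dynamic(), moving_car.is_dynamic())  # False True
```

This static/dynamic distinction is exactly what a motorway pilot needs: free space behind a slowly moving truck is very different from free space behind a parked one.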
