Automotive Intelligence in the Car

September 30, 2020 // By Mark O’Donnell, Ali Osman Ors, NXP
Speeding the Development and Performance of AI-based Automotive Applications

With the increased adoption of advanced driver assistance systems (ADAS), the automotive industry continues to embrace greater driving automation. Because advancing ADAS requires more 'intelligent' decision making, the field has become a natural fit for developments in neural networks and deep learning. These technologies are now used in ADAS to deliver the early stages of autonomy: Levels 1, 2 and, soon, 3. However, there is still a long way to go before we reach full autonomy, and ADAS remains an evolving area of research for automotive manufacturers. The term ADAS existed long before AI was introduced to the vehicle, and it will ultimately be replaced by Autonomous Driving when AI is fully in control. Between now and then, we can expect many exciting developments in semi-autonomous features.

Figure 1: Autonomous Driving Levels.

As the main objective of ADAS is to increase driver comfort and safety, embedding machine learning into ADAS makes a lot of sense. These systems will not suffer from fatigue or slow reactions; they will be designed to operate at the best of their abilities at all times. Just as ADAS today relieves the driver of certain actions or provides greater visibility of the road and its users, when enabled by machine learning these systems will initially work alongside the driver. As we grow more accepting of these systems, we will become more dependent on them. This change in the driver dynamic will not happen quickly, nor will we see a sudden switch from fully manual to completely autonomous vehicles.

According to Global Market Insights, the automotive sector is using AI in various ways. For example, deep learning is being used to train neural networks so they can react and behave like human drivers. Vision systems will be a big application area for AI; this can already be seen in the way ADAS can now identify road signs. Natural language processing is another area where AI is a good fit, with a precedent already set by the smart speakers in our homes. Machine learning in general will be a sector of its own, likely covering the various systems around the vehicle that are currently monitored using sensors and controlled through ECUs. Introducing machine learning into these systems will support inferencing that will, in turn, lead to more efficient vehicles, lower maintenance costs and longer service lives.

Hardware platforms for more intelligent ADAS development

It is well reported that the automotive industry is extremely cost-sensitive, so any new technology must be introduced with cost in mind. For larger OEMs targeting a high-end customer base, ADAS is more common and is typically implemented, at an architectural level, in a centralized way. This puts the majority of the processing in a single ECU, which has many advantages, such as optimized system performance and reduced design complexity. However, the processing requirements in a centralized architecture can be high, calling for multicore processors with a large silicon area. An alternative approach, adopted by many manufacturers for mid-range models, is a distributed architecture based on more modest processors.

For manufacturers looking to develop a solution based on a centralized architecture, NXP has introduced the BlueBox series of development platforms. While this is intended to help developers working on Level 4/5 self-driving vehicles, it includes the S32V234 automotive vision and sensor fusion processor, which can also be used in a distributed architecture to develop semi-autonomous (Level 1 and 2) features.

The S32V234 is based on a quad Arm Cortex-A53 64-bit processor cluster and integrates an image signal processor (ISP), a 3D graphics processor (GPU) and dual APEX-2 vision accelerators. As it is intended for use in ADAS and autonomous vehicles, it is designed and manufactured to deliver automotive-grade reliability, with the functional safety and security required. As well as being suitable for various ADAS applications, such as object detection and recognition using surround view systems, it is also the perfect platform for developing automotive systems that use machine learning and neural networks.

Software tools for automotive AI system development

As with most things, AI has a hierarchy. In terms of the various technologies involved, at the deepest level are neural networks: the algorithms that implement deep learning. In turn, deep learning can be considered a subset of machine learning, which refers to systems that can make decisions based on inferencing. Above all of this sits AI, a more general term covering all of these technologies.


Deep learning is now preferred over classical machine learning in many application domains, such as vision and speech recognition, where it has been shown to deliver higher accuracy. When applied to a specific domain, deep learning can also be easier to implement, as it does not require hand-crafted features or extensive domain knowledge to improve the quality of results. For example, sensor fusion will use deep learning neural networks whose output will be used to plan routes and predict trajectories. In vision systems, neural networks will be used to classify objects such as pedestrians and other vehicles, as well as road signs. Similarly, deep learning will be used inside the car, from driver monitoring systems to managing the drivetrain.
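
To make the classification step concrete, the sketch below shows a toy convolutional network labelling a single camera frame, written in PyTorch. The class list and layer sizes are illustrative assumptions, not a production ADAS network.

```python
# A minimal sketch of a CNN classifier for camera frames (illustrative only).
import torch
import torch.nn as nn

CLASSES = ["pedestrian", "vehicle", "road_sign", "background"]  # assumed labels

class TinyClassifier(nn.Module):
    def __init__(self, num_classes: int = len(CLASSES)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x).flatten(1))

model = TinyClassifier().eval()
frame = torch.rand(1, 3, 64, 64)  # stand-in for one camera frame
with torch.no_grad():
    print(CLASSES[model(frame).argmax(dim=1).item()])
```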

For the majority of developers, the software flow for developing these systems is entirely new. It involves several steps, starting with training the model using qualified datasets. For object detection, this may comprise a dataset of road signs, for example. Once trained, the model's accuracy needs to be evaluated, a potentially iterative stage depending on the accuracy required. Before the trained model can be deployed, it needs to be optimized for the target hardware. This stage is crucial, because the theoretical performance can only be achieved if the physical resources are used in the most optimal way.
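
As an illustration of the train-then-evaluate part of this flow, the following PyTorch sketch trains a small model and measures its accuracy; the random "road sign" tensors stand in for a qualified, labelled dataset.

```python
# A minimal train-then-evaluate sketch (random data stands in for a real dataset).
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Hypothetical dataset: 256 random 32x32 "road sign" images, 10 classes.
images = torch.rand(256, 3, 32, 32)
labels = torch.randint(0, 10, (256,))
train_loader = DataLoader(TensorDataset(images, labels), batch_size=32)

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(3):                      # repeat until accuracy suffices
    for x, y in train_loader:
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        optimizer.step()

# Evaluation: in practice this result gates another training iteration.
with torch.no_grad():
    accuracy = (model(images).argmax(1) == labels).float().mean().item()
print(f"accuracy: {accuracy:.2%}")
```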

This also introduces another layer of complexity because, once converted, the engineering team must re-evaluate the model to ensure it still delivers the required performance and accuracy. If any performance is lost during conversion, it may be necessary to revisit a previous stage. This iterative process needs to be supported by an automotive-grade software development flow if OEMs are to manage the wider development of ADAS with deep learning capabilities.
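
The re-evaluation step can be sketched as below: run the original and the converted model on the same validation data and compare accuracy. Here PyTorch's dynamic INT8 quantization stands in for whatever conversion the target hardware actually requires.

```python
# Compare accuracy before and after conversion (quantization as a stand-in).
import torch
import torch.nn as nn

val_x = torch.rand(128, 3 * 32 * 32)   # stand-in validation set
val_y = torch.randint(0, 10, (128,))

def accuracy(model: nn.Module) -> float:
    with torch.no_grad():
        return (model(val_x).argmax(1) == val_y).float().mean().item()

float_model = nn.Sequential(nn.Linear(3 * 32 * 32, 64), nn.ReLU(),
                            nn.Linear(64, 10)).eval()
converted = torch.quantization.quantize_dynamic(
    float_model, {nn.Linear}, dtype=torch.qint8)

drop = accuracy(float_model) - accuracy(converted)
print(f"accuracy drop after conversion: {drop:.2%}")
# If the drop is too large, return to an earlier stage of the flow.
```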

Delivering deep learning in ADAS

Closer integration between model development and deployment is needed, which involves tuning the model for a specific hardware platform. Providing this integration will accelerate the evolution of ADAS using AI technologies, and it is here that NXP has focused its efforts with the eIQ Auto toolkit.

 

Figure 2: The eIQ Auto toolkit.

As shown in Figure 2, the toolkit has been developed to address this development cycle. The first step takes a trained model built in an existing framework and provides both conversion and optimization (see below) for the target hardware. The user can then evaluate the performance of the converted model using the feedback provided and fine-tune the model.
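
The eIQ Auto conversion step itself is tool-specific, but the general pattern of lifting a trained model out of its framework can be sketched with a standard ONNX export in PyTorch, used here purely as an illustration.

```python
# Illustrative export of a trained model to an interchange format (ONNX).
import torch
import torch.nn as nn

model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU(), nn.Flatten(),
                      nn.Linear(8 * 30 * 30, 10)).eval()
dummy_input = torch.rand(1, 3, 32, 32)   # fixes the expected input shape
torch.onnx.export(model, dummy_input, "model.onnx",
                  input_names=["image"], output_names=["scores"])
# The exported artifact is then optimized and benchmarked for the target.
```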

Once the converted model is assessed as suitable, the flow moves to the target hardware. At this stage it is integrated with the application code within the software stack, using the enablement provided by the toolkit. The toolkit also integrates further optimization capabilities in the backend. The inference engine that results from this process is automotive grade and compliant with Automotive SPICE (A-SPICE).
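
The integration pattern looks roughly like the sketch below. ONNX Runtime on a host machine stands in for the eIQ Auto inference engine on the S32V2, which has its own, tool-specific API; only the shape of the application-level call is the point here.

```python
# Illustrative integration of an inference engine into application code.
import numpy as np
import onnxruntime as ort

# Load the artifact produced by the export sketch above.
session = ort.InferenceSession("model.onnx",
                               providers=["CPUExecutionProvider"])

def classify(frame: np.ndarray) -> int:
    """Application-level wrapper around the inference engine."""
    scores = session.run(["scores"], {"image": frame})[0]
    return int(scores.argmax())

frame = np.random.rand(1, 3, 32, 32).astype(np.float32)
print(classify(frame))
```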


The process outlined above uses standard industry techniques to optimize the model, including pruning, quantization and compression. Because these processes can alter the behaviour of the model, it is necessary to evaluate it at every stage. A critical part of the flow is partitioning the various parts of the network across the available processing resources. Initially, NXP has developed the eIQ Auto toolkit to support the S32V2 processor, meaning an optimized inference engine can be partitioned across its resources as shown in Figure 3.
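
As one example of these standard techniques, the sketch below applies magnitude pruning using PyTorch's built-in utility; eIQ Auto's own optimization passes are internal to the tool, so this only illustrates the general idea.

```python
# Magnitude (L1) pruning: zero out the smallest 50% of a layer's weights.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

layer = nn.Linear(256, 128)
prune.l1_unstructured(layer, name="weight", amount=0.5)
sparsity = (layer.weight == 0).float().mean().item()
print(f"sparsity: {sparsity:.0%}")

# Make the pruning permanent so compression can exploit the zeros.
prune.remove(layer, "weight")
# Each such step can change model behaviour, hence re-evaluation after each.
```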

 

Figure 3: An example ADAS application using deep learning, as it would be partitioned to maximize the resources of the S32V2 processor.

Vision-based ADAS is a vital feature of semi-autonomous vehicles and its development will be crucial. While other application areas may be able to accommodate deep learning inference engines of any size, the automotive market has much more limited resources. To date, AI has generally been developed and deployed on systems with almost unrestricted access to processing and memory resources. To move AI deeper into the automotive sector, developers need a way of optimizing deep learning models for embedded systems with limited resources.
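
A back-of-envelope calculation shows why this matters: weight storage alone shrinks by 4x when moving from 32-bit floats to 8-bit integers. The parameter count below is an illustrative assumption, not a real ADAS network.

```python
# Rough weight-memory footprint: parameter count times bytes per weight.
params = 3 * 3 * 3 * 32 + 5_000_000   # a small conv layer plus a large FC stage
fp32_mb = params * 4 / 1e6            # 32-bit floats: 4 bytes each
int8_mb = params * 1 / 1e6            # 8-bit quantized: 1 byte each
print(f"FP32: {fp32_mb:.1f} MB, INT8: {int8_mb:.1f} MB (4x smaller)")
```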

The availability of the software flows needed to achieve this is still limited. The eIQ Auto toolkit is representative of a new breed of tools aimed at supporting the use of deep learning in ADAS through the entire design cycle.

About the authors:

Mark O’Donnell is ADAS Marketing Manager, NXP Semiconductors.

Mark O’Donnell is a product marketing professional with more than 20 years of experience and a strong understanding of ADAS. He is responsible for automotive vision processors and eIQ Auto marketing. Mark has a bachelor’s degree in Engineering with Business Management from the University of Strathclyde in Glasgow, UK.

Ali Osman Ors is Director, Automotive AI Strategy and Strategic Partnerships, NXP Semiconductors.

Ali has over 20 years of experience in the semiconductor industry, specializing in video and vision processors and enablement design. He is currently Director of AI Strategy and Strategic Partnerships, working on foresight for NXP’s vision, deep learning and AI solutions for autonomous systems. Prior to joining NXP’s Automotive Microcontrollers and Processors unit, Ali was VP of Engineering at CogniVue Corp., in charge of the R&D teams delivering the hardware and software for vision SoC solutions and the Cognition Processor IP core. Ali holds an engineering degree from Carleton University in Ottawa, Canada.
