Consistent usage of simulations: from the function design for ADAS/AD up to the real control unit hardware
The approach for testing an ADAS/AD function is equivalent to the one taken for other control functions. In either case, the “system under test” must be isolated and embedded in a plausible simulation of the remaining system. Initially, the function can be developed and tested independently of the target hardware. Depending on the development stage, we speak of:
- Model-in-the-Loop (MiL) for function design,
- Software-in-the-Loop (SiL) for testing electronic control unit (ECU) software without real hardware, or
- Hardware-in-the-Loop (HiL) for executing the function as embedded software on an ECU.
Even before the function is tested, simulation supports system design and specification as well as failure mode and effects analyses (FMEA). The simulation tools CANoe and DYNA4 by Vector ensure consistent deployment of simulation throughout the entire development process, including continuous testing during the implementation phase. Figure 1 illustrates this by means of the V-model.
For HiL applications, the simulation provides all digital and analog input and output signals as well as the entire communication on different types of networks. Since all three phases are typically run through during development, it makes sense to reuse as many of the prepared simulation artefacts as possible throughout the entire process. This applies in particular to the tests created at the very beginning of development: ideally, the same test steps can be used consistently from the MiL phase through to HiL tests.
From stimulation to closed-loop simulation – physical models for virtual test drives
Simple functions can be sufficiently tested by mere stimulation with synthetic or recorded data. However, if the function under test controls the driving behavior of the vehicle, the effects of its control inputs must be fed back to close the loop. This requires models that reflect the physical world with sufficient precision. What “sufficiently precise” means depends strongly on the function under test, as this example shows: for testing an automated parking function, a precise model of the vehicle dynamics including the exact tire forces is important despite the low speeds. In contrast, when testing an ACC function for its correct interpretation of fused sensor data, a simplified vehicle dynamics model that reflects the characteristic roll and pitch behavior can be enough.
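The difference between open-loop stimulation and closed-loop simulation can be shown in a few lines. In this minimal Python sketch (a point-mass plant and a proportional controller chosen purely for illustration, with made-up gain and limits), the controller's acceleration request is fed back into the vehicle model at every step, so the ego speed converges on the set speed:

```python
def simulate_acc_closed_loop(set_speed: float = 25.0,
                             v0: float = 15.0,
                             steps: int = 2000,
                             dt: float = 0.01) -> float:
    """Minimal closed loop: each step, the controller output is fed
    back into a point-mass vehicle model, which in turn changes the
    controller's input on the next step."""
    v = v0
    kp = 0.8  # illustrative proportional gain
    for _ in range(steps):
        # Controller: acceleration request, clamped to plausible limits.
        a_req = max(-3.0, min(2.0, kp * (set_speed - v)))
        # Plant: integrate the request back into the ego speed.
        v += a_req * dt
    return v
```

With mere stimulation, `v` would be a prescribed input trace and the controller's output would have no effect on it; closing the loop is what makes the convergence behavior testable.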
This applies similarly to sensor models: if the ACC system expects an object list as input from the environment sensor, object-based sensor models are usually sufficient. If, however, the preceding sensor fusion is to be tested as well, less processed sensor data is required. Such detections can be calculated by more detailed sensor models.
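An object-based sensor model at its simplest is a geometric filter over ground truth: it passes through the objects the sensor could plausibly see and drops the rest. The following Python sketch (field-of-view and range values invented for illustration) shows that idea; a detection-level model would instead synthesize raw returns per object, e.g. radar reflections or point clouds.

```python
import math
from dataclasses import dataclass


@dataclass
class Obj:
    x: float  # m, longitudinal position in the sensor frame
    y: float  # m, lateral position in the sensor frame


def object_list_sensor(ground_truth: list[Obj],
                       max_range: float = 150.0,
                       fov_deg: float = 60.0) -> list[Obj]:
    """Object-based sensor model: return the subset of ground-truth
    objects inside the sensor's range and field of view."""
    half_fov = math.radians(fov_deg) / 2.0
    visible = []
    for o in ground_truth:
        r = math.hypot(o.x, o.y)
        bearing = math.atan2(o.y, o.x)
        if r <= max_range and abs(bearing) <= half_fov:
            visible.append(o)
    return visible
```

The scalability argument from the text maps directly onto this sketch: feeding an ACC function this object list is cheap, while testing the upstream sensor fusion would require replacing the filter with a far more detailed detection model.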
The models used should therefore be scalable in fidelity depending on the specific use case. However, increasing a model's complexity also increases its number of parameters, and the model's performance can only ever be as good as its parametrization. Especially when testing ADAS/AD functions, parameter sets become very large, as not only the vehicle under test but also different lighting and weather conditions, road surface conditions and, of course, other traffic participants need to be represented in complex scenarios. Figure 2 shows an example of such a scenario for a virtual test drive. Structured management of parameter sets and model variants is therefore just as important as the models themselves.
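One lightweight pattern for keeping such parameter sets manageable is to define a base set and derive named variants from it, so each variant records only what differs. The Python sketch below uses standard-library dataclasses for this; the parameter names and values are invented for illustration and do not correspond to DYNA4's actual parameter schema.

```python
from dataclasses import dataclass, replace


@dataclass(frozen=True)
class ScenarioParams:
    """A (deliberately tiny) scenario parameter set."""
    friction: float = 1.0        # road surface friction coefficient
    rain_mm_h: float = 0.0       # precipitation rate
    sun_elev_deg: float = 45.0   # lighting condition
    n_vehicles: int = 5          # other traffic participants


BASE = ScenarioParams()

# Variants are derived from the base set; only the deviations are stated,
# which keeps the differences between scenarios explicit and reviewable.
WET_NIGHT = replace(BASE, friction=0.6, rain_mm_h=8.0, sun_elev_deg=-10.0)
DENSE_TRAFFIC = replace(BASE, n_vehicles=40)
```

Freezing the dataclass makes each variant immutable, so a scenario cannot drift from its recorded parametrization during a test run.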
Efficient testing from MiL to HiL with CANoe and DYNA4
Dedicated tooling exists for test execution from MiL to HiL, and likewise for the physical vehicle and environment simulation. Increasingly, both aspects need to be addressed together and throughout the entire development process. For this reason, the domain-specific tools are coupled. CANoe, for example, provides comprehensive features for integrating ECU software in all development phases:
- Integration of Simulink models (MiL)
- Integration of entire virtual ECUs with vVIRTUALtarget (SiL)
- Integration of real ECU hardware (HiL)
The remaining bus simulation realistically reflects the network communication and enables the integration of the system under test. Tests for ECUs can be designed and automated easily with vTESTstudio and are reusable throughout the entire development process. In CANoe, synthetic signals or recorded data can be used to stimulate functions. Seamless integration of physical models from the vehicle and environment simulator DYNA4 facilitates closed-loop applications. Figure 3 shows a closed-loop test environment with CANoe and DYNA4.
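For the open-loop stimulation case, a synthetic signal is simply a function of time sampled at the bus or measurement rate, interchangeable with a replayed recording. The short Python sketch below generates such a trace (ramp plus sinusoidal disturbance, with made-up numbers, unrelated to any CANoe signal generator API):

```python
import math


def synthetic_speed_signal(t: float) -> float:
    """Open-loop stimulus: an ego-speed trace that ramps to 25 m/s
    and carries a 0.2 Hz sinusoidal disturbance."""
    return min(25.0, 2.0 * t) + 0.5 * math.sin(2.0 * math.pi * 0.2 * t)


# Sample at 100 Hz for 20 s, exactly as a recorded trace would be replayed.
trace = [synthetic_speed_signal(i * 0.01) for i in range(2000)]
```

Because the function under test consumes the trace sample by sample, the same test harness can switch between a generator like this and logged measurement data without modification.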
DYNA4 provides physical models of the vehicle and the environment, but also enables the authoring and management of models, scenarios and their parameters. For evaluating the simulation results, a wide range of signal analysis options and a 3D visualization are included. Once models and scenarios are prepared in DYNA4, they can be executed directly in CANoe. This allows experienced CANoe users to keep their accustomed workflows and eases entry into closed-loop simulations.
The efficient coupling of the two highly specialized tools CANoe and DYNA4 results in an ideal combination for testing ADAS/AD functions: with comprehensive options for closed-loop system tests and consistent workflows from early development stages through to real hardware.
About the author:
Dr. Jakob Kaths works for Vector Informatik as Product Owner for the vehicle and environment simulation DYNA4.