Virtualization on Microcontrollers
What was previously possible on application processors in infotainment and connectivity now also comes to the real-time processors and microcontrollers required in other vehicle domains. A new hypervisor platform opens up many opportunities for the next generation of increasingly automated cars.
NEW VEHICLE ARCHITECTURES
Vehicles need continuously increasing processing power to run software-defined functions providing infotainment and connected services, assisting the driver, making the vehicle safer and managing energy sources. The era in which each new vehicle function required the integration of a new ECU (Electronic Control Unit) is long over: vehicle manufacturers are designing new architectures in which many software functions are integrated on fewer, more centralized, powerful devices, often called “domain controllers”.
The various functions in the car place very different requirements on the underlying software and hardware platforms. Software applications not only require generic processing power but also need specialized accelerators to generate high-resolution graphics, process camera images or radar data, and run artificial intelligence algorithms such as deep learning. In addition, the vehicle functions have different requirements on functional safety (from ISO 26262 “QM” to “ASIL-D”), boot times and real-time behavior.
As a result, the vehicle electronics architecture will be built from a diverse set of processor cores and hardware accelerators. Application processors, such as cores based on the ARM Cortex-A family or the Intel x86 architecture, are designed to run large operating systems and frameworks (including Linux, Android or adaptive AUTOSAR). Microcontrollers and real-time processors, based on the ARM Cortex-M or Cortex-R family, the Renesas RH850, the Infineon TriCore and others, are designed to run real-time applications, handle high interrupt loads, start extremely quickly and achieve very high reliability, supporting ASIL levels that are difficult to achieve with large application processors. They will typically run classic AUTOSAR-based systems or specialized real-time operating systems.
Many vehicle functions will distribute their constituent software modules across several different kinds of processors and accelerators. An ADAS algorithm may do image processing on an application processor, supported by hardware accelerators, but use a companion microcontroller to supervise the execution and ensure that safety goals are achieved. In some cases, the companion microcontroller is a separate hardware device; in others, it is integrated on the same SoC (System-on-Chip).
The hardware industry offers systems-on-chip combining processing cores and accelerators, targeting specific domain controllers. Many SoCs designed for infotainment or connectivity combine multi-core application processors (often based on ARM Cortex-A) with powerful GPUs (Graphics Processing Units) and a microcontroller (e.g. based on ARM Cortex-M or Cortex-R) to run safety functions or classic AUTOSAR. Similarly, new SoCs targeting domain controllers running many driver assistance systems integrate application processors, real-time processors, microcontrollers and specialized accelerators.
VIRTUALIZATION ON APPLICATION PROCESSORS
Embedded virtualization is a technology that makes it possible to divide the resources of a processor into safely separated “virtual machines” (VMs). Each VM can run its own operating system (called the “guest operating system”), framework and applications. The “hypervisor” is the piece of software that manages the isolation (“freedom from interference”) and controlled communication between the VMs. Application processors using e.g. the ARMv8-A architecture have built-in extensions to ensure that a hypervisor can run very efficiently and largely transparently to the guest operating system: the processor architecture has an additional execution level for the hypervisor, “two-stage translation” in the MMU (Memory Management Unit) and additional facilities in the interrupt controller.
The Intel x86 architecture has similar extensions. In addition, many SoC vendors have selected GPUs or added system-level components that facilitate virtualization of the on-chip devices.
This technology is already in production today. One prominent example is the so-called “Cockpit Controller” which is a domain controller driving many displays in the car and unifying infotainment functionality with a digital instrument cluster. In this case, the hypervisor makes it possible to run different software frameworks (e.g. Android for the infotainment, Linux for the instrument cluster, and a separate OS for safety-critical functions) on one SoC. The Cockpit Controller provides a more integrated user experience and is cheaper and more flexible compared to a multi-ECU approach.
VIRTUALIZATION ON MICROCONTROLLERS?
The need for virtualization on application processors, in domains such as infotainment, is due to the confluence of (1) the requirement to integrate applications with very different requirements modularly on a single processor with (2) a new generation of processors that have the computing power and hardware extensions to run these applications virtualized on a single processor using a hypervisor.
Now the same is happening in other vehicle domains relying more strongly on microcontrollers and real-time processors.
Domain controllers running on a microcontroller or real-time processor need to integrate an ever-increasing amount of software. This software is often developed according to different functional safety levels or sourced from different suppliers, so that freedom from interference must be ensured. In addition, as the amount of software increases, the modularity has to extend from the development process to software updates after the device has been produced: it must be possible to update one software function without the risk of affecting others or the need to requalify the entire device.
One concrete example can be found in the body domain. Such a domain controller will run, on a single microcontroller, functions that are safety-critical (such as power management of the entire body domain), security-critical (such as unlocking the car) and functions that are neither (such as interior lighting). Ideally, these functions can be developed independently and integrated easily. It should be possible to do a software update of uncritical functions (such as the interior lighting) without affecting safety-relevant functions (such as managing power).
To a certain extent, classic AUTOSAR already provides such separation, even supporting several ASIL levels, without the need for a hypervisor. AUTOSAR provides separation at the level of the operating system, and the applications consist of individual software components. However, in complex software systems, the configuration of AUTOSAR becomes extremely complex, as the behavior of the operating system and the services in the basic software needs to be defined centrally, which breaks modularity. AUTOSAR also requires all applications to follow the AUTOSAR standard, and even the same version of it. Finally, the result of the AUTOSAR development process is a monolithic system that does not allow for modular software updates.
The hypervisor adds an additional level of decoupling, supporting a critical first level of separation in development, configuration, integration and software update. Within one virtual machine, an AUTOSAR-based system (providing a second level of separation) will be used in many cases. Several virtual machines can run different systems with different AUTOSAR implementations or even non-AUTOSAR-compliant software.
In addition to these new requirements, which cannot be addressed by existing technologies such as the classic AUTOSAR standards, new generations of microcontrollers and real-time processors have built-in extensions that make running a hypervisor very efficient. The hardware extensions that have been available in application processors for many years are now coming to microcontrollers and to the SoCs that integrate them. One good example is the ARMv8-R architecture, which has added extensions for virtualization to the “R” (Real-Time) family, such as a second MPU (Memory Protection Unit) controlled by the hypervisor. This architecture is used in the ARM® Cortex®-R52 core, which has been adopted by new controllers such as the NXP S32S.
OpenSynergy has been developing a variant of its hypervisor for microcontrollers for several years. This product, called COQOS Micro SDK, is the first hypervisor to take advantage of the virtualization extensions in the ARMv8-R architecture and will support the next generation of microcontrollers built on that architecture.
How does virtualization work on MCUs?
The central component of the COQOS Micro SDK is the hypervisor. The key goal of the hypervisor is to ensure freedom from interference (as specified by ISO 26262) up to the highest level ASIL-D between virtual machines. This requires temporal and spatial separation of virtual machines.
The hypervisor ensures spatial separation between virtual machines by using a dedicated memory protection unit (MPU). The hypervisor is responsible for allocating exclusive memory regions to each virtual machine and for enabling virtual machines to access peripheral devices. When the hardware includes two MPUs per core, a real-time operating system running inside the virtual machine may use the first-stage MPU for protection inside the virtual machine. To ensure complete separation, the memory space must also be protected from interference by bus masters other than the processor (non-core bus masters), such as direct memory access (DMA) controllers. For this, most SoC manufacturers provide their own custom methods of limiting the accessible memory regions.
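The second-stage access check can be illustrated with a small sketch. This is a conceptual model only, not OpenSynergy's implementation: the type names, the fixed region count and the software check are assumptions made for the example. A real hypervisor programs the region table into the second-stage MPU hardware on every VM switch, and an access outside the configured regions traps to the hypervisor rather than being checked in software.

```c
#include <stdint.h>
#include <stdbool.h>

/* Illustrative second-stage MPU configuration: each VM owns a small,
 * statically configured set of memory regions (names are made up). */
#define MAX_REGIONS 4

typedef struct {
    uint32_t base;      /* start address of the region */
    uint32_t size;      /* region length in bytes */
    bool     writable;  /* read-only regions reject writes */
} mpu_region_t;

typedef struct {
    mpu_region_t regions[MAX_REGIONS];
    int          region_count;
} vm_mem_config_t;

/* Model of the check the second-stage MPU performs in hardware:
 * an access is allowed only if it falls entirely inside one of the
 * VM's regions with sufficient permissions. */
bool vm_access_allowed(const vm_mem_config_t *vm,
                       uint32_t addr, uint32_t len, bool write)
{
    for (int i = 0; i < vm->region_count; i++) {
        const mpu_region_t *r = &vm->regions[i];
        if (addr >= r->base &&
            addr + len <= r->base + r->size &&
            (!write || r->writable)) {
            return true;
        }
    }
    return false; /* outside all regions: the access would trap */
}
```

The same check, applied to non-core bus masters such as DMA controllers, is what the SoC-specific protection mechanisms mentioned above provide.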
If multiple virtual machines share a physical core, the hypervisor uses real-time scheduling policies to switch between virtual machines. Each virtual machine’s view of CPU time on a physical core is provided by a virtual CPU (vCPU). The vCPU is used by the real-time operating system running in the virtual machine to schedule tasks. The result is a two-level scheduling mechanism: the hypervisor ensures that each virtual machine gets the configured amount of CPU time, while the RTOS scheduler assigns the provided CPU time to tasks based on their priorities.
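The two levels can be sketched as follows. This is a minimal model under assumed policies (a static time-partition table at the hypervisor level and strict priority scheduling inside the guest); all names and constants are illustrative, and a real hypervisor offers configurable scheduling policies rather than this fixed scheme.

```c
#include <stdint.h>

/* Illustrative two-level scheduler model (all names are made up). */
#define NUM_VMS   2
#define NUM_TASKS 3

typedef struct {
    int priority;  /* higher value = higher priority */
    int ready;     /* 1 if the task is ready to run */
} task_t;

typedef struct {
    task_t   tasks[NUM_TASKS];
    uint32_t slice_ticks;  /* CPU budget configured by the integrator */
} vm_t;

/* Level 1: the hypervisor cycles through a static partition table so
 * each VM gets exactly its configured share of CPU time. */
int hv_pick_vm(const vm_t *vms, uint32_t tick)
{
    uint32_t period = 0;
    for (int i = 0; i < NUM_VMS; i++) period += vms[i].slice_ticks;
    uint32_t t = tick % period;
    for (int i = 0; i < NUM_VMS; i++) {
        if (t < vms[i].slice_ticks) return i;
        t -= vms[i].slice_ticks;
    }
    return 0; /* not reached: t is always inside the period */
}

/* Level 2: the guest RTOS assigns the granted time to its own tasks
 * strictly by priority. Returns the task index, or -1 if idle. */
int rtos_pick_task(const vm_t *vm)
{
    int best = -1;
    for (int i = 0; i < NUM_TASKS; i++) {
        if (vm->tasks[i].ready &&
            (best < 0 || vm->tasks[i].priority > vm->tasks[best].priority))
            best = i;
    }
    return best;
}
```

The key property the sketch shows is that the guest scheduler never sees more CPU time than level 1 grants it, which is what makes the temporal budget of each virtual machine independent of the others.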
Another important aspect for temporal separation is the management of interrupts. In most cases, the hypervisor assigns interrupts to specific virtual machines that use the corresponding devices. The hypervisor may also have to handle some interrupts at first and then notify the virtual machines, for example if multiple virtual machines share a physical core. The hypervisor may also have to take special care when virtual machines use hardware semaphores, in order to avoid conflicts between virtual machines.
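The routing decision described above can be modeled with a small table. This sketch is an assumption for illustration (names, constants and the bitmask encoding are invented): each hardware interrupt is either owned by one virtual machine, in which case it can be injected directly (as the virtualization-aware GIC supports in hardware), or it is handled by the hypervisor first.

```c
#include <stdint.h>

/* Illustrative interrupt routing table (all names are made up). */
#define NUM_IRQS 8
#define MAX_VMS  4
#define HV_OWNED (-1)  /* interrupt handled by the hypervisor itself */

typedef struct {
    int      owner_vm[NUM_IRQS];       /* owning VM id, or HV_OWNED */
    uint32_t pending_for_vm[MAX_VMS];  /* virtual IRQs pending per VM (bitmask) */
} irq_router_t;

/* On a physical interrupt: either mark a virtual interrupt pending
 * for the owning VM, or let the hypervisor handle the device itself
 * (e.g. a timer shared between VMs) and notify the guests later. */
void route_irq(irq_router_t *rt, int irq)
{
    int owner = rt->owner_vm[irq];
    if (owner == HV_OWNED) {
        /* hypervisor-owned: serviced here, virtual interrupts may be
         * raised for the VMs it serves afterwards */
        return;
    }
    rt->pending_for_vm[owner] |= (1u << irq);
}
```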
The ARMv8-R architecture supports an additional privilege level for the hypervisor as well as virtualization of core timers, which makes the implementation of two-level schedulers easier. Moreover, the ARM Generic Interrupt Controller (GIC) also supports virtualization, allowing the hypervisor to route interrupts directly to virtual machines and to virtualize interrupts. The ARM virtualization extensions also simplify context switching between virtual machines.
In addition, the hypervisor must provide means for efficient and safe communication between virtual machines. The hypervisor may encapsulate the complete communication mechanisms, which is conceptually similar to the Inter-OS-Application Communicator (IOC) from AUTOSAR. Alternatively, the hypervisor may provide only the basic mechanisms needed to set up a communication channel to virtual machines: shared memory and, optionally, a notification mechanism between the virtual machines. In this case, the virtual machines execute the appropriate communication mechanisms.
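The second approach (shared memory plus an optional notification) can be sketched as a single-producer/single-consumer ring buffer placed in a region that the hypervisor grants to both virtual machines. This is a minimal illustration under assumed names and layout, not the COQOS Micro SDK mechanism; a real channel would add a notification (e.g. a virtual interrupt) so the consumer does not have to poll, and memory barriers appropriate to the target architecture.

```c
#include <stdint.h>
#include <string.h>
#include <stdbool.h>

/* Illustrative inter-VM channel in shared memory (names are made up).
 * head is written only by the producer VM, tail only by the consumer,
 * so no lock is needed for one producer and one consumer. */
#define RING_SLOTS 4
#define MSG_SIZE   16

typedef struct {
    volatile uint32_t head;  /* producer-owned write index */
    volatile uint32_t tail;  /* consumer-owned read index */
    uint8_t  slots[RING_SLOTS][MSG_SIZE];
} shm_channel_t;

bool channel_send(shm_channel_t *ch, const void *msg, uint32_t len)
{
    if (len > MSG_SIZE) return false;
    uint32_t head = ch->head;
    if (head - ch->tail >= RING_SLOTS) return false;  /* ring is full */
    memcpy(ch->slots[head % RING_SLOTS], msg, len);
    ch->head = head + 1;  /* publish only after the copy is complete */
    return true;
}

bool channel_recv(shm_channel_t *ch, void *msg, uint32_t len)
{
    uint32_t tail = ch->tail;
    if (tail == ch->head) return false;               /* ring is empty */
    memcpy(msg, ch->slots[tail % RING_SLOTS],
           len > MSG_SIZE ? MSG_SIZE : len);
    ch->tail = tail + 1;                              /* release the slot */
    return true;
}
```

Because each index is written by exactly one side, the hypervisor only has to guarantee that both virtual machines see the same physical memory; the protocol itself needs no hypervisor involvement on the data path.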
Virtualization is also a step towards more modular software updates. Unlike AUTOSAR OS-Applications, virtual machines can be built independently, and the respective binary code can be updated independently on the target.
What is the advantage over non-hypervisor methods?
The use of virtualization technology brings numerous advantages for the integration of software systems in the vehicle:
- Virtualization makes it simpler to provide freedom from interference by enforcing temporal and spatial separation.
- Virtualization allows independently developed software partitions to run on the same ECU. The software partitions may use different software stacks.
- Virtualization allows consolidation of software from multiple legacy ECUs into a newer, more powerful ECU.
- New functions and the introduction of multi- and many-core systems increase software complexity, making real-time behavior harder to analyze. Because the hypervisor enforces strict timing protection, temporal interference between software components in different virtual machines is avoided, making it easier to reason about the system as a set of partitions at the function level.
- Virtualization allows for new workflows in software development where different suppliers can develop software for different virtual machines in parallel, thus allowing OEMs a more flexible approach as well as reducing hardware costs.
- Having independent and modular software updates can lead to a significant reduction in the effort needed to re-qualify software partitions, especially when the changes are small.
Embedded virtualization, already in production on application processors in vehicle domains such as connectivity and infotainment, is now coming to microcontrollers and real-time processors. This technology will enable the integration of more complex software functions on domain controllers that cannot rely on application processors alone. Upcoming generations of microcontrollers, such as the ones based on the ARMv8-R architecture, have built-in hardware extensions to make virtualization easier and more efficient. The software technology will be available as these new processors hit the market.
About the author:
Dr. Stefaan Sonck Thiebaut is responsible for the overall product development and technical direction of OpenSynergy as CEO. A co-founder of the company, he is a mechanical engineer who graduated from Stanford University (USA) and brings more than 20 years of software development experience to the company.