Static code analysis in practice: The early bird catches the bug
When it comes to functional safety in vehicles, there is no getting around ISO standard 26262. The majority of a vehicle’s components must be certified accordingly, including any software components. For the latter, ISO 26262 recommends the use of a coding standard: a summary of key rules that forms the basis for a high level of source code quality. The most established coding standard in the automotive sector is the MISRA standard. ISO certification is extremely difficult to achieve without strict application of this standard, yet given the complexity of today’s software, applying it consistently is a tough undertaking without an automated tool.
This issue in particular has brought static code analysis into ever sharper focus in the automotive industry in recent years. Appropriate tools help developers comply with the relevant standards by checking the source code against the applicable rules during the coding process and immediately displaying deviations. Coding errors are identified at the same time. Because the analysis is performed on the source code, i.e. without an executable program having to be created first, errors can be detected and fixed at an early stage, well before the actual test phase, which in turn saves time and money spent on quality assurance.
Taking the earliest opportunity
The best time to introduce static code analysis, then, is as early as possible in the process. Ideally, a suitable tool should be deployed before the actual coding begins. In practice, however, software development projects in the automotive sector rarely begin as a blank slate. Instead, development builds on existing code, for example from open-source projects, third-party developers or previous in-house projects. In such cases, the earliest possible opportunity should be taken to introduce the tool retroactively, and it should then be used continuously throughout the development process. This ensures that at least all future code adheres to the compliance requirements of the industry right from the start.
In order to prepare for deployment of a static analyzer, consideration should be given to any integration needs. For example, which IDE is used by the developers? How will the tool work with the version control system? How will it integrate with existing build tools? The best commercial tools will allow for flexible deployment scenarios and integrate smoothly with other popular development tools.
Analysis step 1: At the developer’s desktop
The static analyzer is installed directly on each developer’s system. The solution works alongside the creation of the code, checking it in real time for compliance with the relevant coding standard (figure 1). If a particular section does not comply with the rules currently in force, the developer is notified so that the issue can be corrected straight away. Because developers receive immediate feedback on how code has to be written to be compatible with, for example, MISRA, they are trained over the medium and long term to write compliant code in the first place, increasing the efficiency of the entire development process over time.
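To make this concrete, here is a minimal sketch of the kind of rule violation a desktop analyzer flags as the code is typed. The example uses MISRA C:2012 Rule 15.6 (the body of a selection statement must be a compound statement); the function names are illustrative only.

```c
#include <stdint.h>

/* Non-compliant with MISRA C:2012 Rule 15.6: the body of the if is not
 * a brace-enclosed compound statement, so a later edit could silently
 * fall outside the conditional. The analyzer flags this line. */
static int32_t clamp_noncompliant(int32_t v, int32_t max)
{
    if (v > max)
        v = max;            /* flagged: body is not brace-enclosed */
    return v;
}

/* Compliant rewrite: every control-flow body is a braced block. */
static int32_t clamp_compliant(int32_t v, int32_t max)
{
    if (v > max)
    {
        v = max;
    }
    return v;
}
```

Both versions behave identically today; the rule exists to prevent the maintenance error the first form invites.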
At the same time, the analysis tool checks the current code for genuine defects. This increases the quality of the source code and thus saves time and money: the longer a bug survives in the code, the more expensive it is to trace back through the process stages already completed and fix – not to mention that it could become a security vulnerability if it reaches the end product. During the code analysis, the critical points are pinpointed at the level of individual code lines, so that any errors they contain can be corrected immediately.
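A hedged sketch of such a line-level finding, with illustrative function names: the first function divides by a parameter that may be zero, which a static analyzer reports at the exact line without ever running the program; the second handles the zero case explicitly.

```c
#include <stddef.h>
#include <stdint.h>

/* Defect reported at the precise line: when 'count' is zero, the
 * division below executes with a zero divisor. */
static int32_t average_buggy(const int32_t *vals, int32_t count)
{
    int32_t sum = 0;
    for (int32_t i = 0; i < count; i++)
    {
        sum += vals[i];
    }
    return sum / count;     /* flagged: possible division by zero */
}

/* Corrected version: the zero case is handled explicitly. */
static int32_t average_fixed(const int32_t *vals, int32_t count)
{
    int32_t sum = 0;
    for (int32_t i = 0; i < count; i++)
    {
        sum += vals[i];
    }
    return (count > 0) ? (sum / count) : 0;
}
```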
Analysis step 2: At the CI build server
A further analysis stage runs on the build server as part of the continuous integration process. This typically happens during the overnight build and covers the full project, including all code checked in during the day – so in addition to the individual developers’ contributions, the project as a whole is checked. The core of the server deployment is a deep data flow analysis. This simulates every possible execution path without actually creating an executable program. Instead, it builds a behavioural model of the software, which is used to track variables with the values they will be assigned at run time. The data flow analysis also identifies redundant code, or “unreachable” lines that will never be executed. Such sections may point to design errors, or, especially in embedded systems where hardware resources are strictly limited, they may unnecessarily occupy valuable storage space.
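A small sketch of what this value tracking finds, with illustrative names: by modelling the values the local variable can hold, the analysis proves the second check can never be true, so the early return is dead code.

```c
#include <stdint.h>

#define SENSOR_MAX 100U

static uint32_t scale_to_byte(uint32_t raw)
{
    /* Data-flow analysis tracks that after this line, v <= SENSOR_MAX. */
    uint32_t v = (raw > SENSOR_MAX) ? SENSOR_MAX : raw;

    if (v > SENSOR_MAX)     /* provably always false */
    {
        return 0U;          /* unreachable -- flagged, wastes flash space */
    }
    return (v * 255U) / SENSOR_MAX;
}
```

Whether the dead branch hides a design error (was the clamp meant to be elsewhere?) or is simply leftover code, the finding is worth a review.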
Any errors or conflicts identified during the server analysis are flagged for review. Ideally, the analysis tool should be integrated into an issue management system. This allows issues to be generated automatically from the results and assigned to the respective developer to fix the next day. The analysis may also surface problems that are already known to the team or that have no relevance for the project, for example because a specific MISRA rule is not applied in the project at all. Even though these are irrelevant for troubleshooting, a sophisticated analysis tool can ensure that they are formally documented. If an audit is carried out, the team can then easily prove that the issue is not an error that has been overlooked, but a known deviation with a considered justification that presents no risk to the correct functioning of the vehicle.
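One common way to keep such justifications auditable is a deviation comment next to the affected code. The marker and record ID below are purely hypothetical – each tool and team defines its own syntax – but the principle matches the text: the rationale lives beside the code and can be exported as audit evidence.

```c
#include <stdint.h>

static uint32_t pack_status(uint8_t flags, uint8_t mode)
{
    /* DEVIATION DEV-042 (hypothetical marker and ID): the composite
     * shift-and-or expression is intentional and was reviewed; both
     * operands are widened to uint32_t first, so no overflow occurs. */
    return ((uint32_t)flags << 8) | (uint32_t)mode;
}
```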
Bypassing known pitfalls
To achieve the greatest possible effectiveness in practice, a number of aspects deserve attention during introduction and day-to-day use. The efficiency of the analysis increases with the degree to which the tool is customised to the particular work environment and to the specific goals and characteristics of the project.
As a rule, introducing such a tool therefore also involves individual fine-tuning. Without it, the majority of genuine defects will still be detected, but there is a risk that the solution produces a high number of false positives: code sections erroneously flagged as errors that, on closer inspection, turn out not to be legitimate warnings. Each false positive takes time and effort to investigate, which leads to frustration and reduced confidence in the tool. As a result, developers may become less willing to use it at all – which inevitably means more defects and security issues finding their way into the end product.
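A typical false-positive pattern, sketched with illustrative names: viewed in isolation, the first function appears to risk a NULL dereference, so an untuned analyzer may warn – even though, in this hypothetical project, every caller passes a valid pointer. Tuning the tool to the project (or adding a defensive check) removes the noise.

```c
#include <stdint.h>

/* Viewed without whole-program context, an untuned analyzer may warn
 * here: "possible NULL pointer dereference". */
static int32_t get_value(const int32_t *p)
{
    return *p;
}

static int32_t read_sensor(void)
{
    static int32_t latest = 42;
    return get_value(&latest);   /* pointer is never NULL on this path */
}
```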
Another potential risk lies in introducing the code analysis tool without first training the team properly. Without training, it can be difficult to interpret the analysis results accurately and to respond correctly to the notifications received, since both require a basic understanding of how the solution works under the surface. Awareness of the limitations of static code analysis should also be raised. It cannot, for example, check a piece of code against the developer’s actual intention, i.e. verify that a function does what the developer meant it to do. If a function is expected to compute an area and the developer adds the length and width instead of multiplying them, static code analysis will not report an error unless the variables used cause a problem at the purely technical level, such as an overflow (figure 2). To get the best from code analysis in practice, automotive specialists should therefore not underestimate the effort required to train their developers.
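The area example made concrete, with illustrative function names: the first function is type-correct and violates no coding rule, so static analysis stays silent even though it computes the wrong thing – only a functional test exposes the mistake.

```c
#include <stdint.h>

/* The developer meant length * width but typed '+'. The code compiles
 * and analyzes cleanly, yet it is functionally wrong. */
static uint32_t area_as_written(uint32_t length, uint32_t width)
{
    return length + width;     /* no rule violated, no warning raised */
}

/* What the developer intended. */
static uint32_t area_as_intended(uint32_t length, uint32_t width)
{
    return length * width;
}
```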
Best possible security for the digitised world
Once the right prerequisites have been put in place, the path is clear for the successful use of static code analysis in practice. The concrete benefits can be measured in figures and directly monitored throughout the project. Quality metrics can be measured and associated with snapshots of the code to build up a picture of progress over time.
This makes it possible to report on relevant indicators for the project managers and company management. For example, it is possible to generate reports about changes in error quantities over time or to show by how much the code complexity has decreased since the solution was introduced. And everyone ultimately benefits from the improved code quality: developers are supported and relieved of extra burden in their work, automotive manufacturers can meet compliance requirements and bring their products to market faster, and, above all, consumers acquire and can use vehicles with the highest possible level of safety – even, and especially, in the era of increasingly connected systems.
About the author:
Richard Bellairs has more than twenty years of experience across a wide range of industries. He held electronics and software engineering positions in the manufacturing, defense, and test and measurement industries in the nineties and early noughties before moving to product management and product marketing. He now champions Perforce’s code quality management solution. Richard holds a bachelor’s degree in electronic engineering from the University of Sheffield and a professional diploma in marketing from the Chartered Institute of Marketing (CIM).