The Rise of Coding Standards - Keeping Software Safe
June 20, 2019

Rod Cope
Perforce

Alongside the industry's general emphasis on making software development safer, the growing use of more complex programming languages, notably C++, has added to the challenge. While C++ gives developers far more scope for creativity and innovation, its flexibility also makes it easier to inadvertently introduce coding errors, such as memory leaks, that can lead to software vulnerabilities.

This is not to denigrate those developers' skills (even the most experienced or diligent of them can make mistakes), but it is a risk that needs to be addressed. The cadence of software development, and our increased dependence on it to drive mission- or safety-critical applications, means that securing code is a priority. While a bug in a video game is annoying, a car or a heart monitor that fails or is hacked could have catastrophic consequences. Of course, software testing tools are designed to unearth many issues, but it is not possible to test every path of execution.

Coding Standards to the Fore

Among multiple efforts and initiatives to secure code, the use of coding standards is on the rise. These include CERT C++ and MISRA C++, already widely used in a variety of compliance-driven markets. In the automotive sector, use of AUTOSAR has grown, and it will be merged with the MISRA C++ coding guidelines. Coding standards are relevant to any software where compliance is key, and in today's increasingly connected world, with the spotlight on IoT and regulation touching more industries, that accounts for a big slice of software development.

The idea behind coding standards is elegantly simple: they are "rules" with which software engineering teams aim to comply, the idea being that compliant teams can be confident of code safety. Coding standards are also used to check regulatory compliance (for instance, in automotive design, both AUTOSAR and MISRA support ISO 26262 compliance).

Probably the best way to illustrate how coding standards work in practice is with an example. Let's take uncontrolled format strings, which can allow an attacker to supply malicious input that writes to an arbitrary memory location or crashes the program. The CERT C/C++ coding standard addresses this, in theory, with a rule that says "exclude user input from format strings."
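
As a minimal sketch of that rule (the function and variable names are invented for illustration, not taken from the standard), the violation and the fix look like this:

#include <cstdio>
#include <string>

void greet(const std::string &userName) {
    // Violation: user input is used as the format string. Input such as
    // "%x %x %n" can read stack memory or write to an arbitrary location.
    std::printf(userName.c_str());

    // Compliant: user input is passed only as data, never as the format.
    std::printf("%s\n", userName.c_str());
}

A checker for this rule typically flags any call where the format argument is not a constant string literal.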

Best Practice

It is a simple idea, but we all know that the last thing most developers want is yet another piece of housework getting in the way of creating beautiful code and meeting deadlines. In common with other aspects of the "shift left" and continuous testing movements, it is vital to automate adherence to coding standards, not just to minimize additional developer workload, but also to reduce the risk of manual error.

Static code analysis tools, which have long been at software development teams' disposal to continually monitor code and maintain consistent quality, are the de facto method of automating the implementation of coding standards. They work by continually inspecting the code for deviations from the rules and can be used both before and after code is checked in. There are some important steps to consider when implementing a static code analysis tool successfully:

Timing
Static code analysis should be introduced as early as possible: the longer a bug is allowed to exist, the harder it becomes to trace and the more expensive it is to fix. As well as inspecting new code, it is important to retrospectively review code that has been supplied by third parties, taken from open source, or inherited from previous projects.

Location
Consider where the static code analysis is to be deployed: inside the IDE, during the build process, or both? Running in the IDE is the ultimate in "shift left" implementation. Running static analysis during the build process will detect integration issues that only become apparent when combining code written by multiple team members. As part of good continuous testing practice, consider running the static code analysis tool across both environments.
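
As a hypothetical illustration of such an integration issue (the file and function names are invented), one developer's helper may legitimately return a null pointer while another developer's code assumes it never does. Each file looks fine in isolation; analysis of the combined build reveals a possible null dereference:

// lookup.cpp (developer A)
#include <cstddef>
#include <map>
#include <string>

const std::string *findSetting(const std::map<std::string, std::string> &cfg,
                               const std::string &key) {
    auto it = cfg.find(key);
    return it == cfg.end() ? nullptr : &it->second;   // may return nullptr
}

// report.cpp (developer B)
std::size_t settingLength(const std::map<std::string, std::string> &cfg) {
    // Assumes the "timeout" key always exists, so the returned pointer is
    // dereferenced without a check. Analysis across translation units
    // during the build can flag the possible null dereference.
    return findSetting(cfg, "timeout")->size();
}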

Scalability
Not all tools on the market can deal with today's massive enterprise projects, or find the issues in very large codebases that can arise when adding new and apparently independent features. Seamless integration with existing toolchains is also going to be vital.

Start smart
Don't look for every possible issue when first adopting a static analysis tool; the team will be overwhelmed and will abandon it. Instead, focus on the most important or dangerous issues found and resolve them steadily over time. Ideally, the chosen tool will support automatic ranking and allow custom rules, sorting, and related features, so less time is spent triaging issues and more time fixing them. When teams first get started with static code analysis, it's critical to make sure that all new code is as clean as possible, so that it does not add to the technical debt that has likely accumulated over the years. To this end, consider "breaking the build" automatically when the static code analysis tool finds a critical new quality or security issue.

Micro and macro level inspection
As well as continuous code inspection, think about setting up overnight inspections that provide full project analysis, looking at all the commits added during the day and how they harmonize. Data flow analysis provides a simulation of how the code would execute in practice.
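
As a rough sketch of what that simulation catches (the function below is invented for illustration), consider a value that is only assigned on some execution paths; checking each statement in isolation misses the problem, but following the paths does not:

#include <cstdlib>

int parsePercentage(const char *text) {
    int value;                        // not initialized on every path
    if (text != nullptr) {
        value = std::atoi(text);
    }
    // Data flow analysis follows the path where text == nullptr and
    // reports that 'value' may be read before it has been assigned.
    if (value > 100) {
        value = 100;
    }
    return value;
}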

False results
If the product absolutely can't fail (e.g., missile defense, self-driving cars, pacemakers), it's more important to find every potentially disastrous bug than it is to produce code faster. In this case, false positives are far less of a concern than false negatives. Conversely, where the use case is not so severe, it may make more sense to implement a more lightweight tool: one that might occasionally miss issues, but whose impact on the project is reduced.

DevOps projects are just going to get bigger, with more complexity and moving parts. Emphasis on the safety, quality and compliance of software is going to rise in tandem. Addressing those challenges needs a multi-faceted strategy, of which the deployment of coding standards and static code analysis during the development process is just a part, but one that can certainly contribute towards making it harder for malicious code to be introduced in the future.

Rod Cope is CTO of Perforce
