Many developers dread code reviews, and one reason is that most reviewers offer only criticism rather than encouragement. Remember: as a peer reviewer, you can also reinforce what is done well, which can be every bit as important and effective as nitpicking every design flaw, mistake, bug, or styling issue.
There is intrinsic value in positive reinforcement as a way to encourage desirable behavior.
Thou Shalt Not Nag a Developer
There is a bevy of static analysis tools that scan source code for common vulnerabilities. The desired behavior of a developer using these tools is to triage and fix any vulnerabilities discovered. Nagging or guilting engineers into fixing things is miserable for both sides.
Unfortunately, the results of such tools are almost always focused on the negative, orienting their copywriting around implementation risks that can lead to exploits (your application is vulnerable to command injection, XSS, weak authentication, weak cryptography, privilege escalation, and so on). Any implementation assessment that is positive in nature (good validation criteria and checks) is muted, because these tools are optimized and tuned to remove false positives.
Consequently, such tools become an automated security nitpicker with no positive reinforcement whatsoever.
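To make the asymmetry concrete, here is a minimal sketch (hypothetical function names, Python) of the two patterns a scanner typically sees: a shell-string builder it would flag as a command-injection risk, and an argument-list builder it would simply stay silent about rather than praise.

```python
def build_command_unsafe(host: str) -> str:
    """Builds a single shell string. A scanner flags this pattern when
    the string later reaches something like subprocess.run(..., shell=True),
    because shell metacharacters in `host` are interpreted by the shell."""
    return f"ping -c 1 {host}"

def build_command_safe(host: str) -> list[str]:
    """Argument-list form: the host is passed as one argv entry, so shell
    metacharacters are never interpreted. Most scanners report nothing
    here; the good practice earns no positive reinforcement."""
    return ["ping", "-c", "1", host]

malicious = "8.8.8.8; rm -rf /"
print(build_command_unsafe(malicious))  # injection payload survives inside the shell string
print(build_command_safe(malicious))   # payload is just one inert argument
```

The safe variant is exactly the kind of "good validation and construction" signal these tools could surface positively instead of muting.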
This is dangerous for several reasons. It undercuts the effort of, and frustrates, developers who spend hours writing code, then rewriting it (fixing issues via pull requests), and rewriting it again. At times you can witness the stages of grief play out over a security vulnerability:
Anger → Denial → Bargaining → Disengagement/Acceptance.
After this experience, developers often choose to ignore or disengage from these tools.
Rather than muting observed good practices, can such tools laud them?
Retooling for Positive Reinforcement
An optimal blend of positive reinforcement and security-risk reporting would lead developers to engage continually with such security tools.
As an alum of Intuit, I was always in awe of TurboTax's delightful experience, which is primarily focused on being goal-oriented rather than task-oriented.
Nobody is motivated to do their taxes for the sake of it. TurboTax knows this and instead orients its experience around the user's true motivation ("Get your maximum refund, guaranteed") and walks toward it via milestones. Now that's a value proposition we can get behind.
Can application security tools be designed with goals in mind?
Of course, finding and fixing all vulnerabilities is NOT a measurable goal. As Dijkstra wisely put it, “Testing shows the presence, not the absence of bugs.”
Our applications evolve continually so if we knew all the vulnerabilities we were searching for, it would obviate the need to look in the first place.
The journey of a product begins with source code, which evolves into features that deliver consumer delight and value, and, inevitably, into bugs that can turn into vulnerabilities. A bug is a proxy for an “observed example of insecurity,” often a side effect of breakneck-speed productivity.
The consequences of a bug can be classified into the following outcomes:
■ Exploited — The bug was not discovered, or was ignored, and an attacker exploited it to cause harm
■ Undiscovered — The bug still lurks in the code, waiting to be exploited
■ Found — The bug became known through code review, security tooling, security testing, or ethical bounty hunting
■ Preventative — What is found is fixed, and checks are enforced so that it does not recur
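The Preventative outcome can be made concrete: once a bug is fixed, encode the fix as an automated check so it cannot silently regress. A minimal sketch (the validator and its rule are hypothetical, for illustration only):

```python
import re

# Hypothetical fix for a previously exploited bug: hostnames must match a
# strict pattern instead of being passed through to the system verbatim.
HOSTNAME_RE = re.compile(r"^[A-Za-z0-9.-]{1,253}$")

def is_valid_hostname(host: str) -> bool:
    """Returns True only for strings that look like plain hostnames."""
    return bool(HOSTNAME_RE.fullmatch(host))

# Regression checks enforced in CI so the bug does not recur:
assert is_valid_hostname("example.com")
assert not is_valid_hostname("example.com; rm -rf /")  # the original exploit payload
assert not is_valid_hostname("")                       # empty input rejected
print("preventative checks passed")
```

Running these assertions on every commit turns a one-time fix into a durable guardrail, which is precisely the positive, goal-oriented signal a tool could celebrate.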
Just as you can't magically insert quality into a piece of software, you can't sprinkle or mandate security features onto a design and expect it to become totally secure.
Can you imagine the stress of getting a critical security bug in your code? The stress is multiplied by murky and accusatory vulnerability reports demanding swift action.