We recently passed the 2 trillion mark for lines of code scanned. 2 trillion! That’s a lot of code, and a lot of scanning, and a lot of intelligence about what vulnerabilities are lurking where and the best ways to manage them. Our State of Software Security (SoSS) reports leverage this goldmine of data to highlight lessons learned, best practices, trends and insights for anyone starting or managing an application security initiative.
A major point that emerged from our recently published SoSS version 7 was that simply initiating a formal AppSec program dramatically reduces risk. Across our entire customer base, we have found that formal AppSec processes involving application scanning reduce flaw density by an average of 46 percent.
When organizations take their AppSec process to the next level through more advanced best practices, the risk reduction grows even more dramatic. Our data reveals that within larger AppSec programs (those managing more than 20 applications), the top performers had vulnerability fix rates 68 percent better than average performers.
My experience managing application security for a global investment bank is definitely in line with the results of this study. Implementing a measurable AppSec program that wove security into developer processes and featured the best practices listed below dramatically reduced the number of vulnerabilities found in our applications, and in one year increased our entire application portfolio’s compliance with a baseline security policy by over 50 percent.
So, what did Veracode’s recent research reveal about these top-performing AppSec programs? We found that they share the following best practices.
With DevOps and continuous deployment/release models becoming more common, many organizations are realizing that the key to successful application security lies with making it easier for developers to incorporate security into their workflows.
So, it’s no surprise that programs that approach this issue from the developer’s point of view are finding much higher levels of success. For example, programs that include remediation coaching and/or developer eLearning see vast improvements in fix rates over those that do not.
Data from our platform reveals that organizations that employ remediation coaching services, which work with developers to prioritize and fix security-related defects, see a 1.45x improvement in flaw density reduction from their efforts. And those who arm their developers with eLearning opportunities are logging a whopping 6x improvement in flaw density reduction through their remediation practices.
In my experience, many of my development teams found these consultation calls so valuable that they changed the dynamic of the security/development relationship. Rather than security chasing down developers, the development teams started proactively reaching out to my security team for advice on the security aspects of new “greenfield” projects they were working on – talk about getting security in at the start!
We’ve been talking about the myth of the “AppSec silver bullet” for a while now. And our new SoSS data solidifies this point.
SoSS 7 shows statistically that there are significant differences in the types of vulnerabilities that are discovered by looking at applications dynamically at runtime, as compared to static tests in a non-runtime environment.
When we looked at the top five vulnerabilities unearthed by dynamic scanning, one of them (deployment configuration) did not appear at all in the list of vulnerabilities uncovered by static analysis.
Likewise, 25 percent of applications tested dynamically had Cross-Site Scripting vulnerabilities, versus 52 percent of applications tested statically.
The major takeaway here is that neither type of test is necessarily better than the other; they’re just different. As such, it is important for security managers to remember that no single testing mechanism is going to solve all of their application security problems. It takes a balanced approach to properly evaluate and mitigate risks.
Our recent data around the use of our Developer Sandbox solution again highlights the growing importance of AppSec solutions that facilitate developers’ role in security. When developers are working to improve the overall security of their applications through early and frequent testing, they can end up sidetracked or stalled if they’re bombarded with policy violations while the application is still in active development. Veracode’s Developer Sandbox offers an assessment space that isn’t tied to compliance metrics and that allows developers to work “in private” – out of sight of the security teams.
Our data around this capability reveals that Developer Sandbox assessments both make developers more likely to cooperate with the security team and significantly improve long-term application security. When we look at organizations that don’t use sandbox scanning, their average fix rates are well below the overall average. Meanwhile, organizations that perform even one sandbox scan during the early stages of the SDLC report above-average fix rates.
Overall, developers who test unofficially using Developer Sandbox scanning improve policy-based vulnerability fix rates by about 2x.
We find time and again that AppSec success hinges on metrics – both to prove the value of the program and get buy-in from others, and to home in on areas where the program needs to be tweaked and improved. For instance, your program will benefit greatly from the answers to questions like: Is your fix rate improving over time? How does your rate compare to your peers’? Has your flaw density improved, and how do you rank against others on this metric? What types of vulnerabilities are you seeing, and is that in line with others in your industry?
Keep your program on track by measuring your AppSec results through a set of metrics and key performance indicators (KPIs), such as compliance, flaw prevalence, fix rates, industry standards and business- and goal-specific performance.
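To make two of these KPIs concrete, here is a minimal sketch of how a team might compute fix rate and flaw density from scan results. The function names, inputs, and the per-MB density unit are illustrative assumptions for this example, not Veracode’s actual formulas.

```python
# Illustrative KPI calculations for an AppSec program.
# Assumed definitions: fix rate = share of discovered flaws remediated;
# flaw density = flaws per MB of scanned code.

def fix_rate(flaws_found: int, flaws_fixed: int) -> float:
    """Fraction of discovered flaws that have been fixed."""
    if flaws_found == 0:
        return 1.0  # nothing found, nothing left to fix
    return flaws_fixed / flaws_found

def flaw_density(flaws_found: int, code_size_mb: float) -> float:
    """Flaws per MB of scanned code."""
    return flaws_found / code_size_mb

# Example: compare two quarterly scans of the same portfolio
# (numbers are made up for illustration).
q1 = {"found": 120, "fixed": 48, "mb": 300.0}
q2 = {"found": 90, "fixed": 63, "mb": 310.0}

print(f"Q1 fix rate: {fix_rate(q1['found'], q1['fixed']):.0%}")   # 40%
print(f"Q2 fix rate: {fix_rate(q2['found'], q2['fixed']):.0%}")   # 70%
print(f"Q1 density:  {flaw_density(q1['found'], q1['mb']):.2f}")  # 0.40
print(f"Q2 density:  {flaw_density(q2['found'], q2['mb']):.2f}")  # 0.29
```

Tracked quarter over quarter, trends in these two numbers answer the questions above: a rising fix rate and falling flaw density are the clearest signs the program is working.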
Get the details on how others are finding AppSec success in our State of Software Security report, version 7.
Is your experience in line with what we found in this report? Have you had success with any of the best practices listed above?