The metrics presented in Veracode’s ninth iteration of the State of Software Security (SOSS) report represent the industry’s most comprehensive set of application security benchmarks. The data is drawn from real-world applications tested by customers on Veracode’s application security platform: more than 2 trillion lines of code across 700,000 scans, all performed over the 12-month period between April 1, 2017 and March 31, 2018. As in previous versions of the report, we provide insight into how well most applications adhere to industry best practices, like the OWASP Top 10 guidelines, and which types of vulnerabilities turn up most often in typical applications:
Industry best practice adherence
The most common vulnerabilities present in applications remained largely the same
As we worked on the report, we recognized that our data could provide even more insight than the standard benchmarks we’ve always analyzed in the past. The most important function of an application security program is how effectively flaws are fixed once they are discovered. Our goal this year was to really delve deep into the statistics that show how long different types of vulnerabilities take to get fixed, and to understand why certain risks linger for as long as they do.
Vulnerability fix behaviors
More than 70% of all flaws remain 1 month after discovery and nearly 55% remain 3 months after discovery.
1 in 4 high and very high severity flaws are not addressed within 290 days of discovery.
Flaws persist 3.5x longer in applications only scanned 1 to 3 times per year compared to ones tested 7 to 12 times per year.
DevSecOps unicorns exist, and they greatly outperform their peers in how quickly they fix flaws; the most active DevSecOps programs fix flaws more than 11.5x faster than the typical organization.
Infrastructure, manufacturing, and financial industries have the hardest time fully addressing found flaws.
The data analysis tells some very important stories for security professionals and development teams alike about how they can take measurable steps to reduce application risks. We hope our readers are able to use all of these benchmarks to good effect.
In examining the data for the percentage of applications under test by our customers in the past year, we can see that the vast majority of them suffer from at least one vulnerability. A significant number of these vulnerabilities are of high or very high severity.
FIGURE 1: Apps with at Least One Vulnerability
Throughout the report, we share data from two types of scans. We commonly look at the first scan of applications, which indicates testing of applications that haven’t previously gone through the AppSec program. We also look at latest scan statistics, which include tests of applications that are currently in the middle of remediation, as well as applications for which organizations have deemed they’ve fixed enough flaws and have stopped scanning. Even on our customers’ latest scans, we found that one in three applications were vulnerable to attack through high or very high severity flaws.
Breaking down the prevalence of flaws by vulnerability categories shows that all of the usual suspects are present at roughly the same rate as in previous years. In fact, our top 10 most prevalent flaw types have hardly budged in the past year.
FIGURE 2: Prevalence of Common Flaw Types
That means that organizations across the board have made very little headway in creating awareness within their development organizations about serious vulnerabilities, like cryptographic flaws, SQL injection, and cross-site scripting. This is most likely a result of organizations struggling to embed security best practices into their SDLC, regardless of which standards those practices come from. The data here shows that plainly.
FIGURE 3: Adherence to Industry Standards
A historic look at OWASP compliance on first scan shows that this year’s pass rate is significantly better than five years ago. Unfortunately, OWASP compliance hit its peak in 2016, and this year marks the third in a row that pass rates have declined. One variable to note is that OWASP updated its Top 10 list in 2017, while Veracode policy support wasn’t fully updated until the end of the data window for SOSS Vol. 9; this could have been a factor in the declining pass rates, since shifts in focus to new vulnerability types take a while to be implemented.
The big question, of course, is how effective are organizations at closing vulnerabilities once they’ve found them through our scans?
FIGURE 4: OWASP Year-By-Year Comparison
The good news here is that customers are closing more of their flaws annually than in the past. Nearly 70% of flaws discovered in the past year were closed through remediation or mitigation, a jump of nearly 12 percentage points since State of Software Security Vol. 8.
FIGURE 5: Flaws Closed vs Open
Simply looking at the sheer volume of open versus closed vulnerabilities only gives us so much visibility into the true efficacy of customers’ AppSec practices. The time it takes for attackers to come up with exploits for newly discovered vulnerabilities is measured in hours or days, which means it is crucial to measure both how many flaws organizations close out every year and how long it takes them to do so.
FIGURE 6: Fix Velocity
This year, we’ve taken a closer look at our customers’ fix rate, and when we look at the curve for the average fix velocity from the first day of discovery, we see that it takes organizations a troubling amount of time to address most of their flaws. One week after first discovery, organizations close out only about 15% of vulnerabilities. In the first month, that closure reaches just under 30%. By the three-month mark, organizations haven’t even made it halfway, closing only a little more than 45% of all flaws.
Let's flip that curve and discuss the probability that a vulnerability will persist in an application over time.
We call this flaw persistence analysis.
FIGURE 7: Flaw Persistence Analysis
Visualizing the data in this way allows us to get a clearer view of how long risk lingers in any given application under test. We’ve used flaw persistence as the basis for a lot of new investigation into this year’s data. We hope this new view provides valuable insights into how customers prioritize the flaws they fix the fastest, as well as offering evidence of what isn’t being fixed in a timely fashion, and how that impacts application risk exposure.
Remediation and mitigation of found vulnerabilities are the ultimate objective of Veracode customers, so we wanted to examine our data in a new way to give readers a better understanding of how organizations prioritize their fix behavior.
Understanding how long it takes to close vulnerabilities under different circumstances not only offers a glimpse into the current state of software security practices, but also highlights how organizations can work to incrementally improve their own security.
In the previous section, we shared what we call flaw persistence analysis for all the applications our customers are testing. That analysis presents a line curve to show the probability that a vulnerability will remain in any given application over time, and we denoted the points in time on the curve at which 25%, 50%, and 75% of flaws in a typical application are usually fixed.
To better understand how long different kinds of flaws tend to linger in applications, we are using these percentiles to chart out what we call flaw persistence intervals. Below, you will see the flaw persistence interval for all applications, which corresponds to the flaw persistence analysis curve shown in the previous section.
FIGURE 8: Overall Flaw Persistence Interval
In green, you will see that it takes 21 days to close 25% of vulnerabilities. In blue, the chart shows that it takes 121 days to close 50% of vulnerabilities. In pink, the data shows that it takes 472 days to close 75% of vulnerabilities. That means that, overall, one in four vulnerabilities remain open well over a year after first discovery.
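The interval milestones described here can be approximated with a simple percentile computation over days-to-close. The sketch below is a hypothetical illustration using fabricated durations, not the report’s actual methodology; a real analysis would also have to account for flaws still open at the end of the observation window (right-censoring), for instance with a Kaplan-Meier estimator.

```python
def persistence_milestones(days_to_close, quartiles=(25, 50, 75)):
    """Day by which each given percentage of fixed flaws was closed.

    days_to_close: days from first discovery to closure, one entry per
    fixed flaw. This sketch ignores still-open (right-censored) flaws,
    which a true survival analysis would handle.
    """
    ordered = sorted(days_to_close)
    return {q: ordered[max(0, int(len(ordered) * q / 100) - 1)]
            for q in quartiles}

# Fabricated, illustrative durations -- not Veracode data
sample_durations = list(range(1, 101))
milestones = persistence_milestones(sample_durations)
```

With real scan data, the 25/50/75 keys of the result would correspond to the green, blue, and pink plots on the interval charts.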
This overall flaw persistence interval serves as the benchmark against which we will compare other intervals throughout the rest of the report. Readers should note that the dotted lines in green, blue, and pink on this and subsequent charts track to the plots on this first overall interval chart. This will provide visibility into whether certain factors correlate to a speeding up or slowing down of the rate of vulnerability closures compared to the overall norm. Interval plots to the left of a corresponding line indicate a faster speed in reaching that particular milestone, while plots to the right of the corresponding line indicate a slower speed of remediation.
Let’s begin with one of the variables that application security teams are most urged to target for speedy remediation: vulnerability severity.
The potential impact to the confidentiality, integrity, and availability of the application determines the flaw severity of any given vulnerability. The highest severity flaws are less complicated to attack, offer more opportunity for full application compromise, and are more likely to be remotely exploitable — overall they tend to open up a wider attack blast radius.
Severity scores are rated on a five-point scale, from very low to very high.
Breaking down the flaw persistence intervals based on where vulnerabilities fall on this scale shows that organizations are making a big push to fix their highest severity vulnerabilities first.
The first quartile of very high severity vulnerability closures is reached more than a week sooner than the norm, and organizations reached their last quartile of open very high severity vulnerabilities 237 days sooner than the norm. Though the intervals for burning down the first 25% and 50% of high severity flaws tracked with the norm, organizations reached closure on 75% of these flaws more than 100 days sooner than the norm.
On the flip side, low severity flaws were attended to at a significantly slower rate than the average speed of closure. It took organizations an average of 604 days to close three-quarters of these weaknesses.
FIGURE 9: Flaw Persistence Intervals by Flaw Severity
In order to give a clearer picture of how severity prioritization is realistically working out in most situations, we rolled flaw persistence intervals into two severity groupings. The first group encompassed very high and high vulnerabilities, and the second included everything below that.
FIGURE 10: Simplified Flaw Persistence Intervals by Severity
This pair of intervals more clearly shows the correlation between the severity of the vulnerability and the speed of closure. Organizations hit the three-quarters-closed mark about 57% sooner for high and very high vulnerabilities than for their less severe counterparts.
If we translate the numbers into flaw persistence analysis curves, you can see even more clearly what the persistence delta looks like between the two severity clusters from the date of first discovery onward.
FIGURE 11: Severity Flaw Persistence Analysis
Exploitability adds another dimension to the measurement of the seriousness of a flaw. While severity scoring looks at a flaw through the lens of its potential overall impact on the application, exploitability specifically estimates the likelihood a flaw will be attacked based on the ease with which exploits can be executed. It is important to look at exploitability ratings to specifically prioritize those vulnerabilities that are both high impact and trivial to take advantage of. For example, a high severity flaw with a very high exploitability score introduces a lot more risk than a high severity flaw with a very low exploitability score.
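One way to sketch the kind of triage this implies is to weight impact by likelihood of attack. The field names and the multiplicative composite below are illustrative assumptions, not Veracode’s actual scoring formula:

```python
# Hypothetical triage sketch -- the field names and composite score are
# illustrative assumptions, not Veracode's scoring formula.
flaws = [
    {"id": "A", "severity": 4, "exploitability": 1},  # high impact, unlikely attack
    {"id": "B", "severity": 4, "exploitability": 5},  # high impact, trivial attack
    {"id": "C", "severity": 2, "exploitability": 5},  # lower impact, trivial attack
]

def priority(flaw):
    # Weight potential impact (severity) by likelihood of attack
    # (exploitability) so that high-impact, easy-to-exploit flaws
    # rise to the top of the remediation queue.
    return flaw["severity"] * flaw["exploitability"]

triage_order = [f["id"] for f in sorted(flaws, key=priority, reverse=True)]
```

Under this toy rule, the high severity, highly exploitable flaw outranks both the equally severe but hard-to-exploit flaw and the easily exploited but lower impact one.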
When we examine the flaw persistence intervals based on exploitability, there are a few surprises that jump out at us. While the flaws judged as likely to be exploited with a score of “Exploitability: 1” have a sped-up flaw persistence interval relative to the average and to other lower exploitability scores, the next higher exploitability category does not. Those flaws ranked very likely to be exploited with an “Exploitability: 2” rating actually trail the average time for closure in all three of the flaw persistence intervals. It takes 40 days longer to close out 75% of these highly exploitable flaws than it does the average vulnerability.
FIGURE 12: Flaw Persistence Intervals by Exploitability
In order to get a clearer picture of how exploitability impacts remediation priorities within pools of similar severity flaws, we created additional flaw persistence intervals that analyzed different combinations of severity and exploitability. In these instances, we did see some of the differentiation we’d expect. For example, organizations reached the last quartile of open Severity 2 and 3 flaws a whopping 214 days faster when those flaws were highly exploitable. But exploitability made a much less dramatic difference within the pool of Severity 4 and 5 vulnerabilities.
It is hard to tell exactly what is going on here with this counterintuitive result, but there are a few possibilities.
First of all, exploitability is more of a secondary prioritization metric than severity. Veracode typically recommends that developers use exploitability scoring as a way to sift through a cluster of vulnerabilities of a similar severity and ease of fix, putting the most exploitable of those on the top of that particular cluster.
We thought it could be that there were a number of highly exploitable but lower severity flaws that were skewing the flaw persistence intervals for this group — particularly considering that this category has a much smaller sample size than the other lower exploitability scores.
FIGURE 13: Flaw Persistence Interval by Severity and Exploitability
It could be that we’re seeing another variable at work, namely the difficulty of remediation. The most severe and exploitable flaws are often deeply embedded in the underlying architecture of an application and require more complex remediation work. As such, they’re much harder to fix, and that could be what extends flaw persistence in a population of flaws that should be at the very top of the remediation priority list.
In a textbook scenario, the properties of the vulnerability itself shouldn’t be the only factors driving fix prioritization. A big part of the risk equation is the value of a particular asset at risk. As such, organizations should — in theory — also be weighting the business criticality of an affected application into their prioritization calculations.
However, when we looked at the data, we discovered that this is largely not happening. For example, a distribution of first scan and latest scan pass rates showed that the most important applications passed at a lower rate than other applications, and they didn’t even show a higher improvement rate between first and latest scan compared to the others.
FIGURE 14: First Scan vs Latest Scan by Criticality of App
The data for flaw persistence based on business criticality further bore out our conclusion that organizations aren’t using business criticality as a very strong prioritization variable.
While vulnerabilities in low criticality applications do trail all others in speed to reach all three closure percentiles, the flaws in very low criticality applications are addressed the quickest. This is a quirk of the data that we’re trying to understand — it could be that the small sample size is adding greater variability into the findings.
FIGURE 15: Flaw Persistence Interval by Application Business Criticality
What’s more, the flaws in very high criticality apps are actually fixed more slowly than the average application. It takes well over two months longer to fix 75% of vulnerabilities in these mission-critical apps than it takes to reach the same mark in the average application.
Now, stability concerns and change management policies on mission-critical apps are likely much more stringent, which impacts how quickly teams can get remediations deployed. But the lesson here is that these unfixed flaws are leaving extraordinary windows of risk open within organizations’ most valuable application assets.
Drilling down further into the data, we can see that the disregard for app criticality mostly plays out even when filtered by severity of flaw.
FIGURE 16: Flaw Persistence Interval by Criticality and Severity
If we compare the flaw persistence analysis curves for groups paired by different criticality and severity scores, we see that they’re more likely to be pulled by the severity of the flaw than the criticality of the app.
FIGURE 17: Flaw Persistence Analysis by Criticality and Severity
The one silver lining occurs as organizations get toward the end of flaw burndown, when some prioritization does seem to kick in to differentiate among the lingering highest severity flaws. Around the six-month mark, you can see a clear difference between the highest severity flaws in highly critical apps versus those in less important apps.
FIGURE 18: Flaw Persistence Interval by Region
Unsurprisingly, vulnerabilities addressed by organizations in the Americas mostly tracked to the overall average; the sheer volume of these vulnerabilities weights the average heavily. One thing to note, however, is that companies in the Americas did outperform the average on the tail end of the vulnerability burndown process, which indicates how badly companies in APAC and EMEA trailed in getting to their last quartile of open vulnerabilities.
In examining the APAC companies’ speed of closure, it is interesting to find that these firms jumped on their first chunk of flaws very quickly. It only took APAC companies about a week to close out 25% of their flaws. However, the spread between reaching that first milestone and eventually resolving 75% of flaws was enormous. It took APAC companies well over two years to start working on their last quartile of open vulnerabilities.
Meanwhile, EMEA companies lagged behind the average significantly at every milepost of the flaw persistence intervals. It took more than double the average time for EMEA organizations to close out three-quarters of their open vulnerabilities. Troublingly, 25% of vulnerabilities persisted more than two-and-a-half years after discovery.
Further breaking these persistence intervals out by country, we did find some regional outliers worth noting.
FIGURE 19: Flaw Persistence Interval by Country
For example, companies in India, the United Kingdom, and the Netherlands greatly outperformed their regional counterparts in speed of fix.
In particular, the rapid rate of remediation evidenced by Dutch companies remains a promising bright spot amid the worrying time it took their EMEA counterparts to fix the same percentage of flaws. Dutch firms started working on their last quartile of open flaws within five months of discovery, the fastest rate worldwide and three times as fast as the average application.
That sense of urgency was contrasted by outliers on the other end of the spectrum in Germany and Switzerland. It took German firms more than three years to reach their final quartile of open vulnerabilities, and it took Swiss organizations nearly four years to reach the same milepost.
We will dive into industry benchmarks more fully later on in the report, but we would be remiss in discussing overall flaw persistence trends without touching on industry breakouts.
FIGURE 20: Flaw Persistence Intervals by Industry
Healthcare organizations are remediating at the most rapid rate at every interval compared to their peers. It takes just a little over seven months for healthcare organizations to reach the final quartile of open vulnerabilities, about eight months sooner than it takes the average organization to reach the same landmark. Similarly, retail and technology firms outpace the average speed of fix at every interval.
While infrastructure firms address the first half of their open flaws more rapidly than average, it takes them significantly more time to get to the second half. At least one in four vulnerabilities are left open almost three years after first discovery within infrastructure industry apps. This likely reflects the great difficulty that these firms face in fixing many applications within critical systems that have extremely tight thresholds for uptime and availability.
Government and education organizations show the reverse of the infrastructure pattern: they’re right about on par with the average time to address the first half of their open flaws, but they pick up speed once they get over that hump. This could indicate bureaucratic inertia that impedes initial progress but is overcome once security teams and developers cut through the red tape.
As we ruminate over the speed at which organizations are addressing vulnerabilities, it’s worth taking a quick look at how these flaws are being closed out. In tracking flaw closures, there are two main categories — remediation and mitigation.
FIGURE 21: Mitigation vs. Remediation
As we see here, a little over half of all flaws are fixed, and just under 44% of them are left open. Then there’s a small sliver that are not closed out with a code fix, but instead through mitigating factors noted by developers. This could be either because developers deem them false positives, or because they believe other elements of the application’s design or its environment counterbalance the risk of the flagged vulnerability.
The good news here is that developers are clearly taking static application security tests seriously — they’re not just blindly rejecting findings as false positives and moving on. In fact, all mitigation reasons account for a little more than 4% of vulnerability closures.
If we zoom in on just the vulnerabilities closed by mitigation, we can get an even clearer picture of the reasons noted by developers for closing out flaws without altering code.
FIGURE 22: Developer Mitigation Reasons
This chart shows that potential false positives aren’t even the leading reason developers cite for a close by mitigation. In the majority of instances, developers accept that static analysis may be finding something in the application, but they disagree with the assumptions made about the design or the environment in flagging it as a flaw. This is where mitigation by design or by environment kicks in. While the soundness of some of the assumptions developers make to deem a flaw mitigated may be up for debate, the good news is that these mitigations make up such a slim share of flaw closures. This should give organizations peace of mind that when a flaw is closed, it is either fixed or closed for good reasons.
One final thought on the prioritization of how organizations fix flaws is that the flaw persistence intervals above do not really delve into the impact of policy on timing. Usually, individual organizational policies will drive our customers’ fix behavior above all other factors, and each of those policy sets are unique. Based on our analysis, many policies clearly take into account flaw severity. Some might take into account exploitability, others might emphasize certain vulnerability categories, and a few others will dictate how fixes are made to specific applications based on what they do for the business.
At the end of the day, an individual developer is going to be looking at his or her organization’s policy to chart the plan of attack for closing out vulnerabilities. For any given customer, those policies may be based on some of the variables we laid out here, or they could be based on other factors unique to their organization or industry.
The takeaway for the data laid out in this section of SOSS Vol. 9 is that organizations need to start thinking more critically about the factors that impact what they fix first. We called the charts laid out in this analysis flaw persistence intervals because we want to emphasize that they’re offering a very detailed picture of the time of exposure faced by allowing these clusters of open vulnerabilities to linger.
In analyzing the data, we found that the most common types of vulnerabilities cropped up in largely the same proportions as last year. The top four vulnerability categories presented themselves in more than half of all tested applications. This means the majority of applications suffered from information leakage, cryptographic problems, poor code quality, and CRLF Injection.
FIGURE 23: 20 Most Common Vulnerability Categories
Other heavy-hitters also showed up in statistically significant portions of software. For example, we discovered highly exploitable cross-site scripting flaws in nearly 49% of applications, and SQL injection appeared about as often as ever, showing up in almost 28% of tested software.
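For readers who want a concrete picture of the most familiar of these categories, here is a minimal SQL injection sketch using Python’s sqlite3 module; the table, data, and attacker input are invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

attacker_input = "nobody' OR '1'='1"

# Vulnerable: attacker-controlled input concatenated directly into the
# query, so the OR clause becomes part of the SQL and matches every row.
vulnerable = conn.execute(
    "SELECT role FROM users WHERE name = '" + attacker_input + "'"
).fetchall()

# Remediated: a parameterized query treats the input as data, not SQL,
# so no row matches the literal string.
safe = conn.execute(
    "SELECT role FROM users WHERE name = ?", (attacker_input,)
).fetchall()
```

The vulnerable query leaks every user’s role; the parameterized version returns nothing, which is why parameterized queries are the standard remediation for this flaw category.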
One thing to keep in mind is that this particular distribution of common vulnerabilities was found through Static Analysis Security Testing (SAST), which examines code in a non-runtime environment. We’ve largely focused our data analysis on SAST results because we believe it is more statistically reflective of the high-level efficacy of AppSec during the SDLC. Static testing is more commonly done earlier in the SDLC, whereas dynamic tests are done later in the lifecycle for a variety of reasons, including the length of time it takes to test dynamically.
However, we should note that there are some differences in the occurrence of flaw types when we look at the prevalence in results for Dynamic Analysis Security Testing (DAST), which examines the application as it executes in a runtime environment.
Dynamic testing offers a totally different testing methodology and environment, so it shouldn’t be surprising that it’s stronger at dredging up different classes of flaws. The top 10 common vulnerabilities uncovered by DAST are still heavy on flaws like information leakage and cryptographic issues, but it also shows a higher prevalence of server configuration and deployment configuration flaws. These are flaws that simply can’t be found prior to code execution, but which offer a very viable path to attack. As such, they still need to be on the AppSec radar.
FIGURE 24: Top 10 Vulnerability Categories by Dynamic Application Security Testing
As we examine the top vulnerabilities, it is also crucial to consider that not every flaw type is created equal. It would be myopic to make judgements on risk simply by looking at flaw categories by volume of vulnerabilities present. For example, code quality flaws may be present in twice as many applications as SQL injection vulnerabilities, but that does not mean they pose twice as much risk as SQLi to the state of software security. Probably quite the opposite. As a class, SQLi tends to present flaws of a much higher severity and exploitability than code quality vulnerabilities.
Once organizations dig into individual vulnerabilities, they’ll see that each of these category types exhibit different envelopes of risk based on exploitability and severity ratings. That must be taken into account when setting remediation priorities. However, even exploitability and severity metrics are not perfect indications of how to prioritize remediation of different flaw categories. Certain categories that may have relatively low measurements of severity or exploitability could hold significant risk in many situations — particularly when chained to other flaws. The key thing to keep in mind is context.
A low severity information leakage flaw could provide just the right amount of system knowledge an attacker needs to leverage a vulnerability that might otherwise be difficult to exploit. Or a low severity credentials management flaw, which might not be considered very dangerous, could hand the attackers the keys to an account that could be used to attack more serious flaws elsewhere in the software.
Toxic combinations of flaws are not necessarily reflected in severity or exploitability ratings. In the real world, attack chaining matters. Being mindful of that reality adds further texture to the idea of flaw persistence. The more vulnerabilities organizations leave open to accumulate alongside other persistent flaws, the more attack surface the bad guys have to work with when stringing together their exploits.
FIGURE 25: Flaw Persistence Analysis
As we examine the flaw persistence of common flaw categories, we can easily see that each of these tracked flaw categories presents its own unique remediation challenges. Some of the deltas here in flaw persistence are simply reflecting the difference in severity of each flaw type. But certain flaw categories are also easier to fix than others, contributing to the sometimes wide differences in the time it takes to address some categories over others.
FIGURE 36: Command or Argument Injection Snapshot
FIGURE 37: Buffer Overflow Snapshot
FIGURE 38: Dangerous Functions Snapshot
FIGURE 39: Untrusted Initialization Snapshot
FIGURE 40: Untrusted Search Path Snapshot
DevOps practices have taken the IT world by storm. Enterprises increasingly recognize that the speed of software delivery spurred on by DevOps practices can often be a game changer when it comes to digital transformation and business competitiveness. One study by CA Technologies recently showed that the highest performing organizations in DevOps and Agile processes are seeing a 60% higher rate of revenue and profit growth, and are 2.4x more likely than their mainstream counterparts to be growing their business at a rate of more than 20%.
As the DevOps movement has unfolded, security-minded organizations have recognized that embedding security design and testing directly into the continuous software delivery cycle of DevOps is a must for enterprises. This is the genesis of DevSecOps principles, which offer a balance of speed, flexibility, and risk management for organizations that adopt them. The difficulty is that, until now, it has been tough to find concrete evidence of DevSecOps’ security benefits.
That’s all changing, because we’ve made some significant breakthroughs with our SOSS 9 analysis. This is the third year in a row that we’ve documented momentum for DevSecOps practices in the enterprise, and now with our flaw persistence analysis, we’ve also got hard evidence to show that DevSecOps has the potential to be a very positive influence on the state of software security.
Our data shows that customers taking advantage of DevSecOps’ continuous software delivery are closing their vulnerabilities more quickly than the typical organization.
Over the past three years, we’ve examined scanning frequency as a bellwether for the prevalence of DevSecOps adoption in our customer base. Our hypothesis is that the more frequently organizations are scanning their software, the more likely it is that they’re engaging in DevSecOps practices.
Incrementalism is the name of the game in DevOps, which focuses heavily on deploying small, frequent software builds. Doing it this way makes it easier to deliver gradual improvements to all aspects of the application. When organizations embrace DevSecOps, they embed security checks into those ongoing builds, folding in continuous improvement of the application’s security posture alongside feature improvement.
Keeping this in mind, it’s only natural that a DevSecOps organization will scan much more frequently than a traditional waterfall development organization. Waterfall organizations tend to top-load huge changes into a lengthy development cycle, and usually kick security tests to the end of that process as a cursory checkbox item.
To keep things in perspective, when we look at scan frequency by application, we see that it’s still heavily weighted toward just a handful of scans per application. The median scan rate across our entire application portfolio under test is still just two scans per year. Plenty of organizations obviously still stick to what they’ve always done.
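A low median alongside a long tail of heavy scanners is exactly what a handful of DevSecOps outliers in a mostly traditional population would produce. As a sketch (with invented, illustrative scan counts, not the report’s underlying data), the median is insensitive to those outliers:

```python
from statistics import median

# Hypothetical per-application scan counts over one year.
# Illustrative only -- not the report's actual dataset.
scans_per_app = [1, 2, 2, 3, 1, 14, 2, 52, 3, 1, 2, 300]

# A few very active applications (52, 300 scans) barely move the median,
# which stays anchored by the many apps scanned only once or twice.
print(median(scans_per_app))  # → 2.0
```

This is why the report uses scan frequency buckets rather than a single average: the distribution, not the midpoint, reveals the DevSecOps adopters.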
FIGURE 41: Scan Rates
FIGURE 42: Scan Distribution
When an application is scanned only two or three times in a year, and those scans mostly occur within a few days of one another, an obvious pattern emerges. Many of these development teams are running their security checks, fixing the problems their organization’s policies dictate, and then quickly moving on. This is same-old, same-old behavior.
But as we delve into the scan distributions of organizations scanning six or more times a year, we see more rescans at weekly and monthly intervals, too. This spread could indicate the sprint-based development practices popular among DevOps teams, who frequently adhere to Agile and Scrum methods. Sprint development has teams working on a limited, time-boxed scope of work, typically in two-week or month-long sprint cycles.
The data could indicate that DevSecOps teams are working intensely on a particular application or app feature for one, two, or three focused sprint cycles, and wrapping security scans into that work. In that case, it would make sense to see a number of scans landing within a few days or weeks of one another. The question is: are these security-focused sprints, done so that a team can essentially ignore security for the rest of the year? Or are they feature-focused sprints with security folded in? It’s a difficult question to answer, but one that bears reflection.
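One way to probe that question is to group an application’s scan dates into “bursts” and count how many bursts occur per year: a single annual burst looks like a one-off security push, while several sprint-spaced bursts suggest scanning folded into ongoing development. A minimal sketch of that heuristic, with a hypothetical `gap_days` threshold and invented dates:

```python
from datetime import date

def scan_bursts(scan_dates, gap_days=14):
    """Group sorted scan dates into bursts separated by more than gap_days.

    Hypothetical heuristic for illustration, not the report's methodology:
    one burst per year suggests a checkbox security pass; several bursts
    suggests scans embedded in sprint cycles.
    """
    bursts = []
    for d in sorted(scan_dates):
        if bursts and (d - bursts[-1][-1]).days <= gap_days:
            bursts[-1].append(d)  # close enough to the last scan: same burst
        else:
            bursts.append([d])    # a gap: start a new burst
    return bursts

# Invented example: a cluster of winter scans, then one stray scan in June.
dates = [date(2018, 1, 8), date(2018, 1, 10), date(2018, 1, 22),
         date(2018, 2, 5), date(2018, 6, 4)]
print(len(scan_bursts(dates)))  # → 2
```

Tuning `gap_days` toward sprint length (two weeks to a month) is what would separate sprint-cadence rescanning from back-to-back remediation rescans.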
Whatever the reason for the cadence of scanning, one thing is certain. Our data shows that there is a very strong correlation between how many times a year an organization scans and how quickly they address their vulnerabilities.
As we explained above, our working hypothesis is that a greater frequency of scans per year indicates a higher likelihood of DevSecOps adherence. Whether they officially call what they do ‘DevOps,’ ‘Agile,’ or something else entirely, we can show that the teams that are scanning more often are making incremental improvements every time they test.
This does amazing things for fix velocity.
FIGURE 43: Fix Velocity Based on Scan Frequency
As you can see above, every jump in annual scan rates sees a commensurate step up in the speed of flaw fixes. Once organizations reach the point of 300 or more scans per year — the true territory of DevSecOps unicorns — the fix velocity goes into overdrive.
If we flip the discussion around and discuss flaw persistence intervals, we get greater visibility into how the frequency of scanning corresponds numerically to flaw persistence.
FIGURE 44: Effect of Scan Frequency on Flaw Persistence Intervals
If we look at flaw persistence intervals for organizations that scan only a couple of times per year, it takes far longer than average to reach any of the first three quartiles. When apps are tested fewer than three times a year, flaws persist more than 3.5x longer than when organizations can bump that up to seven to 12 scans annually. At that scan rate, flaw persistence intervals track very closely to the average. Organizations really start to take a bite out of risk when they increase frequency beyond that: each step up in scan rate results in shorter and shorter flaw persistence intervals. Once organizations are scanning more than 300 times per year, they shorten flaw persistence 11.5x across the intervals compared to applications scanned only one to three times per year.
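The underlying measurement is straightforward: for each flaw, persistence is the number of days between discovery and fix, and the comparison is a summary statistic of those durations per scan-frequency bucket. A sketch of that calculation, using contrived records (the dates, rates, and resulting ~3.5x gap are invented to mirror the report’s finding, not drawn from its data):

```python
from datetime import date
from statistics import median

# Hypothetical flaw records: (discovered, fixed, app's scans per year).
flaws = [
    (date(2017, 4, 1), date(2018, 3, 1), 2),    # 334 days open, rarely scanned
    (date(2017, 5, 1), date(2018, 2, 1), 3),    # 276 days open
    (date(2017, 6, 1), date(2017, 8, 30), 10),  # 90 days open, scanned often
    (date(2017, 7, 1), date(2017, 9, 24), 12),  # 85 days open
]

def persistence(bucket):
    """Median days-to-fix for flaws in apps whose scan rate falls in bucket."""
    days = [(fixed - found).days for found, fixed, rate in flaws if rate in bucket]
    return median(days)

# 1-3 scans/year vs. 7-12 scans/year
print(persistence(range(1, 4)), persistence(range(7, 13)))  # → 305.0 87.5
```

The report’s quartile-based persistence intervals follow the same idea, just computed at the 25th, 50th, and 75th percentiles of the time-to-fix distribution rather than the median alone.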
If we look at a simplified view of the flaw persistence analysis curves, the delta is eminently clear between applications that are rescanned 12 or fewer times per year and those that are checked more than 50 times a year.
FIGURE 45: Effect of Scan Frequency on Flaw Persistence Analysis
It’s important to note that this data may not demonstrate causation. And we admit that in some instances, more frequent scanning could simply be detecting closures more quickly. However, the correlation is strong enough to offer security professionals and developers alike concrete evidence for embedding more frequent security checks into their SDLC.
We believe strongly that the same incremental processes and automation that DevSecOps teams put in place to make it easier to scan more frequently also lend themselves to faster remediation.
The data above offers some of the first statistical evidence to bear that out.
FIGURE 46: Language Prevalence
FIGURE 47: Language Flaw Persistence Analysis