On December 3, CA Veracode published a new supplemental State of Software Security Report, Focus on Application Development. As you might have guessed, the report has prompted comments and questions, particularly about the security of applications written in different programming languages.
There have been some great questions and clarification requests on both Twitter and Slashdot; keep them coming there and in the comments here, and we'll try to address them as they come. Here are answers to a few of the questions raised so far.
One commenter took issue with our statistic that 34% of the PHP applications we statically analyzed had Code or Argument Injection vulnerabilities, saying that their own code audits don't turn up PHP applications that use shell commands.
To understand this metric, it’s important to note that CA Veracode’s vulnerability categories are an in-house summary of vulnerability classes from the Common Weakness Enumeration (CWE). Let’s look at the underlying CWEs to answer this question.
In this case, this category incorporates both CWE 78, Improper Neutralization of Special Elements used in an OS Command, more commonly known as OS Command Injection, and CWE 88, Argument Injection or Modification. CWE 88 is about twice as prevalent as CWE 78 among PHP applications.
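The distinction between the two CWEs is worth making concrete. Here's a minimal sketch in Python (the report's data concerns PHP, but the pattern is language-agnostic); the `unzip` invocation and the helper names are illustrative, not taken from any application in our data set. The functions build the command rather than execute it, to keep the contrast visible:

```python
def build_shell_command(filename):
    # CWE 78 pattern: user input concatenated into a shell command string.
    # A value like "a.zip; rm -rf ~" smuggles in a second command when
    # the string is later handed to a shell.
    return "unzip " + filename

def build_argv(filename):
    # Avoids CWE 78 (no shell parses the string), but is still open to
    # CWE 88: a "filename" of "-d/tmp/evil" is parsed by unzip as an
    # option, modifying the command's behavior.
    return ["unzip", filename]

def build_argv_hardened(filename):
    # One common mitigation for argument injection: make sure the
    # operand can never be mistaken for an option flag.
    if filename.startswith("-"):
        filename = "./" + filename
    return ["unzip", filename]
```

The point is that passing an argument vector instead of a command string only closes the CWE 78 door; untrusted data in the arguments themselves is what CWE 88 is about, which is consistent with it being the more prevalent of the two in our PHP data.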
And yes, we do see PHP applications shelling out for various reasons: generally to perform file system operations (such as unzipping files), and sometimes for other tasks like installing updates or running maintenance scripts in the background. In other cases, the risk is inherited from the PHP framework on which the application is built; unless developers have inspected that framework, they may not know it carries security risks.
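For tasks like unzipping, the safest fix is often not to sanitize the shell command but to avoid the shell entirely. A hedged sketch of the contrast, again in Python for illustration (the function names and paths are hypothetical):

```python
import subprocess
import zipfile

def unzip_via_shell(archive, dest):
    # Shelling out: if archive or dest ever contains attacker-controlled
    # data, this is exactly the command/argument injection pattern
    # discussed above.
    subprocess.run("unzip %s -d %s" % (archive, dest),
                   shell=True, check=True)

def unzip_via_library(archive, dest):
    # Library call: no shell and no external binary, so neither command
    # injection (CWE 78) nor argument injection (CWE 88) applies to the
    # invocation itself.
    with zipfile.ZipFile(archive) as zf:
        # Note: extractall has its own path-traversal caveats when the
        # archive itself is untrusted.
        zf.extractall(dest)
```

PHP offers analogous library routes (e.g., its built-in ZipArchive class) for the same reason: the attack surface of a function call is much smaller than that of a command line.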
Last, I'd suggest that comparing the universe of findings from manual penetration tests to the universe of findings from static analysis isn't necessarily a meaningful comparison. Most organizations can't afford to pen test all of their applications, so they send only the "most critical" ones through and use static analysis as a cost-effective way to achieve broader coverage across their portfolio. Sometimes a static analysis is the first time a less critical application has ever been evaluated for security vulnerabilities; it's not surprising that we see less clean code in these applications.
One commenter (hi Dave!) asked about the prevalence of truly "low-hanging" SQL injection vulnerabilities in web applications, specifically how many could be exploited pre-authentication vs. post-authentication. We can provide an estimate for this, but it uses a slightly different data set and comes with some caveats.
A lot of the data reported in the paper is based on static analysis, for a couple of reasons. It typically provides a more complete picture of risk in the application, and gives us intelligence about things like the language of the application. You can’t tell everything in the world from static analysis, and one important thing that’s typically missed is information from the runtime context, such as, “did I need to log in to see this part of the application?”
For this sort of runtime context, we need to turn to dynamic analysis. Fortunately, CA Veracode offers two types of dynamic scanning: DynamicDS, which assesses a single application at a time and can be scripted to log into that application; and DynamicMP, part of our Web Application Perimeter Monitoring offering, which assesses hundreds or thousands of applications at once to identify problematic ones and does not attempt to sign into each application.
The comparison between DynamicDS and DynamicMP comes with two caveats. First, the samples differ: not every application scanned with one technique is scanned with the other. Second, while we can say with assurance that every SQL Injection vulnerability found by DynamicMP is pre-authentication, not every one found by DynamicDS is post-authentication, though we can reasonably assume that most are.
For every application with an SQL Injection found by DynamicMP (and therefore pre-auth), there are 5.2 applications with SQL Injection found by DynamicDS. So there’s a gap, but not as big as one might expect.
Some observant commenters pointed out a production error in Figure 3 of the report, which showed two entries for SQL Injection prevalence. If your copy of the report has this error, we apologize, and you can download the corrected version from us here.
As we said, we deeply appreciate the opportunity to answer questions about this research, so keep ’em coming!