A conversation on Twitter this morning started out like this:

@dinodaizovi: Finding vulnerabilities without exploiting them is like putting on a dress when you have nowhere to go.

This clever analogy spurred a discussion about the importance of proving exploitability as a prerequisite to fixing bugs. While I agree that nothing is more convincing than a working exploit, there will always be a greater volume of bugs discovered than there are vulnerability researchers to write exploits for them. Don't get me wrong -- as a former penetration tester, I agree that it is fun to write exploits; it just shouldn't be a gating factor. Putting the burden of proof on the researcher to develop an exploit is not scalable, nor does it help create a development culture that improves software security over the long term.

A related topic, and one that hits closer to home for me, is how software developers deal with the results of static analysis. Static analysis is often misunderstood, particularly by people who have only dealt with dynamic analysis (fuzzing, web scanning, etc.) or penetration testing in the past. Because static analysis detects flaws without actually executing the target application, there's an increased likelihood of finding "noise" (insignificant flaws) or false positives. On the other hand, static analysis provides broader coverage, often detecting flaws in complex code paths that a web scan or human tester would be unlikely to find. So there's your trade-off.

Here's a conversation I have all too frequently, paraphrased:

DEVELOPER
I don't think I should have to fix this SQL injection flaw unless you can prove to me that it's exploitable.

ME
Static analysis isn't performed against a running instance of the application. Not all flaws will be exploitable vulnerabilities, but some of them almost certainly are. Here, let me show you all of the code paths where untrusted user input enters the application and eventually gets used in the ad-hoc SQL query we've marked as a bug.

DEVELOPER
But what's the URL that I can click on to exploit it?

ME
Static analysis is different from a penetration test. The output of our analysis is a code path, not a URL. URL construction cannot be derived solely from the application code, because it depends on outside factors such as how the web server and application server are configured. Moreover, we don't have the necessary context of how this flaw fits into the business logic of the application. Maybe this functionality is only accessible by certain users when their accounts are in a particular status. It might take a couple hours working closely with a developer in a test environment to come up with the attack URL. It might take several more hours to write a script around that attack URL to mine the database. On the other hand, it would take about 10 minutes to replace that ad-hoc query with a parameterized prepared statement.
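To make the "10 minutes" claim concrete, here is a minimal sketch of the two approaches side by side. The table, column names, and payload are invented for illustration, using Python's built-in sqlite3 module rather than any particular application stack:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

untrusted = "bob' OR '1'='1"  # attacker-controlled input

# Ad-hoc query: untrusted input is concatenated into the SQL text,
# so the quote characters change the meaning of the query.
adhoc = f"SELECT role FROM users WHERE name = '{untrusted}'"
leaked = conn.execute(adhoc).fetchall()  # matches every row

# Parameterized prepared statement: the driver passes the input as
# data, never as SQL, so the same payload matches nothing.
safe = conn.execute(
    "SELECT role FROM users WHERE name = ?", (untrusted,)
).fetchall()
```

With the ad-hoc query, `leaked` contains both rows; with the parameterized version, `safe` is empty. The fix is a one-line change, and it works regardless of whether anyone ever constructs the attack URL.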

DEVELOPER
Well, if you can't demonstrate the vulnerability, then it's not real.

ME
Demonstrating a working exploit certainly proves a system is vulnerable. But the lack of a working exploit is hardly proof that it's not vulnerable. You could spend the time to investigate every single flaw to figure out which ones are exploitable, or you could fix them all in such a way that you're guaranteed they won't be vulnerable. In our opinion, the time is better spent on the latter.

DEVELOPER
[more defensiveness]

ME
[bangs head against wall]

Now imagine that conversation stretching out to 30 minutes or more. They could've fixed a half-dozen flaws already. And it's not limited to SQL injection. For example, consider cross-site scripting (XSS):

DEVELOPER
I need you to prove that this XSS flaw is exploitable.

ME
How about just applying the proper output encoding so you know the untrusted input will be rendered safely by the browser?
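The "proper output encoding" here typically means HTML entity encoding of untrusted input before it is written into the page. A minimal sketch using Python's standard library (the payload is illustrative):

```python
import html

untrusted = '<script>alert("xss")</script>'

# Entity-encode before rendering, so the browser treats the input
# as text rather than as markup.
encoded = html.escape(untrusted)
# encoded == '&lt;script&gt;alert(&quot;xss&quot;)&lt;/script&gt;'
```

Whether the payload was reachable from a URL or not, the encoded output cannot break out into script context.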

Buffer overflows:

DEVELOPER
I need you to prove that this buffer overflow is exploitable.

ME
How about just using a bounded copy or putting in a length check, so you know the buffer won't overflow?
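In C, the bounded copy would be something like snprintf or an explicit length check before the copy. To keep the examples in one language, here is the same "check the length before you copy" pattern sketched against a fixed-size buffer in Python -- purely an illustration of the idea, since Python itself is memory-safe:

```python
BUF_SIZE = 16  # capacity of the destination buffer

def bounded_copy(dest: bytearray, src: bytes) -> int:
    """Copy at most len(dest) bytes of src into dest; never overflow."""
    n = min(len(src), len(dest))
    dest[:n] = src[:n]
    return n  # number of bytes actually copied

buf = bytearray(BUF_SIZE)
copied = bounded_copy(buf, b"A" * 100)  # oversized input is truncated
```

The destination can never grow past its declared capacity, so there is nothing left to argue about regarding exploitability.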

By now you get the picture. Many developers want proof, to the extent that they'll sacrifice efficiency to get it. If we are to improve software over the long haul, developers must learn to recognize situations where it takes less time to patch a bug than to argue about its exploitability. On a more positive note, from someone who talks to static analysis customers on a daily basis, the tide is starting to turn in the right direction. But it is still an uphill battle.


Chris Eng, vice president of research, is responsible for integrating security expertise into Veracode’s technology. In addition to helping define and prioritize the security feature set of the Veracode service, he consults frequently with customers to discuss and advance their application security initiatives. With over 15 years of experience in application security, Chris brings a wealth of practical expertise to Veracode.
