A conversation on Twitter this morning started out like this:

@dinodaizovi: Finding vulnerabilities without exploiting them is like putting on a dress when you have nowhere to go.

This clever analogy spurred a discussion about the importance of proving exploitability as a prerequisite to fixing bugs. While I agree that nothing is more convincing than a working exploit, there will always be a greater volume of bugs discovered than there are vulnerability researchers to write exploits for them. Don't get me wrong -- as a former penetration tester, I agree that it is fun to write exploits; it just shouldn't be a gating factor. Putting the burden of proof on the researcher to develop an exploit is not scalable, nor does it help create a development culture that improves software security over the long term.

A related topic, and one that hits closer to home for me, is how software developers deal with the results of static analysis. Static analysis is often misunderstood, particularly by people who have only dealt with dynamic analysis (fuzzing, web scanning, etc.) or penetration testing in the past. Because static analysis detects flaws without actually executing the target application, there's an increased likelihood of finding "noise" (insignificant flaws) or false positives. On the other hand, static analysis provides broader coverage, often detecting flaws in complex code paths that a web scan or human tester would be unlikely to find. So there's your trade-off.

Here's a conversation I have all too frequently, paraphrased:

I don't think I should have to fix this SQL injection flaw unless you can prove to me that it's exploitable.

Static analysis isn't performed against a running instance of the application. Not all flaws will be exploitable vulnerabilities, but some of them almost certainly are. Here, let me show you all of the code paths where untrusted user input enters the application and eventually gets used in the ad-hoc SQL query we've marked as a bug.

But what's the URL that I can click on to exploit it?

Static analysis is different from a penetration test. The output of our analysis is a code path, not a URL. URL construction cannot be derived solely from the application code, because it depends on outside factors such as how the web server and application server are configured. Moreover, we don't have the necessary context of how this flaw fits into the business logic of the application. Maybe this functionality is only accessible by certain users when their accounts are in a particular status. It might take a couple of hours working closely with a developer in a test environment to come up with the attack URL. It might take several more hours to write a script around that attack URL to mine the database. On the other hand, it would take about 10 minutes to replace that ad-hoc query with a parameterized prepared statement.

Well, if you can't demonstrate the vulnerability, then it's not real.

Demonstrating a working exploit certainly proves a system is vulnerable. But the lack of a working exploit is hardly proof that it's not vulnerable. You could spend the time investigating every single flaw to figure out which ones are exploitable, or you could fix them all in such a way that you're guaranteed they can't be exploited. In our opinion, the time is better spent on the latter.

[more defensiveness]

[bangs head against wall]
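
For the record, here is roughly what that ten-minute fix looks like. This is a minimal sketch, not actual customer code: it assumes a C application querying SQLite through its C API, and the names (lookup_user, the users table) are made up for illustration. The bind-instead-of-concatenate pattern is the same in any database API that supports parameterized statements.

    #include <sqlite3.h>

    /* VULNERABLE: untrusted input is concatenated into the SQL text,
     * so input like  ' OR '1'='1  changes the structure of the query:
     *
     *   char sql[256];
     *   snprintf(sql, sizeof sql,
     *            "SELECT id FROM users WHERE name = '%s'", name);
     *   sqlite3_exec(db, sql, NULL, NULL, NULL);
     */

    /* FIXED: a prepared statement with a bound parameter. The input is
     * always treated as data, never as SQL, so it cannot alter the
     * query no matter what an attacker sends. */
    int lookup_user(sqlite3 *db, const char *name)
    {
        sqlite3_stmt *stmt = NULL;
        if (sqlite3_prepare_v2(db, "SELECT id FROM users WHERE name = ?",
                               -1, &stmt, NULL) != SQLITE_OK)
            return -1;

        sqlite3_bind_text(stmt, 1, name, -1, SQLITE_TRANSIENT);

        int id = -1;
        if (sqlite3_step(stmt) == SQLITE_ROW)
            id = sqlite3_column_int(stmt, 0);

        sqlite3_finalize(stmt);
        return id;
    }

Because the input is bound as data, it can never change the shape of the query -- which is the very guarantee the "prove it first" argument is trying to approximate.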

Now imagine that conversation stretching out to 30 minutes or more. They could've fixed a half-dozen flaws already. And it's not limited to SQL injection. For example, consider cross-site scripting (XSS):

I need you to prove that this XSS flaw is exploitable.

How about just applying the proper output encoding so you know the untrusted input will be rendered safely by the browser?
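
To make "proper output encoding" concrete, here is a minimal sketch of HTML entity encoding in C. The helper name (html_encode) is made up, and it covers only the HTML body context; a real application should lean on its framework's encoder, since attribute, JavaScript, and URL contexts each need different rules.

    #include <stdio.h>

    /* Write untrusted text into an HTML body context with the five
     * significant metacharacters replaced by entities, so the browser
     * renders the input as text instead of interpreting it as markup. */
    static void html_encode(FILE *out, const char *s)
    {
        for (; *s; s++) {
            switch (*s) {
            case '&':  fputs("&amp;",  out); break;
            case '<':  fputs("&lt;",   out); break;
            case '>':  fputs("&gt;",   out); break;
            case '"':  fputs("&quot;", out); break;
            case '\'': fputs("&#39;",  out); break;
            default:   fputc(*s, out);       break;
            }
        }
    }

    int main(void)
    {
        /* Prints &lt;script&gt;alert(1)&lt;/script&gt; -- inert text. */
        html_encode(stdout, "<script>alert(1)</script>");
        return 0;
    }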

Buffer overflows:

I need you to prove that this buffer overflow is exploitable.

How about just using a bounded copy or putting in a length check, so you know the buffer won't overflow?
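
And again, the fix is a line or two. A minimal sketch with hypothetical names (set_name, NAME_MAX_LEN): check the length before copying, or use a bounded copy such as snprintf, and the buffer cannot overflow no matter how long the input is.

    #include <string.h>

    #define NAME_MAX_LEN 64

    /* VULNERABLE: no length check, so input longer than the buffer
     * writes past the end of it and corrupts adjacent memory:
     *
     *   char name[NAME_MAX_LEN];
     *   strcpy(name, input);
     */

    /* FIXED: reject oversized input up front, then copy. A bounded copy
     * such as snprintf(name, name_size, "%s", input) also works, with
     * the difference that it silently truncates instead of failing. */
    int set_name(char *name, size_t name_size, const char *input)
    {
        size_t len = strlen(input);
        if (len >= name_size)   /* no room for the terminating NUL */
            return -1;          /* fail instead of overflowing */
        memcpy(name, input, len + 1);
        return 0;
    }

    /* Usage:  char name[NAME_MAX_LEN];
     *         if (set_name(name, sizeof name, input) != 0) ... reject */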

By now you get the picture. Many developers want proof, to the extent that they'll sacrifice efficiency to get it. If we are to improve software over the long haul, developers must learn to recognize situations where it takes less time to patch a bug than to argue about its exploitability. On a more positive note, as someone who talks to static analysis customers on a daily basis, I can say the tide is starting to turn in the right direction. But it is still an uphill battle.


Comments (8)

Anton | November 20, 2009 7:18 pm

Developing some exploits requires skill and time. It might also be outside the scope of your contract. As a last resort, I normally request permission to publish the bug on a public security mailing list and let other people confirm it. You didn't see such mails, did you?.. ;-)
Some bugs might not look exploitable, even in your humble opinion. Lower the risk rating, but it has to be fixed anyway.

Andy Steingruebl | November 20, 2009 8:14 pm

I've had a policy that during a pentesting engagement we only have testers craft working exploits when we have to: to demonstrate a complicated bug for the purpose of verifying that we have fixed it, not for the purpose of getting it fixed.

Also, sometimes the best way for a QA person to understand a bug is through an exploit, or at least a partially working one.

Of course, where I work we actually take security bugs very seriously, and not everyone has this luxury.

CEng | November 20, 2009 9:20 pm

@Anton: I think you're agreeing with me but I'm not sure!

@Andy: I agree that a QA person or a developer benefits from seeing an actual exploit the first time they are exposed to a vulnerability class they haven't seen before. It's the "prove it to me or it's not real" mindset that I'm tiring of.

Andy Steingruebl | November 21, 2009 1:38 am

Sorry, I was in complete agreement. Luckily I work somewhere where I can easily convince people to fix those types of bugs. Yes, it's quite tiring to have to create a working exploit for obviously buggy code...

King | November 22, 2009 3:20 pm

Even if you could prove a bug is not exploitable today, that's still no reason not to fix it. It could become exploitable tomorrow when some new feature is added.

Joseph Webster | November 24, 2009 2:09 pm

As a security pro AND developer I'm constantly amazed by the bizarre attitude that some of my boneheaded cohorts have towards testers in general. They seem to take it as a personal affront when errors are found in their code. The developer should thank you for finding ANY bug (i.e. doing their job for them and preventing them from really looking like an idiot when it's most important) rather than giving you static. It's reasonable to assume that you should be willing to give developers all of the details to assist in the fix, or that there may be a dispute as to the nature of the flaw. But if it's broke you fix it and thank whoever is trying to keep your buns out of the wringer. I can tell you for certain that if any of my direct reports reacted like the code monkey in your story, I'd bitch slap them so hard their unborn children would be well behaved.

ehay2k | December 9, 2009 11:34 am

It is fascinating that people will spend more time defending their position of not fixing something rather than just implementing the fix. This applies not just to developers, but to anyone who creates or repairs things: carpenters, plumbers, bakers, etc. Pride is a funny thing.

Here's what I ask them when I get pushback: Do you wash your hands after you use the restroom? Times are tough, so we are thinking of saving $$ by removing the soap and turning off the sinks. I guess if you can do an audit to show all the microbial threats in the restroom, and then show that they WILL infect you (not just that they might, or that sometime down the road they may mutate or your immunity will diminish), then we can let you continue to wash your hands.

I find it always helps to have people take a more personal perspective on a problem.

TheReality | February 27, 2014 11:00 pm

@Joseph Webster: Typical Internet tough guy. Given the level of your hyperbole and vitriol, I'd wager that it's the "code monkeys" doing most of the bitch slapping at your office, not the self-styled "security pros". Your response indicates that you appear to know very little about security, testing or development and obviously have a chip on your shoulder regarding developers, probably because you failed as one.
