Is anyone else getting tired of hearing excuses from customers -- and worse yet, the security community itself -- about how hard it is to fix cross-site scripting (XSS) vulnerabilities? Oh, come on. Fixing XSS is like squashing ants, but some would have you believe it's more like slaying dragons. I haven't felt inspired to write a blog post in a while, but every once in a while, 140 characters just isn't enough. Grab your cup of coffee, because I may get a little rambly.

Easy to Fix vs. Easy to Eradicate

Let's start with some terminology to make sure we're all on the same page. Sometimes people will say XSS is "not easy to fix" but what they really mean is that it's "not easy to eradicate." Big difference, right? Not many vulnerability classes are easy to eradicate. Take buffer overflows as an example. Buffer overflows were first documented in the early 1970s and began to be exploited heavily in the 1990s. We understand exactly how and why they occur, yet they are far from extinct. Working to eradicate an entire vulnerability class is a noble endeavor, but it's not remotely pragmatic for businesses to wait around for it to happen. We can bite off chunks through OS, API, and framework protections, but XSS or any other vulnerability class isn't going to disappear completely any time soon. So in the meantime, let's focus on the "easy to fix" angle because that's the problem developers and businesses are struggling with today.

It's my belief that most XSS vulnerabilities can be fixed easily. Granted, it's not as trivial as wrapping a single encoding mechanism around any user-supplied input used to construct web content, but once you learn how to apply contextual encoding, it's really not that bad, provided you grok the functionality of your own web application. An alarming chunk of reflected XSS vulnerabilities are trivial: they read the value of a GET/POST parameter and write it directly into an HTML page. Plenty of others are only marginally more complicated, such as retrieving a user-influenced value from the database and writing it into an HTML attribute. I contend both of these examples are easy for a developer to fix; tell me if you disagree. Basic XSS vulnerabilities like these are still very prevalent.
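
To put the trivial case in concrete terms, here's a minimal sketch of what the fix looks like. The servlet, the "q" parameter, and the use of OWASP ESAPI's encoder are illustrative assumptions on my part (ESAPI also expects an ESAPI.properties file on the classpath), not a prescription for any particular app:

    import java.io.IOException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;
    import org.owasp.esapi.ESAPI;

    public class SearchServlet extends HttpServlet {
        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
            String query = req.getParameter("q"); // attacker-controlled input
            resp.setContentType("text/html");

            // Vulnerable version: the raw parameter goes straight into the HTML body.
            // resp.getWriter().println("<p>Results for " + query + "</p>");

            // The one-line fix: encode for the HTML body context before writing.
            resp.getWriter().println("<p>Results for "
                    + ESAPI.encoder().encodeForHTML(query) + "</p>");

            // The "marginally more complicated" case: a value landing inside an HTML
            // attribute calls for the attribute encoder instead.
            resp.getWriter().println("<input type=\"text\" name=\"q\" value=\""
                    + ESAPI.encoder().encodeForHTMLAttribute(query) + "\">");
        }
    }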

Of course, there are edge cases. Take this freakish example, which combines browser-specific parsing behavior with the ill-advised use of tainted input in JavaScript code. Exceptions will always exist, but that doesn't change the fact that most XSS flaws are straightforward to fix. We can take a huge bite out of the problem by eliminating these basic reflected cases, just like we started attacking buffer overflows by discouraging the use of unbounded string manipulation functions. Some will claim "developers shouldn't be responsible for writing secure code," which is noble and idealistic but also completely impractical in this day and age. Maybe it'll happen eventually, but in the meantime there are fires to put out. So let's step down from those ivory towers and impose some accountability.
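
As an aside, here's roughly what the JavaScript-context problem looks like, since that's where the trickier cases tend to live. This is a minimal sketch; the payload and the choice of ESAPI's JavaScript encoder are my own illustrative assumptions:

    import org.owasp.esapi.ESAPI;

    public class JsContextDemo {
        public static void main(String[] args) {
            String tainted = "';alert(1);//"; // hypothetical attacker-supplied value

            // Dropped in raw, the leading quote closes the string literal and the
            // rest of the value executes as script:
            System.out.println("<script>var greeting = 'Hello, " + tainted + "';</script>");

            // Escaped for the JavaScript string context (ESAPI emits \xHH-style
            // escapes), the quote can no longer terminate the string:
            System.out.println("<script>var greeting = 'Hello, "
                    + ESAPI.encoder().encodeForJavaScript(tainted) + "';</script>");
        }
    }

Note that HTML entity encoding wouldn't be the right tool here; it's a different output context, which is the whole point of contextual encoding.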

Ease of Fix vs. Willingness to Fix

I've heard the assertion that XSS vulnerabilities aren't getting fixed because they are difficult to fix. The answer to "what percentage of XSS vulnerabilities actually get fixed and deployed to production?" is a valuable metric for the business, but it doesn't reflect the actual difficulty of fixing an XSS vulnerability. It conflates technical complexity with all the other reasons -- excuses, in many cases -- why website vulnerabilities don't get fixed.

At Veracode, we collected data in our State of Software Security Vol. 2 report that reveals developers are capable of fixing security issues quickly. While our data isn't granular enough to state exactly how long it took to fix a particular flaw, we do know that in cases where developers did choose to remediate flaws and rescan, they reached an "acceptable" level of security in an average of 16 days. This isn't to say that every XSS was eliminated, but it suggests that most were (more details on our scoring methodology can be found in the appendix of the report).

WhiteHat's Fall 2010 study shows that nearly half of XSS vulnerabilities are fixed, and that doing so takes their customers an average of 67 days. These numbers differ from ours -- particularly with regard to the number of days -- but I think that can be attributed to prioritization. Perhaps fixing the XSS vulnerability didn't rise to the top of the queue until day 66. Again, that's more an indication that the business isn't taking XSS seriously than of the technical sophistication required to fix it.

At Veracode, we see thousands -- sometimes tens of thousands -- of XSS vulnerabilities a week. Many are of the previously described trivial variety that can be fixed with a single line of code. Some of our customers upload a new build the following day; others never do. Motivation is clearly a factor. Think about the XSS vulnerabilities that hit highly visible websites such as Facebook, Twitter, MySpace, and others. Sometimes those companies push XSS fixes to production in a matter of hours! Are their developers really that much better? Of course not. The difference is how seriously the business takes it. When they believe it's important, you can bet it gets fixed.

Manufactured Contempt

There's a growing faction that believes security practitioners are not qualified to comment on the difficulty of security fixes (XSS or otherwise) because we're not the ones writing the code. The ironic thing is that this position is most loudly voiced by people in the infosec community! It's like they are trying to be the "white knights", coddling the poor, fragile developers so their feelings aren't hurt. Who are we to speak for them? I find the entire mindset misguided at best, disingenuous and contemptuous at worst. To be fair, Dinis isn't the only one who has expressed this view; he's just the straw that broke the camel's back, so to speak. You know who you are.

Look, the vast majority of security professionals aren't developers and never have been (notable exceptions include Christien Rioux, HD Moore, Halvar Flake, etc.). Trust me, we know it. I've written lots of code that I'd be horrified for any real developer to see. My stuff may be secure, but I'd hate to be the guy who has to maintain, extend, or even understand it. Here's the thing -- even though I can guarantee you I'd be terrible as a developer, most XSS flaws are so simple that even a security practitioner like me could fix them! Here's another way of looking at it: developers solve problems on a daily basis that are much more complex than patching an XSS vulnerability. Implying that fixing XSS is "too hard" for them is insulting!

That being said, who says we're not qualified to comment on a code-level vulnerability if we're not the ones writing the fix? In fact, who's to say that the security professional isn't more qualified to assess the difficulty in some situations? Specifically, if a developer doesn't understand the root cause, how can he possibly estimate the effort to fix? I've been on readouts where developers initially claim that several hundred XSS flaws will take a day each to fix, but once they understand how simple the fix is, they realize they can knock them all out in a week. Communication and education go a long way. Sure, sometimes there are complicating factors involved that affect remediation time, but I can't recall a time where a developer has told me my estimate was downright unreasonable.

Bottom line: By and large, I don't think developers feel miffed or resentful when we try to estimate the effort to fix a vulnerability. They know that what we say isn't the final word, it's simply one input into a more complex equation. Yes, developers do get annoyed when it seems like the security group is creating extra work for them, but that's a different discussion altogether.

Ceteris Paribus

One final pet peeve of mine is the rationalization that security vulnerabilities take longer to fix because you have to identify the root cause, account for side effects, test the fix, and roll it into either a release or a patch. As opposed to other software bugs, where fixes are accomplished by handwaving and magic incantations? Of course not; these steps are common to just about any software bug. In fact, I'd argue that identifying the root cause of a security vulnerability is much easier than hunting down an unpredictable crash, a race condition, or any other non-trivial bug. Come to think of it, testing the fix may be easier too, at least compared to a bug that's intermittent or hard to reproduce. As for side effects and other QA testing, this is why we have regression suites! If you build software and you don't have the capability to run an automated regression suite after fixing a bug, then let's face it, you've got bigger problems than wringing out a few XSS vulnerabilities.
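
To be concrete about what "testing the fix" can look like, here's a minimal regression test for the trivial reflected case described earlier. The helper method, the payload, and the use of JUnit 4 with ESAPI are illustrative assumptions on my part, not a claim about any particular codebase:

    import static org.junit.Assert.assertFalse;
    import static org.junit.Assert.assertTrue;

    import org.junit.Test;
    import org.owasp.esapi.ESAPI;

    public class SearchRenderingTest {

        // The fragment-building logic under test; in a real app this would live in
        // the application code rather than in the test class.
        static String renderResult(String query) {
            return "<p>Results for " + ESAPI.encoder().encodeForHTML(query) + "</p>";
        }

        @Test
        public void scriptPayloadDoesNotSurviveEncoding() {
            String rendered = renderResult("<script>alert(1)</script>");

            // The raw payload must not appear in the output...
            assertFalse(rendered.contains("<script>"));

            // ...while the markup we intended to emit is still intact.
            assertTrue(rendered.startsWith("<p>Results for "));
            assertTrue(rendered.endsWith("</p>"));
        }
    }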

My high school economics teacher used the term "ceteris paribus" at least once per lecture. Loosely translated from Latin, it means "all other things being equal" and it's often used in economics and philosophy to enable one to describe outcomes without having to account for other complicating factors. The ceteris paribus concept doesn't apply perfectly to this situation, but it's close enough for a blog post, to wit: ceteris paribus, fixing a security-related bug is no more difficult than fixing any other critical software bug. Rattling off all the steps involved in deploying a fix is just an attempt at misdirection.

Closing Thoughts

My hope in writing this post is to spur some debate around the reasons, excuses, and rationalizations that often accompany the surprisingly divisive topic of XSS. I want to hear from both security practitioners and developers on where you think I've hit or missed the mark. We don't censor comments here, but there is a moderation queue, so bear with us if your comment takes a few hours to show up.


About Chris Eng

Chris Eng, vice president of research, is responsible for integrating security expertise into Veracode’s technology. In addition to helping define and prioritize the security feature set of the Veracode service, he consults frequently with customers to discuss and advance their application security initiatives. With over 15 years of experience in application security, Chris brings a wealth of practical expertise to Veracode.

Comments (11)

r | September 27, 2010 10:10 pm

Hey Chris, nice post. I agree with your bottom line. Here's my $.02 on your post:

Thankfully my cave and cave dwelling counterparts haven't heard of this "developers shouldn't be responsible for writing secure code" so we can leave that bit out of the discussion. :)

I haven't heard much in the vein of "too difficult to fix"; more often the client needs help in discovering the root cause. That is the excuse I've heard most often.

It's rather sad, I guess. I've found that places offering web apps send out v1.0 of their code with horrible user input sanitization and then follow up with a v1.02 that contains fixes for maybe half the XSS issues (given that you are contracted to perform an assessment and you report 50 XSS issues and say 'Your app sucks so bad that if I wrote up every single XSS issue I'd be billing you for 8hrs per page'). It's very rare that they will fix the systemic issue. The shops with an internal team typically will attempt to fix all known (reported) and unknown (not reported directly) issues. Shops that contract out parts of the code will *only* fix exactly what's pointed out to them.

I question why developers have so much trouble thinking twice before blindly trusting user-supplied input in the first place. Over the last 7 years I've seen maybe 4 issues with cookies or UA strings; the rest were values supplied by the user in response to variables fed to them. No trickery there. Yet developers are foiled by a quote or two that get sent down range. I would understand a bit if there were JSON-esque games performing multi-request/response parsing trickery feeding back to build some overblown UI, but we are talking about simple quotes and script tags most of the time.

A lot of these XSS issues can probably be blamed on the sales force and senior management's willingness to allow the sales force to drive the feature set of the application. I would say that code release dates are, by and large, set by sales. This fact would force security into a secondary (tertiary?) role. In my limited experience, once sales promises are coded, unit tests are built, then the app is tested and shipped/deployed. If security issues are discovered at any time during that process, they're added to the backlog and given priority based on arbitrary values.

I personally have not had excessive push-back from XSS fix suggestions. If there is any issue between the security tester and the developer it's usually resolved when the developer starts to school the security tester, who in turn then changes the generic fix, based upon the newly supplied and previously unknown background information, into a specific fix. Taking the "we're partners" approach with the developers usually puts everyone on the proper playing field.

Perhaps the issue is still that web apps aren't being designed with any regard to security from the start. Or that browsers suck and we should build a new client server model that doesn't involve HTTP or HTML. :)

Sorry, a bit rambly and tangential. It's been a long Monday.

Jeff Williams | September 28, 2010 7:21 pm

I think discussing the difficulty of "fixing" a single XSS problem is just silly. It varies tremendously depending on the environment and architectural approach chosen. As an example, we recently helped an organization eliminate (and test) over 1000 XSS holes in just a few weeks by leveraging ESAPI and making some strategic presentation layer changes.

I would like to see much more talk about the difficulty of "eradication" though. Does it really matter if you fix only some of the XSS holes? It's not like it's much more difficult for a human attacker to find the "freakish" examples. Until you have some basic defenses in place, I'm not even sure it makes sense to scan or test for XSS. You already know it's there.

I've been working with several large organizations to eliminate XSS (among other things) across their codebase (e.g. 35MLOC+) for several years now. I'm just a security guy, but I think I've earned the right to discuss the difficulty of eradication. However, I wouldn't want to discuss the difficulty of a "fix" because it just doesn't make sense.

I think we've made eradication much easier with the XSS Prevention Cheat Sheet and supporting ESAPI libraries. But there's still more work to do. Many GUI libraries don't properly escape. The browser supports a ridiculous number of escaping formats. Basically nobody canonicalizes input before validating (assuming they're validating). We have a long way to go.
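
To illustrate the canonicalization point, something along these lines (illustrative only, and it assumes ESAPI with its default configuration on the classpath):

    import org.owasp.esapi.ESAPI;

    public class CanonicalizeFirst {
        public static void main(String[] args) {
            // Percent-encoded payload; a naive check looking for "<script" sees nothing.
            String raw = "%3Cscript%3Ealert(1)%3C%2Fscript%3E";

            // Canonicalize first so validation runs against the real characters rather
            // than an encoded disguise. With default settings, ESAPI will also flag
            // multiple or mixed encodings as suspicious.
            String canonical = ESAPI.encoder().canonicalize(raw);
            System.out.println(canonical); // decoded form: <script>alert(1)</script>

            // ...then apply whatever input validation the application requires to 'canonical'.
        }
    }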

CEng | September 29, 2010 11:06 am

@r: Thanks for your thoughts. I agree that once you start explaining stuff to the developers they usually tend to get it.

@Jeff: Thanks for commenting! I appreciate the insight. I agree that the XSS prevention cheat sheet offers the best guidance out there for XSS avoidance -- that's why I linked to it. But why wouldn't it make sense to scan/test for XSS just because you know there is some lurking? That's like saying, "don't run any QA tests until it's time to ship, we should find the bugs on our own first." Scanning tells you exactly where most of the XSS flaws are. We're all trying to make software safer; sometimes that means fixing individual bugs, other times it means introducing broad defenses, as in the example you gave. Unfortunately, most companies, large or small, don't have the luxury of being able to revamp their entire application to incorporate broad defenses.

You have to work on both problems in parallel. Patching XSS is quick and easy, and if you're exposed, you should spend the few days required to do it. Now if you're serious about really attacking the problem at its core and you have the resources to do it, by all means initiate a longer term effort to build in systematic defenses. But don't ignore the simple stuff.

I'm a realist. Eradication is a noble goal, but it's not going to happen any time soon. We've known about XSS for at least a decade now, and it hasn't gotten any better. We've known about buffer overflows for several decades, and those aren't going away either. I can see SQL injection going away long before XSS does, and even that's quite a ways off.

paranoid | October 4, 2010 1:08 am

Let's fix the real problem, which is: "Why is it so easy to do the wrong thing?" We (the industry) have made it too easy to do the wrong thing. In addition, we have made everybody a programmer with PHP and JavaScript, but we have not done anything to help any of these new programmers avoid the pitfalls. In other words, we have created the ants ourselves. Squashing ants doesn't scale. So let's stop the ant production by fixing the programming languages and web frameworks.

CEng | October 4, 2010 11:23 am

@paranoid: Thanks for your comment. I enjoyed reading your posts as well. Eliminating XSS vulnerabilities one by one definitely doesn't scale, but for the time being, it's a necessary evil. More importantly, it's something we can't brush off simply because we'd rather work on securing the platforms and frameworks. A lot of things don't scale, but you have to keep doing them until there is a better, more scalable solution.

Erzengel | October 4, 2010 2:11 pm

Lemme get this straight: "Developers shouldn’t be responsible for writing secure code".
Then who should be?
Security Practitioners? But "Security Practitioners aren't qualified to comment"!

Developers write the code. They must be responsible for their code. It's like a tax, it has to be done even if you don't like it.
Saying that security practitioners should be responsible, or that they aren't qualified, is like saying QA should be responsible for fixing code, or that QA isn't qualified to comment on the software.
There's a big picture here that people aren't seeing.

paranoid | October 4, 2010 11:07 pm

@Chris: I agree that we have to squash the individual vulnerabilities, but we should not forget to also fix the underlying problem to prevent it from happening again. I am using node.js as a representative of a newer language, and as demonstrated in my blog post, it's way too easy to do the wrong thing. If we ever want to get rid of the ants, we have to prevent new ones from being released.

CEng | October 5, 2010 4:43 pm

@Erzengel: No argument here. But this is the sentiment about developer responsibility that I'm starting to hear more often, both in casual conversation and in BlackHat/OWASP type venues. I don't know if it's so much a lack of the big picture as it is that they think they're being visionaries.

r | October 6, 2010 8:39 am

Sorry, fixing the frameworks is really just plain silly. Any framework used, at this point in life, is going to have issues. The way to fix the problem now is to help/force developers to write better and more secure code. Tangent: It probably starts with writing better test cases for their code. I'm not sure when writing test cases fell out of favor, but at the last few places where I've gotten to speak at length with the developers, I've found that they aren't as diligent about writing their test cases.

But, be that as it may, test cases or no, it's still the developer's responsibility to write secure code. Trusting any input from the end user without proper sanitization, really?

Fixing the frameworks used in developing web applications would be akin to stating that C should not be used to develop software. Think about it.

Onus should be placed upon the developer to do a good job. A good developer should be willing to be made accountable for their code base.

Perhaps it's the "We" in "We've known about XSS for at least a decade now...". I'm not sure that the current set of web developers have been enlightened. Or that they care. I can tell you that out of my last 4 contracts that had external developers (Russian firms, not that that means anything) all of them had serious input validation issues. Very simple GET /do_search.aspx?value= type stuff. Once we talked to them, they fixed the issues (slowly, and for a cost - so says our client). (That does mean something, but who knows exactly what. Perhaps something about repeat work? Perhaps something about features vs security?)

There's something to be said about a strongly written contract between developers and their employers.

paranoid | October 11, 2010 7:38 am

@r "Sorry, fixing the frameworks is really just plain silly. Any framework used, at this point in life, is going to have issues." So you are saying we shouldn't fix any problems in languages/frameworks and instead force developers to write safe code. Why shouldn't we learn from past mistakes and improve. I think there most of the XSS vulnerabilities we see today can be trivially fixed by having the language/framework automatically sanitize input.

How many of the developers you are talking about are real developers?

"Fixing the frameworks used in developing web applications would be akin to stating that C should not be used to develop software. Think about it." I don't see the similarity. But perhaps we could change it to, developers who can't write safe code should not be allowed to write web applications. That will never work. We have to help/prevent developers from making mistakes where ever we can.

r | December 8, 2010 2:38 pm

@paranoid: I think it would be better to focus on writing secure web applications. Educating developers to write better code would help mitigate the issues faster than attempting to fix the language/framework. As an example, look at .NET v1 vs v1.1 vs v2 vs v3 vs v3.5 vs v4. Check the release dates. Check the XSS issues within the frameworks themselves.

Now think about XSS and CSRF. Each version of .NET has failed to fix those issues completely. XSS attacks are harder to perform in a .NET v4 environment, but still not impossible. Version 1 of .NET was released in 2002, version 4 in 2010. In 8 years Microsoft has failed to fix the issues of XSS and CSRF in their frameworks. It is *key* to educate the developers more than it is to fix the language/frameworks.

As to how many of the developers I'm talking about are real developers, I'd say all of them. Given we use the definition of developer which points to the definition of programmer.

Not to drag out the conversation a month later. I just stumbled back here via Chris' newest post. Figured I'd try and state my case again. I still think the key is training the developers properly and then giving them time to write the most secure functional code that they can.
