I'm finally getting around to finishing my post on minimizing attack surfaces. Here's Part 1, in case you missed it.

First, a quick clarification. I noticed that some of the readers who commented on that first post wanted to talk about improving security through the use of various development methodologies or coding frameworks. Those are interesting tangents (and ones that I may write about in the future), but my intention with this post is to discuss a very specific problem related to how people integrate third-party code -- that is, the stuff you import or link in but didn't write yourself.

As I mentioned previously, developers have a tendency to "bolt on" third-party components to applications without understanding the security implications. Often, these components are glossed over or ignored completely during threat modeling discussions. I attempted to illustrate this with my fictitious WhizBang library example in Part 1.

When integrating a third-party component, developers familiarize themselves with the API but generally don't care how it's implemented. Granted, that's how an API is supposed to work; you don't have to futz around with code beyond the API boundary, and you can blissfully ignore parts of the library that you don't need. In past consulting gigs, I've sat in threat modeling discussions where nobody knew whether a particular library generated network traffic. "We just use the API," they'd say. The fact that it works is good enough; nobody seems to care how it works.

That mindset is ideal for rapid development but problematic for security. Failing to understand the complete application, as opposed to just the part you wrote, prevents you from accurately assessing its security posture.

It's also no coincidence that web app pen testers love third-party components -- we get excited when we see "bolted on" interfaces, because we know that developers tend to leave extraneous functionality exposed. The resulting findings usually generate reactions such as "I didn't even know that servlet had an upload function."

An Example

Here's a close-to-home example related to my post about DWR 2.0.5 from the other day. DWR is an Ajax framework that has a variety of operating modes. In-house, we use a subset of DWR's full functionality -- specifically, we interact with it using the "plaincall" method only, so we made sure that the features we didn't need were disabled via the configuration file. As it turned out, there were vulnerable code paths prior to the "do you have this thing disabled" check. In hindsight, if we had taken more time to understand the exposed interfaces, we could have reduced the attack surface by filtering out unneeded request patterns before they even touched the third-party code.
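To make that "filter before it touches the third-party code" idea concrete, here's a minimal sketch of the kind of request-pattern gate I mean, if you wanted to enforce it in application code rather than in container configuration. The class name, method name, and specific paths are all hypothetical, not part of DWR's API:

```java
// Hypothetical pre-filter: decide whether a request path should ever
// reach the DWR dispatcher. Names here are illustrative, not DWR API.
public class DwrRequestGate {

    // Only the plaincall dispatcher and DWR's client-side script files
    // are allowed; everything else is rejected before it touches
    // third-party code.
    static boolean isAllowed(String path) {
        if (path.startsWith("/dwr/call/plaincall/")) {
            return true;
        }
        // DWR also serves a couple of JavaScript files that clients need.
        return path.equals("/dwr/engine.js") || path.equals("/dwr/util.js");
    }

    public static void main(String[] args) {
        System.out.println(isAllowed("/dwr/call/plaincall/Chat.addMessage.dwr")); // true
        System.out.println(isAllowed("/dwr/call/someothercodepath"));             // false
    }
}
```

In practice you'd wire this up as a servlet Filter mapped in front of the DWR servlet, but the decision logic is the interesting part: a short, explicit list of what's allowed in, with everything else turned away.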

But wait, you say. What about maintainability? If I whitelist using a point-in-time application profile, doesn't this create the same maintenance headache as the reviled WAF? It doesn't have to. Certainly, one option would be to whitelist each and every unique URL that references the DWR framework, e.g.
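An exhaustive whitelist of that sort might look something like this in the container's web.xml, with one servlet-mapping per exposed call (the Chat and Account interfaces here are made-up examples, and your servlet-name may differ):

```xml
<!-- Hypothetical per-URL whitelist; the interface and method names
     are invented for illustration. -->
<servlet-mapping>
  <servlet-name>dwr-invoker</servlet-name>
  <url-pattern>/dwr/call/plaincall/Chat.addMessage.dwr</url-pattern>
</servlet-mapping>
<servlet-mapping>
  <servlet-name>dwr-invoker</servlet-name>
  <url-pattern>/dwr/call/plaincall/Account.getBalance.dwr</url-pattern>
</servlet-mapping>
```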


But then you'd have to update the whitelist every time you added or removed functionality from your application. Also, don't lose sight of the security goal, which is to minimize the amount of exposed third-party code. If I add or remove URLs from that list, provided they are still using the "plaincall" method, I'm hitting the same DWR dispatcher every time. So I've increased maintenance cost without any security benefit.

A better option is to simply tighten the URL pattern a bit in the J2EE container. Here's the default configuration:
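The stock DWR setup maps everything under /dwr/ to the invoker servlet, roughly like this (the servlet-name may vary depending on how your install is set up):

```xml
<servlet-mapping>
  <servlet-name>dwr-invoker</servlet-name>
  <url-pattern>/dwr/*</url-pattern>
</servlet-mapping>
```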


Now, instead of allowing every URL starting with /dwr/ to be processed by the DWR library, you could be a little more restrictive:
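Something along these lines, assuming the same servlet-name as above:

```xml
<servlet-mapping>
  <servlet-name>dwr-invoker</servlet-name>
  <url-pattern>/dwr/call/plaincall/*</url-pattern>
</servlet-mapping>
```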


In this configuration, you don't have to worry about /dwr/call/someothercodepath anymore. There is less third-party code exposed, thereby reducing the overall attack surface of the application. (NB: DWR also serves up a couple of JavaScript files, so those URL patterns will have to be whitelisted too.)

A Logical Extension

Even if you're not a developer, you should still be thinking about attack surfaces. People download and install blogging platforms such as WordPress, Movable Type, etc. all the time, but how many take additional steps to harden their installations? The concept is the same as the OS hardening analogy I brought up at the very beginning of this discussion.

Similarly, how many people install third-party WordPress plugins or Joomla components without considering that most of them are written by some random programmer who is a whiz with the plugin API but knows nothing about security?

At the risk of sounding trite, always remember that security is only as strong as the weakest link.

About Chris Eng

Chris Eng, vice president of research, is responsible for integrating security expertise into Veracode’s technology. In addition to helping define and prioritize the security feature set of the Veracode service, he consults frequently with customers to discuss and advance their application security initiatives. With over 15 years of experience in application security, Chris brings a wealth of practical expertise to Veracode.
