In lieu of actual technical content, and inspired by Jeremiah's blog post, 8 reasons why website vulnerabilities are not fixed, I started thinking about all the different manifestations of reason #8, "No one at the organization knows about, understands, or respects the issue."

I polled the Veracode research group, most of whom have been security consultants at one time or another, and asked them about the best responses they've heard from customers that reflect a lack of understanding or respect for a pen test finding. These often start with the proclamation, "that's impossible..." followed by one of the statements below.

Developer doesn't understand how the web works

  • "Users can't change the value of a dropdown"
  • "That option is greyed out"
  • "We don't even link to that page"
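
To make the first point concrete: a dropdown only constrains what the browser UI offers, not what the server receives. A minimal sketch using only Python's standard library (the URL and field name here are hypothetical, purely for illustration):

```python
import urllib.parse
import urllib.request

# The form's dropdown might list only "basic" and "premium", but nothing
# stops a client from submitting any value it likes.
data = urllib.parse.urlencode({"account_type": "admin"}).encode()
req = urllib.request.Request("http://example.test/profile", data=data, method="POST")
# urllib.request.urlopen(req)  # server-side, account_type arrives as "admin"
print(data)  # b'account_type=admin'
```

The same goes for greyed-out options and unlinked pages: the server sees whatever request the client chooses to send.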

Developer doesn't understand the difference between network and application security

  • "That application is behind 3 firewalls!"
  • "We're using SSL"
  • "That system isn't even exposed to the outside"

Developer doesn't understand a vulnerability class

  • "That's just an error message" (usually related to SQL Injection)
  • "You can't even fit a valid SQL statement in 10 characters"
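
As for fitting an attack into 10 characters: a classic tautology payload is nine. A self-contained sketch against an in-memory SQLite table (illustrative only, not any customer's code):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, pw TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'secret')")

def login(name, pw):
    # Vulnerable: user input concatenated straight into the query.
    query = f"SELECT name FROM users WHERE name='{name}' AND pw='{pw}'"
    return conn.execute(query).fetchall()

payload = "'or 1=1--"          # nine characters
print(login("x", payload))     # prints [('alice',)] -- password check bypassed
```

The injected quote closes the `pw` string, `or 1=1` makes the WHERE clause true for every row, and `--` comments out the trailing quote.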

Developer doubts attacker motivation

  • "You are using specialized tools; our users don’t use those"
  • "Why would anyone put a string that long into that field?"
  • "It's just an internal application" (in an enterprise with 80k employees and a flat network)
  • "This application has a small user community; we know who is authenticated to it" (huh?)
  • "You have been doing this a long time, nobody else would be able to find that in a reasonable time frame!"

Developer cites incorrect or inadequate architectural mitigations

  • "You can’t execute code from the stack, it is read-only on all Intel processors"
  • "Our WAF protects against XSS attacks" (well, clearly it didn't protect against the one I'm showing you)

Developer cites questionable tradeoffs

  • "Calculating a hash value will be far too expensive" (meanwhile, they're issuing dozens of Ajax requests every time a user clicks a link)
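
For a sense of scale, a quick standard-library measurement (a sketch, not the application in question) of what one hash actually costs:

```python
import hashlib
import timeit

# Time SHA-256 over a token-sized input. A single network round trip costs
# tens of milliseconds; a hash of 64 bytes costs on the order of microseconds.
token = b"x" * 64
n = 100_000
total = timeit.timeit(lambda: hashlib.sha256(token).hexdigest(), number=n)
print(f"~{total / n * 1e6:.1f} microseconds per hash")
```

Exact numbers vary by machine, but the hash is orders of magnitude cheaper than any one of those Ajax requests.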

So that's what we came up with in about half an hour, and I know there are dozens that we've forgotten about in our old age (you know, over age 30). This drives home the point that education is one of the largest gaps in most SDLCs. How can you expect your developers to write secure code when you don't teach them this stuff? You can only treat the symptoms for so long; eventually you have to attack the root cause.

Submit your best "that's impossible" lines in the comments! I know there are some good ones out there.


About Chris Eng

Chris Eng, vice president of research, is responsible for integrating security expertise into Veracode’s technology. In addition to helping define and prioritize the security feature set of the Veracode service, he consults frequently with customers to discuss and advance their application security initiatives. With over 15 years of experience in application security, Chris brings a wealth of practical expertise to Veracode.

Comments (25)

adam | May 19, 2009 10:43 am

"That application must be secure, we hired @Stake to audit it!"

CEng | May 19, 2009 10:47 am

@adam: Man, that's eerily similar to one I used to hear... "That application must be secure, we hired Guardent to audit it!" :>

Chopstick | May 19, 2009 10:51 am

Well, I think you've covered them pretty well. I had a developer tell me once, "We've encrypted the request" (referring to Base64 encoding followed by gzip) "and so the SQL statement we're sending in the POST request is safe."

This was for core banking. I am not making this up.

Shortly after, I had a Python script to fetch contents from tables at will. Including the DBA_USERS table.

Chopstick | May 19, 2009 10:52 am

Edit to last post: Sorry, that should read gzip, followed by Base64 encoding

Adam Baldwin | May 19, 2009 10:56 am

Received this from basecamp support yesterday about multiple XSS vulns in their app. I'm pretty sure that most would agree XSS is not a feature.

"Basecamp intentionally allows HTML (and JavaScript) because many of our users find great value in being able to use that. We're fully aware that this allows for XSS attacks, but Basecamp is based on the notion of trusted parties. You should only allow people into the system that you believe won't hack your system (just as you should only invite people into your office that you don't believe will steal from you). If this was a public system, it would definitely be different. You can't have a public forum today without carefully dealing with XSS issues.

If your friend becomes a foe, you can revoke their account and change your login credentials. Just like you would simply not let them into your office. In the 3+ years we've operated Basecamp, we've never had a single such case occur, though. So it doesn't seem like it's a big problem as may be feared."

Zach | May 19, 2009 11:04 am

"We're aware of problems with [token_used_across_all_applications]. We're phasing that [token] out in favor of the IDM solution, so we're probably not going to fix it."

Thing is...this was in reference to an XSS bug (wherein an example involved cookie theft). They didn't make the connection that the "issues" in the token/identifier had little to do with snarfing it via XSS.

Simple Nomad | May 19, 2009 11:05 am

Some of my favorites from clients:

"A normal user would not do that."

"I didn't pay you for me to look foolish!" (presented the report in a roomful of people, and yes you kind of did)

"That can't be real, you are faking the responses, it does not do that." (during a demo of the flaw, again in front of a roomful of people, and yes, it *did* do that)

"We solved that problem." (my IP address was blocked at the firewall, later the entire website was wiped with no backups)

John Carmichael | May 19, 2009 11:08 am

I once had a customer claim that the custom (horribly weak) crypto algorithm they use to encrypt passwords in a config file isn't a problem, because other config files in the same folder, with the same access rights, contain passwords in plaintext. So an attacker would just take those instead.

This one totally DoS'd my brain for a few seconds.

cji | May 19, 2009 11:09 am

"You just hacked yourself with an alert box! Who Cares? What could anyone do with that?" <-- time for a more thorough demonstration of reflected XSS.

mckt | May 19, 2009 11:11 am

"But you're the only one in the company who knows how to do that" (as if an attacker would tell them).

I recently ran into several critical holes in a financial institution's loan management app. Their response to the vuln report was essentially denial, and mostly laughable:

After I clearly demonstrate session ID prediction (it was simply a timestamp): "this algorithm is extremely secure. of course i’m not going to explain how it’s programmed. we also use hacker and virus proof servers"

After I demonstrate XSS: "we have several safeguards in place. i am not at liberty to discuss them since they are a major key to the security of our site" (Obviously they aren't working)

Loan statements are publicly available PDFs with predictable filenames: "the only place ssn’s were visible were on the january 2009 statements, which we have removed even though we feel they were secure" (It's nice that you feel secure, but you're not).

Finally, the real key to their security: "i would also like to refer you to the login screen where it states that “unauthorized or improper use of this system may result in civil or criminal penalties”. we capture host name and IP address". (Because threat of prosecution is sooooo effective)

sirdarckcat | May 19, 2009 11:12 am

Ya, got a few..

"Since AES may be found insecure in the future, we've implemented our own poli-substitution algorithm"
"We don't use TCP, we use TLS"

mckt | May 19, 2009 11:15 am

Forgot one.

A developer today: "That CSRF hole only affects the admin section."

sirdarckcat | May 19, 2009 11:16 am

Ohohoh and another one: (You gotta love CISSPs)..

"If you need access to the LAN, then it's not remote code execution."

=D heh

someone | May 19, 2009 4:20 pm

We have to send all the credentials to the client so they can get authenticated...our users don't know how to use a sniffer.

...after the "fix" and a re-test...

Yes we switched to a java serialized object to "prevent sniffing". screenshot of unserialized object ;-)

Andre Gironda | May 19, 2009 6:33 pm

I especially like the ones that question the intelligent adversary argument, e.g.

"Nobody is smart enough to do that!"
"None of our users are savvy enough to try that."
-- You already explained these phenomena under "attacker motivation".

"It's impossible to get access to our source code from the outside, why would we ever give it to you?"
--Note that Windows NT, Cisco, Halflife, and other source code has been stolen and released to the Internet, sometimes numerous times. Also note that many penetration-tests result in source code found or reversed.

"We're only worried about botnets and script kiddies, not uber-hackers. You can't stop them."
--If an exceptionally good penetration-tester can breach security in a unique and powerful way, there is also an excellent chance that said exploit can be automated. Thus, any exploit can be put on milw0rm, packetstorm, or turned into a botnet so that numerous people can profit from it.

The whole, "I don't have to outrun the bear; I only have to outrun you!" argument, when referring to the fact that a company only needs to spend or "do" a little bit more "security" as their competitor(s). The argument supposes that adversaries will go after the competitor instead of their company because the competitor is an "easier target".
--LOL. This is one of my favorites. Again, automation or availability of exploit information makes this argument moot. There are enough adversaries and they have enough time and resources to target every person, every computer, and every application (internal and external) at every organization in the world more than just a few times a day. Spam refutes this argument very well.

Andre Gironda | May 19, 2009 6:37 pm

Oh hahhaa I forgot "We're running that service on a higher port number." Security obfuscation for the win!

Peterix | May 19, 2009 8:10 pm

Developer doesn’t understand how the web works?

More like 'Developer never heard about interactive debugging' or 'Developer never hacked or cracked anything'. Attacker motivation... google 'do it for teh lulz'.

True, all of this!

saari | May 20, 2009 12:48 am

This stuff doesn't bother me, it's the fundamentally broken by design issues and basic social engineering that bother me. See OAuth session fixation.

MikeP | May 20, 2009 10:28 am

My own favourite: "Yeah, but who has the time to figure that out? We're just xxx."

Timsta | May 21, 2009 4:41 am

"The application must be secure. All our competitors use it!"

Erich | May 21, 2009 4:53 pm

"We're certainly not in focus. Who's gonna attack OUR minor website?"

Later they were left wondering why the website was spreading malicious code to every visitor, and why Google had flagged it as dangerous.

veye0l8tr | May 23, 2009 1:22 am

How about "That's a problem with the vendor software not our security", to which my response was "If it's a problem with the vendor software you are using then it's a problem with your security"

or another one I hear often "how many people actually know how to do that?" to which my response is "Anyone that has an interest in breaching your security, with internet access and the ability to read."

The script kiddie argument mentioned above is another common thread; I usually end up explaining that uber-hackers document methods and script kiddies can follow instructions.

Motoma | May 26, 2009 3:00 pm

"If the clients are dumb enough to do ___ then they deserve what they get."


no way | November 17, 2011 2:17 pm

as a developer on more of the security side than most..

about 30% of what veracode is saying is rather retarded.
