Yesterday, Dave Lewis over at LiquidMatrix Security Digest cried foul at Core Security for releasing too much detail about a recent DoS vulnerability they had discovered. His specific gripe was that they provided an IDA Pro excerpt that showed where the vulnerability was triggered. The excerpt is short, so I'll even copy/paste it here:

.text:00405C1B mov  esi, [ebp+dwLen]  ; Our value from packet
.text:00405C20 push edi
.text:00405C21 test esi, esi          ; Check value != 0
.text:00405C31 push esi               ; Alloc with our length
.text:00405C32 mov  [ebp+var_4], 0
.text:00405C39 call operator new(uint); Big values return NULL
.text:00405C3E mov  ecx, esi          ; Memcpy with our length
.text:00405C40 mov  esi, [ebp+pDestionationAddr]
.text:00405C43 mov  [ebx+4], eax      ; new result is used as dest
.text:00405C46 mov  edi, eax          ; address without checks.
.text:00405C48 mov  eax, ecx
.text:00405C4A add  esp, 4
.text:00405C4D shr  ecx, 2
.text:00405C50 rep  movsd             ; AV due to invalid
.text:00405C52 mov  ecx, eax          ; destination pointer.
.text:00405C54 and  ecx, 3

Dave asserts that publishing 16 commented assembly instructions makes this disclosure irresponsible. But look at the code -- it's completely generic, just a textbook example of what it looks like when you forget to check a return value after calling operator new. Sure, Core gives you the exact offsets into the executable, but so what? If I have the binary, then it's not going to be too hard to find the vulnerability anyway. It's not like Core is giving away a proof-of-concept exploit that generates the malformed registration packet required to trigger the DoS. What's more, they provide a detailed timeline going back to January 30th of this year describing exactly how the disclosure process with the vendor transpired. This looks extremely responsible to me; I just can't understand what is "not cool" here.
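For readers who don't speak x86, the pattern in the excerpt corresponds roughly to the C++ sketch below. This is purely my illustration of the bug class; the function and parameter names are invented, not taken from the Wonderware binary (the binary was evidently built with a non-throwing `operator new`, which the `std::nothrow` form models):

```cpp
#include <cstring>  // std::memcpy
#include <new>      // std::nothrow

// Illustrative reconstruction of the disassembled pattern: an
// attacker-controlled length drives both the allocation and the copy.
char* copy_from_packet(const char* src, std::size_t dwLen) {
    // "push esi / call operator new" -- huge dwLen makes this return NULL
    char* dst = static_cast<char*>(::operator new(dwLen, std::nothrow));
    // "rep movsd" -- the copy proceeds with no NULL check on dst, so an
    // allocation failure becomes an access violation (the DoS)
    std::memcpy(dst, src, dwLen);
    return dst;
}
```

With an ordinary packet the function behaves normally; only a length large enough to fail the allocation turns the missing check into a crash.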

There's another interesting angle to this, completely unrelated to Core's disclosure process. The vulnerability itself is described in the advisory as follows:

Un-authenticated client programs connecting to the service can send a malformed packet that causes a memory allocation operation (a call to new() operator) to fail returning a NULL pointer. Due to a lack of error-checking for the result of the memory allocation operation, the program later tries to use the pointer as a destination for memory copy operation, triggering an access violation error and terminating the service.

This may bring to mind some recent discussions on whether callers of memory allocation functions should check the return value prior to use. To summarize, one camp says "caller should check", the other camp says "callee should exit on allocation failure." This is a gross oversimplification and if you want more detailed arguments, read the other blog posts that I linked to. In this case, if the "exit on failure" approach were taken, the DoS scenario might still happen, whereas if the caller were checking, the error could be handled more gracefully. More fuel for the debate!
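To make the two camps concrete, here is a minimal sketch (my own illustration, not code from either camp's blog posts). The "callee exits" style is the classic xmalloc wrapper; the "caller checks" style pushes the decision up to the code that knows what failure means:

```cpp
#include <cstdio>
#include <cstdlib>

// "Callee exits" camp: an xmalloc-style wrapper that terminates the
// process on allocation failure. Simple and safe against NULL-deref,
// but a malformed packet forcing a huge allocation still kills the
// service -- the DoS outcome is unchanged.
void* xmalloc(std::size_t n) {
    void* p = std::malloc(n);
    if (p == nullptr) {
        std::fputs("out of memory\n", stderr);
        std::exit(EXIT_FAILURE);
    }
    return p;
}

// "Caller checks" camp: the failure is surfaced to the caller, which
// can drop the single bad request and keep serving everyone else.
bool handle_packet(std::size_t len) {
    void* buf = std::malloc(len);   // len comes from the packet
    if (buf == nullptr) {
        return false;               // reject the packet, stay alive
    }
    // ... copy and process the packet into buf ...
    std::free(buf);
    return true;
}
```

In a scenario like this one, only the caller-side check allows the service to reject the malformed packet gracefully and keep running.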


About Chris Eng

Chris Eng, vice president of research, is responsible for integrating security expertise into Veracode’s technology. In addition to helping define and prioritize the security feature set of the Veracode service, he consults frequently with customers to discuss and advance their application security initiatives. With over 15 years of experience in application security, Chris brings a wealth of practical expertise to Veracode.

Comments (3)

cwysopal | May 8, 2008 4:19 pm

There is a continuum of information that can be disclosed in a coordinated release when the vendor is also releasing a fix. There is a window of vulnerability between the time of the coordinated release and when a system is patched. Is all information release responsible or are there some things that should wait?

At one end of the spectrum, the discloser could release a working exploit. Some would say that's still responsible disclosure because the patch is available; exploits are released all the time for patched vulnerabilities. But the first few days after a new disclosure are particularly sensitive, since systems may not be patched yet. Another level of information is the proof of concept, which can quickly be turned into an exploit, so the two are really about the same thing.

Then there is the disclosure that has all the details of the vulnerability down to commented assembler code. Someone can certainly develop an exploit from this if they have the skills. This seems to be what Chris Eng is calling "responsible-ish". I am not convinced that this level of disclosure is more helpful or hurtful in the short term after a patch is first released.

I have always thought that all information about a vulnerability including working exploits should eventually be released, but with a delay after the initial disclosure to give people some time to patch. We tried this at @stake for a while (just before we were gobbled up by Symantec and all disclosures stopped)*. We would hold the details that would help with writing the exploit for 30 days so people would have time to patch.

In the end it is all compromise and there are always edge cases but for the majority of flaws a delay on details would help more than hurt security.


*To be fair they started up after about a year.

Marcin | May 8, 2008 8:45 pm

Should exploit code even be released? Besides researcher ego and script kiddies, who benefits from PoC exploit release? Is it to prove a point? If details of the vulnerability are released full-disclosure, that should be more than enough information for people to go out and mitigate the issues. Throwing an exploit PoC into the mix is just adding fuel to the fire.

Joshua | May 8, 2008 11:09 pm

I agree with what Chris said. Core Security has done nothing but produce top-notch disclosures in an appropriate manner, and looking at the timeline of the issue at hand, it was all handled professionally.

Here is my perspective, and I won't use the line about how I've worked with 10+ vendors. (Note: I said perspective, take it or leave it.)

1. At least Core Security found the issue and notified the vendor. It could have been worse if an attacker had found the issue and it was never reported at all.

2. It's not Core Security's fault that Wonderware failed to do any form of quality assurance or code analysis with a focus on security for its product.

3. End users, or anyone using a product, have the right to know whether they are vulnerable or a vulnerability resides in it, and this includes whether a patch has been released or the advisory has been out for a certain period of time, including the amount of information disclosed. Why? Well, I want to know and be able to make the decision whether or not I want to keep it deployed in my environment or take it out until a patch is actually out. This is all on the basis of severity and risk. If I was Core I might have released earlier.

4. I've seen less professional and worse cases of vulnerability and exploit disclosure than this issue. When someone drops the bomb on an issue discovered in Windows, it's a huge impact, and I don't hear anyone bashing the people disclosing those. I wonder what the real issue at hand is with Liquidmatrix? Nobody is holding anyone's leg here, and maybe his brain ran away in the night.
