Robert Lemos has an excellent summary of the state of the debate on disclosure of exploit code in his column at Dark Reading. In it, I’m quoted briefly:
But that’s really only part of the story — disclosure is a complicated topic.
It’s easy to understand the point of view of a defender: details of a specific vulnerability, or even example exploit code, are scary. Their existence means you as a defender have a very short window in which to react — you have to prioritize that fix instead of rolling it into a planned update, because attackers now have a ready-made path to attack you. These concerns are exactly why Veracode takes such pains to keep the vulnerabilities we discover in our customers’ applications confidential.
It’s just as easy to understand the point of view of a researcher: their reputations—and thus their livelihoods—depend upon their ability to discover and document vulnerabilities. If they can’t share this information, it becomes very hard for the community and industry to evaluate their abilities objectively. Additionally, the sharing of information about vulnerabilities is essential to advancing the state of the art of defense. We learn from each other, and we apply that knowledge to better defenses.
Both sets of concerns are valid. It would be detrimental to security if every vulnerability discovered were immediately disclosed along with a working exploit. It would be just as bad if researchers were constrained from ever sharing their findings. After all, we’re on the same side — we all want higher-quality software.
Fortunately for us, there are various approaches to responsible disclosure, all of which have a few key attributes:
- The vulnerability is disclosed first to people who have the ability to repair it
- The details are kept confidential for a reasonable, agreed-upon period of time to allow the vulnerable party to engineer, properly test, and deploy a fix
- Once the vulnerability is fixed (or once a reasonable time to fix has passed), the researcher publishes the details
This general framework for responsibly disclosing vulnerabilities strikes an excellent balance among the various concerns of defender, researcher, and user.
The defender is given an opportunity to benefit from the researcher’s findings. But using this method also allows them to treat the vulnerability like other production defects: it can be appropriately prioritized, the fix can be engineered soundly, and the system can be thoroughly tested before the fix is deployed. Being able to treat a security flaw with the same QA measures as any other production defect results in higher-quality software. At the same time, unaffected defenders are able to learn from the mistakes of others and avoid them in their own systems. This makes everyone safer.
The researcher retains his or her ability to share important findings with the research and defense communities, advancing the state of the art and providing useful opportunities for further academic study. He or she also retains the leverage of disclosure as a way to ensure that the vulnerable party takes the issue seriously — the vulnerability will be disclosed, and so it must be repaired.
Each user of the system comes out ahead as well. Ideally, they learn that a vulnerability was discovered and repaired only after the fix is already in place. And if not, they can trust that they’ll learn about a vulnerability that affects them should the defender fail in their duty to repair it. On top of that, the user benefits from the better defenses that result from information about vulnerabilities being publicly available.
Responsible disclosure of vulnerabilities — including the details and even example exploits — simply works for everyone.