Oct 20, 2015

Why Relying On the NVD is Not Good For Open-Source Security Tools

By Sean Kinzer

In part 1 of this blog series, I showed why it is probably not a good idea to rely on CPEs when trying to identify vulnerabilities in your code. Proper library identification is obviously crucial when trying to figure out what kind of nasty exploits might be hiding in that Rails app you created 3 years ago (there are at least 20 vulnerabilities associated with the 3.x versions of Rails), but an even more difficult task is finding the relevant vulnerability information. I often come across customers who claim the National Vulnerability Database provides all of the vulnerability information one might ever need. I have some more bad news ...

What is the National Vulnerability Database?

The home page of the National Vulnerability Database (NVD) states:

NVD is the U.S. government repository of standards based vulnerability management data represented using the Security Content Automation Protocol (SCAP). This data enables automation of vulnerability management, security measurement, and compliance. NVD includes databases of security checklists, security related software flaws, misconfigurations, product names, and impact metrics.

The idea behind the NVD is to maintain an up-to-date central database of uniquely identifiable vulnerabilities. To add a vulnerability, you contact MITRE, which creates a reserved CVE identifier and notifies the owner(s) of the affected technology or code. The vulnerability stays in the reserved state until those involved in disclosing the issue tell MITRE to move it forward. Finally, the vulnerability is added to the NVD along with supplemental information from researchers, including fixes, references, and a CVSS score that estimates its severity. This sounds like the optimal resource for an open-source security tool. In an ideal world, maybe it would be.

The Not-So-Ideal World We Live In

As soon as a vulnerability is discovered in a third-party library, it is only a matter of time before that knowledge is publicized on the internet for any hacker to find. This makes vulnerability tracking an extremely time-sensitive process. That being the case, security tools relying on the NVD are playing with fire. Our researchers have seen wait times of 20 days or longer when requesting a CVE identifier. As of October 8th, 2015, there are 2,928 reserved CVEs, created from 2010 to 2015, that are not in the NVD AND have information about the vulnerability publicly available. That is just the reserved CVEs with information easily found through a Google search; there are 10,471 reserved CVEs in total. While some of those 10,471 may turn out to be duplicates or invalid once researched, it is safe to say a significant number of public vulnerabilities show up late, or not at all, in the NVD. What's worse, there is an upward trend in the number of reserved CVEs, as shown by the graph below. More and more vulnerabilities are flying under the radar despite some effort to publish them. Well-known vulnerabilities that make headlines, like Heartbleed, POODLE, FREAK, and GHOST, will likely be added to the NVD in a timely manner, but if you are only concerned about widely publicized vulnerabilities, you have much bigger fish to fry.

[Graph: number of reserved CVEs trending upward by year]

If you have looked through the vulnerabilities in the NVD, you will understand what I mean when I say the information there can be misleading or outright incorrect. Take CVE-2015-6584 as an example:

Cross-site scripting (XSS) vulnerability in the DataTables plugin 1.10.8 and earlier for jQuery allows remote attackers to inject arbitrary web script or HTML via the scripts parameter to media/unit_testing/templates/6776.php.

At first glance this looks like a concerning XSS vulnerability, but upon further investigation (or just by looking at the file path in the description) I can see that the vulnerability is actually restricted to a unit test within the plugin. The CVSS score provided for this particular vulnerability is 4.3, which is relatively high given that it is arguably not weaponizable at all. This is because a CVSS score combines the impact of the vulnerability AND the ease of exploiting it.
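To make that concrete, here is a minimal sketch of the CVSS v2 base-score equation using the published metric weights. The vector AV:N/AC:M/Au:N/C:N/I:P/A:N is my assumption of the typical reflected-XSS scoring that yields 4.3; note how the exploitability term dominates while the impact term stays small.

```python
# Sketch of the CVSS v2 base score calculation.
# Metric weights are the published CVSS v2 values.

def cvss2_base(av, ac, au, c, i, a):
    """Compute a CVSS v2 base score from metric weights."""
    impact = 10.41 * (1 - (1 - c) * (1 - i) * (1 - a))
    exploitability = 20 * av * ac * au
    f_impact = 0 if impact == 0 else 1.176
    return round((0.6 * impact + 0.4 * exploitability - 1.5) * f_impact, 1)

# Assumed vector AV:N/AC:M/Au:N/C:N/I:P/A:N -- a typical reflected-XSS scoring.
score = cvss2_base(av=1.0, ac=0.61, au=0.704, c=0.0, i=0.275, a=0.0)
print(score)  # 4.3
```

A unit-test-only XSS and an XSS on a production login page can end up with the same 4.3, because the formula has no notion of whether the vulnerable code is reachable in a deployment.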

Other vulnerabilities include very little useful information at all. In CVE-2005-3779, the description reads:

Unspecified vulnerability in xterm for HP-UX 11.00, 11.11, and 11.23 allows local users to gain privileges via unknown vectors.

Vague descriptions of users gaining unspecified privileges via "unknown vectors" appear in more places than one, often resulting in drawn-out vulnerability fixes that may have no impact on your code at all. Relying on CPEs to help identify the vulnerable component won't get you very far either, as I discussed in my last post. We are gradually going through our database, verifying and re-writing all advisories. It's time-consuming and we aren't done by a long shot, but it's important work and we need to do it.

Of course, many disclosed vulnerabilities in open-source libraries simply go completely unaccounted for in the NVD. Our researchers have been experimenting with automatically watching GitHub commit logs and issue trackers for components. One example comes from a vulnerability reported on the Node.js Security Advisory List (an awesome resource for Node.js vulnerabilities). On March 31st a vulnerability was reported in the jsonwebtoken module where "the verification part is expecting a token digitally signed with an asymetric key (RS/ES family) of algorithms but instead the attacker send a token digitally signed with a symmetric algorithm (HS* family)." The fix for the issue had already landed on March 16th. This is common: many other GitHub commits related to vulnerability fixes are occurring and going completely undocumented. Issues such as these haven't been assigned CVE numbers and may never make their way into the NVD. We plan to implement our experimentation with automated GitHub commit tracking in our platform by the end of the year.
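The commit-watching idea above can be sketched with nothing more than the public GitHub API. The keyword list and function names below are illustrative placeholders, not our actual pipeline; the endpoint used is GitHub's standard list-commits API.

```python
# Hedged sketch: flag recent commits whose messages suggest a security fix.
import json
import re
import urllib.request

# Illustrative keyword heuristic, not an exhaustive list.
SECURITY_KEYWORDS = re.compile(
    r"\b(vuln|security|xss|csrf|injection|overflow|cve-\d{4}-\d+)\b",
    re.IGNORECASE,
)

def looks_security_related(message):
    """Heuristic: does a commit message mention a security issue?"""
    return bool(SECURITY_KEYWORDS.search(message))

def recent_security_commits(owner, repo):
    """Fetch the latest commits from the public GitHub API and filter them."""
    url = f"https://api.github.com/repos/{owner}/{repo}/commits"
    with urllib.request.urlopen(url) as resp:
        commits = json.load(resp)
    return [
        (c["sha"][:10], c["commit"]["message"].splitlines()[0])
        for c in commits
        if looks_security_related(c["commit"]["message"])
    ]
```

A production version would need authentication, pagination, and issue-tracker coverage; the point here is only that the filtering step is cheap, which is why fixes that never get CVEs are still discoverable.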

Though it has been discussed before, it bears repeating that even if the NVD were up to date and contained accurate details for all of its data, it would not be an entirely comprehensive database of vulnerabilities. Trying to find a database that contains every public vulnerability known to man is like trying to find a unicorn: it doesn't exist. I am certainly not saying that the NVD should not be used at all, because it is a great resource for open-source security tools. At SRC:CLR it's just one small (but important) source of data we use in combination with many other resources, advisories, and research in order to verify the information we maintain. The use of data science and machine learning across open source promises to uncover even more information about undisclosed and disclosed vulnerabilities, but more on that in a future post.


By Sean Kinzer

Sean is part of the customer success team at Veracode. He helps address customer issues and handles our support desk.