The playwright and existentialist Jean-Paul Sartre famously observed that “Hell is other people.” Put in the modern context, however, it might be more accurate to say that “Hell is other people’s code.” After all, malicious attacks against vulnerable software applications and rampant data theft are real problems, and the connection between third-party and reused code and serious security flaws is compelling. The most recent edition of CA Veracode’s State of Software Security report found that 60% of third-party applications the company tested failed to comply with enterprise security policy, while vulnerabilities that lead to remote code execution and backdoor functionality were far more prevalent in commercial off-the-shelf (COTS) software like Adobe Reader and Microsoft’s Internet Explorer.

Not that you need me to tell you this. In just the last week, we’ve seen Microsoft scramble to patch a critical, remotely exploitable hole that affects most supported versions of its Internet Explorer Web browser and Windows operating system. According to a report from the security firm AlienVault, malicious hackers have been using the flaw in targeted attacks against high-value defense industrial base companies. Experts in the security of critical infrastructure have raised alarms about gross negligence on the part of firms that create the software that helps run electrical utilities, power generation plants and other critical infrastructure. And, just today, a researcher at the annual Ekoparty hacker conference is releasing information on a critical hole in Oracle’s 11g Database Server that could enable remote attackers to conduct offline, dictionary-style attacks to crack administrator passwords.
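For readers unfamiliar with the term, an “offline” dictionary attack means the attacker has already captured enough material (a password hash, or a hashable artifact from a login handshake) to test guesses locally, without ever touching the server again. The sketch below illustrates the general technique only; the hashing scheme, salt, and wordlist are illustrative assumptions, not Oracle’s actual authentication protocol:

```python
import hashlib

def offline_dictionary_attack(captured_hash: str, salt: bytes, wordlist):
    """Try each candidate password against a captured salted hash.

    Because the attacker already holds the hash, no further contact
    with the server is needed -- hence "offline". Rate limiting and
    account lockouts on the server are useless against this.
    """
    for candidate in wordlist:
        digest = hashlib.sha256(salt + candidate.encode()).hexdigest()
        if digest == captured_hash:
            return candidate  # password recovered
    return None

# Toy demonstration with made-up values: simulate a captured hash,
# then recover the password from a small wordlist.
salt = b"\x01\x02\x03\x04"
captured = hashlib.sha256(salt + b"tiger").hexdigest()
print(offline_dictionary_attack(captured, salt, ["admin", "oracle", "tiger"]))
```

The speed of such an attack is limited only by the attacker’s hardware and the cost of the hash function, which is why weaknesses that leak crackable material from an authentication exchange are so serious.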
According to the researcher, Esteban Fayó of Application Security Inc., Oracle has known about the flaw for more than two years, but has failed to take steps to adequately inform customers and speed their transition to a newer and more secure version of its database server and client.

Why the mess? The reasons are complex, but it might help to think of the current model of application development as operating more than a little like the meat-packing industry of Upton Sinclair’s The Jungle. As with the turn-of-the-century meat-packing industry, software today features high output, a high degree of efficiency and relatively low labor costs – unfortunately, it all comes at the cost of quality. Organizations of all sizes benefit from a burgeoning global marketplace in which application development talent, not to mention ready-made software components, can be had quickly and cheaply. However, without the software equivalent of an FDA inspector to walk the floor and impose costs (fines, penalties) for shoddy work or unsanitary conditions, it’s a race to the bottom when it comes to the quality of the code that’s produced.

CA Veracode’s Chris Wysopal and Chris Eng have spoken forcefully about a culture of slapdash coding and lax security testing that prevails in many organizations. Application development shops – whether internal or independent – often fail to do more than verify that code doesn’t outright break before pushing it into production. Add to that the fact that modern enterprises might have dozens or more internal applications. Some of these are coded in house, some are third-party applications, but many are hybrids: with components developed internally and components outsourced or dropped in from third-party sources with minimal changes. As with Sinclair’s meat-packing operations, that approach comes with risks. Organizations often have no idea how many internal applications they run, let alone the time or resources to check under the hood of each of them.
Data from the firm PricewaterhouseCoopers suggest that customers probably don’t want to look under the hood anyway; fully eighty percent of third-party software that PwC tested against the OWASP security compliance checklist failed that test. Nevertheless, poorly coded software leaves customers open to compromise, data theft, data loss and damaged reputation. Even when organizations are aware of security issues in third-party applications or code, it can be a struggle to get vendors to fall in line and fix the problem. Vendors are reluctant to divulge proprietary source code, or lack the internal resources and expertise to fix the security problems that their customers have uncovered. Without iron-clad language in procurement agreements (a rarity), customers are often left hanging on the line when they try to escalate problems with software vulnerabilities and other issues.

That’s why CA Veracode’s announcement of the VAST program yesterday is so interesting. The company is the first to attempt to actually bridge the divide between third-party software suppliers and their customers. VAST – the Vendor Application Security Testing program – leverages CA Veracode’s SaaS-based static and dynamic testing infrastructure to act as an intermediary: testing application security compliance for both traditional and web-based applications. Upon request, third-party software developers can join the program and then upload their binaries to CA Veracode for testing. CA Veracode can then validate the quality of that software, providing a high-level summary of its results to the customer and the complete, detailed report to the vendor, with guidance on how to remediate the issues that were discovered. The process protects the integrity of the vendor’s source code while also providing a provable and independent assessment of the security of that company’s code.
Once the security issues are fixed, CA Veracode will attest to the security of the tested software – a valuable seal of approval in a market where, historically, there has been no easy way to attest to the quality of developed code.

Programs like this are vital. More than one security expert has wondered, aloud, why a society that is so intolerant of shoddy quality in cars, household appliances and food continues to shrug when it is undone by shoddy code. One problem, experts have noted, is that there hasn’t been a respected authority – an Underwriters Laboratories, if you will – to put its stamp of approval on quality work, and blow the whistle when software products miss the mark or, God forbid, actually put people in danger. VAST isn’t UL – at least not yet – but it’s a big step in the right direction. You can read more about VAST here.