A question for debate: SQL injection is as big a threat to the security and integrity of U.S. businesses as the Year 2000 (Y2K) date calculation flaw fifteen years ago. Discuss.

For those of you too young to remember, Y2K was every IT group’s “dark horse of the apocalypse”: a lurking application logic problem engendered by software programmers in the 1960s, 70s and 80s who elected to express the year as a two- rather than four-digit number. 1998, then, was rendered as simply “98.” Why? Because programmers at the dawn of the age of ubiquitous computing concluded (pessimistically, it turned out) that anything they wrote would be killed off long before the turn of the millennium.

Y2K was the Black Plague of software bugs. It affected organizations large and small, wealthy and poor, notes Chris Wysopal, the CTO of CA Veracode. “It affected everyone equally. It didn’t matter who you were. If you had a computer system that was keeping dates, then (Y2K) was an issue,” he said.

Because Y2K’s potential impact was so broad, it was impossible to say with certainty what the impact would be when the clock struck midnight on January 1, 2000. Oft-repeated doomsday scenarios in news stories of the time warned of FAA flight control systems going haywire and cities going dark as electric grids went offline. Businesses in the U.S. are estimated to have spent in excess of $100 billion on Y2K remediation, with an estimated $300 billion spent globally. Yet there were few reports of Y2K-related problems, even in countries (like Russia, China and India) that spent far less on Y2K preparedness than the U.S. and other Western nations did.

Of course, “SQL injection vs. Y2K” is a false debate. The question isn’t whether the problem warranted that level of investment.
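The two-digit shortcut described above is easy to see in miniature. Here is a minimal sketch of how storing years as two digits breaks simple date arithmetic at the century boundary; the function and values are hypothetical, chosen only to make the failure visible:

```python
# Toy illustration of the Y2K flaw: with only two digits for the year,
# arithmetic wraps around at the century boundary.

def years_between(start_yy: int, end_yy: int) -> int:
    """Naive two-digit subtraction, as many legacy systems did it."""
    return end_yy - start_yy

# A loan opened in 1995 and checked in 1998 works fine:
elapsed_90s = years_between(95, 98)
print(elapsed_90s)  # 3

# The same code, for a loan opened in 1998 and checked in 2000 ("00"):
elapsed_y2k = years_between(98, 0)
print(elapsed_y2k)  # -98, not 2 -- interest, ages and schedules all go wrong
```

The fix was as unglamorous as the bug: widen the field to four digits (or pick a windowing rule) everywhere a date was stored, compared, or printed.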
Rather it is this: “how do we do that again!?” Specifically: how can our economy muster the resources and the sense of urgency that was accorded to Y2K to address the host of very serious software-related problems – SQL injection, broken session management or insecure third-party code – that are staring us in the face?

“We can’t,” says Wysopal. Y2K was an anomaly, he said: a ubiquitous and easy-to-explain flaw whose risk to organizations wasn’t easy to quantify. Those factors, coupled with the specter of a hard deadline (“when the clock strikes midnight!”), got checks written and people scrambling to fix the problem.

In contrast, most security issues today don’t pose an existential risk to computer networks, and the risk they do pose is contingent on an (unknown) adversary. “The problem isn’t getting JP Morgan or Boeing and Microsoft to do something about security. They already are,” Wysopal said. “It’s the people who don’t even know it’s a problem that need the help.”

What about a hard deadline? That seemed to work well for Y2K “compliance.” It could work – but expect a lot of resistance from the business community, which will see any tough talk of deadlines as expensive regulatory overreach. Besides, with Y2K the federal government didn’t need to wave the big stick of regulation. “It was the businesses themselves who concluded that they couldn’t deal with the downside liability of Y2K,” Wysopal recalled.

Rather than trying to recreate the Y2K hysteria, the trick may be to get businesses to understand their exposure and potential liability in the way they did with Y2K. That means getting them to understand the connection between vulnerable software and successful (and expensive) application hacks, Wysopal said. “Organizations have to understand that ‘these guys are going to attack me and this is the harm that is going to happen and I need to prevent that,’” he said.
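That connection between vulnerable code and an expensive hack is concrete in the case of SQL injection, the flaw this debate started with. Here is a minimal sketch using an in-memory SQLite database; the table, column and input are hypothetical, chosen only to show how one unescaped quote rewrites a query:

```python
import sqlite3

# Hypothetical user table in an in-memory database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.executemany("INSERT INTO users VALUES (?)", [("alice",), ("bob",)])

attacker_input = "nobody' OR '1'='1"

# Vulnerable: the input is pasted straight into the SQL string, so the
# quote inside it breaks out of the literal and rewrites the WHERE clause.
leaked = conn.execute(
    f"SELECT name FROM users WHERE name = '{attacker_input}'"
).fetchall()
print(leaked)  # [('alice',), ('bob',)] -- every row comes back

# Safe: a parameterized query treats the whole input as one value.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (attacker_input,)
).fetchall()
print(safe)  # [] -- no user is literally named "nobody' OR '1'='1"
```

Unlike Y2K, the fix here has been known for decades – use bound parameters instead of string concatenation – which is exactly why the persistence of the bug is a question of motivation rather than technique.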
Without that “end of times” quality of fear as a motivation, appeals to pour resources into fixing application security vulnerabilities often fall on deaf ears.

Why another Y2K?

Like most massive, society-wide mobilizations, Y2K’s impact goes far beyond the discrete fixes to date handling. The years of hype about Y2K that led up to the millennium prompted large-scale investments in new IT systems, the productivity benefits of which flowed in the years following the turn of the century.

Beyond that, Y2K got people thinking about application integrity and, particularly, about static code analysis, Wysopal said. Many of the tools that were initially written to detect Y2K bugs were later expanded to do other kinds of application audits. “That process of using static analysis kind of got into people’s heads,” Wysopal said.

The benefits from a ramp-up in spending and attention to common application security problems, such as those on the OWASP Top 10 list, would be just as profound – if only we could get the world to take the problem seriously.
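The static-analysis habit Wysopal describes started from very simple tooling. A toy sketch of the idea, nothing like a real Y2K scanner: walk the source looking for patterns that suggest two-digit year handling. The pattern and the sample "legacy" snippet below are hypothetical:

```python
import re

# Toy static check: flag lines that look like two-digit year handling,
# e.g. "year = 98" or fields named with a "_yy" suffix. Real Y2K-era
# scanners were far more elaborate, but worked on the same principle.
TWO_DIGIT_YEAR = re.compile(r"\byear\s*=\s*\d{2}\b|_?yy\b", re.IGNORECASE)

def scan(source: str) -> list[tuple[int, str]]:
    """Return (line_number, line) pairs matching the suspicious pattern."""
    return [
        (n, line.strip())
        for n, line in enumerate(source.splitlines(), start=1)
        if TWO_DIGIT_YEAR.search(line)
    ]

legacy = """\
record.year = 98
expiry_yy = year + 5
full_year = 1998
"""
for lineno, line in scan(legacy):
    print(lineno, line)  # flags lines 1 and 2, not the four-digit line
```

Swap the date pattern for a check like "query built by string concatenation" and the same machinery becomes a crude SQL injection audit – which is roughly the expansion path the Y2K-era tools followed.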