If you caught the headlines last week, you might have read about the developing scandal over a fatal problem with ignition switches in General Motors cars.
The automaker has been forced to recall 1.37 million cars containing a faulty part that is believed to be the cause of 31 crashes and 13 fatalities over the last decade. The scandal is that the National Highway Traffic Safety Administration (NHTSA) – the federal agency charged with maintaining vehicle safety – knew about the problem as long as seven years ago but did not demand a recall of affected vehicles.
It’s a curious story – especially when you learn that it wasn’t as if the NHTSA ignored the problem. On the contrary, the agency ordered three Special Crash Investigations into incidents in 2004, 2005 and 2006 in which a new type of air bag failed to deploy in accidents. It also met with GM in 2007 to discuss the problem believed to be the source of the failures: a flaw that caused ignition switches to slip from “run” into “accessory” mode under the weight of heavy key chains, killing power to airbags and other systems.
But if you’re a software engineer, work for a company that makes software, or are an enterprise IT professional responsible for procuring software and services, you’re probably not shaking your fist at the NHTSA. You may, instead, be thinking: “Wow! Talk about accountability!”
After all, no federal, state or even industry equivalent of the NHTSA exists to oversee the operation of software, hardware and services – even in cases where those products are managing critical infrastructure and lifesaving systems.
And it’s not as if the threats are hypothetical. I’ve noted on this blog how software flaws and vulnerabilities can kill people – literally. In just one dramatic example: a serial killer named Charles Cullen exploited a design and software flaw in a drug-dispensing product, Pyxis Medstation, to obtain lethal doses of the medications he used to claim his victims. And, in recent months, the FDA has warned medical device makers to take more precautions to thwart cyber attacks on their products.
But would such a model work in the software world? After all, we’ve been trained to believe in a kind of ‘cyber exceptionalism’ – the notion that problems rooted in technology are fundamentally different from other real-world problems that we’ve already solved.
A panel discussion at last month’s RSA Conference posed that very question: whether a ‘National Cyber Safety Board’ was needed to put some teeth into calls for more secure application design and development.
Needless to say, this is a controversial proposal. Since its inception, the software industry has been defined by its agility and creativity. This is a space where brilliant entrepreneurs can spin a “good idea” (say: the spreadsheet, a social network or a chat application) into riches. Imagine having to pass each of those creations through the filter of some government bureaucracy like the Food and Drug Administration.
But the idea has supporters – especially within the software security industry. The panel’s moderator, CA Veracode CTO Chris Wysopal, noted that a Cyber Safety Board could provide much-needed expertise to do root-cause analysis of major cyber incidents like attacks and malware outbreaks. This would be akin to the work that the National Transportation Safety Board does investigating airplane and train accidents, or that the CDC does for disease outbreaks.
Alex Hutton, the director of operations risk and governance at what he described as a “Too Big To Fail” bank, said that there’s a desperate need to bring more science to bear in cyber security, such as measuring the frequency of incidents and their impact. Better data on cyber incidents would help focus investments in controls, so companies would at least know that they were spending money on the right things.
This kind of positive feedback loop between the public and private sectors is something we take for granted in many other areas of our daily lives. Accident reports filed by local law enforcement and with private insurance companies inform the work of the NHTSA and DOT. Those agencies, in turn, exert pressure on automakers to fix problems and improve the safety of the vehicles that consumers buy.
The same dynamics should work in the software world. So far, however, there have been only half steps in this direction.
Panelist Jacob Olcott, a principal in Good Harbor Consulting’s cyber security practice, helped create the Securities and Exchange Commission rule requiring companies to disclose “material” cyber breaches. But critics point out that the interpretation of “material” has given companies plenty of wiggle room to avoid reporting serious cyber incidents.
And even Olcott is skeptical of direct government regulation of cyber security. The interests of private investors are enough to keep companies honest about cyber incidents, he said. “Investors care about this and have a right to ask,” he told audience members.
After all, panelists observed (rightly) that mandatory reporting of cyber incidents would raise a din of SEC disclosures since “everyone is getting breached.”
But maybe that’s the point. With a strong regulator (say, the SEC or some future Cyber Safety Board) mandating disclosure of every material breach and every serious (i.e. exploitable) software “defect,” there would be a flood of reports.
That would be scary and overwhelming. But it would also give shape to a fear and anxiety that, today, is already overwhelming but shapeless. Knowing the details and having the data on vulnerabilities and cyber incidents would help us – as a society – understand and begin to manage our risk in the same way that we do with other societal ills, from property crime to vehicle accidents.
In the short term, that would be painful. In the long term, it might just save us all a lot of pain.