Warnings about the security of medical devices often get passed off as just more “FUD.” But the case of serial killer Charles Cullen shows that arcane application security issues can literally be matters of life and death.
Cullen, you may recall, is the career nurse and former Navy electronics technician who admitted to a 16-year killing spree comprising 40 murders, all of hospital patients under his care, though experts familiar with the case believe the true death toll may be several hundred patients. That would make Cullen, the subject of the recently released book The Good Nurse, the most prolific serial killer in American history.
An article in Wired by that book’s author points out, however, that Cullen’s crimes continued for so long, in part, because he proved adept at manipulating flaws in medical device design to obtain the drugs he used to kill his victims. In particular, Cullen is alleged to have exploited an application design flaw in a then-new device, Pyxis Medstation, a medication distribution and management product made by the company Cardinal Health.
As author Charles Graeber notes in his Wired article, Cullen’s technical background made it easy for him to become an expert on Pyxis, which distributes drugs to nurses and tracks withdrawals, linking each with the account of a particular patient and nurse to create a record.
Homicide detectives studying Cullen’s Pyxis records didn’t see a smoking gun — a clear pattern of drug orders by him corresponding to the hospital overdoses. What they did find, however, were lots of canceled orders.
“Cullen had realized that if he placed an order of the drug for his own patient, then quickly canceled it, the drug drawer popped open anyway. He could simply take what he wanted without recording it in the system. It was that easy,” Graeber wrote.
In short: Cullen had discovered what application security folks call a “race condition” in the Pyxis – a flaw in the underlying application logic that opened a small but exploitable gap of time between two inputs, leaving the device open to tampering. In this case, the inputs were placing an order for a drug (which opened the Pyxis drug door) and cancelling that order (which should have kept the door locked, but didn’t).
Cullen figured out that he could pop the Pyxis drug door by issuing, then quickly cancelling, an order for a drug. The order was logged only as a cancellation, yet he got access to the pharmaceuticals he needed to murder a patient under his care.
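To make the flaw concrete, here is a minimal sketch in Python of how an order/cancel gap like this can leave a drawer unlocked. This is a hypothetical toy model, not Pyxis’s actual code: the class, method names, and the specific drug/patient values are all invented for illustration. The bug is that unlocking happens as a side effect of placing an order, and the cancel path updates only the audit log, never the lock.

```python
class DispensingCabinet:
    """Toy model of a medication cabinet with a Cullen-style flaw.

    Placing an order unlocks the drug drawer immediately; cancelling
    the order only updates the audit log and forgets to re-lock the
    drawer. (Hypothetical sketch -- not the real Pyxis implementation.)
    """

    def __init__(self):
        self.drawer_unlocked = False
        self.log = []

    def place_order(self, nurse, patient, drug):
        self.log.append(("ORDER", nurse, patient, drug))
        self.drawer_unlocked = True  # flaw: unlock is a side effect of ordering

    def cancel_order(self, nurse, patient, drug):
        self.log.append(("CANCEL", nurse, patient, drug))
        # flaw: the cancel path never re-locks the drawer

    def take_drug(self):
        return "drug dispensed" if self.drawer_unlocked else "drawer locked"


cabinet = DispensingCabinet()
cabinet.place_order("nurse-1", "patient-7", "digoxin")
cabinet.cancel_order("nurse-1", "patient-7", "digoxin")
print(cabinet.take_drug())            # drug dispensed -- drawer is still open
print([e[0] for e in cabinet.log])    # ['ORDER', 'CANCEL'] -- no withdrawal recorded
```

The audit trail shows only an order and a matching cancellation, exactly the pattern detectives found in Cullen’s records: no withdrawal is ever logged, even though the drug left the drawer. The fix is to make the lock state follow the order state atomically, so a cancel re-locks the drawer before the log entry is written.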
That’s a real head-slapper, but it’s a nice illustration of the high stakes when it comes to medical device security. Of course, serial killers like Cullen are one in a million. But substance abuse by nurses or doctors is a much more common problem, and the Pyxis race condition would work just as well for them.
Even without Cullen as a poster child, the security of medical device hardware and software is going to get a lot more attention in the coming months. Just this week, for example, the Department of Homeland Security warned that medical devices pose a significant risk to the security of healthcare organizations and the sanctity of patient data.
In a May 4th bulletin, DHS warned that the rapid adoption of features like wireless network connectivity and remote management has greatly increased the “attack surface” of hospitals and other healthcare organizations, while existing regulations do a poor job of addressing – or even assessing – the security of medical devices.
Among the problems cited in the DHS bulletin:
- The U.S. Food and Drug Administration – which is authorized to approve medical devices for use – focuses on device safety, but not security. Issues around configuration and device security have traditionally been out of scope for the FDA.
- The rapid adoption of wireless networking and its use connecting remote medical devices to health IT networks has opened doors to attacks on medical devices, and attacks that use vulnerable medical devices as stepping stones to other network resources.
- Similarly rapid adoption of mobile devices like smartphones and tablets within healthcare settings introduces the possibility of patient data loss through insecure network connections and sync operations. Further, unmanaged mobile devices connected to health IT networks are a possible source of attack and compromise.
DHS’s prescriptions for fixing the medical device security problem were what you’d expect: actual security features built into products (sad that you have to ask, but…), as well as secure deployment practices – layered protections, strong passwords, least-privilege user accounts, and regular patching.
The Cullen case suggests that, even with those protections, danger lurks. DHS would do well to make strong recommendations for application audits and other assessments of underlying code security part of the requirements that medical device makers must satisfy for customers – or regulators.