Medical device manufacturers face a daunting array of challenges, especially where cybersecurity is concerned. In response to these growing concerns, the Food and Drug Administration (FDA) recently released guidance in the form of its "Content of Premarket Submissions for Management of Cybersecurity in Medical Devices." This nine-page document details five "cybersecurity framework core functions" — identify, protect, detect, respond and recover — that developers should use to direct the steps they take. But what do these functions really mean for developers and the future of medical device cybersecurity? Let's take a closer look.
A recent Ars Technica article sees the FDA guidance as well-intentioned but unenforceable: while implementing controls to "limit access to devices through the authentication of users" and features "that allow for security compromises to be detected, recognized, logged, timed and acted upon during normal use" is great in theory, there's no consequence for developers who don't follow these rules. The FDA plans to hold a public workshop to make developers more aware of the existing threat landscape and to convince them to build medical device cybersecurity into their manufacturing processes rather than tacking it on after the fact.
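To make those two controls concrete — user authentication, plus security events that are "detected, recognized, logged, timed and acted upon" — here is a minimal sketch of what they might look like in device software. Every name here (the device secret, the `authenticate` function, the log format) is an illustrative assumption, not anything the FDA guidance prescribes:

```python
import hashlib
import hmac
import logging
from datetime import datetime, timezone

# Hypothetical shared secret provisioned to the device at manufacture.
DEVICE_SECRET = b"example-secret"

# Security events are logged and time-stamped, in the spirit of the guidance.
logging.basicConfig(format="%(message)s", level=logging.INFO)
audit_log = logging.getLogger("device.audit")


def authenticate(user_id: str, token: str) -> bool:
    """Grant access only to users presenting a valid token for their ID."""
    expected = hmac.new(DEVICE_SECRET, user_id.encode(), hashlib.sha256).hexdigest()
    ok = hmac.compare_digest(expected, token)
    stamp = datetime.now(timezone.utc).isoformat()
    if ok:
        audit_log.info("%s AUTH_OK user=%s", stamp, user_id)
    else:
        # A failed attempt is detected, recognized, and recorded for review,
        # giving responders a trail to act on.
        audit_log.warning("%s AUTH_FAIL user=%s", stamp, user_id)
    return ok
```

The point of the sketch is that access control and audit logging are one code path, not two afterthoughts: every authentication decision, pass or fail, leaves a time-stamped record.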
It's a good idea, but as noted by InformationWeek, the FDA may be missing the mark with a focus on devices themselves rather than the data they carry. While research has demonstrated that mobile medical devices can be hacked and pose real risks to patients, according to Ryan Kalember of WatchDox, "As with the BYOD challenge, it's not the device that's at risk; it's the data. Health data isn't going to stay on the device forever." So where does this leave developers?
Ultimately, medical devices are designed to forward the needs of patients, meaning any attempt to "harden" them hits a natural barrier — one that's noted in the FDA's guidelines: "Security controls should not unreasonably hinder access to a device intended to be used during an emergency situation."
Those guidelines, along with the growing realization that medical data is at risk, put developers in the position of writing better software: applications native to medical devices that ensure data security without compromising function. Ideally, this is done through a kind of modified Agile development process, where timelines remain reasonable and goals short-term, but security is elevated to share top priority with usability. Achieving this kind of rhythm with secure Agile development requires a process-driven testing environment. Given that many of these health devices will be used almost constantly, vulnerability testing must be continuous, extending from one code iteration to the next to help ensure the best premarket submissions possible.
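In practice, "continuous" vulnerability testing means security checks run in the same automated suite as functional tests, on every iteration. A hedged sketch of one such regression check — the `parse_dose` routine, its safety limit, and the hostile-input list are all invented for illustration:

```python
MAX_DOSE_ML = 50.0  # hypothetical hard safety limit for this device


def parse_dose(raw: str) -> float:
    """Parse an operator-entered dose, rejecting malformed or unsafe input."""
    try:
        dose = float(raw)
    except ValueError:
        raise ValueError("dose must be numeric")
    if not (0.0 < dose <= MAX_DOSE_ML):
        raise ValueError("dose outside safe range")
    return dose


def run_security_regression() -> bool:
    """Re-run boundary and malformed-input cases on every code iteration."""
    hostile_inputs = ["", "abc", "-1", "1e9", "nan", "inf"]
    for raw in hostile_inputs:
        try:
            parse_dose(raw)
            return False  # unsafe input was accepted: fail the build
        except ValueError:
            pass  # correctly rejected
    # A known-good input must still work after hardening.
    return parse_dose("25") == 25.0
```

Because the regression suite travels with the code, each new iteration is gated on the same security checks as the last one, which is what lets security ramp up without stretching Agile timelines.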
While the FDA's guidance is in the right place, its focus is slightly off the mark as data (rather than devices) becomes the high-priority target for malicious actors. Regardless, a solution to the medical device cybersecurity problem can align with the administration's aims: Agile software development, backed by continuous testing protocols, can help deliver devices purposefully built to resist data breaches while continuing to provide on-demand medical functionality.
Photo Source: Wikimedia Commons