While it may seem to some that the FDA has been falling short in addressing cybersecurity, it is important to note that the FDA does indeed provide guidance (and has for many years) related to cybersecurity from a functional perspective. In fact, on all issues related to safety (and we are talking about cybersecurity as it relates to safety here), the FDA is quite thorough in addressing unintentional misuse of functionality. What is different today is that the FDA is now tasked with addressing intentional misuse...and that is where things become complicated.
The reason this is so complicated is that the space of intentional misuse (all the ways something can be used incorrectly...malicious or otherwise) is effectively infinite. That is why hackers, researchers, and malicious actors have so much to work with. Moreover, hiring a hacker to constantly try to break your medical devices throughout an 18-month to two-year development phase can become quite cost prohibitive.
In conversations I had with the FDA, who happen to be a very busy and underfunded agency, it was clear to me that they wanted to figure out a way to shrink this infinite space into something reasonably manageable, and they began seeking the advice of the security community...and the security community was happy to help. What is particularly great about having discussions with the FDA is that they are, by and large, scientists. Security researchers...despite the rather underground nature they have worked in for so long...are also scientists. While some may argue against that assertion...others will agree.
Scientists like empirical evidence, and are driven more by curiosity than by dollars. I am not saying they are not budget conscious...because they must be in order to conduct research. What I am saying is that they are more concerned with "what if?" than "how much does it cost?". This, as many of us are painfully aware, differs from the corporate world. If you talk to a hacker for any length of time, you will see the similarities to scientists pretty quickly.
In late April of this year I joined Codenomicon, which is a company that is arguably the world leader in fuzzing technology. For those who do not know what fuzzing is, I strongly suggest you do some "Googling" and read about it. In short, it is the practice of exercising (some may say bombarding) a target with malformed data until it produces an error...or simply dies. The malformed input that triggers the error points to a vulnerability. A good fuzzing tool keeps track of which inputs caused errors, and allows them to be replayed as needed to help developers remediate the underlying bugs.
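To make the idea concrete, here is a minimal sketch of a mutation fuzzer in Python. It is purely illustrative (it is not how Defensics works internally): the toy parser, the single byte-flip mutation, and all function names are my own assumptions, invented for the example. The key behaviors it demonstrates are the ones described above: mutate valid input, feed it to the target until something errors out, and record each failing input so it can be replayed.

```python
import random


def parse_length_prefixed(data: bytes) -> bytes:
    """Toy target: first byte declares a payload length, the rest is payload.
    Deliberately fragile -- it trusts the declared length."""
    if not data:
        raise ValueError("empty input")
    length = data[0]
    payload = data[1:1 + length]
    if len(payload) != length:
        # In a real parser this might be a buffer over-read or crash;
        # here we simulate the failure with an exception.
        raise IndexError("declared length exceeds buffer")
    return payload


def fuzz(seed_input: bytes, rounds: int = 1000, rng_seed: int = 0):
    """Mutate a known-good seed input and record every mutation that makes
    the target error out, so each failure can be replayed later."""
    rng = random.Random(rng_seed)
    failures = []
    for _ in range(rounds):
        mutated = bytearray(seed_input)
        # One simple mutation: overwrite a random byte with a random value.
        pos = rng.randrange(len(mutated))
        mutated[pos] = rng.randrange(256)
        try:
            parse_length_prefixed(bytes(mutated))
        except Exception as exc:
            failures.append((bytes(mutated), repr(exc)))
    return failures


# A valid seed: length byte 3, followed by 3 payload bytes.
crashes = fuzz(b"\x03abc")
```

Replaying any recorded input against the parser reproduces the same error, which is exactly what lets a developer debug and fix the flaw. Real fuzzers apply far smarter mutations (protocol-aware generation, coverage feedback), but the record-and-replay loop is the same.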
So let's get back to the FDA. Codenomicon demonstrated its fuzzing tools (known as Defensics) to the FDA, and the agency was more than a little impressed. As we were informed, the FDA had decided to build a cybersecurity testing lab and wanted to bring in our fuzzing tools as the first of many tools to come. It took a while to get from the initial conversation to the final award, but on July 12, 2013 the FDA posted a solicitation for Codenomicon Defensics, and we were awarded the contract on August 13, 2013.
Needless to say, this has generated a lot of buzz in the medical device industry. The FDA released draft guidance in June 2013 stating that it expects vulnerability assessments as part of the documentation submitted to the agency; it is now building a test lab and will incorporate fuzzing into that lab. This, of course, is part of the answer to how it is going to address cybersecurity.
I look forward to working with the FDA to make sure our medical devices are secure. This is a great first step toward that goal.