As a security professional, I am often intrigued and frequently fascinated by some of the clever things that security researchers come up with. I remember hearing of one presentation where a researcher was able to tunnel into a laser printer that was exposed to the internet, and stop a print job, causing the paper to catch fire in the fuser (at least I believe that is how he did it).
Another interesting case I was made aware of involved a researcher who could remotely access cars parked in a parking lot and set off their alarms. Very clever indeed.
Another researcher showed how he could attack an ATM and get it to spit out cash. Fascinating!
The fact is that the people who build these devices, and who add the features that allow such creative access, design them for functional purposes. The functionality is complex enough to build on its own, so it is not very likely that they consider non-functional uses of such devices along the way. Even if they did, they would likely get some blank stares and furrowed brows if they spent too much engineering time considering how not to use the device.
This, of course, is completely at odds with the way security researchers (hackers, if you will) look at things. They do not spend much time looking at what makes something work, but instead on what makes it not work...or work in a way that was not intended by the designer. As luck would have it (for the hackers), there are literally infinite possibilities in the non-functional world.
This, of course, leads to a lot of ways to potentially entertain and certainly alarm more than a few people. In many cases, there are ways to mitigate some of the risk associated with these findings. In the case of the laser printer, simply segregating the network is fairly straightforward. With ATMs, we can always go back to tellers (until we find a fix). We can always disable the network features of automobiles as well. These "fixes" may inconvenience us, but at least they do not diminish our quality of life in any major way. As humans, we cope well.
From an ethical perspective, one can argue that the work of security researchers is completely necessary to move secure design and development forward. Let's face it...good security happens in a reactive manner close to 100% of the time. Going back to my earlier point, engineers are primarily tasked with building functionality into products. There are ways to do it with security built in from the ground up, but we are still a long way from getting the engineering world to embrace that. We will eventually get there, but not likely until the public consciousness is raised.
Nonetheless, I had the opportunity to review an excellent document recently titled "The Menlo Report: Ethical Principles Guiding Information and Communication Technology Research" (http://www.cyber.st.dhs.gov/wp-content/uploads/2011/12/MenloPrinciplesCORE-20110915-r560.pdf) which "...proposes a framework for ethical guidelines for computer and information security research, based on the principles set forth in the 1979 Belmont Report, a seminal guide for ethical research in the biomedical and behavioral sciences." This report really hit home with me, and I want to explain why, but first let me tell you a couple of stories.
I want to start with a story about flu shots. Back in the 1980s I worked at a resort in Florida and one day I heard that a nice old chap who was one of the dock masters had gone to his doctor for a flu shot (his first ever) and had an allergic reaction to the vaccine, went into anaphylactic shock, and died.
That sealed the deal for me. Despite all the prodding I had gotten from those around me as I grew older, I decided that there was no way I was going to get a flu shot. I mean...c'mon...I DON'T WANT TO DIE OF AN ALLERGIC REACTION!
This sat well with me for many years...until I got the mother of all flu attacks in 2004. I remember lying in bed in sheer misery for two weeks, first fearing that I was going to die from all the pain and inability to breathe...and then, towards the end, almost wishing I would just die. Let me tell you, a cold is NOT the flu. I have had bad colds, and this was the flu, and it was utter HELL.
It occurred to me, once I was feeling better, that the single instance of a severe flu-vaccine reaction I had ever encountered was not a rational justification for those two weeks of misery, which were likely to happen again (and perhaps even kill me as I got older). I certainly like to think of myself as intelligent, but I have a way of rationalizing things to suit my purposes in spite of the broader empirical evidence...and sometimes it bites me in the you know what.
What is perhaps even more alarming is that, prior to my awakening to the benefits of vaccination, I almost prevented my first child from getting vaccinated because of all the hysteria surrounding alleged incidents of autism caused by vaccinations. Were it not for the calm and patient persuasion of the vaccination nurse at the local medical center, who explained that the likelihood of devastating childhood maladies was indeed quite high for my baby if he did not get vaccinated, I might have exposed him to several diseases that have come back in the last decade (namely polio and whooping cough), no doubt at least in part due to the irrational fears stoked by vaccination naysayers.
Fear and uncertainty have a way of getting us to do things outside the realm of reason at times. They are the essence of propaganda, marketing hype, and political circuses. After watching "Jaws" in the 1970s, it was not until I had spent over a decade living in Florida, where the waters are literally thick with sharks, that I realized the likelihood of being attacked by a shark was FAR smaller than the likelihood of getting skin cancer...which several of my Florida friends and associates did contract. Media hysteria and Hollywood stunts have a way of tugging at our hearts and warping reality...indeed they do.
So this brings me back to the point I am trying to make (and thank you for being patient). Lately, there have been more than a few media-rich and Hollywood-like stunts portraying some of the dangers of security flaws found in medical devices, and this is simply not sitting well with me. Unlike printers, ATMs, and automobiles, medical devices give the patients who need them a far better quality of life than they would have without them...and in many cases they keep those patients alive. If one looks at sections C 3, 3.1, 3.2, and 3.3 of the aforementioned report, some very salient points emerge:
C 3 Beneficence
"...the Beneficence principle reflects the concept of appropriately balancing probable harm and likelihood of enhanced welfare resulting from the research. Translating this principle to ICTR demands a framework for systematic identification of risks and benefits for a range of stakeholders, diligent analysis of how harms are minimized and benefits are maximized, preemptive planning to mitigate any realized harms..."
C 3.1 Identification of Potential Benefits and Harms
"...researchers should identify benefits and potential harms from the research for all relevant stakeholders, including society as a whole, based on objective, generally accepted facts or studies..."
"...One helpful approach to identifying harms is to review the laws and regulations that apply to an ICTR activity, and analyze the underlying individual and public interests that the research might negatively impact..."
"Because laws may be unclear or open to interpretation, a narrow focus that only considers acts impacting the integrity or availability of information and information systems might overlook a broader range of harms that may not be explicitly protected by law."
C 3.2 Balancing Risks and Benefits
"...the researcher should systematically assess risks and benefits across all stakeholders. Researchers should be mindful that risks to subjects are being weighed against the benefit to society, not to either the research subjects or the researchers themselves. Researcher actions should be measured using a standard of a reasonable researcher, who exercises the knowledge, skills, attention, and judgment that the community requires of its members to protect their interests and the interests of others.
When ICT is involved, burdens and risks can extend beyond “the human subject,” making the quantification of potential harm more difficult than with direct intervention. It can be difficult to balance risks and benefits with novel research whose value may be speculative or delayed, or whose realized harm may be perceived differently across stakeholders. If there are plausible risks, researchers bear the burden of showing specific, evidence-based consideration that they can manage those risks."
C 3.3 Mitigation of Realized Harms
"Despite appropriate precautions and attempts to balance risks and benefits in ICTR, research may cause unintended side effects that harm stakeholders."
Please understand that I have taken excerpts out of the report, and I encourage readers of this blog post to read the entire document.
The point should be relatively clear by now. The work of security researchers is invaluable, but until the hacking of medical devices, the societal risks never hit home at the level they do now. The fact is that any patient who, swayed by the media hype, chooses to forgo a device in order to minimize the risk of having it hacked is likely to cause themselves far greater harm than the remote possibility of the device being criminally hacked ever would.
It is my hope that security researchers delving into medical device hacking take this under serious consideration, because (to paraphrase Peter Parker's Uncle Ben in Spider-Man) with great knowledge comes great responsibility.