♪ [opening music] ♪ ♪ ♪ >> Ken Hoyme: I'm going to
talk to you today about medical device security. It's going to somewhat blend
into security of cyber physical systems, things that
touch the real world. And we'll just kind of like -
If you haven't felt this way and this is kind of how the security
world tends to kind of see this. And so, if you're
ready, we'll set off. So, as I talked a little bit
about- I started my world in the safety critical world. So, I worked at Honeywell's
corporate research center. So, we were a centralized
research center for all of Honeywell back at the day when
Honeywell was a control systems focused company. So, everything we touched
manipulated the real world. Thermostats, control
systems for oil refineries. I focused on space and aerospace and ended up working quite a bit in fault-tolerant architecture, as I got to spend quite a bit of fun time working on the flight deck for the Boeing 777. Then I got recruited and
pulled into the device world, ostensibly as a safety person. I built a - I worked on the
design of a fault-tolerant back up pacemaker defibrillator
that's in all of our devices. So, it's in - That's where my
patent stuff comes from in the device world. And then I ended up working -
starting to get pulled into the security space as I ended up
becoming the system lead for the system that we use to monitor
patients when they're sleeping in their bed at night, using long-range radios to pick the data up while they're sleeping. Norma has a bedside stand next to her, so our LATITUDE patient-management system monitors her amongst about four
or five hundred thousand others that are currently
on that system. So, I evolved to become
the focal point for product security. Then I went away, as I said,
about three and a half years at a small R&D company, did DHS and
army tele-medicine work where I pulled in medical device
security government research contracts, worked on cyber
[indistinct] for DARPA. During all this period of time
I - AAMI is the Association for the Advancement of Medical Instrumentation. They are a standards and education body for the medical-device industry, interacting with the parts of hospitals that are fielding and managing medical devices. So, I co-chaired a device security working group there, and one of their best sellers was the guidance we wrote on how you do security for medical devices. And then a little over two years
ago I got pulled back into Boston Scientific to drive a
companywide program on medical device security. So,
that's what I do now. I'm going to give a couple of examples
because I realized coming down here - The only thing you may
know about Boston Scientific is we must be from Boston, which
is what a lot of the marketing people keep asking, do I want to
come down to a lunch in downtown Boston? And I'm
from Minneapolis. We have more employees in the Twin Cities area than we do in Boston, but again, we're just coming up on a 10-billion-dollar company. About 10 percent of that goes into R&D, and we treat about 30 million patients a year. We also have a huge focus on being a green company. So, we touch 130 countries, with about 32 thousand employees. Very much a global business. This is the kind of areas
as medical devices that we focus on. The way I tend to describe
Boston Scientific is we've got two parts of the company. We've got plumbers and
we've got electricians. And the plumbers are really good
at making long skinny things that can be slid inside the body
to go and treat things remotely with minimally
invasive capabilities. So, it's heart stents - putting a stent in the heart that goes in through the leg or through another access point. And more and more, we're delivering energy remotely through lasers - you know, laser fibers that go in to treat kidney stones or cancer. We'll do cardiac
ablation to get rid of arrhythmias or ablation of
other locations, so we're getting more
and more broad in that. But in general - And then, the
electricians are the implantable devices that we do in terms
of deep brain stimulation, spinal cord stimulation,
heart stimulation, subcutaneous defibrillators - and then a mixture of mapping arrhythmias: how do you do 3D mapping of hearts? So, again, everything we're
touching is either treating patients either actively while
they're ambulatory by having it in their body during
a procedure, or diagnosing and providing
information to help physicians make accurate diagnoses.
So, we touch the human. I just wanted to make sure
that people understood a little background of Brian and myself. So, this is Brian at his 4th
birthday and you'll notice the space shuttle on here and
his Lego Space Shuttle there. So, I took this picture
back in that time. Now, about a year later
when I was at Honeywell, I got to be over
this other side. I took the picture on this side.
That's the real space shuttle. That's the space shuttle
Challenger being serviced in the orbital processing facility. About a year-and-a-half
after this was taken was when Challenger blew up
on launch in '86. So, this is kind of - But, we
ended up resonating early on because we were both
into space shuttles. So, let me talk a little bit
about - you're welcome, Brian. [laughter] I showed Maggie this yesterday
so she already knew it was coming.
“Don't tell Brian.” [laughter] And that is not the most
embarrassing picture I could have gotten. [laughter] I was nice. It wasn't
embarrassing at all. It is pretty damn cute. Anyway, we'll talk a little bit
about the threat environment in healthcare. I am sure if you read the security literature, or read the popular press, you're going to hear all
the stuff going on related to healthcare and
how insecure it is. Mostly, we have not seen
targeted attacks against the healthcare environment - where somebody's written custom code to go after it specifically. So, a lot of the warnings of things - even the situation with Medtronic that happened late last week - was really about researchers looking at what could potentially be done if somebody were to field something to intentionally attack these. Mostly what we've gotten
has been what you call drive-by exploitation. An awful lot of medical devices
- and we'll talk about this a little bit later as well -
tend to go with commercial off-the-shelf software,
Windows operating systems, things that are easy to build,
and as a result they share the vulnerabilities of those
underlying platforms. And so, something put out to
exploit those vulnerabilities will catch a healthcare
device because of the shared infrastructure, not because
the developers wanted to attack medical devices in
the environment. There has been some ransomware deployed to hospitals' EHR systems that looks like it may have come from targeted phishing attacks against the hospital,
but the underlying malware that they were using in the phishing
attack was not built to attack an EHR system or something
that was uniquely found in the healthcare environment. It was designed using
garden-variety kinds of things. So, again, we haven't seen that,
but a lot of healthcare has been affected by NotPetya,
WannaCry, Conficker. I put the NT symbol up here
because we'll talk a little bit about how long devices that
are cyber-physical systems and medical devices get used
and the tension between the software they use as their base versus
how long they're going to be used in the system.
The other left click. Two primary threat actors and
a lot of the thought you go through in terms of
threat-modeling in a world is I think- What are
the threat actors? What are their capabilities? What kinds of things do
I need to worry about? So, nation-states - And five
years ago everyone would say, “Well, you know, you can't
tolerate - If a nation state decides to come after
you, you're toast. So, I don't worry about them
in my threat model.” Well, what we've seen in both NotPetya
and WannaCry in 2017 was that they were North Korean and Russian attacks. And they weren't, again,
necessarily deployed against healthcare, but they used
exploits that caught them. So again, what nation-states
tend to be after is either staging capabilities to
potentially use in the future if a cyber-war broke out. Or a lot of it is data
information-gathering. They're trying to find out -
If they can find out health information that's embarrassing
about somebody that has a sensitive position that
could be used to blackmail. So, they're collecting
lots of data. So, they're sweeping up massive
amounts and filtering through for that kind of intelligence. But they have a
lot of capabilities. And then, cybercriminals have been the other area, which is people looking to make money. So, their output of
it is what can I sell. So, whether it's credit
card information, whether it's healthcare records,
whether it's insurance information, or is it ransomware
that can cause people to pay money to get back a resource
that is important to them? And certainly, in the healthcare
business the health records are a prime potential target for
cybercrime because today's modern hospital runs on the
electronic systems that keep it going, and to take out the
EHR can cripple a hospital, and we've seen some examples
of hospitals being down or in reduced mode for a week
while they are dealing with their backup. So, another lesson in the security operations world - and I don't have this on a slide - is there's a term called Schrödinger's backup. And Schrödinger's backup is, basically: any backup that has not been tested for recovery does not exist. You know, if you haven't actually checked. So, a lot of hospitals were running on Schrödinger's backups and found the ability to roll back was not nearly as easy as they thought it was.
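As a concrete illustration of that "test your backups" point, here is a minimal sketch (mine, not from the talk) of an automated restore check: restore the latest backup into a scratch directory and compare file hashes against the source before you ever need it for real. The restore command and paths are hypothetical placeholders.

```python
import hashlib, pathlib, subprocess, tempfile

def sha256(path: pathlib.Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def backup_actually_restores(source_dir: str, backup_id: str) -> bool:
    """Restore a backup into a scratch directory and verify file hashes match."""
    with tempfile.TemporaryDirectory() as scratch:
        # 'restore-tool' is a placeholder for whatever backup system is in use.
        subprocess.run(["restore-tool", "--id", backup_id, "--to", scratch], check=True)
        for src in pathlib.Path(source_dir).rglob("*"):
            if src.is_file():
                restored = pathlib.Path(scratch) / src.relative_to(source_dir)
                if not restored.exists() or sha256(src) != sha256(restored):
                    return False   # Schrödinger's backup: it did not really exist
    return True
```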
So, what you're seeing is a blending of these threats. What you're seeing in Russia
is cybercriminals that the government looks the other way,
or may hire them for one-time jobs as long as what they're
doing doesn't impact the Russian government or
their infrastructures. But elsewhere in China you're
finding people who work for hacking groups for the
government who then on the side will freelance, given the tools
and things that they already have access to through their job
to do cybercrime work that they might do to make
additional income. So, we're seeing - This is
probably within the next couple of years in the healthcare
environment, the thing we're worrying
about and watching the most. Unfortunately, they're the most
sophisticated - Hacktivists: We've seen a
couple of incidents. There was a child custody
case at Boston Children's four years ago. And the group Anonymous did a
denial-of-service attack against the public website of
Boston Children's in order to, kind of, be as a protest against
this custody issue of this sick child until people in various
positions said you know, “Anonymous, you're talking about
the health of kids.” And they went “oops” and they pulled
back their DDoS attack. But still, on script kiddies - mostly, in this world, the researchers that you see post things on devices but don't post scripts that make it easy for people to replicate them. They try to contact the FDA and the companies to fix it, painting the broad picture of what could be done, but not posting the code to make it easy to do. Disgruntled employees, and
my recommendation always on disgruntled employees is try
your best to keep them gruntled. And just plain old user error,
and user error can even be in the form of social engineering
and being aware of what kind of information - So, there's a lot
of companies out there that will use common administrative usernames and passwords for their equipment - and then, I think I'll use an example here of some hacks. Mayo Clinic hired some
researchers to come in and just see how their devices would
behave and one of the things that surprised Mayo was
these researchers used social engineering to call up the
support lines of companies and pretend they're docs and get the
companies to give up information like the passwords and things
they needed to access it, because nobody would pretend
to be a doc, would they? So, again, learning lessons
for the industry about how those kinds of things can be done. I'm going to touch briefly - I've got
kind of a potpourri of topics to get you through today, to
get you a picture of this. And this is a you know I was
laughing with Brian this weekend because the hand soap in his
guest bathroom was a brand called Method. And the scent, or whatever, was called Waterfall. And, so, I said, "Oh, do you have a Method Agile?" And, so, that's what he's had to
grow up with and may explain a little bit about why
he is the way he is. But anyway, many of the students
nowadays probably have only heard of agile and haven't
necessarily heard of the waterfall model. This is a classic waterfall
model where phases kind of - You do one. Complete
it. Move to the next. Much of the regulated world, and
I'm not just saying - When I say regulated, it's not just
medical device world, though this is from FDA quality
system regulation documentation. But whether you're in aviation
or nuclear reactors or the like, they tend to characterize the elements in a waterfall mode, without saying that your development and design processes necessarily have to follow it literally - you know, full big-bang, where you don't start testing until you have the whole thing together. But, kind of, the idea in this
one is - and I've wanted to bring this - because as you see,
all the stories about what's going on in the security
world when the next step is, “How do you patch them? And what is the expect - Why
is it that device manufacturers aren't just patching things in
24 hours, like some hospitals ask us to do?" And it really comes down to these two steps, as we talk about verification and validation. And this kind of just
automatically shows that verification is proving that
what you built meets what you said you were going to build. Validation is: Does the end
system meet what the customer needed? So, a physician - So,
often in validation exercises, if it's a surgical device we
will go through practice surgeries on a dummy and
observe whether or not a trained physician doing it actually
achieves what they were supposed to. That's validation. Verification is going and
verifying requirements, design specifications. Does this thing respond in
this particular period of time? So, everything that's captured
in terms of requirements we've demonstrated is actually
doing what it's supposed to. And the reason that I kind of
pointed that out - I think I talked a little bit is - When
I talk about updates is because we have these old devices.
Because we still have NT. What has been traditional in the
regulated marketplace is - the reality is we're
really good at safety. And we test the heck out of
systems and are pretty convinced that they're safe. And traditionally we update
software in a medical device only when something happens in
our experience in the field that has demonstrated that our
safety analysis is wrong. There's a safety issue that we
need to address that we didn't know of. That is infrequent,
thank goodness. But as a result, we tend to not
necessarily have the mechanisms staged for rapid reverification. Validation doesn't necessarily
change because if I'm doing a vulnerability patch, I'm not
changing the correct function of the device, I am removing
incorrect functionality. A buffer overflow that's sitting in a library that's now been patched by Microsoft - you never want that buffer overflow to be used for the wrong purpose. So, if I add code that bounds-checks the buffer, it no longer has a buffer overflow. All of the normal cases should still operate in the same way. So, we don't need to revalidate.
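To make that concrete, here's a minimal sketch of the bounds-check idea (the packet handler and buffer size are hypothetical, not from any real device): the patched code rejects input that would overflow the fixed buffer, and every normal-length packet behaves exactly as it did before, which is why it's reverification, not revalidation, that dominates.

```python
MAX_PACKET = 64  # hypothetical fixed telemetry buffer size

def handle_packet(buffer: bytearray, packet: bytes) -> None:
    # Patched behavior: refuse anything that would overflow the buffer,
    # instead of writing past the end of it.
    if len(packet) > MAX_PACKET:
        raise ValueError("oversized packet rejected")
    buffer[:len(packet)] = packet   # normal cases are unchanged
```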
But essentially, we still need to do that reverification. FDA is really trying to get out
of the way and not be blamed as the molasses slowing
down update, so they've made some real clear
expectation about situations where we as manufacturers can
make updates and go to the field without asking FDA's
permission in advance. They still expect that
validation to be done, and I think the thing that
industry's wrestling with when you hear slow response is
because we only did validation and verification infrequently,
the processes to do them may not be terribly efficient. I would assume that in some of your classes related to Agile, you've talked about things like continuous testing. Building these systems so that you can make these kinds of patching updates and rapidly know that you have not broken your verification tests is an important thing the medical-device world and other regulated worlds need to learn, so that we can do those kinds of updates quickly. But that's a lot of the reason why you hear foot-dragging on many of those.
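For example, a single automated verification test in that kind of continuous-testing suite might look like the pytest-style sketch below, re-checking a response-time requirement after every patch. The device fixture, requirement wording, and the 200 ms budget are all hypothetical.

```python
import time

RESPONSE_BUDGET_S = 0.200   # hypothetical requirement: respond within 200 ms

def test_status_response_time(device):
    """Re-run after every patch so a security fix can't silently break timing."""
    start = time.monotonic()
    device.request_status()                    # stimulus named by the requirement
    elapsed = time.monotonic() - start
    assert elapsed <= RESPONSE_BUDGET_S, f"timing requirement violated: {elapsed:.3f}s"
```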
I know that in the case of our implanted devices, like Norma has, for the implant to go through all of our tests takes a team - I don't know how many people are on the team - on the order of three calendar months, just to run the tests, because of all the safety tests. So again, not all of those tests need to be done if you're changing a networking thing. That said, implants tend to be bare-metal devices - there's no Windows or anything, they're coded straight on the metal - so we're not typically having vulnerabilities come about because Microsoft issued a patch. But it's worth understanding that. So, I'm going to give you
a quick check on time - so it's 1:40; what, 1:50? An hour and a half, till 3:00, okay. I just wanted to make
sure I'm gauging, and if there's questions do feel
free to jump in with things if you need. I can manage that. So, if you've seen the history
of hack some of the stuff, I'm going to just talk a little
bit about what has hit the medical-device world. In 2008 was kind of
the watershed event. A team at the time from
UMass Amherst and the University of Washington published - well, got a paper accepted into IEEE Security and Privacy's annual conference. It was held in May 2008, but in March of 2008 the agenda got announced, and the blogger world - a blogger - saw it and started blogging about the fact that this topic was about hacking defibrillators. Shortly thereafter a couple of small publications, I believe it was the<i> New York Times</i> and<i> Wall Street Journal</i>, picked it up and were looking to do stories
on it, and it blew up. They basically demonstrated that
they were given an old Medtronic device that had been explanted out of a patient - so, you know, it had limited battery life left, but that wasn't an issue in their research. But devices - they
always talk about well devices now are wireless. The pacemakers and
defibrillators have been wireless since the early 70s
because the absence of wireless means you got to physically
touch the device under the skin. And early back in the sixties
the early pacemakers had one setting. You could change the pacing
rate, and that was done with a screwdriver. So, if you wanted to do it after it was implanted, you had to locally anesthetize the area, make a little cut, go in, attach a screwdriver, and literally change the pacing rate physically by turning a screw on the device. You generally don't really
want to do that very often, so these systems have
been near-field inductive. I'm an electrical
engineer by training, so that basically means you
have two coils of wire that are planarly located over each
other and a magnetic field is generated through the coils
and you communicate with them. They have to be within about an
inch of the skin for that link to happen, and that is what
they hacked without - and did a classic well you know RF can be
made to operate from a distance, but this is a 1/r q
drop off with energy, if you remember
your physics class, so 1 over- the cubed distance
standard RF protocols up in RF range were using more
electromagnetic waves and in terms of that is 1/r squared
and energy drop, so it's a lot easier to make a
high frequency RF system work from a long distance
then it is an inductive. So, I thought that with all
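A quick back-of-the-envelope comparison of those two falloff rates (using the exponents quoted in the talk, purely for illustration) shows why the inch-range inductive link is so much harder to reach from across a room than a far-field RF link:

```python
def relative_signal(r_near_m: float, r_far_m: float, exponent: int) -> float:
    """Signal at r_far relative to r_near for a 1/r**exponent falloff."""
    return (r_near_m / r_far_m) ** exponent

inch, one_meter = 0.0254, 1.0
print(relative_signal(inch, one_meter, 3))   # inductive link: ~1.6e-05 of the 1-inch signal
print(relative_signal(inch, one_meter, 2))   # far-field RF:   ~6.5e-04 of the 1-inch signal
```

So going from one inch out to one meter costs the inductive link roughly forty times more signal than it costs the RF link, which is the intuition behind where the real long-range risk sits.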
So, I thought that, with all the flaws in the research, they really didn't show how they could make an attack - which they demonstrated from one inch away with a software radio - work from any practical distance. The real risk goes to these long-range RF links, which had just started emerging on the market, but weren't in the device they had. Later they said they'd demonstrated the same attack using the long-range RF link, and
last week's report shows similar behaviors in their long-range
RF link today. Yeah Ryan? Oh, we had a question
over here, thank you. >> [Audience member]: Is the
frequency for communication standardized? >> Hoyme: Good question. Most device manufacturers - us different, initially - went to a 405-megahertz band called MICS, the Medical Implant Communication Service ("mix" or "mikes," depending on whether you're from here or there - you say potato). We used an ISM band up in the 900-megahertz range, and ultimately, because there were certain countries that would not approve those higher frequency bands, we also joined the 405, so pretty much everyone is at 405 megahertz. The evolution in the industry is toward Bluetooth Low Energy, so we're looking at security protocols on top of standard Bluetooth Low Energy. The main issue with these is that a pacemaker is designed to last off the battery that's inside the sealed can - a little thing - for 10 years. So, the energy you can put into telemetry is very limited, or you shorten that life cycle. And when you replace the battery, you replace the whole device, and the infection risk for a replacement is on the order of 2 percent - you'd get a nasty infection - so long life really has some overall benefits to the patient. So, when people talk about encryption - full [indistinct] encryption, like last week - that takes battery energy. There really is a battery trade-off going on: if you shorten a 10-year device down to a six-year device and you look across a hundred thousand patients, how many infections happen because I did that many more replacements? So it's not dealt with lightly.
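Here's the back-of-the-envelope version of that trade-off. The 2 percent infection figure and the hundred-thousand-patient population are from the talk; the 12-year follow-up horizon is my own assumption, for illustration only.

```python
PATIENTS = 100_000
INFECTION_RISK_PER_REPLACEMENT = 0.02   # figure quoted in the talk
FOLLOW_UP_YEARS = 12                    # hypothetical horizon

def expected_infections(device_life_years: float) -> float:
    replacements_per_patient = FOLLOW_UP_YEARS / device_life_years
    return PATIENTS * replacements_per_patient * INFECTION_RISK_PER_REPLACEMENT

extra = expected_infections(6) - expected_infections(10)
print(f"~{extra:.0f} additional infections")   # ~1,600 extra across the population
```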
But yeah, anyway - 405 megahertz is an international frequency that has
been standardized. >> [Audience member]: Is there
a reason why they're not using microwave frequencies? >> Hoyme: A lot of it
tends to be depth of skin. There's certain attenuation that
happens with fat-tissues and moisture, so getting frequencies
that can penetrate into the skin. And one of the things that
we've actually- was interesting learning, we learned on
a mechanical problem, is there are certain parts of
the world where for aesthetic purposes they will implant the
device under the first layer of muscle rather than in the fatty
tissue so that it hides it more, which puts it deeper
underneath more tissue, so you have more energy
loss in talking to it. It also means when it's under
the muscle that the can gets twisted every time you do this
and that sets some stiffness requirements - it has to tolerate that kind of an implant approach. Good question. So anyway, that was the first
one and I will just make a brief point because it wasn't in the
slides - because we try to avoid too much flag-waving - but we put our first RF protocol with long-range devices out starting in 2005 in Europe, 2006 in the United States, and they didn't have these security issues. It's good
to be paranoid. The next line of attacks
happened in 2011, a researcher by the
name of Jay Radcliffe, who himself was a diabetic and wore an insulin pump, decided, "Gee, is my own pump hackable?" And he discovered it was, and presented it at Black Hat in August 2011, and everything kind of exploded related
to that as an issue, and if you ever have a chance
to dig into Jay Radcliffe's- the slides and things that he did
and some of the presentations on it. It's fascinating the
persistence he showed. He ended up figuring out who the chip vendor was - TI, I believe, was the chip vendor used for some of the internal electronics. And he social-engineered TI to
get specs and manuals on the processor so he could kind
of learn how they work. So, it's like you kind of think,
“Oh this is all too hard” until you see well what happens when
somebody birddogs it and decides - they decide to focus
on you every system has vulnerabilities. So, it's to a certain extent
your first goal in implementing a secure system, is to make sure
that all of the standard things they're going to
try first don't work, because they will try all the
standard things and depending on how many systems they have in
their potential field to try, if the simple things don't work
on yours they may move on. Now if somebody has a particular
reason this guy was wearing a Medtronic device he had a very
particular reason to understand Medtronic you know somebody
happens to grab onto a specific system and beat on it, it's a
deeper issue about how strong it needs to be to make
sure they don't find it. But anyway, Jay worked
for us for a while, so I've gotten to know
Jay very well. Good guy. Very well intentioned in terms
of it was personal to try to improve the industry
kind of thing, which is what I found about
a lot of the researchers that are out there. Barnaby Jack was a
very famous researcher. Type in Barnaby Jack if you want
to see his most dramatic thing that hit the news waves: it was at one of the DEF CON or Black Hat conferences. He had an ATM machine on stage and hacked into it and had it spitting out twenties onto the floor - so, he was a showman. In that regard, he was with
McAfee at the time, and he demonstrated - where Jay Radcliffe's hack required him to know the model and serial number in order to affect the pump, he figured out how to extract the model and serial number from the device and use it to connect. And from a hundred yards away, on a clear dummy with clear fluid inside and the insulin in the pump dyed blue so it would be dramatic, he showed - from a long distance away - getting the pump to dump all of its insulin at once. So again, showman
combined with it. So, it started- the whole
industry starting to take this much more seriously as
they started getting, and again it's always been this
love-hate relationship between the people being hacked and
the researchers hacking them; which is the researchers
stock-in-trade can be reputation which allows them to charge
higher rates for their consulting services
or gives them credit within their industry. But they can be very well
meaning and they always feel like they approach manufacturers
and they get strong-armed or the comment that researchers say is
they hate it when they contact a company and the person who returns their call is a lawyer. They don't feel like they're being listened to, rather than having a
technical person call. So again, if depending on which
side of the world you might end up in, if you're working
in security research, just be willing to listen and
be- and listen to them and try to direct their interest in a
way that is- it is productive for all because in the end
largely is what they're intending too. Billy Rios is another
big name in this area, has been a prolific hacker in
both the medical device space, but if you've read any stories
about NASA and the Department of Defense funding, some hack work
on a NASA 757 on the ground, and some of the success that
happened that's been still classified, Billy was on the
team that hacked through the airplane as well. Anyway, they started looking at devices that were using hard-coded passwords. If you use hard-coded passwords and compile them into your code, a pass through "strings" and grepping for things like "password" easily uncovers, inside the binary code, what the password is that's been stored - compiled - into the code.
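The technique is roughly the classic "strings piped into grep." A minimal Python sketch of the same idea (the file path comes from the command line; the keyword list is illustrative) looks like this:

```python
import re
import sys

def printable_strings(path: str, min_len: int = 6):
    """Pull runs of printable ASCII out of a binary, like the 'strings' tool."""
    data = open(path, "rb").read()
    return re.findall(rb"[ -~]{%d,}" % min_len, data)

for s in printable_strings(sys.argv[1]):
    if re.search(rb"passw|pwd|secret", s, re.IGNORECASE):
        print(s.decode(errors="replace"))   # likely hard-coded credential material
```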
So - there's a lot of update software that's put out by manufacturers on their websites, so they started downloading it and running these checks. They said they stopped when they hit 300, and kind of went, "Who do we tell?" And they told the FDA and ICS-CERT. So, if you haven't talked about those organizations: U.S. CERTs are Computer Emergency Response Teams, and they are government-funded groups that coordinate the reporting of vulnerabilities and the response by those responsible for the software, through to patches getting issued. So, if you see a CVE number next to a reported vulnerability, those CVEs are managed by MITRE, which is funded by the government, and US-CERT is the organization that tends to have the website that talks about them. ICS-CERT came under
Department of Homeland Security, and ICS stands for
Industrial Control Systems. And so, it was initially created
for vulnerabilities around systems that manufactured
chemicals, and other things where if they
go wrong you could have a Bhopal or another kind of
disaster on your hands. So, ICS-CERT publishes
the things related to that. ICS-CERT was given the
responsibility to pick up medical devices in healthcare,
because of its relationship to the Internet of Things and cyber-physical systems. They didn't change
their name from ICS, but if you go to the ICS-CERT
site and look at their alert list, you'll see a mix of
Siemens and Honeywell and Rockwell and things related to
automation systems, and Medtronic and Philips and Boston Scientific and BD for medical devices. So, they're the
people that kind of do it, so they basically went to the
FDA and ICS-CERT which issued an alert, and the FDA issued
guidance on what they needed that day. It was kind of a
force multiplier to get the industry to act. This is kind of a slide
used- not kind of a slide, this is a slide used
by Mayo Clinic. After- I referred to earlier
about them having people in, and the social engineering
surprise that they had. They had, does that say 45
devices up here that they looked at? Yes, for five days. They didn't put
this on the slide, but from one of the researchers
that I know that was in the group doing this- my
understanding is the approximate number of devices of those
45 that they were able to successfully hack into and
break- if I remember right it was 45, that was it. So, none of them
were invulnerable. So, the other one that
happened was back in 2015 against Billy Rios again. Billy Rios had grabbed on and
he had shown an ability for a particular Hospira pump to be accessed from outside the hospital. So - infusion pumps sit on hospital networks to get access to a drug library. There is a library that defines drug types and what their maximum and minimum dosage rates can be, and it's used when you put an IV bag up on a pump next to a patient - they pair up and make sure it's right. It's supposed to be a safety check.
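Conceptually, that safety check is just a range lookup against the library. A minimal sketch (drug name and limits invented for illustration; real libraries are hospital- and drug-specific) might be:

```python
DRUG_LIBRARY = {
    # hypothetical entry: allowed infusion rate range for one drug
    "heparin": {"min_rate": 1.0, "max_rate": 40.0},
}

def rate_allowed(drug: str, rate: float) -> bool:
    limits = DRUG_LIBRARY.get(drug)
    if limits is None:
        return False                       # unknown drug: refuse to program the pump
    return limits["min_rate"] <= rate <= limits["max_rate"]
```

Which is exactly why an attacker who can rewrite the library, or the pump's view of it, can quietly move those limits.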
But there's a locally-hosted server where the drug library can be tuned by the physicians' staff of that hospital. So, in order to get the latest drug library, the pumps are on a network to
allow them to download the drug library, and so they found that
between those manipulations and its presence on it, they were
able to make infusion pumps change their flow rate
from outside a hospital. So, it becomes all of a sudden,
a potential global attack. So, the FDA strongly encourages
you just stop using those pumps. So, when the regulator basically
starts telling all your customers to stop
using your equipment, that's not a good thing. I will note again in the issue
of being taped and that, Hospira was, I believe,
purchased or acquired by a company called ICU Medical,
who has been very adamant about revamping their product line. There was a biohacking
village at last year's DEF CON. And there was- specific note was
there was new devices produced by ICU Medical
that have been much, much better than what
was being done back then. So again, it's a lesson where
being whacked over the head in public by your regulator
forced behavior to change. Just a couple more examples
and then we'll move on to, “What do you do about
this?” Trinity Health- they were hit by WannaCry. Remember, right, WannaCry was
ostensibly a ransomware that was actually a lockerware, and
they had 2500 devices. Interesting thing that they
noted that was the point of the conference- that the site CISO
from Trinity was talking about was, this was a variant that
hit two weeks after the original WannaCry variant. If you remember the original
WannaCry variant had a kill switch built-in and there was a
researcher out of the UK that figured that out by
reverse-engineering some of its behavior and that kill switch
was whether or not a particular IP address or a particular
domain name would resolve on the DNS to an IP address. So, essentially, they would
check a domain name, run it through DNS,
and if it came back with nothing there, it would continue to do its hack. If it responded back
with an IP address, then it would say, “Okay,
kill switch is active.” So, this kid went out and quickly
registered the domain for this particular string- DNS string-
and all of the WannaCry devices stopped because they saw it.
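The kill-switch logic itself was trivially simple - roughly the following sketch, with a placeholder domain rather than the real WannaCry string:

```python
import socket

KILL_SWITCH_DOMAIN = "example-killswitch.invalid"   # placeholder, not the real domain

def kill_switch_active() -> bool:
    try:
        socket.gethostbyname(KILL_SWITCH_DOMAIN)
        return True     # domain resolves: someone registered it, so stand down
    except socket.gaierror:
        return False    # no DNS answer: keep going (the original behavior)
```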
I believe he's still being detained here in the United States, because he was at Black
Hat then a year or so later, and he was wanted for some other
hacking he had done in the past, and so he was
arrested by the U. S. government shortly
after leaving Black Hat, so. I follow him on Twitter-
he's an interesting person. Anyway, this was one where the
kill switch had been deployed without the kill switch, and
their thing was okay- they were quick to go to their
manufacturers, say, “What are you
gonna do?” Well, in two weeks to my previous
comment about verification, validation, how
all that process is, it was a surprise to them, but
no surprise if you understand the industry that the device
manufacturers haven't necessarily put that patch
on all of their devices, because this- if you remember
WannaCry exploited EternalBlue, which was a NSA hack that had
been hacked by the Russians, had been exposed by the- what
was the hacking group that was out there trying to sell them
on the dark web and then finally they exposed them. Microsoft had patched these
about 2 months earlier, which says through Microsoft
has never acknowledged it, that once the NSA realized that
their hacks had been stolen, they had gone to Microsoft and
told them what they had broken so that Microsoft could
work on patches, so that before they got out in the wild
there would be patches, so. But reality is there's a lot of
systems on the internet have not had the March patch when
the May breakout happened. When I heard this, it was
five months after this. They said they still had
1/3 of their devices that weren't patched. So, it's a real challenge for
operators of these kinds of networks of how you manage
networks of devices that are critical for lifecare. You are going to sleep
tonight okay, aren't you? You're going to
sleep okay tonight. When we talk around family
things she always gets you know, I always want to make sure
I'm not making you too nervous, but. We're getting
home late tonight so. Nuance Healthcare- if you
ever use Dragon Dictation you're familiar. Nuance
makes Dragon Dictation. They have a vertical for
healthcare and they had about 60% of the U.S. hospitals
signed up for their service. It's a transcription service
used during surgery so that physicians and things can be
narrating what they're doing and finding and having it
transcribed so they can just edit transcribed notes rather
than after it having to sit down and type or deal with- or
transcribe or dictate all of what's- what they
remembered having happen. Well, NotPetya was in June
of that year and that was wiper ware. If you remember, that's
the thing distributed to- through Ukraine. It was essentially the TurboTax
equivalent in Ukraine for filing taxes for the Ukrainian
government and the update server- this was a
supply chain attack. So, if you read yesterday about
anything if you- depending on how much you're following
cybersecurity, like I am. Asus was reported yesterday as
having had a supply-chain attack where their update server had
been corrupted and was serving bad updates. Well, this is an example where
the software vendor in Ukraine for this software had an
automatic update server that was corrupted and issued NotPetya
into it which got distributed so all around the world. NotPetya was one that hit Nuance
and they had to go down and rapidly shut down the company. And the comment from the CISO of this company at the conference was: we're not talking about separating yourself from the internet; we're talking about shutting your internal network down, because it was going laterally from computer to computer inside once it had gotten in. It was not the outside connection - that was already cut. The root cause - I don't remember if it was at Nuance or at Merck, which was also hit by it - but for one of them, the root cause they realized for why they got hit and some of their competitors didn't was that it used credentials that had been left on the computer that got infected, and used them as network credentials to try to log in. They were unlucky that one of
the last people that had logged in to the computer
that got infected, had administrative credentials
so their NotPetya variant happened to be walking around the company network with administrator-level credentials, which caused it to go [explosion sounds]. Anyway, they had to shut
down - their phones were all IP phones, their email was all on servers, and the phone numbers of everyone they needed to contact were in Outlook. So, the lessons here, they said, were: have another means of knowing cell phone numbers and things like that for all your key people, so you can actually assemble a team to recover, because you don't realize how dependent you are on your network
until it gets shut down. They ended up scrapping out
6,000 computers and they were offline for six weeks; I think it was more than a billion dollars' worth of cost. The amazing thing - which I've
never heard a good root cause presentation for-they were
evolving to a system where the transcription and everything
was flowing through a cloud- based service. So, the NotPetya malware did not
flow through the cloud links and infect their customers. It may have been through the
administrator credentials that intervened the block but, had
they also infected 60% of the hospitals through those links,
it would have been a huge liability and safety issue to
have that many hospitals in the U. S. all impacted
simultaneously, so. Mayo Clinic, one of the leading
places in terms of testing and dealing with it, they have now
set minimum requirements as expected. So, again I kind
of view these as well, its Mayo asking it of
the healthcare system. They're thinking about, if
you're designing a system that's going to be dealing with
it, what other kind of basic, when you talk about
basic cyber hygiene, keep on a supported OS,
push OS updates down. They always talk
about antivirus. I really think, in Internet-of-Things, stable devices like this, whitelisting is a better approach, because it's a little more resilient to things that aren't in your AV solution, and you have lots of challenges: each hospital may have a different AV solution they're deploying, and as a manufacturer you have to test to make sure the AV engine is working with all the different types, you know. So, the management of AV is a much bigger thing to deal with than the management of whitelisting.
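The core of the whitelisting approach is small enough to sketch: only run executables whose hash is on a known-good list. The hash value below is a placeholder, and a production allowlisting agent obviously does far more than this.

```python
import hashlib

ALLOWED_SHA256 = {
    "0" * 64,   # placeholder hash of the one application this device may run
}

def is_allowed_to_run(path: str) -> bool:
    digest = hashlib.sha256(open(path, "rb").read()).hexdigest()
    return digest in ALLOWED_SHA256   # anything not on the list never executes
```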
But again: you have to accept third-party software patches, and have no default passwords. They want it to comply with Mayo network account standards. So, they want to be able to ultimately hook you into, I think, Mayo's Active Directory, in terms of using local credentials for that. So again, this is kind
of their expectations. So again, and they have
promulgated this to others. One of the cool things is my
colleague in IT that I work with closely, we stole from Mayo, so
he knows all that stuff from the inside. So, it's good to
have him onboard. Since then we've seen
BlueBorne, which is a Bluetooth vulnerability. We saw various data breaches
in Equifax and ransomware. Meltdown/Spectre happened early last year, which is a hardware
vulnerability. And so if you haven't- the
thing that was interesting about Meltdown/Spectre is it
was a new class of attack. I mean it's a side
channel attack, so if you've studied- looked
at the side channel it's the ability you know to come in
through an alternate means to get at it. But it was a side-channel attack that was using the caches and the mechanisms trying to speed up modern processors by doing speculative execution. So, since then there have
been a few other attacks and vulnerabilities identified. Then it's like once you point
researcher to- here's another kind of thing you
can think about, but then they start looking
at the little ones, but. So, that's the new normal. So, that's been the world
I didn't get a lot into, obviously last week's
Medtronic discussion, which maybe where questions lie. But yes, those kind of things
have been happening as well. And as well as the short sell on
St. Jude's stock that happened by a financial firm who found
vulnerabilities in their systems and decided to short-sell the
stock and then announced the vulnerabilities to the world
as a way to make money. As most people in
the industries say, it appears to be legal, but
also seems really unethical, but that's me. So, we're going to talk a little
bit about safety and security. So again, I've mentioned the
term cyber-physical systems, and that's the broad
class of things. Whether it's our automotive systems, oil refineries, our hospitals, commercial aircraft, building controls. These are all different things
that are interacting with the physical world, and touch that. So, I think, the next slide
after this will give the demo that if you haven't
seen that one. The U. S. has defined sixteen
critical infrastructures. And so, these are
infrastructures for which our economy, our society depends. There's interdependencies
between these. So, if your communication system
goes down you may have problems with transportation. So, it is very so- but each of
these sectors has been assigned to a government group from
health and human services and some of the healthcare
environment, Department of Homeland
Security, Department of Defense, to look at and understand
vulnerabilities in these systems, and study how to make
these systems more resilient to cybersecurity problems. So, it is said that healthcare
and public health (HPH) is one of the categories for which
we've had lots of government help looking at different
aspects of that. I will be in San Diego next
week at one of their meetings. So, let's see [indistinct] There we are. So, this is an engine, a motor
that's being cyber attacked. They were able to let the smoke
out just with the cyber-attack. So, this is a demonstration
done to show feasibility. Okay. There we're
back, buzz free. So again, that was just- the
referral kind of in the military world is talking about
something going kinetic, and that is, how do I
make something move in the physical world. And so, here's an example
of an engine that was in a [indistinct] backup generator
kind of system. But they went in, basically and
hacked into the control software and started resonating it. How many people here have read
about in any detail the Stuxnet attack that happened?
So Stuxnet was done by U. S. and Israeli intelligence,
both who've denied that they did anything about it. But, if I would strongly
recommend- if you want an interesting cybersecurity story,
read<i> Countdown to Zero Day</i> by- Pardon? >> [Audience member]:
[indistinct] final project? >> Hoyme: Yeah,<i> Countdown
to Zero Day</i> by Kim Zetter. Kim writes for<i> Wired Magazine</i>
and is one of the most cyber-savvy writers out there. She also wrote just last week
she was the one that broke- I'd have to go look up
my Twitter feed, I again follow her on Twitter,
in terms of some of the other articles she broke. But anyway, she did a real
detailed analysis interviewing a lot of people. Anyway, Stuxnet was an attack
on the centrifuges being used to enrich uranium in Iran. And they basically- It
was an air-gap system. There was no connection
to the internet. There were PC's inside the
facility that were connected to the Siemens controllers that
were running the centrifuges in the system. So, they were all Siemens
Industrial controlled, closed-loop control systems. So, they ended up figuring
out what was there. Building malware, distributing
it as I understand on flash drives so that people would plug
in and get their PC's infected by - Hey, free flash drive. The easiest way to
bridge [indistinct] gap, drop flash drives around. The things you should never do
is find a flash drive and put it in your computer. And it infected
and crossed over, and this system, when it powered
up on the PC would look over the network to see whether or not
Siemens equipment of particular types and configurations
were present. And, as I recall, they sought
out the system that managed the software that was downloaded
into the Siemens controllers. And they ultimately deployed
that code and what it did was it took a 24-hour long snapshot of
normal operation and recorded it, and then it started
impacting the speed of the centrifuges while playing
back the recording. You've all seen in all the spook
games where they do the video and they feed the video back
so the people can't see in the camera? They did exactly that but by
computer data going back to the control, so the operators in the
station saw all normal behavior going on while they were playing
with the oscillation speed, and because the centrifuges ran
in a room that was mostly pumped down to vacuum, there was nobody
there hearing the weirdness in the fact that they were running
it up to a high speed and slowing it down, and their goal
was to essentially break the motors of the centrifuges
[indistinct] need to be replaced so they were essentially
not able to. So, that- And a challenge in
the world of hacked code is, once Stuxnet was out, the
ability to reverse-engineer Stuxnet and deploy it in other
configurations against other targets is really easy. And one of the things that
we're seeing is the time between initial identification of
malware and when somebody has redeployed it in some other
configuration for some other purpose is shortening to as
little as 24 hours, nowadays. So, it was Heartbleed that was
an attack against one of the systems and that was about
one month from the time it was published to the point where
they had seen malware based on Heartbleed, so it
keeps getting faster. Back in- another couple people I
follow on Twitter because they continue to do research, but the
two researchers back in- this is 2015 as well demonstrated
how to hack a Jeep. They went out and showed that
they can apply brakes they could do all kinds of things. They hacked into it through the
entertainment system because the entertainment system is on
the same internal buses as the systems that affect the
accelerator and steering and that type of stuff so they
could hack in through the entertainment systems
that have Bluetooth. One thing I would recommend
you not put on your vehicle: there's an OBD-II port - I believe it's underneath the dash - and they use it to read all the status. If you go in and they want to find out why you have this idiot light on, they can pull the data out. That also is a write port - you
can write software and do that. And so, there's a bunch of apps
if you want to check your car's performance, you plug in a
module that basically is a Bluetooth connection to
your phone to an app, and basically exposes your car's
control system to Bluetooth which I'm sure it's secured
really well. Safety. So, Webster's definition
of what safety is like, the condition of being safe
from undergoing or causing hurt. I'll use 14971 here a lot and
in a couple weeks I get to go an ANIE conference and be on a
panel talking about standards, and there's nothing more
exciting in the world than talking about standards and
rattling their numbers off, but I have slipped down the
slippery slope of standards hell and I can quote standards
numbers with the best of them. 14971 is the standard that
drives safety analysis for the medical device world. It's an ISO standard used by
both Europe and recognized by the FDA as the way to do it, and
their definition is safety is freedom from unacceptable risk. Now security you talk about the
staying free of danger or the state of being protected
or safe from harm. So, there's some intertwining
there in terms of the harm. [indistinct] down in
dependability and I know I sent this to Brian to put
up in your new class. It really is a seminal paper if
you're into this sort of thing, but it is really a taxonomy that
looks at how you define aspects of dependability
from its availability. Is the system available
and you do correct work? Is it reliable? Does it do the right thing
across the period of time? Is it safe? How many people are
Marathon Man fans? Oh, go back and watch your old
movies because there's a dentist scene in there where the guy
keeps coming at Dustin Hoffman with a drill, and he keeps
asking “Is it safe?” That's one of my
favorite scenes. Having a daughter
that's a dentist, I always kind of cringe. Anyway, safety is the absence
of catastrophic consequences. Integrity is absence of
improper system alterations. Systems are supposed to be
altered but only by those who are authorized to do so, and
can you fix it and maintain it. So, if you look at security,
we're going to talk to you a little bit about CIA:
confidentiality, integrity, and availability. Well two,
integrity and availability, also are part of dependability. Confidentiality really is
not a dependability aspect. And so, confidentiality when you
hear all the talk about privacy, Facebook privacy, all the things
that are being aggregated, and how you protect privacy - losing that isn't necessarily undependability from the standpoint of cyber-physical systems and how they work. But having something available, and having something maintain integrity, are both essential. So again, there's interactions.
So again, there's interactions. So, loss of security can
cause loss of dependability. So, if you manipulate
the control variables, like what Stuxnet when
they started changing control variables, you can actually make
the centrifuges undependable. If you mess with an infusion pump's settings, you can make it undependable.
deliver drug at the rate that you expected. You can block
ability to maintain. A loss in dependability can
go to a loss of security and there are the certain
aspects of dependability. Poor reliability causes
maintenance that exposes. So, one of the things you have
to think about is if I send a maintenance tech out to a system
and there is patient health, other kind of private
data on that, does your service personnel
have access to it. The Facebook thing that was
discussed last week related to clear text passwords was not
them doing something stupid like storing your password unencrypted in a database. It was their logging system. As they logged login
attempts they were storing the password used on login in the
logging server in cleartext. So, oops [laughs].
Don't do that. We have requirements we deal with in the medical world, which is: as we're doing logging, are we logging the patient name? Do our logs become protected health information, and therefore subject to confidentiality and privacy laws for that information? Because the personnel who maintain security logs may be different from the personnel who are dealing with the function of the device and its operations.
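One common defensive pattern for both problems - credentials and patient identifiers in logs - is to scrub known-sensitive fields before a log record ever reaches the logging server. A minimal sketch (the field names are hypothetical):

```python
import logging
import re

SENSITIVE = re.compile(r"(password|patient_name)=\S+", re.IGNORECASE)

class RedactingFilter(logging.Filter):
    def filter(self, record: logging.LogRecord) -> bool:
        record.msg = SENSITIVE.sub(r"\1=<redacted>", str(record.msg))
        return True   # keep the record, just scrubbed

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("device")
log.addFilter(RedactingFilter())
log.info("login attempt user=jdoe password=hunter2 patient_name=Norma")
# logs: login attempt user=jdoe password=<redacted> patient_name=<redacted>
```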
So again, not thinking about those interactions leads to things like the Facebook exposure. And it's like, "How in the world?" when people talk about it. As I told Brian, I
have this slide. The last time I used some of
these slides was to a talk to the Defense Intelligence Service
and there were people from the other CIA in the room. So, I had to make sure I
realized I wasn't talking about them. So, the CIA Triad is
central to security talk. Integrity is the absence of unauthorized system alterations. Confidentiality is the absence of unauthorized disclosure, and availability is readiness for correct service for authorized actions only. So again, authorized comes
into all of these because the security overlay is who is
authorized to change values, who is authorized to
see the information, who is authorized to ask for
the device to do something. So again, some examples of
all those in the medical-device world. I've got three eye charts coming up. I'm going to leave them up and
blather on about something else while you have a chance to kind
of read them rather than going through all these. But, again, you're giving
the slides out to the class. Yes, so we have them, but, yes. So, you can go
and read all these. But it's- I just basically
look at the what, why, where, when and who of the
three aspects of the CIA Triad. And I think a lot of who would
compromise so confidentiality tends to be the property that
those who want to monetize information go after. They want their unauthorized to
see information they want to get access to it because
that is monitorization. Our issues are reputation
so disgruntled employees can be a source. Somebody else a hacktivist that
decides that you're working for a bad company or somebody
that's doing something that's inappropriate may try to
break confidentiality so that they can embarrass you. Certainly, the Ashley Madison
hack had a lot of reputational-damage aspects to what they were attempting to do with that. So, and then nation-states in
certain nations are looking for intellectual property in
order to- there is a closer relationship between the
government and industry to build up- to steal intellectual
property and create competing businesses in their country. So again, intellectual
property then includes the confidentiality of the code
that's in your device because your intellectual property when
you are fielding a device out into somebody else's system is
embodied in the algorithms in the code that's
inside your system. And if you haven't seen any
YouTube videos or things related to people who can
reverse-engineer systems quickly- with that DARPA
contract I was in, one of our demos was - there's a type of attack called a ROP attack, return-oriented programming. What they do is exploit a buffer overflow to put a series of pointers on the stack and then get the system to execute a stack pop, and those pointers will point you to a set of what's referred to in that vernacular as gadgets. And a gadget is a sequence of code that does something you want and then hits a return. And so, if you can get an entry point right before that code - return-oriented programming is you're trying to find these places that end in a return, you create this gadget chain, you string them together, and by pushing those onto the overflowed stack and getting it to pop them, you can get the code to execute a set of behaviors that it was not designed to do.
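ROP is ultimately a control-flow trick, so here is a deliberately toy simulation of just that idea - not an exploit, and nothing like real machine code: the attacker supplies only a list of addresses of code that already exists, and the "returns" walk that list.

```python
# Each "gadget" stands in for a tiny piece of code that already exists in the program.
gadgets = {
    0x401000: lambda state: state.update(reg=state["reg"] + 1),   # increment
    0x402000: lambda state: state.update(out=state["reg"] * 2),   # compute output
}

def run_chain(stack, state):
    while stack:
        addr = stack.pop()       # each "return" pops the next attacker-chosen address
        gadgets[addr](state)     # existing code runs, then control "returns" again

state = {"reg": 0, "out": 0}
run_chain([0x402000, 0x401000, 0x401000], state)   # chain pushed via the overflow
print(state)   # {'reg': 2, 'out': 4} - behavior the program was never written to do
```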
There are people who reverse-engineer x86 binary code and live in the binary world. This guy was using IDA Pro, which
as I was telling Brian this weekend now has academic
licensing for their two old- but it is one of the very
sophisticated tools to lift binaries and analyze them
for where data blocks are, where all the various
other things are. And he just started searching-
for he knew the binary codes of what he was looking so he
would be searching for them and basically interactive while as
a demo built a ROP attack in 15 minutes. And it was like- this was
supposed to be a really, really sophisticated attack. The fun divergent story that
I'll just- for those who are <i>Simpsons</i> fans this guy worked
for a company called Cromulence and if you know<i> Simpsons</i>
the term cromulent was a <i> Simpsons</i> terms. So I always thought it was
ironic or fun that this company that he was working for named
themselves after a fake <i> Simpsons</i> term. The guy had a mohawk and his
laptop had a sticker on it that said my other computer
is your computer. So, he was hardcore. So there are cryptographic code obfuscation techniques that can make instances different, there's ASLR, and there are other kinds of techniques you can use when deploying code. So, if you are in an environment where the intellectual property embodied in your code is important to your business, examine code obfuscation techniques- cryptographic code obfuscation techniques if necessary- to protect the intellectual property. Integrity, again, in our
world, therapy settings, it's like, does the pump,
pump at the right rate? Does the heart beat
at the right rate? There's all the various
different thresholds. The settings that a physician
intends based on the disease diagnostics are important. It's like, are you taking
the right dosage of your prescription? So, settings
of therapy systems. But, because physicians tend
to trust computers and the data they're given, if you can
manipulate the diagnostic data to lead them to a bad decision,
that can also be bad. So, again, the things
you need to do. And also, a lot of deployment of
asymmetric key cryptography in these systems. But if companies are sloppy
about dropping hard-coded passwords in binary code, they
probably haven't done a really good job about securing
certificates and keys. So, understand TPMs and other kinds of hardware protection mechanisms for key storage and key management. Asymmetric cryptography
is great. There's great libraries.
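As a minimal sketch of that point- assuming the Python cryptography package, with an invented therapy-settings blob- the signing and verifying calls are the easy part; the comments mark where the key handling actually matters.

```python
# Sketch: sign and verify a settings blob. The libraries make this trivial-
# the hard part is that the private key should live in a TPM / secure element
# on the real device, never hard-coded in the binary or a config file.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

private_key = Ed25519PrivateKey.generate()   # in practice: generated and held in hardware
public_key = private_key.public_key()        # the public half ships with the verifier

therapy_settings = b'{"rate_bpm": 60, "mode": "DDD"}'   # invented example payload
signature = private_key.sign(therapy_settings)

try:
    public_key.verify(signature, therapy_settings)      # integrity check before applying
    print("settings authentic")
except InvalidSignature:
    print("reject: settings were tampered with")
```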
It's all about the keys. So, this starts being someone
willing to risk patient harm. Targeting [indistinct]. Now, you're talking integrity,
you're talking about messing up things that are directly
related in our world to patient's health. So, you can argue an integrity
loss is also when ransomware encrypts all your data. You've basically lost the
integrity of the data so that would be an integrity attack. It's also an availability attack
since when it's in an encrypted state you are unable to use it
so it becomes a [indistinct]. And this is one thing that
in a traditional IT world, sometimes your best answer
when you detect something anomalous is to shut down. You know, stop what you're doing, because staying up puts you at risk. If medical devices are being
used to support life you're running a ventilator that's
keeping a patient alive by pumping their lungs. Shutting down is
not a good answer. For an infusion pump, my colleagues I've talked to at companies that deal with those say safe depends on what it's pumping. Maybe the best thing to do if you detect anomalies in a pump is to turn it off. But if that particular drug is really critical, or there are really bad effects if the patient is suddenly cut off from a drug- it depends on the drug- it can be very drug-dependent what safe means. When I worked in
the airplane world, you know safe was enough
time to get it to the ground and land safely. So, an example, I believe this
was out of WannaCry when it came out, hit the UK first and so
there was a lot of the National Health Service in the UK that
had some hospitals locked up. And a story that came through
the grapevine on that one was, a hospital who had a vulnerable
MRI machine and it got locked up by the ransomware. And a patient had just arrived
in the ER having suffered a stroke and MRI is one of the
primary diagnostics tools to figure out how you
should treat it. And they ended up having to
chopper that patient to another hospital with a functioning MRI,
which took time and introduced delay. You know, strokes and heart attacks are things where
response time matters. I never heard whether the
outcome was bad or good after that story. It was just the example that the
availability of equipment can affect patient health. And so, thinking through what it
means to respond to an attack, what things you do can be very
much dependent on the clinical behavior of that device. So, it's all about managing risk
and so after going over safety and security we're gonna talk
about how do you assess risk. Again, I told you I was going
to say 14971 more than once. 14971 defines risk as the
combination of probability of occurrence and severity. Here's what Webster
says about what risk is. Safety risk management- this whole document, ISO 14971- defines a risk management process that talks about analysis, evaluation, introducing controls, reassessing the analysis, and ultimately evaluating residual risk and reporting. And then the key aspect of all
of that is- and once you're done then you watch what you built,
monitor it and if you learn that there's something happening that
doesn't fit what you thought it was supposed to do, you
update and return and redo; lather, rinse, repeat. So, the expectation is
that you actually watch what you're doing. How many people think their
Philips light bulb is being monitored for correct behavior? Probably less
important, but still, we are computer-controlling things where, if somebody were able to hack in and shut all your lights off in certain circumstances, people could fall down stairs- you can induce harm even from the simple loss of lights in an IoT system. So, it requires risk
ranking information. If you look at some of
the government things, it's the process of
identifying vulnerability. So, what I'll point
out in this one is, in both of these you start
talking about probabilities and we'll talk a little
bit later about, how do you predict the
likelihood of a hacker hacking. Well, some would say it's probability 1, but the reality, per our earlier discussion, is that they will try things, and if the first things don't work, they may move on. So what that threshold is, is one of the key things you think about in designing systems to be secure enough. Again, I think we've talked
through the ways you can get harmed from security
vulnerabilities, bad diagnosis, wrong patient,
wrong side of the patient which was a usability error
that they dealt with. We'll talk a little bit about
the interaction with usability here in just a little bit. So, what we did in this
TIR 57 I referred to, that I had worked on as a committee chair, was basically to take the 14971 safety process that device manufacturers were intimately
familiar with and cast security risks in the same structure,
but then defined the steps using traditional security risk
management approaches as drawn from standards out of NIST. So, we broadened the definition
of harm to try to make sure it covers things like data breaches- you know, a person that's HIV positive who is trying not to have people know that, and that gets exposed, that can cause them harm. It can cause them harm
from a job perspective, although most of the protections
are in place to stop that now. But still, you know there are
things that people don't want known so you can certainly
broaden out and start saying that harm. We talked about
loss of effectiveness. So, if a hack even just
introduces malware to compute bitcoin on your device in
order to use the horsepower. And now, deadlines and things
are being missed and the pump isn't pumping as fast as it
should because it's too busy calculating bitcoin, that would
be a loss of effectiveness. You programmed it to do X; in the presence of malware it isn't doing X, it's doing Y. The fact that the intended behaviors are not there could be a loss of effectiveness. And then privacy, encryption
kinds of things we did. The safety world, I tend to
refer to the safety world as, driving looking in
the rearview mirror. You're driving forward,
looking at your past. Safety engineers are absolutely
good at and I was one of these and worked with some really
talented statisticians at collecting detailed
data from the field. And they know exactly the
probability distribution around failure of this kind of
component likelihood of this. And from that, you can look
at aggregate probabilities of failure from a
safety perspective. And so, it tends to
be very quantitative with backed up data. And what the challenge always
is if you're in a safety world, trying to get them to think
about security is they'll look at the security
statistics and say, “we have no reported examples of
this so is the tail probability of it occurring in the future
zero because it's happened zero in the past.” And clearly,
these various examples tell us, no. That is, you can't do that. So, you have to look
at other surrogates for how you predict likelihood. Again, I'm gonna give you a link
later to some NIST documents but, NIST is free. NIST is a government agency
designed to look at- and they have a series of security
documents- ostensibly geared to how you secure the
federal infrastructure. You know, federal information
processing, FIPS. But they are generically useful
so you will see things there on, how do you secure
Bluetooth systems? How do you secure
mobile devices? How you deploy asymmetric
key cryptography effectively? And they have
guidelines for that. So, all available free online.
NIST is your friend! In this NIST special publication
800-30 which is on a security risk assessment- this is one of
three approaches they provide in there about how you
come to overall risk. And this one is where you look at threat sources and threat events. This is kind of a
threat-oriented- you examine vulnerabilities, look at
controls, figure out impacts. So, basically if I remember
it right, it's identifying threats and vulnerabilities- so there are several different approaches. There's also, and I don't have a slide on it in here, several threat modeling approaches. So, Medtronic- Microsoft,
the other ‘m' company, has a free tool for threat
modeling that you can download. I don't think it works on
a Mac- they are Microsoft. But it is one- so, Microsoft has
actually gone from being in the doldrums of being recognized
as terrible at security to recognizing that being secure
was existential to their company, and they developed
tools and mechanisms internally and then published them
externally to try to help others also be secure. So, they do a method
called STRIDE, which is a risk analysis method. So, look up Microsoft's
STRIDE method and look at that. You will see ways that
you think about systems, and their threat modeling tool
allows you to model components in your system and
look at interfaces, and reason about the STRIDE
properties at these interfaces, and gives you- again, threat
modeling is a tool to identify potential security risks so that
you can make decisions about controls to insert
into those, so.
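A toy sketch of the idea- not Microsoft's tool, and the interface names are invented- is to walk each interface in your model and turn every applicable STRIDE category into a question the team has to answer with a control:

```python
# STRIDE as a checklist over modeled interfaces (illustrative only).
STRIDE = {
    "S": "Spoofing",
    "T": "Tampering",
    "R": "Repudiation",
    "I": "Information disclosure",
    "D": "Denial of service",
    "E": "Elevation of privilege",
}

interfaces = [
    {"name": "home monitor <-> management server", "threats": ["S", "T", "I", "D"]},
    {"name": "programmer <-> implant telemetry",   "threats": ["S", "T", "D"]},
]

for iface in interfaces:
    print(iface["name"])
    for letter in iface["threats"]:
        # each applicable category becomes a risk to dispose of with a control
        print(f"  - {STRIDE[letter]}: what control mitigates this here?")
```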
They also publish a secure development lifecycle- the SDL- which defines the various steps
during a development process, and they have a draft,
they may be farther along, I don't know since the
last time I looked at it. But they were looking at
how to modify the SDLC for Agile Development. So again, impact assessment-
you look at the impact, if the device loses the
different properties, what's the impact, again. What's- in our case,
for a medical device, we will be looking
at things like, “If I lose confidentiality, if
I could lose patient-protected data, am I going to lose it
for one patient or am I going to lose it for 400,000 patients ?”
Because the impact and breadth of it may be- you can control
and manage a one patient loss. If a single device is stolen
with a single patient, it's manageable, but- or if an
attack has to get a hold of a physical device
for each patient, that's a different kind of a
risk level you're going to assess, in terms of impact, than if a remote hack from some Russian crime organization can break in
and download data for hundreds of thousands and put it on the
internet- in terms of how you have to manage that risk. So, again impact is
another part of it. And again, as I alluded to,
security likelihood is- there's that original
paper from 2008- Oh! here's Barnaby Jack, I
forgot I had his photo. Here's the $20 bills
spitting out on stage. So again, it's a- what's the
likelihood that a researcher will decide to make a public
display of the insecurity of your system for NBC News to
see and put on their program on national news at night? So, if the attractiveness
of your hole is that big, you should be thinking about
the likelihood of it as a much higher event than, you know, one patient's device and one thing that's really obscure, so. Again, it depends on what
you're working on and what the exposure is, so. I will again let you
read some of these- we got about 20 minutes left. I'm reasonable on
time here, but again, quantitative versus qualitative-
you're not going to get very quantitative in the risk
management world, so we typically will have- and
I think I have a chart here. This is the likelihood model
that SP 800-30 talks about. And I think- CVSS is- so,
I don't know if you're all familiar with CVSS, but it is
the base scoring tool used- again, developed by MITRE- and
so when you see CVEs published by US-CERT, you will see a
CVSS score associated with them. The CVSS score ranges
from zero to 10, with 10 being critical.
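As a rough sketch of how that numeric score turns into a rating- these are the CVSS v3.x qualitative bands; older v2-style scoring labels things differently, which is why a 9-point-something score sometimes just gets called high:

```python
# CVSS v3.x qualitative severity bands (v2 used a different labeling scheme).
def cvss_v3_severity(score: float) -> str:
    if not 0.0 <= score <= 10.0:
        raise ValueError("CVSS base scores run from 0.0 to 10.0")
    if score == 0.0:
        return "None"
    if score <= 3.9:
        return "Low"
    if score <= 6.9:
        return "Medium"
    if score <= 8.9:
        return "High"
    return "Critical"

print(cvss_v3_severity(9.3))   # lands in the top band under v3.x
```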
They have a base metric. So, sometimes you will
just see the base score, sometimes you will see it
modified by temporal metrics or environmental metrics, but
typically- and I think like last week's Medtronic publication,
which came out in ICS-CERT, because of [indistinct] device,
had a base metric of 9.3. So, it was considered high. So, this has been kind of a
standardized approach, but the challenge we've wrestled with in the industry, in the medical device world, is that it doesn't always take into account clinical behaviors and clinical things. You know, one of the things we
know is nurses at the front lines working with patients
are incredibly resilient people, and if they see something
anomalous, they will probably
just turn the thing off. So, you know, it isn't just the sort of thing that's left operating freely, because they have enough skill. So, understanding the impacts of
clinical environment- which is really an environmental factor
in this case- is something that- there's actually a project going
on to look at how you apply CVSS to medical-clinical devices. So, this is kind of more the
example of doing an index. So, this is a safety index where, you know, an index of one, a low impact, would be discomfort, up to a high impact potentially resulting in death, and you apply some similar kind of scale in security.
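A hedged sketch of what such an index can look like in code- the labels, the multiplication, and the cut-offs here are invented for illustration, not pulled from 14971; the point is just that the organization agrees on the scale up front and then applies it consistently:

```python
# Toy qualitative risk ranking: severity index x likelihood index -> agreed band.
SEVERITY = {"discomfort": 1, "minor injury": 2, "serious injury": 3, "death": 4}
LIKELIHOOD = {"improbable": 1, "remote": 2, "occasional": 3, "probable": 4}

def risk_band(severity: int, likelihood: int) -> str:
    score = severity * likelihood
    if score >= 12:
        return "unacceptable - redesign or add controls"
    if score >= 6:
        return "investigate - reduce as far as practicable"
    return "acceptable - document and monitor"

print(risk_band(SEVERITY["death"], LIKELIHOOD["remote"]))        # severe but unlikely
print(risk_band(SEVERITY["discomfort"], LIKELIHOOD["probable"])) # mild but common
```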
And so typically, what these kinds of methodologies do is they expect you to come up with definitions to say, "How am I going to understand the impact of it?" And then evaluate based on them, and
use what you said you were going to do. Sounds like a- get buy-in
from the organization, that you're all going to
agree that this is what it is, and then apply them in a way
that you can later go back and say, "Here's how I scored it." And then if you learn in operation in the field that one of your assumptions about where something fell is wrong, you've got to update- that triggers a decision about whether you have to update, patch, or make some change necessary for it. But it's all- it's not that you
have to drive to zero risk. It's like you have to
understand the risk, manage them at a
reasonable level, and monitor to see whether
or not your system is behaving correctly. Going to quick touch about how
you loop in usability, because it matters particularly in clinical systems and aircraft systems, as you saw with the whole Lion Air and Ethiopian Airlines 737 MAX story. This was an interaction with
usability and understanding how the system works. So, there's another standard
62366 that defines how you evaluate safety from a
usability point of view. Do I make mistakes and it could
be harm to the user of it or harm to a patient, so. It could be anywhere from
electrical harm things of that nature but how you
go about doing it. So, here are some examples
and I think I talked about the ventilator turned off. So, these are some of the
trigger things that made people recognize that usability
problems were a thing that needed to actually
be formally assessed. So, the ventilator example was
when a ventilator is moving and you want to take an x-ray of
a patient on a ventilator you have two choices. You need to synchronize your
x-ray machine to when you're at the peak or valley of a breath
so you have no motion happening. Or, the old-fashioned way is you
turn the ventilator off while you get the x-ray and there have
been too many examples where people forgot to turn
the ventilator back on and patients died. We have had order
entry system confusion. There was an order entry system for pharmaceuticals, and the story was that a resident was entering a prescription change for a patient on a floor, a critical patient, and they got a text message popping up on their phone. It was on a mobile device, they got distracted, and they didn't remember that they hadn't gone in and hit order, and the patient died. So, understanding usability
issues in any kind of safety aircraft, industrial
controls, nuclear power, electric grid control, medical
devices there's a lot of really interesting hard problems but
you have to think about the details of how they're being
used and how these can interact. So, really there was this push
to recognize how usability risks and safety-related risks can intersect. We had similar kinds of experiences in recognizing that security risks can impact safety. I've gone through several
of those examples here. The reality is it's a balancing-
I always refer to a three-legged stool where your safety risk,
your security risk and your usability and you're going
to want to balance it again. In certain clinical environments
shutting the device off might not be the safe thing,
so from usability. But if the security risk is
coming from the network and the user is directly
touching the device, are there situations where you
might find it better to shut the network off and keep it locally
controlled with very clear indication to the user that
whatever functionality was network related
is now turned off? So, you start thinking not just in terms of blanket rules but in the context of its usability, in the context of achieving its safety. Is availability a
safety property? How do I manage the
various different risks? And so, I'm a system engineer
by- I'm a double-E by education who has evolved into a systems engineer- and this is classic systems engineering, which I believe security engineering touches. It's thinking about the big
picture and what's at the next level out. How we doing, I've
got 15 minutes, so. And getting on my system
engineering soap box there is the concept of an
emergent property. An emergent property of a system
is a property that only appears when the system is connected
together as a whole. You don't find that property
in the individual components. And usability, safety, and
security are all emergent properties of the system, and the thought there is that you can construct a system where all the components are safe and the system collectively is not. The 737 MAX 8- that is exactly what that system is. Conversely, you can build something where every component is unsafe and still build a safe system. Fault-tolerant redundant systems are built by having three copies of something and voting, so the failure of one does not keep you from getting the right answer out of the other two. So, in fault-tolerant system design it's really about assembling a safe, available system from unreliable, failing components.
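A simplified sketch of that voting idea- three channels, majority wins, and if no two channels agree you fail safe instead of guessing:

```python
# Triple modular redundancy, reduced to a majority vote (illustrative only).
from collections import Counter

def majority_vote(a, b, c):
    value, votes = Counter([a, b, c]).most_common(1)[0]
    if votes < 2:
        raise RuntimeError("no two channels agree - fail safe rather than guess")
    return value

# two healthy channels out-vote one faulty channel
print(majority_vote(72, 72, 255))   # -> 72
```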
So, it's just recognizing that. And then one of the challenges we have in these kinds of spaces is that we build individual devices but we put them into a hospital network where they interact with other systems, and you don't always control the end environment. So, recognizing what you control in terms of security controls- what aspects
you put in the individual device, what expectations you
have about how that device is used in its operational context
is all important to ensuring the safety, security, and usability of your device. My example- from a usability
perspective in terms of that is we have pretty good standards
when it comes to our cars about how the interface is. So, if you rent a car it
all operates pretty close. Dash lights are one of those
things that are a little bit different but turn indicators,
drive shift, steering wheel, brakes, accelerator- all pretty standard. Infusion pumps? Every company's infusion pump
has a totally different user interface with totally different
things and hospitals use infusion pumps for years and
keep buying them and will buy them from whatever
contract they have. So sometimes a poor nurse is
dealing with pumps that have different user interfaces. You may do all of the usability
analysis on your pump that you want, but there's confusion in the nurse because they're mostly using another manufacturer's device. They use yours wrong, because what was totally sensible to you, and to the test user who was only trained on yours, is very confusing when it shows up in that context. So, think about context. A few other things
to think about, we at Boston Scientific had one
of the things that my department is doing with essential
product security is creating standard requirements. We have kind of- you know
we look through the security requirements that come
from various standards, various hardening specs. We think about which ones make
sense when it's an implantable device, which one when it's an
ablation machine connected to a hospital network, which one when
it's a home monitor that's for a mobile app and
try to understand. So, I think about- and there are
sources- I will point you to some hardening specs and other sources for standards- you know, use what other people have done, use them as sources of requirements, to understand what mechanisms should be there. One of the things that's
challenging in requirements world is it's fundamentally
impossible to test the “shall not.” It's an infinite space. It's easy to- you can test
that something "shall do" this. But it's harder with a "shall not," and a lot of the security world gets into "you really shall not." So, what you need to do is think more about how you specify what the device should do when un-allowed behavior is seen at the interface. So, try to be proactive so that you can test for it and verify it. Look at abuse cases, you know, similar to use cases. It's like, "how would an abuser abuse the system?" Think about the normal things that they would do. Talk to hackers.
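Here is a small sketch of recasting a "shall not" as observable, testable behavior- the PumpInterface class, its token check, and the numbers are all invented for illustration:

```python
# Instead of "shall not accept unauthenticated rate changes," specify and test
# what the device does when un-allowed input shows up at the interface.
class PumpInterface:
    def __init__(self):
        self.rate_ml_per_hr = 10
        self.audit_log = []

    def handle_command(self, cmd: dict) -> str:
        if cmd.get("auth_token") != "expected-token":   # placeholder credential check
            self.audit_log.append(("rejected", cmd))    # observable, testable response
            return "rejected"
        self.rate_ml_per_hr = cmd["rate"]
        return "accepted"

def test_unauthenticated_rate_change_is_rejected_and_logged():
    pump = PumpInterface()
    result = pump.handle_command({"rate": 999})         # abuse case: no credentials
    assert result == "rejected"
    assert pump.rate_ml_per_hr == 10                    # therapy setting unchanged
    assert pump.audit_log                               # rejection was recorded

if __name__ == "__main__":
    test_unauthenticated_rate_change_is_rejected_and_logged()
    print("abuse-case test passed")
```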
Read about what they do. Gary McGraw, who has written
several textbooks in this space, really brilliant lecturer. One of the lectures
I heard he said, “If there's three things that
you should do to create a secure device, it's risk analysis- security risk analysis and threat modeling- getting your requirements driven through a proactive model and thought." Then architecture analysis and code analysis, and security-based testing- so not just your standard testing, but security-based testing. So, the second aspect was
architecture and code analysis- understand what the model looks like. Document how the security model is going to work, and review it. So, understand what
existing exploits are there. Think about it, if a standard
exploit was being applied against the system
would I tolerate it? We have authentication in
real life- it's all these kinds of things: we'll have passwords, we'll have passwords changed. This is the kind of suit that a clinician needs to wear if they're dealing with an Ebola patient. Does your equipment work when
they're dressed up like that? Biometrics suck. Hard
to get a fingerprint. Hard to do a retina.
Hard to type a password. So, you know understanding some
of those kinds of extremes of your use environment will help
you decide whether or not your system actually works
in its end environment. A lot of machine to machine. One of the things I keep talking
about is very typical for an IT system for the user to log into
a system and then be known to the network as the credentials
of the user that logged into the device. If you end up in a surgical
world where you're not requiring user authentication because of scrub-up and all that, you still have to think about how the device registers to the network. So, thinking about
machine-to-machine authentication and how do we
recognize whether a device is a recognized legitimate device and
how you control what a device has access to separate from
the properties of what a user has access to. So, again in the in the
cyber-physical world, device- the device becomes
a more typical model. I had referred to this
earlier about this one. This is my- God, we
love COTS off the shelf, man we love Windows
operating systems. Manufacturers use standard
operating systems because the talent is easy to hire,
people who know it coming out of school. People know it now,
that type of stuff. It's potentially a skill they
want in order to stay marketable. Its licensing
costs are cheap. There's lots of available tools,
but we know that a typical medical device will often
be used 10 to 15 years, and you know Windows 7 is
going off support in January, and everyone in the business is
scrambling now with having just dealt with the Windows NT death
issue way after it should have been. We're now facing
the next one, so. We've talked about industry
initiatives and from a research perspective and things what
are things that can be done? I know that the Boeing
777 I worked at- worked on, 25 years ago did not have
any of that kind of commercial off-the-shelf software on it and
they're flying successfully and safely today, so. Testing kind of falls into a
couple of different areas, fuzz testing- I don't know if
you're familiar with the topic- fuzz testing basically came out of user-interface design, but it's been applied a lot in protocol testing. So, if I have a Bluetooth
interface and Bluetooth has certain rules about what a
properly formed message should be, and you code those into your
system what happens if you get a message that doesn't
meet those rules? The correct response is that your system should parse it correctly and reject it as a malformed packet. There have been medical devices
that, if you just port scan them on a network, crash, because nobody thought about what was going to happen- you know, why would you scan a medical device, they're safety critical? It's like, it's what IT people do. So, you figure out what's
on your network as you scan. So, fuzzing and protocol fuzzing
can be a way to make sure that the implementation of standard
protocols in your device behaves correctly to malformed
behavior and is robust.
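A hedged sketch of that- the packet format and parser here are invented stand-ins for whatever your device really speaks; the point is that every mutated input gets rejected cleanly rather than hanging or crashing:

```python
# Toy protocol fuzzer: mutate a known-good packet and confirm the parser
# rejects malformed input with a clean error instead of crashing.
import random

def parse_packet(data: bytes) -> bool:
    """Invented format: header 0xA5 0x5A, one length byte, then that many payload bytes."""
    if len(data) < 3 or data[0] != 0xA5 or data[1] != 0x5A:
        raise ValueError("bad header")
    if data[2] != len(data) - 3:
        raise ValueError("length mismatch")
    return True

def fuzz(good: bytes, iterations: int = 1000):
    rng = random.Random(0)
    for _ in range(iterations):
        mutated = bytearray(good)
        for _ in range(rng.randint(1, 3)):              # flip a few random bytes
            mutated[rng.randrange(len(mutated))] = rng.randrange(256)
        try:
            parse_packet(bytes(mutated))
        except ValueError:
            pass        # clean rejection is the desired behavior
        # any other exception, hang, or crash here is a robustness finding

fuzz(bytes([0xA5, 0x5A, 0x02, 0x10, 0x20]))
print("fuzz run completed without crashing the parser")
```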
So, it's sometimes referred to as robustness testing. And penetration testing is
really either internally or hiring externally somebody
skilled in the arts of the hacker looking at your device
and attempting to hack in using their different tools to see if
there's anything you might have missed, and so. And again, Gary McGraw has a
great chart I should have stolen from him with credit which is
that this is a "badness-ometer." You know, it is- if you fail
the test you know you have a problem, if you pass them
all you know is you passed those tests. So unfortunately, it only gives
you a sense of confidence, but it's not a proof.
Testing will never prove. So other post-market stuff, if
it's connectable understand- monitor vulnerabilities that
are coming out- or war games. I'm dating myself for those that
are familiar with the '84 movie of that, <i>War Games</i>- "Shall we play a game called global thermonuclear war?" But test it- we do war games. One of the things I'll be doing
in San Diego next week is doing a war game with a national
group on vulnerability. This is my links to
that, but harden. It's disable unused ports. Don't- turn off services
your device doesn't need. Get rid of your test code. The St. Jude hack that
happened in 2017 by MedSec. They found that they bought
programmers from that company off the internet off eBay, and
they had been compiled with debug on, so when they pulled
data off the hard drive of the programmer a complete map of all
of the write and- read and write commands that that protocol
understood was available in debug tables that
were parsable by it. It's like just don't give your
attacker a free guide by doing it. All that stuff is stuff you do
while you're in development, but think about that transition
from the development software environment to my production
software and what gets removed. What do I do? Do I do
an obfuscation of that. I- the day before our latitude
system was going to have its home box- the original home box- go to production, we were doing some final testing, and I was doing some ad hoc testing. We had a Bluetooth link to a
weight scale and blood pressure monitor for our system and
we were getting some weird behavior, and I went down with
my thing I was testing to the lab and I gave it
to the engineer, and I said, “Something's
misbehaving.” I described it. He flipped it over, popped the
serial cable on a pin block on the back, and a few keystrokes,
boom all of a sudden, he was in, logged in to the
Linux system that was underneath it to go and check something. I said, "What was the password you just logged in with?" And he said- it turns out he logged in as root, root. And I was like, "oh that's like
within the first three guesses they would make.” And he said
well you wouldn't have to guess because you know the first
couple of hits on the carriage return were a baud-rate lock on the serial port, and then the prompt that came up to ask for your username and password told you, right in the prompt text, that you should use root/root. So, I went upstairs, as the
lead system engineer and wrote a system change request on there
to change that the night before it was- I was told that the
first change they had originally thought about apparently
involves some negative commentary on my genealogy. But anyway it did finally get to
be a rotating password keyed to certain blocks, so that you had to know certain serial number ranges- so they fixed it. But it's like, developers looking for the shortest path to testing don't think about it, so you've got to set those rules in. Anyway, there's a bunch out
there- the military has the STIGs, CIS has a
series of benchmarks, and as I said, NIST has guides for that in those publications. Use the hardening guides,
do the things that are supposed to be normal. I think we talked a
little bit about logging. Let me get to a just a couple
minutes for questions at least. Configuration, configuration
management, patching, intellectual property, worry
about social engineering, so here's our conclusion. It's like- it's a
constant balance. You gotta think
about environments, if you're working in these kinds
of spaces where the devices work for- are being used for a lot
longer than you might think they are, and what is
the meaning of that? That said, I think there's a
lot of interesting problems in this space. I've spent all my career,
for the most part, except maybe the first couple
years of dabbling- that was space station stuff
and space shuttle. But the stuff that I was doing
has always been systems that if they don't work
correctly people die, and it's important
work it's interesting, it's satisfying, so with that a
couple minutes for questions. [applause] I didn't bring more pictures
of Brian so you'll have to give me your email for that. [laughter] >> Brian Drawert: Alright, I'll just open it up for
questions at the moment. There're some great questions
that I would love to hear answered, but any
responses before I get to that? >> [Audience member]:
I have a question. Do you foresee medical hardware
being implanted into the human body or medical hardware fixing
human body from the outside, so a cochlear implant
versus laser eye therapy? >> Hoyme: Well I think
both, I mean the world's open. There is certainly a lot of
learning happening with brain in terms of that, and there is- I
saw the FDA just rejected the first therapy
that was proposed. I mean initial- that doesn't
mean that it's dead, it just isn't- it didn't meet
its initial clinical goals related to a brain therapy for-
I think it was to try to treat Alzheimer's, though I think it
was not viewed- I mean the data was- I think the summary I
saw was that the- the clinical results were no better than what
is known to be junk science. So, you know it's like
it didn't work, so. But there are people doing
research in terms of how you do that- certainly a lot of things
that are stimulate-able. A lot of work- there was some
interesting patents I just saw related to interesting ways to
do body area network so that you can communicate within the
surface of the body- within the confines of the body without the
energy leaking out so you can maintain privacy and
confidentiality if you wanted to communicate between wearables
and implantables and things where you can- because there's
you know, obviously, when you implant something, the device has a limited view- you know, they talk about a six-lead EKG. They put leads all over your body and they get a big, huge view of what your
heart is doing. When we put a pacemaker
or defibrillator in, we put a lead in and we're
touching one point in the heart. I kind of view that as you're
looking through a straw and you're extrapolating
what you can, but you do have to be minimally invasive because you can't wire all of that- so, wearables. So I think
the wearables world, the external-treatment world, the implantables world- that is going to be the future. The bionic <i>Six Million Dollar Man</i> is still alive and well. Other questions- what were they? >> [Audience member]: What was
the- your favorite thing that you invented and what kind of
software development should we get involved in if we want
to get involved in developing medical devices? >> Hoyme: I hate to say this but
my coolest piece of equipment I worked on was the system for
the Boeing 777 flight deck, integrated system
for multi-processing in an ultra-high availability world. So, my first patents
were in that work and working on commercial airplanes is just
like way cool stuff in terms of that type of thing. I was young and I know I spent
more time away from home in 1991 than I did at home, though,
Julie and our daughters who are now grown spent the
winter in Phoenix with me. That wasn't an accident, so the
ability to work with a really large team on a really hard
problem that really mattered, it's like you know as I was
working on it I said I never want to tell my family they
shouldn't fly in that airplane because I know too much
about how it was designed. So, my eldest has now taken a
trip over to the Far East to visit a friend and flew on a
Triple Seven so you know it was kind of cool knowing
that that's safe. But in terms of the
kind of software, I think I've been- my career has
been around things that control the real world and deal with the
real world so I think a certain you know understanding of
real-time control systems, it's thinking about
defensive programming. What happens when
things go wrong, anticipating that kind of-
so you know developing a good paranoid personality. But you know it's like the
world- you know Murphy causes all kinds of failures that you
have to anticipate in terms of the operation of devices but in
the security world Murphy is an active antagonist who's
trying to get at you. So, you know I think the
understanding good design techniques- software design
techniques understanding software architecture, you
know really thinking through how something is structured in
order to contain boundaries, you know a lot of
compartmentalization in terms of software in terms of
those kinds of skill areas. Don't just build games-mobile
games. [laughs] That fad will go away
some day. [laughs] >> Drawert: Alright,
well we're short on time. Any final questions? Let's
thank our speaker again. [applause] >> Hoyme: Thanks a lot. ♪ [closing music] ♪ ♪ ♪ ♪ ♪