AI in Radiology at Stanford: Rise of the Machines

Video Statistics and Information

Captions
We're going to talk about artificial intelligence and radiology, and the title of my talk is Rise of the Machines. I want you to think about a quote; who might have said this first? "There is no function the computer cannot do in radiology." Was it Elon Musk, Geoffrey Hinton, Dr. G. Lodwick, or Rusty Hoffman? Now, I'm contractually obligated to say it was Rusty, but in fact it was Dr. Lodwick. He was a radiologist in the 1950s, and he worked on a computer algorithm that he thought could accurately identify abnormalities in chest X-rays. He went on to work on this problem for about three decades, eventually receiving a Nobel Prize nomination in 1975, but he never achieved his goal.

A few years later, a psychologist named Frank Rosenblatt attracted a lot of attention for a computer he invented that he said would be able to talk, write, think, and be conscious of its own existence. In fact, he was one of the first to use his knowledge of the brain to map out what is essentially the earliest artificial neural network. Unfortunately, his computer didn't work, and before long he faded from public view. All these pioneers went into obscurity, and this is the reason why: this is a five-megabyte hard drive being shipped by IBM in 1955. Imagine: with all these powerful ideas, they just couldn't do what they knew was possible, because they didn't have the hardware.

Times have changed. We now have powerful neural nets, we have powerful GPUs, and they can do a lot of things. They run a lot of our lives: social networking, what you buy on Amazon, your banking. But they're also really, really good at classifying images. This is a screenshot from something called ImageNet, which Fei-Fei Li from Stanford started. It's an online repository of over 14 million images that are carefully characterized, and this is just an example from the spider page. Every year, neural network scientists get together and compete on who can classify these images best, and you can see that every year the error rate goes down and down, to the point where performance now exceeds human ability. So now we're really good at telling the difference between parrots and guacamole, and blueberry muffins and chihuahuas.

But the question we probably should be asking ourselves is: can we use this for something a little more impactful? Can we be using this for medical imaging? From a conceptual perspective, it's not hard to imagine: you can take any random image, like "a man in a black shirt is playing guitar," upload it, and a computer can come up with an automatic caption. The same thing should be possible for medical imaging; you should be able to upload an image and have it return a caption. Why would we want to do this? Well, it turns out there's a lot of human error in diagnostic radiology, but for me the most important reason is that most of the world doesn't have any access to radiology at all. People are dying of things like TB and lung cancer, and yet they don't even have a radiologist to look at a basic chest X-ray.

About a year and a half ago, Professor Hinton, a famous computer scientist, went to a hospital and gave this talk; if you can play that audio: "We should stop training radiologists now. It's just completely obvious that within five years, deep learning is going to do better than radiologists, because it's going to have a lot more experience. It might be ten years, but we've got plenty of radiologists already." So he said this, and people freaked out. A lot of articles came out in big journals, and a lot of opinion pieces were written; people are still writing opinion pieces about this. Then we did some work with Andrew Ng's group right before RSNA last year, and he tweeted this out, just to
fan the flames. I was just thinking, please don't do this, when he tweeted that. So now we have lots of people really concerned that diagnostic radiology is in trouble. I'm just waiting for the next tweet from Kim Kardashian to tell us that we're all in trouble, and this new hashtag will start trending, and we're out of jobs.

But before we get too carried away, we have to think about reality. This picture of an overturned self-driving car, and this headline you may have seen just a few days ago, remind us that while PR is great and it's really fun to talk about the hype, reality is a whole different story.

First of all, medical images are really large and complicated. I don't have to tell you that, but I do have to tell the computer scientists: they are hundreds of thousands of times larger than traditional image-analysis tasks. The thumbnail of a kitten is a great example: most of the pixels in that image tell you there's a kitten in there, so that classification task is not very difficult. But most of the pixels in a chest X-ray of a "healthy" person who happens to have lung cancer tell you that person is healthy. Only a small proportion of the image is actually letting you know about the important finding you need to classify, and that is a big challenge for computer scientists.

Another thing that comes up is having the right context. These are two brain MRIs from pediatric patients; one is completely normal and one has a severe neurological disease, yet for all intents and purposes they are identical. This is Pelizaeus-Merzbacher disease, which is a delay in myelination. The only difference is that one patient is three months old and one is three years old. If you don't have that context, your model can ascribe the wrong labels and lead to disastrous consequences.

This is one of my favorite things. These models actually do really well, as I showed you; in fact, this is a model that, with video, can identify many common objects around your house with high degrees of confidence, and that bar indicates the amount of confidence. So there's a banana in the image; you can see that it knows it's a banana. It also thinks it could be a slug, but it's not very confident it's a slug; it's probably a banana. However, all you have to do is place an artifact known as an adversarial patch in the image, and it completely screws up the model: now it's really confident that's a toaster. That's not right. The same thing can happen in medical imaging. We deal with artifacts every day, because we're humans and we know how to deal with context: yes, that's a coil, and that's streak artifact; yes, that's motion artifact; that's a ring artifact. We can deal with that because we do it every day. But if these things lead to a model making a wrong diagnosis with high degrees of confidence, we can find ourselves in some trouble.

The biggest challenge, I think, that we've had in our lab is just the discussion around ground truth. It's really difficult for engineers, and many of you may be engineers, to understand the concept that there is gray area; there's no black and white in a lot of medical diagnostics. What is this abnormality in the right lung? Is it a finding? Is it a diagnosis? Any of these could potentially apply, and there are probably more, and it's difficult to explain that we don't always know the answer. So when we're training these models and building these machines, we don't always know the ground truth, and that's difficult.

All right, let's talk about some research we're doing. We started a center late last year called AIMI, for Artificial Intelligence in Medicine and Imaging. Rusty Hoffman is a member of this group; Curt Langlotz is one of the founders. We have a fairly simple strategy: we put together teams consisting of clinicians, statisticians, and computer scientists; once in a while we'll partner with industry; and we leverage our database
of 5.5 million exams for a lot of our work. But what we have that no other institution in the world has, and we're very proud of this, is 1.5 million exams that were prospectively labeled by the interpreting radiologists in the actual act of interpreting. What that means is that we have an archive of really good labels, and that sets us apart from groups that have to go back and retrospectively label their exams. Finally, we're clinicians. We want to get these models out in the clinic; we want to see what difference they make; we want to try them out. So we have a clinical evaluation pipeline, and we've already got our first model in practice now.

And this is our first model. A lot of the initial founders of this group are pediatric radiologists, and one of the things we hate the most, besides scoliosis films, is bone age films. It's repetitive, skilled human labor: essentially, you have to go back to a book, compare the film to other children's hands, and determine the age of the child. So we put a model together with a group of computer scientists about a year and a half ago, and lo and behold, in about two weeks this model did better than any of our radiologists in the section. This figure is from our paper: those are X-rays with an overlaid map, and the areas that are red and yellow indicate the pixels the model found important in making its decision. You can see, for those of you who remember your radiology training in peds, it's looking at the proximal carpal row and at some of the first and second rays, and that's exactly where we tell our residents to look when they're learning how to do bone age. It's really interesting: the model learned this organically; it was not directed by any of the researchers.

The other thing we did: we have millions and millions of reports that don't necessarily have a structured label; they're made up of free text. Can we teach one of these models to read the report for us and pull out the relevant labels, so we can have better labels at scale? We did this with the pulmonary embolism CT reports, and we're able to tell you not only whether there is a pulmonary embolism, but how old it is, and whether it's subsegmental or not.

And this is the work I referred to earlier, the work we did with Andrew Ng's group on the pneumonia task, where we compared four radiologists to our model. We had them read 400 X-rays, and it turned out that the model did consistently better than our radiologists. We went a step further: we took 12 radiologists from around the country, at several institutions, and had them label that same test set for effusion, and the model did just as well as the expert radiologists. We did it for nodules; it did just as well as a radiologist. We did it for edema, and we did it for 10 more labels. The interesting thing about this work is that for all of our experts, and some of them are chest radiologists, it took about three and a half to four hours to get through that test set of around 400 chest X-rays. It took our model one minute. If we can do that again, with very minimal work, in such a short amount of time, you can imagine the possibilities.

But we're not stopping there; we're doing all kinds of other things, and here are just some previews. We're detecting fractures automatically. We're detecting PE automatically. We're determining congenital abnormalities in pediatric neuroimaging automatically. We're now looking at DVT, which is a notoriously difficult problem because ultrasound is so operator dependent, but we're now able to do that relatively well. And even advanced MSK imaging we're able to do at accuracy levels meeting or exceeding those of practicing radiologists. We're really excited to be presenting this work, and hopefully next year I'll be able to show you more of these results. But we're also doing some
space-age things, which I'm really excited about. In this project, we're asking a model to look at a non-contrast image and predict what that image would have looked like had we given the patient contrast. The reason we can do that is that we have lots and lots of pre- and post-contrast images in our archives, so we just keep showing the model: here's what it looked like before contrast, and I want you to output the post-contrast image as the label. We're already getting some exciting results in neuro, and this is some of the work we're trying to do with liver. It would be nice to never have to give contrast again. Is it possible? I'm not sure, but we're going to try.

Another interesting project: there's no reason why these computer models have to look at the same images we do. In fact, we lose a lot of data when we turn a sinogram into an interpretable image. Why not just look at the sinogram to begin with, labeling it with the output and seeing whether our models have better performance? We're doing just that. We're also exploring k-space; it's a little trickier, and certainly not my cup of tea, but it's something we're working on as well.

But we're not the only ones. I don't mean to say that Stanford is the only place doing this work; in fact, I don't know of any place that's not doing this work, and I can certainly say that you're going to see many, many more companies at RSNA this year than even last year. These are just some examples from different groups in industry.

All right, let's talk about the future, which is why we're all here. This is exciting stuff; let's talk about what's going to happen. Number one: all of these projects, no matter which of them you're looking at, or if you're trying to do one on your own, need lots and lots and lots of accurately labeled data. "Data is the new oil": there's no better tagline for this current revolution. And these are your new visitors in your hospital. These folks are going to show up, and they're going to be asking for your data. They're going to be willing to give you money, but they're going to want exclusive contracts with your hospital administrators. If you're not prepared to deal with what they're prepared to offer, you might find your data locked up, preventing you from doing your own research.

But there's more to these projects than just the interpretation task. There are also AI opportunities in order decision making, performing the imaging exam itself, processing the image, and even coding and billing. These are all tasks in our enterprise that require repetitive, skilled human labor; therefore they can all potentially be replaced by an AI model. And there are companies looking at each and every part of this, and they're not small companies; these are the largest companies in the world, and they're all interested in doing this work right now. Here are just some startups; I grabbed a half dozen or so, but there are many, many more out there, and again, you'll see more this year at RSNA.

I wanted to walk through a few scenarios, because the point of this is to ask what's going to happen to you and your colleagues in 10 years. Number one, straight off the bat: maybe AI just walks in and completely replaces radiologists. It could possibly be like a lab test: you just send your patient down, a printout comes out, and you're done. It might look something like this. I don't think it's realistic, but I just wanted to illustrate what could happen if that scenario plays out. Not likely.

All right, let's go on to something else. Let's say AI lowers the bar to perform the diagnostic part of imaging so low that even someone like a cardiologist, here shown as a devil-like figure, could potentially do a radiologist's job. Is it possible? It's certainly possible. In that scenario, you can
imagine that radiology jobs could be eliminated. I'm just putting it out there; you wanted blood, so there you go.

Here's another scenario that I think might be slightly more likely: radiologists become so incredibly efficient with these new tools that they're able to handle a lot of the workflow, and these big mega-groups will continue to eliminate jobs. It's possible that a really savvy radiologist could partner with an AI and eliminate jobs. There you go. (That slide took a lot of work to make, actually; I'm glad you enjoyed it.)

But here's my favorite scenario, and the thing we're working on the most. We really want to take these AI tools, and I know there are going to be a lot of market forces at play, but we're planning on releasing every model we build for free. Because when there's no one sitting in that chair doing the diagnostic work, there are a lot of people who don't have any access to proper health care. That, for me, is the reason these AI tools are so exciting: if you can provide diagnostic services to people who don't have access, I think you can do a lot of good.

This is a beta of something we're going to put out there very soon. We're testing our algorithms clinically first, but then we plan to release them for the world to use for free. You can take a picture with your smartphone and upload it, or you can just upload it on the web, and it will return whatever labels we've programmed our models to recognize, along with a level of confidence, which is that bar on the right. It takes about 0.3 milliseconds for an X-ray to be read.

We also want to do something for the research community. You remember I talked earlier about ImageNet, this huge repository of classified images. We want to do the same thing with medical images, because it's really not fair that these companies are coming into your hospital, locking down your data, and preventing the rest of us from profiting from it or learning from it. So we plan on putting a free Medical ImageNet out there, where well-labeled data is available for research use from around the world, because we find that when this data is out there for free, the best models come from places you wouldn't expect.

All right, some key points as I wrap up. Medical imaging definitely has potential for disruption. There are significant challenges in application, as I mentioned, but I think the greatest areas of impact are going to be clinician-centered imaging tools and global applications. Thank you for your time.
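[Editor's note: a few of the techniques mentioned in the talk can be sketched in code. First, the "banana to toaster" adversarial patch demo. This is a toy numeric analogue only: the classifier is a random linear model over 8x8 images, a hypothetical stand-in, not the real attack, which optimizes a printed patch against a trained deep network.]

```python
import numpy as np

# Toy analogue of the adversarial-patch demo: a tiny square of extreme
# pixel values flips a classifier's decision. The 2-class linear model
# below is a random stand-in for illustration only.
rng = np.random.default_rng(0)
W = rng.normal(size=(2, 64))          # weights for 2 classes

def predict(img):
    return int((W @ img.ravel()).argmax())

img = rng.uniform(0.0, 1.0, size=(8, 8))
source = predict(img)
target = 1 - source                   # the class we want to force

# Build a 2x2 patch in the top-left corner whose pixels push hard in
# the direction that favors `target` over `source`.
patched = img.copy()
diff = (W[target] - W[source]).reshape(8, 8)
patched[:2, :2] = 10.0 * np.sign(diff[:2, :2])

print(predict(img), "->", predict(patched))
```

Four pixels out of sixty-four are enough to dominate the score, which is the unsettling point of the talk's toaster example: the model's confidence says nothing about whether it is attending to the right evidence.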
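[Editor's note: the red-and-yellow overlays in the bone-age figure are attribution maps. One simple way to produce such a map is occlusion sensitivity, sketched below with a hypothetical stand-in "model" that only looks at the top-left quadrant; the actual paper's maps came from the trained network, not this method as such.]

```python
import numpy as np

# Occlusion sensitivity: slide a small blank patch over the image and
# record how much the model's score drops. A large drop means those
# pixels mattered to the model. The scoring function is a toy stand-in.
def model_score(img):
    return img[:4, :4].sum()

def occlusion_map(img, patch=2):
    base = model_score(img)
    heat = np.zeros_like(img)
    for i in range(img.shape[0] - patch + 1):
        for j in range(img.shape[1] - patch + 1):
            occluded = img.copy()
            occluded[i:i + patch, j:j + patch] = 0.0
            heat[i:i + patch, j:j + patch] += base - model_score(occluded)
    return heat

heat = occlusion_map(np.ones((8, 8)))
```

Because the toy model only uses the top-left quadrant, the heat concentrates there; on a real bone-age model, the analogous map is what revealed attention on the proximal carpal row.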
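[Editor's note: the free-text report labeling described in the talk used a trained NLP model, but a toy rule-based labeler makes the input/output shape clear. The regex patterns and label names below are hypothetical illustrations, not the actual Stanford system.]

```python
import re

# Hypothetical rule-based labeler for pulmonary-embolism CT reports:
# maps free text to coarse labels (presence, chronicity, subsegmental).
NEGATION = re.compile(r"\bno (evidence of )?(acute )?pulmonary embol", re.I)
POSITIVE = re.compile(r"\bpulmonary embol", re.I)
CHRONIC = re.compile(r"\bchronic\b", re.I)
SUBSEG = re.compile(r"\bsubsegmental\b", re.I)

def label_report(text: str) -> dict:
    """Return coarse labels extracted from one free-text report."""
    positive = bool(POSITIVE.search(text)) and not NEGATION.search(text)
    return {
        "pe_present": positive,
        "chronic": positive and bool(CHRONIC.search(text)),
        "subsegmental": positive and bool(SUBSEG.search(text)),
    }

print(label_report("Acute pulmonary embolism in a subsegmental branch."))
print(label_report("No evidence of pulmonary embolism."))
```

Hand-written rules like these break quickly on real reports (hedging, templated negations, findings-vs-impression), which is exactly why a learned labeler is needed to produce good labels at scale.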
Info
Channel: Matt Lungren MD
Views: 70,471
Rating: 4.9862137 out of 5
Keywords: artificial intelligence, radiology, machine learning, medicine, imaging, health care, deep learning, Stanford, AI, will AI replace radiologists, can AI do radiology, future of AI in radiology, can computers read radiology, will AI eliminate radiologists, how does AI in radiology work, when will AI replace radiologists, the future of radiology, global health, computer science
Id: Gigd1rkZTSE
Length: 15min 52sec (952 seconds)
Published: Fri Mar 23 2018