Episode 23, The Future of Radiology and AI, Nina Kottler

Captions
Okay, welcome everybody to a new Bytes of Innovation webinar. This is the first webinar of 2023. My name is Martin Willemink, I'm co-founder and CEO at Segmed. At Segmed we simplify access to medical imaging data, so if you need medical imaging data, make sure to check out segmed.ai. We also organize Bytes of Innovation, a webinar series that provides a deep dive into the future of medicine, together with renowned experts such as researchers, physicians, investors, and lawyers. The concept is 50 minutes of presentation by the expert followed by 10 minutes of Q&A, moderated by Rachel, who is on this call and is a product manager at Segmed, and by me. So all participants, please put your questions in the chat during the presentation so that we can discuss them at the end, during the Q&A phase of the webinar.

Today I am proud to announce that we have Dr. Nina Kottler with us. She completed both her bachelor's and her master's in applied mathematics, after which she received her medical degree from the University of Massachusetts. Nina is a radiologist with over 16 years of experience in emergency radiology, and with her background in applied mathematics and optimization theory she has been using imaging informatics to improve image quality and to drive value in radiology. Currently Dr. Kottler is the Associate CMO for Clinical AI and VP of Clinical Operations at Radiology Partners, where she leads the data science and analytics division and oversees clinical AI for the practice. So Nina is the perfect person to talk about the future of radiology with AI. Nina, the floor is yours; let me unshare my screen.

Sounds good, thank you, and thanks for having me. Thanks everyone for coming; I see all of these people on the screen while I'm sharing, which is wonderful. I think it's really important to talk about where we're going, because until we have a vision of where we're going, there's no way to know how to get there. I do think artificial intelligence is going to change how we practice radiology, and I want to give a different vision than has been given before.

All right, so I think you probably all know Geoff Hinton. He's become famous for a really bad reason; he's actually quite a great artificial intelligence researcher, but he has become well known, unfortunately, in the radiology community for this quote, which he gave back in 2016.
He said: "I think that if you work as a radiologist, you're like Wile E. Coyote in the cartoon. You're already over the edge of the cliff, but you haven't looked down; there's no ground underneath. People should just stop training radiologists now. It's just completely obvious that in five years deep learning is going to do better than radiologists." Well, that was 2016, we're already past that five-year mark, and we need more radiologists than ever. So I wanted to give a bit of a different view of where we can go in radiology.

Forgive the movie reference, with an actor here pretending to be a radiologist, but just pretend for a moment that a radiologist is interpreting all of the medical imaging that we have now. This is similar to today, although the screen here is a little different. But now we have information from artificial intelligence, and artificial intelligence provides us additional structured information that we can then evaluate. As the radiologist, we need to be the person evaluating those results, because those results are sometimes accurate and sometimes not. The radiologist should be able to work with those results directly, make changes, and then the AI should be able to run again directly within our system and provide us the new results. That is the level of collaboration and integration we need to have.

I don't think this is going to happen just with artificial intelligence. There's a huge amount of additional information that we can get from the new technology that's out there, including molecular imaging, radiomics, genomics, proteomics, all of the different omics. All of this is structured information that needs to be combined, and someone needs to understand how to contextualize it, how to understand what's accurate and not accurate, and then provide that information back to the referring clinician and perhaps the patient. To me, that is going to be the role of the radiologist. And you can't take on more than we're doing today unless we take some things off our plate, which is what I think artificial intelligence is going to help us do.

If you think about it, today the information the radiologist is looking at is mostly morphologic imaging: visible anatomy and pathology, and trying to derive pathology from what looks abnormal in the anatomy. There is also some physiologic imaging that we get, but in the future we have to incorporate a lot more. So what I'm suggesting is that we as radiologists need to move from being the imaging expert to being the information expert. A ton of data is coming with artificial intelligence, molecular imaging, radiomics, genomics, et cetera, and all of that has to be understood by someone. My suggestion is that this will be the radiologist, as the consultant. But in order to do that we need to develop the expertise, and we develop expertise through education; I think we need education in all of these areas.

Since we have a short talk today, what I wanted to do is share a few lessons that we have learned in using artificial intelligence over the years. A lot of the education you can get is through practice and trial, figuring things out, because these things aren't standard of care yet. There are no set lessons already written down for you; anyone using AI is using it as an early adopter, and we have to figure out those lessons learned.
Here in my practice, we've had a lot of experience using AI over the past five or six years. We started in 2017, and since that time we've rolled out artificial intelligence to thousands of radiologists. A lot of the work we've done is in natural language processing, because that has a huge amount of immediate value, but we've also done a lot of work in computer vision. We've had 17 million annual images available to nine different computer vision models, and we've been evaluating those both clinically, live, and in the background. With that amount of information, I wanted to share five lessons that we've learned, which hopefully can fast-forward your own education so that we can get AI to become standard of care.

So let's start with number one: artificial intelligence helps find unexpected findings. AI is different from a human; it doesn't look at images the way we do, and that can actually be good, because if it can find things that we're missing, by looking at things a different way, then that can benefit everyone. Here's a 50-year-old male who came in with a history of "suspected PE." I don't even think that's ICD-10 compliant, but it is actually very helpful for us as radiologists, because we know what we should be looking for: they're worried about a PE. The radiologist read this correctly, there was no PE, and an AI model that ran also found no PE. But what we didn't find was this unexpected finding of a rib fracture. This might have been the cause of the pleuritic chest pain and why the physician thought there was a PE, but generally we're looking centrally, in a different window. It's hard enough to find rib fractures; when they're unexpected, it's even harder. In this case, I think we were biased by the history, and that's why we missed the finding.

A 56-year-old male came in with left leg pain, and we did find the cause: an obstructing distal left ureteral calculus causing hydronephrosis and all of that inflammation around the kidney. But what we missed were these two really tiny foci of free gas, which were unexpected and were found by the AI, not by the radiologist. In this case, I think we were biased by our satisfaction of search.

Here's a 52-year-old male who came in with shortness of breath, and this was during the Delta wave of COVID, so you can see there's COVID pneumonia here. In that background of COVID pneumonia it's actually quite hard to find these very small pulmonary emboli. The radiologists didn't find them, but the AI did. The AI is just looking for a pulmonary embolism; it doesn't care what the background is. But the radiologist does, and we are biased by distracting pathology.

Of course, what do you do for a stroke exam? You get a CTA of the head and neck, and this is the study we got. When you're looking at a CTA, you're looking centrally at the vessels, the anterior circulation, the posterior circulation, and the rad read this completely correctly. The AI agreed: there was no vessel occlusion, no dissection, no issue causing the stroke. But what we missed was this cervical spine posterior element fracture. That is not something you're generally concentrating on, looking at bone windows and looking posteriorly at the spine, but the AI did pick it up. I think when you are biased by the exam selection, you're looking for a very particular thing, and it becomes harder to find the other things.

This is another similar exam: a 47-year-old male with subarachnoid hemorrhage and headache.
You can see the subarachnoid hemorrhage here and the hydrocephalus, dilated temporal horns, on this non-contrast head CT. So what are you going to get? It's a young patient, you're thinking about a ruptured aneurysm, so we went ahead and got a CTA of the head. The most common location for an aneurysm is the circle of Willis, and we concentrate there because we've learned from our education that that is the most common place to find something. There was no ruptured aneurysm, or any aneurysm, in this location. What the AI found, however, and the radiologist did not, was an aneurysm in a very unusual location, down at the skull base: a five-millimeter aneurysm coming off the left PICA, which is a branch of the vertebral artery. It's not something we commonly see, but I'll tell you, we found two of these in the time since we rolled out AI looking for brain aneurysms. As radiologists we look in the most common locations, and unless you've seen something more recently, it's hard to look really closely in other locations.

I've been talking a lot about bias, and bias gets a negative rap, which in many ways it should. But I wanted to provide a different perspective on bias, because it's the reason why AI and radiology are better together. What I always learned as a resident was that the highest-quality combination was a resident plus an attending. Even as a young resident I felt like I was helping, because the attending has been educated over time and has a lot of lessons they've learned. They look in the very common locations, places where they know they're going to find something, or in uncommon locations where they've recently seen things, based on their history. That bias helps them find subtle things they would otherwise miss. A younger resident doesn't know where to look, so we look everywhere, on the bones and so on, and we may call up some things that turn out to be false positives, but the attending can help resolve that.

I think in the future the highest-quality combination is going to be radiologist plus AI. Similarly, an educated, experienced radiologist is going to be looking at certain key locations. The AI looks at every pixel of the image, or really every voxel, but then it's going to concentrate only on certain areas, and if those areas don't overlap exactly with the areas we're looking at, that can add to our value; it can add to the sensitivity. For that brain aneurysm use case, we evaluated a thousand brain aneurysm cases, we looked at what the AI said and what the radiologist said, and what we found was that the AI improves the radiologist's sensitivity by 24 percent. So when you put them together, it helps. Interestingly, if you just went by what the AI said, the radiologist would have improved the AI's sensitivity by 34 percent. Okay, so that was lesson number one.
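The arithmetic behind "putting them together" can be sketched in a few lines of Python. This is a toy illustration with made-up detections, not the practice's thousand-case data; it only shows how a combined sensitivity is computed when a finding counts as caught if either the radiologist or the AI flags it.

```python
# A toy illustration (not the practice's data) of how a combined
# radiologist-plus-AI sensitivity is computed on known-positive cases.
# Each entry records whether the radiologist and/or the AI flagged the finding.
positive_cases = [
    {"rad": True,  "ai": True},   # both catch it (the common situation)
    {"rad": True,  "ai": False},  # radiologist-only catch (e.g. a sagittally oriented fracture)
    {"rad": False, "ai": True},   # AI-only catch (e.g. the unusual PICA aneurysm)
    {"rad": True,  "ai": True},
    {"rad": False, "ai": False},  # both miss
]

n = len(positive_cases)
sens_rad = sum(c["rad"] for c in positive_cases) / n                   # 3/5 = 60%
sens_ai = sum(c["ai"] for c in positive_cases) / n                     # 3/5 = 60%
sens_combined = sum(c["rad"] or c["ai"] for c in positive_cases) / n   # 4/5 = 80%

print(f"radiologist alone: {sens_rad:.0%}")
print(f"AI alone:          {sens_ai:.0%}")
print(f"radiologist + AI:  {sens_combined:.0%}")
# Whether a quoted improvement is reported as absolute percentage points or as
# a relative gain over the single reader is a reporting choice; the talk does
# not specify which convention lies behind the 24% and 34% figures.
```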
Lesson number two: AI helps radiologists find subtle findings. We did find this most of the time. As I said, we've run these models through thousands and thousands of cases, and most of the time what the AI is adding are super subtle findings; even if you're a radiologist looking at these, they are very subtle findings to see, so I'll put on the heat maps to show you where they are. These are from multiple different models, and this was typical. Now, does every AI model find subtle findings? Because if a model is only finding the common ones, it's not really helping you. I'll tell you that AI's value grows exponentially once you get past an accuracy tipping point, so you have to increase your sensitivity enough to find these things. Most of the value happens once the sensitivity is above around 95 percent. If you're down in the 50s, 60s, or even at 90, you're finding things that you as a radiologist are probably easily finding yourself. What you want is something more sensitive, that can help you detect the subtle findings that are going to make the difference. So lesson number two: AI helps find subtle findings, but only if you have a highly sensitive model.

Lesson number three: AI might detect things that humans can't see, and this is where things get really interesting; I was completely floored by this case. This was a 70-year-old male who came in with right-sided paralysis and word salad, so you're thinking Wernicke's, there's going to be a stroke in that area. We got the CTA and ran it through our large vessel occlusion model to look for an occlusion in the M1 segment, specifically the left M1, to find this stroke, and we didn't see anything; in fact, the CTA was completely normal in that area. On the left-hand side I've put up the axial image and on the right-hand side the 3D reconstruction, because you can follow that MCA all the way out, and there really is nothing you can see that's discontinuous that would be causing this. But the AI picked up something right there, centrally within the M1 segment, and it was kind of astonishing. When you looked back at the non-contrast head CT, which luckily we also have (you get that first, and we do evaluate those), you can see a density within the left MCA, a dense MCA sign, which is a sign of a thrombus within the M1 segment. Because of its density it was very similar to the contrast opacification, so we just couldn't pick it up; it was pretty small, but the AI did pick it up. And the AI was not looking at the non-contrast head CT; the AI was looking at the CTA. Here's the actual stroke, and it was in Wernicke's area, which caused the patient's paralysis and word salad.

Next case: an 80-year-old male with shortness of breath and chest pain; they were looking for PE. You get all different kinds of slice thicknesses. The images are usually acquired at maybe 0.625 millimeters, but they may not always send you the 0.625 slices; that can be hard for the network to manage, so we may only get the two- or three-millimeter slices, or we may only look at those as sort of standard of care. On this two-millimeter slice thickness it's very hard to see this small PE, but when you look at the 0.625s it's a lot easier. We try to, and really always should, send the thinnest-slice images to our AI models; it's going to be higher quality for them and higher quality for us, because we then have something else looking at those images in case we're going to miss something. Here's another case where on the left is a thinner slice and on the right is a slightly thicker slice, but definitely standard of care for an abdomen-pelvis exam. It's very hard to see this rib fracture on the slightly thicker, standard-of-care slices and a little easier on the 1.25s, and both of these were picked up by the AI and not by the rad, which I think is understandable in these cases.

So lesson number three: AI might detect some things that we can't see, and that suggests how we should be working with AI. Maybe in the future we should be letting the AI look at those super thin slices. Should we as humans be doing that? No, we'd probably be wasting our time. But if we have the AI look at them, it can allow us to pick up things we might otherwise have missed. This is how we integrate AI and radiology together.
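A practical corollary of the thin-slice lesson is routing the thinnest reconstruction in a study to the AI even when the radiologist reads thicker slices. Below is a minimal Python sketch of that idea using pydicom; the folder layout, the ".dcm" extension, and the route_to_ai call are hypothetical placeholders, and only the standard DICOM attributes SeriesInstanceUID and SliceThickness are assumed.

```python
# A minimal sketch (using pydicom) of picking the thinnest-slice series in a
# study to hand to an AI model. Directory layout and route_to_ai() are
# hypothetical; SeriesInstanceUID and SliceThickness are standard DICOM fields.
from pathlib import Path
from typing import Optional

import pydicom


def thinnest_series(study_dir: str) -> Optional[str]:
    """Return the SeriesInstanceUID with the smallest slice thickness found."""
    thickness_by_series = {}
    for f in Path(study_dir).rglob("*.dcm"):
        ds = pydicom.dcmread(f, stop_before_pixels=True)  # headers only, fast
        thickness = getattr(ds, "SliceThickness", None)
        if thickness is None:
            continue
        uid = str(ds.SeriesInstanceUID)
        t = float(thickness)
        # keep the smallest thickness seen for each series
        if uid not in thickness_by_series or t < thickness_by_series[uid]:
            thickness_by_series[uid] = t
    if not thickness_by_series:
        return None
    return min(thickness_by_series, key=thickness_by_series.get)


# Hypothetical usage: hand the 0.625 mm reconstruction to the model even if the
# radiologist reads the 2-3 mm series.
# uid = thinnest_series("/data/studies/ctpa_example")
# route_to_ai(uid)  # placeholder for whatever the orchestration layer does
```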
All right, let's go on to the last two. Lesson number four: rads can also help improve the AI's accuracy. I've been talking about how great the AI is, and it is picking up things that rads miss, but it goes in both directions. Here are just a couple of examples, and I'll tell you I have plenty of these: rads must dismiss the false positives. This one is pretty obvious: a non-contrast head CT where the patient is moving. We see this frequently; there's streak artifact, and you may also get streak artifact from metal, and all of these were called positive for intracranial hemorrhage by the AI. These are very easy for us to dismiss, and I think that's fine. It's okay if the AI calls a false positive that's easy for me to dismiss; what I don't like is when the AI calls a false positive that's hard for me to dismiss.

Here's another example: a 66-year-old male who came in with fever and coffee-ground emesis. The AI model ran for free air on this exam and picked up the gas that's just next to the stomach. It's a little obvious to see on this image, but I'll tell you I had a bunch of other cases where there was gas in a little diverticulum or gas in the tip of the appendix that was picked up by the AI, and it tells you that the AI is looking at things in a different way than we are. We're looking at things in three dimensions. We're not always looking at every coronal and sagittal to follow the bowel, but we're certainly following the bowel up and down as we go through the abdomen. The AI is not looking at anatomy in 3D the way we are; it's looking at a thin stack of slices, and it can mistake some "free air" for gas that is really just connected to the bowel, as you can see in this case, because it's not looking at the full anatomy in the same way we are.

Here's another case: a 93-year-old male who had a fall. There are actually two c-spine fractures in this one, and the AI picked up one of them, but the second one was a fracture that was parallel with the sagittal plane. The AI is not looking at every series of every CT exam; in fact it's usually looking at one series, and in this case, for a c-spine fracture, it's looking at the sagittal series. It's much harder to pick up a fracture that is sagittally oriented, where you just go from seeing bone, to no bone, to bone, especially if you're not looking at thicker slabs of imaging. So this one was picked up by the radiologist but missed by the AI.

The last case here: a 65-year-old female with headache. This one ran through the intracranial hemorrhage model, and it did pick up this high density here in the basal ganglia, which I think is appropriate to flag. But if you look at the prior, which the radiologist has but the AI generally is not looking at, you can see that this hasn't changed in years, so we could more confidently call this not hemorrhage but calcification, whereas the AI model is going to call it hemorrhage.

All right, so lesson number five: radiologists and AI are better together, and hopefully you've picked up on that lesson already.
But let's go back to that very first case, the 50-year-old male with suspected PE, where the AI picked up that unexpected rib fracture, which was fabulous. It actually sent the case to me, and I got a chance to look at it. So I said, all right, there's a rib fracture, let me go in, and when I went in with that bias of looking very specifically for rib fractures, I was able to find three more rib fractures that the AI did not. I think this is how we're going to be working with AI in the future: the AI will pick up something that we might have missed, that's okay, but then we as radiologists prime our brain to look for something like that, or a pathology related to it, maybe a subtle pneumothorax, and we'll pick up things the AI didn't find. That's how we integrate the two together. I will tell you I have lots of examples where the AI directed me to a finding, and I then went and found way more than the AI did, because my brain was primed to look very specifically in that area as opposed to looking for everything.

All right, so the future of radiology: I think there is going to be a human-cybernetic collaboration. We need to be integrated together, and that's why I showed you that case, that vision, at the beginning: we will need to have the AI directly integrated into our systems and running in our systems, so that when we work with the AI, maybe we agree, maybe we disagree, the AI can run again taking that information into account. I put together a little depiction of some of the lessons we've learned about what the AI should be concentrating on and what radiologists should be concentrating on. It would be an absolute waste if we all concentrated on everything. Again, this is about how we integrate the things that we do well as radiologists and the things that AI does well as a computer system, working together. That's how you take the best of both, and one plus one equals three; that's how you get to the next level. In healthcare right now we can't think about just replacing things that we do; we need to think about adding value, doing more than we're able to do today, and I think this is the method by which we can get there.

So today we talked about a few things. We talked about the future of radiology being bright; I think we're going to need radiologists more than ever. If we can have AI helping us with some of the things we do, we can elevate our role to take in more information, and that takes education: we need to understand how these things work so that we can be the experts. Within that, we talked about five lessons about how radiologists will work together with AI, and it's more about augmentation of our capabilities. And of course the most important lesson is that radiologists and AI are better together. So with that, I'd like to say thank you, and I will pause in case there are any questions.

Awesome, thanks so much, Nina. This presentation was super informative, I think, both for people who are a little bit new to the space and for experts, of which there are many on this call. So right now the floor is open for questions; if you have anything, feel free to drop it in the chat. Given the time we have left, I'll start with one from Michelle. Michelle asks: would the level to which the radiologist improves the AI decrease over time, as the AI continues to learn?

Yes, it's a really great question, but right now it's actually not applicable, and not a lot of people know this. It comes down to the way the FDA works.
If you have a model that is going to affect patient care, it has to go through the FDA, and the FDA clears or approves a model at a point in time. They don't feel comfortable approving a model that is allowed to change over time, because they don't feel they have control over what that change is going to be. So when an AI vendor goes to sell their product, it has to go through clearance, and they basically have it frozen. I think that's actually a benefit for us right now, because it allows us to learn how the AI works, and then we can adapt and learn together. I do think it's going to be harder once the FDA changes this, and that will probably take another year or so; they are working on it, on how you approve not the actual model but the process and the business that's managing that change. It will be harder, and we'll have to figure out new lessons learned and new ways to work together. The good thing is that we as humans are very adaptable, and we can adapt much more quickly. This is just giving us time to get our feet wet and gain some expertise, and then as AI models evolve, we will hopefully learn how to evolve quickly with them.

Thank you for that. I'll follow up with a question from Matt, also in the chat. Matt asks: does having a very high sensitivity lead to more false positives that a radiologist would have to go through?

Yes, it does. Now, when you're looking at an AI model, I don't look just at the headline numbers, and maybe I could give another talk on this sometime, but you have to look at things a little more deeply. For me, when you're looking at an AI model that is detecting a pathologic finding, a positive finding, I'm looking at two things. I'm looking at the positive predictive value, which is exactly what you're asking: how often is the test right when it says there's a positive? If you have a very sensitive model, you're going to get more false positives, so your positive predictive value is going to be lower. But your positive predictive value is affected even more by the prevalence of disease than by the sensitivity or specificity of the model; it's actually the prevalence that's driving it, and a lot of our pathology is not that prevalent. In our case, intracranial hemorrhage we see in about five percent of patients, seven percent for PE; if it's an incidental PE it's even smaller, and brain aneurysm is small too. All of the things that have a very low prevalence, especially below one percent, are very, very hard for AI, because the positive predictive value is going to be low because of the low prevalence. If you have a highly sensitive model, however, you're going to pick up some cases that are really astounding, cases that you as a radiologist look at and say, wow, that was fantastic, that really helped me. So what you do is balance that positive predictive value, which is sometimes out of your control because it's based on prevalence, against those wow cases. If you don't have those wow cases from a very sensitive model and you just have the low positive predictive value, you're going to be miserable; you're going to hate it. So I think having a more sensitive model, even though it causes a few more false positives, is way better than a model that's not going to give you any of those wow cases, where your balance is going to be completely off: a low positive predictive value with nothing to weigh it out. So it is a balance.
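The prevalence effect described here follows directly from Bayes' rule, and a small Python sketch makes it concrete. The 5 percent intracranial hemorrhage prevalence echoes the figure above; the 95 percent sensitivity and the two specificity values are illustrative assumptions, not numbers from the talk, and the specificity comparison anticipates the point in the next answer that going from 90 to 95 percent specificity halves the false positives.

```python
# A worked example of the prevalence point: with the same sensitivity, PPV
# collapses when a finding is rare. The 5% hemorrhage prevalence echoes the
# figure above; sensitivity and specificity values are illustrative assumptions.

def ppv_npv(sens: float, spec: float, prev: float) -> tuple:
    """Positive and negative predictive value from Bayes' rule."""
    tp = sens * prev
    fp = (1 - spec) * (1 - prev)
    fn = (1 - sens) * prev
    tn = spec * (1 - prev)
    return tp / (tp + fp), tn / (tn + fn)

for prev in (0.05, 0.01):          # ~5% (hemorrhage on head CT) vs a rarer finding
    for spec in (0.90, 0.95):      # note: 90% -> 95% specificity halves the false positives
        ppv, npv = ppv_npv(sens=0.95, spec=spec, prev=prev)
        print(f"prev={prev:.0%}  spec={spec:.0%}  PPV={ppv:.1%}  NPV={npv:.2%}")
# At 5% prevalence and 95% sensitivity, PPV is roughly 33% at 90% specificity
# and roughly 50% at 95% specificity, while NPV stays above 99% in both cases.
```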
But you also want to look at the kinds of false positives you're given. Like I mentioned, if you're given a false positive that you can dismiss quickly, because of motion artifact or something similar, that's easy. But if you're given a false positive that's hard to dismiss, that makes more of a difference. That's why we take time in our practice to categorize all of the false positive types we see, and we educate our radiologists about them so that they can go through them more quickly.

Yeah, that totally makes sense. The radiologist has a gatekeeper function here as well, right? It shows that the value of the radiologist won't go away with AI; there will always have to be somebody who actually looks at whether a finding is relevant or whether we're being too sensitive. In line with this question, Mercia Popa asks about the balance between sensitivity and specificity: if the sensitivity needs to be above 95 percent, what about specificity? Shouldn't that also be above 95 percent?

It should, absolutely. In fact, if you increase your specificity from 90 to 95 percent, you cut your false positives in half, so it's super important. It's why we need really accurate models. But when you go out there and look at the accuracies published by the vendors, don't just take them at face value; that is not necessarily going to be the accuracy on your patient population. So I'd encourage you to try the model on your own patient population. You want to look at sensitivity and specificity, but again, also look at positive predictive value. Negative predictive value is also a really good one, because that's what makes you as a radiologist more efficient: when you think about it, 95 percent of your head CTs are not going to have intracranial hemorrhage, only five percent will, so it's the negative studies that are driving your efficiency. If the model has a high negative predictive value, that's also really helpful.

Yeah, totally. So, we have many more questions in the chat and I have a thousand questions that I want to ask you, but we are running out of time, unfortunately. Nina, thank you very much for a super interesting presentation; it was great to have you here. Thanks, everybody, for attending this Bytes of Innovation, and we hope to see you next time at Bytes of Innovation. Again, thanks, everybody.
Info
Channel: Segmed
Views: 4,327
Id: mgDSJtnWcG0
Length: 29min 12sec (1752 seconds)
Published: Tue Feb 28 2023