AI for Radiology

Captions
Hello everybody, thanks so much for joining us today. My name is Jen Ridenour, product marketing lead at Viz.ai. I'm honored to be joined today by Dr. Susie Bash and Dr. Kevin Abrams. Today we're going to talk about a background on AI and go through a summary of clinical applications, including some examples within clinical practice. With lots of content to cover, I don't want to delay; I'd like to pass it right over to Dr. Bash to get us started.

Thanks so much, Jen. Yes, my name is Susie Bash, and I'm a neuroradiologist at RadNet. In this talk we'll discuss an overview of the AI market space as well as challenges and opportunities for AI integration in your radiology clinical practice. All areas of our lives impacted by technology are right now being transformed by AI, so we'll discuss the present and future of the broad AI market space and then narrow down to imaging AI. I think we can all expect AI-enabled robots to become part of our everyday lives at some point in the future. Sophia here has silicone skin, she speaks with 62 facial expressions, and she remembers her interactions with other people.

Right now, computers are actually better at pattern recognition than humans, but humans are better at global reasoning. You can see here that this AI program did a fairly good job of generating an AI image from the pattern of a photograph of me, but here are some of the problems: the AI could detect that this is me, that this is a person, that there's water here, sky here, and a sun, but it wouldn't really understand what's unusual about this picture, whereas humans would immediately detect that the picture is upside down. And AI isn't always robust; it fails sometimes. This was a fully self-driving Tesla on autopilot that resulted in a car pileup in San Francisco.

So what is the purpose of AI in radiology? I believe it's to increase accuracy and efficiency, which we generally term non-interpretive AI, and also to provide clinical decision support, which we generally term interpretive AI. I believe every single imaging study will eventually undergo AI analysis before a human has ever seen it. In addition to AI screening for disease pathology, I think all quantitation will be completed by a computer alone in the future. And if I were going to rank what would happen first, I would say plain-film studies, such as chest x-ray for lung disease and endotracheal tube placement, will likely have fully automated AI interpretation first, followed by mammograms.

AI applications can be applied before, during, or after image acquisition. Some of the AI tools that can be applied before would be things like scheduling, insurance authorization, billing, mining PACS and medical records, and trying to prevent no-shows. I work for RadNet, which is the largest outpatient imaging enterprise in the United States; we do probably 8.5 million exams a year, and we actually use AI for several of these things: scheduling, insurance authorization, billing, and helping prevent no-shows.

AI optimization during image acquisition would be things like optimizing image display, deep learning reconstruction, and synthetic imaging. So let's start here with optimizing image display. AI can do automated image alignment, it can automatically segment (here it's segmenting the volumes of the lateral ventricles), and it can do automated subtraction maps: here's the current, here's the prior, here's the heat-map overlay. You can see how this would be very useful in patients with dementia or NPH; this happens to be NeuroQuant.
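As a rough illustration of the subtraction-map idea just described, here is a minimal sketch that computes a signed change map between a current and a prior volume. It assumes the two scans are already co-registered and intensity-normalized (real products handle those steps too); the noise-floor parameter is a made-up stand-in for the product's own noise handling.

```python
# Hypothetical sketch: an automated change ("subtraction") map between a
# current and a prior brain volume, assuming co-registered, normalized inputs.
import numpy as np

def subtraction_heatmap(current: np.ndarray, prior: np.ndarray,
                        noise_floor: float = 0.05) -> np.ndarray:
    """Return a signed change map; positive values mean new/brighter signal."""
    if current.shape != prior.shape:
        raise ValueError("volumes must be co-registered to the same grid")
    diff = current.astype(np.float32) - prior.astype(np.float32)
    # Suppress small differences that are plausibly noise rather than change.
    diff[np.abs(diff) < noise_floor * np.abs(prior).max()] = 0.0
    return diff

# Toy usage: a volume pair where one small region "grew" between scans.
prior = np.zeros((64, 64, 32), dtype=np.float32)
current = prior.copy()
current[30:34, 30:34, 15:17] = 1.0           # simulated enlarging structure
heat = subtraction_heatmap(current, prior)    # overlay this on the current scan
print("changed voxels:", int(np.count_nonzero(heat)))
```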
AI is also very good at 3D anatomy on CT: it can label the ribs for us, which is hard for us to do in the axial plane, and then actually splay out the ribs, so you can see how this would be very useful if you're trying to detect osseous metastases or fractures in the ribs. AI provides anatomical image intelligence with multimodal image fusion and quantification, fusing here the PET to the CT. It provides cinematic rendering, with photorealistic 3D visualization of CT and MR images, combined in this particular case with Microsoft HoloLens 2. You can see what some of these images look like; it's pretty impressive, with a very nice view of the rib fractures, and here are the cerebral vessels, so you can see this is a fairly impressive technology.

Now, another thing you can do during image acquisition is deep learning reconstruction (DLR). DLR is a very exciting AI tool; it's really one of the first things that I think is driving AI into standard of care for imaging. It allows for superior perceived image quality: higher perceived signal-to-noise ratio, higher perceived spatial resolution, higher perceived contrast-to-noise ratio, reduced artifacts, reduced dose, and it can enable faster scans.

So how does DLR work? Well, it doesn't actually accelerate the scan itself. Basically, imaging facilities alter their protocols to decrease scan time, which we call fast scans, and you can accomplish that acceleration by different means. Then the DLR is applied to the fast data set to restore image quality by denoising and, often, sharpness enhancement (a small sketch of this idea follows after these examples). The vendor-neutral solution in the market space right now is Subtle Medical; they were actually first to market, but it was such a good idea that all the OEMs have followed suit, at different stages of fruition with FDA approval, though a lot of them are now FDA approved. An advantage of a vendor-neutral solution like Subtle Medical is that it operates in the DICOM space rather than in k-space like the OEMs, so it can be applied to any scanner brand of any age. That allows a virtual upgrade to legacy scanners, which really improves the ROI by extending the scanner's life. The OEMs' DLR products produce beautiful images, but right now they're limited to their newest scanners.

So we can take a routine MRI exam from 30 to 40 minutes down to 15 minutes, and that's not just scanning time, that's on-and-off-the-table time; this makes a big difference in workflow. This is what it looks like: here's a standard image acquired in nine minutes. If you change your protocol and accelerate, you can get that down to half the time, but your image becomes very noisy. You then restore the signal-to-noise ratio and other aspects by applying deep learning reconstruction to this fast image, and this is what you get; look at the difference in gray-white differentiation compared to the standard image. Here's another example: standard, fast, and fast-DL. Again, look at the gray-white differentiation compared to the standard of care, despite going 50 percent faster.
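The vendor-neutral, DICOM-domain point above can be made concrete with a minimal sketch: read the reconstructed pixel data from any vendor's DICOM file, enhance it, and write it back. The denoiser below is a deliberate stand-in (a median filter); actual DLR products apply a trained deep network, and the file paths are hypothetical.

```python
# Sketch of why a vendor-neutral DLR tool can sit in the DICOM domain:
# it never touches k-space, only the reconstructed pixels, so any scanner
# brand or age can feed it. Median filter = placeholder for a trained CNN.
import numpy as np
import pydicom
from scipy.ndimage import median_filter

def enhance_slice(path_in: str, path_out: str) -> None:
    ds = pydicom.dcmread(path_in)               # works on any vendor's DICOM
    img = ds.pixel_array.astype(np.float32)
    denoised = median_filter(img, size=3)       # placeholder for a CNN pass
    out = np.clip(denoised, img.min(), img.max()).astype(ds.pixel_array.dtype)
    ds.PixelData = out.tobytes()                # write enhanced pixels back
    ds.save_as(path_out)

# enhance_slice("fast_acquisition.dcm", "fast_dl.dcm")  # hypothetical paths
```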
I like this case. This is a patient who had multiple small intracranial metastases; you can see this cortically based met is very difficult to see on the standard. You can accelerate your protocol 50 percent, and when you go faster it actually increases the contrast-to-noise ratio, but it makes the image noisy. You can then take this fast image, apply deep learning, and you get a bump in signal-to-noise ratio while maintaining that high contrast-to-noise ratio, and now all of a sudden you can see how easy it is to see these metastases. Detecting these early can mean the difference in saving a patient's life, so this is a very useful AI tool.

Here again is standard of care; we go 72 percent faster, apply deep learning, and this is what we get. Here's a standard and here's after deep learning, just a tremendous difference in image quality. Here's another example: the standard and the fast-DL. If you look at the hippocampus, here's the original image and this is after deep learning is applied. Again, here's the original: we got it down to five minutes but have a noisy scan; apply deep learning and you see what it ends up looking like. The original scan here took 28 minutes and this is seven minutes; look closely as I apply this animation, and this is what it looks like after deep learning in this patient with liver mets. You can see why this would be very useful in pediatric patients, who may have a hard time holding still: 24 minutes was the original, we got it down to six minutes with a noisy image, and look as the deep learning corrects the image quality in this animation. And here we went 75 percent faster and with less radiation dose, but you can see the images look very good.

You can also use AI during image acquisition to create synthetic images. This is a product in development: here's the actual STIR and here is a synthetic STIR; look how beautiful this looks compared to the actual STIR. The way this is done is that you create the synthetic STIR from a T1 and T2 input. Here is the original STIR and this is the synthesized STIR; look at the difference between them. This is essentially 100 percent acceleration: you created an image out of nothing and saved a lot of time. STIR sequences in spine studies are our longest sequences, and they're very prone to artifact, which is why Subtle started with STIR as their first synthetic sequence. Again, T1 and T2 input, here's the actual STIR, and then look at the synthesized one; just look at the CSF and see the difference from the actual STIR, really helping eliminate that artifact. Here's a patient with marrow replacement in the S1 vertebral body, and here is the synthesized STIR: all the artifact here is gone on the synthesized image. Here again is the original STIR, in a patient who has a couple of compression fractures, and the synthesized STIR. This is a patient with discitis-osteomyelitis, the original STIR and the synthesized one; we preserve all of the imaging features but increase the signal-to-noise ratio. And here again the acquired STIR really has poor quality through the cord, and then here's the synthesized STIR.
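A minimal sketch of the synthetic-STIR idea: a network takes co-registered T1 and T2 images as two input channels and predicts a STIR-like image. The toy model below is untrained and purely illustrative; the vendor's product uses a trained, validated deep network, and the tensor sizes here are arbitrary.

```python
# Toy image-to-image model: T1 + T2 in (2 channels), synthetic STIR out.
import torch
import torch.nn as nn

class ToySynthSTIR(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),   # T1 and T2 channels in
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),              # synthetic STIR out
        )

    def forward(self, t1: torch.Tensor, t2: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([t1, t2], dim=1))

model = ToySynthSTIR()
t1 = torch.rand(1, 1, 256, 256)   # stand-ins for co-registered input slices
t2 = torch.rand(1, 1, 256, 256)
stir_hat = model(t1, t2)          # shape [1, 1, 256, 256]
print(stir_hat.shape)
```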
Now, AI can also be applied after image acquisition. We can do this for things like cancer screening, triage apps, quantitative volumetric imaging, and natural language processing. With cancer screening, AI can detect prostate cancer; it can measure the volume of the prostate and also the volume of the tumor, and here's another program that's also evaluating prostate cancer. It can also detect breast cancer early. RadNet acquired or partners with many different AI companies; this happens to be one they incorporated, called DeepHealth, and it will detect breast cancer one to two years before expert radiologists. Here this cancer was missed by the human reader at one year; by two years it was detected by the human, but the AI picked it up first. It didn't matter how the data were run through: the AI tool always beat out expert readers when it came to detecting cancers earlier, again because computers have an advantage with pattern recognition, and that will obviously help save lives. Here's another company we incorporated that's in development for detecting lung cancers, and Cortechs.ai has an OnQ Neuro product that will segment and detect tumors as well.

Then the other big thing is triage apps, which can be applied after image acquisition. This happens to be the Viz.ai product; you can see here that it has detected a large vessel occlusion on the left. This is a pre-PACS solution: 63 seconds between the time the scan was performed and the alert that makes everyone aware. So it's an AI-powered notification for LVO and CT perfusion. It uses high-fidelity mobile image viewing, so everyone can view the study through a HIPAA-secure mobile app on their phone, and it provides real-time patient information and HIPAA-secure communication among all the members of the stroke team: the neuroradiologist, neurointerventionalist, neurologist, and so on. This is their perfusion module. Basically, when this software is utilized, it saves 102 minutes door-in to door-out and 87 minutes door-in to groin puncture; it's all about getting to thrombectomy early in terms of outcomes. It decreases the neuro ICU length of stay by 3.5 days and the regular hospital length of stay by 2.5 days. This is a significant cost savings for the patient, but more importantly it really helps improve their cognitive function, which is what we care about the most. In addition to intracranial hemorrhage detection, there's also aneurysm detection, which again will alert and prioritize, and then outside of the brain, AI tools can detect aortic dissection and also pulmonary embolism.

Now, another AI application you can apply after image acquisition is quantitative volumetric imaging. This is what one company looks like, Cortechs.ai; others include icometrix, Quantib ND, and Combinostics, and Darmiyan is actually in development here. Why is it appealing? Well, it improves diagnostic accuracy, it enhances clinical value by allowing you to monitor disease activity, it helps reduce reader subjectivity, which is a very big issue in radiology (our referrers don't like it when our interpretations sound different between colleagues), and it can impact disease-modifying therapy for patient care.

Take this 73-year-old with memory loss as an example, with mild to moderate right temporal lobe atrophy. This happens to be NeuroQuant, and this happens to be icometrix; quant post-processing was done. Take a closer look and it calculates the volume of pertinent structures in patients with a potential dementia, so the hippocampi and entorhinal cortex, the ventricles, the temporal, parietal, frontal, and occipital cortices, and the anterior and posterior cingulate gyri, and it plots them on a graph, where anything in red is more than two standard deviations outside the mean and other colors flag lesser deviations (a small z-score sketch follows below). Here's another report where again you're given the normative percentile for different areas in a spider graph, pointing to which lobe is most affected. This is a triage brain atrophy report where you get more structures, and anything in red is statistically significant.
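Here is a minimal sketch of the normative flagging these reports describe: compare a measured structure volume against matched norms and flag anything more than two standard deviations from the mean, the values shown in red. The normative mean and SD below are made up for illustration.

```python
# Sketch of quant-report flagging: z-score against a normative distribution.
def flag_volume(volume_ml: float, norm_mean: float, norm_sd: float) -> str:
    z = (volume_ml - norm_mean) / norm_sd
    return f"z = {z:+.1f}: " + ("RED (>2 SD from norms)" if abs(z) > 2.0
                                else "within normal limits")

# Hypothetical hippocampal volumes vs. a hypothetical normative distribution.
print(flag_volume(volume_ml=2.9, norm_mean=3.9, norm_sd=0.4))  # flagged red
print(flag_volume(volume_ml=3.8, norm_mean=3.9, norm_sd=0.4))  # normal
```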
This patient then went on to have an amyloid PET study; here's the PET-CT fusion and here's a PET-MR fusion, showing diffuse binding of the amyloid tracer to the cortex, so this patient in fact had Alzheimer's disease.

So we can use quant for many different things: dementia, multiple sclerosis, epilepsy, pediatrics, traumatic brain injury, neuro-oncology, and spine imaging. I'll give you another example with multiple sclerosis, where it can detect individual lesions, color-code them according to location in the brain, and, more importantly, track the dynamic change over time, so we can see which lesions are new or enlarging and which are shrinking. This is what you would look at on your PACS as a sort of cine image, and you can see the new and enlarging plaques and the shrinking plaques in blue. This is what an MS report looks like; this happens to be NeuroQuant. You get the lesion burden by volume, which is really the most important thing, with the current and the prior. It also gives you a lesion count, but as plaques enlarge they can become confluent and actually artificially drop the plaque count, so I always rely much more heavily on the volume. More importantly, it tells you what's new, enlarging, shrinking, and stable, and it also covers the other structures (cerebral cortical gray matter, white matter, thalamic volumes, and so on), current and prior, and you can get plot graphs as well, tracking change from the prior.

We can use it in epilepsy. In this particular patient the right and left hippocampi were both within normal limits, but the asymmetry index was outside normal limits, so this would make you think that this patient could potentially have early right-sided mesial temporal sclerosis. And here's the icometrix report.
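The epilepsy example turns on an asymmetry index: each hippocampus can be individually within norms while the left-right asymmetry is not. A common formulation, assumed here, is the absolute difference over the mean of the two sides; the volumes and any cutoff are hypothetical.

```python
# Hippocampal asymmetry index: |R - L| / mean(R, L), one common convention.
def asymmetry_index(right_ml: float, left_ml: float) -> float:
    return abs(right_ml - left_ml) / ((right_ml + left_ml) / 2.0)

# Hypothetical volumes: both sides "normal" on their own, yet asymmetric.
r, l = 3.2, 3.9
print(f"asymmetry index = {asymmetry_index(r, l):.1%}")  # ~19.7%, may exceed norms
```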
It can also be used in neuro-oncology; this happens to be OnQ Neuro. Sometimes it's hard to tell by imaging alone, in fact it can be impossible to tell, what is recurrent disease versus treatment change, and tumor segmentation can be done by the software; it actually does a very good job of segmenting the tumor and the surrounding FLAIR hyperintensity. Here's RSI (restriction spectrum imaging), which is sort of a high-level diffusion technique, showing active residual tumor. These are the types of reports you can get. Here's a preoperative patient, and this is after segmentation is done; and here's a postoperative patient, where you can see the enhancing tumor in red. The whole lesion was a combination of the enhancing tumor, the necrotic core, and the surrounding non-enhancing FLAIR hyperintensity, with the resection cavity eliminated from the whole-lesion size, and you can see how the enhancing tissue is changing over time in the postoperative setting. Here again is a look at the before and after segmentation, and this is a dynamic view of what you see after the segmentation has been performed; this patient has pre-op and post-op segmentation.

AI can also be applied after image acquisition for natural language processing. This particular company, Rad AI, can learn the style of individual radiologists by looking at multiple reports of theirs, then take the findings and create the impression. This can save radiologists a lot of time, and it does it in your own particular style, which radiologists all like; everyone is comfortable with their own style, so this could be a very useful feature.

Now let's move on to some of the challenges for radiology AI. We can look at this from the perspective of the radiologist, the imaging enterprise admin, the referring clinician, and the vendor. From the perspective of the radiologist, I think the two biggest challenges are time and trust. Will it cost time or dictating speed? Most radiologists are paid on an RVU basis, and even if an AI app provides value, if it costs them a lot of time they're probably not that interested in incorporating it. Will it take time to learn about products, will the workflow integration be seamless, and is customer support available and efficient? But going back to productivity, it actually can save time: in the realm of quant, you can read 13 reports in an hour as opposed to eight when you use quant with a pre-populated reporting template, roughly a 38 percent reduction in per-case reading time. This is what the pre-populated reporting template looks like: a summary of all the important findings that can be copied and pasted into your report if you'd like (see the template sketch after this section).

The other big issue for radiologists is trust. How accurate is the product? How reliable is it in terms of failures? How consistent is the software in terms of false positives and false negatives, and what is the value of the product; can we trust it? As products develop, they mature and get better over time, which is a wonderful thing. I've noticed there's been a big trend of moving from machine learning to deep learning; this happens to be an MS patient, and you can see it picks up a lot more plaques when you convert to deep learning. And does the product really have value? Here, anything in red is getting bigger and anything in blue is getting smaller: the cortex is getting smaller, the ventricles and sulci are getting bigger. This can provide value in terms of being able to see what's increasing or decreasing in plaques in this particular patient, because with differences in slice sampling and head angulation it is very difficult to detect change by eye, particularly if the patient has a very large plaque burden. It can also improve detection of disease activity: staying with this MS case, 24 percent of radiologists, when just looking at the exams with their eyes, will say there's active disease, meaning enlarging or new plaques, but when you apply quant software that number goes up to 76 percent, and that can impact disease-modifying therapy. So my referrers really love to use quant for their MS cases. It can also result in a 22 percent improvement in intra-reader variability and a 23 percent improvement in inter-reader variability.

Then from the perspective of the imaging enterprise admin, some of the challenges they're thinking about are: Will the solution align with the practice goals? Certain AI tools are more practical in the outpatient setting versus the inpatient setting, so you need to figure out which is right for you. Will insurance reimburse the AI tool, and if not, who pays for it? Should the imaging facility do a patient pass-through, which some are doing successfully for certain AI tools? And what is the ROI; is the added value and referral benefit worth the product cost? There are issues to consider with the contract, the time investment for training the techs, the marketers, and the MDs, and how easy and seamless installation and rollout are. From the perspective of the referring clinician, it's really about education on the clinical utility of the AI solution; referring physicians often don't know what the right product to use is and whether they should be ordering it.
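As a rough illustration of the pre-populated template idea mentioned above, this sketch renders hypothetical quant output (structure volumes and z-scores) into report-ready sentences a radiologist could paste and edit. The dictionary keys and thresholds are stand-ins, not any vendor's actual schema.

```python
# Hypothetical quant output rendered into paste-ready report findings.
QUANT = {
    "right hippocampus": {"volume_ml": 2.9, "z": -2.5},
    "left hippocampus":  {"volume_ml": 3.8, "z": -0.3},
}

def templated_findings(quant: dict) -> str:
    lines = []
    for structure, m in quant.items():
        status = ("abnormal (more than 2 SD below norms)" if m["z"] < -2
                  else "within normal limits")
        lines.append(f"The {structure} measures {m['volume_ml']:.1f} mL "
                     f"(z = {m['z']:+.1f}), {status}.")
    return "\n".join(lines)

print(templated_findings(QUANT))
```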
And then from the perspective of the vendor: really raising awareness of value among radiologists, referrers, imaging facilities, and patients. Educational initiatives, social media, and internal champions who believe in the product and can spread the news through their imaging enterprise are all challenges for the vendor, as are reimbursement challenges; it may be beneficial for vendors to have a CMS advocate. Viz.ai was the first company eligible for a New Technology Add-on Payment, because they were able to demonstrate that they were improving cognitive function and outcomes in stroke patients by getting them to the table earlier for thrombectomy. The other challenges are getting referrers to agree to an AI-based imaging tool, and expanding and improving the product line.

Now, there are opportunities here. Opportunities for vendors would be to establish trust with customers and clinicians, and that means a commitment to constantly improving product accuracy, and also clinical validation trials; radiologists want to know that these products have been validated in clinical trials. Then, really demonstrate value by developing new clinically relevant products, developing products that differ from competitors', and keeping the user interface simple. This is a bigger deal than you think: if you have to use outside hardware, that presents a problem; RadNet has 355 imaging facilities, so that wouldn't be a great solution for us. Other opportunities would be educational marketing outreach, which means clinical meeting presence, social media, education, website enhancement, and physician outreach, and I put in here specifically resident training programs. If you poll residents, almost 100 percent of them want to learn about AI in their residency, but in truth very few residents are getting trained in AI during residency, so this needs to change. Other opportunities would be collaborations with other AI companies, whether OEMs or private vendors, and platform exposure to fill the niche: not every company can do everything, but if we work together we can provide a multi-solution through potentially a single platform. The long-term goal would be to develop products that drive imaging AI into standard of care.

So what is hype versus reality? Vendors have very high-value claims for their AI solutions, but how do we know if they deliver what's promised? Again, that gets back to the importance of multicenter clinical validation trials. I happen to be a big believer in this, so I've done some of these. Here we went 40 percent faster and then applied deep learning to the fast image. This was a prospective, multicenter, multireader trial; these are the kinds of trials that you want, so that you eliminate bias. What we found was that the deep learning met or exceeded the perceived image quality of standard-of-care spine MR exams despite the 40 percent scan-time reduction, qualitatively outperformed standard of care for artifacts and signal-to-noise, and quantitatively preserved structural similarity, for which we used SSIM, a structural similarity index metric. It really suggests the potential for routine utility of deep learning MR in clinical practice. This was published in Clinical Neuroradiology.
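SSIM, the metric the trial used, has a standard implementation in scikit-image; the sketch below compares a synthetic stand-in for a standard-of-care image against a slightly perturbed stand-in for its fast-DL counterpart. The arrays are random data, purely illustrative.

```python
# Quantitative fidelity check: SSIM between matched image pairs.
import numpy as np
from skimage.metrics import structural_similarity as ssim

rng = np.random.default_rng(0)
standard = rng.random((256, 256)).astype(np.float32)
fast_dl = standard + rng.normal(0, 0.02, standard.shape).astype(np.float32)

score = ssim(standard, fast_dl,
             data_range=float(fast_dl.max() - fast_dl.min()))
print(f"SSIM = {score:.3f}")   # 1.0 would mean structurally identical
```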
Then we wondered whether it's possible to effectively apply more than one AI solution, to obtain the benefits of both tools while still maintaining image quality and accuracy. So this is another clinical validation trial, again a prospective, multicenter, multireader trial, where we looked at patients with memory loss. We did a 60 percent acceleration and then applied deep learning, and we also applied NeuroQuant for quantitative performance both before and after deep learning. This was published in AJNR at the end of 2021. What we found is that fast-DL was statistically superior to standard of care for perceived image quality across every single imaging feature we looked at; it was actually very impressive. The volumetric segmentation from NeuroQuant matched both before and after, so the top row looks like the bottom row because the quantitation was the same for both: whatever the hippocampal volume was before, it was the same after fast-DL, which really portrays the robustness of the quantitative software as well as the deep learning software. So deep learning reconstruction allowed a 60 percent scan-time reduction while maintaining high volumetric quantification accuracy, consistent clinical classification, and perceived superior image quality compared to standard of care. It really supports the reliability, efficiency, and utility of DL-based enhancement for quantitative imaging, and the hope is that shorter scan times may boost utilization of volumetric quant in routine clinical practice. Another one we were working on, which we've actually already submitted for publication at AJNR, is again a multicenter, multireader trial evaluating the synthetic STIR images we talked about earlier: DL-generated synthetic spine STIR images turned out to be superior in image quality to conventionally acquired STIR, suggesting the potential for routine clinical practice.

So what do clinicians expect from AI companies? They expect that vendors' AI solutions will do what they claim to do. They generally want the products they incorporate to be FDA approved, or CE marked in Europe, and they would like them to integrate seamlessly into the PACS workflow without requiring external hardware. They also want to be provided with good customer support, and they want products that deliver value without costing the imaging company or the radiologists money or time; in other words, a positive ROI. Thank you so much for taking the time to join us. Dr. Abrams will now talk about his personal experience with a triage app in his clinical practice.

Thank you, Susie, for that great introduction on AI in the radiology space. What I'd like to talk about now is our real-world experience using this AI. My first question is: does your daily worklist look like this? You have a whole bunch of what we call red dots or red squares showing stat brain CTs, stat chest CTs, CTA head, and CTA neck, and then at the bottom you have a few routine cases; it seems like everything is ordered stat these days. Now, what happens if the last case on your stat list looks like this, a diffuse subarachnoid hemorrhage, or the last case on your routine list looks like this, somebody with subtle pulmonary emboli? Certainly you want to triage these cases toward the top rather than the bottom.

So this is the Viz platform. I'm showing it as the mobile app right now; however, they also have a desktop app that does the same thing. You have AI-powered large vessel occlusion, intracranial hemorrhage, PE detection, aortic dissection, and aneurysm alerts; these are the things we want to triage toward the top. It has advanced mobile image viewing, where as you see you can manipulate your CTAs any which way you want, the team can put in clinical information, and lastly there's the chat function, full-stack secure communication, HIPAA-compliant and encrypted, so that the whole team can be on the same page, communicating in parallel rather than in a serial fashion.
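The triage logic implied here can be sketched in a few lines: AI-flagged studies sort above unflagged ones, stat orders above routine, and longest-waiting first within a tier. The flag labels, field names, and ordering rule are hypothetical stand-ins, not the platform's actual behavior.

```python
# Sketch of AI-assisted worklist triage: the subtle PE on a routine list
# should not sit at the bottom.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Study:
    accession: str
    ordered_priority: str           # "stat" or "routine"
    ai_flag: Optional[str] = None   # e.g. "LVO", "PE", "ICH" (hypothetical)
    minutes_waiting: int = 0

def triage_key(s: Study):
    # AI-flagged first, then stat orders, then the longest-waiting study.
    return (s.ai_flag is None, s.ordered_priority != "stat", -s.minutes_waiting)

worklist = [
    Study("A1", "stat"),
    Study("A2", "routine", "PE", 45),       # subtle PE on a routine chest CTA
    Study("A3", "stat", "ICH", 3),
    Study("A4", "routine", minutes_waiting=90),
]
for s in sorted(worklist, key=triage_key):
    print(s.accession, s.ordered_priority, s.ai_flag or "-")
```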
So let's start with a few cases. The first is an 85-year-old woman with a history of atrial fibrillation, on Eliquis. In the outpatient setting she was brought to the hospital via EMS with stroke symptoms of slurred speech and facial droop. Here's her non-contrast brain CT: if you look, there's a hyperdense clot sitting in the right MCA trifurcation, and it actually looks like the clot may be in an aneurysm. The ASPECTS score was 10. So we went on to do a CTA on this patient, and we get this alert; it will come to your PACS, your phone, or the desktop, wherever you'd like. This is the suspected LVO, and it goes to the exact slice of where the suspected occlusion is, and here it is on our source images, demonstrating the right MCA occlusion. We also get a second alert: besides the LVO alert that we see in yellow, we got an ANX alert in blue, which means it's detecting an aneurysm. Now, it's unlikely it's detecting the right MCA aneurysm, which looks like it has clot in it; it was actually detecting a left PCOM aneurysm, which we'll see later. The group starts texting, with ensuing coordination of care between neuroradiology, neurology, nursing, and neurointerventional radiology; again, parallel communication rather than serial communication.

Here's the same patient's CT perfusion. You can see on the bottom the area in green showing the acute ischemia in the right MCA territory, and the core infarct is calculated at zero, so this is a good therapeutic ratio (a toy mismatch calculation follows below). We also looked at the PCOM aneurysm, and we can now see it on the sagittal and axial images, and here it is on 3D reformatted images; you can see that right M1 segment occlusion as well as the left posterior communicating artery aneurysm. The patient goes on to thrombectomy by Dr. Dabus: you can see the M1 occlusion, and with a pass of a thrombectomy device they were able to get an excellent angiographic result, a TICI 3 result. Not only can you see the vessels nicely, but now you see that aneurysm, which had clotted. Since they knew about the possible left PCOM aneurysm, they also injected the left side and showed the left PCOM aneurysm, which was slightly irregular. This is the patient at 48-hour follow-up: all symptoms resolved and the patient was discharged.

So does this parallel communication do the patient justice; does it expedite patient care? The answer is definitely yes. This article came out in AJNR in January of this year; it looked at the implementation of an AI-based stroke augmented intelligence and communications platform in a comprehensive stroke network, out of the University of California San Diego. They took 82 neurointerventional cases in pre and post time periods and found a 39 percent improvement from using the AI augmented intelligence and communication platform, decreasing door-to-groin times from 157 minutes to 95 minutes. This is huge when it comes to stroke care.
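The perfusion numbers in a case like this come from thresholding CTP maps: Tmax > 6 s is a widely used convention for critically hypoperfused tissue, and a small core with a large Tmax lesion suggests a good thrombectomy candidate. The voxel size, maps, and threshold choices below are illustrative stand-ins, not any vendor's pipeline.

```python
# Toy core/penumbra mismatch from synthetic CTP maps.
import numpy as np

VOXEL_ML = 0.004  # hypothetical voxel volume in mL

def mismatch(tmax_s: np.ndarray, core_mask: np.ndarray):
    hypoperfused_ml = float((tmax_s > 6.0).sum()) * VOXEL_ML  # Tmax > 6 s lesion
    core_ml = float(core_mask.sum()) * VOXEL_ML
    ratio = hypoperfused_ml / core_ml if core_ml > 0 else float("inf")
    return core_ml, hypoperfused_ml, ratio

# Synthetic maps: a sizable Tmax lesion with essentially no core, as above.
tmax = np.zeros((80, 80, 20)); tmax[20:60, 20:60, 5:15] = 8.0
core = np.zeros_like(tmax, dtype=bool)
print(mismatch(tmax, core))   # core ~0 mL, so the ratio is effectively infinite
```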
Here's another patient where it picked up the LVO in the right MCA; you see the LVO alert, and it goes to the site of the occlusion. But then, when we did perfusion, we look at our motion maps, and having done perfusion now for over 20 years, I will say the most common sequence a patient will move on is the perfusion. In the past we used to have to say the perfusion was limited or non-diagnostic due to motion, and looking at this motion map, it almost looks like a seismograph in an earthquake: you can see the patient is moving in all directions, and we'd like to see straight lines here. However, the AI does automated motion correction in the background. This is not done selectively; it is automatic, and you don't need to do anything. It will recalculate the maps and show you where your areas of ischemia are, which you can see here at 94 cc, with a core infarct of 46 cc. If you look at the qualitative maps, they also correspond to that right MCA distribution very well. In the past this would have been a non-diagnostic exam. How does it do versus other vendors? Dr. McGuire and colleagues did a head-to-head comparison between Viz perfusion and RAPID perfusion, and the Viz AI showed 50 percent fewer motion artifact errors than the competitor.

The next patient is a 78-year-old with acute aphasia and right-sided facial droop. It started the day before, then completely resolved, but then recurred; the patient came in at this point with an NIH Stroke Scale of three, essentially crescendo TIAs. The brain CT was negative. We look at the CT angiogram, this 3D representation (which we don't get immediately, but it's good for illustrative purposes), and it didn't show anything dramatic. We look at the perfusion and it doesn't really look that dramatic either; it gives a Tmax lesion of only 0.5 cc, and you don't even know if that's real or not. However, we then get the LVO alert with the big yellow border; now it gets a yellow icon and goes to the slice of the suspected LVO, right in the left MCA. Now we go back and look at the perfusion maps, and you can see a perfusion abnormality in the left temporoparietal region, mainly Wernicke's area, that rises above Tmax greater than four seconds but not quite Tmax greater than six seconds, because this is a near occlusion, not a total occlusion. Now that the AI picked this up and pointed it out to us, we can go back and find that near-occlusive clot in the posterior division M2 segment of the left middle cerebral artery; the AI helped the radiologists uncover a potentially missed LVO. We can rotate that 3D CTA around and see the near-occlusive clot in that M2 segment. The patient goes on to thrombectomy; you can see there was a near-total occlusion, with very thin contrast going through the area of the clot but very poor distal perfusion, and after just one pass of an aspiration catheter they achieved a TICI 3 result. At 24-hour follow-up, the patient's aphasia and all symptoms had resolved.

How good is the Viz AI LVO performance? It turns out it's pretty good: they took over 2,500 patients across 139 hospitals, so not a single institution but multiple institutions, and found 96 percent sensitivity and 94 percent specificity. And how good are radiologists in general at picking up LVOs on CT angiography? This article, which looked at CTA only, with no CT perfusion, showed that in general a radiologist is not going to miss an ICA occlusion or an ICA-with-M1 occlusion. When it comes to an isolated M1 occlusion, they found it corresponded to about 37 percent of LVOs and about 18 percent of missed LVOs. Furthermore, when you get to the M2 segment, those accounted for 48 percent of large vessel occlusions yet 82 percent of missed LVOs; that's a big missed opportunity. If you take the analysis of the odds ratio of missing an M2 versus an M1, it's almost six to one, and similarly, if you compare a non-neuroradiologist to a neuroradiologist, it's also almost six to one. So yes, we do need help.
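Sensitivity and specificity figures like the 96/94 percent above only become intuitive once prevalence is folded in. A quick Bayes calculation shows the expected positive predictive value at a few prevalence levels; the prevalence values below are purely illustrative, not from the cited study.

```python
# PPV from sensitivity, specificity, and an assumed prevalence (Bayes' rule).
def ppv(sens: float, spec: float, prevalence: float) -> float:
    true_pos = sens * prevalence
    false_pos = (1 - spec) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

for prev in (0.05, 0.20, 0.50):   # hypothetical LVO rates in a code-stroke mix
    print(f"prevalence {prev:.0%}: PPV = {ppv(0.96, 0.94, prev):.1%}")
```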
This I call my gallery of hemorrhage. They have an algorithm that picks up intracranial hemorrhage, and you'll see it's either a magenta border or this magenta ICH icon. It will pick up epidural hemorrhage, subdural hemorrhage, subarachnoid hemorrhage, large parenchymal bleeds, small parenchymal bleeds, as well as intraventricular hemorrhage, in this case a combined intraparenchymal and intraventricular hemorrhage. Performance is about 85 percent sensitivity and specificity for ICH, and for subdurals it is 91 and 96 percent.

So let's go back to this patient, who had a left thalamic bleed with intraventricular hemorrhage that we were alerted on. An exciting feature that is now under investigational use only, not FDA cleared, is called Viz Recruit. Not only will it calculate the IPH, the intraparenchymal hematoma volume, given as 10 cc here, and outline it for you; it will also give the intraventricular hemorrhage volume, in this case 20 cc, and it will calculate the midline shift, denoted as three millimeters here.

The next patient is an 89-year-old who had a CTA of the head and neck ordered to evaluate for carotid stenosis, in planning for middle meningeal artery embolization of a chronic subdural hematoma. This is one of the cases I showed you before: you have the current images on the top and the prior on the bottom. This is typically how we view things, in the axial and coronal planes, and you're always trying to get a good measurement: where is the fairest place to measure this subdural so that it's not obliquely measured? You can spend a lot of time measuring these subdurals. Using Viz Recruit (again, investigational use only), not only can it calculate the thickness, given as 16 millimeters here in both the current and prior study, it was also able to give the volume of the subdural, because we know subdurals can migrate and redistribute. This subdural collection was 115 cc here, whereas it was 100 cc before.
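Automated hematoma volumes like these have a classic bedside analogue, the ABC/2 ellipsoid approximation: A and B are the largest perpendicular diameters on the slice with the most hemorrhage, and C is the number of slices showing blood times the slice thickness. The measurements below are hypothetical, chosen to approximate the 10 cc alerted case.

```python
# ABC/2 ellipsoid approximation of hematoma volume (a classic manual method).
def abc_over_2(a_cm: float, b_cm: float, slices: int, thickness_cm: float) -> float:
    return (a_cm * b_cm * (slices * thickness_cm)) / 2.0

# Hypothetical thalamic-bleed measurements: 3.2 x 2.5 cm over 5 x 5 mm slices.
print(f"IPH volume ~= {abc_over_2(3.2, 2.5, 5, 0.5):.1f} mL")   # ~10 mL
```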
But let's go back to this patient. This wasn't just a brain CT; remember, they ordered a CTA of the head and neck. Well, it detected the ICH but also detected an aneurysm on the CTA, and it goes to the slice of interest. When we go back, we can look and see there is a right middle cerebral artery trifurcation aneurysm, pointing superiorly and anteriorly. Now, when we get our CTA images of the head and neck, there are a lot of images to go through, plus the nice 3D reformatted images the technologists give us. I calculated that we have over 600 source images for a CTA head and neck alone, plus 64 images for the CT brain, and that's just using a single window setting; we know we use multiple windows plus countless reformats. We can really use the help in discerning things. Now that we know where to look for that aneurysm, because of the aneurysm detection algorithm, you can see that irregular right MCA aneurysm in the 3D reformatted displays.

In general, how good are radiologists at picking up aneurysms on CTA in subarachnoid hemorrhage? This was a study back in AJR in 2011. It looked at on-call residents; granted, they're not attending neuroradiologists, but these trainees really deserve the best, as do our patients. These were head CTAs alone, not including the neck, so the readers weren't being distracted, and the cases had subarachnoid hemorrhage, so they knew they were looking for an aneurysm. Even given all that, per-aneurysm resident sensitivity and specificity for detecting aneurysms of any size were 62 and 91 percent, and PCOM (posterior communicating artery) and internal carotid artery aneurysms were blind spots, with aneurysms three millimeters or larger detected with sensitivities of only 33 and 50 percent, respectively. Again, we could all use the help. And this is just out in the Journal of Neurosurgery, it actually came out two weeks ago: a validation of automated machine learning algorithms for the detection of cerebral aneurysms, using this algorithm. They had 400 patients, and for aneurysms greater than or equal to four millimeters it had a sensitivity and specificity of approximately 94 percent and a positive predictive value of about 88 percent. It's very nice to get the heads-up that you're dealing with an aneurysm.

So now let's go below the neck, into the aorta. This algorithm uses deep learning to find aortic dissection: it isolates the aorta from the rest of the study and searches for a dissection flap from the aortic root all the way down through the iliac bifurcation. If there is a suspected dissection, you will get a push notification. How quickly, you ask? The average time from picture to notification is 36 and a half seconds, with a high sensitivity and specificity of 99 and 98 percent, respectively. Here's an example of a patient where the algorithm picked up an aortic dissection; it goes exactly to the slice, and you can see the flap right there. Now we can go and look at all the source images, and you can see the dissection flap in the ascending aorta as well as in the descending aorta, and you can also see this curved reformat image that shows the dissection flap nicely, all the way from the ascending aorta down to the iliac arteries. Now I ask you: do you really want this case sitting at the 15th spot down on your stat list? I would say absolutely not.

As for PE, there is now an algorithm to detect pulmonary embolism and find the clot in the pulmonary artery. Clot in the main and segmental pulmonary arteries is detected, and you will get a push notification; this time it's about 63 seconds, so just over one minute, and the whole team can get the alert. Sensitivity and specificity are 91 and 92 percent, respectively. Let's look at an example: here's a patient with a saddle embolism. We get the alert, and the PERT alert, the pulmonary embolism response team alert, goes out to the entire PERT simultaneously, notifying the team rather than going in serial fashion and beginning that coordination of care. It will also calculate the important RV-to-LV ratio, which under normal circumstances should be under 0.9; here it's 2.25. Not only will it give you the ratio, it'll show you where it measured the ratio from.
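The RV:LV flag follows the simple rule of thumb cited above: under roughly 0.9 is normal, and higher suggests right-heart strain. The diameters in this sketch are hypothetical measurements in millimeters, picked to reproduce the 2.25 of the saddle-embolism case.

```python
# RV:LV diameter ratio with the ~0.9 rule of thumb from the transcript.
def rv_lv_ratio(rv_mm: float, lv_mm: float, cutoff: float = 0.9):
    ratio = rv_mm / lv_mm
    label = "right-heart strain suspected" if ratio > cutoff else "normal"
    return round(ratio, 2), label

print(rv_lv_ratio(54.0, 24.0))   # (2.25, ...), like the saddle-embolism case
print(rv_lv_ratio(36.0, 40.0))   # (0.9, 'normal')
```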
Here are the source images from that same patient: you can see the large saddle embolism going into the main pulmonary branches, and here it is on oblique reformatted images in the right and left pulmonary arteries. This patient goes on to pulmonary angiography with Dr. Gandhi, who does the angiogram, sees the large clots, goes in with thrombectomy catheters, and you can see an improvement on the post-procedure pulmonary angiogram. There was a successful pulmonary angiogram with mechanical thrombectomy of the pulmonary embolism, with substantial thrombus removed; pressure measurements dropped from 52/19 to 42/9. Here's the patient's follow-up: a decrease in pulmonary embolism burden, as you can see on these images, and the RV-to-LV ratio is now 1.18; remember, previously it was 2.25.

So how important is alerting the team at the same time, having this pulmonary embolism response team, for patient mortality? This was studied in the American Journal of Cardiology, which looked at PERT-associated improvement. They found a reduction in mortality through six months, 14 percent post-PERT versus 24 percent pre-PERT, a relative risk reduction of 43 percent, and a reduced length of stay, six and a half days post-PERT versus 9.1 days pre-PERT. The time from triage to diagnosis of pulmonary embolism was independently predictive of mortality, and the risk of mortality was reduced by five percent for each hour earlier the diagnosis was made. So not only do we want to get to these images earlier on our worklist, we want to get them to the teams simultaneously, as quickly as possible.

Now, you may ask, in getting all these alerts: what if I don't want them? Well, you can customize each alert, from no alerts to silent override, so if you have your phone on silent it will even override that if you choose. You can do it for CTA LVO, ICH, PE, aortic dissection, you name it.
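The per-algorithm customization just described can be sketched as a small preferences table; the module names, level names, and ringing rule below are hypothetical stand-ins, not the app's actual configuration model.

```python
# Sketch of per-algorithm alert preferences, including a silent-override level.
from enum import Enum

class AlertLevel(Enum):
    OFF = 0
    STANDARD = 1
    SILENT_OVERRIDE = 2   # rings even when the phone is set to silent

preferences = {
    "CTA_LVO": AlertLevel.SILENT_OVERRIDE,
    "ICH": AlertLevel.STANDARD,
    "PE": AlertLevel.STANDARD,
    "AORTIC_DISSECTION": AlertLevel.OFF,
}

def should_ring(module: str, phone_silenced: bool) -> bool:
    level = preferences.get(module, AlertLevel.OFF)
    if level is AlertLevel.OFF:
        return False
    return (not phone_silenced) or level is AlertLevel.SILENT_OVERRIDE

print(should_ring("CTA_LVO", phone_silenced=True))   # True: overrides silent
print(should_ring("ICH", phone_silenced=True))       # False: respects silent
```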
So why is it useful to have redundancy, with both a mobile and a desktop app? Number one, I'm not always at my desk; sometimes I have to get food, sometimes I even go to the bathroom, so I need a mobile app. Not everywhere I read has good Wi-Fi or cell signal, so I definitely need a desktop app as well. Sometimes our internet or PACS goes down, so I definitely need the mobile app with cellular service. Communicating and visualizing are easier on a large screen: advantage desktop app. I forgot to charge my phone last night, which sometimes happens: advantage desktop app. And we want to communicate with clinicians and other members of the care team, who need the mobile app. So we want and need both, and thankfully the Viz AI platform currently has both the mobile app and the desktop app, and it is now able to integrate into worklists as well. With that, I thank you for all your time and look forward to the questions.

All right, thanks so much, Dr. Bash and Dr. Abrams; we appreciate it very much. We've got some good questions coming in here, so why don't we get right to it. For Dr. Bash: with so many radiologists in your imaging enterprise, how do you get everyone on board to adopt these new technologies?

Yes, that's a good question. We have about 800 radiologists, so you can imagine it's a fairly big feat to get a new AI technology on board with all the radiologists feeling trained and comfortable with it. We've tried different approaches, and both have had their advantages. For example, when we incorporated the AI tool of quantitative volumetric imaging about 17 years ago, I did educational webinars, and all of our neuroradiologists would log into those and get a feel for what the AI technology was and how they would incorporate it into their reports. Then, as that technology improved over time and as we hired new neuroradiologists, I did another webinar that they could log into whenever they wanted, to get a feel for what's new and to train the new people.

Now, when we incorporated deep learning for image reconstruction throughout our imaging enterprise, we took a different approach. Our chief technology officer and vice president instituted pilot programs on both the east and west coasts. This type of technology was going to be used by all 800 radiologists, a very large number of people, and the images look different: there's improved signal-to-noise ratio and contrast-to-noise ratio, and a slightly smoother appearance, so we really wanted to make sure everyone was on board. (Before implementing any AI tool, I forgot to mention, we always get IT involved, so everything is completely ready to go from the IT side before we ever introduce it to the radiologists.) For deep learning, in the pilot program he instituted, the heads of sections on both the east and west coasts looked at the standard-of-care image: as a patient came in, we'd randomly choose one sequence, do the standard of care and then repeat with a fast protocol, apply deep learning, and the section heads would compare them side by side. They felt very confident and convinced that the fast-DL images were far superior to the standard, and that excitement trickled down through their sections, so by the time we implemented it throughout our entire enterprise, all of the radiologists were really on board and it turned out to be a seamless integration. We used a similar strategy when we applied the DeepHealth deep learning for breast cancer detection on mammograms: we started with the section heads and then it trickled down through the rest of the breast radiologists on both coasts, and that seemed to be a very effective way of implementing. Those are just a few examples, but that's how we've handled it.

Right, so it sounds like a little bit of work on the front end to make sure that everybody is engaged and aware of the implementation and the value the products can bring, but once you start to see some of the benefits, it's kind of a no-brainer and folks tend to go with it. Exactly. Excellent, thank you so much.

Let's see what else we have here. Dr. Abrams, you mentioned communication; there are many other tools out there, for example WhatsApp or Teams. Can you talk a little bit about how you've seen this communication work for you, and anything you think might be notable?

Yeah, well, that's a great question, because we actually didn't start with Viz for the AI aspect; we actually started with it because of the communication aspect. We were having a difficult time communicating with the team; we have nine emergency departments that feed into our one comprehensive stroke center. We would try to use WhatsApp because it was encrypted; however, you don't have the patient's information in there, so you don't know which patient you're talking about. A lot of people have similar or even the same names, and we could have multiple strokes going on at the same time at different hospitals, or even at the same hospital, so we needed that patient information to go along with this type of communication. The other thing it allowed us to do was actually get the images to the referring physicians, so they could also decide what to do. Once we got that and they started showing us what else the app could do, I was a little bit skeptical, because I'm a contrarian by nature. They were saying, "but we have this LVO detection," and I said, "yeah, yeah," and then I started using it and thought, wow, this is really good: it's not only picking up M1s, it's picking up M2s. This is pretty good.
Then they said, why don't you try our hemorrhage app? I thought, you know what, I'm going to hate it, but I'll give it a try; we'll do a three-month demo. We did a three-month demo, and I was really shocked at how good it was; even when my partners said we need to do a paper on this, I couldn't imagine it would have been this good. And now we're looking at the aneurysm detection, and as I showed in my talk, that's really good as well. So I think a lot of this technology has come a lot further than I personally expected, and again, it started out as a communication tool we needed, which we now have, but now we also have AI tools that are very helpful for our workflow.

OK, so another related question: how does the chat function bring the radiologist into the arena with fellow members of the team? Something similar, but do you feel it changed the way you interacted with other members of the care team?

Yeah. We used to do it, I guess you'd call it, a little old-fashioned: it would always be a phone call to the neurologist to give the results, and the problem with that was that sometimes the neurologist is doing the stroke or telestroke and they don't pick up, and then you're waiting for them to call you back while you're starting to get busy with other things. Now, we have a telestroke company that we work with, and when we started with them they said, "guys, can you stop the phone calls? We'd rather you just text us on the Viz app," because they had it. That thrilled my colleagues; they said that's great, we can just type the results in, whether on the desktop or on your phone, they get it, they say thank you, and you're pretty much done until they go on to the CTA and CT perfusion. So it's really been a big time saver for us. If it's a very complex case, sometimes I still will call to discuss verbally, but I'd say over 90 percent of the cases now go through the chat function, and we've actually also improved our door-to-device times as well.

And I'd like to add something to that. Communication is one of the biggest pain points for radiologists, and you see it in the hospital, but you also really see it in the outpatient setting. What we don't see is LVOs in the outpatient setting (at least we don't ever like to see LVOs in the outpatient setting), but we do have urgent findings that need to be communicated, and trying to track down the referrer when you have an urgent finding, particularly when some of us are working off hours or dictating late at night, is very hard; if the referrer is at the hospital, you're paging all over, and it can be very difficult and cost a significant amount of time. So I think one of the things Viz really did right from the beginning was this communication tool, to help alleviate that significant pain point for radiologists.

Thank you. So we have another question here; I think, Susie, this would be great for you. Could triage apps like Viz play a role in the private practice setting?

Yeah. As we talked about, there are certain AI tools that are more amenable to the outpatient setting as opposed to the inpatient setting, and some really have crossover, so there are certain modules within Viz AI that could be very important in the outpatient setting. Almost not a single day goes by that I don't see some aneurysm.
Now, not all of those are CTAs; a lot of them are discovered on MRA as well, but we still do CTA for aneurysm screening and for other screening where aneurysms just happen to show up. So it's a very common thing to see aneurysms, and that is something that can be used in the outpatient setting. The other thing is intracranial hemorrhage: acute subarachnoid hemorrhage and acute parenchymal hemorrhage we sometimes see in the outpatient setting, although not nearly as frequently as in a hospital setting, but subdurals are something that we follow all the time, with both CT and MR, and we see a lot of routine follow-up CTs in the outpatient setting for subdurals, ordered by our neurosurgeons. Having that kind of tool available (although it's still in the sort of research module), being able to calculate the volume as Dr. Abrams was saying, is a very important thing for subdural hematomas, because again, where do you place your measurement tool? Where I place it is probably not the same place my colleague placed it before, which is why it always requires us to measure and then re-measure on the prior ourselves, because it just doesn't necessarily match what our colleague did. So volumetric quantitation of the subdural is very important and would be a great tool in the outpatient setting.

Thank you. I'm just going through the questions here. Dr. Abrams, we know that AI can produce false positives or false negatives; is there a way that you generally provide feedback?

Yes. Like I said, I was more of an AI skeptic in the beginning than I am now, and one of the things I wanted to know was how to give feedback, to say this missed an LVO, or this was a false-positive bleed, or something like that. What I like about the app is that you can just go to contact support, type in your issue, say "missed M2 LVO," and hit send. They send you back an email saying they're looking into it, and then they'll respond with an email saying they're updating the algorithm based on this case. And I will tell you, when I first started using Viz, which is about two and a half years ago now, it almost never missed an M1 occlusion, but on occasion it would miss an M2; it now very, very rarely misses M2s, and I'd like to think that was related to all the feedback we give, so that they can get the data and update the algorithm. I would say the algorithm has constantly improved over the two and a half years we've been using it.

Excellent, thank you. It looks like we're right at the end of the hour here, so I'd like to thank Dr. Bash and Dr. Abrams again for joining us. We really appreciate everyone's time today; it's been a great session, and we look forward to hearing from you both again. Thank you. Thank you. Thank you.
Info
Channel: Viz.ai
Views: 14,422
Id: q4YElCi8-nQ
Length: 58min 27sec (3507 seconds)
Published: Thu Apr 06 2023