Artificial Intelligence in medical imaging: From research to clinical practice – Koen Van Leemput

Captions
So I'm going to talk for the next 15 minutes about artificial intelligence, AI, in medical imaging. The first question I'm going to put out is: why do we need AI in medical imaging in the first place? I'll give one answer immediately: the explosion of digital data that you see in hospitals, acquired over the years. If you look at the last few decades, the number of images that are acquired has grown exponentially, mainly driven by modalities such as computed tomography (CT) and magnetic resonance imaging (MRI). These are three-dimensional volumes that contain hundreds of individual two-dimensional images, and people have simply acquired more of these exams, with higher resolution and with different contrasts. So there is really an explosion of data. The number of radiologists, the medical doctors who are supposed to read these images, look at them and interpret them, hasn't grown at the same pace. So these people are now facing a torrent of data that they have to analyse: they get only a few seconds per image before they need to move on to the next one. Obviously they're not supposed to miss any signal in the data, so they're very stressed, and AI seems like a perfect tool to help them.

More generally, AI in medical imaging is used in two places. One is in acquiring data: you have a scanner and you get images out. AI can help you get the data out faster, which is nice for the patient, who doesn't need to lie in a very tight tube for a long time, and it reduces the chance of accidentally moving while the images are being taken. You also just get much better images: higher resolution, less noise. I'm not going to talk about that part of AI. What I will talk about is another phase, which starts from existing images and extracts useful information from them. That can be so-called segmentation, a delineation of the different structures, so that you can analyse, for instance, their volumes or use them in treatment planning. But it can also be automatic diagnosis, or checking whether a certain medication is working for a specific patient.

There are three points I would like to bring forward where AI can really help. One is exposing things that you just cannot see with the naked eye. One is measuring more consistently. And the other one is analysing much, much faster than humans can. I'll give an example of each.

Here's an example of things you can't see with the naked eye. Imagine you're working in a hospital, you take an MRI scan of someone, and you've seen that patient before. Now your task is to compare the two images to see if you can detect tiny differences. That's a problem because, first of all, the differences are really, really small, maybe a fraction of a percent over this time period, and the patient, of course, is not in exactly the same position in the two scan sessions. Another example is shown here, an application in radiation therapy. You have an MRI scan, which shows the tumour and the soft tissue. You also have a CT scan, which shows the bones that you need to calculate the treatment plan, because bone tends to block the radiation. And of course the patient is lying in two different scanners, so the body has moved, and the two images are not really compatible with one another. So what you can do with AI is pretend that the MRI is printed on a rubber-like material and locally stretch it to make the anatomies match.
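To make that "rubber stretching" concrete, here is a minimal 2-D sketch of how a dense displacement field deforms an image. The field below is random but spatially smooth, purely for illustration; a real registration method would instead estimate the field by optimising a similarity measure between the two scans. The function name `warp_image` is illustrative, not from the talk.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def warp_image(moving, displacement):
    """Resample a 2-D image under a dense displacement field.

    moving:       (H, W) array, the image to deform ("printed on rubber").
    displacement: (2, H, W) array, per-pixel offsets in y and x.
    """
    h, w = moving.shape
    grid_y, grid_x = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    coords = np.stack([grid_y + displacement[0], grid_x + displacement[1]])
    # Linear interpolation at the displaced coordinates.
    return map_coordinates(moving, coords, order=1, mode="nearest")

# Toy example: a random but spatially smooth "rubber stretch".
rng = np.random.default_rng(0)
moving = rng.random((64, 64))
field = gaussian_filter(rng.normal(0.0, 3.0, size=(2, 64, 64)), sigma=(0, 8, 8))
warped = warp_image(moving, field)
```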
Here's another example, where AI can help to measure more consistently. This one is in multiple sclerosis. In this disease the brain has lesions, essentially a type of scar tissue, and what matters is both the number and the volume of these lesions, especially as they evolve over time. Here, seven different human experts were asked to delineate these lesions, and what you see is that there are enormous inconsistencies between their opinions: up to a factor of three or four in the number of lesions, or in their volume, between different raters.
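The quantities at stake here are easy to state precisely. Given a binary lesion mask from each rater, the lesion count, total lesion volume, and the overlap between two raters can be computed along these lines. This is a minimal sketch with made-up masks, not the study from the talk, and the function names are illustrative:

```python
import numpy as np
from scipy import ndimage

def lesion_stats(mask, voxel_volume_ml):
    """Number of connected lesions and total volume of a binary 3-D mask."""
    _, num_lesions = ndimage.label(mask)   # connected-component count
    volume_ml = mask.sum() * voxel_volume_ml
    return num_lesions, volume_ml

def dice(mask_a, mask_b):
    """Dice overlap between two binary masks (1.0 = perfect agreement)."""
    intersection = np.logical_and(mask_a, mask_b).sum()
    return 2.0 * intersection / (mask_a.sum() + mask_b.sum())

# Toy example: two raters disagreeing on a synthetic volume.
rng = np.random.default_rng(1)
rater_a = rng.random((32, 32, 32)) > 0.97
rater_b = rng.random((32, 32, 32)) > 0.97
print(lesion_stats(rater_a, voxel_volume_ml=0.001))
print(f"Dice: {dice(rater_a, rater_b):.2f}")
```

The spread of exactly these numbers across the seven raters is the inconsistency described above.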
Then, as I said, analysing images faster is another application area. Here we're looking at a brain MRI scan with a complete so-called segmentation, a delineation of maybe 40 different structures. That's useful in scientific studies, for analysing how diseases affect brain structures. If I ask an expert to manually delineate such an image, so to generate the image on the right, how long do you think that would take? Would three hours be realistic? Maybe I can ask you to raise your hands if you think three hours is enough. Yeah, okay. So not many people think three hours, and that's correct: it takes almost one week. It's a very tough task to do. With AI, you can completely solve this problem within a few minutes; on my laptop I can get the result.

Okay. So now I've offered lots of solutions, AI is fantastic. What's actually the problem? The problem is bringing these tools into clinical practice, and one example is shown here. Let's say I go to a local hospital and I want to write a software tool that can automatically analyse all the brain scans that are acquired in that hospital. You'll see there is a whole lot going on in this wild setting. The first thing, indicated with number one, is that you get different scanners: if you have a method that works very well for scanner one, maybe it doesn't work well for scanner two. There are lots of knobs you can turn on the scanner, like the image resolution in three different directions (it is, of course, a volumetric image). There are different contrasts and different modalities; I already mentioned CT and MR, and there are also PET scans that show functional information. And these people are in hospitals, so there are lots of different diseases going on. You might have patients you've seen before and patients you don't know; some you've seen only once, some you've seen five times. So there's a lot of variability there, and I call that the imaging zoo. If I formulate the problem: we're trying to go from images to useful information. That by itself is not so difficult, but doing it in a realistic clinical setting, where you have all this variation going on, that's hard.

The other problem is interpretability. Let's say I have an AI that predicts for a cancer patient how long they're going to live, and for one patient it predicts that the treatment is not working and this person is going to die soon. That is, of course, very useful information: the clinician might decide to focus more on quality of life rather than very intensive treatments. But of course, the person making that decision is going to want more than just a computer saying this person is going to die soon, right? And presenting evidence, building trust, is actually a very hard thing to do. The other thing is uncertainty. Is the computer 99% certain that some diagnosis is true, or only 50%? It makes a huge difference.

Okay, so now we have listed all the problems; what are the solutions? The solution I've concentrated on in my research is so-called analysis by synthesis. We want to take images and analyse them, and the way we can solve these problems is by turning the whole thing around. I start from some quantity of interest. Let's say I want to diagnose Alzheimer's. Then I learn how to model images of Alzheimer's patients, that is, to randomly generate images that look like images of Alzheimer's cases. Once you can do that, you can invert the model to get the answers you're looking for. And that kind of solves all three of our issues: the imaging zoo, uncertainty and interpretability. I'll show two examples.

Let's start with the imaging zoo. The setting is that you want to automatically segment data coming from some hospital, acquired in a way you have no control over. Looking at the forward model, the way it works is that you have models of anatomy, built from manually delineated subjects: I know the average shape of a brain and how it typically varies between individuals. That allows me to randomly generate different images that all look like anatomical labellings. Then I can pretend that I'm in a scanner and tweak the acquisition parameters; I can say, for instance, whether I want the fluid to be bright or dark. There are also some imaging artefacts that we know always exist. You put everything together and you get an image. Once you have that, you can invert the model and get a segmentation. The trick of this whole model is that we can change the intensity profiles: I can play with these parameters and still automatically get segmentations out. I can change the parameters; instead of having one image, maybe I acquired four different contrasts, and it will still work. And if we're in a hospital with some diseases going on, that's also convenient: we just have different shape models of what lesions look like, for instance brain tumours, and these are trained on completely different individuals, so we don't start from scratch. We have this whole machinery that we can just plug and play, making different combinations to overcome this imaging zoo problem.
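The talk doesn't give implementation details, but in its simplest form this kind of model inversion can be written as a Gaussian mixture whose class priors come from a probabilistic atlas and whose intensity parameters are re-estimated for each new scan, which is what makes the segmentation robust to scanner and contrast changes. A toy expectation-maximisation sketch, assuming 1-D intensities and a hypothetical two-class atlas:

```python
import numpy as np

def segment(intensities, atlas_priors, n_iter=50):
    """Toy analysis-by-synthesis segmentation with EM.

    intensities:  (N,) voxel intensities of the new scan.
    atlas_priors: (N, K) per-voxel prior probability of each of K classes,
                  from an anatomical atlas registered to the scan.

    The class means/variances (the 'scanner contrast') are unknown and
    re-estimated for every image, so the same atlas works across scanners.
    """
    n, k = atlas_priors.shape
    # Crude initialisation of the intensity model from intensity quantiles.
    means = np.quantile(intensities, np.linspace(0.2, 0.8, k))
    variances = np.full(k, intensities.var())

    for _ in range(n_iter):
        # E-step: posterior of each class given intensity and atlas prior.
        likelihood = (np.exp(-0.5 * (intensities[:, None] - means) ** 2 / variances)
                      / np.sqrt(2 * np.pi * variances))
        posterior = likelihood * atlas_priors
        posterior /= posterior.sum(axis=1, keepdims=True)
        # M-step: update the intensity model (the 'synthesis' parameters).
        weights = posterior.sum(axis=0)
        means = (posterior * intensities[:, None]).sum(axis=0) / weights
        variances = (posterior * (intensities[:, None] - means) ** 2).sum(axis=0) / weights
    return posterior.argmax(axis=1)

# Toy example: two tissue classes with unknown, scanner-dependent contrast.
rng = np.random.default_rng(2)
labels_true = rng.integers(0, 2, size=1000)
intensities = rng.normal(loc=np.where(labels_true == 0, 30.0, 80.0), scale=8.0)
priors = np.where(labels_true[:, None] == np.arange(2), 0.8, 0.2)  # imperfect atlas
print((segment(intensities, priors) == labels_true).mean())
```

The method described in the talk layers deformable anatomy models, imaging artefacts and lesion shape models on top of this, but the principle of inverting a generative forward model is the same.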
Here's an example of applying these techniques. What you see is a post-operative scan taken in Copenhagen; in that hospital they decided to use three different contrasts. It's a scan taken after the surgeons have gone in and tried to remove as much of the tumour as possible. With these tools, without knowing anything about the settings of the scanner or what sort of acquisition was decided on, you can get an automatic segmentation, both of the tumour and of many other structures. There are two main applications I want to mention. One is radiation therapy planning: you want to irradiate the tumour, but you also want to avoid hitting the eyeballs and so on. The other application is that you can try to predict how long individual patients are going to survive, and that's shown in the graph there.

As you can see, on the horizontal axis is time, and on the vertical axis is the fraction of patients that are still alive after that time. In the beginning everybody is alive, and after three years almost nobody has survived. What's interesting is that, based on these segmentations, by analysing the shape of the different structures, you can actually pull the patient group apart into two groups: people who are going to live relatively long, and really short survivors. That's obviously very useful clinical information.

I'll conclude with another example, about interpretability. Here the setting is that you get an image and you need to predict something, for instance a diagnosis: is this treatment working for this specific patient, or should we switch treatments? The example I'm giving here is very simple: I give you a brain scan, what's the age of this person? You'd be surprised how easy it is to accurately estimate age from a brain scan; an error of around two years is very typical. Imagine now that instead of an age prediction, this were a diagnosis, or a prediction of how long you're going to survive. Obviously the clinician will want to hear more; in order to build trust in these systems, we need to find some sort of explanation for these predictions. One obvious thing to do is simply to explain to the clinician what the machine has done. If you look at the coloured pixels there, those are the areas that the computer has essentially used to calculate the predicted age in this case. And that's technically correct, a perfectly valid explanation, except that if I showed it to clinicians they would not trust me at all, because these are the "wrong" areas: a human would never, ever look at those areas. So our objective of building trust and giving interpretability to the system is not met at all. What can be done? Once again, we do analysis by synthesis. We build a forward model: if you know the age of a person, what sort of images do you expect to see? Once you've done that, you can invert the model to predict age from individual subjects, and now you can show explanations that make sense to clinicians. What is shown there is essentially the effect of age on the different structures. If I were God and could pick any one of you and instantaneously make you ten years older, these are the areas I would start fiddling with inside your head.

To conclude, AI in medical imaging has a lot of potential. As a patient, it allows you to get scanned faster and much more comfortably; you get better scans, a better diagnosis and follow-up, and better treatment outcomes. As a health care provider or a company, these computers can analyse tons of data day in, day out; they don't get stressed or tired, they hopefully don't make as many errors, and you can increase the efficiency of the entire workflow. Thank you.
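The survival curve described in the talk (fraction of patients still alive as a function of time) is exactly what the classic Kaplan-Meier estimator produces; the talk doesn't name the estimator, so this is an assumption. A minimal sketch with synthetic follow-up times, including right-censored patients who left the study while still alive; all numbers are made up:

```python
import numpy as np

def kaplan_meier(times, event):
    """Kaplan-Meier survival curve.

    times: (N,) follow-up time for each patient.
    event: (N,) True if the patient died at times[i],
           False if censored (still alive at last contact).
    Returns (event time, estimated fraction alive) pairs.
    """
    curve, survival = [], 1.0
    for t in np.unique(times[event]):
        deaths = np.sum((times == t) & event)     # deaths at this time
        at_risk = np.sum(times >= t)              # patients still followed
        survival *= 1.0 - deaths / at_risk
        curve.append((t, survival))
    return curve

# Toy example: survival times in months for one patient group.
rng = np.random.default_rng(3)
times = rng.exponential(scale=14.0, size=40)      # follow-up in months
event = rng.random(40) > 0.2                      # roughly 20% censored
for t, s in kaplan_meier(times, event)[:5]:
    print(f"t = {t:5.1f} months, fraction alive = {s:.2f}")
```

Splitting the patients into two groups based on segmentation-derived shape features and comparing their two curves is then the analysis shown in the talk's graph.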
Info
Channel: Aalto University
Views: 2,274
Keywords: aalto university, aalto-yliopisto
Id: iiw4j-Frljo
Length: 14min 59sec (899 seconds)
Published: Wed May 10 2023