GRETCHEN HUIZINGA: You’re listening to Collaborators, a Microsoft Research Podcast showcasing the range of expertise that goes into transforming mind-blowing ideas into world-changing technologies. I’m Dr. Gretchen Huizinga.
[MUSIC ENDS]
I’m excited to be talking today with Javier Alvarez and Dr. Raj Jena. Javier is a Senior Director of Biomedical Imaging at Microsoft Health Futures in Cambridge, UK, and part of Project InnerEye, a machine learning technology designed to democratize AI for medical image analysis across the spectrum from research to practice. Raj is a radiation oncologist at Addenbrooke’s hospital, which is part of the Cambridge University Hospitals system, and he was also a collaborator with Project InnerEye during the research phase. Javier and Raj, welcome to the podcast. Now, before we peer into InnerEye, let’s get to know you a little bit better! Javier, I’ll start with you. Give us a brief overview of your training and expertise and then tell us about Microsoft Health Futures and your role there.
JAVIER ALVAREZ: Thank you for having me here. I’m Javier, and I lead the biomedical imaging team at Microsoft Health Futures. We are responsible for research, incubations, and moonshots that drive real-world impact across healthcare and life sciences inside MSR. Uh, yeah, my team is very diverse. We focus on end-to-end solutions. We collaborate with people like Raj, mostly clinicians, and we work on high-quality research, and we hope others can build on top of our work. We try to integrate our AI as a “friendly colleague.” And yeah, I have been at Microsoft for 10 years. My background is in computer science and engineering, and I have always worked on research and innovation projects, uh, focusing on high-risk/high-reward projects. And yeah, my first job at Microsoft was actually working on the first telemetry pipeline for Microsoft on the Azure cloud. And we helped several products like Skype, Xbox, Office, and Bing to get better insights into their data. And yeah, after that I joined Antonio Criminisi and Raj in 2016 to work on InnerEye. So yeah, I’m super, super excited to be here to share more about our work.
HUIZINGA: Well, Raj, our audience is a super smart one, but probably not all that well-versed on radiation therapy and neuro-oncology. So tell us about your work as a cancer doctor and a researcher, as well. What’s your background, and how would you define your role — or roles, plural — at Cambridge University Hospitals?
JENA: Thanks for the opportunity to join this discussion and to fly the flag for radiation oncology. It’s a really useful and very modern anti-cancer therapy. Half the people diagnosed with cancer who are cured will end up having radiation therapy as part of their treatment pathway. So I’m passionate about making radiation therapy as safe, as smart and accurate, and with as few side effects as possible. And I do that both in the context of my clinical work but also research work, where I focus mainly on sort of the analysis of images. We use an awful lot of imaging in radiation therapy to really target the radiation therapy. And it’s in that context, really, that I kind of started, you know, with this collaboration over 10 years ago now.
HUIZINGA: Wow. What would you say your “split” is? I mean, as a doctor or a researcher, how do you balance your time?
JENA: Some people would say I have the dream job because I do half and half. Half clinical work and half research work. And I really like that because it means that I can anchor myself in the clinic. I don’t lose track of why we’re trying to do these things. We’re trying to bring benefit to patients, to my patients. But it also means I’ve got the time to then explore on the research side and work with the best and brightest people, including, you know, many of the guys I’ve met at Microsoft Research.
HUIZINGA: Right. You know, as a side note, I just finished a book called The Butchering Art about Joseph Lister, who was both a surgeon, in the Victorian era, and also a researcher and sort of discovering this idea of germ theory and so on with Louis Pasteur, etc. So I’m, I’m ensconced in this idea of research and practice being so tightly woven together. So that’s really awesome. Well, before we get into specifics on the collaboration, Project InnerEye warrants a little bit of explication itself. From what you’ve described, I’d call it a “machine learning meets radiation therapy” love story, and it’s a match made in heaven, or at least the cloud. So what’s the catalyst for InnerEye, and how have the research findings changed the game? Raj, why don’t you talk about it from the medical angle?
JENA: Sure. So, um, as with many things, it started by chance. I went to a talk given by Antonio Criminisi, who Javi mentioned. He was the person that kind of established the InnerEye group at Microsoft Research back in 2011, I think. And he was talking about the way that his team, that did computer vision at the time, were using algorithms that had been developed to detect the human pose so that actually you could play video games without a controller. So this was technology that we all know and love in terms of systems like Kinect and the Xbox. You know, I had one of those! But I went to listen because Antonio wanted to apply it to medical imaging. So in the same way that they were using algorithms to mark out where the body was or where the hands were, could we also mark out tissues and structures within the body? So I said to him, after the end of this, you need to come and see what we do in radiation therapy because this really matters. And to his credit, he did! A couple of weeks later, he came to the department, and he went into a room where dozens of my colleagues were sitting in front of computers, working as fast and accurately as they could, to manually mark out all this normal anatomy on CT scans so we could get our patients onto radiotherapy as quickly as possible. And that was the light bulb moment where he realized, yeah, we need to make this better; we need to make this faster and use, initially, algorithms that came from computer vision, but now, you know, we’ve moved slowly over to things now that we would consider to be sort of machine learning and AI algorithms.
HUIZINGA: Right. Well, I should note that I’ve interviewed Antonio on this show, um, a few years back. And so if listeners want to go back to the archives and find the episode with Antonio Criminisi, that was a great one. So what you just described is sort of a “I can do this, but I can’t do it very fast” scenario. So let’s go into the geek side. Um, Javier, talk about the technical aspects of InnerEye and what it brought to the game. How has the research evolved? Where did it start, from your perspective, and where has it come in the cloud era?
ALVAREZ: Sure, yeah. I would be happy to geek out a bit! Um, so one of the biggest challenges that we faced in radiotherapy was working with CT scans. So CT scans are 3D images that contain around 20 million 3D pixels. We usually call them voxels. And we need to classify each of them as background, different organs, or tumor. And this actually requires a lot of compute and memory. So when we started in 2016, actually we started using very simple models called decision forests, and these can be trained on CPUs. So it was really easy to train them, but one of the problems with decision forests is that you actually have to do the feature extraction manually. So we had to code all that, and it’s a bit of a limitation of this approach. So in the second iteration, we started connecting the hospital to the cloud, and that gave us access to more compute, and we started introducing what we call the InnerEye-Gateway. So this actually helped to automatically route de-identified CT scans to the cloud and run the computation there. And we managed to integrate the model seamlessly into the workflow. So clinicians, when they go to open their CT scan, they already have the segmentation ready to be used on their favorite planning tool. They can review it and refine it. And then on the third iteration, we actually moved to deep learning, and we started using GPUs in the cloud. And this actually helped us create bigger models with more capacity to learn these complex tasks. So we started training models with 30 million parameters. And this was a huge breakthrough. So we started to get really good feedback from Raj and his colleagues at Addenbrooke’s. Uh, yeah, it was a great experience. We had to iterate many times and go to the hospital down the road here in Cambridge. And yeah, it wasn’t a straight path. We had to learn a lot about the radiotherapy workflow, and yeah, we actually learned that it’s actually very hard to deploy AI.
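[EDITOR’S NOTE: For a concrete picture of the per-voxel classification Javier describes, here is a minimal, hypothetical sketch of a 3D segmentation network in PyTorch. It is not the InnerEye architecture (that code is open source on GitHub); the layer sizes, class count, and names below are illustrative assumptions only.]

```python
# Illustrative sketch only -- NOT the InnerEye model. It shows the shape of the task
# Javier describes: classify every voxel of a 3D CT volume as background, one of
# several organs, or tumor. All sizes and names here are assumptions.
import torch
import torch.nn as nn

class TinyVoxelSegmenter(nn.Module):
    def __init__(self, num_classes: int = 4):  # e.g., background + two organs + tumor
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
        )
        # A 1x1x1 convolution produces one score per class for every voxel.
        self.head = nn.Conv3d(32, num_classes, kernel_size=1)

    def forward(self, ct_volume: torch.Tensor) -> torch.Tensor:
        # ct_volume shape: (batch, 1, depth, height, width)
        return self.head(self.encoder(ct_volume))

model = TinyVoxelSegmenter()
scan = torch.randn(1, 1, 32, 64, 64)        # a small synthetic CT patch
scores = model(scan)                         # (1, num_classes, 32, 64, 64)
labels = scores.argmax(dim=1)                # predicted class for each voxel
print(labels.shape)                          # torch.Size([1, 32, 64, 64])
```

Production models like the 30-million-parameter networks Javier mentions use much deeper encoder-decoder architectures, but the per-voxel classification idea is the same.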
HUIZINGA: Yeah. Every time we do a podcast, um, listeners can’t see the other person shaking their head, but Raj has been shaking his head the whole time Javier’s talking. Talk a little bit, Raj, about that marriage of workflow and machine learning. How did it change your world?
JENA: Yeah, I mean, I think I’m really interested in this part of the story, the “final mile” story, where you actually take something and instead of just topping out at saying, “Hey, we did something. Let’s write a paper” — which we did do! — you actually stick with it and get it all the way through to clinical impact. And actually, you know, from my point of view, in 2016, some changes came to the team. Javi joined, and we were so excited because he was a software engineer, where before we had been researchers talking to researchers. And it was the ability to know that really good software engineering was going to be able to take something we built as research and make it good enough to plumb in the hospital as Javi described. That was a real exciting moment. And then the second exciting moment that followed from that was the first time our clinicians saw the output from that third iteration that Javi mentioned, the deep learning model, and you looked at their reactions because they’re thinking, I couldn’t immediately tell this was done by AI.
HUIZINGA: Wow!
JENA: And that was the moment I will never forget. Because they were very kind to us. They evaluated the models at the beginning, when the output wasn’t good enough and they said, hey, this is interesting, but, you know, we’re not really going to use it. It’s not really going to save us time. And they stuck with us, you know, the clinician part of the team stuck with the researcher part of the team, and we kept going. And it was that moment really when everything came together and we thought, yeah, we’re onto something. That was … that was huge.
HUIZINGA: Yeah. It sounds like you’re talking about how you met, but I’m not sure if that’s the whole story. So let’s talk about the meet-up and how the two of you, specifically as collaborators, started working together. I always like to call this “how I met your mother,” but I’m interested to hear each side of the story because there’s always an “aha moment” on what my work could contribute to this and how theirs could contribute to mine – the kind of co-learning scenario? So, Raj, go a little further in describing how Javi and you got together, and then we’ll see if Javier can confirm or deny the story! [LAUGHS]
JENA: Yeah. So as, as I mentioned … so I had already been working with Antonio purely as research for a little while, and Antonio was tremendously excited because he said the team was going to expand, and Javier was one of the first hires that we actually had to join the team. And I remember Antonio coming in and saying, “We’ve just interviewed and appointed this guy. You wait till you … you wait till you meet him,” kind of thing. And then Javi joined us. From my point of view, I am a doctor that likes to code, so I like seeing code come to action, and I know the joy that that brings. And there was this amazing time, shortly after Javi first joined us, where I would come and meet the team about once a week and we would say, hey, you know, maybe we should do this and maybe this would be the way to solve this particular problem, or we need to design a tool so we can visualize the imaging and the machine learning parts of our workflow together and work on them together. And I come back next week, and the thing was practically built! And, you know, to me, that was just the amazing thing … is what you realized is that where before we had been struggling along with just researchers trying to do their best — you know, we know the maths but not how to build things — all of a sudden, Javi comes along and just the rate and the pace at which stuff moved forward, it was incredible! So yeah, that’s my side of the story.
HUIZINGA: I love it. Um, in fact, a doctor that likes to code … I’m wondering if Javier is a computer scientist that likes to … I don’t even know how to fill in the blank on your end … radiotherapy? Dabble in operation? Javier, what’s your side of the story?
ALVAREZ: Yeah, I think for me, it was really amazing to work with Raj because he was telling us about all the physics about radiotherapy, and this was super exciting. We went on multiple trips to Addenbrooke’s to see the radiotherapy department. So actually, yeah, for me, I, I … that was my first project on healthcare, so I had to learn a lot. So yeah, it was super useful to work with Raj, learning about the workflow in radiotherapy, how the data moves, as well. It was super useful. I think actually we met here with Antonio during lunch in the lab. Uhh, yeah…
HUIZINGA: During lunch in the lab … ! [LAUGHS] It would be a good time now for me to just clarify that Addenbrooke’s is the old name of the hospital that’s part of … um, Raj, explain that!
JENA: That’s right. So we’re now called Cambridge University Hospitals to reflect the fact that we’re a big biomedical campus and we actually have multiple hospitals: Addenbrooke’s, the Rosie, uh, Papworth Hospital … but affectionately, people who have lived in Cambridge still call it Addenbrooke’s.
HUIZINGA: That’s good. We can call it both. Javier, as we’re recording this podcast, some big things are going on in the UK. Um, it’s the 75th anniversary of the National Health Service, or NHS, and you guys recently got an award from that organization. You’ve written a JAMA paper and even the prime minister posted something on LinkedIn about your work, which is pretty cool! Tell us about some of the accolades associated with InnerEye right now, from where it started — you know, as a twinkle in someone’s eye — to where it is now, what kind of attention it’s getting. What’s the buzz?
ALVAREZ: Yeah, absolutely. Yeah, maybe I’ll talk about the JAMA paper, and I will let Raj talk about the NHS part, because I think this has been mostly his work.
HUIZINGA: Perfect.
ALVAREZ: So yeah, I think when we started getting really good results with our models at Addenbrooke’s and sharing them with the clinicians, we thought that yeah, we wanted to run a bigger study on evaluating the models for prostate and head and neck. Uh, so we ran a study that was published in JAMA, and here we asked the question of, OK, are these models actually acceptable and accurate enough for radiotherapy planning? And can we actually reduce the time in the workflow? So we, we actually got around eight datasets from all around the world, very diverse datasets from radiotherapy planning, and we set aside a couple of them for external validation. So we didn’t use those for training. And then we used the, the rest of them for training the model. And we actually showed in the paper that the model generalizes to the external datasets, so it’s quite robust, using different protocols in radiotherapy. And we also did an interobserver variability study to check that the variability of the AI model is similar to the variability that we observed between different clinicians. And, yeah, as part of the paper, we actually open-sourced all the code. This is how Addenbrooke’s actually started to think about deploying the models clinically. Uh, yeah, in fact this work was recognized with this NHS AI Award and now with the NHS anniversary, but, yeah, I’ll let Raj talk about this part in the hospital.
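[EDITOR’S NOTE: As a rough illustration of the kind of comparison such an evaluation rests on, the sketch below computes a Dice overlap score between an AI-drawn contour and a clinician-drawn contour. It is a generic example with made-up masks, not code or data from the JAMA study.]

```python
# Generic sketch (not from the JAMA study): the Dice coefficient is a standard way
# to score how closely an AI-drawn contour overlaps a clinician-drawn one.
import numpy as np

def dice_score(ai_mask: np.ndarray, clinician_mask: np.ndarray) -> float:
    """Dice overlap between two binary voxel masks (True = inside the structure)."""
    intersection = np.logical_and(ai_mask, clinician_mask).sum()
    total = ai_mask.sum() + clinician_mask.sum()
    return 2.0 * intersection / total if total > 0 else 1.0

# Toy example: two slightly offset contours of the same structure.
ai = np.zeros((64, 64, 64), dtype=bool)
doc = np.zeros((64, 64, 64), dtype=bool)
ai[20:40, 20:40, 20:40] = True
doc[22:42, 20:40, 20:40] = True
print(f"Dice: {dice_score(ai, doc):.3f}")    # 0.900 for this toy case
```

The same kind of overlap statistic can be computed between two clinicians’ contours, which is how the interobserver variability comparison Javier mentions is made.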
HUIZINGA: Well, before we go to Raj, I want you to just clarify, because I think this is super interesting. You’ve got the paper and you’ve got practice. And what’s fascinating … I’ll say it again—I just finished the book—but what Joseph Lister did was practice and show how his theories and his work made a difference in his patients’ lives. But what you’re talking about, as you mentioned, Javier, is background, organ, tumor …
ALVAREZ: Yeah.
HUIZINGA: So those three things have to be differentiated in the radiologist’s workflow to say, I’m not going to shoot for the background or the organ; I want to get the tumor. And what you’re saying, Javier, is that this tool was able to do sort of human-level identification?
ALVAREZ: Yeah. Yeah, exactly. Yeah. This is what we, we showed in the JAMA paper. Yeah.
HUIZINGA: Well, Raj, talk about it from the medical angle. Um, what’s the buzz from your end?
JENA: Sure. Yeah. So, so InnerEye is a toolkit, and it was great to see it being used for all sorts of things, but in radiation therapy, we’re using that toolkit specifically to mark out the healthy organs that need to be shielded from radiation. At the moment, we’re not using InnerEye to try and mark out the tumor itself because tumors change a lot from person to person. And so what our design was, was to build something that very much assists rather than replacing the oncologist so that when the oncologist sits down to do this task, about 90 percent of the time is spent marking out all of the healthy organs and 10 percent of the time on the tumor. Actually, we’d love it to be the other way around. And that’s what this tool does. It means that when the oncologist sits down, all of the healthy organs that sit around the tumor that need to be shielded as much as possible from the radiation, that’s already done. So the oncologist goes through … they have to review it, obviously, and check each one is accurate. And in our real-world testing, we found out that about two times out of three, the tool does a good enough job that its output can be used directly without changing anything, which is really good.
HUIZINGA: Wow.
JENA: That means they can then focus on contouring the tumor, and it means the overall task can be completed about two and a half times faster. Now, when you think, for the complex tumors that we deal with, that can take up to two hours, that’s a lot of time saving and that’s time given back to the oncologist to spend in front of the patient, basically. So from our point of view, Javi mentioned this, uh, NHS award—it was this AI award that we were given by our national healthcare service—and what that was charged to do was to pick up the baton, once Microsoft had turned InnerEye into an open-source tool, because turning that open-source tool into a potential medical device that could be used in the cloud for real clinical care needs a whole other level of checks and evaluations. And that’s what we did, basically, in our team. We worked together with the team in our hospital that builds things as medical devices. Usually, in our hospital, that team builds what we call prosthetics. So things that you would put into a patient or onto a patient when they’ve been injured or something like that. They’d never done it for a software device. But it was great because we had some really strong starting points. First of all, we knew that the actual InnerEye code was fantastic, and secondly, we knew from the JAMA paper that the initial evaluations, in terms of how useful these things were, stood up very well. So that, together with our own clinical evaluations of having the tool plumbed in and seeing it being used, meant that we kind of already knew that this was going to be possible, that we were likely to succeed in this task.
HUIZINGA: Hmmm. Go back a little bit, Raj. You’ve mentioned that tumors change from patient to patient, so it’s not always the same. Do they also change over time?
JENA: Yes. Hopefully, they shrink after radiation therapy and the treatments that, that we give! And so yes, I mean, a big part of how these sorts of tools will continue to be explored in the future is actually tracking how tumors change over time, and that’s a big area. But, you know, we chose to pick on something that was achievable, that wasn’t too risky, and that would already achieve real utility, you know, in, in a hospital. So we already did that with even what it does in terms of marking out the healthy organs. The tumor stuff will come, I’m sure, in time. But we already proved that you could use these tools and build them to be useful.
HUIZINGA: Right. Javier, you mentioned earlier that one of the mandates of the lab is high-risk/high-reward research. This seems to have super high reward, but it’s about now that I ask what could possibly go wrong to everybody that comes on the show. [LAUGHS] Some people hate it. Some have worried that AI will take jobs away from doctors, and I’m sure there’s other worries, as well. What thought have you given to potential consequences, intended and unintended, as you move forward with this work, and what strategies are you employing to mitigate them? Let’s hear from the technologist first, and then we’ll hear from the doctor.
ALVAREZ: Yeah, absolutely. I believe, uh, AI safety should be our top priority in any of our AI products in healthcare. And yeah, it is super important to consider the intended and unintended consequences of deploying these models into the clinical workflow. One of the top-of-mind concerns for the public is that AI might take jobs away from doctors, but actually, we need more doctors. So one out of five jobs in oncology is not being filled in the UK, and the way we are thinking about deploying these AI models is to augment the clinicians. So we want to help them be more productive and deliver better patient outcomes. So the models are working alongside the doctor. And in the case of InnerEye, we are delivering more accurate and faster segmentation. Other concerns could be biases in the models, and to mitigate this, we usually work with clinicians like Raj to build diverse and good datasets that are representative of the population. As always, we make sure the clinician has the ultimate decision and approves the work of the AI model.
HUIZINGA: Raj, what’s your take on the “what could possibly go wrong” question?
JENA: Yeah, it’s an interesting one. You know, we’ve identified 500 risks, and we’ve gone through each and every one of them and made sure either that the software means that it can’t happen or we mitigate it, basically. Actually, though, the biggest thing that you can do to mitigate risk is talk to patients. And as part of this award, we got to do two really interesting consultations with patients, because then you understand the patient’s perspective. And two things, very briefly, that I took home from that: the first is, is that patients say, yeah, OK, this isn’t what I thought of when I think about AI. I understand that you’ve used incredibly advanced machine learning tools, but actually, this is a very simple task, and the risk is relevant to the task rather than the technology. So that was a useful thing. And the second thing is that they said, it’s all about who’s in control. I understand how this system works to assist an oncologist, and the oncologist retains ultimate control, and that is a huge thing in terms of enhancing trust. So I think as you move from these types of systems to systems where actually you start to push the envelope even further, it’s really important to take patients with you because they keep you grounded, and they will give you really good insights as to what those real risks are.
HUIZINGA: Right.
JENA: The other thing is, is that everyone knows, just like any job, you know, there are the bits that excite you and reward you. And then there are the bits that are kind of dull and tedious. And, you know, Eric Topol has this famous phrase that he said, you know, which is that good AI should give clinicians the gift of time, and that’s what you really want … is, is that you want the AI to allow you to spend more of the time that interests you, excites you, fascinates you, motivates you. And I think, you know, from my point of view, I’m a great believer that that’s what AI will do. It will actually, you know … doctors are very adaptive. They’ll learn to use new tools, whether it’s a robot from a surgeon’s point of view or a new AI algorithm, but they’ll use it in the best way possible to actually kind of still allow them to achieve that patient-centric care.
HUIZINGA: Well, that’s a lovely segue into the next question I had for you anyway, which is what could possibly go right. And you, Raj, referred to the triple benefit of InnerEye. Go a little deeper into who this research helps and why and how.
JENA: I think it’s a really important illustration of how you can democratize AI. A lot of AI research work stays as research work, and people don’t really understand how these tools … they hear a lot about it, and they read a lot about it, but they don’t understand how it’s actually going to make a difference for them in the clinic. And I think that’s why, you know, stories like InnerEye are particularly meaningful. We’re not talking about building an AI that lets us understand something that the human couldn’t understand before. So it’s not earth-shattering in that sense. And yet, even despite that simplicity, so many of my colleagues, they get it. They go, OK, you know, we really understand you’ve actually built something, and you’ve put it here into the clinic. And I think, you know, from my point of view, that’s the real value. There are other value propositions relating to the fact that it was open-source that lends itself to democratization and sharing and also because it runs in the cloud and that basically you don’t need a hospital that’s already got a quarter-of-a-million-pound computer and only those hospitals with the latest kit can actually use it. So it means that it is just as easy to deploy in a small hospital as it is in a big hospital. So for me, those are the key messages, I think.
HUIZINGA: Javier, Raj just alluded to the open-source nature of this tool or toolkit. I want you to drill in a little more on that story. Um, I understand this lives on GitHub. How did that decision come about, and why do you believe this will benefit people in the future?
ALVAREZ: Yes. So the decision to make the code open-source came from the desire to democratize access to these AI models. So we wanted to make sure everyone would be able to build on top of our research. And that was the way that we found to give access to Addenbrooke’s to create their own medical devices. We thought that also having open-source code allows us to be more transparent with our research and to gain trust in the technology. It also helps us to get help from the community on building this project. So we had people helping us to fix bugs and to make sure, uh, the algorithms are not biased. As part of the open-source, we made available three big components. One is the InnerEye-Gateway that routes the images to the AI models in the cloud and de-identifies the data. We also made available the InnerEye inference code that basically is an API that the InnerEye-Gateway uses to run the models. And also all the training code to be able to reproduce our work. Uh, yeah, we are super excited to see how people will use the open source in the future. We also have some startups that are using our code and trying to build products with it.
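[EDITOR’S NOTE: To make the Gateway’s de-identification step concrete, here is a minimal, hypothetical sketch using the pydicom library. The real InnerEye-Gateway’s tag handling and routing logic differ; the tag list and file names below are assumptions, chosen only to illustrate the idea.]

```python
# Illustrative only -- not the InnerEye-Gateway implementation. It shows the idea of
# blanking patient-identifying DICOM tags in a CT slice before it is routed to a
# cloud model. The tag list is an assumption; real de-identification follows the
# full DICOM confidentiality profile.
import pydicom

IDENTIFYING_TAGS = ["PatientName", "PatientID", "PatientBirthDate", "PatientAddress"]

def deidentify(path_in: str, path_out: str) -> None:
    ds = pydicom.dcmread(path_in)
    for tag in IDENTIFYING_TAGS:
        if hasattr(ds, tag):
            setattr(ds, tag, "")      # blank out directly identifying fields
    ds.remove_private_tags()          # drop vendor-specific private tags
    ds.save_as(path_out)

# Example usage (with hypothetical file names):
# deidentify("ct_slice.dcm", "ct_slice_anon.dcm")
```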
HUIZINGA: Go a little further, Javier, because this is interesting. Obviously, radiation therapy is one application of InnerEye, but I imagine it could be useful for other medical applications or other … actually, probably anything that you need to identify something, you know, the signal in the noise.
ALVAREZ: Yeah, um, segmentation in medical imaging is super important because it allows you to actually extract measurements from the images. So, yeah, it can be useful, as well, in some radiology scenarios like clinical trials where you want to track tumors over time. And also in surgery, where you want to plan the operation, so you need to understand how vessels are feeding into the tumor. So, yeah, segmentation is super important, and I think the components that we have could be useful for many different scenarios in medical imaging.
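[EDITOR’S NOTE: As a tiny example of the kind of measurement Javier means, a segmentation mask plus the scanner’s voxel spacing is enough to estimate a structure’s volume. The numbers below are made up.]

```python
# Toy example of deriving a measurement from a segmentation (all values made up):
# volume = number of voxels labelled as the structure x volume of one voxel.
import numpy as np

structure_mask = np.zeros((64, 64, 64), dtype=bool)
structure_mask[10:30, 10:30, 10:30] = True       # pretend segmentation output

voxel_spacing_mm = (1.0, 0.8, 0.8)               # slice thickness and pixel spacing
voxel_volume_ml = float(np.prod(voxel_spacing_mm)) / 1000.0   # mm^3 -> millilitres

volume_ml = structure_mask.sum() * voxel_volume_ml
print(f"Estimated volume: {volume_ml:.1f} ml")   # 8000 voxels x 0.00064 ml = 5.1 ml
```

Tracking the same structure across scans taken weeks apart gives the change-over-time measurements that clinical-trial scenarios rely on.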
HUIZINGA: Well, Raj, I always like to know where the project is on the spectrum from lab to life, and as I understand it, after the InnerEye team completed the research and made the code open source, Addenbrooke’s took the regulatory baton for medical device approval in the UK, but it’s still not over. So continuing with that analogy: if this were a relay race and the idea was the first leg, who else is running, where are you in the race, and who brings it across the finish line?
JENA: Yeah, that’s a really good analogy. I, I might use that one in the future. So, uh, there are other commercial organizations that have systems that will perform this work. They are quite expensive, actually, to buy into if you want to buy them outright. There are some where, a bit like ours, you can scale it so that you pay as each patient’s data is processed. They also are quite expensive for some emerging, uh, healthcare markets, and by emerging healthcare markets, I include my own in the, in the NHS. To our knowledge, we are the only cloud-based, open-source medical imaging device that we’re actually trying to build within the NHS. So that is truly unique. And in terms of where we are on that journey to take the, you know, the InnerEye open source all the way through to a medical device that actually, you know, you can buy off the shelf and have all of the associated support and, you know, technical specifications that you need to use in practice, we’re at this point where the hospital has basically finished all of that work. The hospital has been incredibly supportive of this entire research for the last 10 years, but it can’t act as a manufacturer. It’s quite difficult to do that. So we’ll then partner with a manufacturer, actually a company that’s a friend to us in the hospital and to the InnerEye team, too, and they will be responsible for basically taking all of the work that we’ve done to prepare the medical device certification documents and then actually going through that device certification and bringing it to the market. So it’s very exciting, you know, to be literally at that final stage of the, of the story.
HUIZINGA: Right. Ready to run across the finish line. I like to end each podcast with a little vision-casting, and I’ve been shocked at how profoundly healthcare has advanced in just the last hundred and fifty years. So I won’t ask you to project a hundred and fifty years out, but if InnerEye is a truly game-changing technology, what does healthcare, and especially oncology, look like in the future, and how has your work disrupted the field and made the world a better place? Javier, why don’t you talk about it from the technical aspect, and then maybe Raj can bring the show home from the medical aspect.
ALVAREZ: Sure. Yeah. One exciting, uh, development on the horizon is the use of GPT-4 in radiology or maybe even in radiotherapy. We are also working on multimodal learning now and trying to expand the work that we have done with InnerEye to radiology, where there is a much bigger opportunity. Uh, with multimodal learning, we are trying to integrate multiple sources of data like medical images, text, audio, and also different types of modalities because we want to make sure we can use CT scans, MRI, x-rays … and yeah, this requires developing new types of models, and these models need to be able to generalize to many different tasks because we have a huge need for AI in healthcare, and the current way of, uh, building these models is we develop one model for every use case, and this is not scalable. So we need more general-purpose models that can be specialized really quickly to different needs. And I think the other thing that excites me is actually … maybe this is quite far away, but how do we create a digital copy of the human body for every person on the planet, some sort of digital twin that we can actually use to run simulations? And I think medical imaging is going to be a big, important part of this. And we can use that digital twin to run interventions and figure out how we can treat that patient, what is happening with that patient, so, yeah, I think it’s super exciting, the potential of AI in healthcare, but of course we need to make sure we look at the risks, as well, of using AI. But yeah, there are many positive opportunities.
HUIZINGA: Right. I’m just shaking my head and my jaw is dropped: my digital twin in the future! [LAUGHS] Raj?
JENA: I mean, I think it’s a tremendously exciting time, and we live in an exponential age where things are coming and new innovations are coming at a faster and faster rate. I think what we have to do is to really, as doctors, learn from history and adapt to make sure that we stay able to retrain and reconfigure ourselves, and reconfigure medicine, to keep up to speed with the digital technologies. You know, just to give an example of what you were talking about with Joseph Lister; it’s fascinating. You know, I always think about, you know, Semmelweis and a similar story. So he was a Hungarian obstetrician working in Vienna who, for the audience, a hundred and fifty years ago worked out that actually if you wash your hands after delivering a baby from a mother, the mother was less likely to get a fever and less likely to die. He was 29 when he worked that out, and yet it took nearly 20 years for him to convince the medical community basically because they felt threatened. And, you know, that was the key thing. They just, you know, there wasn’t that level of understanding of, you know, that we need to think and adapt and incorporate new ideas and new thinking. And we will be challenged, you know, severely, I think, in the years to come, with new technologies. I’ve just come back from a conference talking about foundation models and GPT in medical imaging and, um, you know, there was a huge amount of excitement. One really interesting point that I heard is that these models were built on all of the images, mainly generated by cameras, on the internet and social media sphere, and if you add up all of the medical imaging that’s ever been done, it’s only about 1 percent of that image data. So it’s always going to be hard. And of course, we can’t always access all of that information, you know, for patient confidentiality and, you know, numerous factors. So it may take a little while before we have these amazing, generalizable AI models in medicine, but I’m sure they’ll come, and I think the biggest thing that we can do is to be ready for them. And the way I believe that you do that is in little steps, is to start bringing very simple, explainable, transparent AI into your workplace—of which, you know, InnerEye is a really good example—so that, you know, you can look inside the box, start to ask questions, and understand how it works because then, when the next AI comes along, or maybe the AI after that, that integrates more data than the human mind can hold together to make a decision, then you need to be comfortable with your ability to query that, to interrogate that, and make it safe, you know, for your patients. Because at the end of the day, for thousands of years, doctors have evaluated things. And yeah, I think, I think those things won’t change, you know, but we just … we’ve got to up our game, you know, so I’ve got to be as good as Javi is in kind of understanding how these things, how these things work. So …
HUIZINGA: Well, I love my job because I learn something new every show. And this one has been a humdinger, as they say. Thank you so much for taking time to educate us on InnerEye today.
ALVAREZ: Thank you.
JENA: Thanks. It’s been a pleasure.