
Gut Check: Michael Wallace, MD, on AI in GI

Host Brian Lacy, MD, is joined by Dr Michael Wallace to discuss the present and future applications of artificial intelligence in the practice of gastroenterology.

 

Brian Lacy, MD, is a professor of medicine at Mayo Clinic-Florida in Jacksonville, Florida. Michael Wallace, MD, is a professor of medicine at Mayo Clinic-Florida in Jacksonville, Florida.

 

TRANSCRIPT:

 

Any views and opinions expressed are those of the author(s) and/or participants and do not necessarily reflect the views, policy, or position of the Gastroenterology Learning Network or HMP Global, its employees, and affiliates.

Dr Lacy:

Welcome to this Gastroenterology Learning Network podcast. My name is Brian Lacy. I'm a professor of medicine at the Mayo Clinic in Jacksonville, Florida, and I am absolutely delighted to be speaking today with Dr Michael Wallace, professor of medicine at Mayo Clinic in Jacksonville. Many of our listeners will recognize Dr Wallace's name, as he's an international expert in advanced endoscopy. However, another area of Dr Wallace's expertise is artificial intelligence, and today our focus is on AI in the field of gastroenterology, a topic that is becoming increasingly important. Today's podcast is a great opportunity to find out where we are right now with AI in GI and where the field is heading.

So Dr Wallace, welcome. Artificial intelligence, or what we'll refer to as AI for our discussion today, is playing an increasingly prominent role in health care; however, people use the term differently. How do you define artificial intelligence?

Dr Wallace:

Artificial intelligence is a very broad term. If you take it literally, it refers to systems that can make intelligent decisions, things we typically associate with human intelligence, but using machines, typically computer-based systems, to do that. And that can be as broad as facial recognition in an application on your smartphone, or as specific as detecting an early cancerous lesion on a CAT scan that a radiologist may not see.

Dr Lacy:

Great. So a big broad topic, and that's a perfect segue because when people discuss artificial intelligence, they oftentimes use the terms machine learning and deep learning. What's the difference between machine learning and deep learning?

Dr Wallace:

I really think about this a little bit in traditional statistical terms. Many of your listeners might be familiar with traditional statistics like linear regression or logistic regression models, where maybe you wanted to predict whether someone was having a severe gastrointestinal bleed, and you might put in some variables like their age, whether they're on anticoagulation, whether they've had a previous peptic ulcer or H pylori. Traditionally we put in those known variables. In machine learning, we still put in relatively fixed definitions. We train the model with what are called annotated outcomes: someone goes in and says, this is a person who had bleeding and this is a person who didn't have bleeding, and perhaps these are some variables that we might include, and we train the model.

However, deep learning doesn't require a human to go in and label those things. It learns just from pure pattern recognition. And the biggest advantage of deep learning over machine learning is really the efficiency: you don't need a human to go in and annotate, to define what your outcome of interest is and what your inputs of interest might be. So it can learn on its own, and often we start with machine learning to train a model and then allow it to continue its learning and become more and more precise with deep learning. Because it doesn't need human annotation, it can look at much, much larger data sets and do that in a much more rapid fashion. And that's really the deep power, so to speak, of deep learning.
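To make the machine learning half of that comparison concrete, here is a minimal sketch in Python with scikit-learn, using an entirely synthetic cohort. The features echo the bleeding example above, but the data, coefficients, and feature choices are invented for illustration and are not a validated clinical model; a deep learning approach would instead learn its own features from raw data rather than relying on these hand-chosen, human-annotated inputs.

```python
# Minimal sketch of the "machine learning" half of the comparison:
# a logistic regression trained on human-annotated outcomes with
# hand-picked features (age, anticoagulation, prior ulcer, H pylori).
# All data here are synthetic and purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic cohort: columns = [age, on_anticoagulation, prior_ulcer, h_pylori]
n = 1000
X = np.column_stack([
    rng.normal(60, 15, n),     # age in years
    rng.integers(0, 2, n),     # on anticoagulation (0/1)
    rng.integers(0, 2, n),     # prior peptic ulcer (0/1)
    rng.integers(0, 2, n),     # H pylori positive (0/1)
])

# "Annotated outcomes": in practice a human (or chart review) labels who bled.
# Here we simulate the labels so the example runs end to end.
logit = -6 + 0.05 * X[:, 0] + 1.2 * X[:, 1] + 0.8 * X[:, 2] + 0.5 * X[:, 3]
y = rng.random(n) < 1 / (1 + np.exp(-logit))

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

# Predicted probability of severe bleeding for a new, hypothetical patient:
# 75 years old, on anticoagulation, prior ulcer, H pylori negative.
print(model.predict_proba([[75, 1, 1, 0]])[0, 1])
```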

Dr Lacy:

Well, that's great. I think a lot of our listeners now feel much more comfortable with that distinction, so thank you. And Mike, thinking about AI in GI, how common is AI now being employed in our field?

Dr Wallace:

It's actually being used in a lot of background ways that perhaps we're not even aware of. If you're using email right now, such as Microsoft Outlook, you may have noticed recently that it starts to generate responses for you, a suggested response to a common email. Sometimes it's very simple, like, great, love it, keep going. Those are AI-generated responses: it's looking at the text of the incoming email and predicting what a likely response is based on millions of previous, similar emails.

In the specific field of medicine, however, the one major application in our GI practice right now is polyp detection tools. There are several FDA-approved tools out there now for assisting with polyp detection, and those are going to expand this year to some other very simple things during colonoscopy, for example, measuring the bowel preparation and measuring your withdrawal time. We're also starting to see it in related fields. Radiology is certainly using this to assist with reading common x-rays like chest x-rays and mammograms, and even our pathologists are using it to assist with common pathology tasks, you know, Pap smears, where thousands and thousands are read every day across the country and most of them are normal, so AI can be used to make those reads more efficient. Those are some examples of things that are in practice right now.

Dr Lacy:

Yeah, I think it's really wonderful to think about some of the diagnostic ways we can use AI. And as you've already mentioned, it's not just limited to endoscopy; it's really going to spread into other fields as well. So if we think about the future a little bit, what role do you think AI will play in the diagnosis of GI and hepatology disorders?

Dr Wallace:

I think it's really going to help us, for example, in the early detection space. One of my areas of research is how we detect pancreas cancer and other GI cancers early. Traditionally we do screening tests, or maybe we get some blood tests, a tumor marker like CA 19-9. But one area where we're really seeing progress is the ability to take the large amount of data that's already in the electronic record and extract useful information out of it that we didn't necessarily think was useful. I'll give you an example. In pancreatic cancer, it turns out that a couple of very simple measurements, like a blood sugar, a cholesterol level, a weight, and an age, if you combine them in a specific way, can actually give you a very accurate predictor of who is going to get pancreatic cancer, even about 3 years prior to their clinical diagnosis.

And yet we don't have the capacity to look at the millions of patients in our electronic health records and put together that subtle piece of information, and certainly not to do that for all of our relevant diseases. But you can train AI algorithms to run that search every single day in the background and start to flag that a pattern is emerging in a given individual, and perhaps you might want to enroll them in a screening program or offer them an MRI. That's one disease, but you can really take any disease where early detection is important: obviously cancers, inflammatory bowel disease, worsening liver failure in a patient with known liver disease. For all of these, we can start to use these tools to detect disease early and then intervene early, hopefully for better outcomes.
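As a concrete illustration of that background-surveillance idea, here is a hypothetical Python sketch of a nightly job that scans routine values already in the record (age, blood sugar, cholesterol, weight) and flags patients whose combined trend crosses a threshold. The score, weights, field names, and cutoff are invented for illustration; this is not the validated pancreatic cancer model described above.

```python
# Hypothetical background-surveillance sketch: scan routine labs already in
# the record and flag patients whose combined pattern warrants follow-up.
from dataclasses import dataclass

@dataclass
class PatientRecord:
    patient_id: str
    age: float
    fasting_glucose: float        # mg/dL, most recent
    glucose_change_1y: float      # mg/dL rise over the past year
    cholesterol_change_1y: float  # mg/dL change over the past year
    weight_change_1y_pct: float   # percent body-weight change over the past year

def risk_score(p: PatientRecord) -> float:
    """Toy composite score: elevated and rising glucose, falling cholesterol,
    weight loss, and older age all push the score up. Weights are invented."""
    score = 0.0
    score += 0.03 * max(p.age - 50, 0)
    score += 0.01 * max(p.fasting_glucose - 110, 0)
    score += 0.02 * max(p.glucose_change_1y, 0)
    score += 0.02 * max(-p.cholesterol_change_1y, 0)
    score += 0.10 * max(-p.weight_change_1y_pct, 0)
    return score

def nightly_scan(records, threshold=1.5):
    """Run in the background (e.g., a nightly job) and return the patients to
    flag for possible enrollment in a screening program or an MRI."""
    return [p.patient_id for p in records if risk_score(p) >= threshold]

if __name__ == "__main__":
    cohort = [
        PatientRecord("A", 68, 132, 40, -25, -6.0),  # new hyperglycemia + weight loss
        PatientRecord("B", 45, 92, 2, 5, 0.5),       # stable labs
    ]
    print(nightly_scan(cohort))  # -> ['A']
```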

Dr Lacy:

Those are such great points, and I really like the idea that we're kind of awash in data, and sometimes it's hard to keep track of all these millions of data points that may not seem like much at any one time, but this helps pull it all together for us and also tracks things over time so we can look at trends. I really like that. So let's shift gears just a little bit and think about AI for therapeutic purposes. Is AI currently playing a role in the treatment of GI or liver disorders?

Dr Wallace:

I would say not yet, but this is certainly being investigated. I'll give you an example in my space. In interventional endoscopy, some of the newer procedures we do are things like POEM procedures, which are done for achalasia. When you do a POEM procedure and you start cutting through the submucosal plane, perhaps the most important thing we need to recognize is large perforating blood vessels that come down from the deep layers of the esophagus up to the surface. They can be hard to see; they're often not red, and, in fact, arteries are often white. We can train AI algorithms, and this has been done, to detect a vessel in your field and alert the endoscopist: just ahead of you on the right, there's a blood vessel. In fact, they put a nice little color overlay over that blood vessel and say, don't cut here, or coagulate here before you cut.
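That overlay-and-alert idea lends itself to a simple illustration. Assuming some trained segmentation model has already produced a per-pixel vessel probability map for the current frame (the model itself is not shown), a sketch like the following tints the predicted vessel pixels and decides whether to alert the endoscopist. The colors, thresholds, and placeholder data are assumptions for demonstration, not any commercial system's behavior.

```python
# Minimal "color overlay" sketch: blend a highlight color over pixels the
# (assumed, external) model flags as vessel, and raise an alert if enough
# of the field is involved. Pure NumPy, illustrative only.
import numpy as np

def overlay_vessels(frame: np.ndarray, vessel_prob: np.ndarray,
                    threshold: float = 0.5, alpha: float = 0.4):
    """frame: HxWx3 uint8 RGB image; vessel_prob: HxW floats in [0, 1]."""
    mask = vessel_prob > threshold                    # predicted vessel pixels
    tinted = frame.astype(float).copy()
    blue = np.array([0.0, 0.0, 255.0])                # highlight color
    tinted[mask] = (1 - alpha) * tinted[mask] + alpha * blue
    alert = mask.mean() > 0.01                        # >1% of the field is vessel
    return tinted.astype(np.uint8), alert

if __name__ == "__main__":
    frame = np.full((480, 640, 3), 180, dtype=np.uint8)        # placeholder frame
    prob = np.zeros((480, 640))
    prob[200:260, 380:460] = 0.9                                # fake model output
    out, alert = overlay_vessels(frame, prob)
    if alert:
        print("Vessel predicted ahead and to the right: coagulate before cutting.")
```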

So things like that are potential applications in my space of interventional endoscopy. If we think about something even more futuristic, drug development and creating new proteins and biologics in particular, people have spent decades trying to work out the exact shape of a protein so they could custom-make a monoclonal antibody to bind to it and inactivate or activate it. Very bright scientists at Google DeepMind developed a program called AlphaFold; you've probably heard of it. It takes the amino acid sequence of a protein and predicts how it will fold and what shape it will take. This used to take decades. People would spend their entire career doing x-ray crystallography to determine a protein's shape, and then you'd spend another decade trying to create a monoclonal antibody to it. Now they can literally do that in minutes: take the amino acid sequence, predict the shape of the protein, and predict where an antibody might be able to bind and inactivate it. So I think it's really going to accelerate drug development, especially for complex drugs like biologics.

Dr Lacy:

I think that's really great. Your expertise here is so critical, and looking ahead to where we're going to be in 5 to 10 years is really just fascinating. And if we think about the future a little bit more, are there other applications for AI, maybe in terms of training or education? Are we underusing it? Should we be using it more? What do you think the future holds there?

Dr Wallace:

Yeah, obviously anybody who's got kids, or even colleagues under the age of, I would say, 40, you ask them a question and the first thing they do is open up their smartphone and go to ChatGPT, and it generates often a pretty good and sometimes very good response. So the idea of opening up a book to find an answer, or even, as a medical student, reading a textbook so that you have that answer in your head and don't need to look it up, I think that's shifting, and it's shifting how we learn, in maybe good ways and maybe bad ways. If the entire world's body of knowledge is available to you on your smartphone, you don't need to store it in your brain, but there are clearly times when you need to do very complex and very rapid thinking: the middle of a cardiac code, or a big perforation that happens during a colonoscopy. You need to think on your feet. So I think a big unknown right now is how these tools are going to impact people's ability to learn, their willingness to learn the traditional way of reading a book or other resources, or whether they'll just expect that ChatGPT will always have the answers, so why do I need to know it?

The other thing, a good example in our field, is colonoscopy, where increasingly we're using these AI systems to detect polyps, and those same systems are going to be able to detect colitis and early cancer and H pylori and other things. Are the endoscopists of the future going to be able to detect those polyps by themselves, or are they just going to be scope pushers and let the AI do the detection for them? I think there is some good news. There have been studies now on, for example, training fellows with AI systems. They do seem to learn to detect polyps, and they do seem to retain that skill even when they're not using the systems. Interestingly, if you take a fellow, give them AI support for their colonoscopy, and compare them to an experienced endoscopist, they perform pretty close to the level of the experienced endoscopist in terms of lesion detection. So it does appear these AI systems are bringing our young doctors, novice trainees, very quickly up to speed on at least the simple tasks: is there a polyp there? There are more complex tasks that AI hasn't been trained to do yet, but it will impact learning. It will impact how we learn and what we learn. And I think the big question is, is it going to replace our human brain with an external brain? Hopefully not. Hopefully it'll give us more things to work on and let us continue to be smart physicians.

Dr Lacy:

Yeah, I like all those ideas, and it raises such a huge question about how we really learn; it's almost a philosophical question, isn't it? So Mike, as we wind down here, one last question. Many providers and patients are worried about AI and the possible loss of jobs, as you've already mentioned, or the creation of a more impersonal workplace, or the very real potential for data leaks with the release of sensitive information. What are your thoughts?

Dr Wallace:

You know, as with any major technological revolution, there are obviously risks of bad outcomes and there are potential benefits. I think the most important thing is that we think through what those are, try to mitigate the risks, and take advantage of the positives. I'll give you an example. There are many systems out there that are helping us with documentation. We all know that for the better part of 2 decades, since the real introduction of electronic health records, physicians' lives have become, I would say, worse. We're spending much more time with our inbox, we're spending much more time clicking, and we're spending a lot less time looking the patient in the eye and listening to the patient. I think AI has the potential to solve some of those negatives if we do it in the right way.

So for example, some of these ambient artificial intelligence systems are simple microphones that listen to a doctor-patient conversation, like the one you might have in your office with a new patient. They then generate a relevant note, they even generate relevant orders and bills, and they take away however much time that is, the 5, 10, 15 minutes you spend documenting. It's been shown that when these systems are implemented in large health care systems, they can save up to 2 hours per day of a physician's time spent on simple documentation. That's a real positive if we leverage it well and say we've got 2 extra hours to spend more time talking to a patient and have a positive interaction. If, on the other hand, we leverage it in the wrong way and just cram more patients into our very busy schedules and have even less time with the patients, then I think that's a potential negative. So I think we as physicians need to be aware of these potential risks and be leaders in using this in a way that is truly positive. A really good leader in this space is Eric Topol. If you've read some of his books, he's talked about how to use these systems for good and avoid using them for bad, so he's someone I like to read a lot and get ideas from on how to make this a positive change.
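To show the general shape of such an ambient documentation system, here is a purely structural Python sketch. The transcribe and draft_note functions are placeholders rather than any vendor's actual API, since real products wire in their own speech-to-text and language-model services; the note structure is an assumption for illustration.

```python
# Structural sketch (not any vendor's actual API) of an ambient documentation
# pipeline: capture audio, transcribe it, then draft a note for the clinician
# to review and sign.
from dataclasses import dataclass

@dataclass
class DraftNote:
    subjective: str
    objective: str
    assessment: str
    plan: str

def transcribe(audio_path: str) -> str:
    """Placeholder: a real system would call a speech-to-text service here."""
    raise NotImplementedError

def draft_note(transcript: str) -> DraftNote:
    """Placeholder: a real system would use a language model to summarize the
    visit transcript into note sections, with suggested orders handled separately."""
    raise NotImplementedError

def ambient_visit_pipeline(audio_path: str) -> DraftNote:
    transcript = transcribe(audio_path)
    note = draft_note(transcript)
    # Critically, the clinician reviews and edits before anything is signed:
    # the AI drafts, but the physician remains responsible for the final record.
    return note
```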

Dr Lacy:

Yeah, that's great. And as many of our listeners know, there are studies now showing that providers spend more time documenting on the computer and doing those tasks than speaking to their patients. That balance has shifted, which is not great. So Mike, this has been a wonderful conversation. Any last thoughts for our listeners?

Dr Wallace:

Yeah, I wanted to share a little bit of a story about how I got into this field, because I think it's a little bit humbling for me. It was about 7 or 8 years ago. We were doing a clinical trial with a new drug that helped detect polyps; it was actually a methylene blue delayed-release tablet that you took with your bowel prep, and it stained polyps blue, which in theory made them easier to find. We did a big clinical trial. We recorded hundreds, even thousands, of colonoscopies. Somebody went and annotated all those colonoscopy videos, said, here's the polyp, and matched it up with the pathology. And we went to the FDA and said, you know, the trial was positive. The company wanted approval for the drug, and the FDA said no.

So the company was sitting on 1200 colonoscopy videos that had been intensely annotated. What do we do with this? It was right when AI was starting to come into, maybe not the mainstream, but kind of the feeder streams. And we and several others said, maybe we can develop an AI tool to detect these polyps. We've already got the videos, we've already annotated them all, we've even labeled every frame of every video: there's a polyp in that frame. So we partnered with the company that sponsored the study and identified an AI partner, a computer science group. They developed and trained a very sophisticated algorithm to detect those polyps. They did some early-phase studies that eventually led to two large randomized controlled trials, one in Europe and one in the US, which then led to the FDA approving that AI technology.

That technology is now called GI Genius. It's the first FDA-approved AI system for use in gastrointestinal endoscopy. And it started, a little bit, as an opportunistic mistake. We had a clinical trial that didn't fail, it was actually a positive trial, but it failed to get FDA approval for the drug. We tried to get creative when the chips were down and ask, what can we do with these chips? It turned out to be, I think, a very valuable exercise, and it led to a pathway that many are now following to train endoscopic and other computer vision AI systems. So for me, that story started with a very simple exercise, and it was a lesson in how we take advantage of these modern technologies.

And I think for me it was just recognizing that the important source material we as gastroenterologists have is lots of annotated data, whether it's colonoscopy videos or electronic health records with long-term patient outcomes. We have that content, and we can train AI systems on it to predict important things, whether it's a colon polyp, who will develop Crohn's disease, or response to a certain intervention. So I just wanted to share that story and hopefully inspire some young GI doctors out there to think about how they might develop the next GI Genius.
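For readers curious what "training on annotated colonoscopy videos" looks like in practice, here is a hypothetical Python sketch of just the data-loading step, assuming frame-level labels stored in a CSV and using OpenCV plus a PyTorch Dataset. The file format, column names, and label scheme are invented for illustration; this is not the actual GI Genius pipeline.

```python
# Hypothetical data pipeline implied by the story above: frame-level polyp
# annotations from trial videos become the training set for a detection model.
import csv
import cv2                                  # OpenCV, for reading video frames
from torch.utils.data import Dataset

class AnnotatedColonoscopyFrames(Dataset):
    """Each CSV row (assumed format): video_path, frame_index, has_polyp (0/1)."""
    def __init__(self, annotation_csv: str):
        with open(annotation_csv, newline="") as f:
            self.rows = [(r["video_path"], int(r["frame_index"]), int(r["has_polyp"]))
                         for r in csv.DictReader(f)]

    def __len__(self):
        return len(self.rows)

    def __getitem__(self, i):
        video_path, frame_index, has_polyp = self.rows[i]
        cap = cv2.VideoCapture(video_path)
        cap.set(cv2.CAP_PROP_POS_FRAMES, frame_index)   # jump to the labeled frame
        ok, frame = cap.read()
        cap.release()
        if not ok:
            raise IOError(f"Could not read frame {frame_index} of {video_path}")
        return frame, has_polyp                          # image + human-annotated label

# A standard image classifier or object detector would then be trained on
# these (frame, label) pairs with an ordinary supervised training loop.
```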

Dr Lacy:

Yeah, I think that's a great story. I did not know that; it's a great bit of history. And it's also a reminder not to give up on studies when they seem like they didn't work. You didn't give up, and it led to this amazing, innovative technology. A perfect way to wind down.

Mike, again, thank you so much for lending your expertise on this important topic to our listeners on Apple, Spotify, and other streaming networks. I'm Brian Lacy, a professor of medicine at the Mayo Clinic in Jacksonville, Florida. You have been listening to Gut Check, a podcast from the Gastroenterology Learning Network. Our guest today was Dr Michael Wallace from the Mayo Clinic in Jacksonville, Florida. I hope you found this just as enjoyable as I did, and I look forward to having you join me for future Gut Check podcasts. Stay well.

 

 

 

© 2025 HMP Global. All Rights Reserved.