Improving Patient Care Through AI Clinical Decision Support Tools
James Hamrick, MD, MPH, Flatiron Health, discusses the ways in which incorporating AI in cancer care improves the timeliness of care for patients, nudges providers to use novel therapies in practice, and reduces inequities in health care.
Dr Hamrick presented on this topic at the Association of Community Cancer Centers’ 49th Annual Meeting and Cancer Business Summit.
Transcript:
James Hamrick, MD, MPH: My name is James Hamrick. I'm a medical oncologist and Vice President of Clinical Oncology at Flatiron Health.
I spoke today in a session entitled “AI-Enabled Clinical Decision Support Tools.” My talk focused on a tool that we're building at Flatiron Health called Flatiron Assist. This is a point-of-care decision support tool that makes it easier for clinicians to choose the best therapy for their patient and harder to choose a worse one.
How has AI changed the timeliness of care?
Dr Hamrick: When I think about artificial intelligence in clinical applications, especially clinical decision support in cancer care, I think a lot about building trust. Right now, we're at a phase where we're allowing technology, for instance the tool that we use, Flatiron Assist, to surface evidence-based guidelines, specifically NCCN guidelines, right at the point of care for clinicians. So there is an element of trusting the technology to surface the right algorithm or the right treatment recommendation for you, and perhaps feeling comfortable not going directly to the primary source, which would mean going out of your EHR and into the NCCN content through their web portal.
And I think, at this point, trusting technology to do that is a fairly big step. Physicians are often fairly change-averse, and I think that is actually a really good thing, because we are reluctant to take risks on things that we know could have a negative impact on patient outcomes. But when you do start to open the door to trusting the technology to surface vital information for you, you can really see the possibility for artificial intelligence. Things like surfacing the patient's relevant history: what tests have they had, what scans have they had?
It's one thing to say, "Have they even had the PET scan? Have they had the molecular profiling test?" Then you start asking, "Would I trust that? Does that help me and speed me up in terms of understanding the patient's story? What about the result? Do I trust a biomarker result that's been surfaced for me?" Right now, we're using fairly manually built technology to surface things like treatment guidelines, treatment pathways, and preferred treatments. That's building the type of trust we need to really take advantage of some of the capabilities of machine learning.
And I do think that as this becomes more prevalent in other parts of our lives and becomes more normalized, things like using it on our phones to predict the next word in a text message, we're going to become more comfortable with what we can and can't trust. That's a really important step in the adoption of machine learning and artificial intelligence for clinical decision making. I want to underscore all of this by saying I am an optimist that artificial intelligence will do a lot of the work that gets the computer out of the way in the exam room and allows some of the humanity of medicine to return.
If you can serve up for the doctor the patient's history, with all of the important information about tumor biology, scans, and so on, and serve up the choices, whether it's a standard-of-care choice newly approved by the FDA or a clinical trial that's open at their site, then the doctor can spend their time listening to the patient, understanding the patient's priorities, helping the patient understand the decision before them, and helping them arrive at the best decision for that patient.
And there's a huge appetite for that among the clinical teams working with patients: "Get me out of the computer. Let the computer help me and allow me to spend more face time with my patient."
How does AI help to implement novel therapies into practice?
Dr Hamrick: I think there is a big opportunity for artificial intelligence to help with earlier adoption of therapies. We don't have a great infrastructure for busy clinicians to stay completely up-to-date on the latest breakthroughs, the latest practice-changing clinical trials, the latest FDA approvals, or NCCN recommendations. Because we don't have a strong infrastructure that's uniformly applied, you get some doctors, for example those who spend all of their time treating one disease, who are really up to speed. As soon as something is FDA approved or new practice-changing data come out, they're going to adopt that in their practice pretty quickly.
We see in the data that there are busy generalists like myself who may not be aware of a new treatment option that could really impact the outcome for their patient. So I think AI, by keeping up with the latest breakthroughs, summarizing them, and putting them in a digestible form into the physician's workflow, can help make people aware. The way we think about our decision support tool is that the doctor goes through and uses the logic to select their therapy just the way they normally do, and it guides them to a good decision.
In some cases, they may be going off pathway or not picking a preferred option. That may be totally appropriate; there's a lot of nuance to patients, and sometimes that's the best thing. But the tool does flag it for them: "Hey, Dr Hamrick, there may be a preferred option for this. Do you want to switch to the preferred option, or do you want to stick with what you're doing?" That's not a hard stop. The doctor still makes the decision, but they get a nudge in the right direction.
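To make the "nudge, not a hard stop" pattern concrete, here is a minimal sketch in Python. Everything in it, the Regimen class, the PATHWAY_PREFERRED table, and the suggest_preferred function, is a hypothetical illustration of the general idea, not Flatiron Assist's actual logic or API.

```python
# A minimal sketch of a non-blocking pathway "nudge".
# All names here are hypothetical illustrations, not Flatiron Assist's API.
from dataclasses import dataclass

@dataclass
class Regimen:
    name: str
    indication: str

# Hypothetical pathway table: indication -> preferred regimen name.
PATHWAY_PREFERRED = {
    "stage II NSCLC, adjuvant": "preferred-regimen-A",
}

def suggest_preferred(selected: Regimen) -> str | None:
    """Return an advisory suggestion if the selected regimen is off
    pathway; the clinician's choice always stands either way."""
    preferred = PATHWAY_PREFERRED.get(selected.indication)
    if preferred and preferred != selected.name:
        return (f"A preferred option ({preferred}) exists for "
                f"{selected.indication}. Switch, or keep {selected.name}?")
    return None  # On pathway: no nudge needed.

# The prompt is advisory, not a hard stop; whatever the doctor
# confirms is what gets ordered.
nudge = suggest_preferred(Regimen("regimen-B", "stage II NSCLC, adjuvant"))
if nudge:
    print(nudge)
```

The key design choice the sketch captures is that the function only ever returns a message for the clinician to consider; it never blocks or overrides the selection.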
For instance, the one I think of today: there's a new adjuvant therapy option in lung cancer, a super common disease, and to be honest, I wasn't aware of it. If I can be nudged in the right direction, that may open up a new treatment avenue for me. I think doctors used to worry that this might lead to cookie-cutter medicine, people forcing them to do something. The reality is that innovation is so fast now that most of the doctors I talk to, myself included, really welcome a tool that makes sure we have seen the latest and greatest when we can't go out and find it all ourselves.
How will AI impact health equity?
Dr Hamrick: I think there is a really good opportunity for AI to reduce inequities in health care. We all know about unconscious biases. The one example I can think of is that we know we don't have enough diversity among the people enrolled in clinical trials, and that's how we learn. That's super important, and it's a priority of the FDA and really anyone who's thinking about this space.
I think unconscious biases can lead to things like not offering a clinical trial to certain parts of the population, for whatever reason that might be. If you have a tool that reminds you to offer the trial, it may overcome some of your unconscious bias, and it may result in more people being offered trials. That will inevitably result in a more diverse set of people enrolling and participating in trials, so we can learn from those patients, and they will look more like the people in the real world whom we are actually treating.
So I think there is a big opportunity for AI to improve there. However, we have to be really careful, because it's absolutely possible to build a machine learning model that has bias built into it, and if you're not looking for it very carefully, you could wind up doing harm. For instance, we have a tool that helps predict emergency room visits in the next 60 days, and we are very meticulous about the way we employ and use that tool. We look at the predictions we make, and then we retroactively look at what happened to the patients we made predictions about. Did they actually wind up in the ER?
If our tool starts to look like it has bias or isn't accurate, we go back to the model and figure out how to make sure it doesn't have some type of bias. The last thing we would want to do is make an inaccurate prediction about who's going to wind up in the ER and divert resources to the wrong group of patients, away from patients who actually need them. That's a very real risk with artificial intelligence and machine learning, and it's really important that the people designing the models, the people using the models, and the people tracking and validating the models in the real world pay careful attention.
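As a rough illustration of that kind of retrospective audit, here is a minimal Python sketch that compares predicted 60-day ER-visit risk against observed outcomes, broken out by subgroup so differential error becomes visible. The record fields, subgroup labels, and numbers are assumptions for illustration, not Flatiron's actual data model or results.

```python
# A minimal sketch of retrospectively auditing a risk model's predictions.
# Assumes a log of (predicted risk, actual outcome, subgroup) per patient;
# field names and values are illustrative, not a real data model.
from collections import defaultdict

def audit(predictions):
    """Compare predicted 60-day ER-visit risk to observed ER visits,
    per subgroup, so bias against any one group is visible."""
    stats = defaultdict(lambda: {"n": 0, "pred": 0.0, "actual": 0})
    for p in predictions:
        s = stats[p["subgroup"]]
        s["n"] += 1
        s["pred"] += p["risk"]        # sum of predicted probabilities
        s["actual"] += p["er_visit"]  # 1 if the patient visited the ER
    for group, s in stats.items():
        pred_rate = s["pred"] / s["n"]
        actual_rate = s["actual"] / s["n"]
        # A large predicted-vs-observed gap in one subgroup but not the
        # others suggests the model is miscalibrated for that group.
        print(f"{group}: predicted {pred_rate:.0%}, observed {actual_rate:.0%}")

audit([
    {"subgroup": "group-1", "risk": 0.30, "er_visit": 1},
    {"subgroup": "group-1", "risk": 0.20, "er_visit": 0},
    {"subgroup": "group-2", "risk": 0.10, "er_visit": 1},
    {"subgroup": "group-2", "risk": 0.15, "er_visit": 1},
])
```

In practice this kind of check would run continuously against the deployed model's prediction log, which is the point Dr Hamrick makes next.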
You can't turn a model loose in the wild and just let it run. You have to continually monitor it because things change and biases can creep in. So that's a big caveat. But overall, I'm very bullish and optimistic on the potential for AI to reduce inequities in health care.
Another really important thing I'd like to say about artificial intelligence and building machine learning models is that domain expertise is essential for the people designing the models. What that means is, if you're building a machine learning model that you're going to train on a data set for an oncology use case, it's really important to spend a lot of time with clinicians who have domain expertise in clinical oncology and understand the space, who can help you design your model, label your data set, and make sure it makes sense clinically.
This is ultimately a tool for humans to design and build, and for humans to use. I think it's going to make humans' jobs easier and make us much more efficient. It's going to do things like reduce the time it takes to adopt great new therapies. But the human element will never go away. I'm not afraid that people are going to lose their jobs over this. I think it's going to make us all able to practice at a higher level, and really use our skills in the most nuanced situations that only a human can handle.