Breaking Down Health Care

The Government and Health Care: A Volatile or Necessary Mix? (Part II)

In the second part of this Breaking Down Health Care conversation, John Hennessy, MBA, and Michael Kolodziej, MD, cover the US federal government’s role in health care—Medicare, Medicaid, and the Affordable Care Act. They discuss the benefits, challenges, and potential room for improvement in government-sponsored health care in the United States.


Read the transcript: 

John Hennessy, MBA: Welcome to Breaking Down Health Care, where we'll be discussing evolving topics in health care in the United States. I'm John Hennessy. I'm a principal at Valuate Health Consultancy. I'll be in conversation with Michael Kolodziej, an oncologist and currently an advisor to Canopy, an electronic patient-reported outcomes company. We're also going to talk about his new Substack, Decoding Health Care. We're using our expertise to dive into some of the nuances of the health care industry in the United States.  

Hennessy: So Mike, I wanted to continue our conversation about big data and the EMR and maybe get into other forms of technology in health care across the board. As with most new technology, the EMR was supposed to revolutionize health care and make doctors more efficient, but there are so many other technological tools, whether that's accessing radiology studies virtually or remote patient sensors for things like temperature. So have these things really been helpful, or is it just a whole lot of ones and zeros that make it really challenging to keep up with all the inputs coming at you from all these directions?  

Michael Kolodziej, MD: Well, so I think, John, there are kind of two halves to that question. The first half is a continuation of our last discussion, which is: is there a big data set where all these things are coming in? Now that, of course, begs the question of whether the data that's coming in is worth anything. We'll come back to that in a second.  

So the first question, regarding big data, requires that the data be diverse. It needs to come from everywhere. And we talked about the impasse regarding access to various data elements because of self-interest, more than anything else. If we do not have a good, diverse data set, then the data set isn't worth anything. It's worth nothing.  

There is a wonderful piece in the New England Journal just in January, where some authors from Harvard talk about a predictive model for use in the ER for abdominal pain. This model was developed in the early 1970s, and it was developed in the UK. And it was based literally on manual review of all these data, of what happened to the patient. It had over 90% predictive accuracy in emergency rooms in the UK. So they tried to use that predictive model in Copenhagen, and it was no good. It was only 60% predictive. Why? Well, abdominal pain meant different things to different people, right? How do you define it? Drinking patterns, I mean, they go through all this. My point is this: if the data set isn't big enough and diverse enough, we don't get useful information. We get a very skewed view of the world, right? So we need to think about that, that we need to get everybody to pitch in. And that will include the new technologies, because without it, we'll never get there.  

The second thing is that data has to be accurate. Absolute accuracy. If you've ever read your note from your primary care doctor from an office visit, you probably picked up half a dozen errors, at least. In fact, it used to drive me crazy when I was in practice, because we'd print out the patient's record for them. They'd read it and they'd come back with the office note, and there'd be big circles around this and that because it wasn't their grandmother, it was their grandfather, or something like that. Now, that's inaccurate. And that kind of inaccuracy is not that hard to fix. But just think about that kind of inaccuracy when you talk about big data.  

When I was at Aetna, I got to do some fooling around with Aetna data; it was very fun. And I did this project. I was really interested in watchful waiting in prostate cancer. So I said, "Okay, I can develop a query, a set of rules, to find out how often it happens in the commercially insured population." My hypothesis was that in young patients watchful waiting was rare because they had a long time to live. So I said, okay, we're going to look for people with a diagnosis of prostate cancer when it first appeared in their claims data. And then I'd look for either surgery or radiation. And the ones who didn't have either within, let's say, 6 to 12 months must have chosen watchful waiting. So I got the first part of the data back, and a third of the patients were women. Now just think about that for a second. Prostate cancer. Women. Why? Because they were the holders of the insurance card, so the account was under their name. My point is this: if the data isn't accurate, we don't get anything useful out of it. With all this new technology stuff, that data needs to come in too, but it needs to be accurate, right?  
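The rules Dr Kolodziej describes can be sketched as a simple cohort query. This is a minimal illustration, not Aetna's actual system: the table layout, member IDs, and code labels are all hypothetical, and real claims work would use ICD and CPT codes rather than readable strings.

```python
from datetime import timedelta
import pandas as pd

# Hypothetical, simplified claims table: one row per claim.
claims = pd.DataFrame({
    "member_id": [1, 1, 2, 3],
    "sex":       ["M", "M", "F", "M"],
    "code":      ["prostate_ca_dx", "radiation", "prostate_ca_dx", "prostate_ca_dx"],
    "date":      pd.to_datetime(["2020-01-05", "2020-03-01",
                                 "2020-02-10", "2020-04-01"]),
})

# Step 1: first appearance of the prostate cancer diagnosis per member.
dx = (claims[claims["code"] == "prostate_ca_dx"]
      .sort_values("date")
      .drop_duplicates("member_id"))

# Step 2: any surgery or radiation claims.
tx = claims[claims["code"].isin(["surgery", "radiation"])]

def had_treatment(row, window_days=365):
    """True if the member had surgery/radiation within the window after diagnosis."""
    member_tx = tx[tx["member_id"] == row["member_id"]]
    delta = member_tx["date"] - row["date"]
    return ((delta >= timedelta(0)) & (delta <= timedelta(days=window_days))).any()

# Step 3: no treatment in the window -> presumed watchful waiting.
dx["watchful_waiting"] = ~dx.apply(had_treatment, axis=1)

# The sanity check that exposes the card-holder artifact: a prostate
# cancer cohort should contain no women. A non-empty result means the
# diagnosis was attached to the policy holder, not the patient.
suspect = dx[dx["sex"] == "F"]
```

The anecdote's lesson lives in that last line: the query logic was fine, but without a validity check on the cohort itself, a third of the "patients" would have been the wrong people entirely.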

So different people collect ePROs, electronic patient-reported outcomes, differently. They collect different data elements. A lot of them don't collect health outcomes. So we've got that. Then we've got the remote physiological monitoring, the Apple Watches of the world. And not only do we not know exactly how accurate they are, we don't know if they add anything at all to health outcomes. Now, the article published on the Apple Watch, if you read it carefully, you'll see that the documented arrhythmias were only really confirmed in about a third of the patients, and they weren't confirmed in real time. They were confirmed within a couple of weeks. So what I'm saying is this: if we can't get that data from all these companies, and they're all private companies, into the data set, and if that data isn't accurate and representative, we're going to wind up with a total mess, something that isn't really very interpretable. So we've got some work to do, because first we've got to liberate the data, then we've got to validate the data, and then we can develop predictive models.  

Hennessy: It's fascinating. As you were talking about this, I was thinking back to clinical trials, which are maybe the perfect example of small data, where we have very limited, very well-controlled data sets. But we make a lot of decisions, both clinical and economic, based on those relatively small data sets. And we know there must be something out there in all those ones and zeros that are just sitting there.  

And maybe this leads to the next question. You've talked a little bit about potential short-term uses for AI in health care in other conversations. Given the conversation we've had about the EMR and ePROs and the other tools that we have, is AI in health care, the implementation of that and maybe some of the early learnings, going to live up to its promise that a miracle will occur? Or is it going to face some of the same challenges we've just been talking about: accuracy and timeliness of data, and ownership of data?  

Dr Kolodziej: Yeah, so the answer is, I don't know, obviously. I think when it comes to AI we probably should think about walking, or even crawling, before we try to run. So there are some easy short-term wins I think that we can focus on. The bigger stuff, you know, the really interesting stuff, it's going to take years for us to sort through that.  

You know, it's interesting to me that we have known forever that clinical trials are critical to our understanding of truth, of knowledge. And yet we know very well that the mechanism by which we do clinical trials right now is imperfect. Why? Older, whiter, richer, better educated, nicer zip code. There are so many studies showing that what a randomized clinical trial shows isn't necessarily applicable to the general population. We know that. This is not a question. Now, how do we get past that? That's where real-world evidence should ultimately help us. Those are simple queries, right? Those are not complex AI. They're extracting data from the electronic medical record and marrying it to health outcomes.  
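That "marrying" of EMR extracts to outcomes really is a plain join, not AI. A minimal sketch, with entirely hypothetical tables and column names, to make the point concrete:

```python
import pandas as pd

# Hypothetical, simplified tables: an EMR extract and an outcomes feed.
emr = pd.DataFrame({
    "patient_id": [101, 102, 103, 104],
    "age":        [72, 55, 64, 68],
    "treatment":  ["drug_a", "drug_a", "drug_b", "drug_b"],
})
outcomes = pd.DataFrame({
    "patient_id":  [101, 102, 103, 104],
    "alive_at_2y": [True, False, True, True],
})

# "Marrying" EMR data to health outcomes is just an inner join on the patient.
rwe = emr.merge(outcomes, on="patient_id", how="inner")

# A simple real-world-evidence query: 2-year survival rate by treatment,
# over whoever actually showed up in practice, not a trial-selected cohort.
survival = rwe.groupby("treatment")["alive_at_2y"].mean()
```

The hard part, as the conversation makes clear, isn't the query; it's getting diverse, accurate data into one place so that queries like this reflect the general population rather than a skewed slice of it.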

So for this to happen, ultimately, all the data has to come into one place. I outline this in the post about real-world evidence. I have maybe a dream that we force the electronic medical record companies and the academic medical centers to submit their data into a data commons. And that data commons is accessible. It's accessible to investigators. It's accessible to companies. They have to pay a licensing fee. If they develop something good, they share royalties to help fund this thing. Ultimately, it could be self-sustaining without any problem.  

At the beginning, Congress is going to have to gift money to make this happen. But forming a data commons allows us to start realizing what questions we can ask and where we cannot, right? Now, the other thing having a data commons does is it forces the electronic medical record companies to come to some sort of grips with how they communicate information, to define the terms for data submission. And once you do that, you've solved what patients consider to be one of the greatest failures of the electronic medical record, and that is interoperability.  

We have failed to make it possible for one doctor to talk to another doctor, to a hospital, to get those scans. We've made it virtually impossible, and interoperability will benefit patients immediately. You can use AI to solve problems of syntax, to do real natural language processing, which to this point has not really happened. So those are short-term things that we could start thinking about doing, but we've got to get everybody involved. And the only way you're going to do it is with a stick. The government has to decide this is something we're going to do, and we're going to do it. Right? Because the approach so far just hasn't worked.  

Hennessy: As you were talking through all that, I was remembering EMR implementation in our practice. We were late to the game because we were convinced that if we went too early, we'd have a sort of ready-fire-aim solution where we didn't fix the bad processes and just hard-coded them. And it sounds like what we may have done is build this tool we could have had, but because EMRs are being used to capture billing information, not important clinical information, we're not getting everything we could out of them. So what lessons can we learn from how we implemented EMRs and how we're working with big to moderate data today? Are there ways to avoid that moving forward? And is AI a tool to help us avoid those things, or is it a tool to help us get past some of the mistakes we've made to date and maybe get us to a point where we're actually yielding what we'd hoped to from all these ones and zeros we're gathering?  

Dr Kolodziej: So I think the answer is both. Like I said, I think there are some short-term wins for AI that will help everybody. They may sound mundane, right? Fixing scheduling, for example. We know AI is going to have a clear role in radiology. There are short-term gains and there are longer-term gains. Perhaps I'm Pollyanna-ish, but I believe that the longer-term gains will really be to the benefit of virtually everybody, and that will accelerate.  

But again, my fear, especially with all the new technology stuff, is that if it hasn't been validated, then we're going to wind up falling in love with the technology before we know it's worth anything. It's so funny. A long time ago, I went to one of these HIMSS meetings, the meetings where all the tech people get together, and someone gave a talk. And this person had a big stack of papers and she said, "I brought these to my doctor's office, and it tells my blood pressure." I thought, what the heck? I mean, does your doctor have any other patients scheduled that day, for them to actually look at all that nonsense? Most of it is garbage. It's just noise. We have a lot of learning to do, but I think we can learn together. Validation, veracity, all critical to making this part of how we improve health care in America. And we just cannot let that profit motive get in the way. You can do well by doing good here. I really believe it.  

Hennessy: Our friend Kumar Rajan used to always say, "Good medicine is good business," and I think that's exactly what you're saying.  

Dr Kolodziej: Yep.  

Hennessy: Thank you for watching this installment of Breaking Down Health Care. We hope you enjoyed the conversation and learned something you didn't know about health care and how it works in the United States. If you have questions or topics you'd like Mike and me to discuss, you can use the Contact Us feature on the website. Tune in for future conversations, because we're just getting started. 

© 2024 HMP Global. All Rights Reserved.
Any views and opinions expressed are those of the author(s) and/or participants and do not necessarily reflect the views, policy, or position of the Cancer Care Business Exchange or HMP Global, their employees, and affiliates.