The Ticking Bomb of Bad Health Data
Our industry is using digital technology to deliver better patient outcomes at lower cost, but a byproduct of that success is the generation and propagation of bad data that, over the longer term, may threaten many of those benefits.
In fact, health care is a veritable bad-data engine compared with other industries, yet we should be the least willing to accept it.
The World Is Awash In Bad Data
Most industries have begun their digitization and digitalization journeys, and many, such as financial services, telecom, and technology companies, are well on their way. This digital transformation has been uneven because it is being accomplished in real time: companies do not have the luxury of ideal circumstances, such as a pause in business while a strategy is explored and fine-tuned. The benefits of using data are far too numerous and too impactful to wait for perfect conditions.
A major side effect of this transformation has been the generation of bad, or “dirty,” data (also called rogue data) that are inaccurate, incomplete, or inconsistent. “Bad data is the norm,” declared MIT Sloan Management Review only a few years ago in an article estimating that it costs most companies between 15% and 25% of revenue. Gartner estimated that poor data quality costs the average company $15 million per year.
Health care tracks with these broad statistics (70% accuracy is an unofficially acceptable rate in many industries, and a recent JAMA study put electronic health record errors at 80%), but there is a huge difference between being denied a loan and receiving the wrong medical treatment. While estimates that medical errors in general are the third leading cause of death in the US may be overblown, they speak to the life-and-death nature of health-related decisions and the heightened importance of accurate data.
Bad data in health care is not just an error or expense. It is a ticking bomb ready to disrupt care.
The Unique Data Challenges For Health Care
As noted earlier, we are good at generating bad data, in large part because of the constant evolution of patients’ conditions and life circumstances, as well as changes driven by advances in science and shifts in the regulatory environment. Each day brings more change and new challenges.
There are any number of ways bad data can be captured and then applied across the spectrum of health care services. Patient identity mix-ups top the list; they can lead to missed diagnoses, incorrect treatments or prescriptions, and breakdowns in coverage and payment services. Remote data collection, including visits to unconventional care sites during the pandemic, complicates the challenge further, as do the explosion in health wearables, the changing science underlying diagnoses and treatments, and the ever-evolving definition and number of diagnosis and procedure codes (67,000 and 87,000, respectively, by one count).
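To make the identity problem concrete, below is a deliberately naive matching sketch in Python. The field names are hypothetical, and real master patient index (MPI) systems score many more attributes probabilistically; shortcut matching like this is exactly the kind of logic that produces mix-ups and duplicate records.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PatientRecord:
    # Hypothetical fields; real systems compare many more attributes.
    first_name: str
    last_name: str
    dob: str        # ISO date, e.g., "1980-04-02"
    zip_code: str

def likely_same_patient(a: PatientRecord, b: PatientRecord) -> bool:
    """Naive deterministic match on normalized name plus exact DOB.

    Misses nicknames (Bob vs. Robert), maiden-name changes, and
    keying typos, so the same person ends up with duplicate records.
    """
    def norm(s: str) -> str:
        return s.strip().lower()
    return (
        a.dob == b.dob
        and norm(a.last_name) == norm(b.last_name)
        and norm(a.first_name) == norm(b.first_name)
    )
```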
Were health care just another industry undergoing change, we might comfortably apply the “1-10-100 rule” to meeting this challenge, which posits that every $1 not spent to verify data at the point of entry can lead to a $10 expense to clean it up as it is processed and a $100 expense if it is allowed to affect operations or customers.
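As a back-of-the-envelope sketch only (the error volume and catch rates below are invented for illustration), the rule translates into a few lines of Python:

```python
# Unit costs from the 1-10-100 rule described above.
COST_AT_ENTRY = 1      # error caught as the data are entered
COST_TO_CLEAN = 10     # error cleaned up during processing
COST_DOWNSTREAM = 100  # error reaches operations or patients

def projected_cost(errors: int, share_at_entry: float,
                   share_in_processing: float) -> float:
    """Expected total cost given the share of errors caught at each
    stage; whatever is not caught escapes downstream."""
    share_escaped = 1.0 - share_at_entry - share_in_processing
    return errors * (share_at_entry * COST_AT_ENTRY
                     + share_in_processing * COST_TO_CLEAN
                     + share_escaped * COST_DOWNSTREAM)

# 10,000 errors: strong vs. weak verification at the point of entry.
print(projected_cost(10_000, 0.90, 0.08))  # 37000.0
print(projected_cost(10_000, 0.50, 0.30))  # 235000.0
```

Even with made-up numbers, the asymmetry is the point: a dollar spent at the point of entry is leveraged roughly a hundredfold against errors that would otherwise reach operations and patients.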
Consider the profound impact on patient health illustrated by one real-world example: health officials in Idaho recently reported that hundreds of the state’s reported COVID-19 cases were misclassified due to data entry issues. Now consider the long-haul journeys some of those patients may undertake over the coming months and years, and imagine the difficulties of access, or mistakes in care, that could result.
The potential cost in human terms of even one bit of bad data should be too much for us to bear.
The Journey to Better Data
Just as it has taken time for the problem of bad data to emerge, so too will it take time to solve, and, as with the journey of digital transformation itself, the work will have to happen within the context of uninterrupted patient care. The ultimate solution is a single patient identifier that is consistent nationally and ensures everyone is speaking the same language. In the meantime, there are at least three things we can do, all of which have already been implemented and vetted by other industries on paths similar to ours.
First, set up more accuracy checkpoints within and between systems. Other industries have done this successfully (you see it, for instance, every time you use a credit card online and your identity is confirmed within a few seconds), so it is not true that checkpoints must slow processing or care. Technology solutions exist for this, along with established processes for human intervention if and when it is needed.
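As a sketch of what such a checkpoint might look like (in Python, with hypothetical field names such as mrn, dob, and zip), the pattern is a shared validation routine at every handoff, with failing records routed to human review rather than passed along silently:

```python
import re
from datetime import date

def checkpoint(record: dict) -> list[str]:
    """Return the problems found; an empty list means the record may
    pass to the next system, anything else goes to human review."""
    problems = []
    if not record.get("mrn"):
        problems.append("missing medical record number")
    if not re.fullmatch(r"\d{5}(-\d{4})?", record.get("zip", "")):
        problems.append("malformed ZIP code")
    try:
        if date.fromisoformat(record.get("dob", "")) > date.today():
            problems.append("date of birth is in the future")
    except ValueError:
        problems.append("unparseable date of birth")
    return problems

# Example handoff: this record would be held for review.
print(checkpoint({"mrn": "A12345", "dob": "1985-13-01", "zip": "8370"}))
# ['malformed ZIP code', 'unparseable date of birth']
```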
Second, look beyond your own four walls for help. Again, the idea that you must invent the solutions yourself and/or manage them is not supported by the facts: whether you choose to use a third party to vet your data or push certain vetting responsibilities out to your sources of data (independent facilities, doctor office visits, and so on), the tools for doing so are established and in common use.
Third, stop making the problem worse. Every data error preempted at the point of entry is one less error that will make its way through the system and potentially affect care later on. The best way to move forward on the journey to better data is to defuse the bombs of bad data before they start ticking.
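One established way to preempt errors at entry is a check digit on identifiers, the same idea credit card numbers rely on. The sketch below uses the Luhn algorithm; whether any given health identifier scheme actually uses Luhn is an assumption made purely for illustration.

```python
def luhn_valid(identifier: str) -> bool:
    """Luhn check-digit test: catches most single-digit typos and
    adjacent transpositions at the moment a number is keyed in."""
    digits = [int(c) for c in identifier if c.isdigit()]
    if len(digits) < 2:
        return False
    total = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:  # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

# An entry form can reject the typo on the spot instead of letting
# it propagate through downstream systems.
assert luhn_valid("79927398713")       # canonical valid test number
assert not luhn_valid("79927398714")   # single-digit typo is caught
```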
Disclaimer: The views and opinions expressed are those of the author(s) and do not necessarily reflect the official policy or position of the Population Health Learning Network or HMP Global, their employees, and affiliates. Any content provided by our bloggers or authors is their opinion and is not intended to malign any religion, ethnic group, club, association, organization, company, individual, or anyone or anything.