
Using Algorithms for Population Health Management: Looking Under the Hood

Mary Beth Nierengarten

December 2019

Technology continues to play a prominent role in the evolution of health care delivery, but it comes with its own set of challenges. What happens when systems are built on faulty programming or inaccurate data, or when algorithmic calculations include biases?

Increasingly, health care, like other sectors of society, relies on algorithms built on big data and machine learning to generate protocols for health care delivery. Efficiency and improved outcomes may be the goal, but unintended consequences of relying on these algorithms can harm patients and the health care systems that deliver care.

This was highlighted in a recent study published in Science showing significant racial bias in an algorithm widely used by payers and hospitals to target higher-cost patients for more intensive case management services.

To determine which complex patients were good candidates for these more intensive case management services, the algorithm used health care costs as a proxy for health care needs, according to Brian W Powers, MD, MBA, a physician and researcher at Brigham and Women’s Hospital, Boston, MA, and an author of the study. He emphasized that using cost as a proxy in this setting opened the door to bias in the algorithm.

“If an algorithm that predicts cost is being used to allocate resources to those with the greatest need, it is going to disadvantage black patients and others who incur lower costs for any given level of health,” Dr Powers said.

Consequences of Racial Bias

In the study, researchers examined an algorithm used to identify high-cost patients with multiple chronic conditions and determine their enrollment in high-risk care management programs. The explicit goal of the algorithm was to “predict complex health needs for the purpose of targeting an intervention that manages those needs.”

The algorithm assigned each patient a risk score that determined whether they were a good candidate for enrollment. Patients with a risk score at or above the 97th percentile were identified for enrollment, and those with a score from the 55th to 96th percentiles were considered possible candidates (depending on further input from the patient’s physician). These risk scores predicted who would incur the most health care costs in the future.
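To make this triage rule concrete, here is a minimal sketch in Python of how such percentile cutoffs might be applied. The function name and the simulated scores are hypothetical; this is not the vendor’s actual code, only an illustration of the cutoffs as the study describes them.

```python
import numpy as np

def triage_by_risk_score(risk_scores):
    """Illustrative triage rule based on the cutoffs described in the study.

    Scores at or above the 97th percentile are auto-identified for
    enrollment; scores from the 55th to 96th percentiles are referred
    to the patient's physician for review.
    """
    scores = np.asarray(risk_scores, dtype=float)
    p55, p97 = np.percentile(scores, [55, 97])

    decisions = []
    for s in scores:
        if s >= p97:
            decisions.append("auto-identify for enrollment")
        elif s >= p55:
            decisions.append("refer to physician for review")
        else:
            decisions.append("no action")
    return decisions

# Example with 1,000 simulated risk scores
rng = np.random.default_rng(0)
print(triage_by_risk_score(rng.gamma(2.0, 1.0, size=1000))[:3])
```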

According to Dr Powers, the algorithm predicted cost equally well for black and white patients. What it did not take into account, however, is that black patients generally use the health care system less than white patients with similar risk scores, even when they are sicker. The fallout is that the algorithm flagged more white patients for enrollment in the high-risk management program when, on closer inspection of the data, black patients had more chronic illness than white patients but were flagged less often.

For example, Dr Powers and his colleagues found that black patients assigned scores at or above the 97th percentile had, on average, 26% more chronic illness than white patients at the same scores, yet only 17.7% of the patients the algorithm identified for enrollment in the high-risk management program were black. Had the algorithm been unbiased, substantially more of the enrolled patients, 46.5%, would have been black.
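The kind of check that surfaced this disparity can be approximated with a simple audit: within a given risk-score band, compare a direct measure of health, such as the number of chronic conditions, across groups. Below is a minimal sketch using pandas; the column names and data are hypothetical, and this neutral simulation only illustrates the mechanics of the check, not the study’s results.

```python
import numpy as np
import pandas as pd

# Simulated patient table; column names are hypothetical.
rng = np.random.default_rng(1)
n = 10_000
df = pd.DataFrame({
    "risk_score": rng.gamma(2.0, 1.0, n),
    "race": rng.choice(["black", "white"], n),
    "n_chronic_conditions": rng.poisson(3, n),
})

# Within each risk-score decile, compare mean chronic-condition counts
# across groups. A persistent gap at the same score would suggest the
# score is tracking something other than health need (e.g., cost).
# This neutral simulation shows a gap near zero; in the study's data,
# black patients had about 26% more chronic illness at the same score.
df["score_decile"] = pd.qcut(df["risk_score"], 10, labels=False)
audit = (df.groupby(["score_decile", "race"])["n_chronic_conditions"]
           .mean()
           .unstack("race"))
audit["gap"] = audit["black"] - audit["white"]
print(audit)
```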

Although Dr Powers and colleagues did not directly assess the impact of this bias on patients or the health care system as a whole, he said that some idea of the consequences can be extrapolated from what is known. 

“There is moderate-quality evidence that care management programs improve patient-reported health and, in some cases, prevent unnecessary emergency department and hospital admissions,” he said. “Since we know that using this type of algorithm will reduce the number of black patients identified and referred to these programs, I don’t think it is unreasonable to say that there are real consequences.”

Marshall Chin, MD, MPH, Richard Parrillo Family Professor of Healthcare Ethics, Department of Medicine, University of Chicago Medicine, Chicago, IL, who has written extensively on health disparities, including ensuring fairness in the use of analytics in health care, said that most health care systems should be looking for ways to make value-based decisions that emphasize both cost efficiency and improved health outcomes.

As such, the algorithm’s focus on cost without including health outcomes puts it out of step with the goals that should drive most health care systems.

“I would argue that for most managed care companies, cost is important but the mission is really value, and you want a balance between patient outcomes and cost,” said Dr Chin. “I would ask organizations what their true mission is, and if it is to maximize the value of patient care at efficient cost then an algorithm solely focused on cost is a problem in and of itself.”

Opening the Door to Building Better Algorithms

The need to better understand, build, and implement algorithms in health care, and in particular for population health management, is highlighted by the study’s findings of bias. According to the authors, the algorithm is not an outlier but is an example of a typical commercial high-risk prediction tool used on roughly 200 million people a year in the United States—one that payers and large health systems rely on to target patients for high-risk care management programs. These programs are used by many health systems to improve health outcomes and reduce cost for a population of patients that incur the highest cost burden on the health care system as a whole, especially patients with complex health needs.

Given their wide use, such algorithms direct the care of many patients, so their accuracy in identifying high-risk patients and determining who will benefit from the additional resources of high-risk care management is critical.

For Karen Handmaker, MPP, a population health management expert and Principal at 4sightHealth, Chicago, IL, the quality of the data used to develop an algorithm determines its predictive ability. “The model is only as good as what is being fed to it,” she said. “The more we recognize up front and the more we have information from outside the health system for inputs, the more we are able to know not only who is at risk but what they are at risk for and therefore what we can do about it.”

For population health management, she emphasized the need to include social determinants of health, whereas the existing algorithm included only clinical factors. Ms Handmaker cited the example of people with diabetes whose access to care and disease management may be compromised because they live in census tracts where people face challenges with transportation and food access.

“This could lead to greater risk of unmanaged risk factors and higher use of acute vs preventive health care services,” she said. “By incorporating these social determinants into the model, the results become more tailored and actionable for specific population segments and individuals.”
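As a rough sketch of what incorporating social determinants might look like in practice, tract-level measures can be joined onto patient records before a risk model is trained. The tract identifiers, column names, and values below are hypothetical, invented for illustration rather than taken from any model described in the article.

```python
import pandas as pd

# Hypothetical inputs: patient-level clinical features plus a
# census-tract lookup of social determinants of health.
clinical = pd.DataFrame({
    "patient_id": [1, 2, 3],
    "census_tract": ["17031010100", "17031010200", "17031010100"],
    "n_chronic_conditions": [4, 1, 6],
})
sdoh = pd.DataFrame({
    "census_tract": ["17031010100", "17031010200"],
    "low_food_access": [1, 0],
    "limited_transit": [1, 0],
})

# Join tract-level social determinants onto each patient record so a
# risk model can see nonclinical risk factors alongside clinical ones.
features = clinical.merge(sdoh, on="census_tract", how="left")
print(features)
```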

The lack of social determinants of health in algorithms created for population health management is also highlighted in an article published in Rand Health Quarterly, in which the authors conducted a literature review and spoke with subject matter experts about the use of analytics to identify and coordinate care for complex patients. Despite the growing number of health care organizations investing in and using algorithms to identify complex patients for specific interventions, the authors note that poor-quality data and the lack of data on social determinants of health are barriers to progress in analytics.

Agreeing that inputting the right data is crucial for building an algorithm, Dr Chin added that how an algorithm is tested prior to deployment, how it is deployed in the field, and how it works over time are all essential steps in what he calls a “longitudinal course of an algorithm.”

Underlying this entire “longitudinal course” is the need for organizations to be “intentional” in how they develop and use algorithms. “Organizations need to be intentional about thinking whether there are any ethical or nefarious issues raised by an algorithm,” he said. “Look at the goal of the model and determine if that is a worthy goal for diverse stakeholders.”

One way to ensure algorithms represent the health needs of patients, as well as cost concerns, is to include all stakeholders in the development of algorithms—providers, patients, and the public. “It is important to work together with the people on the front lines and other data sources available to incorporate nonclinical factors that can impact health and costs,” said Ms Handmaker.

In addition, organizations need to think carefully about what an algorithm is for and how it will be used. For Dr Chin, there are trade-offs that must be thought through. Do you want an algorithm that is focused on equal patient outcomes or one that ensures equal allocation among populations?
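That trade-off can be made concrete with a toy example: given a fixed number of program slots, ranking purely by predicted need yields one enrollment mix, while allocating slots proportionally across populations yields another. The groups, numbers, and column names below are invented for illustration only.

```python
import numpy as np
import pandas as pd

# Toy population: two groups with identical distributions of need.
rng = np.random.default_rng(2)
df = pd.DataFrame({
    "group": rng.choice(["A", "B"], 1000, p=[0.3, 0.7]),
    "predicted_need": rng.gamma(2.0, 1.0, 1000),
})
slots = 100

# Option 1: rank purely by predicted need; the enrolled mix can drift
# from population shares.
by_need = df.nlargest(slots, "predicted_need")

# Option 2: allocate slots in proportion to each group's population
# share, then take the highest-need patients within each group.
by_quota = (df.groupby("group", group_keys=False)
              .apply(lambda g: g.nlargest(
                  int(round(slots * len(g) / len(df))), "predicted_need")))

print(by_need["group"].value_counts(normalize=True))
print(by_quota["group"].value_counts(normalize=True))
```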

“I would guess that the vast majority of health organizations aren’t thinking about these issues in any detail,” Dr Chin said. But they may be starting to. 

In a follow-up blog published in Health Affairs, Dr Powers and colleagues describe a new initiative to address bias in health care algorithms, prompted by the response to their study. Soon to begin in conjunction with the launch of the Center for Applied Artificial Intelligence at the University of Chicago Booth School of Business, the initiative offers pro bono assistance to health systems and other groups to help detect and remove bias in health care algorithms.

“As algorithms become more widespread, I hope our findings serve as a reminder to pause and consider unintended consequences before implementation,” said Dr Powers. “I hope this may also be a call for clinicians and algorithm developers to collaborate more closely prior to implementation, to build algorithms that are more efficient, effective, and, as we show in the study, more equitable.”