Big Pharma Embraces Big Data: What This Means for Payers and Patients

Jeff Craven

Drug discovery can be costly and imprecise, and the FDA approval process can be lengthy. A 2016 analysis by Gail A Van Norman, MD, of the University of Washington, found that new drugs take an average of 12 years to be approved and that the total cost of bringing a drug through the approval process can exceed $1 billion.

Although many potential drug compounds go through different stages of approval, “very few finally reach patients,” Charles Karnack, PharmD, BCNSP, First Report Managed Care editorial board member and assistant professor of clinical pharmacy at Duquesne University in Pittsburgh, PA, said in an email interview, noting that approximately 1 in 1000 candidates ultimately reaches patients.

Because so few compounds reach patients, pharmaceutical companies need to bring more drugs to market and shorten the time it takes to get them there.

The pharmaceutical industry is ripe for innovation, having changed little in decades. Increasing success in the pharmaceutical market requires addressing factors such as the “number of drugs being brought to market, decreasing the time it takes to get to market, determining need and value for products, and developing strategies to increase use,” Catherine Cooke, PharmD, BCPS, PAHM, president of PosiHealth, Inc, and editorial board member, said in an interview with First Report Managed Care.

Ideally, pharmaceutical companies should deliver new treatments to payers and patients faster, more efficiently, and with less overhead. Many pharmaceutical companies are throwing their support behind large-scale data analytics to achieve these goals and overcome the traditional bottlenecks of the industry.

Changing R&D and Drug Delivery

“I think that what we’re seeing now is, really looking at these kinds of big data for health economics and outcomes evidence, I think that’s where I’m really seeing the use,” Dr Cooke said. “The pharmaceutical industry is using [it] to get a value proposition for their product.”

Making use of data to interpret real-world information, such as how patients are using health resources and whether they are adhering to medications, is valuable, Dr Cooke said. In addition, analyzing large data sets on real-world drug use aids post-marketing surveillance, showing companies how patients excluded from trials, such as older patients or those with rare diseases, respond to these treatments.

“Large data sets may show which therapies work and at a reasonable cost,” Dr Karnack said.

In the drug discovery process, automation can raise technical, organizational, and conceptual challenges, as in small-molecule drug discovery. Other companies have increasingly turned to specialty diseases for which few or no treatments exist, making use of expedited development and regulatory pathways, as in the case of chimeric antigen receptor T-cell and gene therapies.

Data from pharmaceutical companies may also inform post-approval benefits such as expanded indications, increased market access, and reimbursement.

“In many ways, especially with Medicare, when we think about people getting a drug benefit, there’s an ability to kind of get access and to analyze that data. So, we’re seeing not only the government and [academia], but also government and industry organizations working together to try to look at what’s happening,” Dr Cooke said. 

“It’s one thing [if] the drug gets approved and we think this is how it’s going to be used, but we really aren’t sure until it gets in the market how it really gets used, and what outcomes we see in the real world,” she added.

Increasing Efficiency

Big data analytics may also help clinicians treat patients as members of large populations rather than solely as individuals, said Tom Henry, chief data officer at Express Scripts, in an interview.

“Using machine learning and large, fully integrated data sets, we can identify patient or population health risks and prescribe the treatments necessary to improve patient health, at a very early stage in their health journey,” he said. “These capabilities allow us to see the best possible way to care for someone with heart issues, but more specifically, the best way to care for a specific, unique 45-year-old patient with heart issues. Based on this information, we can then not only predict where the patient may have health and financial risk, but also lower risk by closing the health and financial gaps that we are uncovering,” he said.
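To make the kind of risk scoring Mr Henry describes concrete, here is a minimal sketch in Python. The features, model, coefficients, and data below are invented for illustration; they are not Express Scripts' actual system, only an assumption of what integrated claims-style inputs might look like.

```python
# Minimal sketch of population-level risk scoring from integrated,
# claims-style data. All features, coefficients, and data are synthetic
# illustrations, not any payer's actual model.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 5_000

# Hypothetical integrated features: age, medication adherence (proportion of
# days covered), prior-year claims cost, and emergency-room visit count.
age = rng.integers(18, 90, n)
adherence = rng.uniform(0.2, 1.0, n)
prior_cost = rng.lognormal(8, 1, n)
er_visits = rng.poisson(0.5, n)

# Synthetic outcome: odds of a costly adverse event rise with age, poor
# adherence, and prior utilization.
logit = -4 + 0.04 * age - 2.5 * adherence + 0.0001 * prior_cost + 0.8 * er_visits
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = np.column_stack([age, adherence, prior_cost, er_visits])
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)
scores = model.predict_proba(X_test)[:, 1]
print(f"AUC: {roc_auc_score(y_test, scores):.2f}")

# Flag the highest-risk 5% of patients for early outreach.
cutoff = np.quantile(scores, 0.95)
print(f"Flagged for intervention: {(scores >= cutoff).sum()} patients")
```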

Genetic profiling and next-generation sequencing can also help treat a patient and cure a disease, such as in the case of immunotherapy in cancer treatment. “That type of data analysis can lead to population-based medicine—treating disease state in a population, [rather than] treating an individual patient,” Dr Karnack said.

“Population health management is where the future is headed, coupled with predictive analytics and gene-based therapies that will help us deliver personalized medicine,” Mr Henry said. “By analyzing behaviors through predictive analytics, understanding the most effective treatments for an individual’s genetic composition, and using devices that influence behavior outside of a doctor’s office, we will be able to intervene early to prevent negative outcomes and lower the cost of health care. We will essentially be able to prevent sickness in ways that we haven’t been able to before.”

Being able to predict which patients are costing money through poor outcomes and how to intervene would also be of huge benefit to payers, Dr Cooke said.

“What happens now, many times, is that the bad outcome or the cost happens and then we enroll them in a program,” she said. “If we can figure out who’s going to experience that cost or that bad outcome, we can intervene sooner.”

Overcoming Hurdles

Integrating artificial intelligence and machine learning into electronic health records (EHRs) as decision-making tools is the next step, which could eventually tie real-world outcomes to reimbursement. However, researchers first need to solve current problems, such as algorithms identifying statistically significant rather than clinically meaningful patterns in data. “Even in studies, we find differences that are statistically significant but they’re not clinically meaningful,” Dr Cooke noted.
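Dr Cooke's distinction is easy to demonstrate with simulated data (the numbers below are invented for illustration): two groups differ in mean blood pressure by half a millimeter of mercury, a difference no clinician would act on, yet the large sample size drives the p value far below 0.05.

```python
# Illustration of "statistically significant but not clinically meaningful."
# Data are simulated; a 0.5 mm Hg difference is far below any clinically
# relevant threshold, but with 100,000 patients per group it is highly
# statistically significant.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

control = rng.normal(130.0, 15, 100_000)  # mean systolic BP, mm Hg
treated = rng.normal(129.5, 15, 100_000)

t, p = stats.ttest_ind(treated, control)
diff = treated.mean() - control.mean()
print(f"mean difference: {diff:.2f} mm Hg, p = {p:.2g}")
# Typical output: a sub-1 mm Hg difference with p << 0.05.
```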

“It’s interesting because in data mining, where you don’t really have a question, you’re just kind of going around and seeing if anything comes to the surface—which isn’t necessarily a bad thing to do,” Dr Cooke said. “You’re going to find something, but the question is whether that finding is meaningful. There’s a balance—data mining may be good for signals, but I think we’ve really got to think about what we’re trying to achieve by looking at the status.”

As machine learning and artificial intelligence get better at analyzing existing data, another problem to solve is accounting for patients who are lost to follow-up, such as people who do not fill their prescriptions and may not present back at a center until the next appointment, if at all. “You can look at how people are using care, but what you are missing is the people [who] are not necessarily using care,” Dr Cooke said. “You need to have kind of some denominator of the people that you are trying to look at, not just those accessing care.”
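A hypothetical illustration of this denominator problem, with invented figures: the same adherence measure looks very different depending on whether it is computed over patients who appear in claims or over everyone the plan covers.

```python
# Sketch of the "denominator" problem Dr Cooke describes: rates computed only
# from patients who show up in the data overstate engagement. All numbers are
# hypothetical.
eligible_members = 10_000        # everyone the plan is responsible for
filled_prescription = 6_500      # members with at least one fill in claims
adherent_among_filled = 5_200    # of those, members meeting an adherence threshold

# Looking only at people who appear in the claims data:
print(f"adherence among utilizers: {adherent_among_filled / filled_prescription:.0%}")  # 80%

# Using the full eligible population as the denominator:
print(f"adherence among all eligible: {adherent_among_filled / eligible_members:.0%}")  # 52%
```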

Additionally, what data is being collected matters. Payers have a large amount of data, but greater insight lies beyond simply analyzing claims data. “I think we have to be creative about how we get data,” Dr Cooke said. “We [have] to think beyond just claims and talk about linkages to other data sources than narrowly just what insurance claims would be—community databases where [there] is more information about socioeconomic demographics or other factors that might influence outcome curves.”

Aggregating this data has the potential to create a fuller picture of a patient’s experience. However, the data is spread among a wide variety of sources, which may make cataloging difficult, Mr Henry said. “Right now, if someone were to create a comprehensive patient profile, they would need to pull from multiple data sources across the care continuum, including capturing events that take place outside of doctors’ offices,” he explained.

In addition, the types of data being combined can vary and do not always match up; industry consensus estimates that up to 80% of the data in EHRs are unstructured. Restrictions on the data, such as limits on what is shared and data gatekeepers choosing not to share negative information, can further complicate creating a comprehensive patient profile.
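As a toy example of why unstructured notes resist integration, the snippet below pulls one structured fact out of an invented free-text note with a single regular expression. The note and pattern are purely illustrative; production systems rely on full clinical natural language processing pipelines rather than regexes.

```python
# Toy illustration of extracting structured data from unstructured EHR text.
# The note and pattern are invented for demonstration only.
import re

note = (
    "Pt reports occasional chest tightness. Continue metoprolol 50 mg BID. "
    "Discussed smoking cessation; pt declined referral."
)

# Extract drug name, dose, and frequency into a structured record.
pattern = re.compile(r"(?P<drug>[A-Za-z]+)\s+(?P<dose>\d+\s*mg)\s+(?P<freq>BID|TID|QD|QID)")
match = pattern.search(note)
if match:
    print(match.groupdict())
    # {'drug': 'metoprolol', 'dose': '50 mg', 'freq': 'BID'}
```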

Some organizations are already attempting to solve this problem of integrating disparate data sets. The Oncology Research Information Exchange Network and the American Association for Cancer Research’s Project GENIE are pursuing data linkage across multiple institutions. Cochrane’s Project Transform aims to use its data more collaboratively, allowing researchers to identify randomized controlled trial reports and maintain continuously updated systematic reviews. Express Scripts is also working on its own patient profiling systems, Mr Henry said.

“Express Scripts is working to create individualized, comprehensive patient health journeys by integrating prescriber data, patient activity and behavioral data, medical data, and pharmacy data,” Mr Henry explained. “[The aim is to] provide decision makers with the information needed to make the most clinically appropriate recommendations... for their patients.”
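A rough sketch of what such integration looks like mechanically, assuming a shared patient identifier across sources; the tables and fields below are invented, and a real system would also have to handle identity matching, consent, and de-identification.

```python
# Hedged sketch of linking pharmacy, medical, and activity data on a shared
# patient identifier. Tables and fields are invented for illustration.
import pandas as pd

pharmacy = pd.DataFrame({
    "patient_id": [1, 2, 3],
    "drug": ["metformin", "atorvastatin", "lisinopril"],
    "days_supply": [90, 30, 30],
})
medical = pd.DataFrame({
    "patient_id": [1, 2, 4],
    "diagnosis": ["E11.9", "E78.5", "I10"],   # ICD-10 codes
})
activity = pd.DataFrame({
    "patient_id": [1, 3],
    "avg_daily_steps": [6200, 3100],
})

# Outer joins preserve patients who appear in only some sources -- the
# "denominator" concern again.
profile = (
    pharmacy.merge(medical, on="patient_id", how="outer")
            .merge(activity, on="patient_id", how="outer")
)
print(profile)
```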