Review Author Urges Caution in Use of Current Clinical Prediction Models for Epilepsy

 

Corey Ratcliffe, PhD Student, The BRAIN Lab at the University of Liverpool.

Drug-resistant epilepsy affects 35% of patients diagnosed with the chronic brain disorder. Reliable clinical prediction models are a useful tool in neurology clinicians’ toolboxes, helping to guide patients toward effective treatment and improved quality of life. As one recent systematic review found, however, many of these prediction models are unreliable because of their high risk of bias.

Neurology Learning Network connected with study author Corey Ratcliffe, PhD student, BRAIN Lab at the University of Liverpool, to discuss the implications of these review findings, published in April 2024 in Epilepsia.

For more expert insights, visit the Epilepsy Excellence Forum here on Neurology Learning Network.

Editor’s note: This piece has been lightly edited for length and clarity.


Neurology Learning Network (NLN): What led you and your co-authors to research clinical prediction models for treatment outcomes in newly diagnosed epilepsy?

Corey Ratcliffe, PhD student: Prediction models are a key part of the clinical pathway and have tremendous influence over treatment decisions and patient quality of life. My senior authors (Dr. Laura J. Bonnett and Prof. Simon S. Keller) identified a gap in the literature related to the validation of the models in question, which was made more pertinent by the recent push to adopt standardizing frameworks such as the Transparent Reporting of a multivariable prediction model for Individual Prognosis or Diagnosis (TRIPOD) and the Prediction Model Risk of Bias Assessment Tool (PROBAST). I undertook the necessary study.

NLN: Your study highlights a universally high risk of bias in the prediction models you reviewed. Could you elaborate on the primary sources of bias and how they affect the clinical utility of these models?

Ratcliffe: The first is that, contrary to statistical practice, outcomes (i.e., seizures at a later timepoint) were often treated as baseline factors. The fallacy here is that an outcome is being used to predict an outcome, which could easily be avoided by including statisticians in the multi-disciplinary team behind a clinical prediction model paper. The second is that these prediction models are often based on retrospectively gathered, clinically operationalized features, which are inherently subjective and therefore poorly suited to systematic evaluation.
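To make the data-leakage pitfall concrete, here is a minimal illustrative sketch in Python with scikit-learn, using simulated data and hypothetical variable names (it is not drawn from any of the models in the review): a correctly specified model uses only covariates fixed at baseline to predict the later outcome.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500

# Hypothetical baseline covariates, all fixed at the point of diagnosis:
# age at onset, pre-treatment seizure count, and treatment delay in months.
X_baseline = np.column_stack([
    rng.normal(30, 12, n),   # age_at_onset
    rng.poisson(4, n),       # pretreatment_seizure_count
    rng.exponential(6, n),   # treatment_delay_months
])

# Outcome: hypothetical 12-month seizure freedom (1 = seizure free).
y = rng.integers(0, 2, n)

# Correct specification: every predictor is measured strictly before
# the outcome window begins.
X_train, X_test, y_train, y_test = train_test_split(X_baseline, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)
print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}")

# The fallacy described above would amount to appending, say, a
# "seizure at month 6" column to X_baseline: an outcome measured
# inside the prediction window leaking into the predictors.
```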

NLN: Seizure characteristics, epilepsy history, and age at onset were significant predictors across many models. Were there any surprising findings regarding these factors, or were they expected based on existing literature?

Ratcliffe: All three of those features, as well as (potentially) treatment delay, are recognized predictors of treatment outcomes in epilepsy. Any surprises would have been welcome!

NLN: The study mentioned the underrepresentation of electrophysiological and MRI findings in multivariable models. What are some potential reasons for this, and how might incorporating these factors improve future prediction models?

Ratcliffe: The depth and richness of the data provided by neurophysiological and neuroimaging methodologies are such that researchers globally are working, as we speak, on using them to quantify biomarkers associated with epilepsy phenotypes. Unfortunately, much of the MRI/EEG data routinely acquired in clinical practice is unsuitable for quantitative analysis, and research-level acquisition is extremely costly. Biomarker and network analyses based on these modalities have revolutionized the epilepsy research field in recent years, and whilst we are yet to see broad application to policy, evidence for the utility of such data is continually being provided. For a big-data overview, one should look to the ENIGMA-Epilepsy consortium.

NLN: Given the high risk of bias in current models, how do you see TRIPOD adherence improving the quality of future research?

Ratcliffe: When the methodological basis of a model is called into question, it casts doubt on the model’s utility. This does not mean the model is bad, just that it cannot be relied upon, either for informing treatment or for future synthesis. By standardizing both the formulation and the reporting of clinical prediction models, frameworks like TRIPOD can reduce potential confounds in the methodology and facilitate communication of the results.

NLN: For clinicians managing newly diagnosed epilepsy, what key takeaway from your study would you suggest they consider when predicting patient outcomes or deciding on a treatment path?

Ratcliffe: When possible, use prospectively gathered data, adhere to the relevant framework for your model, and include a statistician in your multi-disciplinary authorship team. This doesn’t have to mean abandoning retrospective data; rather, frameworks like TRIPOD encourage its use by setting out the limitations to address when reporting any retrospectively defined models.

Try to avoid clinical prediction models that have not been validated. A reliable feature identified with univariable modelling might be more suitable.
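As a rough sketch of what univariable modelling might look like in practice (again simulated data and hypothetical feature names, not a prescription from the study), one could fit the outcome against each candidate baseline feature in turn:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 500

# Hypothetical baseline features and outcome, as in the earlier sketch.
features = {
    "age_at_onset": rng.normal(30, 12, n),
    "pretreatment_seizure_count": rng.poisson(4, n).astype(float),
    "treatment_delay_months": rng.exponential(6, n),
}
y = rng.integers(0, 2, n)  # hypothetical 12-month seizure freedom

# Univariable modelling: fit the outcome against one candidate feature
# at a time, so each association can be inspected in isolation.
for name, x in features.items():
    coef = LogisticRegression().fit(x.reshape(-1, 1), y).coef_[0, 0]
    print(f"{name}: log-odds per unit = {coef:+.3f}")
```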


Corey Ratcliffe is a final-year PhD student researching advanced MRI biomarkers of seizure recurrence, using neurocysticercosis as a natural disease model. He is based in Simon Keller’s BRAIN Lab at the University of Liverpool, U.K., and works jointly with the National Institute of Mental Health and Neuro Sciences in Bangalore, India.


 
