What is an empirically-supported clinician?
In Garrison Keillor’s fictional Lake Wobegon, all the children are above average. While this is an amusing characterization, it is not entirely far-fetched in terms of how some groups of people assess themselves.
Take professionals, for example. People may be well aware of their personal foibles, but many embrace a new sense of above-average identity upon attaining advanced education and professional credentials. The question within behavioral health is no different: How effective are clinicians as therapists? And what would we do if we knew the answer?
The immediate puzzle is how we would calculate the effectiveness of therapists. In research studies that evaluate the effectiveness of medications and psychotherapies, the gold standard is to use patient self-report questionnaires to judge change before and after treatment. Are these measures acceptable for evaluating clinicians in real-world practice?
Primary care has numbers
The reality is that a visit to our primary care physician involves measuring our weight, our temperature, our blood pressure, and many other health risk factors using biological markers in urine and blood samples. Can we take the same approach with psychotherapy? Should we?
My training as a clinician taught me about having good clinical judgment. I should be constantly evaluating how my client is responding and then modifying my response based on this. For example, if my client is developing a powerful transference relationship, then I have been trained to know what to do. If my client is stuck in negative thoughts or inactivity, then I know how to address cognitive distortions and encourage behavioral activation. Measurement was never an issue.
But measurement is actually a very big issue. Are our organizations’ clinical interventions valuable or worthless, objectively speaking? As we strive to control the costs of medical care, should our services be cut? Is psychotherapy efficacious based on more than half a century of clinical research?
In fact, we have a definitive answer to this question, but most practicing clinicians actually have no idea what it might be because there is a wide gulf between clinical research and practice. However, the good news is that psychotherapy is remarkably efficacious.
One way to understand this is to read a book by Bruce Wampold, “The Great Psychotherapy Debate,” with its second edition published in 2015. While the good news about the effectiveness of psychotherapy should put you in a good mood, Wampold’s sobering companion message is that no model of psychotherapy is more effective than another.
So those of you who champion cognitive behavioral therapy (CBT) or psychodynamic therapy, or whatever technique you have invested in, should adopt some humility about your superior techniques. This does not mean that clinicians shouldn’t continually try to improve their skills, but rather that research has not been able to sort out any clear winners in terms of treatment approach.
I won’t belabor the method of getting to this conclusion, but Wampold uses meta-analytic statistical analyses to determine what all of the strong clinical studies show about competing clinical approaches.
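To make that logic concrete, here is a minimal sketch in Python, using made-up study data and a simplified fixed-effect model rather than Wampold’s actual analysis, of how effect sizes from head-to-head trials of two therapy models can be pooled. A pooled difference near zero, with a confidence interval spanning zero, is the statistical pattern behind the “no clear winner” conclusion.

```python
# Illustrative sketch only: pooling standardized mean differences (Cohen's d)
# from hypothetical head-to-head studies of two therapy models, using a
# simple fixed-effect inverse-variance average. Numbers are invented.
import math

# Each tuple: (effect size d favoring model A, total sample size n).
studies = [(0.10, 80), (-0.05, 120), (0.02, 60), (0.08, 150)]

def pooled_effect(studies):
    """Inverse-variance weighted mean effect size with a 95% confidence interval."""
    num, den = 0.0, 0.0
    for d, n in studies:
        # Approximate variance of Cohen's d for two equal groups of n/2.
        var = 4.0 / n + (d ** 2) / (2.0 * n)
        w = 1.0 / var
        num += w * d
        den += w
    mean_d = num / den
    se = math.sqrt(1.0 / den)
    return mean_d, (mean_d - 1.96 * se, mean_d + 1.96 * se)

d, ci = pooled_effect(studies)
print(f"Pooled d = {d:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
# A pooled difference close to zero, with a CI that spans zero, is the kind
# of result that underlies the "no model is clearly superior" finding.
```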
The differentiator
If clinicians can’t proudly promote their practices on the claim that they have great expertise in CBT, and accordingly charge ever-increasing fees for service, then what is the marketing claim? If the recommendation for consumers is to care less about empirically-supported treatments (because none appears superior in clinical comparisons), then what is the basis for selecting a treatment if you are someone in need?
The unfortunate circumstance is that the clinical results of the individual clinician seem to make the most difference. Yet no one seems to have access to any such results, because few have bothered to measure them.
The real world of psychotherapy is less Lake Wobegon and more bell curve. Most clinicians get good results, some get excellent results, and some get very poor results. Differences in outcome within the large middle range of the bell curve are small, but Wampold found that outcomes in the top quartile of clinicians are more than twice as large as those in the bottom quartile.
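To show what measuring this could look like, here is a hedged sketch using simulated data, not Wampold’s or PacifiCare’s datasets: an average pre-to-post change score is computed for each clinician from routine outcome questionnaires, and the top and bottom quartiles of clinicians are then compared.

```python
# Illustrative sketch only: hypothetical clinicians and clients, with change
# scores drawn at random. The point is the method (per-clinician averages,
# then a quartile comparison), not the particular numbers.
import random
import statistics

random.seed(0)

clinician_means = []
for _ in range(100):                          # 100 clinicians
    skill = random.gauss(8.0, 3.0)            # clinician's "true" average benefit
    changes = [random.gauss(skill, 6.0) for _ in range(30)]  # 30 clients' change scores
    clinician_means.append(statistics.mean(changes))

clinician_means.sort()
bottom = clinician_means[:25]                 # bottom quartile of clinicians
top = clinician_means[-25:]                   # top quartile of clinicians

print(f"Bottom-quartile mean change: {statistics.mean(bottom):.1f}")
print(f"Top-quartile mean change:    {statistics.mean(top):.1f}")
# With a spread like this, top-quartile clinicians show roughly twice the
# average improvement of bottom-quartile clinicians -- the bell curve in practice.
```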
You can consult Wampold’s work, or that of Michael Lambert, or my work in managed care at PacifiCare Behavioral Health, which resulted in several peer-reviewed articles. Again, this is not intended to give you all the boring source material, but rather to address how we think about our work. We should be concerned less about empirically-supported treatments and more about empirically-supported clinicians. But what is an empirically-supported clinician?
The question today is whether we dare to measure results. If your clinicians are in the bottom 10th percentile for clinical outcomes, what do you do? Is ignorance better, since the data may suggest that no one should trust their problems to your organization? Of course, since most clinicians are helpful (remember that psychotherapy is remarkably efficacious), should you really avoid measuring results out of fear that your organization sits at the extreme left tail of the bell curve? This is a difficult choice.
Just trust me
There is a less difficult choice for the field of psychotherapy. Healthcare reform is coming whether we like it or not, and it will be based on increasing healthcare quality and lowering the cost of care. While research about the effectiveness of psychotherapy is quite encouraging, reform will question the results of real-world work with specific populations.
If psychotherapists choose to embrace a position of “we don’t measure results, just trust me,” then the judgment of the marketplace will be to treat psychotherapy as a mere commodity to be discounted at every opportunity. If we don’t measure our results, we will be treated as an undifferentiated ancillary service, alongside real interventions that have a demonstrated value.
There are good and bad practitioners in every profession, and psychotherapists need to wake up to the reality that health reform wants to know what works and what doesn’t, as well as who succeeds and who doesn’t. Psychotherapists have been trained to obsess about being faithful to a particular treatment model. This is not a bad thing, except insofar as it blinds us to the larger demand to demonstrate the value of our services.
People go into treatment wanting relief from the pain and discomfort of their life, not because they want to embrace a particular treatment philosophy. We need to show that we can provide that relief. The odd thing is that we can, we can prove it, and yet we generally don’t bother to do so.
Living in the artificial world of Lake Wobegon may be comforting in some sense, since everyone is acknowledged to be above average, but I would rather live in a bell-curve society where we acknowledge that skills exist on a continuum and I can tell whether the person in front of me is good or great. Life is, of course, more complex than statistics. One may choose a clinician for personal, idiosyncratic reasons rather than for marginally better statistical results.
This is the full measure of human judgment. Statistics matter, but they are not everything. We choose, however we decide to choose. Let’s just give the best picture we can of the options before us.
Ed Jones, PhD, is the senior vice president for Strategic Planning at the Institute for Health and Productivity Management.