What is a paradigm shift?
The idea of a paradigm shift has been hijacked by advertising agencies and used to describe everything from a new smartphone to a lower-fat cheese. But in science, the idea of a paradigm shift has a rather more nuanced meaning.
Physicist and philosopher Thomas Kuhn coined the term in 1962 in his book “The Structure of Scientific Revolutions”. He argued that science does not progress in a linear fashion, gradually accumulating knowledge; rather, it progresses through leaps followed by periods of consolidation.
Science progresses through the development of theories that describe observable phenomena. The great thing about scientific theories is that they make predictions which can be tested: the better the theory, the more accurate the predictions. Unfortunately, the better we know an area, the more we push our theories to the very limits of their applicability, and then something strange but predictable happens: the theories start to fail around the very edges of our knowledge.
A few unusual findings do not fundamentally alter a theory, but when scientists see a pattern of new knowledge failing to fit a known theory, it is a clear sign that the theory is either wrong or, at best, incomplete.
Ptolemaic cosmology served humankind brilliantly for millennia. It survived the rise and fall of dozens of empires, and fuelled those empires by allowing their farmers to know when to plant crops, thanks to its precise and accurate predictions of the seasons and the months. Unfortunately, its predictions of the movements of other objects in the heavens were flawed, eventually leading Copernicus to realise that the Sun, not the Earth, was at the centre of the solar system.
Similar paradigm shifts have occurred with Pasteur’s germ theory of disease, Einstein’s general relativity, Mendel’s understanding of genetic heritability, and a host of others.
Psychotherapy: stuck for 60 years.
It is my contention that psychotherapy research is sitting on the precipice of a paradigm shift that needs to happen. It seems that almost every other week we as therapists are hit with another piece of research evidence that does not fit neatly into our known worldview and our theory of how therapeutic change occurs. These results do not appear to be fabrications and are not driven by shoddy science, yet our current theories are not adequate to explain them.
The evidence-based practice paradigm, which I believe is now nearing the end of its life, has served psychology well since its inception at the Boulder conference in 1949. In brief, this paradigm holds that psychological disorders can be categorised; that targeted treatments can be developed for each disorder; and that these interventions can then be rigorously evaluated using the scientific method to determine their effectiveness.
It is this paradigm that has given us the dominant model of cognitive and behavioural therapy (CBT), but it is also this paradigm that has given us Interpersonal Therapy (IPT), Acceptance and Commitment Therapy (ACT), Schema Therapy, Dialectical Behaviour Therapy (DBT), Eye Movement Desensitisation and Reprocessing (EMDR), and about 150 others.
When I say this paradigm has served us well, I mean it. Thanks to the rigorous evaluation work of literally thousands of psychotherapy researchers, we are now able to quantify the effectiveness of psychotherapy. We are able to demonstrate how effective therapy is compared to drugs (often as good, sometimes better), and we as practitioners are able to consult a canon of evidence-based resources that gives us a basic foundation for treating just about any client presenting with any known disorder.
The problem is that the paradigm is starting to break down around the edges as new findings emerge. There are now a number of indisputable truths in psychotherapy research that do not fit neatly within the evidence-based practice paradigm and are too robust to be explained away as idiosyncratic results due to experimental error.
Problem 1 – Technique does not seem to matter
Evidence-based practice (EBP) worked brilliantly when we were only comparing a single technique to a wait-list, to no treatment, or to medication. CBT seemed to nearly always deliver the goods. When we introduced other techniques, EBP still worked really well: they worked too. As therapists, we seemed blessed with an abundance of riches. Where things started to go wrong was when we began comparing treatments to each other. The result was nearly always the same: a statistical dead heat.
Problem 2 – Even non-techniques seem to work
Surely I am not the only one who finds it troubling that some of the non-techniques we have used as comparison groups in our research seem to work just as well as the actual techniques they are being compared to. The most famous of these is Interpersonal Therapy, which emerged from its status as a placebo sham therapy in the NIMH Treatment of Depression Collaborative Research Program (TDCRP) into a full-blown evidence-based treatment in its own right. For depression in particular, it seems that nearly anything works, whether it has a scientific underpinning or not.
Problem 3 – Our techniques don’t seem to always work for the reasons that they should work
For our theories to be valid, the mechanism of change should be the one the theory predicts. EMDR is one of the best examples of the failings of process research in our field. Supposedly, the eye movements during trauma exposure allow the brain to reprocess the trauma in a less overwhelming fashion. Unfortunately, the treatment seems to work whether or not you include the eye movements. But before we get on our high horse and start attacking EMDR, try looking at the process research on some of the other therapies in the canon of evidence-based practice, and tell me how often they can demonstrate that they work for the reasons they should.
Problem 4 – Diagnosis doesn’t seem to matter
In medicine, diagnosis is critical. Some types of cancers respond better or worse to some types of cancer treatments. Getting the diagnosis correct is a critical part of the treatment protocol, and can literally be the difference between life and death. However, in our largest treatment studies, diagnosis doesn’t seem to predict outcome, and despite a significant investment of time and research into the idea of treatment matching, it doesn’t look as though we can predict which clients respond to which treatments with any degree of consistency.
Problem 5 – Placebo is huge (both placebo and allegiance)
If our EBP paradigm were correct, it would predict that exposure to an evidence-based treatment would be one of the leading contributors toward a positive outcome. However, our research seems to indicate that the degree to which patients believe therapy is going to be beneficial is at least as important and possibly more important than exposure to treatment.
This is not to denigrate the placebo effect. One of my favourite papers is titled “If the placebo effect is all in your head, does that mean you only think you are getting better?”. We know that placebo is real and is a genuine contributor to outcomes right across the health professions. However, the comparative size of the placebo effect in psychotherapy should give us pause, if nothing else.
There is also another placebo-like phenomenon, the allegiance effect: whether or not the therapist believes in the treatment. When the therapist believes in a treatment, it is more likely to work. Both of these findings should be hugely troubling for our current paradigm.
Problem 6 – Outcomes are stuck and have been for 60 years
The beauty of a functioning paradigm is that it allows progress to be made. When JFK said we would put a man on the moon, no one had any doubt it was possible: the physics worked, the materials science worked, the computers worked. There was a question as to whether we could do it that quickly, but no question that it could be done. Can we put a man on Mars? Absolutely. Can we send a man to Alpha Centauri? …
Well, now we are reaching the limit of that paradigm. Until someone comes along and figures out a new propulsion mechanism, and a way around the speed limit imposed by special relativity (and the time dilation effects that go with it), we are stuck pretty close to our own sun for now.
In psychotherapy, the equivalent questions are: Are you sure that psychotherapy works? Absolutely. Can you tell me how to make therapy more effective for more people? …
Again we bump into the limit of what is currently known.
The outcome research suggests that we have not improved our overall effectiveness since the 1950s. This screams of a field that has hit the limits of its current paradigm and is waiting for the innovation that will carry it forward.
Problem 7 – Inconvenient findings from other disciplines keep emerging that don’t fit the model
Like many psychologists, I had happily accepted the received wisdom that stomach ulcers were caused by an acid build-up brought on by stress. I will be the first to put my hand up and say that in the late 1990s and early 2000s, I, along with thousands of others, delivered stress management programs to ulcer sufferers.
In 2005, two Australian researchers from Perth, Marshall and Warren, were awarded the Nobel Prize in Medicine for their pioneering work demonstrating that most ulcers are in fact caused by the bacterium Helicobacter pylori, and are far more efficaciously treated with antibiotics than with therapy.
There is currently mounting evidence that the disorders we commonly know as anxiety and depression are strongly linked to the types of beneficial bacteria living in the gut (the microbiome theory), and that is without even beginning to scrape the surface of the findings coming out almost weekly from our colleagues in neuroscience, genetics, and epigenetics.
So what is the place for psychotherapy going forward?
Clearly, psychotherapy works. The data are too voluminous and the methodology too rigorous to refute that canon of research. However, the problems listed above have created more than a healthy scepticism, and for some have led to downright cynicism.
Too often when I am presenting workshops, I hear a downright dismissal of the scientific method. In these scenarios, a therapist will typically point out that the approach they use “might not be scientific, but I just see how helpful it is with all of my clients”. When I try to call them back to the body of science, they will point to any one of the recognised limitations listed above and argue that the scientific method is clearly wrong-headed, and that I’m standing on shaky intellectual ground defending it.
While I can understand this line of reasoning, it is fundamentally flawed. The scientific method is not broken; rather, our EBP paradigm is showing its age, and there is currently nothing better to replace it.
I don’t have the answer, and I am not writing this paper to propose what the next paradigm should be. But if you are a smart young PhD student right now, I would encourage you to at least consider taking this on as a challenge, because one of you is going to come up with a theory of psychopathology and of therapeutic change that both deepens our understanding of why therapy is so successful and explains why it doesn’t seem to work exactly as our current theory would predict.
Rather, I write this paper to consider the best course for the professional therapist right now. We know we are working within a paradigm that has probably reached its use-by date, but we also know that there is nothing new to replace it on the immediate horizon.
“Technique doesn’t matter, and it’s all about the relationship” is, in my opinion, a misreading of the data, and I fear we are entering a period in which unprofessional, unscientific, and in some cases unethical treatments are being offered by well-meaning therapists simply because they have thrown the baby out with the bathwater. That the current paradigm is failing around the edges does not mean it is fundamentally wrong; it simply means that more work is needed to reach a more complete understanding and to develop a theory of therapeutic change that can account for the current model’s limitations.
In the meantime, “practice-based evidence” has become more important than ever. If you don’t trust the current research findings, great: conduct your own research. It is my contention that, amid the current uncertainty, it is now more important than ever to conduct routine outcome evaluation.
Just because our current paradigm has reached its practical limits does not mean that the scientific method has.