I was recently asked to respond to the question of whether it has been “proven” that “General Psychologists” get the same outcomes as “Clinical Psychologists”, and that there is in fact no value in Master’s-level training.

 

I have been of the opinion for some time that a number of people have been selectively presenting the data that speaks to these issues.  On both sides of the debate, I have seen over-statement of findings in order to support particular arguments. My intention here is simply to present the papers I consider add the most value to this discussion and give my view of what can and can’t be gleaned from them.

 

For ease, I have presented the following blog as 4 myths.

 

Myth 1: There is no evidence that post-graduate training in psychotherapy improves outcomes.

 

Stein & Lambert (1995). Graduate training in psychotherapy: Are therapy outcomes enhanced? JCCP, 63(2), 182–196.

 

Stein and Lambert conducted a meta-analysis of outcomes (dropout, improvement, and satisfaction) in studies where patients had been allocated to receive treatment from therapists with different levels of training.  The meta-analysis included 35 studies and represents over 19,000 patients. Overall, there was a moderate and significant effect of therapist training on dropout, improvement, and client satisfaction. Of particular note, the effect is strongest when the difference in training is greatest.  The authors highlight the study by Kapoian (1981), who compared BA-level and sub-BA-level therapists with masters- and doctoral-level therapists and found an effect of d = 0.6 for dropout (64% of clients dropped out in the less-trained group, compared to 23% in the more highly trained group).
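The dropout figures give a feel for how large that effect is. As an illustration only, here is Cohen’s h, a standard effect-size measure for a difference between two proportions (note this is not necessarily the metric the paper itself reports, so the number need not match the d = 0.6 above):

```python
import math

def cohens_h(p1: float, p2: float) -> float:
    """Cohen's h: effect size for the difference between two proportions,
    computed on the arcsine-transformed scale."""
    return 2 * math.asin(math.sqrt(p1)) - 2 * math.asin(math.sqrt(p2))

# Dropout rates from the Kapoian comparison cited above:
# 64% for the less-trained group vs 23% for the masters/doctoral group.
print(round(cohens_h(0.64, 0.23), 2))  # → 0.85
```

By conventional benchmarks that is a large effect, which sits comfortably with the paper’s point that the outcome gap is widest where the training gap is widest.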

 

It is worth noting that there is no comparison in this study between “Clinical Psychologists” and “Counselling Psychologists” or any of the other groups that we would recognise in Australia.

 

Implications

 

  • Post-graduate training in psychotherapy matters, and improves client outcomes, client dropout rates, and client satisfaction
  • There is no evidence as to the “brand” of training that matters.  It is highly likely that this finding applies to many of our current Australian Areas of Practice Endorsement (AoPEs)
  • This finding does not preclude the value of other types of training; it merely establishes that postgraduate training does indeed add value.

 

Myth 2: It has been proven that “Clinical Psychologists” and “General Psychologists” get equal outcomes in Australia

 

People supporting this position typically point to the following paper.

 

Pirkis, J., et al. (2011). Australia’s Better Access initiative: An evaluation. ANZJP, 45, 726–739.

(NB: full text unavailable without a login)

 

Pirkis et al (2011) were funded by the Department of Health to conduct the first evaluation of Better Access.  The study collected data for the groups eligible to treat under Better Access as “Clinical Psychologists”, “General Psychologists” and “GPs”.

 

There is a common misconception that this paper “proved” that clinical psychologists get no better outcomes than general psychologists.  This is a misunderstanding, or misrepresentation, of the data and the methodology by which it was obtained.

 

Forty-one clinical psychologists and forty-nine registered psychologists self-selected for inclusion in a small study (with a response rate of less than 10% of those who were approached).  These psychologists then self-selected 5 to 10 clients to participate in the study. There was no random selection of clinicians or clients. Clients were tracked pre- and post-treatment with the K-10 and DASS.  Both the clinical psychologist and general psychologist groups made good gains, and there was no difference in the magnitude of the gains between the groups. However, there is no way of knowing whether the cases treated by the two groups were equivalent on a range of factors.  In fact, there is data to suggest they treated quite different groups: the clinical psychologists saw more men (36% vs 26%), more under-30-year-olds (22% vs 12%), and fewer rural clients (44% vs 53%).  There is a reason the original authors did not conduct between-group analyses.

 

This study is the only Australian evidence that treatment under Better Access works, and as such we should all be very thankful to Pirkis and her colleagues for their efforts. As they acknowledge themselves, however, the study had limitations; they did the best they could with the resources available.

 

In fact, as many will remember, there was an entire Senate inquiry largely fuelled by professional outrage about this paper.  I would like to remind people that, as well as deciding that as a profession we were rather undignified, the senators also concluded the following regarding the Pirkis paper:

 

“…  the conclusions drawn are readily disputed based on the very poor methodology of the evaluation and therefore of limited value as a basis for decision-making going forward.”

 

I am aware of an unpublished post-hoc analysis that has been circulating in various forums, claiming to prove that these groups are equivalent.  But I would return to statistics 101 and the logic of post-hoc analysis.

 

Any decent statistics textbook will tell you that post-hoc tests may be used for exploring a dataset after planned analyses have been completed, but they may not be used for testing new hypotheses.  There is a danger of chasing your own biases with statistics, rather than impartially trying to determine the truth. A post-hoc analysis generates a hypothesis for a future study to try to disprove.
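The “chasing your own biases” point can be made concrete with a short simulation (a sketch, assuming independent tests at α = 0.05, which is simpler than any real post-hoc procedure): run enough unplanned comparisons on pure-noise data and “significant” results appear by chance alone.

```python
import random

random.seed(1)

def familywise_error_rate(n_tests: int, alpha: float = 0.05,
                          trials: int = 20000) -> float:
    """Estimate the chance that at least one of n_tests comparisons on
    null data (no real effect anywhere) comes out 'significant'."""
    hits = 0
    for _ in range(trials):
        # Under the null, each correctly calibrated test is a coin flip
        # with false-positive probability alpha.
        if any(random.random() < alpha for _ in range(n_tests)):
            hits += 1
    return hits / trials

for k in (1, 5, 10, 20):
    # Simulated rate vs the analytic value 1 - (1 - alpha)^k.
    print(k, round(familywise_error_rate(k), 2), round(1 - 0.95 ** k, 2))
```

With 10 unplanned comparisons the family-wise error rate is already around 40%, which is exactly why a post-hoc result is treated as a hypothesis for a new study rather than a conclusion.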

 

In my opinion, the Pirkis study and the subsequent unpublished post-hoc analyses are not methodologically strong enough to allow comparison between groups, and therefore do not outweigh the findings of the Stein and Lambert paper I presented first.

 

Implications

 

  • All patients treated got substantially better
  • The study’s methodology produced quite different client groups across the three provider types.  The authors do not attempt to compare effect sizes between the groups, and it would have been inappropriate for them to do so
  • Post-hoc statistics cannot overcome methodological shortcomings and make comparisons that were never designed to be made.

 

Myth 3: Clinical Psychologists are the only ones who should be delivering psychotherapy

 

I have seen various submissions to the government suggesting that Clinical Psychologists are superior to everyone else when it comes to therapy.  This assertion is fairly difficult to square with the findings of the following paper.

 

Wampold & Brown (2005). Estimating therapist variability: A naturalistic study of outcomes in managed care. JCCP, 73, 914–923.

 

A naturalistic study of over 500 therapists and 6,000 clients showed a consistent difference between the most effective and the least effective practitioners.  A logistic regression was undertaken to look for predictors of outcome, but it was unable to identify specific characteristics of the more effective therapists. Of particular note, neither “training” nor “profession” was a significant predictor of outcome.

 

This seems to support the idea that psychotherapy can be done equivalently by anyone, until you look in more detail at the methodology.  There was very little variability in the level of training of the therapists in the study: 63% had masters degrees in social work, psychology, or a related field; 30% had doctoral degrees; and the rest were medicos.

 

This paper is often over-applied to suggest that everyone doing therapy gets the same outcomes.  A more accurate description would be that the differences in training in this sample of therapists did not add to predicting outcome.

 

Implications

 

  • Beyond post-graduate level training, the type of training or amount of extra training is not a significant predictor of client outcome.
  • It is difficult to apply this to an Australian context: therapists in this study were not identified by what we would recognise as areas of practice endorsement, or what other jurisdictions would call specialisation
  • However, given that equality of outcomes held across trained professionals from several disciplines, it seems safe to assume that the same result may apply across many of our AoPEs.

 

Myth 4: More experienced practitioners get better outcomes

 

Finally, there is one other paper I would like to throw out there, to dispute one of the other common myths I hear about psychotherapy outcomes: the myth of age superiority. Typically, this is presented along the lines of “… I have been practising for X decades, and I know that I do a better job than some kid fresh out of a masters program”.

 

Goldberg et al. (2016). Do psychotherapists improve with time and experience? JCP, 63(1), 1–11.

 

Goldberg and colleagues conducted a large naturalistic longitudinal analysis of therapist outcomes: 170 therapists and over 6,500 patients were followed over time.  There is a lot in this paper, but one of the key findings is that therapists’ outcomes actually deteriorate over time rather than improve. The effect is small but significant, and it survived quite a few rounds of the authors trying to control for other variables such as caseload.

 

Implications

 

  • Experience is not necessarily a substitute for training
  • Length of practice is associated with a small but significant deterioration in patient outcomes
  • Training alone is no defence against deterioration

 

So, to summarise what I’ve come to believe from these studies:

 

1 – There is strong evidence for the value of postgraduate training in producing better outcomes. 

2 – There is insufficient evidence to say whether different groups of psychologists get different outcomes in the Australian context.  This should be interpreted as an absence of evidence, rather than evidence of no difference. The study that is frequently cited to “prove” there is no difference is not suitable for making comparisons between groups, and people should therefore stop doing so.

3 – There is almost certainly no difference in outcomes between the different disciplines in Australian psychology (counselling psychs, ed and dev psychs, health psychs, etc.).  Anyone whose post-graduate training has included a substantial component of psychotherapy is probably going to get outcomes as good as anyone else’s.  However, there is scant research that speaks to what a minimum “dose” of training is.

4 – Experience alone does not guarantee improved outcomes; in fact, on the whole, the opposite is true.

I would encourage people to hunt down these papers (along with many others) to read and critique for yourselves.  Those of you who know me know I am passionate about the outcome literature, and I am happy to see these issues being discussed.  I present these four papers in particular because I know that some of the findings will support various people’s worldviews, while other findings will challenge those same people.
