Today’s blog post is not really for anyone other than hypnotherapists, therapists, and trainers in these or related fields… I need to get something off my chest…
Those of us in the hypnotherapy field, especially we hypnotherapy trainers, have noticed a shifting trend recently… There is much more demand for advancing a scientific basis for hypnosis and hypnotherapy.
Recent articles in Newsweek and, more recently, the Washington Post in the US have filtered their way over here and have advised therapy consumers that their therapist had better be using techniques that have been “scientifically proven” to be effective. If not, the therapist may be behind the times, misinformed, or worst of all, unethical.
I wanted to have a reality check for a moment today… Don’t get me wrong, I am in favour of putting the field of hypnotherapy under the microscope, and heck, just last week I wrote up a long list of hypnotherapy and hypnosis studies that gave this field a huge amount of credence and credibility…
Let’s look at the reality of things, though… What the proponents of evidence-based treatment are talking about is randomised, placebo-controlled studies of treatment modalities. For example, take a population of patients with a simple phobia. Randomly assign them to three treatment categories: a placebo, a drug treatment, or exposure therapy. See which group does best, and there’s your science, your evidence-based treatment. Let’s imagine, hypothetically but not improbably, that the results of the well-run, placebo-controlled, randomised study show that exposure therapy is most effective for the most people suffering from a simple phobia.
Keep in mind that in this hypothetical population, there is a minority for whom exposure therapy, at least in the conditions of the study, is not effective.
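For the technically minded, the randomisation step described above can be sketched in a few lines. This is a minimal illustration only; the arm names and patient labels are invented for the example, not taken from any real trial:

```python
import random

def randomise(patients, arms=("placebo", "drug", "exposure"), seed=0):
    """Assign each patient to a treatment arm by chance alone,
    ignoring their individual characteristics (the core idea of an RCT)."""
    rng = random.Random(seed)
    shuffled = patients[:]
    rng.shuffle(shuffled)  # chance, not clinician judgement, decides
    # Deal patients round-robin into the arms so group sizes stay balanced.
    groups = {arm: [] for arm in arms}
    for i, patient in enumerate(shuffled):
        groups[arms[i % len(arms)]].append(patient)
    return groups

# A hypothetical population of 30 phobia patients, 10 per arm.
groups = randomise([f"patient_{n}" for n in range(30)])
for arm, members in groups.items():
    print(arm, len(members))
```

The point of the shuffle is exactly what the blog post says: known and unknown confounders end up spread across the three groups, so a difference in outcomes can be attributed to the treatment rather than to who happened to receive it.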
So now imagine a client comes along – let’s say a 30-year-old man – to see his GP, a hypnotherapist, psychologist, psychiatrist, or even a psychoanalyst (god forbid). He has a simple phobia. He also has some obsessive-compulsive disorder, low self-esteem, and even some low-level depression within the mix, and just went through a messy divorce. He describes his work life as deeply dissatisfying. He feels, in some vague way, that he is different from other men of a similar age… This is not hugely untypical of the clients I see in my consulting rooms.
What does the clinician who does not live in the cleanly dichotomous world of health policy and enthusiasts for evidence-based randomised controlled studies do now?
Surely you treat the client, not a symptom or a set of disorders, no? Ideally, I believe, with a mix of approaches that fluidly adapts itself to the needs and capacities of the client at any given moment. I am a hypnotherapist and an NLP practitioner among other things, and I know that gradual exposure is one of the best ways to treat a phobia.
I would consider myself quite ridiculous if I decided to do some anchoring for that (though I have heard of some people doing just that)… On the other hand, I also know that problems related to low self-esteem, whatever that may mean to an individual client, can be complex, subtle, pervasive and potentially crippling in life… And that going through divorce may affect one’s belief in one’s ability to have successful relationships, and I might indeed recommend something different to this client…
Once again, I find myself wanting to plead: can’t we live with complexity and layers and multiple possibilities and nuance in mental health care, and avoid the dangers and false comforts of simple, simplistic dichotomies? Like the tempting but false dichotomy: there is evidence, or there is not evidence. Are things quite as black and white as that?
I understand and appreciate the scientific method, and value the data obtained by systematic, well-designed studies with appropriate controls. But of necessity, these studies can generally test only one element of a real person in real life, and their results must be used appropriately, in the context of a larger and more complex picture.
And by the way, can someone point me to the evidence base that underlies the proposition that evidence-based studies are the best way to reach the policy and clinical decisions that help people the most? What’s the control group? 😉
Absolutely agree. I’ve had 3 clients come in on the same day who booked for the same reason but the sessions I did with them were all different because they were 3 different people who had 3 different life experiences, beliefs and ways of processing information. If I’d done the same thing with all 3 then it wouldn’t have worked for 2 of them.
Thank you Sharon – really good hearing from you 🙂
I think you have misunderstood evidence-based practice Adam. Evidence-based practice blends the best clinical research evidence, WITH the therapist’s own clinical experience AND the wishes and preferences of the client. That provides lots of space for individual, creative and experimental approaches – and keeps responsibility with the clinician.
However, surely we have a requirement as therapists (and especially as trainers) to be up to date with clinical AND experimental research.
Also there is distinction between efficacy and effectiveness. Efficacy relates to measurements of improvement in ideal conditions. Effectiveness relates to measurements of improvement in real world conditions.
That goes some way to answering your question in the last paragraph – but with regard to whether studies showing real-world effectiveness are a good basis for making policy and clinical decisions for a population, we can only talk about probabilities. However, what is the alternative? Letting any doctor or therapist just do what they “feel” is right? With taxpayers’ money?
A therapist who uses only their own experience as evidence puts their clients at a disadvantage compared to one who blends their own experience with clinical effectiveness results from randomised controlled trials. There’s an amazing study which showed that experienced psychotherapists are no more effective than novice psychotherapists. To me that is a SHOCKER which we should all burn into our brains. There is a huge capacity to delude ourselves that we are getting better as therapists when all we are doing is distorting and sifting the evidence to fit our practice and chosen treatments.
I’m completely against heavily manualised treatments (for x do y) but even more against therapists being ignorant of best practice and ignoring the research. That’s one of the reasons the hypnotherapy profession is so fragmented and unprofessional.
To Sharon I’d have to ask the question: how do you know if you’d have used the same protocol that the results wouldn’t have been equally successful? You might say that is based on your experience – i.e. your evidence of clinical effectiveness. So it is a sort of evidence-based approach.
(Also even a heavily manualised CBT approach aims to map out the individual dysfunctional beliefs and appraisals).
Blending clinical experience with latest research results is an ongoing challenge that keeps a therapist constantly questioning their whole theoretical and pragmatic approach. They realise they may have learned theories and practice that just don’t hold up… and they have to change. That’s a good thing. Clients expect us to know what works – not just in our experience – but in the broader world of practice.
Great blog BTW!
Hello Mark,
Thank you for your contribution here, it is very much appreciated.
Firstly, let me say that I consider myself to be a heavy proponent of evidence-based practice and I have worked as an evidence-based practitioner for years. I agree with what you have said here.
We have a legal duty of care; I work closely with informed consent principles and employ modern research findings throughout my work – much of the printed research from the IJECH makes its way to this blog and to my students via my members area, and I am very particular about including such principles as those you have mentioned here in my own trainings.
I actually think you have misunderstood the ethos behind my post here. I was asking some questions about the notion of evidence-based practice and discussing it, not just taking it as a given. My belief is that doing so is incredibly important (rather than always taking for granted that everything we do and how we are in therapy is right, correct and without flaws). I think that in therapeutic circles, people often just take on board the entire set of fundamental beliefs and strategies that their trainer offers up, and believe much the same without question.
I teach many principles of NLP, yet I find it incredibly important to point out where research has found techniques and strategies from the field of NLP to be flawed or based on wrong assumptions, for example.
So whilst we champion the notion of being evidence-based in our work, as has been a large trend in the therapy community over the last 10-15 years or so (thank goodness), I was (albeit rather ironically) questioning the rationale for evidence-based therapy… And (albeit even more ironically) questioning whether there was any evidence to suggest that evidence-based therapy has more efficacy than non-evidence-based therapy.
A large number of psychotherapy techniques nowadays are characterised by their fans as “evidence-based.” This sounds good, and most people have heard it enough to recognise that “evidence-based” must be something better than “not evidence-based”. Evidence is something you have to have for success in court, and none of us would want to be such fools that we believed everything without any evidence at all. But exactly what does evidence have to do with psychotherapy?
During the last 15 years or so, a movement in the medical world began to emphasise the need for appropriate research to show that treatments were effective– that they actually accomplished what they were intended to do. The thrust of this movement for evidence-based medicine was that treatment for disease or injury should focus on the methods with the best indications of past success, not on anyone’s personal preference, tradition, or old habits. From the medical world, the idea of using methods supported by good research evidence came into psychology as well. Evidence-based practice has become a goal in many fields.
But there are very few treatments of which we could say, “this one has perfect evidence supporting it; that one has no evidence at all. The first one is evidence-based, the second one is not.” This is because there are many kinds of evidence that we could bring into a discussion about the effectiveness of a treatment. Even the much-maligned testimonial is evidence of a sort, although not a very convincing sort. In fact, we need to think not just about the amount of evidence for a treatment, but about the kind of evidence: what is called the “level” of evidence, as we rank different kinds of evidence from the very powerful to the very weak.
Professionals working in medicine, psychology, public health, and a number of other fields have generated a number of ideas about the factors that make for a high or a low level of evidence. Generally, authors agree that research evidence involving randomised controlled trials is the most compelling of all, because randomisation (the assignment of people to treatments on the basis of numbers alone, not because of their individual characteristics) does all we can do to exclude the confusion of confounding variables that might make us think a treatment was effective when it was not. Under the many circumstances that make randomisation impossible, the best approach might be clinical controlled trials, where individual wishes or circumstances place people in one or another of the treatment groups – but these arrangements can make it difficult to know whether a treatment that worked for the kind of person who chooses it would also work for other people.
I have been part of lengthy discussions recently where it was suggested that treatments supported by randomised controlled trials should be designated “evidence-based”, and those supported by clinical controlled trials termed “evidence-supported”. Both of these are high levels of evidence, and with respect to psychotherapy, we need to remember that randomisation can be very difficult to do. It was also suggested that there be a classification for treatments supported only by case studies or by other studies with various weaknesses: these would be called “evidence-informed”. A fourth category, “belief-based”, was created for methods that were based on a theory or assumption but had no usable research evidence to support them. Finally, a category of “potentially harmful interventions” was added for those that had shown evidence of injury or worsening of symptoms in persons receiving treatment, or that had manuals or other materials suggesting the possibility of harm. Though there would be lots of heated discussion about that, wouldn’t there? I mean, lots of people would consider that we have a duty of care and, in line with informed consent principles, should be advising clients of false memory syndrome issues related to regression therapies or purposely inducing abreaction, yet others use this as their main modality of therapy.
These basic characteristics of different levels of evidence are not the only things required before a treatment can be called “evidence-based” or categorised in another way, according to many. For the higher levels, the research supporting the treatment would have to be replicated independently – that is, by researchers who did not have the personal interest that the initial investigators usually have. There would need to be a manual for the intervention, or some other way of assuring intervention fidelity (that the treatment is carried out in the same way each time). Evaluation would need to be done “blind”, with testers who do not know which treatment an individual is receiving, so their assessments are not affected by their expectations. Statistical analyses would have to follow the rules agreed upon by researchers. In other words, the research and the reports of its results would all have to be done correctly before they were considered to be helpful in deciding whether a treatment was “evidence-based” or should be categorised differently.
Mark, when it comes to evidence-based therapy or treatment, I think I understand what it is… Nonetheless, I like to question these things on a regular basis… Not just the factual evidence itself, but the stance behind it and the underlying philosophy that evidence-based notions are the way forward.
I certainly do not advocate experience-only as a way forward and of course there must be balance and a combination, but I also think it healthy to question assumptions that inherently exist within how we work and what we believe about our work.
Thanks again, A.
Hi Adam
I have always found evidence-based practice difficult to get my head round, so I have mixed views. Science is brilliant for testing things which are more objective, like moving objects (not quantum) and chemical reactions etc. BUT ‘mental wellness’ is one of the most subjective things in existence. How can a tool used for measuring objective things be used to measure subjective things like wellness? In years to come someone will realise what a farce it is to try and test subjective things in this way.
Imagine a research study trying to measure what the best piece of art is in the world! It’s no different from testing types of therapy. Some research would show that The Sunflowers was best, and another research study would show it was overrated, and the conclusion would say ‘more research needed, inconclusive’.
People say CBT works, but I’ve known so many people where it doesn’t work. I talked someone through CBT recently, and they weren’t having any thoughts, so it was impossible to use. CBT relies on noticing Negative Automatic Thoughts, or checking for someone’s unhelpful core beliefs. Some people seem to get anxiety which has nothing to do with thoughts or beliefs. It almost seems like a physiological anxiety, hormones etc.
The reasons for ANY mental illness are multiple, i.e. one person’s anxiety might have three causes (any of psychological, neurochemical, hormonal, genetic, sociological, etc.). Another person might have different causes, or one cause. Some people might just have a hormonal imbalance.
The other problem is that therapists, of any kind, tend to hugely overestimate their own efficacy. This is a big topic to analyse. Clients also overestimate the efficacy of their therapy if asked at the end of therapy how effective it was. If asked two years later, a huge percentage of people have gone back to how they were previously! Permanent change often never happens.
Again, there are so many reasons for this. Wanting to please the therapist. Feeling foolish that you’ve paid, so you say it worked to justify the big spend. You felt lonely, or down, so meeting someone who showed you compassion made you feel temporarily better – but you thought you felt better BECAUSE of the therapy, when it was just because they showed you love!
I would argue that the number of factors for measuring therapy is SO huge (everyone is different, every illness is different, every therapist is different, everyone’s experiences are different) that there is NO way of measuring it accurately. I can’t believe scientists haven’t realised this.
It’s impossible. In years to come people will be shocked they even tried to measure something so diverse and complex.
In one study, there were two groups of helpers. One group were professionals untrained in therapy. The other group were highly trained therapists with lots of experience and many years of training. Each group of helpers got random clients. Some of these clients had very serious issues, some had moderate issues. Guess what!? The untrained helpers did better than the trained therapists!!!!!!!
It is shocking. People tried redoing this study, and the next time the two groups came out equal. Still not great!
Anyway there you have it. This explains the Dodo Bird Verdict too. All therapies work equally because the only part of therapy that works (a la Carl Rogers) is the therapeutic relationship and placebo effect. The type of therapy means nothing. That includes hypnotherapy. An untrained person could do as well.
Anyway sorry to bring the bad news!
Good luck!
Thank you for taking the time to write this post Mike, I really appreciate it and enjoyed reading it, Best wishes, Adam.