In the Seinfeld episode "The Kiss Hello," George Costanza describes his physical therapist as "… so mentally gifted that we mustn't disturb the delicate genius." This could describe many of us involved in neurorehabilitation. We assume that we're making the right treatment choices for stroke survivors because we have a lot of experience. A lot of experience is a good thing, right?
“It works in my patients”
Neurorehabilitation research is now in a "golden age," with an exponential rise in the quality of recovery measurement. This allows researchers to test new treatments ever more accurately. For example, with functional magnetic resonance imaging (fMRI) we can watch the brain at work as it attempts to control movement. Triangulate changes in fMRI with computer-driven kinematic data capture, movement outcome measures, and data analysis, and a three-dimensional view of recovery emerges. But like the proverbial tree falling in a forest, are therapists listening?
"It works in my patients" represents observation as justification of treatment. Researchers call such observations "anecdotal data." Anecdotal data does not carry enough scientific weight to justify a therapeutic intervention as best practice. Researchers do not consider observations robust enough to be published in journal articles, and journal articles provide the foundation for evidence-based practice.
Example: I know a therapist who keeps telling me that he has "fifteen years of neurological experience." "What do you do to treat spasticity in stroke survivors?" I asked him. He listed five or six treatments that "…reduce spasticity in my patients." His answer was remarkable for two reasons. First, few of his "treatments" are effective (if we are to believe the scientific journals). Second, he was not trained in measuring spasticity. So even if something did work, there would be no way to measure success.
“I’ve seen research that said…”
It is rare to find a therapist who reads rehabilitation research. Therapists often rely on textbooks and lectures from school, or on research filtered through magazines and seminars. There is nothing inherently wrong with these sources of information, but the process does promote a scatter-shot perception of available therapies and can lead to a patchwork of treatment strategies, which may or may not constitute "best practice."
College and university professors tend to teach what they know, and they know what they were taught and what they've used clinically. This creates an echo chamber in which present teachings are based on old, often refuted, research. Evidence of this is available through a quick Internet perusal of course descriptions and syllabi for PT/A and OT/A programs. Much of the didactic and clinical neurorehabilitation curriculum involves treatment techniques that are 50 years old and remain largely unproven. Textbooks cannot possibly keep pace with the enormous amount of research that unfolds daily. Our best hope remains the development of the doctor of physical therapy (DPT). DPTs tend to have an inherent appreciation for peer-reviewed research and, just as important, the skills to access that research. For their part, practicing therapists and assistants hold some responsibility to pull the best that rehabilitation research has to offer into their practice. Inertia often wins because therapists are more comfortable with a familiar technique that is ineffective than with something new and effective that has to be learned.
Example: I finished a talk on neuroplasticity in stroke, and a PT came up to me and said, “That stuff on neuroplasticity was really interesting. The only problem is that if the stroke survivor has loss of sensation and proprioception, then there’s no way to get them to move in any sort of functional way.”
I was glad for the question because it was something I'd researched quite a bit. I explained to the therapist that a critical mass of studies has shown that relatively normal and functional movement can be relearned without sensation and proprioception. The therapist was correctly referencing research, but the research was more than 60 years old and had been thoroughly refuted by a large number of animal and human studies. Therapists often know research. But now more than ever, research is such a fast-moving beast that, blink, and what was "true" may no longer be.
“I use a mix of therapies”
Many therapists are successful, and some renowned, for a particular therapy mix. And it may be true that the mix they've developed provides superior outcomes. But there are two inherent problems with using therapies not subjected to standardized testing:
1. There is no way to know if the therapy actually works. Anecdotally (see “it works in my patients,” above), it may appear to work, but without clinical research there is no way to establish efficacy.
2. Because a “mix” of therapies varies in dosage and is individualized for each patient, the therapy itself is difficult to define precisely and therefore impossible to duplicate and test.
Example: I spoke to an OTA program recently and showed data indicating that a particular therapy technique is not effective in chronic stroke survivors. While I was speaking, I noticed that a few of the students were hiding their faces. “What?” I asked. They whispered, “Our program director loves that therapy. She’s certified in it and says it’s the best.” After I finished speaking, the program director came to the podium and I said, “I’m sorry. I didn’t mean to insult—” She cut me off. “It’s OK, I use a mix of therapies,” she said.