
To recap, his book consists of two general arguments:
1. Rates of psychiatric disability have soared during the period when many new psychiatric medications have been introduced--contrary to what you would assume should happen if the drugs worked so well.
2. Long-term studies actually comparing patients on meds vs. those off meds have yielded an apparently paradoxical result: patients not taking meds end up doing better over the years than those taking meds.
Last time, I rebutted the disability argument, pointing out that the skyrocketing rates of psychiatric disability have more to do with increasing enthusiasm for diagnosing psychiatric illness and with growing government financial incentives that encourage people to seek a psychiatric diagnosis.
In this post I'll consider his second argument. Whitaker argues that essentially all classes of psychiatric drugs are harmful in the long-term--including antidepressants, anti-anxiety drugs, and ADHD drugs--but I'll focus on antipsychotics, because this seems to be his main focus.
Before beginning, as anybody who has followed my blog knows, I readily acknowledge that psychiatrists over-rely on drug treatment of mental illness and should be doing more therapy. In order to maximize profits, drug companies consistently downplay the side effects of their medications. It's not that I believe drug treatment worsens conditions--rather, that our obsession with psychopharmacology has deprived many patients of integrative treatment, which combines the judicious use of medications (when needed) with the right kind of psychotherapy (also, when needed). Thus, while both Whitaker and I end up with a similar conclusion--that psych meds are inappropriately used in the U.S.--we arrive at that conclusion in very different ways, and we mean different things by "inappropriate use."
On his website, Whitaker has posted a useful page in which he provides links to the studies he mentions in his book. He lists 34 studies, including links and brief summaries. Don't worry--I won't go through all 34. Instead, I'll make two general observations.
1. The studies cited by Whitaker are old, and the patients who were diagnosed with "schizophrenia" often did not have that illness.
Most of the long-term outcome studies in the literature were published in the 60s, 70s, 80s, and early 90s. But they are actually even older than that, because these studies often describe people who were diagnosed with "schizophrenia" 10 to 20 years before the date of publication. For example, the Vermont longitudinal study was published in 1987, which is already pretty old, but the study describes patients who were originally diagnosed with schizophrenia in the late 1950s and early 1960s.
Why does this matter? Because before DSM-III was published in 1980, American psychiatrists had a bad habit of vastly over-diagnosing "schizophrenia." Patients who would now be labeled with bipolar disorder, borderline personality disorder, depression, and various other problems, were, in the 60s and 70s, likely to be diagnosed "schizophrenic."
The classic study to demonstrate American psychiatry's historic infatuation with the schizophrenia diagnosis was published in 1971. The researchers showed videotapes of 8 patient interviews to a few hundred psychiatrists in both Britain and the U.S. After viewing the videotapes, the doctors were asked to make a diagnosis. The disagreements were glaring--and were somewhat embarrassing for the Americans. For example, in one case, "Patient F," the patient was a man from Brooklyn who had hysterical paralysis of one arm and a history of mood fluctuations associated with alcohol abuse. The diagnosis? According to 69% of American psychiatrists, the man had schizophrenia, whereas only 2% of the British psychiatrists favored that diagnosis. Most of the British psychiatrists diagnosed the patient with "hysterical personality disorder."
The point is that most of the long-term outcome data Whitaker cites is based on unreliable and very broad conceptions of schizophrenia. While he accurately describes studies showing that schizophrenic patients who were not on medications often did very well, I'll bet you a dime to a dollar that many of these high-functioning schizophrenics never had schizophrenia in the first place, at least not according to current DSM-IV criteria. They did perfectly well off antipsychotics for the same reason somebody without diabetes would do well off insulin--they didn't actually have the disease.
2. Most of the studies Whitaker cites are observational studies--which are suggestive, but not definitive.
The best way to figure out if a medication is helpful or harmful for schizophrenia is to conduct a placebo-controlled study. You randomly assign patients to two treatments: an antipsychotic or a sugar pill placebo. A recent meta-analysis of 38 such randomized studies of second generation antipsychotics, which pulled together data from 7323 patients, found that antipsychotics outperform placebo with a response rate of 41% vs. placebo's 24%. These were mainly short term studies, lasting a couple of months or so. Thus, we know that antipsychotics are moderately effective in the short term, although they can have some nasty side effects.
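To get an intuitive feel for what those response rates mean, here is a back-of-the-envelope calculation using the 41% vs. 24% figures quoted above. The rates come from the meta-analysis; the arithmetic (absolute risk difference and "number needed to treat") is just a standard way of translating them:

```python
# Illustrative arithmetic only: the 41% (drug) and 24% (placebo) response
# rates are from the meta-analysis cited in the text.

def nnt(p_treatment: float, p_placebo: float) -> float:
    """Number needed to treat: 1 / absolute risk difference."""
    return 1.0 / (p_treatment - p_placebo)

risk_difference = 0.41 - 0.24
print(f"Absolute risk difference: {risk_difference:.2f}")  # 0.17
print(f"Number needed to treat:   {nnt(0.41, 0.24):.1f}")  # ~5.9
```

In other words, roughly one additional patient responds for every six treated--a moderate but real short-term effect, consistent with how the trials are usually described.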
In order to convincingly show that antipsychotics improve (or worsen) schizophrenia over the long term, you would have to do a placebo-controlled trial lasting many years. That has not been done--not through some conspiracy of the pharmaceutical industry, but because such a trial would be extremely hard to conduct. Imagine you were schizophrenic and a researcher asked you to be in a 10-year-long study in which you might get an active medication or a sugar pill. Schizophrenia is a serious, life-threatening condition. Would you roll the dice with your life, taking the chance that you would be on a sugar pill for 10 years? Probably not. Furthermore, even if you agreed, chances are good that you would drop out of the study before the 10-year mark for any number of reasons: moving away, worsening mental or physical illness, side effects, and so on.
Because these gold-standard long term studies do not exist, we are forced to fall back on much less convincing evidence--observational studies. This is where researchers enroll, say, 200 patients with schizophrenia and ask them to agree to periodic evaluations. The patients can get their care wherever they want, they can continue to see their doctors, take or not take medications, seek enlightenment in the Himalayas, or go to Lourdes. It's all up to them. Every few years, the researchers contact them and ask them how they are doing. Are they taking meds? Are they having psychotic symptoms? Do they have a job? And so on.
You can immediately see why such observational (also termed "naturalistic") studies are suboptimal. If, 10 years after the study starts, the patients who are not taking meds anymore are doing better than the patients who are taking meds, how do you interpret this? You could conclude that antipsychotics worsen schizophrenia. Or, you could conclude that the patients who are not on meds after 10 years simply were blessed with a milder version of schizophrenia, such that they recovered after a few years and didn't need meds anymore.
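The second interpretation--selection by illness severity--can be made concrete with a toy simulation. All the numbers here are invented for illustration, not drawn from any study. In this model, medication has no effect whatsoever on symptoms; sicker patients are simply more likely both to stay on meds and to remain symptomatic. Yet the medicated group still looks worse at follow-up:

```python
import random

random.seed(0)

# Toy model: each patient has a fixed illness "severity" in [0, 1].
# Medication has NO causal effect in this simulation. Sicker patients
# are more likely to still be on meds AND more likely to still have
# symptoms at follow-up. All parameters are invented for illustration.
n = 10_000
on_meds_outcomes, off_meds_outcomes = [], []

for _ in range(n):
    severity = random.random()
    on_meds = random.random() < severity       # sicker -> more likely medicated
    symptomatic = random.random() < severity   # sicker -> more likely symptomatic
    (on_meds_outcomes if on_meds else off_meds_outcomes).append(symptomatic)

rate_on = sum(on_meds_outcomes) / len(on_meds_outcomes)
rate_off = sum(off_meds_outcomes) / len(off_meds_outcomes)
print(f"Symptomatic on meds:  {rate_on:.0%}")   # roughly two-thirds
print(f"Symptomatic off meds: {rate_off:.0%}")  # roughly one-third
```

Despite the drug doing literally nothing in this model, the on-meds group ends up with about twice the symptom rate of the off-meds group--numbers not far from the naturalistic findings Whitaker cites. That doesn't prove severity confounding is the explanation, but it shows why the observational pattern can't distinguish "meds cause harm" from "sicker patients stay on meds."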
This inherent weakness of observational studies is why the Harrow study is so hard to interpret. Whitaker and Andrew Nierenberg debated this study on Radio Boston recently. In this study, Harrow and colleagues identified 64 patients with schizophrenia and reinterviewed them five times over the next 15 years. At the final 15 year follow up, 64% of the patients who were taking antipsychotics still had psychotic symptoms, whereas only 28% of those not taking antipsychotics had such symptoms. What does this mean? Whitaker sees this as evidence that antipsychotics worsen mental illness. I see it differently. I suspect that the patients still taking antipsychotics after 15 years had more severe cases of schizophrenia to begin with, and therefore required more prolonged treatment with medications. The medication didn't cause the psychosis--the psychosis caused patients to still need the medications.
Over the last few days, I've spent many hours thinking and writing about Anatomy of an Epidemic. Mostly, I've chipped away at its central thesis, and yet the fact that this powerful book has riveted my attention for so long means something. It's fascinating. It's enthralling. And it is the work of a highly intelligent and inquiring mind--a person who is struggling to understand the nature of psychiatric treatment. Put it on your reading list, and join the debate.