Wednesday, September 27, 2017

Sign Up for The Carlat Report's Free Quick Tips Newsletter


The Carlat Report is launching a bi-monthly email newsletter that will contain concise, practical advice for busy clinicians, based on the work we are doing in The Carlat Report, The Carlat Child Report, and The Carlat Addiction Treatment Report.
You don't have to be a paid subscriber to get it; simply fill out this quick webform and you'll start receiving each new issue, along with special offers and information on the latest books and courses.

Tuesday, September 12, 2017

Hating on Antipsychotics: Are We Going Too Far?

Antipsychotics are not perfect. No drugs are. They can cause weight gain and weird movement side effects and sleepiness. But they have their uses, such as quelling racing thoughts, inner turmoil, and psychosis. There’s nothing inherently good or bad about any class of drugs. It’s up to physicians to understand the data and to prescribe medications judiciously.

With that introduction out of the way, let me describe for you a recent study that was published in the July 11 issue of JAMA. It’s called the VAST-D trial, which stands for “VA Augmentation and Switching Treatments for Improving Depression Outcomes.” In this study, researchers randomly assigned 1,522 patients to three treatments: (1) switch from the current antidepressant to bupropion; (2) continue the current antidepressant and add bupropion; (3) continue the current antidepressant and add aripiprazole (a second-generation antipsychotic that is FDA approved for adjunctive use in depression). (By the way, the study was not funded by aripiprazole’s makers, and the medication is now available in a low-cost generic form.)

 After 12 weeks of treatment, the aripiprazole augmentation group fared somewhat better than the others:
Outcome at 12 weeks | Aripiprazole augmentation | Bupropion augmentation | Bupropion switch
Remission rate | 29% (statistically superior to bupropion switch) | 27% | 22%
Response rate | 74% (statistically superior to bupropion switch and augmentation) | 66% | 62%

Aripiprazole augmentation yielded a higher remission rate than switching to bupropion, and it produced a higher response rate than either one of the bupropion strategies. But let’s look at side effects—some were more common in the aripiprazole group, others in the bupropion groups.

Side Effects Statistically More Common in the Aripiprazole Group:

Side effect | Aripiprazole augmentation | Bupropion augmentation | Bupropion switch
Akathisia | 14.9% | 5.3% | 4.3%
Somnolence | 14.5% | 7.9% | 7.2%
Weight increased by at least 7% at 12 weeks | 9.5% | 1.9% | 2.3%
Weight increased by at least 7% at 36 weeks | 25.2% | 5.2% | 5.2%

Side Effects Statistically More Common in the Bupropion Groups:

Side effect | Aripiprazole augmentation | Bupropion augmentation | Bupropion switch
Anxiety | 16.6% | 22.5% | 24.3%
Tremor | 3.8% | 10.3% | 6.1%

Basically, bupropion caused more anxiety and tremor, while aripiprazole caused more weight gain, tiredness, and akathisia (a feeling of inner restlessness often caused by antipsychotics). Now, although aripiprazole caused more side effects, patients seemed to be less bothered by these side effects than those in the other treatment groups—at least as measured by the percentage of patients who withdrew from the study due to side effects. Only 5.3% of the aripiprazole patients withdrew because of side effects, as opposed to 7.3% in the bupropion augmentation group, and 10% in the bupropion switch group.

If you want to quibble with the results, there are plenty of details about the study that you can pick apart, and, as is true for just about any large study, those would be reasonable points. But for me as a clinician, the bottom line is that this is the first large, truly randomized study comparing antipsychotic augmentation with two other very common strategies for patients who don’t respond to the first antidepressant. And the results pretty clearly show that aripiprazole augmentation is somewhat more effective than the two other methods tested. This doesn’t mean that suddenly all my patients are going to be on aripiprazole. A quarter of patients had significant weight gain after 9 months, and about 15% had akathisia. Those can be problems, but in clinical practice, you monitor for side effects, and if they are bad, you stop the offending medication and try something else. That’s a risk/benefit decision . . . and this study implies that aripiprazole should be high on your list of options for treatment-resistant depression.

But here’s the thing: Nobody seems to want to admit that it works.

On Medpage Today’s Slow Medicine blog, the focus was on the side effects of aripiprazole and on why this study will definitely not convince them to use it.

Their commentary was titled “No Atypical Antipsychotics for Depression,” and they concluded with: “In the face of uncertainty, our Slow Medicine philosophy favors the safer, more conservative approach. VAST-D will not change our practice. Until we see clear evidence of benefits that outweigh the harms, we don't see a role for antipsychotics for most patients with depression. For now, among patients with an inadequate response to a first antidepressant, we'll try a second antidepressant, consider enhancing behavioral therapy, or think about augmenting with the safer medication, bupropion.”

Given that their philosophy is to avoid quick fixes and take a cautious approach to new treatments, I can understand this coming from Slow Medicine. But even the authors of the original JAMA article took pains to downplay the results. Here’s their conclusion:

 “Among a predominantly male population with major depressive disorder unresponsive to antidepressant treatment, augmentation with aripiprazole resulted in a statistically significant but only modestly increased likelihood of remission during 12 weeks of treatment compared with switching to bupropion monotherapy. Given the small effect size and adverse effects associated with aripiprazole, further analysis including cost-effectiveness is needed to understand the net utility of this approach.”

Really? First, they’re reminding us that we can’t generalize the results beyond the population studied (okay, we got that, it is, after all, a study of vets). Second, they are pretending that response rate was not one of the pre-specified outcome variables (it was a secondary outcome, but based on the same depression scale used for the primary outcome), and therefore they are making believe that there’s no evidence that aripiprazole was better than bupropion augmentation, not just better than switching. And finally, there’s this bizarre indirect way of saying “these results are unimpressive, don’t change your practice.” (I guess that’s what they mean by “net utility”).

Dudes, this was a major study, and you got an interesting, clinically relevant result!

It’s okay to brag a little. Let’s go where the data take us, even if we’ve become accustomed to hating on the drugs that come out on top.

Wednesday, September 6, 2017

We're Diagnosing Like It's 1799

The fact that psychiatry lags far behind the rest of medicine scientifically is no great news flash. The leaders of our field have long acknowledged this problem (see, for example, this withering self-critique by then-head of NIMH Thomas Insel). None of this should be taken personally. Psychiatrists are just as smart as other doctors. It’s just that we have the misfortune of having chosen the most complicated organ to study—the brain.

Nonetheless, occasionally I come across information that reminds me anew of just how deep in the dark ages we are stuck. This happened a couple of weeks ago when I was binge-listening to podcasts and happened upon this great episode of the 99% Invisible podcast about the origin of the stethoscope.

The stethoscope was invented in 1816 by a 35-year-old Parisian physician, René Laennec. Laennec was particularly interested in “diseases of the chest,” as they were called then, and especially tuberculosis, which was ravaging Paris and had a 50% death rate. Doctors knew a little bit about how TB affected the lungs based on autopsy findings. But they didn’t have a clue what caused it (that would have to wait until 1882, when Robert Koch discovered Mycobacterium tuberculosis), and they had a very hard time diagnosing the disease in a living person. TB causes symptoms such as dyspnea (shortness of breath), coughing up blood, weight loss, and fever, but many patients with other diseases presented similarly. Doctors had no diagnostic tools or blood tests, and depended on having long talks with patients about their symptoms and history. But conversations about an illness only got them so far, and commonly the final diagnosis was simply “dyspnea” or “fever”—which we now know are symptoms with various underlying causes, but which in the 18th century were thought of as diseases.

A medical transformation was born one day when Dr. Laennec was examining an overweight woman with dyspnea. Based on their conversation, Laennec could not distinguish among TB, pneumonia, and heart disease. He tried chest percussion, a popular method that helps detect whether areas of the lung are filled with inflammatory fluid, but the abundance of tissue rendered that technique unhelpful. He was tempted to simply place his ear on her chest (a technique called “immediate auscultation”), but felt that it was “indelicate” to do so. He looked around and, in his words, “grabbed 24 sheets of paper, rolled them tightly into a bundle, and secured them in shape with paste glue.” He placed one end of this cylinder on her chest and the other to his ear. He was “delighted” to find that he could hear heart and breath sounds with amazing clarity.

Laennec refined the device over the next several years, hiring a carpenter to build better versions out of wood, and he shared his discovery with colleagues. Armed with the stethoscope, doctors carefully correlated breath and heart sounds of dying patients with autopsy findings, eventually reporting a series of “pathognomonic” sounds that could, with a good degree of certainty, diagnose specific diseases. Whereas patients were once told that their disease was “dyspnea,” they could now learn which organ was affected, and what the likely prognosis was. Unfortunately, effective treatment had to wait for the discovery of antibiotics and cardiac drugs.

In psychiatry, diagnostically we are squarely in the pre-Laennec era (though therapeutically, we have serendipitously discovered highly effective treatments for many disorders). We diagnose such entities as “major depression” and “schizophrenia” based on prolonged conversations with patients, conversations termed “mental status exams.” We combine our observations with the history to discover clusters of symptoms that often occur together, and which are therefore included as “disorders” in the DSM-5. But, like physicians in 1799, we don’t understand how the pathology of the underlying organ leads to these symptoms. In fact, our science is arguably considerably more primitive than 1799 medicine, because even our autopsy results have not identified any lesions responsible for psychiatric symptoms—with the exception of Alzheimer’s disease.

Psychiatry does not have a stethoscope. We have ancillary technologies, such as MRIs, PET scans, EEGs, and blood tests, all of which can effectively rule out other diseases that can mimic psychiatric disorders. But we can’t peer into our patients’ brains to tell them what lesion or circuit mishap causes them to suffer as they do.

We need to acknowledge that a careful interview is not only central to psychiatric diagnosis, but is the only method we currently have in our diagnostic toolbox. If we really want to help our patients, we need to enhance our skills at asking the right questions and understanding the meanings of the answers. Which may well take more time than insurance companies believe we are worth.