David Mowry

Researchers led by Irving Kirsch of the University of Hull reviewed a series of studies, both published and unpublished, on four antidepressants: Prozac (fluoxetine), manufactured by Eli Lilly & Co.; Effexor (venlafaxine), manufactured by Wyeth; Paxil (paroxetine, sold as Seroxat in the UK), manufactured by GlaxoSmithKline; and Serzone (nefazodone), manufactured by Bristol-Myers Squibb Co. The study examined whether a person's response to these drugs hinged on how depressed they were prior to obtaining treatment.

The researchers found that, compared with placebos, these new-generation antidepressant medications did not yield clinically significant improvements in depression in patients who initially had moderate to severe depression. Significant benefits occurred only in the most severely depressed patients. "Drug-placebo differences in antidepressant efficacy increase as a function of baseline severity, but are relatively small even for severely depressed patients. The relationship between initial severity and antidepressant efficacy is attributable to decreased responsiveness to placebo among very severely depressed patients, rather than to increased responsiveness to medication," the researchers wrote.

"Although patients get better when they take antidepressants, they also get better when they take a placebo, and the difference in improvement is not very great. This means that depressed people can improve without chemical treatments," Irving Kirsch said in a statement.

Mary Ann Rhyne, a spokesperson for GlaxoSmithKline, said the study only took into account the data submitted prior to the drugs' U.S. approval.
 

David Baxter PhD

Late Founder
Re: Get Healthier & Happier

The researchers found that, compared with placebos, these new-generation antidepressant medications did not yield clinically significant improvements in depression in patients who initially had moderate to severe depression. Significant benefits occurred only in the most severely depressed patients. "Drug-placebo differences in antidepressant efficacy increase as a function of baseline severity, but are relatively small even for severely depressed patients. The relationship between initial severity and antidepressant efficacy is attributable to decreased responsiveness to placebo among very severely depressed patients, rather than to increased responsiveness to medication," the researchers wrote. "Although patients get better when they take antidepressants, they also get better when they take a placebo, and the difference in improvement is not very great. This means that depressed people can improve without chemical treatments," Irving Kirsch said in a statement.

With all due respect, that is pretty much total balderdash.
 
Re: Get Healthier & Happier

Perhaps several universities and medical journals are incorrect within this study. See http://medicine.plosjournals.org/perlserv/?request=get-document&doi=10.1371/journal.pmed.0050045&ct=1. Balderdash??? Hmm.
 

David Baxter PhD

Late Founder
Re: Get Healthier & Happier

Bear in mind that PLoS is an online publication venue, not a peer-reviewed journal, and is therefore not subject to the same prepublication scrutiny regarding methodology, statistics, validity of conclusions, etc.

There's never been any shortage of bad research. In the past, most of it never saw the light of day (or more accurately the light of printing press). Sometimes progress really isn't progress.
 

David Baxter PhD

Late Founder
Antidepressants Not as Effective as Thought

More on what's wrong with that widely reported study:

Antidepressants Not as Effective as Thought?
By John M. Grohol, Psy.D.
February 26, 2008

Meta-analyses are great research tools, because they allow researchers to pool data across multiple studies and see if there are more powerful (or less powerful) effects that no single study has found on its own.
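To make that concrete, here is a minimal sketch of fixed-effect (inverse-variance) pooling, the simplest flavor of meta-analysis. The trial names, effect estimates, and standard errors below are invented purely for illustration; they are not from any real dataset.

```python
import math

# Hypothetical per-study results as (name, effect estimate, standard error).
# These numbers are invented for illustration only.
studies = [
    ("trial A", 0.40, 0.25),
    ("trial B", 0.10, 0.30),
    ("trial C", 0.35, 0.20),
]

# Fixed-effect (inverse-variance) pooling: weight each study by 1 / SE^2,
# so precise studies count for more than noisy ones.
weights = [1.0 / se ** 2 for _, _, se in studies]
pooled = sum(w * eff for (_, eff, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))

# The pooled standard error is smaller than any single study's, which is
# why a meta-analysis can detect effects the individual studies missed.
print(f"pooled effect = {pooled:.3f}, 95% CI +/- {1.96 * pooled_se:.3f}")
```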

So it's always interesting to read something that a meta-analysis finds in the data that individual studies didn't quite find.

Today, British researchers discovered, unsurprisingly, that antidepressants are not as effective as thought. I say unsurprisingly, because the researchers made a series of decisions that pretty much guaranteed their end result.

First, they went to the original datasets and included unpublished data too. Unpublished data is usually unpublished for a reason: for instance, the study was poorly designed (not taking into account some variable that made the conclusions useless), or it had insignificant findings (e.g., placebo worked just as well as Drug A). If you include all the studies that found insignificant results, the averages are going to bring down the measured efficacy of any drug being examined. There is no drug on the market today that doesn't have a study (likely unpublished) showing the drug had no significant effect on whatever condition it was being studied for.
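To see the arithmetic behind that last point, here is a toy sketch; every number in it is invented, purely to show how folding null results into the pool drags the average down.

```python
# A toy illustration (all numbers invented) of the averaging effect above:
# adding a few file-drawer null results pulls the apparent drug-placebo
# difference down considerably.
published   = [2.5, 3.1, 2.8]        # differences that made it into journals
unpublished = [0.2, -0.1, 0.4, 0.0]  # null-ish results left in file drawers

pub_mean = sum(published) / len(published)
all_mean = sum(published + unpublished) / len(published + unpublished)

print(f"published-only mean difference: {pub_mean:.2f}")  # 2.80
print(f"all-studies mean difference:    {all_mean:.2f}")  # 1.27
```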

Second, the researchers looked at data from a single slice of time (1987-1999). While their findings are true for that period, many additional studies on the effectiveness of the seven SSRI antidepressants (only four of which made it into this study) have been published in the years since. Does that mean the researchers' findings are invalid? No, it just means that the FDA trial data, the dataset that should be the strongest and make the most compelling argument for a drug's approval by the FDA, was pretty darned weak when pooled and looked at together. It would be interesting if the researchers could do a similar analysis of the data acquired since then and see if they found similar results (an impossibility, by the way, because nearly all drug companies still don't release unpublished data on their drugs).

Third, researchers love to argue details and specifics. Is a 1.8-point change on the Hamilton depression scale clinically significant, or do you need a 3-point change? Well, the British National Institute for Clinical Excellence (NICE) published a clinical guideline in 2004 that says you need that 3-point difference, and since those folks are far smarter than I, I agree with them. But of course the U.S.-based FDA doesn't use British guidelines for determining clinical efficacy and, ultimately, drug approval (although it may consult such guidelines).

Patients taking a placebo, or sugar pill, had nearly an 8-point improvement on the Hamilton depression scale, a clinician-based rating of a patient's depression. People taking one of the four studied antidepressants had nearly a 10-point improvement on the same scale. So while people taking an antidepressant improved more than their sugar-pill counterparts, the difference likely wasn't a change one could feel or that others would notice.
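For what it's worth, the arithmetic is easy to check against the NICE criterion. The figures below are the approximate ones quoted above (7.8 and 9.6 reproduce the 1.8-point gap at the center of this debate), not exact values from the paper.

```python
# Rough figures quoted above: placebo improved "nearly 8" points and drug
# "nearly 10" points on the Hamilton scale; 7.8 and 9.6 give the 1.8-point
# drug-placebo gap under discussion. These are approximations, not the
# paper's exact numbers.
placebo_improvement = 7.8
drug_improvement = 9.6
NICE_THRESHOLD = 3.0  # NICE's 2004 bar for clinical significance

gap = drug_improvement - placebo_improvement
print(f"drug-placebo gap: {gap:.1f} points on the Hamilton scale")
print("clinically significant by the NICE criterion:", gap >= NICE_THRESHOLD)
```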

The upshot of this research is to show how very weak these four antidepressants' data were, and that the FDA actually approved these drugs despite this weakness. Perhaps the weakness could not be seen individually, in each study's data; if that's the case, the FDA should now be conducting its own internal meta-analyses on a single drug (or class of drugs) every year, to ensure its decisions remain valid in a more objective and empirical light.

 

David Baxter PhD

Late Founder
Understanding Research: Antidepressant Data
By Dr. Deborah Serani
Wed, Feb 27 2008

A new study of clinical trial data published yesterday in the Public Library of Science (PLoS) Medicine suggests that antidepressants benefit only some individuals, mostly helping severely depressed patients.

My professional experiences do not align with this study. Many in the mental and medical health fields feel the same way - read this MSNBC.COM series here and here - because understanding research can be a tricky and sometimes misleading experience.

There are clinical studies that say drinking coffee is bad for your health, while other studies say drinking coffee can be good for your health. Hey, what's up with that?

The key to understanding research is not only in understanding the numbers and the statistical design, but also in considering that clinical trials are artificial situations that don't mimic real life. When I come across clinical trial research, I consider the data as a "possibility", not an "absolute".

So when you read research, or the latest breaking news, consider what YOUR own unique needs are and make decisions accordingly.

And if you're someone who might be considering stopping your antidepressant medications because of this recent study, please do so carefully with your medical professional guiding the way.
 

David Baxter PhD

Late Founder
Psychologists 'prove' antidepressants are worthless - again
Thu, Feb 28 2008

A new study led by Hull University's Professor Irving Kirsch has found that antidepressants are statistically no better than placebos except for the most severely depressed patients, and even then the difference is mostly not due to the antidepressants being more effective but to this subset of patients apparently being less fooled by the sugar pills.

Of course, this is not the first study to find that antidepressants are expensive placebos; indeed, it isn't even the first from Prof Kirsch, who has previously published two studies [1, 2] with similar results. The medical literature is replete with studies, mostly meta-analyses, that reach similar conclusions. A surprising number were conducted by psychologists, which may or may not be significant.

The first thing to note is that this is a meta-analysis of a number of pre-approval clinical trials conducted between 1987 and 1999 to validate the effectiveness of the 4 antidepressants included in the current study. The difficulty is that not all the studies used the same methodology, and each involved only a small number of subjects. There were a total of 3,292 participants randomized to the medications and 1,841 to placebo, spread over the 35 studies used in the analysis.

Meta-analysis attempts to combine disparate information drawn from a number of studies into a single, larger data set. How effective this is depends on the studies selected for inclusion and the methodology used to meld the data. And this is where it gets interesting, because the decisions on what to include and leave out seem to have pretty much ensured the end result.

Prof Kirsch and his team attempted to remove bias by using all the pre-approval studies available to the U.S. Food and Drug Administration, both published and unpublished. They have made much of the fact that they were able to obtain unpublished data which had never before been publicly disclosed. However, the unpublished data underpins much of the final result.

Unpublished data is unpublished for a reason, and, despite the oft-repeated claims, it's usually not because the result wasn't what the drug company wanted, though this does occur. There wouldn't be too many medications that haven't failed to beat the placebo in at least one trial. However, mostly these studies don't see the light of day because the study was badly designed, or publishers are less interested in publishing negative results, or too many patients dropped out before completion.

Pre-approval trials also typically use highly selected individuals who are not representative of the patient population in general, and often place strict limitations on how the drug is used and what other treatments can be used at the same time.

One such limitation may be the doses given to study participants. When these drugs were first marketed, very high starting doses were recommended. However, it soon became apparent that many patients, particularly those with a comorbid anxiety disorder, were unable to tolerate these high initial doses, and from about 1998 manufacturers began to make half-dose starter packs.

Trials conducted after approval generally have less strict inclusion criteria and therefore are thought to provide a better guide as to how well drugs work in real life.

The trials that form the basis of this study mostly involved people with very severe depression, who form only a relatively small subset of depression patients. No trials of patients with (merely) severe depression were included in the analysis, and only one trial of moderately depressed patients was included. Therefore the results can tell us little about the efficacy of antidepressants in moderately to severely depressed patients. Despite this, the authors have made a number of claims about antidepressants' efficacy in these patient groups; how those claims can be substantiated is unclear.

The FDA prefers studies in which at least 70% of the subjects participated for the entire duration of the trial. Of the 35 clinical trials - 5 of fluoxetine (Prozac), 6 of venlafaxine (Effexor), 16 of paroxetine (Paxil), and 8 of nefazodone (Serzone, which is no longer commonly prescribed) - only 4 met this criterion. In the 29 trials for which the drop-out rate is known, on average just 63% of the medicated cohort and 60% of the placebo group completed the trial. It is unclear from the information provided whether completion rates in the unpublished studies differed significantly, so we can't know if this was a factor in their not being published.
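As a rough sketch of how that 70% preference works as a filter (the per-trial completion rates below are invented placeholders; the post gives only the 63% and 60% averages):

```python
# Hypothetical per-trial completion rates; the actual per-trial figures
# are not listed in the post, only the 63% / 60% averages.
completion_rates = {
    "trial 1": 0.72,
    "trial 2": 0.61,
    "trial 3": 0.58,
    "trial 4": 0.75,
}

FDA_MINIMUM = 0.70  # the FDA prefers trials in which >= 70% of subjects finish

passing = [name for name, rate in completion_rates.items() if rate >= FDA_MINIMUM]
print(f"{len(passing)} of {len(completion_rates)} trials meet the criterion: {passing}")
```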

It should be noted that nearly half of the studies, 16 of 35, are of paroxetine. This alone limits the value of the study, as it may well be telling us a lot about paroxetine and much less about the other antidepressants. The researchers assert that, based on results from other studies, their results are valid across all the modern antidepressants, but this cannot be verified from the study's data set.

Another consideration is that few patients respond adequately to the first antidepressant prescribed. On average it takes 2 to 3 attempts and about 6 months before the most effective antidepressant for a patient is found. So a failure to respond to one or more antidepressants does not necessarily mean that the drugs as a group are ineffective.

One-off trials, while important in determining an antidepressant's overall efficacy, can be of limited value at the individual level.

However, the factor that most reduces the value of this study is the duration of the trials selected. Most of the trials (21 of the 35) ran for 6 weeks; six ran for 4 weeks, two for 5 weeks, and six for 8 weeks.

Few patients respond to antidepressants in 4 weeks; most find it takes 6-12 weeks before there is a noticeable change in mood, and it can take months for the full effects to become apparent. Yet most of the selected trials terminated just as the drugs would have begun to have an effect. In an added complication, the researchers decided to use the data "taken from the last visit prior to trial termination," not at trial termination, which could mean some of the participants were medicated for an even shorter period.

So what should we make of this, and other similar studies?

The fact is, many patients do get well after taking antidepressants (as do dogs and other creatures thought not to be susceptible to the placebo effect).

Indeed, many more patients have improved after being prescribed antidepressants than have ever been helped by psychotherapy, not because the drugs are necessarily better, but because for the vast majority of patients they are easier to access and sometimes cheaper.

In Great Britain, counseling can be exceptionally difficult to come by on the NHS, and waiting lists may be up to a year. Given estimates that Britain needs another 10,000 psychotherapists just to meet current needs, these delays are unlikely to lessen anytime soon. In America, many insurance plans limit patients to a handful of visits per year, and some do not pay for any. But at least in both countries it is possible to obtain non-drug treatment. For most of humanity, this simply isn't an option.

Source: Kirsch I, Deacon BJ, Huedo-Medina TB, et al. Initial Severity and Antidepressant Benefits: A Meta-Analysis of Data Submitted to the Food and Drug Administration. PLoS Med 5(2): e45 doi:10.1371/journal.pmed.0050045

Caution:
Antidepressants should never be quit "cold turkey" but should be weaned off slowly over a period of weeks or months. Ask your physician for advice if you intend to discontinue treatment.

Admin note: PLoS journals are online publications only and, as far as I know, are not peer-reviewed. Therefore, such publications are not subject to the same scrutiny regarding methodology, statistics, and conclusions as would be the case for peer-reviewed print journals.
 

Halo

Member
Interesting and very informative article. What caught my eye was this:

Yet most of the selected trials terminated just as the drugs would have begun to have an effect. In an added complication, the researchers decided to use the data "taken from the last visit prior to trial termination," not at trial termination, which could mean some of the participants were medicated for an even shorter period.
 