Politics : A US National Health Care System?


To: Lane3 who wrote (7233), 6/29/2009 5:52:54 AM
From: Lane3
 
Just read a post on the Dr. Eades blog that speaks to observational studies and causality - conflating the two is a "misunderstanding" practiced regularly by both medical reporters and doctors. Ya gotta understand this stuff if you're in the business of making medical recommendations.

The placebo effect and observational studies

I got the following comment (reprinted here in part) on my last post:

Dr Mike, I must say I’m a bit uneasy about your attitude to observational studies. Doesn’t that in effect disparage most “traditional” knowledge, whether architectural (“If we build things in this way, they don’t seem to fall down”), medical (“People seem to recover from their fever when I give them this combination of herbs”), societal (“If we set up this kind of committee, things seem to function more or less peacefully and efficiently”)? I understand that an observational study doesn’t prove anything by itself but it seems that it’s a more formalized kind of traditional observation, one that, crucially, makes itself transparent and therefore open to future reinterpretation. I may be misunderstanding your stance, but I worry that in effect it negates most of humankind’s historical progress, and any kind of inquiry that doesn’t fit your preferred methods.

This commenter sets up the problem in a way that makes it easy to explain. And probably more clearly than I’ve explained it in the past.

As I pointed out in my post on observational studies, these kinds of studies are worthless for proving causality, but useful in defining hypotheses that can be tested. Let’s take one line from the comment and use it to demonstrate what I mean.

“People seem to recover from their fever when I give them this combination of herbs.”

A perfect example. Let’s say that some witch doctor sometime in the past came up with an herbal concoction that helped his ‘patients’ recover from a fever. Over the years this herbal therapy was passed down from witch doctor to witch doctor, and it worked without fail. A traditional doctor heard of the cure, tried it on a few patients and found that it did indeed seem to work. Every time the good doctor prescribed this herbal remedy, patients had their fevers break and began to get well. This doctor told other doctors, many of whom began using the herbs, and their patients, too, recovered from their fevers. Patients swore by the stuff and rushed to their doctors to get it whenever they got sick. Traditional doctors and witch doctors alike were in agreement that the potion worked like magic.

Then comes a scientist who looks at the data and says, hey, here is a great observational study. All the observational data indicate this stuff works like a charm, so let’s make that our hypothesis, which, simply stated, is that Herbal Mixture X reduces fever in those who take it.

Now that the hypothesis has been developed, it needs to be tested. The best way to test it is with a randomized, double-blind, placebo-controlled study. Our scientist recruits doctors in several clinics across the country who are familiar with the workings of Herbal Mixture X (HMX) and provides them with a study protocol and unlabeled HMX and placebo, both of which look identical. As per the protocol, any patient who comes into the clinic with a temperature above 101° F gets a randomly generated number and either the HMX or the placebo. Neither the patient nor the doctor knows who is getting the real stuff and who’s getting the placebo, which makes the study double-blind. If the doctor knew who was getting the HMX, then the study would be single-blind, not double-blind, which would not remove the physician bias from the study. The assumption is that if the doctor doesn’t know which is which, he/she will treat all patients the same and not let some subtle bias slip into the experiment.
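To make the randomization and blinding concrete, here is a rough sketch in Python - my own illustration, not anything from the post or a real trial protocol, and the ID scheme and arm labels are invented. Each patient gets a random study ID and a coded kit; only the coordinator's sealed key maps IDs to HMX or placebo, so neither doctor nor patient knows who got what.

# Sketch only: randomized, blinded assignment (all details hypothetical).
import random
import uuid

def enroll_patients(n, seed=None):
    rng = random.Random(seed)
    assignment_key = {}   # maps study ID -> arm; held only by the coordinator
    blinded_ids = []      # all the clinic ever sees
    for _ in range(n):
        patient_id = uuid.uuid4().hex[:8]        # random study ID
        arm = rng.choice(["HMX", "placebo"])     # random allocation
        assignment_key[patient_id] = arm
        blinded_ids.append(patient_id)
    return blinded_ids, assignment_key

ids, key = enroll_patients(10, seed=42)
print(ids)   # clinics work only from these IDs; the key stays sealed until the codes are broken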

When a patient presents to the clinic with a fever, the doctor gives either HMX or placebo and waits to see what happens. The doctor or staff contact the patients daily and have them report their temperatures. When the temperature has returned to normal, the data point is entered on the patient’s chart. After a specific number of patients have gone through the protocol, the codes are broken to see which patients got the HMX and which got the placebo. The scientist then crunches the data to see whether the supposed fever-lowering ability of HMX is statistically significantly different from that of the placebo. And, lo and behold, let’s say for argument’s sake there is no difference.
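And a similarly rough sketch of that analysis step, again my own illustration with made-up numbers: the two arms are simulated so that they truly have the same average time back to a normal temperature, and a simple permutation test (one reasonable choice, not necessarily what such a study would actually use) checks whether the observed difference is statistically significant.

# Sketch only: compare days-to-normal-temperature after the codes are broken.
import random

rng = random.Random(0)
hmx = [rng.gauss(3.0, 1.0) for _ in range(50)]       # simulated days until fever resolved
placebo = [rng.gauss(3.0, 1.0) for _ in range(50)]   # same distribution: no real effect

def mean(xs):
    return sum(xs) / len(xs)

observed = mean(hmx) - mean(placebo)

# Permutation test: reshuffle the pooled data many times and ask how often a
# difference at least this large shows up by chance alone.
pooled = hmx + placebo
extreme = 0
n_perm = 10000
for _ in range(n_perm):
    rng.shuffle(pooled)
    diff = mean(pooled[:50]) - mean(pooled[50:])
    if abs(diff) >= abs(observed):
        extreme += 1

p_value = extreme / n_perm
print("difference in means: %.2f days, p = %.3f" % (observed, p_value))
# A large p-value is the "no difference" result described above: HMX looks no better than placebo.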

There is a huge outcry from all the docs who have used the treatment. The study was flawed, they scream. We know this stuff works. We’ve used it for years, and we’ve seen it work. Same goes for the patients who have taken HMX over the years: they swear by it, too. They say, We don’t care what one stupid study showed - we know it works.

So, another group of scientists takes on the project and repeats the study. And gets the same results. HMX works no better than placebo. All the same outcries arise, and so the study is repeated a few more times, all with the same result. Clearly, HMX works no better than placebo when compared in a double-blind, placebo-controlled study, yet thousands of doctors and countless patients firmly believe in its efficacy. What happened? The observational data seemed to strongly ‘prove’ that HMX worked, but the actual testing showed it to be worthless. What’s going on here?

What’s going on, and what makes HMX work, is the magic of the healer telling the patient that the therapy is potent, along with the patient’s belief in both the healer and the strength of the remedy. In other words, the placebo effect.

Don’t believe me? With the recent death of Michael Jackson, reported by some as due to an overdose of a potent painkiller, said painkiller, Demerol, is much in the news. I just read a piece written by a doctor on the placebo effect that describes the strength of this phenomenon. Most physicians who have been in practice for any length of time have similar stories:

Jane D. was a regular visitor to our ER, usually showing up late at night demanding an injection of the narcotic Demerol, the only thing that worked for her severe headaches. One night the staff psychiatrist had the nurse give her an injection of saline instead. It worked! He told Jane she had responded to a placebo, discussed the implications, and thought he’d helped her understand that her problem was psychological. But as he was leaving the room, Jane asked, “Can I get that new medicine again next time instead of the Demerol? It really worked great!”

A placebo as strong as Demerol? You bet. Happens all the time.

proteinpower.com