Politics : A US National Health Care System?


To: Lane3 who wrote (16099) on 4/4/2010 4:21:03 PM
From: J_F_Shepard
 
"Easter Sunday is the day Christians celebrate Jesus rising from the dead. What does that have to do with interpreting correlations?"

Assuming you agree, what correlation convinced you of that?

With regard to other things, like the rats, etc.: do you have anything to show, such as your own publications or models, to demonstrate a scientific understanding of experimental technique?



To: Lane3 who wrote (16099) on 4/5/2010 10:24:35 AM
From: HPilot
 
"Now, the reporting of that study has widely asserted that the study demonstrated that fat is addictive. But the study didn't isolate the fat variable so that claim isn't valid. It could have been the fat. Or it could have been the sugar. Or a combination of the two. Or it could have been the nitrites in the bacon and sausage or the salt in everything or the lactose in the cheese."

I suspect it was because the food was one hell of a lot better tasting than your typical rat feed!
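
The complaint that the study didn't isolate the fat variable is the classic confounded-design problem. Purely as a sketch (the design, effect sizes, and numbers below are invented, not taken from the actual rat study), a 2x2 factorial layout is the standard way to separate a fat effect from a sugar effect: vary each ingredient independently and compare the marginal means.

import numpy as np

rng = np.random.default_rng(1)
n = 500  # hypothetical animals per diet group

# Hypothetical 2x2 factorial: fat and sugar varied independently.
# In this made-up scenario only sugar raises the "compulsive eating" score.
groups = {}
for fat in (0, 1):
    for sugar in (0, 1):
        true_effect = 0.0 * fat + 1.5 * sugar
        groups[(fat, sugar)] = true_effect + rng.normal(size=n)

# Main effect of a factor = average change in score when that factor is switched on.
fat_effect = (groups[(1, 0)].mean() + groups[(1, 1)].mean()
              - groups[(0, 0)].mean() - groups[(0, 1)].mean()) / 2
sugar_effect = (groups[(0, 1)].mean() + groups[(1, 1)].mean()
                - groups[(0, 0)].mean() - groups[(1, 0)].mean()) / 2

print(f"estimated fat effect:   {fat_effect:+.2f}")    # near 0: fat alone does nothing here
print(f"estimated sugar effect: {sugar_effect:+.2f}")  # near +1.5

With only the combined fat-plus-sugar diet, as in the reported study, those two effects cannot be told apart, which is the point of the objection.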



To: Lane3 who wrote (16099) on 4/6/2010 10:33:26 PM
From: J_F_Shepard
 
"Correlation isn't causation. Unless you isolate that variable and rule out the rest, you can't even begin to think about causation. So it is with health care systems"

No, it is not, but a correlation, if it's strong enough, will give a hint of causation. More importantly, it will steer an experimental process in the direction of finding causation; the other variables get eliminated in the research process.
When you're involved in R&D, you design your experiments to go in the direction indicated by the observed correlation. That may not prove causation, but it will always point you in the proper direction (either away from the hypothesis or forward) for your next set of experiments.
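
To illustrate both halves of that point, here is a hidden-confounder toy simulation (a minimal sketch; the variables and coefficients are made up and are not from any study discussed here): a lurking factor that drives two measurements produces a strong correlation between them even though neither causes the other, and the correlation largely disappears once that factor is isolated and held fixed.

import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hidden confounder Z drives both X and Y; X and Y have no causal link to each other.
z = rng.normal(size=n)
x = 2.0 * z + rng.normal(size=n)
y = 3.0 * z + rng.normal(size=n)

print("raw corr(X, Y):", round(float(np.corrcoef(x, y)[0, 1]), 3))

# "Isolate the variable": look only at samples where Z is held (nearly) fixed.
held = np.abs(z) < 0.05
print("corr(X, Y) with Z held fixed:",
      round(float(np.corrcoef(x[held], y[held])[0, 1]), 3))

The strong raw correlation is exactly the kind of hint that tells you where to point the next experiment; the stratified result is what controlled experiments then sort out.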



To: Lane3 who wrote (16099) on 4/7/2010 7:12:35 PM
From: TimF
 
Govt Muzzles More
By Robin Hanson · March 31, 2010 1:30 pm

Sponsorship [of pharmaceutical research and drug trials] by manufacturers has been found to be associated with a reduced likelihood of the reporting of adverse results. Likewise, a significant link has been found between industry funding and the likelihood that results of a randomized trial will support a new therapy. … One proposed solution to this problem is to increase public funding for the conduct of research on therapeutic effectiveness. Ironically, that may well aggravate the problem. In July 2007, AcademyHealth, a professional association of health services and health policy researchers, published results of a study of sponsor restrictions on the publication of research results. Surprisingly, the results revealed that more than three times as many researchers had experienced problems with government funders related to prior review, editing, approval, and dissemination of research results. In addition, a higher percentage of respondents had turned down government sponsorship opportunities due to restrictions than had done the same with industrial funding. Much of the problem was linked to an “increasing government custom and culture of controlling the flow of even non-classified information.”

Of particular concern is a provision of the Senate-passed Patient Protection and Affordable Care Act, [regarding] the … new Patient-Centered Outcomes Research Institute to conduct comparative-effectiveness research. The bill allows the withholding of funding to any institution where a researcher publishes findings not “within the bounds of and entirely consistent with the evidence,” a vague authorization that creates a tremendous tool that can be used to ensure self-censorship and conformity with bureaucratic preferences. This appears to be an effort in part to bypass the court order in Stanford v. Sullivan, a case involving federal contractual requirements that would have banned researchers from any discussion of their work without pre-approval by the Department of Health and Human Services. The order held that such blanket bans are “overly broad” and constitute “illegal prior restraint” on speech. The language in the Senate bill attempts to overcome this hurdle by eliminating prior restraint, but using the threat of post hoc punishment as an incentive for self-censorship.

That was written Feb 8; I just contacted its author and he doesn’t know if this made it into the final med bill or not. Alas this bodes poorly for the new comparative-effectiveness research program.

overcomingbias.com

Steven
March 31, 2010 at 2:15 pm | Reply

It’s in the final version of the bill. Section 6301 of the Patient Protection Act (Public Law No: 111-148) adds the following section to the Social Security Act:

“Any research published under clause (ii)(IV) shall be within the bounds of and entirely consistent with the evidence and findings produced under the contract with the Institute under this subparagraph. If the Institute determines that those requirements are not met, the Institute shall not enter into another contract with the agency, instrumentality, or entity which managed or conducted such research for a period determined appropriate by the Institute (but not less than 5 years).”

This act was subsequently modified by reconciliation, but looking at the table of contents reveals that Section 6301 was not among the modified sections.

overcomingbias.com

Michael Caton
April 1, 2010 at 3:46 pm | Reply

RE the meta-finding that sponsored drug studies more frequently have positive answers regarding the treatments of the sponsors – this is not necessarily indicative of twisting the truth and is better explained as a form of editorial control over research questions. That is, if you spend money developing a drug, you're going to try to develop uses of it that you have a pretty good feeling in advance WILL WORK, and you'll ask limited questions appropriate to that. You can have a better than 50% sense of whether something will work before you investigate it with rigor to get a MUCH better than 50% answer that it works. So I would argue the effect we're seeing is more the result of conservative investigations than bias, and this is not inappropriate. But it's worth thinking of ways to distinguish the two possibilities.

overcomingbias.com
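
Caton's "conservative investigations" argument can be made concrete with a toy calculation (all numbers here are assumptions for illustration, not estimates from the literature): if sponsors only run trials on therapies they already expect to work, while other funders cast a wider net, the sponsor-funded trials will come out positive more often even when every trial is run and reported honestly.

import numpy as np

rng = np.random.default_rng(2)
n_trials = 10_000
power, alpha = 0.85, 0.05   # assumed per-trial sensitivity and false-positive rate

def positive_rate(p_truly_works):
    """Fraction of honestly run trials that report a positive result,
    given how often the tested therapy actually works."""
    works = rng.random(n_trials) < p_truly_works
    positives = np.where(works,
                         rng.random(n_trials) < power,   # true positives
                         rng.random(n_trials) < alpha)   # false positives
    return positives.mean()

# Hypothetical priors: sponsors pre-screen candidate uses harder than other funders.
print("sponsor-style selection (70% of tested uses actually work):",
      round(float(positive_rate(0.70)), 3))
print("broader selection (40% of tested uses actually work):",
      round(float(positive_rate(0.40)), 3))

The gap in positive rates here comes entirely from which questions get asked, not from any distortion of individual results, which is exactly the distinction Caton is drawing.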