Politics : Actual left/right wing discussion


To: koan who wrote (8748)12/5/2010 8:03:03 AM
From: Lane31
 
Easy to criticize. You know a better system?

That's not the point. The point is that there is corruption in the system, regardless of how much of it there is relative to any other system. Any corruption of science by scientists demonstrates lack of integrity. Your assertion that they "do the best they can" is condoning corruption of science, which you claim to value. The question is not where or whether there's a better system. The question is how much credibility to award the system we have. Your putting scientists on a pedestal is unwarranted. That's not to suggest that I find flawed scientists any more corrupt than the average person. My point is only that they don't belong on your pedestal. I certainly don't want to defer running the country to them.

Evolution is more complicated than knowing the earth is round, but not so complicated as understanding existentialism, which many, if not most, great minds do not, so it is not a good proxy for measurement.

I take your point about the relative complexity of evolution. But you are judging intelligence using a measure that ignores other variables. I agree with you that anyone who can't understand evolution at a basic level is conspicuously dim. But understanding it and acknowledging it as scientific truth are not the same thing. Sure, some people who deny evolution may really be that stupid. But other factors are in play. A key factor is cognitive dissonance, which is more about bias than about brains. Smart people suffer from cognitive bias, too, including some of those aforementioned scientists. Another factor is one's hierarchy of values: what weight a person puts on scientific knowledge vs. belief. And cultural values: how one grew up, tradition. Personally, I have a great deal of difficulty getting my head around any degree of religiosity, let alone fundamentalism. But I think it unfair and simplistic to relegate it to stupidity. An agile brain does not oversimplify or judge without examining all the factors.

With regard to evolution as a political issue, I think you are overestimating the amount of serious opposition. Sometimes folks make statements just to affirm their group identity. Politicians do that all the time. Look at the political threads on SI where sides are drawn up. Someone from one team posts something banal or even downright stupid and a bunch of folks from his team jump in with "great post!" They're not really judging it a brilliant post. They're saying, "yeah, go team, I'm on your side." Evolution is a populist issue on the right. Everyone wears the t-shirt whether they agree with the slogan on it or not. I raise this not to show approval of that process. Since I disapprove of mindless team affiliation and political polarization every bit as much as I disapprove of corrupting science, I mention it only to encourage a more thoughtful and respectful examination of the topic. Populist crap makes me gag, but not to the point of giving up my intellectual integrity in judging the basis for it.

Would you quibble with me if I were to say I believe the sun is round?

I would if you did so in a discussion about science vs religion. It's the critical difference.

No, the evolution deniers are more like the global warming deniers and the "gay is natural" deniers. Same population over and over. The correlations are very strong.

Yesterday, one topic was the association between compassion and science. That association turned out to be nothing more than two things the liberals have in common, in your mind. The two items have no common root and do not stem from some common principle; they are just two items in a collection that supposedly coincide in said cohort. So now you list three things that correlate with a different cohort. Do they have some common basis, or are they as independent as are science and compassion?

From your phrasing, I infer that you consider denial of science to be the commonality. I suggest that a more thoughtful examination shows that to be another simplistic judgment. The gay thing is based on some combination of religion, tradition, and homophobia. The evolution thing is about religion. The climate thing is about economics, statism, change, and science. Re the science, virtually no one challenges the scientific fact that the climate is changing. It's what, if anything, to do about the change that's the political issue. To the extent that it's actually about science, it is skepticism about the extent to which religious-like fervor might be compromising the integrity of the science.

(On this issue, BTW, it's the left that is resisting change, not the right, as I mentioned up thread.)

You start with the premise that the right is anti-science. Then you cherry-pick three issues that they share and that you can, if you are so motivated, connect to science, and then claim "eureka, a correlation." A correlation can be nothing more than coincidence. Or an amusing irrelevancy, like liberals preferring arugula and conservatives preferring iceberg. More is needed to find deep meaning among factors that coincide in a cohort.



To: koan who wrote (8748)12/12/2010 8:49:25 AM
From: Lane31
 
More on my point re the credibility of scientists and peer review:

Gambling Save Science?
By Robin Hanson · December 12, 2010 12:30 am ·

The latest New Yorker:

All sorts of well-established, multiply confirmed findings have started to look increasingly uncertain. … This phenomenon … is occurring across a wide range of fields, from psychology to ecology. … The most likely explanation for the decline is … regression to the mean. … Biologist Michael Jennions argues that the decline effect is largely a product of publication bias. Biologist Richard Palmer suspects that an equally significant issue is the selective reporting of results. … The disturbing implication … is that a lot of extraordinary scientific data is nothing but noise. (more)
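The mechanism the article describes, publication bias compounded by regression to the mean, is easy to demonstrate. The sketch below (all parameters are invented for illustration) simulates many small studies of a modest true effect, "publishes" only those that clear p < .05, and then replicates the published ones; the published estimates come out inflated, and the replications regress back toward the truth, which is exactly the "decline effect":

```python
import random, math

random.seed(0)

TRUE_EFFECT = 0.2   # real underlying effect size (assumed for illustration)
SIGMA = 1.0         # per-subject noise
N = 30              # subjects per study
SE = SIGMA / math.sqrt(N)

def run_study(n):
    """Return the observed mean effect from one study of n subjects."""
    return sum(random.gauss(TRUE_EFFECT, SIGMA) for _ in range(n)) / n

# Run many studies, but "publish" only those significant at p < .05 (z > 1.96).
published = []
for _ in range(2000):
    est = run_study(N)
    if est / SE > 1.96:
        published.append(est)

mean_published = sum(published) / len(published)

# Exact replications of the published studies regress toward the true effect.
replications = [run_study(N) for _ in published]
mean_replication = sum(replications) / len(replications)

print(f"true effect:         {TRUE_EFFECT}")
print(f"mean published:      {mean_published:.3f}")    # inflated by selection
print(f"mean on replication: {mean_replication:.3f}")  # the "decline effect"
```

No misconduct is required anywhere in this simulation; the inflation comes purely from the filter on significance.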

Academics are trustees of one of our greatest resources – the accumulated abstract knowledge of our ancestors. Academics appear to spend most of their time trying to add to that knowledge, and such effort is mostly empirical – seeking new interesting data. Alas, for the purpose of intellectual progress, most of that effort is wasted. And one of the main wastes is academics being too gullible about their own and their allies’ findings, and too skeptical about rivals’ findings.

Academics can easily coordinate to be skeptical of the findings of non-academics and low-prestige academics. Beyond that, each academic has an incentive to be gullible about his own findings, and his colleagues, journals, institutions, etc., share in that incentive, as they gain status by association with him. The main contrary incentive is a fear that others will at some point dislike a finding’s conclusions, methods, or conflicts with other findings.

Academics in an area can often coordinate to declare their conclusions reasonable, methods sound, and current conflicts minimal. If they do this, the main anti-gullibility incentives are outsiders’ current or future complaints. And if an academic area is prestigious and unified enough, it can resist and retaliate against complaints from academics in other fields, the way medicine now easily resists complaints from economics. Conflicts with future evidence can be dismissed by saying they did their best using the standards of the time.

It is not clear that these problems hurt academics’ overall reputation, or that they care much to coordinate to protect it. But if academics wanted to limit the gullibility of academics in other fields, their main tool would be simple clear social norms, like those now encouraging public written archives, randomized trials, controlled experiments, math-expressed theories, and statistically-significant estimates.

Such norms clearly remain insufficient, as great inefficiency remains. How can we do better? The article above concludes by suggesting:

We like to pretend that our experiments define the truth for us. But … when the experiments are done, we still have to choose what to believe.

True, but of little use. The article’s only other suggestion:

Schooler says “Every researcher should have to spell out, in advance, how many subjects they’re going to use, and what exactly they’re testing, and what constitutes a sufficient level of proof.”

Alas this still allows much publication bias, and one just cannot anticipate all reasonable ways to learn from data before it is collected. Arnold Kling suggests:

An imperfect but workable fix would be to standardize on a lower significance level. I think that for most ordinary research, the significance level ought to be set at .001.

I agree this would reduce excess gullibility, though at the expense of increasing excess skepticism. My proposal naturally involves prediction markets:

When possible, a paper whose main interest is particular “interesting” empirical estimates should be accompanied by a description of a much better (i.e., larger, later) study that, if funded, would give more accurate estimates. There should be funding to cover a small (say 0.001) chance of actually doing that better study, and to subsidize a conditional betting market on its results, open to a large referee community with access to the paper for a minimum period (say a week). The paper should not gain prestigious publication if current market estimates regarding that better estimate do not support the paper’s estimates.

Theory papers containing proofs might similarly offer bets on whether errors will be found in them, and might also offer conditional bets on if more interesting and general results could be proven, if sufficient resources were put to the task.
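The proposal depends on subsidized betting markets, and Hanson's own logarithmic market scoring rule (LMSR) is the standard way to subsidize one: an automated market maker always quotes a price, so even a thin referee community can trade. The sketch below is a minimal illustration (the class name and numbers are my own, not from the post); for a binary market the sponsor's worst-case loss is bounded by b·ln(2):

```python
import math

class LMSRMarket:
    """Minimal logarithmic market scoring rule for a binary outcome
    (e.g., "the larger replication study confirms the paper's estimate")."""

    def __init__(self, b=100.0):
        self.b = b               # subsidy parameter; bounds sponsor loss
        self.q = [0.0, 0.0]      # net shares sold on [not-confirm, confirm]

    def _cost(self, q):
        # C(q) = b * ln(sum_i exp(q_i / b))
        return self.b * math.log(sum(math.exp(x / self.b) for x in q))

    def price(self, outcome):
        """Current market probability of the given outcome."""
        exps = [math.exp(x / self.b) for x in self.q]
        return exps[outcome] / sum(exps)

    def buy(self, outcome, shares):
        """Buy `shares` of `outcome`; returns the cost charged to the trader."""
        old = self._cost(self.q)
        self.q[outcome] += shares
        return self._cost(self.q) - old

# A referee who believes the replication would confirm the paper buys "confirm":
m = LMSRMarket(b=100.0)
cost = m.buy(1, 50)
print(f"paid {cost:.2f} for 50 shares; market P(confirm) now {m.price(1):.3f}")
```

Under Hanson's rule, a journal would look at that market probability after the refereeing window and decline to publish if it sits below the paper's own claims.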

More quotes from that New Yorker article:

The study turned [Schooler] into an academic star. … It has been cited more than four hundred times. … [But] it was proving difficult to replicate. … His colleagues assured him that such things happened all the time. … “I really should stop talking about this. But I can’t.” That’s because he is convinced he has stumbled on a serious problem, one that afflicts many of the most exciting new ideas in psychology. …

Jennions admits that his findings are troubling, but expresses a reluctance to talk about them publicly. “This is a very sensitive issue for scientists,” he says. … In recent years, publication bias has mostly been seen as a problem for clinical trials … But it’s becoming increasingly clear that publication bias also produces major distortions in fields without large corporate incentives, such as psychology and ecology. …

“Once I realized that selective reporting is everywhere in science, I got quite depressed,” Palmer told me. … “I had no idea how widespread it is.” … “Some – perhaps many – cherished generalities are at best exaggerated … and at worst a collective illusion.” … John Ioannidis … says … “We waste a lot of money treating millions of patients and doing lots of follow-up studies on other themes based on results that are misleading.”

overcomingbias.com