Politics : A US National Health Care System?


To: Lane3 who wrote (27313), 9/18/2013 5:02:19 PM
From: Brumar89
 
In this case, the pig would be a piglet.

"A part-time job may be less valuable than a full-time job" involves no lipstick. Quite the contrary. The shortfall is not masked with lipstick or anything else but, rather, explicitly highlighted.

"...but a job is still a job." Just as a pig is still a pig.



To: Lane3 who wrote (27313), 9/21/2013 12:04:39 PM
From: Lane3
 

Rational group-think

September 20, 2013 at 6:00 am
Austin Frakt

In unsurprising news, party ideology is associated with views on Obamacare. Are these evidence-based views? Most likely not.

Thus, at one level—a very individualistic one—it will make perfect sense in this situation for individuals to attend to information, including evidence of what is known to science, that promotes the formation of identity-congruent beliefs. Again, even citizens of modest science literacy and critical reasoning skills will likely be able to form such beliefs without difficulty—because figuring out what view prevails among those with whom one shares one’s most important connections depends on a basic kind of cultural competence, not on an understanding of or a facility with empirical evidence. But those citizens who enjoy above-average science comprehension will not face any less incentive to form such beliefs; indeed, they will face pressure to use their intelligence and reasoning skills to find evidentiary support for identity-congruent beliefs, the comprehension of which would likely exceed the capacity of most of their peers (Kahan, Peters, Wittlin, Slovic, Ouellette, Braman & Mandel 2012).

At a collective level, of course, this style of engaging decision-relevant science can be disastrous. If all individuals follow it at the same time, it will impede a democratic society from converging, or at least converging as quickly as it otherwise would, on understandings of fact consistent with the best available evidence on matters that affect their common welfare. This outcome, however, will not change the incentive of any individual—who despite the harm he or she suffers as a result of unaddressed risks or ill-considered policies cannot change the course of public policymaking by changing his or her personal stances, which, if contrary to the ones that prevail in that person’s group, will continue to expose him or her to considerable social disadvantage. [...]

We submit that a form of information processing cannot reliably be identified as “irrational,” “subrational,” “boundedly rational” or the like independent of what an individual’s aims are in making use of information. It is perfectly rational, from an individual-welfare perspective, for individuals to engage decision-relevant science in a manner that promotes culturally or politically congenial beliefs. Making a mistake about the best-available evidence on an issue like climate change, nuclear waste disposal, or gun control will not increase the risk an ordinary member of the public faces, while forming a belief at odds with the one that predominates within important affinity groups of which he or she is a member could expose him or her to an array of highly unpleasant consequences (Kahan 2012).

That’s from a very interesting paper by Dan Kahan on “motivated numeracy”. It is, believe it or not, about an interesting randomized experiment. I’d tell you about it, but you could just as well read the paper. It’s ungated. Recognizing that it’s long and you’re busy, I recommend you at least read Chris Mooney’s summary. If even that’s too long for you, try Kevin Drum. The real reason I’m not writing a longer post is that Kevin and Chris have already done better jobs than I could do.

Politics Makes Morons of Us All

By Kevin Drum

Wed Sep. 4, 2013 3:52 PM PDT

Today, Yale law professor Dan Kahan presents evidence that we Americans really suck at math. "Correctly interpreting the data was expected to be difficult," he says of the test subjects from his latest study, and he turned out to be right. But all he was asking them to do was calculate a simple percentage. If 200 out of 300 people in one group get better by taking a pill and 100 out of 125 people in a different group get better by doing nothing, which is better? Taking a pill or doing nothing? You'd pass Kahan's test if you understood that the raw numbers aren't enough to get the right answer. You have to calculate the percentage of each group that got better.
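
For concreteness, here is that comparison worked out as a short Python sketch, using only the figures quoted above: the raw counts favor the pill, but the rates favor doing nothing.

```python
# Percentage comparison for the two groups described above.
pill_improved, pill_total = 200, 300    # improved after taking the pill
none_improved, none_total = 100, 125    # improved after doing nothing

pill_rate = pill_improved / pill_total  # ~66.7%
none_rate = none_improved / none_total  # 80.0%

print(f"Pill group improved:       {pill_rate:.1%}")
print(f"Do-nothing group improved: {none_rate:.1%}")
print("Better option:", "the pill" if pill_rate > none_rate else "doing nothing")
```

So the do-nothing group fared better (80% versus roughly 67%), even though it produced the smaller raw count of improvements.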

It's disheartening that most people couldn't figure that out, though hardly unexpected. But what came next is...well, not unexpected, maybe. But certainly discouraging. Kahan ran the exact same test with the exact same data, except this time the question was about gun bans and crime levels. Half of the time, he presented data suggesting that a gun ban increased crime, while the other half of the time the data suggested that a gun ban decreased crime. And guess what? The test subjects who were very good at math suddenly got really stupid if they didn't like the answer they got. Here's the chart:

[Chart: correct-answer rates by numeracy and political leaning for the gun-ban version of the problem.]
This comes via Chris Mooney, who describes the whole thing in much more detail here. However, there's one big caveat: If I'm interpreting the dataset correctly, the sample size of highly numerate subjects is very small. Roughly speaking, there were about 30 liberals and 30 conservatives who were highly numerate and were given the gun ban version of the test. That's not a lot.
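
To put a rough number on that caveat, here is a back-of-the-envelope check (my own illustration, assuming a simple binomial proportion and a normal approximation, not a calculation from Kahan's paper): with about 30 subjects in a cell, the 95% confidence interval on that cell's correct-answer rate is quite wide.

```python
import math

# Rough uncertainty on a proportion estimated from ~30 subjects
# (normal-approximation confidence interval; worst-case p = 0.5).
n = 30
p = 0.5
half_width = 1.96 * math.sqrt(p * (1 - p) / n)
print(f"95% CI half-width with n={n}: +/- {half_width:.0%}")  # about +/- 18 points
```

An interval that wide is one reason the follow-ups with more subjects mentioned below would be worth having.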

On the other hand, the effect size is pretty stunning. There's a huge difference in the rate at which people did the math correctly depending on whether they liked the answer they got. I'd like to see some follow-ups with more subjects and different questions, but it sure looks as if we'd probably see the same dismal effect.

How big a deal is this? In one sense, it's even worse than it looks. Aside from being able to tell that one number is bigger than another, this is literally about the easiest possible data analysis problem you can pose. If ideologues actively turn off their minds even for something this simple, there's really no chance of changing their minds with anything even modestly more sophisticated. This is something that most of us pretty much knew already, but it's a little chilling to see it so glaringly confirmed.

But in another sense, it really doesn't matter at all. These days, even relatively simple public policy issues can only be properly analyzed using statistical techniques that are beyond the understanding of virtually all of us. So the fact that ideology destroys our personal ability to do math hardly matters. In practice, nearly all of us have to rely on the word of experts when it comes to this kind of stuff, and there's never any shortage of experts to crunch the numbers and produce whatever results our respective tribes demand.

We believe what we want to believe, and neither facts nor evidence ever changes that much. Welcome to planet Earth.