Politics: Al Gore vs George Bush: the moderate's perspective


To: TigerPaw who wrote (3916), 10/29/2000 8:55:20 PM
From: ColtonGang
 
Pollish Sausage
By William Saletan

Posted Thursday, Oct. 26, 2000, at 4:00 p.m. PT

No matter who you think is going to win the presidential election,
you can find a poll to back up your opinion. If you're betting on
George W. Bush, you can point to the Voter.com Battleground
2000 survey, which consistently shows Bush ahead. If you're
betting on Al Gore, you can point to the New York Times/CBS
poll, which usually indicates a small lead for Gore. If you think the
debates helped Bush a lot, you can point to the CNN/USA
Today/Gallup poll, which found a big Bush surge after each
encounter. If you think the debates didn't help Bush much, you
can point to the Reuters/MSNBC/Zogby survey, which has rarely
shifted more than two points a day.

Why do the polls confirm so many theories? Because theories are
built into the polls. Each polling outfit has its own objectives and
biases. In the case of media surveys, these objectives and biases
aren't about ideology; they're about news-making and social
science. Some tracking pollsters want to find big day-to-day
changes, others want stability. Some want to narrow the
population they study, others want to broaden it. Some fear
passive bias, others fear active bias. Each pollster designs his
survey to suit his preferences, and each gets the results he's
looking for. Like the rest of us, pollsters have theories about who
will vote and how. Polls don't confirm these theories. They
incorporate them.

This year's big controversy is the CNN/USA Today/Gallup
tracking poll. Other pollsters are dismayed at Gallup's radical
swings. In the two days after the first debate, Gallup's three-day
sample went from an 11-percentage-point Gore lead to a
seven-point Bush lead. Last weekend, Bush had a nine-point lead
in the Gallup sample; two days later, Gore had grabbed the lead.
Contrast this with the Zogby survey, which moved only four
points and two points during those periods, respectively. Why the
difference? Because Gallup and Zogby are looking for different
things. Gallup is trying to capture daily fluctuations, while Zogby is
trying to filter them out.

On its Web site, Gallup makes clear that its poll seeks to
maximize daily change: "Our objective is to pick up movements
up and down in reaction to the day-to-day events of the
campaign." Gallup postulates that one in five voters is highly
malleable: "A sizeable portion of the voting population, upwards
of 20%, is uncommitted and on any given day as likely to come
down in favor of one candidate as the other." Gallup doesn't mind
that big shifts in the partisan makeup of each day's sample—one
day lots of Republicans, the next day lots of Democrats—push its
numbers back and forth. Gallup's editor in chief, Frank Newport,
says these partisan shifts reflect "differential intensity" between the
parties. One day, Republicans feel likely to vote; the next,
Democrats feel likely to vote. Accordingly, the pool of "likely
voters" shifts from Bush to Gore.

Other pollsters regard that kind of change as a distraction. They
want to hold some factors constant—including party
affiliation—so they can focus on variations in other factors.
"We're trying to measure movement within groups," says Ed
Goeas, the Republican pollster who oversees the Voter.com
survey. "If I see that white women have moved 10 points, I want
to see whether that was real movement"—as opposed to an
excess of Republican women in the first sample and an excess of
Democratic women in the second. Similarly, Washington Post
survey director Rich Morin writes that Gallup "may not be
tracking real changes in the electorate, but merely changes in
relative interest or enthusiasm of Republicans and Democrats."

Notice the clash of premises. Morin and Goeas use a hard model
of voting behavior. They assume that any changes in the
horse-race numbers (i.e., the percentage of respondents who plan
to vote for Bush or Gore) caused by changes in the partisan
makeup of the likely voter pool aren't "real." These pollsters treat
the distribution of Democratic and Republican voters in
presidential election turnout as a constant. When they see poll
results in which that distribution shifts back and forth like a
variable, they dismiss the data and fault the poll's methods. You
could argue that their hard model, with its fixed dichotomy of
constants and variables, is too rigid. But you could argue just as
easily that Gallup's soft model, which treats everything as a
variable—to the point of positing that uncommitted voters are "on
any given day as likely to come down in favor of one candidate as
the other"—is too mushy and chaotic. Which model is better?
The answer to that question isn't scientific. It's philosophical.

It's also practical. CNN and USA Today are in the news
business. They're paying Gallup for new numbers every day. If
Gallup's numbers don't change, where's the news? So Gallup has
an incentive to keep its filter loose, allowing the winds of shifting
partisan intensity to blow its numbers back and forth. Goeas, on
the other hand, is a professional campaign pollster—as is his
Democratic partner in the Voter.com survey, Celinda Lake.
They've designed their poll to get the kind of information a
candidate, as opposed to a news organization, would want.
Campaigns divide the electorate into demographic groups—union
households, white women, Midwestern Catholics—and target
their ads and messages to those groups. A campaign manager
needs to hold the distribution of these groups constant from day
to day so she can track movement within each group. Which poll
is correct? That depends on what you need the numbers for.

Here's another philosophical question: How many days do you
need to poll in order to understand public opinion? Gallup is
sampling 400 people every night. Since CNN and USA Today
want the numbers to keep changing, they report a rolling average
based on only the last three samples. If Gore was doing well three
nights ago, but Bush is doing well tonight, the pro-Gore sample
drops out of the three-night mix, the pro-Bush sample goes in,
and Bush gets a big bump. On its Web site, however, Gallup
reports a rolling average based on the last six samples. The
pro-Gore sample stays in the mix, diluting Bush's bump—and
conversely, tonight's pro-Bush sample stays in the mix five days
from now, diluting Gore's next bump. The result is a less exciting
series of smaller shifts. The three-day average tells you how
1,200 people feel right now. The six-day average tells you how
2,400 people feel over the course of a week. Which number
should you pay attention to? That depends on whether you want
the latest news or the big picture.
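
To make the windowing concrete, here is a rough sketch in Python. The
nightly margins below are invented for illustration, not Gallup's actual
numbers; the point is only that a three-night average swings harder than
a six-night one, because each night's 400 interviews make up a third of
the mix instead of a sixth.

# Hypothetical nightly Bush-minus-Gore margins, one per ~400-interview night.
# The values are invented; only the averaging mechanics are real.
nightly_margins = [-11, -8, 3, 9, 7, 2, -1, -4]

def rolling_average(values, window):
    """Average of the most recent `window` nightly readings, night by night."""
    return [sum(values[max(0, i - window + 1):i + 1]) / min(window, i + 1)
            for i in range(len(values))]

three_day = rolling_average(nightly_margins, 3)   # what CNN/USA Today report
six_day = rolling_average(nightly_margins, 6)     # what Gallup's Web site reports

for night, (m3, m6) in enumerate(zip(three_day, six_day), start=1):
    print(f"night {night}: 3-day average {m3:+5.1f}, 6-day average {m6:+5.1f}")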

The argument for the big picture is that it's a better predictor.
Presidential preference "is not a firmly held attitude," says Gallup's
Web site. "[T]here is no need for Americans to develop a firmly
held view on their vote until Nov. 7." Yet Gallup says its poll is
designed to clarify who would win the election if it were held
today. Its surge toward Bush after the first debate, for example,
suggests that "if the election were indeed held during the days
after the debate, Bush would have won, in large part because his
voters would be more likely to turn out to vote." But if
presidential preferences don't become "firmly held" until Election
Day, then it makes no sense to infer from today's numbers that
Bush would win "if the election were held today." The election
isn't being held today—and if it were, voters would have to
resolve their fluctuating feelings into firmly held views that might
not lead to the same conclusion.

Every pollster dreads statistical bias. But there are two kinds of
statistical bias: passive and active. Passive bias is what happens
when you don't balance your sample. If you live in a white
neighborhood and poll your neighbors, you don't get enough
black respondents. You have to take steps to make sure you
either 1) sample the proper percentage of blacks up front; or 2)
"weight" the number of blacks in your sample to reach the proper
percentage. For example, if you polled half as many blacks as
you should have, you double the weight of each black
respondent's answers, as though you had polled the correct
number.
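
A minimal Python sketch of that "double the weight" arithmetic; the 12%
target share, the 6% sample share, and the candidate preferences below
are all hypothetical:

# Illustrative post-stratification weighting. Every share here is invented;
# only the mechanics mirror the "polled half as many blacks" example above.
sample_size = 500
black_respondents = 30            # 6% of the sample -- half the target share
target_black_share = 0.12         # share we believe blacks should make up

actual_black_share = black_respondents / sample_size           # 0.06
black_weight = target_black_share / actual_black_share         # 2.0: answers count double
nonblack_weight = (1 - target_black_share) / (1 - actual_black_share)

# Suppose (hypothetically) 85% of black respondents and 40% of the rest back Gore.
gore_unweighted = (black_respondents * 0.85
                   + (sample_size - black_respondents) * 0.40) / sample_size
# The weights sum back to sample_size, so dividing by it is still valid.
gore_weighted = (black_respondents * black_weight * 0.85
                 + (sample_size - black_respondents) * nonblack_weight * 0.40) / sample_size

print(f"Gore share, unweighted: {gore_unweighted:.1%}")   # about 42.7%
print(f"Gore share, weighted:   {gore_weighted:.1%}")     # about 45.4%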

How do you determine the proper percentage? The least intrusive
way is to adjust each demographic group—women, Hispanics,
senior citizens—to census data. But what if you're polling likely
voters? Shouldn't you adjust the percentage of black respondents
in your sample to the percentage of blacks among voters who
actually turn out on Election Day? And how do you figure that
percentage? Do you look just at exit polls from the last election or
at precinct-by-precinct turnout figures? How many past elections
should you look at? How should you update those old figures to
take account of possible changes in this year's black turnout? And
what if you overestimate black turnout and assign too much
weight to black respondents in your poll? In that case, you've
replaced passive bias with active bias.

Gallup and the New York Times/CBS poll use minimal weighting,
based on the census. Goeas and Zogby, however, adjust their
filters and weights to match the turnouts they expect among
various demographic groups, based on past turnout, current voter
registration, and other factors. Polls whose weights and filters are
calibrated to reflect turnout, as opposed to just the census, tend
to favor senior citizens, well-educated people, whites, men, and
nonunion households. The weights alone can radically change the
final numbers. According to the Post, on one recent night Zogby's
weighting process shifted the results from a four-point Bush lead
to a four-point Gore lead.

To see how filters can affect survey results, look at the disclaimer
on the Post's own poll: "The Post and ABC News collect data
jointly but use somewhat different models to identify likely voters.
This can produce slightly different estimates of candidate
support." Sure enough, over the past week, ABC and the Post
have reported different results from the same tracking poll. Here
is a perfect controlled study: The raw data are the same, but the
pollsters differ, and therefore, so do the reported results.

The problem isn't ideological bias. Weighting can just as easily
shift the numbers the other way. The problem is that weights and
filters aren't part of the interviewing process. They precede and
succeed it. Whether you're filtered into or out of the poll and how
heavily your answers are weighted depend largely on the
pollster's theory of this year's turnout—and that theory isn't
reported alongside the numbers in tomorrow's newspaper. "Every
time you add a weight, you run the risk of skewing your internal
data. You're adding one more unknown," observes Goeas. So
which poll should you trust—the one that minimizes weights and
filters or the one that maximizes them? That depends on which
kind of bias worries you more.

The big debate about weighting this year concerns party
affiliation. Republicans are indicating they're more likely to vote
this year than in past years. Should pollsters believe them or stick
with the old turnout projections, which favor Democrats? Usually,
weighting protects the GOP. On his Web site, for example,
Zogby argues that his polls are more accurate because "we apply
weighting for party identification to ensure that there is no built-in
Democratic bias in our sampling." But New York Times survey
editor Mike Kagay agrees with Gallup poll editor Frank Newport
that party affiliation, unlike race or gender, is too vague and
changeable to measure or track reliably. So in addition to the
difference among pollsters over which kind of bias to err
against—active or passive—there's a philosophical disagreement
over whether party affiliation is more like a trait or like an opinion.
Good luck resolving that one.

There are plenty of other backstage quarrels among the pollsters.
Zogby dismays some colleagues by polling during the day. Goeas
dismays others by not polling on Fridays or Saturdays, which are
the hardest days to reach married voters with children. Whose
methods get the most accurate result? Even the election won't
settle that question. Every pollster has fudge factors he can apply
to massage his numbers at the last moment. He can raise or lower
his projected turnout. He can adjust his weighting coefficients, as
several pollsters have already done during this campaign. Twelve
years ago, when I worked at the Hotline, our final three-day
rolling average missed the election result—so we left an extra
night's sample in the mix and bragged about nailing the result with
our four-day rolling average. Being a clever pollster means never
having to say you're sorry.


Reader Comments from The Fray:

[Notes from the Fray Editor: Almost everyone loved this article:
hard (but not impossible) to find a dissenting word. Roger
Raphael's post could only be understood by attentive readers of
the Saletan oeuvre. Matt is concerned about the margin of error
with a 35% response rate. Greg makes an interesting point about
polling on Friday and Saturday nights ("so then why is the TV so
family-focused those nights if they're all out?"). And then there
was praise:]

A terrific piece. This confirms what I have long suspected: polling
differences are much like political differences; they usually stem
from what assumptions are initially made. Pollsters, however,
deliberately keep their assumptions somewhat mysterious,
whereas laypeople are often nearly completely unaware of what
their assumptions are, or are intentionally dishonest as to what
they are. A suggestion, Mr Saletan: call each of these oracles
the evening of Nov. 6, and force them, under threat of journalistic
ridicule, to clearly predict the winner, and by what margin. We
might then begin to discern who had the more realistic
assumptions.

--Will Allen


This is a wonderful article. It is indeed the very model of the right
way to view the polls. The existence of statistical competency in
a non-statistician almost restores my faith in reporting.

It's worth noting that the issues involved here hardly ever matter.
When conducting a test for a new shampoo, for example, if one
has done even a trivial amount of homework (don't interview bald
men for a grooming product, for example), one can safely
disregard most second-order effects. As I tell clients, if the effect
isn't obvious, all the statistics in the world won't improve your
understanding of what you should be doing. In contrast, the
current political campaign is almost the first example I've seen
where knowing something about polling is actually necessary to
read the polls. For once, the difference between 49% and 51%
matters. Have you any idea how big and stratified a sample you
need to detect a difference this small in a non-uniform market?
I'd love to know if anyone is actually
sampling at that size and sophistication. I'd love even more to
see the data.

--Alan Kornheiser

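One common back-of-envelope answer to Kornheiser's question, assuming a
simple random sample and setting aside the stratification, weighting, and
response-rate complications he raises: pinning one candidate's share down
to within a single percentage point at 95% confidence takes roughly 9,600
respondents, while a 400-person tracking night carries a margin of error
near five points.

import math

def required_sample_size(margin, p=0.5, z=1.96):
    """Respondents needed for a 95% margin of error of +/- `margin` on a share near p."""
    return math.ceil(z ** 2 * p * (1 - p) / margin ** 2)

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error on a share near p from a simple random sample of n."""
    return z * math.sqrt(p * (1 - p) / n)

print(required_sample_size(0.01))       # 9604 respondents for a +/-1 point reading
print(f"{margin_of_error(400):.1%}")    # ~4.9%: one 400-person tracking night
print(f"{margin_of_error(1200):.1%}")   # ~2.8%: a three-night rolling sample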

(10/27)

William Saletan is a Slate senior writer.



To: TigerPaw who wrote (3916), 10/29/2000 8:58:28 PM
From: puborectalis
 
In addition to his heart disease, which began at age 37, and his
cardiac bypass operations, Cheney also has been treated for skin
cancer and episodes of gout of the foot. Dr. Gary Malakoff of
George Washington University Medical Center, Cheney’s primary
care physician, also wrote a letter declaring Cheney to be in
“excellent health,” although adding that Cheney is on a “long list
of medications” which are monitored for his known medical
problems, including elevated cholesterol.



To: TigerPaw who wrote (3916), 10/29/2000 9:11:51 PM
From: Hawkmoon
 
Both cases were good men who brought nothing to the ticket.

Hmmm.... sounds like you spent many days of your misspent youth smoking the same ganja that Algor Mortis did.

Anyone who claims Cheney brings nothing to the ticket is clearly diagonally parked in a parallel universe. Here's his biography:

rnc.org

..."His career in public service began in 1969 when he joined the Nixon Administration, serving in a number of positions at the Cost of Living Council, the Office of Economic Opportunity, and within the White House.

When Gerald Ford assumed the Presidency in August 1974, Mr. Cheney served on the transition team and later as Deputy Assistant to the President. In November 1975, he was named Assistant to the President and White House Chief of Staff, a position he held throughout the remainder of the Ford Administration.

After he returned to his home state of Wyoming in 1977, Mr. Cheney was elected to serve as the state’s sole Congressman in the U.S. House of Representatives. He was re-elected five times and elected by his colleagues to serve as Chairman of the Republican Policy Committee from 1981-1987. He was elected Chairman of the House Republican Conference in 1987 and elected House Minority Whip in 1988. During his tenure in the House, Secretary Cheney earned a reputation as a man of knowledge, character and accessibility.

Cheney also served a crucial role when America needed him most. As Secretary of Defense from March 1989 to January 1993, Mr. Cheney directed two of the largest military campaigns in recent history – Operation Just Cause in Panama and Operation Desert Storm in the Middle East. He was responsible for shaping the future of the U.S. military in an age of profound and rapid change as the Cold War ended. For his leadership in the Gulf War, Secretary Cheney was awarded the Presidential Medal of Freedom by President George Bush on July 3, 1991".

**************

Now if you want to claim that Cheney doesn't bring any votes to the ticket, then I agree with you and stated as much in a previous post today.

But to try to spread a bunch of BS about Cheney not bringing anything to the ticket is blatant denial.

Kinda like denying that Bush has a better educational record than Algor Mortis, who dropped out of two post-grad schools in order to spend his days getting high smoking "wacky tobaccy".