Politics : Politics for Pros- moderated

From: LindyBill — 10/30/2006 5:14:24 PM
 
Primary vote a useful lesson for pollsters

01:00 AM EDT on Sunday, October 8, 2006

BY MARK ARSENAULT
Providence Journal Staff Writer

In late August, the veteran Rhode Island College pollster Victor Profughi was faced with survey results in the Republican primary that defied conventional wisdom: his poll said Cranston Mayor Stephen Laffey had a huge lead on U.S. Sen. Lincoln Chafee.

“I couldn’t believe the results when they were given to me any more than anyone else could,” Profughi said recently. “We went back and double-checked everything. Everything was done by the book.”

He felt obligated to publish the results: Laffey was up, he said, by a staggering 17 points, 51 percent to 34, over the incumbent, in what most people believed was a very tight race. Chafee supporters at the National Republican Senatorial Committee immediately released numbers from their own poll that suggested Chafee was the one with the huge lead: 53 percent to 39.

Two weeks later, Chafee beat Laffey by 8 points, 54 to 46, in a contest that smashed the turnout record for a GOP primary in Rhode Island.

“We were no further off than the guys doing the work for the Republicans, only they were on the other side,” Profughi said. In the 70 years since George Gallup refined the art of polling, pollsters have become ever more skilled, and even a bad poll contributes to the art by exposing mistakes to be avoided in the future.

Chafee now faces Democrat Sheldon Whitehouse, the former attorney general, in the November election. With wide national interest in the race, Rhode Islanders are being bombarded by polls: The national pollster Rasmussen Reports said two weeks ago that Whitehouse was ahead by eight points. Two recent polls, by Brown University and by Mason-Dixon, have the race essentially tied. A Zogby poll last week put Whitehouse ahead by four points. Then on Friday, a new USA Today/Gallup poll suggested Whitehouse led by 11.

In evaluating surveys, says longtime Rhode Island pollster Joseph Fleming, it’s critical to remember that a poll “is just a snapshot of the current moment.”

“People’s opinions are changing,” Fleming said. “You try to look for trends in the polling, see which way the trend is moving.”

The theory behind public-opinion polling is to identify a sample of several hundred people who are representative of the population, and to call those people and ask who they’re voting for.

“It’s important to understand that even though we’re working with relatively small samples, those samples are drawn following laws of mathematics and science so we can project pretty accurately most of the time,” said Profughi. “It’s not just 400 people. It’s 400 that have been randomly chosen where every voter has an equal chance of getting selected.”

From those results, a pollster can predict the leanings of the full electorate, generally within a few percentage points. The trick is finding a sample that accurately represents the voting population. The survey results published before the Chafee-Laffey primary show why so many pollsters stayed away from that race.
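The sampling principle Profughi describes — 400 respondents, each voter with an equal chance of selection — is a simple random sample. A minimal sketch in Python, using a hypothetical electorate of voter IDs (the numbers here are illustrative, not from the article):

```python
import random

# Hypothetical electorate: 100,000 registered voters, identified by ID.
electorate = list(range(100_000))

# Simple random sample without replacement: every voter has an
# equal probability of being selected, and no voter is drawn twice.
random.seed(0)  # fixed seed only for reproducibility of this sketch
sample = random.sample(electorate, 400)

print(len(sample))        # 400 respondents
print(len(set(sample)))   # 400 distinct voters
```

The point of the randomness is that the sample's composition mirrors the electorate's in expectation, which is what lets a pollster project from 400 interviews to the full voting population.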

“Primary samples are very difficult to draw,” said Fleming, who is conducting surveys for Channel 12 (WPRI). “That’s why I didn’t want to do Republican primary polling.… In a regular sample you’re going to get more voters in Providence and Cranston and Warwick and Pawtucket, but in a Republican primary you’re going to get a bigger turnout in East Greenwich than a city like Pawtucket. You have to weigh all these things, and it makes it very difficult to do a representative sample.”

Rhode Island had never had a race like Chafee-Laffey, so nobody really knew who was going to vote in that primary. How many voters would turn out? And from which communities? How many independents would vote?

In creating an accurate poll, “The most important thing is the sample selection,” agreed Brown University professor and pollster Darrell West. To contact voters, West uses “random-digit dialing, so everybody in the state has an equal probability of being included,” he said. “It doesn’t matter if you have a listed or unlisted number.”

People who don’t vote are no help in political surveys; on Election Day their opinions don’t matter. So pollsters screen the people they call to determine whether they are likely to vote. West includes two screening questions in his survey interviews: Are you registered to vote? (People who are not registered are eliminated from the survey.) How likely are you to vote in the November election? Very likely? Somewhat likely? Or not at all likely? (Only people who answer “very likely” are included.)

The pollster may predict voter turnout to determine how many voters from each region to include in the survey. "We have voter history for turnout," Profughi said. "Well over 95 percent of the time, you can use the historical pattern of voter contribution from election to election because it doesn't change very much."

West said he compares the “demographic characteristics” of the sample “to our own past surveys, to make sure they are consistent, as well as to the census. You have to monitor during the course of doing the survey so that you end up with a sample that matches the characteristics of the state.”

Pollsters have long known about several biases that can affect a sample. First, random surveys tend to reach a greater percentage of women than men. “You’ll get more women answering the phone in a lot of households, and you also have female-headed households that are growing in number,” West said. “Another known bias is weighted toward more-educated people. They are more likely to participate in surveys than people of lower education. But that bias isn’t very problematic for surveys because voting is also skewed in favor of more highly educated people.”
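The standard correction for the biases West describes is to weight the sample so each group's share matches its known share of the electorate (post-stratification). A sketch with hypothetical numbers — the 60/40 gender split in the raw sample and the 52/48 target are illustrative, not figures from the article:

```python
def poststratify(sample_counts, population_shares):
    """Compute one weight per group so that weighted group totals
    match the known population shares.
    sample_counts: respondents reached per group.
    population_shares: each group's true fraction of the electorate."""
    n = sum(sample_counts.values())
    return {g: population_shares[g] * n / sample_counts[g]
            for g in sample_counts}

# Hypothetical: the survey over-reached women (240 of 400 respondents),
# but women are 52 percent of the electorate.
weights = poststratify({"women": 240, "men": 160},
                       {"women": 0.52, "men": 0.48})
# Each woman counts slightly less than one respondent,
# each man slightly more, so the weighted sample matches the electorate.
```

Weighted this way, the sample's gender composition matches the population even though the raw calls did not.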

Cell phones and caller ID are new barriers to pollsters. More people, especially younger voters, are abandoning landlines for cell phones that cannot be random-digit dialed due to federal rules – because consumers have to pay for air time on incoming calls. And caller ID makes it easier for people to avoid pollsters. To adjust, pollsters have to make more calls and carefully watch the demographics of the sample.

Credible pollsters always publish a “margin of error” for their surveys, which is a measure of the accuracy of the sample. The larger the sample, the smaller the margin of error.

For example, “In our last survey we had 578 likely voters,” said West, “so that would give you a margin of error of plus or minus about four percentage points.”

Margin of error is frequently misinterpreted, West said. "Margin of error means either number could be plus or minus four percentage points." So a survey that shows a race tied at 40 percent for each candidate means that either candidate's support could be as low as 36 or as high as 44, based on a margin of 4 percent. "The concept of margin of error basically means 95 percent of the time the results are going to be within that range of plus or minus 4 percent," West said.
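The arithmetic behind West's figure is the standard formula for a 95 percent confidence interval on a proportion; a short sketch (the 1.96 z-score and the worst-case p = 0.5 are the usual textbook assumptions, not details from the article):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a proportion estimated from a simple
    random sample of size n. p = 0.5 is the worst (widest) case."""
    return z * math.sqrt(p * (1 - p) / n)

# West's sample of 578 likely voters:
moe = margin_of_error(578)
print(f"+/- {moe * 100:.1f} points")  # +/- 4.1 points
```

This also shows why larger samples shrink the margin only slowly: the error falls with the square root of n, so halving it requires quadrupling the sample.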

Sometimes, polls miss outside of the margin. The national pollster John Zogby said that lopsided polls can skew the media coverage of a campaign and make it more difficult for the trailing candidate to raise money and generate news coverage. "Does this mean that preelection polls actually affect voter turnout and/or the results?" Zogby wrote in an article on a USINFO Web page maintained by the U.S. State Department. "Generally the short answer is no." Speaking about national politics, he said there is "no clear evidence showing that any candidate in a competitive race ever lost because of preelection polls showing him behind."

In hindsight, Profughi thinks he knows what happened with his Laffey-Chafee survey. “On the Laffey side, we were high but I don’t think out of the ballpark,” he said. His poll overstated Laffey’s vote total by five percentage points, which was exactly the poll’s margin of error.

“Where we really missed the boat was on Chafee,” he said. In a postmortem analysis, Profughi discovered an “extraordinary” number of people who declined to participate in the poll because they had apparently been repeatedly called by the Chafee campaign’s get-out-the-vote effort.

“They wouldn’t give us any answers. They terminated quick. What I think was happening was that Chafee supporters we normally would have been able to reach were being contacted [so frequently] by the Chafee camp, which was exactly what Chafee should have been doing. But when these voters got one more call they said, ‘Hey that’s it, buster.’ ”

With so many refusals, Profughi had to make more calls. “We continued to call and hit more Laffey people who had not been contacted nearly as frequently. And that went a long way in explaining what happened in underestimating Chafee and to some degree, being over on Laffey.”

“Now in retrospect,” he said, “I have learned from that.”