I don't believe that Gallup games every poll they do. I do believe that they game strategically. And that is what that 5 point advantage to Romney was. A strategic game.
Here is the rest of the article, which explains that Rasmussen actually has a 3 point house effect in an apples-to-apples comparison, using Silver's methodology. I haven't studied his methodology, however, and am not sure I agree with it. I do know that when I see pollsters like Gravis or Foster McCollum (whom he doesn't mention here) consistently showing up with outlier results skewed toward Republicans, I tend to disregard them, especially when I see who the principals are. However, Gallup is a respected name in polling, and I am sure that they are careful in how they do and report their results.
Since Rasmussen Reports is one of the few polling firms to be surveying likely voters, adjusting other polls to a likely-voter basis tends to bring the other polling firms closer in line with it. Without that likely-voter adjustment, Rasmussen Reports would have roughly a three point Republican-leaning house effect.
The philosophy of the model is simply to strip most of the house effect out of the poll. So a Public Policy Polling survey that showed Barack Obama ahead by seven points in Colorado would be treated as more like a four point lead for Mr. Obama once its house effect is accounted for.
(As a more technical matter, the model does allow the polling firm to retain some of its house effect. Since calculating the house effect is subject to error – anomalous results may reflect sampling error rather than anything systematic – it reverts the estimates to the mean somewhat. But the adjustment is fairly aggressive, especially if the house effect is large or if the polling firm has released a large amount of data.)
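To make that mean-reversion idea concrete, here is a rough sketch in Python; the shrinkage rule and the prior_strength parameter are my own illustrative assumptions, not FiveThirtyEight's actual formula.

```python
# Illustrative sketch of shrinking a raw house-effect estimate toward zero.
# The shrinkage rule below is an assumption for demonstration, not
# FiveThirtyEight's actual parameterization.

def shrunk_house_effect(raw_effect: float, n_polls: int, prior_strength: float = 5.0) -> float:
    """Revert the estimated house effect toward 0 (the consensus).

    The more polls a firm has released, the more we trust its raw
    estimate; with few polls, the apparent lean is mostly sampling
    noise and gets pulled strongly toward zero.
    """
    weight = n_polls / (n_polls + prior_strength)
    return weight * raw_effect

# A firm with only 3 polls keeps little of its apparent +4 lean...
print(shrunk_house_effect(4.0, n_polls=3))    # 1.5
# ...while one with 40 polls keeps most of it.
print(shrunk_house_effect(4.0, n_polls=40))   # ~3.6
```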
The house effect adjustment is calculated by applying a regression analysis that compares the results of different polling firms’ surveys in the same states. For instance, if Marist comes out with a survey that shows Barack Obama ahead by four points in Ohio, and Quinnipiac has one that shows him ahead by one point instead, that is evidence that Marist’s polls are 3 points more Democratic-leaning than Quinnipiac’s.
The regression analysis makes these comparisons across all combinations of polling firms and states, and comes up with an overall estimate of the house effect as a result. National polls are treated as a ‘state’ and are used in the calculation. The calculation accounts for changes in the national polling trendline over time, and so ideally will reflect true differences in methodology rather than just accidents of timing.
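A toy version of that regression might look like the following; the poll numbers, the choice of Marist as the zero-point reference firm, and the omission of the time-trend adjustment are all simplifications for illustration, not Silver's actual specification.

```python
# Each poll's margin is modeled as (true state margin) + (firm house effect),
# and both sets of parameters are estimated jointly by least squares.
import numpy as np

polls = [  # (firm, state, Obama margin in points) -- invented numbers
    ("Marist",     "OH", +4.0),
    ("Quinnipiac", "OH", +1.0),
    ("Marist",     "FL", +2.0),
    ("Quinnipiac", "FL", -1.0),
    ("PPP",        "FL", +2.0),
    ("PPP",        "OH", +5.0),
]

firms  = sorted({f for f, _, _ in polls})
states = sorted({s for _, s, _ in polls})

# Design matrix: one column per state (its underlying margin) plus one column
# per firm except the first, which serves as the zero-effect reference.
X = np.zeros((len(polls), len(states) + len(firms) - 1))
y = np.array([m for _, _, m in polls])
for i, (f, s, _) in enumerate(polls):
    X[i, states.index(s)] = 1.0
    if firms.index(f) > 0:
        X[i, len(states) + firms.index(f) - 1] = 1.0

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
state_margins = dict(zip(states, beta[: len(states)]))
house_effects = {firms[0]: 0.0, **dict(zip(firms[1:], beta[len(states):]))}
print(state_margins)   # estimated underlying margin in each state
print(house_effects)   # estimated lean of each firm relative to the reference
```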
The simpler method of taking an average of the polls in each state will certainly go some way toward reducing house effects. However, the mixture of the polling firms that are active in each state is not always the same, and sometimes the Republican-leaning and Democratic-leaning polls do not cancel out. For instance, most of the polls released in Michigan recently are by firms like Rasmussen Reports and We Ask America with Republican-leaning house effects, while most of those released in Washington State are by firms like Public Policy Polling and SurveyUSA that have had Democratic-leaning ones.
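Here is a small invented example of how an unbalanced mix of pollsters skews a plain average; the margins and house effects are made up.

```python
# Toy illustration of why a plain average does not wash out house effects
# when the pollster mix in a state leans one way. All numbers are invented.
reported_margins = {"FirmA": 1.0, "FirmB": 2.0, "FirmC": 4.0}    # Obama margin
house_effects    = {"FirmA": -2.0, "FirmB": -2.0, "FirmC": 0.0}  # negative = R-leaning

simple_avg   = sum(reported_margins.values()) / len(reported_margins)
adjusted_avg = sum(m - house_effects[f] for f, m in reported_margins.items()) / len(reported_margins)
print(round(simple_avg, 1))    # 2.3 -- pulled down by the two R-leaning firms
print(round(adjusted_avg, 1))  # 3.7 -- once their lean is stripped out
```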
I have left one question unanswered, however. We might say that Public Policy Polling has a three percentage point Democratic house effect. But that is three points as compared with what, exactly?
The basis for the comparison is a weighted average of the polls, with heavy emphasis given to a firm’s pollster rating in calculating its say in the consensus. Polling firms that we believe to produce more reliable results – based on their past performance and their methodological standards – have a lot more say in the calculation.
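A minimal sketch of such a reliability-weighted consensus, with made-up ratings standing in for the actual pollster ratings:

```python
# Weighted average in which firms judged more reliable get more say.
# The ratings and the simple weighting rule are assumptions for illustration.
polls = [  # (firm, Obama margin, rating between 0 and 1; higher = more reliable)
    ("FirmA", 3.0, 0.9),
    ("FirmB", 7.0, 0.4),
    ("FirmC", 4.0, 0.8),
]
consensus = sum(margin * rating for _, margin, rating in polls) / sum(r for _, _, r in polls)
print(round(consensus, 2))  # ~4.14 -- pulled toward the higher-rated firms' numbers
```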
In addition, in a new wrinkle in the model this year, the consensus is estimated solely from polling firms that include cellphones in their sample. More and more polling firms are including cellphones, including almost all of the major news organizations. But some still do not, and this is an increasingly unacceptable practice. About one-third of American households do not have landlines at all, while another one-sixth have landlines but rarely or never accept calls on them.
What this means is that polling firms that are not including cellphones are missing somewhere between one-third and one-half of the American population. That really stretches the definition of a scientific survey. There is reasonably persuasive evidence that this can bias results. Polling firms can try to compensate for the problem by applying demographic weights, but this entails making a lot of assumptions that may introduce other types of bias and error.
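For illustration, the demographic weighting mentioned above often amounts to something like the following post-stratification sketch; the population shares and age groups here are assumptions, not real survey data.

```python
# Upweight the groups a landline-only sample under-reaches, so the weighted
# sample matches the population on age. Numbers are illustrative assumptions.
population_share = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}
sample_share     = {"18-34": 0.10, "35-54": 0.35, "55+": 0.55}  # landline skews older

weights = {g: population_share[g] / sample_share[g] for g in population_share}
print(weights)  # {'18-34': 3.0, '35-54': 1.0, '55+': ~0.64}
# The trouble: the few young respondents who *do* answer a landline may not
# resemble young cellphone-only voters, so large weights can add bias and variance.
```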
There has been a modest tendency this year for polling firms that do not include cellphones to show more favorable results for Mr. Romney than those that do include cellphones. The more powerful pattern, however, may simply be that polling firms that do not include cellphones are producing more erratic results in one direction or another as their weighting algorithms try and sometimes struggle to compensate for their failure to collect a random sample. We are not quite at the point of completely throwing these results out (although that may be a defensible position). But we do calculate the consensus based solely on the polls that do include cellphones. Therefore the house effect adjustment includes an implicit cellphone adjustment.