A couple of days ago I was watching the local news when the anchor said that McCain was now leading Obama in Colorado 44% to 42%. I squinted to see the fine print on the screen, which revealed a margin of error of ± 3%. Now I don’t expect journalists to be math wizards, but a spread of 2% with a margin of error of 3% means it’s impossible to tell who’s ahead.
The margin of error defines what is referred to mathematically as a confidence interval. If they say a candidate has 44% of the vote ± 3%, one can say that the candidate enjoys somewhere between 41% and 47% of the vote. But you can’t say for sure where in that interval. There’s more. Every confidence interval comes with a confidence level, typically 95% for political polls. Roughly speaking, this means that if the poll were run over and over again, about 95% of the intervals it produced would contain the candidate’s true support.
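For the mathematically curious, here’s a quick way to see what that 95% buys you. This is a hypothetical Python simulation, not a real poll: the 44% “true support” and the 1,000-respondent poll size are made-up numbers for illustration.

```python
import random

# Pretend we know the candidate's true support is 44%. Run many
# simulated polls of 1,000 respondents each, build a 95% confidence
# interval around each poll's result, and count how often the
# interval actually contains the truth. It should be close to 95%.
random.seed(1)
TRUE_SUPPORT = 0.44
N = 1_000        # respondents per simulated poll
POLLS = 3_000    # number of simulated polls

covered = 0
for _ in range(POLLS):
    votes = sum(random.random() < TRUE_SUPPORT for _ in range(N))
    p_hat = votes / N
    moe = 1.96 * (p_hat * (1 - p_hat) / N) ** 0.5  # 95% margin of error
    if p_hat - moe <= TRUE_SUPPORT <= p_hat + moe:
        covered += 1

print(f"interval contained the true support in {covered / POLLS:.1%} of polls")
```

About 5% of the time the poll misses entirely, and nobody can tell you which polls those were.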
The question now becomes: how many people do you poll to achieve some particular margin of error? I won’t go into the details of the central limit theorem here, which is where the number comes from, but suffice to say, the more people you poll, the smaller the margin of error. The good news is that a relatively small number of people polled will give you a good margin of error. The bad news is that the central limit theorem imposes some conditions that are difficult to satisfy in real-world polling.
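To make “how many people” concrete, here’s a back-of-the-envelope sketch using the standard sample-size formula, n = (z/MoE)² · p(1 − p), with the worst case p = 0.5. This assumes a simple random sample, which, as noted, is exactly the hard part.

```python
def sample_size(margin, z=1.96, p=0.5):
    """Respondents needed for a given margin of error at a 95%
    confidence level (z = 1.96), assuming simple random sampling.
    p = 0.5 is the worst case: it maximizes the variance p * (1 - p).
    Rounded to the nearest whole respondent."""
    return round((z / margin) ** 2 * p * (1 - p))

for moe in (0.05, 0.04, 0.03, 0.02):
    print(f"±{moe:.0%} margin of error -> about {sample_size(moe):,} respondents")
```

Note the diminishing returns: cutting the margin of error in half takes four times as many respondents, which is why roughly a thousand people (±3%) is the sweet spot for most national polls.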
The margin of error is the only type of polling error that can be quantified, and consequently it’s the only error reported. However, there are other types of error in polling. Coverage error, measurement error, and non-response error are also considered by professional pollsters. Coverage error refers to not being able to contact you: you may not have a land line, you may be on vacation, or you may be stationed in Iraq. Measurement error has to do with how the survey is presented: wording, question order, response options, mistakes by the interviewer, the interviewee jerking the interviewer’s chain, etc. Non-response error comes from people who let the answering machine pick up, have caller ID, or simply tell the interviewer “no.”
Consequently the typical poll has a biased sample of the voting population. Sophisticated pollsters have ways of weighting the results to compensate, but these methods can only be applied if the biases are known. That’s why questions about age, gender, political affiliation, education, and so on are important.
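Here’s a toy illustration of the idea, with entirely made-up numbers (real pollsters use far more sophisticated schemes): suppose a ten-person sample skews old relative to the electorate. Give each respondent a weight equal to their group’s share of the electorate divided by the group’s share of the sample.

```python
from collections import Counter

# Hypothetical ten-person sample: (age group, supports the candidate).
# The sample deliberately over-represents the 65+ group.
sample = [
    ("18-34", True),  ("18-34", False),
    ("35-64", True),  ("35-64", True),  ("35-64", False),
    ("65+", True),    ("65+", False),   ("65+", False),
    ("65+", False),   ("65+", False),
]

# Assumed shares of the actual electorate by age group (made up).
population_share = {"18-34": 0.30, "35-64": 0.45, "65+": 0.25}

n = len(sample)
counts = Counter(group for group, _ in sample)
# weight = (group's share of electorate) / (group's share of sample)
weight = {g: population_share[g] / (counts[g] / n) for g in counts}

raw = sum(supports for _, supports in sample) / n
weighted = (sum(weight[g] * supports for g, supports in sample)
            / sum(weight[g] for g, _ in sample))
print(f"raw support: {raw:.0%}, weighted support: {weighted:.0%}")
```

The raw sample says 40%, but once the over-sampled 65+ group is down-weighted the estimate moves to 50%. The catch: the weights can only be computed if the demographic questions were asked in the first place.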
Presidential polls that report national sentiment are fun, but they don’t tell the correct story, since we don’t elect presidents by popular vote. And timing is everything. Political junkies find this hard to believe, but a significant percentage of the electorate won’t pay any attention until after the conventions. Consequently the poll results may suffer from the “I dunno” factor.
So for the time being, poll results reported by the press are pretty much a parlor game. Campaigns have much more sophisticated, targeted polls, since these serve as information about how to spend their resources. But they ain’t talkin’.