Press "Enter" to skip to content

Notes . . .


Got a call today about a column from a few weeks back relating to a polling result in an Idaho contest. The point of the call was that there’s reason to think the poll result was flawed.

Which may be fair enough. No one polling approach is perfect, and some are more flawed than others. The best approach in analyzing them is to compare and contrast, and maybe average, results from a bunch of polls. That presents a problem in a place like Idaho, where not a lot of polls are conducted, and many of those that are will be done for private parties. (Beware of putting too much certainty into private polls.)

This may be grist for a column . . .

But in mulling over the subject, I spotted a new Nate Silver article on polling, always worth a review, pointing out the wide disparity in pollster results in the upcoming Alabama Senate race. Recent polls have shown everything from a Roy Moore win by nine points to a Doug Jones win by 10.

Most illuminating, though, is an online poll done by the company Survey Monkey, which actually shows that full range of prospective results using the same set of information – the same data set. Political pollsters generally don’t report the data they receive unfiltered; usually they weight it so the response base matches the local demographics and political leanings. It usually works, sort of.

But the Survey Monkey results show just how much the “polling results” vary depending on what kind of assumptions you attach to the data. If you use standard demographic weights and count responses from all registered voters who say they will certainly or probably vote, then Jones is ahead by nine points. If you use a standard set of demographic weights filtered through the 2016 results, and count only people who voted in 2014 (with newcomer certain voters added), then Moore wins by 10 points.
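To make that concrete, here is a minimal sketch with entirely made-up responses and two hypothetical likely-voter models; these are not Survey Monkey’s actual data or methodology, and real pollsters also apply demographic weights, which this sketch leaves out for simplicity. The point is only that the same raw answers can flip from one candidate to the other depending on which turnout filter is applied.

# Illustrative sketch only: hypothetical numbers, not Survey Monkey's data.
# Each tuple: (candidate preference, self-reported turnout, voted in 2014?)
responses = [
    ("Jones", "certain", False),   # e.g. newly energized voters
    ("Jones", "probable", False),
    ("Jones", "probable", False),
    ("Jones", "certain", True),
    ("Moore", "certain", True),
    ("Moore", "certain", True),
    ("Moore", "probable", True),
    ("Jones", "probable", False),
]

def margin(filtered):
    """Jones's lead, in percentage points, among the filtered responses."""
    jones = sum(1 for c, *_ in filtered if c == "Jones")
    moore = sum(1 for c, *_ in filtered if c == "Moore")
    total = jones + moore
    return 100 * (jones - moore) / total if total else 0.0

# Model A: everyone who says they will certainly or probably vote.
model_a = [r for r in responses if r[1] in ("certain", "probable")]

# Model B: only 2014 voters, plus newcomers who say they are certain to vote.
model_b = [r for r in responses if r[2] or r[1] == "certain"]

print(f"Model A (all certain/probable voters): Jones margin {margin(model_a):+.0f} pts")
print(f"Model B (2014 voters + certain newcomers): Jones margin {margin(model_b):+.0f} pts")

With these invented responses, Model A shows Jones ahead and Model B shows Moore ahead, even though not a single answer changed; only the assumption about who actually turns out did.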

So what happens tomorrow? Hey, no predictions here . . . –rs
 
