Sunday, May 17, 2009

WaPo's Lame Reasons For Not Reporting Va. Polls

Yesterday's Washington Post carried a defense of its decision not to report on a trio of recent polls on the Democratic primary for Virginia governor. See "Polls In Va. Governor's Race Haven't Met Post's Standards."

Aside from the irony of actually reporting the results of the very polls it says it isn't reporting, the Post's defense was pretty lame.

The Post haughtily told its readers that "[o]ur responsibility is to scrutinize the data we report as carefully as we do the sources we quote in our stories." Sadly, in the Post's view, "[n]one of the recent polls in the Virginia governor's race meet our current criteria for reporting polls."

You see, "the basic test of any poll is how reliably the opinions of those surveyed reflect those of everyone in the population of people that interests us, in this case prospective voters" in the primary.

That all sounds nice, but the Post offered only a couple of reasons for excluding two independent polls that came out recently. First, the Post says it is very difficult to estimate "how many and what types of people might vote" a month out from the primary, given the potentially low turnout. While it's true that turnout is difficult to forecast, this excuse would knock out ANY poll. Most polls get at the question by asking respondents whether they are likely to vote in the election of interest, and you can usually get at least a good idea of how those "likely" voters will cast their ballots.

In short, reason one is really no reason at all. It's an inherent difficulty in ALL elections.
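For the curious, here is a minimal sketch of what such a likely-voter screen amounts to, written in Python with made-up respondent data and hypothetical field names: filter the sample down to people who say they are likely to vote, then tabulate their stated preferences.

from collections import Counter

# Hypothetical survey responses (not from any actual poll).
respondents = [
    {"likely": "very likely", "choice": "McAuliffe"},
    {"likely": "somewhat likely", "choice": "Moran"},
    {"likely": "not likely", "choice": "McAuliffe"},
    {"likely": "very likely", "choice": "undecided"},
    {"likely": "somewhat likely", "choice": "McAuliffe"},
]

# Keep only self-described likely voters, then tabulate their preferences.
likely_voters = [r for r in respondents if r["likely"] in ("very likely", "somewhat likely")]
counts = Counter(r["choice"] for r in likely_voters)

for choice, n in counts.most_common():
    print(f"{choice}: {n / len(likely_voters):.0%} of likely voters")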

The second reason the Post offered up is that the two independent polls were "robopolls." In other words, they were automated telephone polls, in which respondents are asked by a recorded voice to press numbers on their telephones to indicate their preferences.

The Post shouldn't be excluding all robopolls. At least one of the "robopolls" that the Post failed to report on was conducted by Public Policy Polling in North Carolina. In the last election cycle, PPP had an excellent record in forecasting votes in the Democratic presidential primaries and in the general election.

There is no real evidence that a good robopoll is any less accurate than one conducted by a human. The Post--as newspapers like to do--simply cites anecdotal evidence on the subject by quoting "experts" on both sides of the issue. So what?

The Post complains that robopolls tend to be shorter than ones conducted by humans. While true, that does not necessarily make them less accurate. If anything, the sub-questions in longer polls, broken down by various demographic groups, can often be quite misleading because there are so few respondents in each "cell" of the poll that the margin of error is off the charts.
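A quick back-of-the-envelope calculation shows why. Using the standard 95%-confidence margin-of-error formula for a proportion, with some illustrative sample sizes that are assumptions rather than figures from any particular poll, the error band for a small demographic cell dwarfs the one for the full sample.

import math

# Rough 95% margin of error for a simple random sample of size n,
# using the worst-case formula MOE = 1.96 * sqrt(p * (1 - p) / n) with p = 0.5.
def margin_of_error(n, p=0.5, z=1.96):
    return z * math.sqrt(p * (1 - p) / n)

# Full sample versus progressively smaller demographic "cells" (illustrative sizes).
for n in (800, 100, 40):
    print(f"n = {n:4d}: +/- {margin_of_error(n) * 100:.1f} points")
# n =  800: +/- 3.5 points
# n =  100: +/- 9.8 points
# n =   40: +/- 15.5 points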

One of the advantages of the PPP robopolls, ignored by the Post, is that PPP can conduct polls with the same relatively simple questions on a fairly regular basis leading up to an election and develop a trend line.

PPP has done just that in Virginia this cycle, conducting the same poll each month in Virginia since January. The results square with what's been going on: the number of undecided voters, while still high, has steadily declined, from 53% in January to 46% in February, 45% in March and 36% in early May.
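As a rough illustration of what that trend line looks like, here is a simple least-squares fit through the undecided numbers quoted above, treating the four polls as roughly evenly spaced in time (an approximation, since the listed surveys skip from March to early May).

# Least-squares trend through the PPP undecided numbers quoted above.
undecided = [53, 46, 45, 36]  # Jan, Feb, Mar, early May (percent undecided)
n = len(undecided)
xs = range(n)  # poll index, treated as evenly spaced in time

x_mean = sum(xs) / n
y_mean = sum(undecided) / n
num = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, undecided))
den = sum((x - x_mean) ** 2 for x in xs)
slope = num / den

print(f"Undecided share falling roughly {abs(slope):.1f} points per poll")
# Undecided share falling roughly 5.2 points per poll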

As the undecideds have gone down, all the candidates have gone up, but McAuliffe has gone up faster. That makes intuitive sense--McAuliffe entered the contest later and was more of a wild card, and he's been spending more money, which would account for a surge as undecideds start to formulate their preferences. The last PPP poll, taken at the beginning of May, still showed more than a third of voters undecided, and many others indicating "soft" support for their candidates (i.e., saying they could change their minds).

The last PPP poll was also quite consistent with another robopoll, from Survey USA, taken a week earlier. Survey USA also had a pretty good track record in the last election cycle. Consistency across different polls is usually a sign that the numbers they are reporting have some reliability.

So, our bottom line on these independent robopolls is that they should be reported by the Post and other media, albeit with appropriate caveats.

On the other hand, the Post was correct to omit reporting on an internal poll conducted by the Moran campaign and then released after the independent polls showed Moran behind McAuliffe by as much as 10 points. We agree that internal campaign polls should be taken with a huge grain of salt. Campaigns only release what they want, and a campaign pollster can easily manipulate the results (for example, Moran's pollster could overload his poll with Alexandria residents to skew the result for public consumption).

We're not saying the independent polls that the Post overlooked will necessarily be right, or reliable. The Post is certainly free to point out the limitations of such polls when it reports them. (Heck, we wish the Post would point out how poor hurricane predictions have been in the past when it, inevitably, carries some story with the latest prediction from some group of purported experts.)

But the Post's reasons for excluding them altogether, as not meeting the Post's "standards," really don't pass muster.

1 comment:

Anonymous said...

To underscore the efficacy of automated polling methodology, I refer interested parties to a study conducted by a group of Canadian academics from Toronto, published in the October 1994 issue of the American Marketing Association's monthly magazine. The study compared live interviewing with automated polling methodologies and found that respondents had more confidence in the confidentiality of their responses with an automated format. Another of the study's conclusions was that a significant advantage of automated technology is that every respondent hears the same question asked in exactly the same way, mitigating the interviewer bias inherent in live surveys that use multiple questioners.
For the past 15 years I have been the CEO of TeleResearch Corporation in Indiana. We do not believe people who answer polls are liars. With a consistent random telephone number selection methodology, we have forecast the winners in 64 of the last 65 elections we have surveyed. When properly and professionally used, automated polling methodology has a place in contemporary political research.

Jeffrey Lewis, CEO