Things that make you look dumb: Citing bad poll data
July 4, 2010
Even under the best of circumstances it is difficult to collect representative data from a limited sampling of the population. Legitimate organizations make an honest attempt, but there are factions that, through ignorance or by design, so fully bastardize the process that the resulting information is less than meaningful. We must be critical thinkers when it comes to evaluating polling information and question the accuracy, representativeness and bias of the poll. Some flaws are readily identified while others are more insidious. Understanding the limitations of polls is critical to understanding their relevance in our conversations and in our democracy. (See “Poll addiction and democracy and.. democracy”)
It should be no surprise that finding a poll to support any given position is not a challenge. If you’re a conservative and want to “have the numbers” to support your particular version of reality, you need look no further than Fox entertainment. Progressives also have their favorite sources of biased information.
Identifying blatantly biased polls and dismissing them for lacking credibility should be a simple task, but often the euphoria of having cold, hard data to support a tenuous position is too much to resist. Forget the fact that the data is meaningless; if it supports my position it must be accurate. Written out, that statement looks ridiculous, but in casual conversation it is not difficult to hear biased poll information offered as factually accurate and in some way meaningful. Meaningful discussions are a blend of fact and opinion. Opinions come easily; facts should too, yet in our sometimes emotional pursuit of facts to support our opinions, the standards of credibility and critical thinking are sacrificed.
I am not a pollster and will leave an informed discussion of validity, representativeness, sample size, response rate and the like to the professionals. However, some issues related to polling do not require a doctorate in statistics to appreciate. In considering the types of polls, one can identify three categories: 1) Entertainment polls, 2) Agenda polls and 3) Non-agenda polls.
1) Entertainment polls are readily identified by their total disregard for sample representativeness. We’ve all seen them on cable television channels and websites: a question is posed, and whoever happens to be watching or reading is invited to respond. At the time of writing, the CNN website was asking, “Which party bears more responsibility for current economic problems in the United States? Democratic, Republican, Same.” At least two things make me not even bother to look at the results. First, people who choose to visit the CNN website likely share certain characteristics; it is almost certain that the same question asked on the Fox website would draw radically different responses. Second, because the question is asked on the web, respondents must have access to a computer and the internet. Entertainment polls should be easy to identify and used appropriately, that is, for nothing, not even entertainment.
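To see just how badly a self-selected audience can skew a result, here is a minimal simulation sketch. Every number in it is invented for illustration: a population split exactly 50/50 on some question, where one side is simply assumed to be more likely to visit the site and click the poll.

```python
import random

random.seed(0)  # reproducible illustration

# Hypothetical population split exactly 50/50 on some question.
population = ["A"] * 5000 + ["B"] * 5000

# Self-selection: assume "A" holders are three times as likely to
# visit the site and click the poll (invented rates, for illustration).
respond_prob = {"A": 0.30, "B": 0.10}
respondents = [person for person in population
               if random.random() < respond_prob[person]]

share_a = respondents.count("A") / len(respondents)
print(f"True share of A: 50% | Web-poll share of A: {share_a:.0%}")
```

The poll reports roughly 75% for “A” even though the population is evenly split; no amount of additional clicks fixes a sample that selects itself.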
2) Agenda polls are trickier. These are polls that claim statistical validity and scientific legitimacy, but that does not mean they are fully accurate. The fact that “Democratic” and “Republican” are used as adjectives for “pollster” betrays the agenda behind the polling. Results of such polls are more difficult to dismiss out of hand, and close attention must be paid to the wording of questions. One must also consider the complete set of questions in the poll and not take responses favorable to a given position out of context. For example, a poll might show that a majority of respondents disapprove of the way a Democratic president is handling a particular policy issue. A Republican pollster might cite that as evidence that respondents want a more conservative policy. That may not be accurate: a close look at all the questions might reveal that a significant number of those disapproving actually want a more progressive approach. Lumping everyone who disapproves into one category is misleading, at best. Identifying the group responsible for the poll is usually not difficult and will quickly reveal whether it has an agenda.
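The disapproval example can be made concrete with invented numbers. The sketch below assumes a hypothetical crosstab in which overall disapproval is broken out by what respondents say they would prefer instead:

```python
# Invented crosstab numbers, purely for illustration (percentages):
poll = {
    "approve": 45,
    "disapprove, want more conservative": 30,
    "disapprove, want more progressive": 25,
}

# The headline number lumps both flavors of disapproval together.
disapprove = sum(v for k, v in poll.items() if k.startswith("disapprove"))
print(f"Headline: {disapprove}% disapprove")
print(f"Actually wanting a more conservative policy: "
      f"{poll['disapprove, want more conservative']}%")
```

A pollster can truthfully report “55% disapprove” while hiding that nearly half of that disapproval comes from people who want the opposite of the conservative alternative being sold.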
3) Non-agenda polls are performed by institutions that have no stated bias. This would include private companies such as Gallup and educational institutions such as Quinnipiac University. Organizations such as these attempt to conduct polling that is representative of the population using non-leading questions. This is the best source of data, but not necessarily of information. For data to become information, proper context must be applied, and this is where it is incumbent upon those citing the poll to understand the entire poll and not just the response to a single question. A single response may be fully accurate from a statistical perspective and make a great sound bite, but the truth is often gleaned only when that question is considered in relation to other responses in the same poll. It may be too much to expect everyone to evaluate the full poll whenever a news item mentions one of its responses. A good start, however, would be for everyone to ask themselves the basic question, “I wonder if that is accurate?”
Critical thinking and skepticism are two assets in evaluating the credibility of poll data. If the data isn’t valid or is biased, then citing it to support your point of view diminishes your credibility, and to those who understand the limitations of polling it makes you look, well… dumb.