Saturday, December 10, 2011

Why The Reuters Israel Bias Study Is Flawed

The traditional pro-Israel blogosphere has been buzzing over the release of an academic journal paper claiming that Reuters employs anti-Israel propaganda in its reporting. Given that most of the evidence of anti-Israel bias in the press is anecdotal, an academic paper could lend considerable weight to claims that the media consistently discriminates against Israel in its reporting.

The paper suggests persuasively that anti-Israel bias may be at play in the media. But social science is not for suggesting. It is for demonstrating with a high degree of certainty that a given variable is causing a certain outcome. The difficulty of this task is perhaps the most intellectually honest reason many academics steer clear of policy-relevant research. Simply put, it's really hard to demonstrate anything to a high degree of certainty.

The author, a finance professor, begins by coding 50 Reuters articles randomly selected from the period of May 31 to August 31, 2010. He goes through each article with a list of 41 possible types of propaganda (rumors, euphemism, innuendo, etc.) and classifies each instance of bias as one of these 41 types. Separately, he asks 33 college students to rate their affinity towards either the Israelis or the Palestinians/Arabs, and their willingness to take unspecified supportive action on their behalf. The students then read the same 50 articles and answer the same two questions about their personal affinity after each one. The author finds that reading the articles shifted the students' stated affinity towards the Palestinians in a statistically significant way.
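For readers unfamiliar with what a "statistically significant shift" means in this context, here is a minimal sketch of the kind of paired before/after comparison such a design implies. The numbers are simulated and purely illustrative; the paper's actual data and statistical procedure are not reproduced here.

```python
# Hypothetical sketch of a paired before/after affinity test.
# All numbers are invented for illustration only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_students = 33

# Pre-reading affinity on an arbitrary scale (higher = more pro-Israel sentiment).
pre = rng.normal(loc=0.0, scale=1.0, size=n_students)
# Simulate a modest shift towards the Palestinian side after reading the articles.
post = pre - rng.normal(loc=0.3, scale=0.5, size=n_students)

# Paired t-test: is the average within-student shift distinguishable from zero?
t_stat, p_value = stats.ttest_rel(pre, post)
print(f"mean shift = {np.mean(post - pre):+.2f}, t = {t_stat:.2f}, p = {p_value:.3f}")
```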

Sounds pretty convincing, and for an editorial it certainly would be. From a social science perspective, however, the research design described above has major flaws.

In the article, the author uses a fancy-sounding technique called ethnographic content analysis (ECA). Content analysis, as much as it pains this blogger to admit, is a fancy term for going through a document with a pen and circling stuff that's interesting. Ethnographic content analysis means going through the document and circling stuff "reflexively" rather than developing transparent, reliable, and fair definitions of the coding categories before doing the actual coding. It is a method even more subjective than normal content analysis. So when a pro-Israel professor goes through Reuters articles about Israel, it's hardly surprising that he finds evidence of anti-Israel bias.
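To make the contrast concrete: conventional content analysis typically reports an inter-coder reliability statistic such as Cohen's kappa, computed from two coders independently applying pre-defined categories to the same passages. The sketch below uses invented labels and is only meant to show the kind of check a reflexive, single-coder reading omits.

```python
# Hypothetical inter-coder reliability check. Labels are invented; in a real
# study, two coders would independently apply pre-defined categories.
from sklearn.metrics import cohen_kappa_score

coder_a = ["biased", "neutral", "biased", "biased", "neutral",
           "neutral", "biased", "neutral", "biased", "neutral"]
coder_b = ["biased", "neutral", "neutral", "biased", "neutral",
           "biased", "biased", "neutral", "biased", "neutral"]

kappa = cohen_kappa_score(coder_a, coder_b)
print(f"Cohen's kappa = {kappa:.2f}")  # near 1.0 = strong agreement, near 0 = chance
```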

Such reflexive coding is all the more problematic given that the timeframe of the analysis covers the extremely emotional Gaza flotilla incident, in which tempers flared on all sides of the Israeli-Arab conflict. A researcher would be hard pressed to find a truly objective account of the incident in any media or academic source.

Evidence of the author's subjectivity is compounded by the extremely questionable way in which the articles are coded. For example, the claim that the Gaza blockade harms 1.5 million Palestinians is coded as a mere "assertion." The UN and countless human rights groups have documented that harm, which makes the coding debatable at best and false at worst.

The research design also has practically no controls. In other words, the coding scheme is the social science version of asking "George W. Bush: great president or greatest president?" Additionally, while the author begins by saying that demonstrating an intent to propagandize is nearly impossible for a researcher to do, he ends by accusing Reuters of having an "explicit purpose to disseminate that [biased] ideology and manipulate audiences to adopt the same."

Furthermore, the student participants in the experiment read only Reuters articles. That means the research design cannot tell us whether Reuters alone is engaging in so-called propaganda, or whether students reading about an extremely emotional event (the Gaza flotilla raid) simply tend to side with the Palestinians after being reminded of the incident. Nor does it demonstrate that college students are representative of Reuters' general readership. Nor can the design distinguish whether Reuters is biased or whether it is merely responding to social norms about the use of force that the author considers unfair.
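To illustrate the missing control, here is a rough sketch of the comparison the design would need: the affinity shift of students reading Reuters coverage set against the shift of a comparison group reading another outlet's coverage of the same events. All numbers are invented for illustration.

```python
# Hypothetical sketch of a control-group comparison the study lacks.
# Per-student affinity shifts are simulated; negative = shift towards the Palestinians.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
shift_reuters = rng.normal(loc=-0.4, scale=0.6, size=33)
shift_control = rng.normal(loc=-0.3, scale=0.6, size=33)

# Two-sample t-test: does Reuters coverage shift readers more than the comparison coverage?
t_stat, p_value = stats.ttest_ind(shift_reuters, shift_control)
print(f"difference in mean shift = {shift_reuters.mean() - shift_control.mean():+.2f}, "
      f"p = {p_value:.3f}")
```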

Most egregiously, the study tests for bias essentially by asking 33 college students, "Are you biased, and if so, how much?" Such self-report measures are widely recognized as problematic in social science.

In conclusion, the article is far from conclusive proof of systematic propagandizing by Reuters. Attempts to explain Israeli-Palestinian media bias across academic disciplines (Vallone, Ross, and Lepper, 1985; Kressel, 1987; Giner-Sorolla and Chaiken, 1994; Gentzkow and Shapiro, 2006) have consistently pointed to the subjectivity of the coder, rather than the media, as the causal mechanism behind findings of bias. This article ultimately fails to persuade the reader otherwise.
