The Cursory Pedant: War Rape, the Human Security Report and the Calculation of Violence

“Cursory and pedantic”. So says IntLawGrrls’ Fionnuala Ní Aoláin of the just released Human Security Report 2012 (hereafter HSR). You may recall the team behind the HSR from their last intervention, which upset the applecart over the estimate of 5.4 million excess deaths in Congo (DRC) since 1998 and which also claimed a six-decade decline in global organised violence. The target this time round is a series of putative myths about wartime sexual violence (those myths being: that extreme sexual violence is the norm in conflict; that sexual violence in conflict is increasing; that strategic rape is the most common – and growing – form of sexual violence in conflict; that domestic sexual violence isn’t an issue; and that only males perpetrate rape and only females are raped), each of which the authors claim to overturn through a more rigorous approach to available evidence. Along the way an account is also given of the source of such myths, said to lie in the funding needs of NGOs and international agencies, which lead them to highlight the worst cases and so to perpetuate a commonsense view of war rape that is “both partial and misleading”.

Megan MacKenzie isn’t impressed either, especially by HSR’s take on those who currently study sexual violence:

[HSR’s view is] insulting because it assumes that those who work on sexual violence – like me – those who have sat in a room of women, where over 75% of the women have experienced rape – as I have – listening to story after story of rape, forced marriage, and raising children born as a result of rape, it assumes that we are thinking about what would make the best headline, not what are the facts, and not what would help the survivors of sexual violence.

Laura Shepherd (who like Ní Aoláin and Megan has written at some length on these issues) took a slightly different approach: “It makes not one jot of difference whether rates of [incidents of conflict-related sexual violence] are increasing, decreasing or holding entirely steady: as long as there are still incidents of war rape then the issue demands serious scholarly attention rather than soundbites”. Activists are concerned less by what the report says than by how it will be interpreted and the effects this will have on victims and survivors of rape (the danger, in Megan’s words, that “painting rape as random is another means to detach it from politics”). By contrast, Laura Seay (who has previously addressed similar issues in relation to Congo) is very supportive: “it’s hard to find grounds on which to dispute most of these claims. The evidence is solid”. Andrew Mack (who directs the HSR) similarly replied that the data supports HSR’s claims and that, despite criticisms, it had been checked rigorously.

So what is going on here?

Since the report and subsequent debates mash together a whole series of issues, it’s worth untangling them a bit (I’m focusing here only on the wartime sexual violence claims in the report). First, in HSR’s defence, their target appears to be ideas common amongst policy-makers (academics, for example, tend to cite Elisabeth Jean Wood a fair bit). To the extent that the Report stimulates a more careful disbursement of funds, it is of course to be welcomed. I was particularly taken by the idea that there should be an effort at evidence-gathering more in line with that adopted for the Millennium Development Goals. Moreover, there is value in trying to think about global scale and change, even if some of us disagree very strongly indeed with appeals to incontestable facts and law-ish trends, and even if other ways of studying sexual violence are just as important. Any generalisations at this level necessarily swamp a more qualitative approach, but we nevertheless need to be clear about the possible grounds for contestation. 

In the case of counter-examples, like that of Libya that Megan raises, caution about the character and extent of sexual violence is indeed in order, and it would not fundamentally challenge the HSR conclusions if there were rape in this and similar conflicts (as there surely was), since their point when it is made most coherently is that the relative number of wars-with-overwhelming-levels-of-rape is not increasing (and may be on the decline). Atrocity ratcheting too is a real dynamic and one that can have all kinds of consequences that do little to ameliorate the effects of sexual violence. These issues are the subject of much controversy, although my own view is that assessments of general scale and severity are pretty much always implicitly present in explanations and ethical claims about violence and so cannot not be addressed (which isn’t the same thing as arguing that the only way to study them is through cod-scientific regression analysis). All that said, the particular claims with which the HSR seeks to challenge supposed conventional wisdom are themselves pretty problematic, as I hope to flesh out.

Second is the question of novelty, the crucial point being that there isn’t actually any new research at all in the HSR (more on which in a moment), and that the main argument offered for any possible ‘decline’ thesis isn’t based on sexual violence data, but on claimed war trends. Moreover, most of the mythological views attacked by the report aren’t actually held by analysts (at least, not by ones I’m familiar with), and many of the critiques (that domestic violence matters, that men are raped too, that interventions can be driven by dynamics other than evidence) have not only been made before, but are common in the rather extensive literature that HSR neglects (particularly ironic given that some HSR advocates are leaving comments basically alleging that academic critics don’t know what we’re talking about). This inevitably results in straw-man arguments: who, after all, really claims that rape in Congo is representative of all violence in all conflict situations since World War II?[1]

The suspicion raised in several places is that the HSR is out for easy publicity, as was the case last time round, when one of those who had produced the disputed International Rescue Committee estimate of excess mortality in the DRC retorted that it was “unbecoming to grab a headline a decade after by tearing down a study with erroneous speculation”. In this case, I think the question of why war rape happens is important, and relatively neglected (although not by everyone). I would probably also agree with HSR’s view that “strategic rape is less common than claimed”, although that phrasing hides a lot, and touting strong conclusions on this quality of evidence probably isn’t wise (nor is it sufficient to implicitly define the two possible kinds of rape as either ‘strategic’ or ‘domestic’ – there is a lot, a lot, of space in between).[2]

Third, and most importantly, there is the problem of concepts, and of calculation. The issues here span a spectrum from the minor to the crucial. To take a relatively small example that can nevertheless have considerable consequences, the focus on numbers of cases, or on counting years of violence, is not self-evidently the best way to frame the problem. Some wars are bigger than others and involve more human suffering, however you want to go about estimating that. If war rape declines from levels in the tens of thousands in countries A and B, but increases to levels in the hundreds of thousands in country C, has the problem got worse or not? On the HSR method, this would count as a decline, even if the total number of persons raped (and even the proportion of those raped to those not) increased.
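A minimal sketch of the problem, with invented figures (none of these numbers are real, and the threshold is arbitrary): counting ‘high-rape’ conflicts can register a decline even as the total number of victims rises.

```python
# Illustrative only: invented figures. A count of 'wars with
# overwhelming levels of rape' can fall while total victims rise.
periods = {
    "period 1": {"A": 40_000, "B": 30_000, "C": 5_000},
    "period 2": {"A": 2_000, "B": 1_000, "C": 300_000},
}
THRESHOLD = 10_000  # arbitrary cut-off for a 'high-rape' conflict

for label, wars in periods.items():
    high = sum(1 for v in wars.values() if v >= THRESHOLD)
    total = sum(wars.values())
    print(f"{label}: {high} 'high-rape' wars, {total:,} victims in total")
# period 1: 2 'high-rape' wars, 75,000 victims in total
# period 2: 1 'high-rape' wars, 303,000 victims in total
```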

In definitional terms, the HSR also operates with a restrictive conception of gender violence and with an inappropriately stark public/private distinction. The understanding seems to be that paying attention to war rape means not paying attention to domestic violence, and that any rape not carried out by soldiers doesn’t count as war-related. This goes completely against the trend of almost all feminist and gender research on these issues, which consistently connects war and peace situations, and frequently calls attention to how misleading the label of ‘peace’ can be when considering gendered insecurity. The result for the HSR is a rather flimsy opposition of strategic war rape with domestic rape, the kind of distinction that actually reinforces the problem that it claims to be solving.

At several points, the HSR alights on the relative lack of the ‘most severe’ cases of sexual violence in conflict, and says that they are “far from the norm”. But what is the measure of ‘severity’ here? The categories are borrowed from some work by Dara Kay Cohen, who created a dataset on wartime sexual violence by using the human rights reports of the US State Department.[3] Cohen coded civil wars from 1980 to 2009, with 15 having no rape (level 0), 18 showing isolated reports (level 1), 35 as having ‘numerous’ or ‘many’ rapes (level 2) and 18 with widespread or systematic sexual violence (level 3). Consider that the threshold for a level 2 incidence of sexual violence includes wars in which the State Department used words like “widespread”, “common”, “extensive”, “persistent”, “spree”, “routine”, “regular” and similar to refer to rape. To meet the criteria for a level 3 conflict, sexual violence had to be described as “systematic” or “massive”, or the State Department had to invoke phrases suggesting that rape was a weapon, tool or tactic. In other words, 62% of the coded wars on this measure involved very serious and widespread levels of sexual violence. Hardly a minority. The HSR instead claims that Cohen’s statistics show that 56% of conflicts since 2000 had low, or no, rape. The raw data for that period is not in the HSR and is not presented in Cohen’s paper – a footnote suggests HSR obtained it separately, so it is impossible to adjudicate. But even if it is correct, 44% of cases having widespread or extreme levels of sexual violence is not a minority in an analytically decisive sense (even if it is in a technical mathematical one).
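The percentages at stake can be recomputed directly from Cohen’s published counts; a quick check (assuming only that the four levels are exhaustive over the 86 coded wars):

```python
# Cohen's reported counts of civil wars, 1980-2009, by coded level of
# wartime sexual violence (0 = none reported ... 3 = systematic/massive).
counts = {0: 15, 1: 18, 2: 35, 3: 18}
total = sum(counts.values())  # 86 coded wars

print(f"level 3 only:   {counts[3] / total:.0%}")                # 21% -- 'far from the norm'
print(f"levels 2 and 3: {(counts[2] + counts[3]) / total:.0%}")  # 62% -- hardly a minority
```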

[Image: The Economist putting Cohen’s figures to some different purposes in 2011.]

But there are some other crucial ambiguities here. Coding thresholds are slippery things: is the gap between “widespread and extensive” and “massive and systematic” enough to establish that the two forms of wartime sexual violence are qualitatively different? Cohen herself is cautious on these points, and lists a number of potential problems that are not stressed as strongly in the HSR report (certainly nothing in what follows should be taken to mean that I think her work is wrong, only that the questions of calculation and quantification involved are very fraught). State Department researchers may have incomplete data, and indeed they do not appear to have conducted any field research themselves (the reports are summative documents, drawing on reports from various NGOs and embassies: essentially desk research). The State Department also seems to have previous form in neglecting gender violence in these reports. Moreover, report authors would not have known that their choice of words would eventually be taken to reflect major differences across cases, and so may not have been as careful with phrasing as they should have been.

Consider for example the entries for the DRC for the period 2008-2011. Checking each report and applying Cohen’s criteria, we find the following. In 2008, there was one term indicating sexual violence at level 3 (‘weapons of war’) but many more indicating level 2 (‘widespread’ was used 6 times, ‘common’ twice, ‘frequent’ twice and ‘often’ 4 times in relation to sexual violence). How are we to code the conflict? Is the single use of a level 3 phrase sufficient, and does it matter that it is conjoined with a level 2 one (the phrase in question is “often a weapon of war”)? In 2009, ‘weapon’ was again used once (in the same cut-and-paste phrase), but level 2 terms were again more common. In 2010, there were some extra uses of ‘systematic’ but still more of the ‘non-extreme’ phrasings. But in 2011 there do not appear to have been any level 3 terms at all. On the HSR model, does that mean that rape got less severe in Eastern DRC from 2010 to 2011? Or does it reflect some other factor (for example an apparently shorter country report)? Certainly it seems a rather flimsy metric for any such conclusion.
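By way of illustration only, the coding exercise described above amounts to something like the following keyword count. The phrase lists are paraphrased from Cohen’s criteria as summarised here, and a real coding would read each report in context rather than mechanically:

```python
import re

# Phrase lists paraphrased from Cohen's criteria as summarised above.
LEVEL_2 = ["widespread", "common", "extensive", "persistent",
           "frequent", "routine", "regular", "spree", "often"]
LEVEL_3 = ["systematic", "massive", "weapon of war", "tool", "tactic"]

def count_terms(text, terms):
    low = text.lower()
    return {t: len(re.findall(re.escape(t), low)) for t in terms if t in low}

excerpt = "Rape remained widespread and was often a weapon of war."
print("level 2 hits:", count_terms(excerpt, LEVEL_2))  # {'widespread': 1, 'often': 1}
print("level 3 hits:", count_terms(excerpt, LEVEL_3))  # {'weapon of war': 1}
# The single level 3 phrase arrives embedded in a level 2 construction
# ('often a weapon of war') -- precisely the coding ambiguity at issue.
```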

So when Andrew Mack writes something like “the countries worst-affected by sexual violence constitute a minority of all war-affected countries” about the general figures for 1980-2009, he includes in the majority of conflict situations (those to be counter-posed to ‘worst case scenarios’) cases where the State Department describes rape as “common”, “a persistent pattern”, “extensive” and “frequent” at levels that are “innumerable”. The distinction between low-to-no cases (levels 0 and 1) and high-to-extreme cases (levels 2 and 3) seems more robust, but these remain measures drawn from a single official source (although with no real sense that the official source did the underlying original research) arrived at by an unclear process. Again, as Cohen herself points out, these reports offer no figures, only summative assessments. They appear comparable with each other, but the extent to which they reflect, or merely select, from the vast array of documents and analyses in existence beyond them, is really quite obscure.

More importantly, some of the distinctions here are not really about scale at all, but about form. The phraseology of ‘weapon of war’ has indeed become common to all kinds of documents and interventions, but for analytical purposes it is a designation of intention, strategy and ends, not of sheer numbers. It is logically conceivable, and may well frequently be the case, that sexual violence is higher in settings where it is not carefully planned and deployed specifically by military or political leaders as a tool. After all, part of the instrumentality of using a weapon is knowing when to stop. Military designs are in this sense as much about restraint as they are about permission. A different kind of account (one stressing, say, levels of rape carried out by civilians, by disaffected soldiers operating outside of a formal military hierarchy, or merely attributing sexual violence to ‘needs’, ‘desires’ or ‘sexuality’) may well suggest very high levels of sexual violence without there being any sense of rape as a coherent tool or as ‘systematic’ (at least not without making ‘systematic’ synonymous with ‘extensive’).

For example, all analysts agree that sexual violence in the Democratic Republic of Congo is extremely widespread and very serious (and epidemiological evidence of a much more careful kind than that deployed in the HSR would seem to bear that out). Yet this does not make it synonymous with being a ‘tool’ in the direct, military hierarchy sense that Cohen’s level 3 criteria suggest. Maria Eriksson Baaz and Maria Stern (who have done more than anyone to understand the perpetration of rape in Congo) conducted interviews with 226 soldiers and officers from the Congolese army, all of whom were directly asked whether or not they had ever received orders to rape. Not a single one reported receiving such orders. And here’s the twist: the HSR approvingly cites this exact point to illustrate the complexity of rape, and yet still sees the DRC as an example of severe rape, even though it cannot be on Cohen’s scale if Eriksson Baaz and Stern are right.

Nor is this the only internal tension. The strand of the report that argues that women are more significant perpetrators of rape in war is, for example, somewhat at odds with the stress on non-war rape as being what really matters. Moreover, while a main allegation early in the report is that the UN and others focus too much on strategic rape, the HSR ends up quoting Anne Marie Goetz’s threefold distinction of forms of sexual violence (widespread and systematic; widespread and opportunistic; and isolated and random) whilst making a point about domestic violence, despite that typology being more sophisticated than the one deployed in the HSR itself. That’s Goetz as in the Chief Advisor on Governance, Peace and Security at UNIFEM. A self-contradictory tangle transposed into a robust judgement on character and trends.

So, in sum, some valid questions; a possibly useful provocation on strategic rape that is largely squandered; a neglect of existing materials addressing similar concerns; a lack of original research (research which could have been particularly useful on the topic of agency spending priorities and errors); and a complex data measure which doesn’t show what the report thinks it does. War rape matters, and so does being as careful about discussing its details as we can manage, but care and coherence, it seems, can sometimes be in surprisingly short supply.


[1] This indeed suggests a whole other string of questions. Focusing on the countries where sexual violence is highest is only a problem if your orientating desire is to accurately describe all current violence in all conflict situations. That is an important research agenda, but it is not the only kind of orientating desire. If you were instead interested in learning as much as you could about countries where there was a lot of sexual violence so that you could do something to address that, it would be entirely appropriate to focus on the most severe cases. This is not selection bias. Problems arise only if you start dispensing funds or taking some other policy decision that applies to all countries on the basis of the situation in the worst one. I am not aware of levels of sexual violence in Congo being used as proxies for sexual violence elsewhere, nor of demands that everywhere get the same amount of funding as the worst cases. Nor, despite citing a number of reports that do seem to have made over-broad generalisations, does the HSR show that this is the case. Indeed, Mack and his colleagues say that they are not necessarily trying to reduce funding for sexual violence programmes anyway. So something of a confusion there.

[2] I would say more about this, but a lot of the discussion is currently embedded in a submitted – but not yet examined – PhD thesis. Some of the relevant arguments are in my recent European Journal of International Relations piece (and associated blogpost), but interested parties can look at the full thesis draft, provided they treat it with the appropriate caution.

[3] As far as I can tell, this work is still in unpublished form, and I am working from the same draft paper (from January 2012) as the HSR cites.

7 thoughts on “The Cursory Pedant: War Rape, the Human Security Report and the Calculation of Violence”

  1. I argue that much of what the Human Security Report Project produces is questionable. This is not to say that there is outright deceit, or a violation of the basic academic principle that allows for academic freedom twinned with “the scholarly obligation to base research […] on an honest search for knowledge” [citation: http://goo.gl/hb8Fu]; however, it would be better to consider them a lobbying organization with specific aims, ideologies, and goals, rather than a research centre based at a university. As a former employee I can report significant questions over the data used—even simply in terms of data handling, for example—which suggest major, systemic problems.

    Data Handling
    It was clear working at HSRP that there was limited understanding of what programs such as MS Excel could contribute to a quantitative effort. HSRP generally viewed Excel as a glorified table with a graphing function, unaware that one could use simple formulas to sum values, or apply filters or pivot tables. Until at least the publication of the Human Security Brief 2007, the bulk of the data work was essentially done by hand: taking values from the computer screen, entering them into a calculator, then re-entering the results into Excel. Naturally, this sort of data handling is prone to major error [http://goo.gl/uiXtG].
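    To make the contrast concrete, here is a trivial sketch (invented figures) of the alternative: the screen-to-calculator-to-spreadsheet transcription described above collapses into a single pass over the source values. In Excel itself, =SUM(B2:B6) and =AVERAGE(B2:B6) would do the same.

    ```python
    # Trivial sketch with invented figures: summing the source values in
    # one pass removes the manual transcription steps described above.
    yearly_values = [1200, 950, 3400, 780, 2100]  # invented figures

    total = sum(yearly_values)
    average = total / len(yearly_values)
    print(f"total: {total}, average per year: {average:.0f}")
    # total: 8430, average per year: 1686
    ```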

    Intent and Statistical Work
    The group states that “the HSRP tracks global and regional trends in organized violence, their causes and consequences” [http://goo.gl/UEIjM]. The way the findings are presented—as analysis, with data, charts and figures, and reports of ‘findings’ and ‘significant’ [http://goo.gl/70Y0g] changes—strongly implies that their in-house work is backed up by actual quantitative, statistical data analysis. This is, however, not the case. It is my understanding that no one with any substantial statistical knowledge or expertise was involved in the production of the 2005 Report or the 2006 Brief. I would be profoundly surprised if anyone with any statistical knowledge to speak of, or even a basic understanding of statistical concepts, was involved in the 2007 Brief. If their director or then deputy had this training, it certainly did not appear to impact the publications. The Human Security Report 2009/2010 presumably continued in this manner, as a former colleague suggested to me, although I cannot speak to their more recent publication.

    Data
    Given the apparently data-intensive nature of their work, one would expect that the “Security Stats” section of their website [http://goo.gl/263TF], along with the data accompanying each graphic on the respective publication websites, would be accurate replication data. This is not the case. The data provided offers the numbers one needs to draw the graphic, not the source data from which the graphics were derived (e.g., the data provided for Figure 5.7 of their 2012 Report [http://goo.gl/WsT3s] is only the numbers required to build the figure, not the numbers needed to derive the results). This effectively removes the ability of anyone to check their purported “challenges” to the commonly held assumptions they claim to overturn.

    As an example, consider the following from chapter three of their 2006 Brief—specifically, the first figure from the chapter, titled “Average Number of Conflict Onsets and Terminations, per Year, 1950-2005” [http://goo.gl/TzvIC]. This figure suggests that more conflicts have been ending than starting, in particular since the end of the Cold War. The data provided are only the data used to compile the figure [http://goo.gl/CwSrM]. It is possible to locate data in the “Security Stats” section of their website which closely approximates the data presented. Why it differs isn’t clear, though it may well be an updated dataset.

    Many of the graphics and statements claim a significant change in some pattern. What this typically means is ‘this graph is arranged in a manner which suits our needs.’ The conflict-terminations section of the 2006 Brief, mentioned above, is a case in point: you can literally rearrange the data they provide as evidence for a rise in conflict terminations and a fall in conflict onsets so as to support the completely opposite conclusion—not ‘a 1990s peace dividend and the success of rising international activism’ but ‘the end of the Cold War resulted in a massive increase in wars starting, ineffective conflict-suppression work, and a neutered UN.’ That argument, however, doesn’t support the preordained conclusions. They also start the graphic in a convenient year which supports their message, and use averages rather than absolute values, effectively disguising the magnitude of the change (from 1990 to 2005 their data suggest the “dramatic changes” amount to 157 conflicts starting and 164 ending).
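    The averages-versus-absolutes point can be made concrete with the totals reported above; a quick check (the only inputs are the 157 onsets and 164 terminations over 1990-2005 quoted in the previous paragraph):

    ```python
    # Using the totals quoted above: 157 conflict onsets and 164
    # terminations over 1990-2005 (16 years inclusive).
    onsets, terminations = 157, 164
    years = 2005 - 1990 + 1

    print(f"average onsets per year:       {onsets / years:.2f}")        # 9.81
    print(f"average terminations per year: {terminations / years:.2f}")  # 10.25
    print(f"net terminations over the period: {terminations - onsets}")  # 7
    # Framed as per-year averages the gap looks systematic; framed as an
    # absolute difference it is seven conflicts across sixteen years.
    ```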

    Has this questionable behaviour continued? In their 2007 Brief, a somewhat disguised section of text defines Sudan as not being in Africa. That is hardly the conclusion one would draw from the text which opens the “Towards a New Peace in Africa?” chapter:

    “Recent news from sub-Saharan Africa has not been good. Since the end of 2007, spiralling intercommunal violence in Kenya has killed more than 1,000 people and displaced well over a quarter of a million. Somalia, still without a functioning government, has become the battleground of a bitter low-level proxy war between Eritrea and Ethiopia. The growing violence in Darfur has spilled over to envelop neighbouring Chad and the Central African Republic, while in southern Sudan, the 2005 peace agreement that stopped a civil war that has cost 2 million lives is in grave risk of breaking down. In the Democratic Republic of the Congo (DRC), elevated levels of disease and malnutrition caused by almost a decade of political violence have been killing an average 45,000-plus people a month—half of them children—since 2003.” [http://goo.gl/Ew6xs]

    Reading this opening paragraph, one would presumably think that Sudan would be counted as a country in Africa: after all, Darfur is in Sudan, South Sudan was still part of Sudan at the time, and the conflicts with Chad and CAR are cited. Generally speaking, Eritrea and Ethiopia are also considered to be in Africa. Yet several paragraphs later we get this little note: “…(Although the war in Darfur is directly linked with the conflicts in Chad and the Central African Republic, it is not included in the sub-Saharan Africa conflict totals because Sudan is part of the Middle East and North African region.)…” This may help to explain the “extraordinary, but largely unnoticed, positive change in sub-Saharan Africa’s security landscape” [http://goo.gl/hbTsa]. Of course, it’s next to impossible to verify this conclusion given that the data presented isn’t actually replication data.

    Another case is from the same publication, which also made claims about terrorism. One of their sources wasn’t a peer-reviewed journal, as you might expect, but rather data pulled from a story in the magazine Mother Jones: this involved literally eyeballing a graphic from the article to infer the numbers. Concerning their more recent publications, a former colleague at HSRP stated: “I do know that they struggled to get the last Report finished because either the data wasn’t telling them what they wanted it to or they couldn’t explain what they found in a cohesive way, or both.” This is hardly up to the standard one would expect of research coming out of a university, which requires a “scholarly obligation for an honest search for knowledge” [http://goo.gl/XBcAV].

    Datasets
    The datasets on which HSRP’s major conclusions are based are also problematic. Consider three major conflicts in 2006 and the farcical nature of the data behind their claims becomes clear: in 2006, their datasets show that DRC suffered or was responsible for 239 violent deaths as a best estimate, Iraq 4488 deaths, and Israel 1331 [http://goo.gl/WzYHG]. Stated differently, Israel had over five and a half times the number of deaths as DRC in 2006, while Iraq had just over three times the number of deaths as Israel. Consider that the Iraq Body Count [http://goo.gl/o8ZF5] suggests over 28,800 people were killed in Iraq in 2006—six times the deaths reported by HSRP/UCDP (UCDP—the Uppsala Conflict Data Program—is often commissioned by HSRP to provide conflict datasets). Likewise, the 873-odd American and allied military fatalities in Iraq in 2006 amount to 20% of the deaths recorded by UCDP: on their data, only about five times as many Iraqis died as allied forces. In 2006 Israel fought a several-weeks-long war in Lebanon, while Iraq in 2006 saw some of the most violent months in the country since the 2003 invasion. As for the DRC in 2006: the country had some 20,000 UN blue helmets deployed on a mission costing some $1.2 billion.
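    For what it is worth, the ratios above do follow from the cited figures; a quick arithmetic check (all numbers are those quoted in this comment):

    ```python
    # Checking the ratios quoted above against the cited figures.
    deaths_2006 = {"DRC": 239, "Iraq": 4488, "Israel": 1331}  # HSRP/UCDP best estimates
    ibc_iraq = 28_800        # Iraq Body Count figure cited above
    allied_fatalities = 873  # US and allied military deaths cited above

    print(f"Israel vs DRC:      {deaths_2006['Israel'] / deaths_2006['DRC']:.1f}x")   # 5.6x
    print(f"Iraq vs Israel:     {deaths_2006['Iraq'] / deaths_2006['Israel']:.1f}x")  # 3.4x
    print(f"IBC vs UCDP (Iraq): {ibc_iraq / deaths_2006['Iraq']:.1f}x")               # 6.4x
    print(f"allied share of UCDP Iraq total: {allied_fatalities / deaths_2006['Iraq']:.0%}")  # 19%
    ```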

    I am not purporting to know what the actual numbers of deaths in these situations are; however, the data presented by HSRP—onto which they project their preconceived conclusions—are so flawed as to be pure fiction.

    Recall that many of the claims made by HSRP involve countering the media narrative or commonly held assumptions, which they remind us are usually wrong. Yet the data for most of the datasets they cite comes from media reports of violent deaths: it is this same media reporting that they claim over-reports violence in some cases and under-reports it in others. I suppose this is why Israel, which is monitored extensively by the world media, had over 450% higher death totals in 2006 than the DRC did. Or why the International Rescue Committee (IRC), who weren’t sitting in air-conditioned offices at Simon Fraser University in Vancouver but were on the ground doing extensive field work and submitting to peer review, were so profoundly wrong in their findings [http://goo.gl/S8JSE].

    I make no claims about knowing what the right answer is, but whatever conclusions they draw from this data are enormously flawed. The response from HSRP will inevitably be that ‘we understand, of course, that our data don’t capture all deaths,’ or ‘the trend remains accurate.’ But there is a difference between having a consistent uncertainty in the data—which can be managed using known statistical techniques and by submitting to peer review—and whatever is going on here.

    Concluding Thoughts
    The HSRP was originally conceived to report on human security issues to policy-makers and to the wider public discussion. The intention was to make known what the academic community knows about the state of violence against humans. The need for this type of work is absolutely critical, as it helps provide data and knowledge to key decision-makers. How effective the HSRP is in performing this task is an entirely separate issue. Rather than a robust research centre which provides objective policy advice to governments and aid agencies—as one could argue groups such as the International Crisis Group do—the decided impression I was left with was that the organization functioned simply as something of a vanity project.

    When it was initially founded, and based out of the University of British Columbia, the Project claimed that it would publish a yearly report focusing on different aspects of human security. By now there should be eight Human Security Reports (2005 through 2012); currently there are three Reports and two ‘Briefs.’ The HSRP also briefly ran two “conflict monitors” [http://goo.gl/4ODbd] which consolidated news and information about the war in Afghanistan, as well as the de facto war in Pakistan; for this they were able to convince the Government of Canada to give them some $280,000 [http://goo.gl/CKSuo]. From the disclosure of this grant by the Government of Canada at the end of September 2010 until the HSRP shuttered the service in July 2011, they effectively received nearly $180 per repost of a notable news item—something Google News does for free. Although it should be noted, they did redesign the website.

    One really begins to wonder why the governments of Norway, Sweden, Switzerland, the United Kingdom, and the others which provide support continue funding such an endeavour. The HSRP is an excellent case study in the effective lobbying of governments and funders by a highly skilled sales team which knows how to write an attention-grabbing press release, give compelling presentations, ‘challenge conventional knowledge’ to reveal a “dramatic” [http://goo.gl/bWI86] trend, find something significant [http://goo.gl/ihCWi] or something else which is “extraordinary but unnoticed” [http://goo.gl/XvYeF], or reveal that “widely held beliefs” [http://goo.gl/kyXPS] are wrong—yet the end goal of all this is to ensure continued funding and visibility alone.

    October 24, 2012
    Switzerland


    • “The datasets on which HSRP’s major conclusions are based are also problematic. Consider three major conflicts in 2006 and the farcical nature of the data behind their claims becomes clear: in 2006, their datasets show that DRC suffered or was responsible for 239 violent deaths as a best estimate, Iraq 4488 deaths, and Israel 1331 [http://goo.gl/WzYHG]. Stated differently, Israel had over five and a half times the number of deaths as DRC in 2006, while Iraq had just over three times the number of deaths as Israel. Consider that the Iraq Body Count [http://goo.gl/o8ZF5] suggests over 28,800 people were killed in Iraq in 2006—six times the deaths reported by HSRP/UCDP (UCDP—the Uppsala Conflict Data Program—is often commissioned by HSRP to provide conflict datasets). Likewise, the 873-odd American and allied military fatalities in Iraq in 2006 amount to 20% of the deaths recorded by UCDP: on their data, only about five times as many Iraqis died as allied forces. In 2006 Israel fought a several-weeks-long war in Lebanon, while Iraq in 2006 saw some of the most violent months in the country since the 2003 invasion. As for the DRC in 2006: the country had some 20,000 UN blue helmets deployed on a mission costing some $1.2 billion.”

      “Recall that many of the claims made by HSRP involve countering the media narrative or commonly held assumptions, which they remind us are usually wrong. Yet the data for most of the datasets they cite comes from media reports of violent deaths: it is this same media reporting that they claim over-reports violence in some cases and under-reports it in others. I suppose this is why Israel, which is monitored extensively by the world media, had over 450% higher death totals in 2006 than the DRC did. Or why the International Rescue Committee (IRC), who weren’t sitting in air-conditioned offices at Simon Fraser University in Vancouver but were on the ground doing extensive field work and submitting to peer review, were so profoundly wrong in their findings [http://goo.gl/S8JSE].”

      Not sure about all the claims of the anonymous former ‘insider’ above, but most of the interpretations and claims in the quotes above are wrong. You can look through the sourcing for each conflict in the Armed Conflict data (UCDP/PRIO) used by HSRP here:

      [PRIObd3.0_documentation.pdf]

      It is not true that most of their conflict estimates are “from media reports of violent deaths”. Indeed, Andrew Mack co-authored a paper refuting this claim and other mistaken claims about HSRP’s data sources here:

      [Spagat-Mack-JConflict-Resolution-EstimatingWarDeaths.pdf]

      “In fact, as the documentation for the data set makes clear, fatality estimates are not based primarily on media reports. PRIO used all available sources, making informed judgments about their reliability before including anything in the fatality estimates. The following rough de facto hierarchy of source reliability was employed: peer-reviewed studies aimed at integrating the findings from many sources; the work of epidemiologists and demographers; military historians; other published casualty estimates, mostly in books on particular conflicts; and Keesings Record of World Events.”

      Some conflicts do use sources involving media reports, but most don’t. For Iraq they use Iraq Body Count in the version above (the 2009 version; I don’t have the 2006 version on hand). I suspect that the 4488 deaths mentioned above for Iraq in the 2006 version referred specifically to battle deaths during 2006, not all violent deaths, and that this, rather than “farcical” data or practices, is why it was so much lower than the IBC numbers. The Mack paper above clarifies this point with respect to other mistaken critics who treated their battle-death numbers as estimates of all violent deaths.

      The 2009 version I linked above gives a battle-death estimate for Iraq in 2006 of 3,656, somewhat lower than the 4,488 apparently used in the 2006 version, but it explains that these numbers are for battle deaths and “does not include attacks on civilians or fighting between non-governmental actors”.

      They then go on to give broader estimates of violent deaths overall:

      “2006:
      Best and high estimate: 35,071 (Iraqi civilians (Iraq Body Count), Iraqi security forces and Coalition forces (iCasualties), contractors (Schooner), and insurgents (Michaels))
      Low estimate: 3,656 (UCDP)”

      Since 2006 was the height of the sectarian civil war in Iraq, most of the deaths during that year were from various non-government groups targeting each other for execution, which does not fit the ‘battle deaths’ definition. So it seems the former ‘insider’ was misinterpreting a battle deaths number and declaring it ‘farcical’ for not matching up to an IBC number for civilian deaths overall. Battle deaths as defined by HSRP are a particular subset of violent deaths. Odd that a former employee would not understand this.
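      The definitional point can be put numerically with the two figures quoted above for Iraq in 2006 (a minimal sketch; the figures are those cited in this thread):

      ```python
      # The subset relation at issue: UCDP 'battle deaths' versus the
      # broader estimate of all violent deaths, Iraq 2006.
      battle_deaths = 3_656        # UCDP battle-death (low) estimate
      all_violent_deaths = 35_071  # broader best/high estimate

      share = battle_deaths / all_violent_deaths
      print(f"battle deaths as a share of all violent deaths: {share:.0%}")  # 10%
      # Comparing a battle-death count with IBC's ~28,800 civilian deaths
      # is a category error, not evidence of 'farcical' data.
      ```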

      On the DRC numbers, it’s unclear why the former insider thinks 239 (presumably battle deaths again) in 2006 in DRC is wrong or ‘farcical’. Supposedly this is because Israel is given a higher number. He or she doesn’t offer any alternative number, but knows this one is wrong… somehow. Perhaps this is based on the high claims of ‘excess deaths’ from sources like the IRC, but it should be noted that more than 90 percent of their estimated excess deaths were non-violent, and by 2007 less than 1 percent of the excess deaths estimated by IRC were violent deaths (let alone battle deaths).

      And with regard to the IRC studies of DRC, they’re apparently right because they were “on the ground”, whereas HSRP critiqued them from “air conditioned offices”. Well, you don’t need to be on the ground to know that the first two IRC surveys did not use proper random selection techniques, or that the pre-war mortality rate used to derive an ‘excess deaths’ number for all the surveys was selected arbitrarily by IRC (presumably while sitting in their air-conditioned offices) and seems too low for a variety of reasons—problems which fatally undermine the estimates claimed by IRC. Additionally, HSRP also cites a DHS survey of DRC, also performed ‘on the ground’ there, which again undermines the estimates claimed by IRC. Their argument is pretty strong on that case. It can be read in Chapter 3 here:

      [HSRP_ShrinkingCostsOfWar.pdf]

      In sum, I don’t think the claim that the HSRP data is “so flawed as to be pure fiction” is a credible claim or properly supported in the above comment, but clearly the author has engaged in some fiction of his or her own here to make it appear credible.


  2. Pingback: Guest Post: What’s Wrong with the Human Security Report and the “Global Decline Claim” » Duck of Minerva

  3. Pingback: Is Wartime Rape Declining On a Global Scale? We Don’t Know — And It Doesn’t Matter | Political Violence @ a Glance

