He’s just taken two wounded and one dead. All he has to report is one confirmed, one probable [enemy fatalities]. This won’t look good. Bad ratio. He knows all sorts of bullets were flying all over the place. It was a point-to-point contact, so no ambush, so the stinkin’ thinkin’ goes round and round, so the probable had to be a kill. But really if we got two confirmed kills, there was probably a probable. I mean, what’s the definition of probable if it isn’t probable to get one? What the hell, two kills, two probables.
Our side is now ahead. Victory is just around the corner.
Karl Marlantes, What It Is Like to Go to War, Grove Press
Karl Marlantes is a Vietnam War veteran who, reflecting on his war experience, wrote down all his frustration with the strategy of attrition and the whole data collection process that was supposed to measure its progress – the infamous body count, and the idea that once a certain ratio of casualties was achieved, the enemy would have to surrender.
Marlantes is hardly alone: there is a broad consensus that the strategy of attrition, measured by metrics like the body count, was a major element in the American defeat in Vietnam.
On a personal note, reading Marlantes and other authors of Vietnam War literature, I couldn’t help thinking about another endeavour where there is an obsession with measuring certain indicators, in the conviction that these can give the measure of things and help gauge real progress.
Yes, humanitarian aid industry, I am thinking about you.
I do believe there is a striking resemblance between data collection as performed in the Vietnam War and data collection as performed in the humanitarian aid industry. More specifically, both are:
- Subject to garbage in, garbage out
- Strategically misleading
1. Quoting Wikipedia, in computer science, garbage in, garbage out (GIGO) describes the concept that flawed or nonsense input data produces nonsense output, or “garbage”. Marlantes describes poignantly the kind of pressure soldiers and junior officers were under when reporting casualties inflicted on the North Vietnamese and the Viet Cong. Everyone along the chain of command, from the humblest grunt in the bush up to William Westmoreland, had every incentive to present inflated statistics. The whole process was of course self-serving and self-defeating, but there was no means of triangulating the figures, nor any interest at all in doing so.
Flash forward some fifty years, and consider an aid project manager who has to fill in a report for a major donor asking for the number of reported GBV incidents in which survivors received psychosocial support. The filing system is what it is; there is no way to triangulate the information; the country office and the donor expect good figures; and the project manager is probably too busy actually supporting GBV survivors to give a damn about exactly how many of them there are. It is anybody’s guess what the incentive will be when compiling that report.
No offence is meant to those who strive to provide data as accurate as possible when compiling their reports. I like to think I used to be one of them. The fact is, sometimes the figures asked for are simply not objectively measurable, yet they will be taken as fixed truth by your head office or your donor: the incentive is entirely on presenting data that look as good as possible. The Vietnam War and the aid industry are hardly the only settings where incentives create this kind of distortion, but they seem to me two very relevant examples.
2. Much more important than quality issues is the fact that what the Americans were measuring in Vietnam, and what many humanitarian organizations are measuring today, is basically useless, if not outright misleading and dangerous. Knowing how many North Vietnamese regulars or Viet Cong had been killed gave no measure of the progress of the war. Quite the opposite: it contributed to giving decision makers a false sense of confidence.
In the same way, knowing how many trainings you delivered, how many awareness activities you organized, at times even how many tons of food have been distributed, is nowhere near a measure of whether you are making anyone better off. I have myself called off an activity that was clearly backfiring and squandering money, only to be told to keep it going, otherwise the metrics would look bad.
So, what about it?
I am all for data collection and analysis, but it has to be done properly. As seen above, garbage in will only bring garbage out. With regard to humanitarian interventions, there is abundant literature pointing out that the whole problem with the metrics I have described is that they are focused at the “activities” level rather than the “objectives” level. Yet somehow this simple truth still struggles to find its way from theory into practice, and a lot of resources are spent measuring progress on activities without anyone bothering to consider the big picture.