Friday 21 August 2015

"At the end of the day, everyone was a Ranger"

Two women, First Lt. Shaye Haver and Capt. Kristen Griest, graduated from the US Army's Ranger School at Fort Benning yesterday. That doesn't mean they'll be joining the 75th Ranger Regiment. The school and the regiment are institutionally separate, and combat arms remain closed to female soldiers. But they've still passed a course showing their ability to lead a small unit in brutal conditions. It is a tremendous accomplishment.


Their progress through Ranger School is a pretty interesting window into a persistent question in military sociology: the relationship between social cohesion, task cohesion, and unit performance. The concern that women would undermine fraternal bonds among male soldiers, or social cohesion, is often cited as a reason to bar women from combat, along with their supposed inability to meet the physical rigors of the job. Skeptics raised similar objections to integrating openly gay and lesbian soldiers, and for that matter to black soldiers, in earlier eras.


But I think a very good way of thinking through this issue is through costly signals. In a recent article about desertion published in International Studies Quarterly, I argue that when soldiers send costly signals of their commitment to their unit-mates (such as volunteering for service rather than being conscripted), they build trust and desert less often.


Passing through Ranger School is a hugely costly signal – far more than just volunteering. It is brutal. And in forcing these costly signals from its candidates, Ranger School itself seemed to undermine social differences among them. By showing they were capable of enduring and succeeding in the same challenges as their male peers, 1st Lt. Haver and Capt. Griest showed what they could do and overcame stereotypes about women’s physical capabilities. Here are some of their unit-mates quoted in the Army Times:



    "I was ignorant and assumed that because they were women, it was going to be harder for them," said 2nd Lt. Zachary Hagner, who was Griest's Ranger buddy for much of the course.


    During the last day of the Mountain Phase, Hagner said he had been carrying the squad automatic weapon for three days. Exhausted, he asked his teammates if they would help him out and take over the load for a while.



    "Everyone said 'no,'" Hagner said. "But [Griest] took it from me. She, just as broken and tired, took it from me, almost with excitement. I thought she was crazy for that, but I guess she was just motivated."



    Haver was the same way, said 2nd Lt. Michael Janowski, an infantry officer and Haver's Ranger buddy.



    Janowski said he also struggled with extra weight during the Mountain Phase, and Haver "was the only one who volunteered" to help, he said.


    "I probably wouldn't be sitting here right now if not for Shaye," he said.


    Second Lt. Erickson Krogh, an infantry officer, said his skepticism was "smashed pretty fast."


    Griest and Haver almost looked embarrassed as their classmates praised them for their efforts during Ranger School. "I got to know a few of the other females [in the course], they're absolute physical studs," he said. "When I completely believed they would run into physical walls, that was never the issue."


    …


    After a while, the students said, gender didn't matter.


    "When you're out in the field and you're starving and you're tired and you're cold and you're miserable, you really don't think about gender," said 2nd Lt. Anthony Rombold, an armor officer.


    Staff Sgt. Michael Calderon, a sniper section leader from Fort Carson, Colorado, agreed.

    "You're way too tired and way too hungry to honestly care," he said. "At the end of the day, everyone was a Ranger. It didn't matter, as long as the team pulled through and accomplished the mission."


Look at the process of learning. The other candidates learned what Griest and Haver were capable of, and gender stopped mattering in the teams: “everyone was a Ranger.”


Ranger School is a classic costly signal in that it separates, brutally, those who have a certain level of physical fitness and mental and emotional toughness from those who do not. (Were the standards lowered for the female candidates? It seems not: most of their female peers washed out. Maj. Jim Hathaway, a senior Ranger School officer, pushed back strongly on any suggestion that the standards had changed. Indeed, recognizing the concerns that would emerge from an accusation of favorable treatment, senior officers apparently stayed away from patrols involving female candidates, to avoid any notion that top brass was trying to secure their success.)


With costly enough signals of your commitment and your capabilities, soldiers can establish task cohesion – trust in their unit-mates’ efforts towards a common aim – regardless of initial social differences. This suggests a contingent relationship: at high levels of demonstrated commitment, social homogeneity doesn’t matter for intra-unit trust and performance.


This, and the similar experience of DADT repeal, adds a new dimension to what we know about task cohesion and social cohesion. Other research has also shown contingent relationships between task and social cohesion, but with different results. My article shows that social homogeneity mattered more in reducing desertion in volunteer units than in conscript units. Similarly, Peter Bearman found that among North Carolina troops in the American Civil War, more socially homogenous units had lower desertion rates at the start of the war, when things were going well for the South, but higher desertion rates than heterogeneous units by the end of the war, when everything was falling apart. My work and Bearman's suggest that social homogeneity helps military units more among the relatively committed and enthusiastic than among the uncommitted and those who don't want to be there. If nobody wants to fight (either in a losing Confederacy or among conscript units in Spain), it doesn't matter if you trust each other—in fact, trusting each other can help you desert together. These results stand against unambiguous, unqualified claims that it's social cohesion that keeps a unit fighting – a long tradition in military sociology. Social cohesion can't rescue an uncommitted unit.


But aren’t my findings, and Bearman’s, also in tension with the experiences of Griest and Haver? For them, training among the most committed, the toughest mentally and physically, washed away social differences – they didn’t matter anymore. Based on their experience, shouldn’t we expect social homogeneity to matter less, not more, in more committed units?


But I think costly signalling theory helps us resolve the apparent paradox. As I note in the conclusion to my article, the upper end of commitment in my study and in Bearman's is still not all that high – or at least soldiers there don't show it through costly signals. Volunteering for service is a long, long, long way from Ranger School – in an all-volunteer army, only a very small number of soldiers even try to get through Fort Benning, and only a fraction succeed. Bearman and I just don't cover the full range of levels of commitment and costly signals that a soldier can display.


Putting all of this together, it looks as though social homogeneity promotes unit cohesion only at middling levels of task cohesion. It can help reinforce a sense of trust that’s potentially there, but needs a boost. It’s useful for startup armed groups of volunteers to build on social networks. But social homogeneity won’t rescue a group of conscripts who don’t want to be there, and simply won’t matter among the toughest of the tough. Capt. Kristen Griest and First Lt. Shaye Haver, and their Ranger unit-mates, have shown that it isn’t an excuse.
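
If it helps to see the shape of that claim, here is a purely stylized sketch; the functional form and the numbers are invented for illustration, not estimated from my data or from Bearman's.

    # A stylized, invented illustration of the claim above: the marginal
    # effect of social homogeneity on cohesion peaks at middling levels of
    # demonstrated commitment and vanishes at the extremes.

    def homogeneity_effect(commitment: float) -> float:
        """Toy marginal effect of social homogeneity on unit cohesion, with
        commitment scaled from 0 (conscripts who want out) to 1 (Ranger
        School graduates). An inverted U: zero at both extremes."""
        return 4 * commitment * (1 - commitment)

    for c in (0.0, 0.25, 0.5, 0.75, 1.0):
        print(f"commitment={c:.2f}  effect of homogeneity={homogeneity_effect(c):.2f}")

On this toy picture, Ranger School sits at the far right of the scale, where homogeneity has nothing left to add.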

Monday 29 June 2015

"It is no longer a reproach to be known as a deserter": Desertion in the Civil War



I’m dusting off the old blog because I got a bunch of new followers Friday who seem like they're interested in desertion during the American Civil War. (That's what happens when you get retweeted by someone like Ta-Nehisi Coates.) David Brooks had talked about the dilemma of Robert E. Lee: valiant, but a slave-owner, fighting a treasonous war in defense of slavery. What does America do with him, revered as he is? Coates made a great point about deciding whom to venerate, and I responded:
It’s easier to ignore that the South fought, manifestly, for slavery, when you focus on how brave its troops were. But to do so, it helps to ignore desertion rates.

(I neglected to point out that it isn't just the Lost Cause. Emphasizing bravery and downplaying desertion was one way that all of white America came to grips with the Civil War while doing very little to address its origins. Sentimentality about bravery helps you think past what the war was about. By 1876, forgetting what the war was about was something the North, as much as the South, was happy enough to do. I'm overgeneralizing; but it's noteworthy that the first book-length study of desertion in the Civil War that I'm aware of took over sixty years to appear.)

In any case I thought I’d write a little bit about desertion in the American Civil War. I’m not an expert; my work is on the Spanish Civil War. But it’s hard to ignore the Late Unpleasantness. There is more extensive research on desertion in this war than in probably any other civil conflict. So, keeping in mind that others know way more than me, here are some basics and further reading.
To begin with, it's hard to get a fix on just how many deserters there were. It's hard to pin down the difference between surrender, capture, going AWOL temporarily with the intention of returning, and desertion. There were probably general incentives to under-report desertion, telling commanders what they wanted to hear. These problems were probably worse for the South than the North: as an army loses, there's more surrender, more capture, and—probably—more desertion.

Estimates based on official records are comparable for the two sides: 9-10% among white Union troops (see pp. 92-93 here), less than 8% among the US Colored Troops (p. 112 here), and around 10% for the Confederacy (p. ix here gives the figure of 103,400 deserters). On both sides, that’s about half of the total number of dead and wounded – so it was a major source of troop losses.
But keep those caveats in mind. An army historian writing in 1920 estimated desertion in the CSA at 45%, but didn’t document the estimate (p. 4 here). 

So, what kept soldiers fighting? A righteous cause, the barrel of a gun, or the "band of brothers" of a unit? A useful way of thinking through the phenomenon, I find, is the norms of military units. Do the soldiers around you want to fight, or not? Let's say you wanted to fight, in principle. If you couldn't trust your unit-mates to have your back in combat, would you? If it was "no longer a reproach to be known as a deserter," as one South Carolina officer wrote in August 1863, desertion became more thinkable (p. 29 here). Or, on the flipside, let's say you really didn't want to be fighting. Social pressure might make you join up in the first place, or you could be among the 10% of Union troops or 20% of Confederates who were conscripts. But once in your company, if everyone around you is gung ho, you might keep fighting because you don't have much choice.

So where did norms come from? I think a good framework is the sense of accomplishing something together, underpinned by a sense of community. If soldiers believed that their unit-mates were committed to the fight, they could trust that their unit-mates would have their backs and be more willing to endure. So as the war was ending, desertion became an epidemic in the CSA as fighting seemed futile, hardships at home ramped up, and many soldiers’ homes were now behind Union lines.

Trust could be easier to establish with social ties—coming from the same hometown or similar occupations. Among (white) Union volunteers, for example, desertion rates were considerably lower in companies where the troops came from the same state, or had the same job, or were similar in age.
But social homogeneity, helping you see your unit-mates as buddies, didn’t always help. Sometimes it hurt, quite seriously. Take North Carolina, which really was “first in flight,” as Katherine Giuffre once joked. At 13%, it had the highest desertion rate of any Southern state. Peter Bearman found that Tar Heels in homogenous units deserted less often at the start of the war, but more often by the end. Basically, in 1861, groups of hometown buddies fought together. By 1865, they often deserted together. Social ties need to be linked to commitment to a task or a project. 

So the “band of brothers” idea doesn't mean much for desertion and battlefield endurance unless there's a common sense of doing something together. This "something" need not be ideological, or part of the master narrative of a conflict. Civil wars connect lots of little political aims up with big ones. It need not be the same thing for everyone in a unit (though organized factional competition definitely drives up desertion rates, as my research in Spain shows). It doesn't need to be very sophisticated—simply a sense of, do my unit-mates care whether the unit succeeds or not?

(Side note: lots of proponents of racial segregation and, later, of Don't Ask, Don't Tell in the US military have argued that social heterogeneity undermines unit cohesion. Does my argument mean I agree with them? No, actually. I think that if soldiers' signals of commitment are strong enough – not only voluntarism but also going through rigorous training and constant opportunities to show their unit-mates what they are made of – then social homogeneity doesn't matter. The aftermath of DADT repeal makes this pretty clear. In the American Civil War, with armies raised rapidly and in desperation in a divided country, clear signals of loyalty could not easily be sent, and social homogeneity could make more of a difference.)

There were other dimensions of desertion in the American Civil War. One I'm particularly interested in is hill country. Hill country soldiers were a lot more likely to desert. Katherine Giuffre's "First in Flight" doesn't just have a fantastic title; it also shows this pretty clearly in the case of North Carolina troops. Giuffre argues that this was because hill country soldiers were less connected with the lowland slave economy, so they had less skin in the game. That is plausible, but I think another reason might be simply that it was easier to hide in the hills. I make that argument in the case of Spain here.

There's lots more work to be done, and with so many proximate causes of leaving—a desperate letter from home; hating one's officer; horrible medical care; anger at the newly issued Emancipation Proclamation among white soldiers who did not want to fight for black people—the literature is immensely rich. Here are a few things to read. Ella Lonn's Desertion During the Civil War was the first book I know of on the subject (1928), and it's still packed with insights. James McPherson's For Cause and Comrades is terrific on ordinary soldiers' combat motivations. Mark Weitz's More Damning than Slaughter is a great history of desertion in the Confederacy. Though I think their main finding needs to be contextualized a bit, Dora Costa and Matthew Kahn's Heroes & Cowards is truly magnificent, based on a study of over 40,000 soldiers in 300 Union volunteer companies. There is so much in there: how deserters were ostracized when they went home, how unit buddies banded together in the horrors of POW camps. Peter Bearman's "Desertion as Localism" is a classic piece of insightful, rigorous, and fascinating sociology.

Monday 10 February 2014

Short films and short circuits

We saw the Oscar-nominated shorts yesterday, as we have done for the past few years. The best live-action short was clearly Xavier Legrand's "Avant que de tout perdre," an astonishing film that takes full advantage of the short timeslot. Not a moment is wasted. It builds up amazing tension around a mother taking her kids... somewhere. It is full of wonderful touches of plot and character development, a story that's at once straightforward and complex in its details. I hope it wins.

But I'm worried that Esteban Crespo's "Aquel no era yo" will, because it is tailor-made to make white people feel good about themselves.

It's a film about two doctors from Spain who get captured by rebel child soldiers in Africa. It may wind up being seen as gritty and "real", and certainly the kinds of abuse it depicts are all too common in conflict zones. But it also reinforces a bunch of tropes about saintly white people in conflict zones and the mad Africans they must survive. Details are after the jump; here I'll say what I can without spoilers.

Which "Africans"? The film never specifies. They speak in English and are children, and it seems as though it is set in the late 90s or early 2000s. Sierra Leone, Liberia, and Uganda seem the most likely candidates. But the film seems to be saying, by commission or omission: who really cares where it is? It's Africa, isn't it?

It really bothers me that Amnesty and Save the Children were involved in a film that cannot take its subjects seriously enough to say what country they're in.

Overall the film compares really unfavourably to "Asad," nominated last year, a film about (and starring) pirates and children in Somalia that told a story and took its subjects on their own terms. It even compares unfavourably to "Raju," from two years ago, about a German couple who go to Calcutta to adopt an orphan boy. Once they learn that he had been kidnapped from his family some years before, their different reactions do a very good job of depicting the degrees of complicity that well-meaning white people have in crimes in the developing world.

I mean, go ahead and tell the stories you want to tell. But I saw this in ShortsHD's collection of the Oscar-nominated live-action and animated shorts, intercut with directors saying how wonderful it is to be able to take risks in short film, because you don't have to answer to a million people. Given this, I would have loved to see a conflict movie that took real risks.

SPOILERS FOLLOW. Also, TRIGGER WARNING: killing and sexual violence.


Monday 3 February 2014

Playing one game when everyone else is playing a different one

Arthur Chu has won a few games of Jeopardy. This, alone, would not garner much attention, though it's a very good performance and nice take-home pay. Full disclosure: I was on the show once, and came third, so I ain't criticizing.

But Arthur has been winning by breaking some time-honoured Jeopardy practices in order to play smarter. Daily Doubles are the way to go: they let you put a dollar figure on how confident you are, vastly increasing the value you can put on a single clue, and they give no one else a chance to answer. Consistent with this, Arthur jumps around high-paying categories, looking for Daily Doubles. Then, once, after hunting around and finding a Daily Double, he bet the minimum ($5) because it was about sports and he didn't think he'd know the answer. Then why the hell did he look for it in the sports category? Answer: to prevent someone who knows something about sports from getting it; such is the value of the Daily Double.

And he's played to draw: in the lead heading into Final Jeopardy, he's bet $X = twice the second-place player's total minus his own, such that if they both get it right, the worst he can do is draw; he will win if the other player bets less than his or her maximum, or gets the answer wrong. Now, ordinarily, leaders bet $X + $1: same deal, but you win outright if you respond correctly. Presumably Arthur's logic is this: he would still get to keep all his money if he draws rather than wins outright; he doesn't think the second-place player is very good, and wouldn't mind seeing them continue into the next round, rather than a randomly chosen player; and he runs slightly less risk of not winning at all. William Spaniel explains the logic very well here. (I have one quibble with William's analysis: if the gap with the second-place player is small enough, i.e. they've played a good enough game, Arthur should conclude that the second-place player is likely better than the median replacement player, and should go for the win. But I digress.)
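
To make the arithmetic concrete, here is a minimal sketch of the two wagers in Python. The totals and the function names are mine, invented purely for illustration; nothing here comes from the show or from Spaniel's write-up.

    # A minimal sketch of the Final Jeopardy wager arithmetic described above.
    # The totals are hypothetical and the function names are my own.

    def wager_for_tie(leader_total: int, second_total: int) -> int:
        """Bet $X: if both players respond correctly and the second-place
        player bets everything, the leader finishes exactly tied."""
        return max(0, 2 * second_total - leader_total)

    def wager_for_outright_win(leader_total: int, second_total: int) -> int:
        """The conventional wager of $X + $1: a correct response guarantees
        an outright win."""
        return wager_for_tie(leader_total, second_total) + 1

    leader, second = 20000, 13000                  # hypothetical end-of-game totals
    print(wager_for_tie(leader, second))           # 6000: both correct means 26000 apiece, a draw
    print(wager_for_outright_win(leader, second))  # 6001: both correct means the leader wins by $1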

This is quite different from what you ordinarily see on Jeopardy. Plodders like me, who keep going down a category like mules rather than looking for Daily Doubles (I flubbed the one I got), let the Arthurs of the world run rampant. This deviation from the norm has bothered some viewers: it's harder to follow when someone is jumping around. As for Arthur's Final Jeopardy wagers, it just... feels wrong, by some norm of what a game should look like, not to go for the outright win. There is a sense that it's just not cricket, as they say. Here, Arthur is exploiting a rule that creates tension with the way a typical viewer thinks a game should go: there should be one Jeopardy champion.

There are a lot of games out there, and they have rules, but they also have norms that help define those games. These are easy to see in televised games, because there are leagues (game shows) with an interest in making it good TV, and teams (contestants) with an interest in winning. Hockey's rules worked one way in the 1980s, generating a golden age of scoring. But once the great Jacques Lemaire (among others) figured out a way to exploit them to win hockey games, those same rules meant that the game was played totally differently: with stifling defense through the neutral zone that dramatically cut down on scoring chances.

This doesn't happen all the time. It's hard to know how to win in complex games, and so you learn some shortcuts. Jeopardy fans can figure out, by watching enough, that it "makes sense" to bet $X + 1, and then do so if they get on the show, even though there's a better strategy out there. It becomes received wisdom that sometimes you need to steal bases to win baseball games, so that's what you do, even though some nerd is telling you that that's a bad bet. (Hell, sometimes it even works.) Jeopardy is a pretty straightforward game, apart from the matter of actually knowing the answers, but even then the nerds that go on don't really seem to play it well, usually... and I fully include myself in that critique.

There is also socialization that isn't even about winning. You learn one way to play a game, even if you don't adopt that way of playing as a shortcut to winning. Sean Avery worked out that you could, under the NHL's rules, be a jerkface and help your team score a goal. Nobody had ever thought to do that before.

When I went on Jeopardy, they told us a bit about the mechanics of the game. There are employees who press buttons to reveal the clues on the screens, listen to the responses so they can adapt when a response is phrased a little differently than expected or is otherwise questionable but permissible, and signal to Trebek what to do. The show employee who dealt with us explained that these guys' job is easier when people plod through a category, because they just focus on the next answer down the line. When there's a player like Arthur, they look at each other and warn: "Jumper!" This is, of course, hilarious. More than that, I don't know that it was an intentional effort to get us not to be "jumpers"... but in any case, it made it clear that "jumpers" are weird folks to the Jeopardy personnel. So norms get perpetuated, and new players learn the ropes in one way rather than another.


Leagues can change the rules to adapt to the norms. The NHL tried to deal with the Dead Puck Era by, among other things, eliminating the rule against passing the puck up two lines. (At about the same time they put other foolish rules in place, including one that makes actually winning games Pareto sub-optimal--the opposite of every football league in Europe, where winning gets you three points and a tie gets you one.) Basketball banned zone defense and put up a shot clock for the same reason. Immediately after Avery did his thing, the NHL banned... whatever it was he did. If Jeopardy wants to conserve the norm of one winner, it could rule, for example, that you keep only half of your total in the case of a tie.

It's all a reminder that the thing about games is this: there are really two games going on. There's what the rules say, and what we act like the game is. And Arthur Chu is winning because he doesn't care about the game the rest of us are playing.

Thursday 25 July 2013

Predictive Policing and Race

On facing pages in The Economist's print edition this week: a piece on predictive policing and a piece on Trayvon Martin. The juxtaposition raises questions that are worth exploring. Will predictive policing help keep future Trayvons, Oscar Grants, and Amadou Diallos alive? Or will it make matters worse?

Predictive policing purports to identify high-risk locations for crime within neighbourhoods, based, in part, on recent crimes there and nearby. It also promises to help identify likely perpetrators (see also this unfortunately gated New Yorker piece on predicting domestic violence in Massachusetts). I want to start with the direct effects on police interactions with the public, before turning to vigilantes like Zimmerman and how predictive policing might change regulation.

The first Economist piece outlines some problems with prediction:

Misuse and overuse of data can amplify biases. It matters, for example, whether software crunches reports of crimes or arrests; if the latter, police activity risks creating a vicious circle. And report-based systems may favour rich neighbourhoods which turn to the police more readily rather than poor ones where crime is rife. Crimes such as burglary and car theft are more consistently reported than drug dealing or gang-related violence.

However, the Economist is still overall optimistic about the effects of predictive policing on race:

But mathematical models might make policing more equitable by curbing prejudice. A suspicious individual’s presence in a “high-crime area” is among the criteria American police may use to determine whether a search is acceptable: a more rigorous definition of those locations will stop that justification being abused. Detailed analysis of a convict’s personal history may be a fairer reason to refuse parole than similarity to a stereotype.

In sum, prediction will allow the cops to do profiling better, and more fairly. I think this is overly optimistic. When a model identifies a high-risk location, the police show up and look around. But what are they looking for? Many appear to be looking for "suspicious-looking" black male teenagers with "furtive" movements. A mathematically derived predictive model won't stop them doing so, any more than a hunch or a prejudice about a violent neighbourhood would.

It might, indeed, make matters worse. We're not just bad at predicting. We are bad at understanding prediction. We don't think enough about margins for error, we distort extreme probabilities badly, and we have very weird patterns of risk acceptance and risk aversion, especially at extremes of probability. What's more, the ex ante likelihood of crime on any given night in any given location is pretty small. So even if Block A is more likely than Block B to experience a robbery, we can't say with much confidence that a robbery will occur on Block A. You could make money betting that robbery is more likely to occur on Block A than on Block B, but not by betting, at even odds, that a robbery was going to happen on Block A on a given night. But I don't know that we understand that distinction all that well: we may become inclined to believe that because Block A is "risky" in the sense of "riskier than Block B," it is also risky in the sense of "having an absolutely, not relatively, high risk of crime." That would be a non sequitur, but it's one I think people will commit. Under the spell of computers and their cachet, police may well become more inclined to shoot first. And what goes for a cop goes for a jury.
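
A toy numerical illustration of that relative-versus-absolute distinction, with probabilities I have simply made up:

    # Made-up nightly robbery probabilities, purely to illustrate the point.
    p_block_a = 0.002   # hypothetical chance of a robbery on Block A tonight
    p_block_b = 0.001   # hypothetical chance of a robbery on Block B tonight

    print(p_block_a / p_block_b)  # relative risk: 2.0, so Block A is "twice as risky" as Block B
    print(p_block_a)              # absolute risk: 0.002, so a robbery on Block A tonight is still very unlikely
    print(1 - p_block_a)          # 0.998: the overwhelmingly likely outcome is that nothing happens at all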

Ultimately, if a cop thinks a location is high-risk and sees a black kid looking "furtive," whatever the hell that is supposed to mean, I don't think it matters much whether a computer told them the location was high-risk or whether their hunch told them. Whether we have a dead kid or not comes down to the cop's ethics, training, professionalism, and prejudice. How a jury responds depends on their prejudices and those of the legal system. There are massive problems with each.

As for vigilantes: it is probably not encouraging to contemplate Zimmerman having access to predictive crime models. Even absent that scenario, I think I'd ideally want the predictive models that are being used to be able to predict when a vigilante is about to be a Zimmerman. I have no idea how feasible this is. It may not even matter: Zimmerman was told by a police dispatcher that he shouldn't chase Trayvon, and he did anyway. A cop on the scene may have helped, or, going by all of the above, may have made everything worse. But it is, in any case, well to remember the Economist's warning about what kinds of crime get plugged in to a predictive model. Is it the break-ins alone--i.e. what supposedly prompted Zimmerman to be driving around that night? Or is it the act of a Zimmerman taking the law into his own hands? And if that isn't defined as a crime, what happens?

Thursday 4 July 2013

This was a coup. But then, so was 2011.

What just happened in Egypt was a coup, by prevailing scholarly definitions. But the thing is, by the definitions offered, so was the endgame in 2011. Here is Zoltan Barany in Journal of Democracy:

The generals concluded that Mubarak’s mix of concessions (agreeing not to seek reelection or have his son succeed him) and repression (the February 2 attacks) had failed, and that rising violence and disorder would only hurt the military’s legitimacy and influence. Thus, on February 10, the Supreme Council of the Armed Forces (SCAF) assumed control of the country and, the next day, persuaded a reluctant Mubarak to resign and head for internal exile.
This sounds rather similar: the armed forces replacing a leader unconstitutionally (say what you will about the constitution in question) after several days of hundreds of thousands of people protesting. I want to throw out a new word for the conjunction of uprising and coup: the "couprising." Let me explain.

Even at the time, some scholars and commentators treated Mubarak's ouster primarily as a revolution but also as a coup. Posts at the Monkey Cage referred to comparisons to 1989, to revolution, to authoritarian durability, to collective action, and to media frames; Erik Voeten also referred to a paper by Marinov and Goemans on the consequences of coups for democracy. He did so again today. The armed forces are, as was recognized at the time and as corresponds well with what we know about nonviolent uprisings, pivotal to revolutionary success: there isn't necessarily too much contradiction.

But looking back now, it's easy to forget that it was a coup. I think that when people talk now about what happened in 2011, they'll remember that the military got involved and was ultimately the agent that ousted Mubarak, but they won't mention that the intervention constituted a coup at the time. Here, for instance, is the New York Times editorial today (emphasis added):

Despite his failings, and there were plenty, President Mohamed Morsi was Egypt’s first democratically elected leader, and his overthrow by the military on Wednesday was unquestionably a coup. It would be tragic if Egyptians allowed the 2011 revolution that overthrew the dictator Hosni Mubarak to end with this rejection of democracy....
The military was the force behind Mr. Mubarak, then became the protesters’ protector when Mr. Mubarak was overthrown. Subsequently, the generals became the interim government, only to incur the peoples’ wrath when they proved inept at governing yet clung to power even after Mr. Morsi was elected.  
So, in the first paragraph, the current coup is juxtaposed against the 2011 revolution. Then, in the second, the passive voice allows the editorialist to avoid the question of who it was who, in the end, overthrew Mubarak. The military "subsequently...became the interim government", but this is a little misleading: "subsequently" suggests a time lag between Mubarak's ouster and the military becoming the government--a lag that did not take place.

There are several possible reasons why the two events are viewed rather differently from our vantage point--why 2013 is a coup, but we don't remember as easily that 2011 was too. I want to consider two related ones.

The first is that "revolution" was an available frame in 2011, but not so much in 2013. And this is, to a large extent, fine: just a sensible way of dealing with different events. Treating the events of 2011 as a revolution captured the major reason why they mattered: the ousting of a dictator of thirty years and the end of a seemingly invulnerable regime in favour of a major shakeup in political rules. To zero in on the coup instead would be, to some extent, to miss the forest for the trees. In contrast, on July 2nd, 2013, Egypt was probably a hybrid, "competitive-authoritarian" regime trapped in that difficult "grey zone"; after July 3rd, 2013, it is probably just the same, with someone different at the helm.

So, it is entirely possible that the "revolutionary" frame has pushed the "coup" frame out in the memory of many thinking back to 2011. If--as seems rather unlikely--the coup this year manages to install a stable democracy, we might come to treat it similarly.

A second reason is correspondence to normative positions. Morsi was elected, Mubarak was not, and so Morsi's ouster makes it feel more appropriate to use a term with more pejorative connotations, like coup, over one with more positive connotations like revolution. But this corresponds well, also, to the resistance among many Egyptian liberals to calling what happened a coup. Coup seems to lend Morsi an air of legitimacy (to use Morsi's word) that he should not have had, given his "Calvinball" approach to government--making up rules as he went along to further his own control of the situation.

On this analysis, whether you call what happened a coup has a lot to do with whether you support it or not. And there are, to be sure, good reasons why liberals in Egypt and elsewhere would be against it: like it or not, Morsi was elected, and the massive wave of popular protest in the last few weeks could potentially have been channeled into votes, even in an election without a level playing field. Calling it a coup draws attention to the strong possibility that the Egyptian "deep state" military establishment is reasserting its control.

But the word coup, if that's all the events of the last few weeks are reduced to, also draws attention away from the massive protests that Egypt has seen, again provoking the resentment of some Egyptian opposition members. A friend of mine, a liberal Egyptian activist, posted to Facebook one of the ubiquitous overhead photos of Tahrir Square, with the caption "Egypt's 'Coup.'" And with plenty of justification. This terrific piece by Trinity College Dublin PhD student Catriona Dowd, based on ACLED data, suggests graphically that protests have, if anything, lasted longer and spread more broadly this year than two years ago.

Dowd asks whether Egypt is experiencing a second revolution. Karl Sharro, getting right at the ambiguity of the officers' motives and intentions, poses a similar possibility in a more cynical vein.

I think it is very important to note that what happened in Egypt, in 2011 as in 2013, involved both a popular uprising and a coup. One also involved a clear change in the rules of the political game; the other... we don't know yet, but there's a good chance not. We have a tendency to apply one label to complex processes. So 2011 is easy to read as a revolution--covering the public uprisings, the change in political rules, and, though often we don't think about it, the military intervention. 2013: not a revolution. And now, to fight over one term for 2013--coup or uprising?--would be to miss that it was both.

Perhaps we need a single word for the conjunction of an uprising and a coup, capturing their ambiguity. Off the top of my head, how about "couprising"? Thus Egypt had a couprising in 2011, and in 2013; the first was also part of a revolution, the second, we don't know. Or it may be that settling on a single word is itself a problem.

Saturday 24 November 2012

Returning to "Coup-Proof" with some stuff on coup-proofing

I have finally been poked out of blog-lethargy (blothargy?) by a very interesting discussion about the integrity of some African regimes in the face of rebellion. James Fearon was provoked by M23's seizure of Goma to muse:
Being the president in African countries (and many others besides) can be an incredibly lucrative deal. Why don’t these rulers, in their own self-interest, take some of that money and use it to build crack units, presidential guards, or strong and loyal army divisions that would protect their hold on power against two dozen putschists, or a hundred or a couple thousand rebels armed with rifles and maybe some mortars?
Fearon's intuition is then the same as mine--that the best explanation is the tradeoff between coup risk and rebellion, as developed in the work of, inter alia, Philip Roessler:
Keeping the military weak  may lower their coup risk somewhat, but effectively trades coup risk off against higher risks of rural rebellion, insurgency, and foreign depredations such as we are seeing in Eastern Congo. 
But then he notes that Congo is a pretty extreme case in Africa. There are lots of countries that don't seem to have these recurrent problems. It is unclear what, structurally, differentiates these two groups of cases, according to Fearon.

One bit of brush clearing about what we're talking about. Laura Seay, in comments on Fearon, points out that Congo's President Joseph Kabila has in fact done a bit of what Fearon suggests: 

like leaders in most African states, Kabila has a well-trained, equipped, and paid presidential guard that has saved him from at least one coup/assassination attempt as well as a military battle in Kinshasa in 2007.

Goma is pretty far from Kinshasa, as Seay points out, so for the moment Kabila is fine. But clearly he would rather that Rwandan-backed (that seems clear enough) rebels not control a major town in the east. So there is indeed a bit of a puzzle. 

I'm going to try to think through the strategic dimensions of this a little bit, as a first cut. If we want to understand why some countries face these tradeoffs and others don't, I think our best bet lies in predictors of coup risk, and in generating comparative statics through a better characterization of the strategic situation facing leaders.

I think Fearon captures the demand side of things pretty well: presidents do have a pretty compelling interest in security services that are (a) effective and (b) loyal, if they can get them. Now, the threat of a successful coup is worse for the leader than the threat of a successful civil or international war, because at least in the latter case, the president probably has a better chance to flee before he/she is captured. Hence the president has to insure against coups first, rebellions/international wars second. In practice, there are coup-proofing techniques, such as divide & rule, running multiple internal security agencies that all monitor each other, or relying on loyalists from a communal in-group, that can help reduce the risk of a successful coup, but weaken the army in the face of wars (international and internal). So the president tends to sacrifice effectiveness for the sake of not having a coup.
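
Here is a toy sketch of that tradeoff. The probabilities and functional forms are entirely invented; the point is only to show the shape of the leader's problem, not to model any real case.

    # A toy sketch of the coup-risk / rebellion-risk tradeoff described above.
    # The probabilities and functional forms are invented for illustration only.

    def survival_prob(coup_proofing: float) -> float:
        """Chance the leader survives both threats, given how much
        coup-proofing (scaled 0 to 1) the leader does: more proofing lowers
        the risk of a coup but raises the risk of losing a rebellion or war."""
        p_coup = 0.4 * (1 - coup_proofing)       # coup risk falls as proofing rises
        p_lose_war = 0.1 + 0.5 * coup_proofing   # rebellion/war risk rises with proofing
        return (1 - p_coup) * (1 - p_lose_war)

    # The leader's preferred level of coup-proofing in this made-up setup:
    best = max(range(11), key=lambda x: survival_prob(x / 10))
    print(best / 10, round(survival_prob(best / 10), 3))

Even in this made-up setup, the leader's best choice involves some coup-proofing despite the weaker army it buys; change the numbers and the answer shifts, which is exactly where the comparative statics would come from.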

I would point out an additional problem attendant on rebellions: they disrupt the calculations of coup-proofing. If, from sources that are in the first instance outside the military, the likelihood that the president is going to be ousted suddenly and dramatically increases, soldiers have much more of an interest in defecting. This was a problem Mobutu Sese Seko faced in 1996-97, and Gaddafi in 2011. The problem in rebellion is then not just effectiveness per se but, quite often, loyalty as well, and what a president does to keep loyalty against a coup plot might not be enough to keep loyalty against a rebellion.

In any case, the president has a clear preference for a strong & loyal army, and has lots of money, but money doesn't seem to translate into that strong & loyal army. We observe in reality that security services are often not effective, and if they are loyal, it's only in the sense that they are not actually launching a coup right this second, not that they are willing to fight particularly hard or to refrain from deserting in the face of a rebellion (see Jeffrey Gettleman's masterpiece of sarcasm: In New Tack, Congo's Army Starts to Fight). There are then three non-mutually-exclusive possible short-run explanations, as I see them:

(1) The problem is on the supply side, not the demand side. That is, the population of individuals willing to supply their military services loyally is too small to meet the demand.

(2) Even if those individuals existed, identifying them is going to be pretty hard. It would probably require credible signals of some kind, but it is unclear what kind of credible signals can be offered.

(3) Direct cash payment is not what is needed to secure loyalty. If I am a colonel, I can hope for a payoff from the next president, not just from this one. In any case, promises of cash can attract mercenary careerists over committed professionals. If it is hard to tell one from the other, it sets up something like Jeremy Weinstein's information problem in rebel recruitment, applied to governments. So the money that African presidents have just isn't all that useful in building effective, loyal armies.

So these are some areas where I think we can clarify just what the problems are. I happen to think the identification problem is a seriously tricky and important one. In my paper "Loyalty Strategies and Military Defection in Rebellion," I investigate some of its consequences. I see the reliance on a core group, such as an ethnic minority, as a way of endogenously constructing a means of distinguishing loyalists from the suspect. A president believes that members of group X are particularly loyal; so he promotes members of group X; so members of group X cannot easily defect in the face of a rebellion and hope for good treatment by the other side, being too tightly associated with the old regime. It is a kind of self-fulfilling prophecy. This does, of course, alienate non-Xs, who will be more likely to defect, and, as Roessler argues in a terrific paper, it can provoke rebellions in the first place. My paper shows how this played out in Syria in the Muslim Brotherhood uprising that culminated in the Hama Massacre of 1982, in a manner similar to the Jordanian monarchy's reliance on certain East Bank groups in the Jordanian Civil War of 1970 (and to Saddam Hussein's Iraq), and in contrast to individualized reward-and-punishment approaches like the Shah's in Iran. I think some version of it has been playing out, horribly, in Syria again since 2011.

So, these are some strategic considerations. I think picking them apart will provide further insights into the short run.

But what makes one country more likely than another to have these tradeoffs in the first place? That is, what about Fearon's question about structural preconditions? In general, I suspect that if there are, indeed, structural predictors of the kind of coup-rebellion tradeoff that's at issue here--the specific problem of having a military too weak to deal with these kinds of challenges--we'll find them in structural predictors of coups, not structural predictors of civil wars, if a coup is indeed the more important threat of the two.

Otherwise, I'm not really sure. Perhaps the mechanisms I suggest above can point us in the right direction on structural conditions. Perhaps the ethnic-identity piece depends on the president having close ties to a communal group of a particular size (not too big, or else identity isn't very predictive of loyalties; not too small, or else it doesn't provide enough manpower), but even then, there are multiple ways to use group identities: Saddam relied on Sunnis specifically from Tikrit, for example. We could also look for structural conditions in which there is a large enough supply of individuals willing to fight loyally to overcome the supply problem. If we game this out, we can come up with some comparative statics. But this all remains a neat puzzle without an obvious answer.