This post discusses spoilers for:
– Christmas Eve, 1914 by Charles Olivier
– Black Mirror S3E5 “Men Against Fire”
Our Shared Humanity
Last week, I noticed a deal for Audible members offering a free download of the short story Christmas Eve, 1914. I decided to download it and check it out. It takes place in the trenches of World War I, and after listening to it I felt deeply inspired.
In the story, the British officers brace themselves as they receive orders from their commander to set up a machine gun to fend off a German attack on Christmas Day. Tensions run high as they take their positions, prepared to kill or be killed by German “men — or what used to be men — running at you and firing.” When four German soldiers enter No Man’s Land and slowly start heading toward their camp, the captain buckles under the pressure and hesitates for a few seconds before giving the order to fire.
Then suddenly, one of the soldiers starts singing Silent Night, and the Germans (who were holding little Christmas trees) join in. The German officers had hoped to negotiate a one-hour ceasefire for the opportunity to collect and bury the hundreds of dead bodies that had been lying on the battleground for weeks, covered in mud and blood. After the British captain breaks down in tears over the men (and boys) he’d sent to die, the German captain suggests: “Gentlemen, maybe war takes a holiday today.” The British and German soldiers begin to talk and to see one another as young, scared boys just like themselves. They share chocolate and cigarettes, and after realizing one of the German cooks used to live in England before the war, they start up a friendly game of soccer. The story ends with the British captain’s reflection on the empathy and bravery he learned that day. The story really did make me tear up. The British were seconds from gunning down those German soldiers out of fear for their own lives, but because one of them saw the nakedness and vulnerability of what the Germans were doing, their better selves were able to win out for the day.
But there’s an unspoken sadness in this story. Although it was a truly beautiful moment to see the war pause for the day, the reality is that these are soldiers fighting in a war they can’t end themselves. The next day, they will have to return to killing each other, and no amount of empathy or understanding can change that. And the generals would likely have been furious to hear that their men were playing soccer with the enemy on the battlefield. To the generals, the war is a means to an end; they need to defeat the bad guys in order to preserve their own way of life. In war, empathy is counterproductive to the mission.
The Potential of Technology: Dehumanization on Steroids
Empathy makes war harder. In 1947, US Army Brigadier General S.L.A. Marshall argued in “Men Against Fire: The Problem of Battle Command” that the vast majority of soldiers on the battlefield in World War 2 never fired their weapon to kill. This wasn’t because they ran or hid – often they were willing to risk greater danger to rescue fellow soldiers or run messages – but because they simply couldn’t bring themselves to kill another human being. In response to suggestions by Marshall and others, the US Army instituted training changes such as de-sensitization and operant conditioning which brought the firing rate up to 55% in Korea and 90% in Vietnam. Indeed, the less we empathize with “the bad guy”, the easier it is to kill them.
There was an episode of Black Mirror by the same name (“Men Against Fire”) which addressed this explicitly. In the episode, set in the future, a squad of soldiers is tasked with killing “Roaches,” creatures riddled with genetic mutations. The soldiers use army-issued implant technology that lets them communicate hands-free, aim their weapons, and view digital renderings of building layouts for tactical planning. When the protagonist’s implant starts glitching, he realizes that the “Roaches” he’s been killing look exactly like “real” people. Eventually he learns the truth: the Army designed the technology specifically to make the infected population look sub-human to anyone with the implant by modifying their apparent physical appearance. Doing so made it easier to kill “Roaches” and reduced the PTSD associated with murdering another human.
Of course, this show is just an allegory. To my knowledge, the US Army isn’t literally implanting tech into soldiers’ brains that edits their perceptions. But are there other, less obvious ways that technology is currently dehumanizing us to each other?
“We 👏 Shouldn’t 👏 Be 👏 Nice 👏 To 👏 Murderous 👏 German 👏 Soldiers 👏”
Imagine if Twitter existed back in 1914. If a tweet went out “leaking” that the British and German soldiers were playing soccer with each other in No Man’s Land on Christmas, I’m pretty sure there would be immense public outrage. I suspect many people would be upset about how disrespectful it was for the soldiers to be getting chummy with the murderous enemy that had killed tens of thousands of British soldiers. Others would probably take it as evidence that either the war is a hoax or, at the very least, that the soldiers are all in on the swamp of corruption and aren’t doing their jobs.
I can imagine that in 140 characters or less, it’s really easy to get people mad about the soldiers dilly-dallying and not taking their duty seriously, but much harder to contextualize how rare and beautiful the moment was amid the gore of months or years of fighting on the front line. But that’s the thing about Twitter: very few people have the context, but nearly everyone has an opinion. The problem is that Twitter isn’t designed to contextualize issues. It is designed to maximize engagement, which usually means inciting outrage. Outrage is a very effective way to keep users engaged.
Retweeting also allows for what social-media researchers such as danah boyd and Alice Marwick refer to as “context collapse”: removing tweets from not only their temporal and geographic context, but also their original social and cultural milieu, which is very different from most public spaces. … While readers may literally know nothing about the poster or the context except for what is said in that one tweet, they can still just hit “reply” and their response will likely be seen by the poster.
… This amplification and context collapse, coupled with the ease of replying and of creating bots, makes targeted harassment trivially easy, particularly in an environment where users can both mostly live in their own ideological bubble by following people who share their views, however abhorrent, and who can easily forget that there is a real person behind the 140 characters of text.
You could imagine a world where Twitter allowed people to try out ideas they are working through in order to get feedback and then improve. And you can imagine that when someone sees a tweet they don’t understand, there’s some kind of “explainer” mechanism to help the user understand what is being conveyed. But that is not the world we live in.
Our current version of Twitter is full of people harassing each other and throwing snark at everyone they think is stupid. And everyone immediately attributes bad faith to things they don’t understand. In a lot of those cases they’re even right, because everyone on Twitter is virtue signalling anyway. But in the cases where there really was a good-faith disagreement, the snark, harassment, “public fight” mentality, and moral outrage are usually enough to kill any attempt to understand where the other person is coming from.
Are You Saying I Shouldn’t Condemn The Injustices In The World?
Of course, moral outrage is not in and of itself bad. For instance, if your reason for throwing hundreds of children in cages is that you want to send a message to potential immigrants that they are not welcome here, the problem isn’t context or “optics”; the problem is that you’ve dehumanized immigrants so much that throwing them in cages is “worth it” to accomplish your goals. That’s racism. Those kinds of things need a strong and sustained pushback from outraged people.
Knowing which issues require action isn’t always obvious, and context is what helps inform whether we are outraged over the “right” thing. Typically, we’re forced to rely on algorithmically-generated news feeds and timelines as a first step, but we should look into issues further. First and foremost, we should listen to the people who are being impacted by the problem or action.
In addition, in my own experience I’ve found it helpful to give the other side the benefit of the doubt when I first see something that seems crazy. Clearly someone supports this thing, but why and how? It’s usually pretty easy to tell if you spend 5-10 minutes investigating why someone could support it. Sometimes it really is bad faith, but if you always assume that, you’ll be dehumanizing people with whom you might otherwise be able to find common ground, and maybe even persuade. No one thinks of themselves as the villain in their own story, so it can be useful to understand where someone is coming from. Even if you completely disagree, it can help you understand your own values more clearly.
1. Sept 30, 2019: Upon further reflection, I decided to remove the discussion of FOSTA/SESTA because I don’t know enough about the subject to speak authoritatively. The podcast Reply All did an episode about FOSTA/SESTA which argues that the outrage may have been misguided. I’ll rely on their reporting for that assessment.