The Drone Debate – What’s Wrong with Objections to “the Dignity Objection”

Some of the ethical chat surrounding lethal autonomous weapon systems is morally bankrupt. I’m sorry, but it is.

Specifically, I’m referring to the arguments of those who would have us remove “human dignity” from the debate.

“Dignity”, of course – bound up as it is in the preamble of the Universal Declaration of Human Rights – refers to the inalienable worth of a person.

What’s more, as a concept that has informed the wording of the Geneva Conventions in concrete ways – say, in prohibitions against torture and corpse desecration – the codification of “dignity” into the laws of war has served as an excellent barrier against the worst of war’s excesses.

Even though war, as organized murder, is probably the best illustration of institutionalized indignity-dealing one can think of, it’s not hard to see why deifying the concept is a good stop-gap against the worst war can deliver.

Without an acknowledgment of the enemy’s dignity – the intrinsic worth derived from his humanity – the enemy is just the enemy, his corpse is just his corpse and his children just his progeny.

“May as well gas him with sarin, piss on his corpse, and neuter his children – just for good measure,” a soldier lacking an appreciation for basic human dignity might conclude.

As applied to the drone debate, the “dignity objection” is premised on the assumption that only a human being is capable of seeing the intrinsic worth of another human being. A drone, meanwhile, sees only a “target” – ones and zeros.

Indeed, the prospect that lethal autonomous weapons could one day be deployed on a future battlefield to make life-and-death decisions without a human in the loop is worrying almost solely because the machine lacks the ability to compute dignity.
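
To see what “computing dignity” would even mean, consider a purely illustrative sketch – hypothetical names throughout, no real targeting system implied – of what a machine “sees” when it looks at a person:

```python
# Purely illustrative -- hypothetical names, no real system implied.
# What a targeting pipeline "sees": a person reduced to a feature record.
from dataclasses import dataclass

@dataclass
class Track:
    track_id: int
    lat: float
    lon: float
    armed: bool
    hostile_act_observed: bool

def is_engageable(track: Track) -> bool:
    # The machine's entire "moral universe": a boolean over sensor features.
    # There is no field for intrinsic worth, so no rule can ever consult it.
    return track.armed or track.hostile_act_observed

print(is_engageable(Track(42, 31.5, 64.3, armed=False, hostile_act_observed=True)))
# -> True. "Dignity" appears nowhere in the feature space.
```

Everything downstream of that data structure can only reason over what the data structure contains – and dignity isn’t in it.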

And yet, here we are, on the cusp of a paradigm shift in war, and some would have dignity stripped away from the regulations – the government of China most notably.

As Pop, a Swiss government diplomat, asks:

“… What is it about AWS [autonomous weapon systems] that renders them particularly reprehensible from the point of view of human dignity? I fail to see what the relevant argument could look like and have also not found any satisfactory explanation in the literature.”

Or take the view of Deane-Peter Baker – who sits on the International Panel on the Regulation of Autonomous Weapons.

In Baker’s view, “human dignity” is a malleable, new-ish concept – only as old as the Universal Declaration of Human Rights. Therefore, he argues, it is “awkward” and we shouldn’t really worry about it when crafting “rugged and realistic rules by which self-interested states might actually abide”.

“Exquisite ideals are anomalous” to the ethics of war, Baker proposes. As such, “they inevitably end up in a position of pacifism”. Though I suppose this is Baker’s way of suggesting we omit “exquisite ideals” from our efforts to regulate war, it’s not entirely clear why exquisite ideals “inevitably” end in pacifism (I, for one, know plenty of idealistic warfighters). But these views are what they are.

On the whole, then, the moral gymnastics flowing like liquid mercury through the discourse are breathtaking. All of these philosophers are capable, well-mannered people, I assume. And yet they gyrate between consequentialist and deontological approaches – dazzling each other with jargon and the names of various Enlightenment luminaries, the arcana of the academy – and at the end of it all we are left, finally, with something like: “we don’t really need to think about human dignity when regulating lethal autonomous weapons”.

Ummm wot? The absurdity of it is astounding.

More specifically, it immediately summons Hannah Arendt’s Eichmann in Jerusalem***. The banality of evil, and so on. How could it not, when a basic value like “dignity” – the inherent worth of a person – is under assault?


Eichmann, of course, was the Nazi bureaucrat tried and hanged for his role in the transportation of Jews to the concentration camps in the East. For the most part, however, Eichmann was removed, both physically and emotionally, from the killing taking place – not unlike the remoteness of most of us from what is taking place in Yemen, Pakistan, Afghanistan et al.

Nor did Eichmann have an overriding hatred of the Jews – his role was simply that of facilitator and logistician, a lifelong mid-ranker for whom, as Arendt puts it:

“… the most potent factor in the soothing of his own conscience was the simple fact that he could see no one, no one at all, who actually was against the Final Solution”.

Eichmann’s mediocrity aside, the most interesting part of all this is the awful philosophical method by which “the group” (and therefore Eichmann) arrived at the Final Solution.

As Horkheimer and Adorno have pointed out, the Nazis rationalized increasing levels of wrong-doing with the progressive logic of a syllogism. First came the confiscation of property. Then came the evacuation. Then came the concentration.

For the average Nazi, once human dignity was taken out of the equation and the basic worth of a person was removed from the strictures of how to act, the final step – extermination – logically followed.

Thus, the Final Solution.

This is not to say, of course, that the philosophical discourse surrounding autonomous drones is identical to the method followed by the Nazis. Even if the theme of “progressive levels of bad ideas” is the same, the actual content is different.

Nor is this to say that inhuman rationalization of the unthinkable always leads to evil. An indifference to “human dignity” does not “inevitably” lead to genocide, in the same way that an interest in “exquisite ideals” does not “inevitably” lead to pacifism. The evolution from bad logic to mass killing is not an orthogenesis with an inevitable end-point. As Charles Darwin showed us, from a common ancestor there are infinite possible outcomes.

Secondly, before deciding whether we should discard “human dignity” from the debate, it’s worth stewing on Eichmann’s claim that he lived his life according to Kant’s categorical imperative. Eichmann’s “approximately correct” description of the imperative was as follows:

“… the principle of my will must always be such that it can become the principle of general laws”.

Which is to say that Eichmann believed that Man must act in accordance with what makes the most sense as a “general law” for acting.

For Arendt, Eichmann’s absurd suggestion that participation in genocide would make for a good general law was “outrageous, on the face of it, and also incomprehensible”. Still, as she worked her way through Eichmann’s stupidity she realized there was something resembling a “household” logic to it.

For Eichmann, the most sensible “general law” was the Führer’s law, so Eichmann’s decision-making was, in fact, rational – at least insofar as it followed from an initial (mis)interpretation of the word “general”.

But returning to drones, let’s work the categorical imperative into a thought experiment and apply it to our own actions – the act of actually having this debate about “dignity” and drones. Meta, I know.

If we are bound to act as if each action makes for a good “general law”, then imagine for a moment that we are not only the ones having the debate about drones but also the ones on the receiving end of its consequences.

Imagine, for instance, that rather than writing this blog and reading this blog and producing criteria for the International Panel on the Regulation of Autonomous Weapons, there was a reasonable chance that we could be on the receiving end of one of these lethal autonomous weapons – a drone whose existence is, in part, a consequence of the laws this debate will produce.

Imagine, say, that we are Yemeni villagers in Abyan Governorate listening to the BBC report on what kinds of weapons might be headed our way. How would we feel if, in the process of regulating autonomous lethal killing objects, the relevant panel had decided that the “dignity objection” was irrelevant?

“Why,” the panel had asked, “is it important that a human pilot knows that another human should be ‘valued and respected for their own sake’ but a drone does not?”

“Killing itself is an indignity,” said the panel. “So we may as well discard human dignity altogether. Dignity is just too ‘awkward’ to be worth paying much attention to.”

As a Yemeni villager soon to receive the thermobaric consequences of this debate, we’d probably feel like our dignity had already been taken from us.

But I digress. Philosophical debates and momentary empathy with the Other aside, let’s think about some actual case studies – case studies which show why “dignity” is important to the debate about drones, especially when it comes to the unique human capacity to exercise that intangible thing called “discretion”. The International Committee of the Red Cross, for one, has offered that many of the ethical concerns about dignity “are about process as well as results”. So let’s have a look at this process.

Case Study #1 – Counterinsurgency Operations

Imagine, for a moment, that a five-strong reconnaissance patrol is posted in a mountain lay-up point overlooking a village in, say, Afghanistan’s Uruzgan province.

In Scenario #1, the patrol is manned entirely by humans – fully bombed up with a double load and ready to break contact in the event that the position is compromised.

Presently, a juvenile is seen ascending the hill beneath the position. Clearly moving with purpose, the juvenile walks over to a pile of boulders, reaches between the rocky gaps and retrieves a small bundle. Inside the bundle are a military-style bandolier, a high-powered spotting scope and a walkie-talkie. Although not armed, the child is quite clearly a “spotter” for the enemy. Therefore, by most interpretations of international humanitarian law, he is “directly participating in hostilities” and can legally be shot.

Then, something unfortunate happens. The boy spots the patrol. The patrol now has two options. Option #1 – they can legally kill the boy. Option #2 – they can peel back with a “tunnel of love” to a previously designated rally point.

For a patrol manned entirely by human beings this is something of a moral dilemma. Killing the boy could easily be justified before a court of law. Still, some in the patrol might object to killing the boy on the basis of his age, or because the boy’s reasons for directly participating in hostilities may be unknown. The boy could be the victim of coercion. And the boy is a human being with inherent dignity – taking away his life is something that should be stewed on before pulling the trigger. What’s more, killing an unarmed juvenile could lead to moral injury for the shooter – undermining the tactical health and well-being of the patrol for future operations.

There may also be another tactical reason not to shoot the boy: the muzzle flash could compromise the patrol’s position. Thus, regardless of the option chosen, the patrol might be doomed to “a compromise” anyway. In any case, legally the decision is the patrol’s, but it is up to the humans on the ground to decide what is the rational and/or humane thing to do.

In Scenario #2, the reconnaissance patrol is manned entirely by ground combat drones. As soon as the walkie-talkie and spotting scope are seen, the boy is marked for death. There is no dignity objection here for lethal autonomous weapons. No discretion awarded either.

Lights out, see you later, someone down in the valley below lost their son today.
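
For concreteness, here is a minimal sketch of the structural difference between the two scenarios. It is a caricature, not any real rules-of-engagement engine, and every name in it is hypothetical – but it shows that the human decision function has inputs and branches the autonomous one simply lacks:

```python
# A caricature, not a real ROE engine -- hypothetical names throughout.

def human_patrol_decision(dph: bool, juvenile: bool,
                          possibly_coerced: bool,
                          risk_of_compromise: bool) -> str:
    """Scenario #1: legality is only the floor; discretion sits above it."""
    if not dph:
        return "hold"
    if juvenile or possibly_coerced or risk_of_compromise:
        # A lawful kill can still be declined -- peel back instead.
        return "break contact to the rally point"
    return "patrol commander's judgment call"

def autonomous_patrol_decision(dph: bool) -> str:
    """Scenario #2: the legal threshold is the only threshold."""
    return "engage" if dph else "hold"

print(human_patrol_decision(dph=True, juvenile=True,
                            possibly_coerced=True, risk_of_compromise=True))
print(autonomous_patrol_decision(dph=True))  # -> "engage", every time
```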

Case Study #2 – Violent Protest

Changing scenes now, let’s move to a civil disturbance scenario – say, a violent protest at the Gaza–Israel border fence.

In Scenario #1, after receiving information that a specific telephone number is linked to a recently stitched suicide vest, a team of electronic warfare (EW) operators triangulates the position of the bomber among the crowd.

The EW operators pass the information on to a sniper in an overwatch position. The sniper, in turn, locates the bomber through his telescopic sight and sees that the bomber is a woman. The woman is carrying a radio and a backpack, and she looks terrified. The woman is an imminent threat. But then… she’s a woman with inherent human worth. What does she know about the local terror networks? Perhaps she might defect. Is there some way the sniper could spare her in order to glean information from her?

The sniper has two options here. Option #1: he can legally shoot the woman – she is a suicide bomber after all. Or, Option #2: he can request that the EW team jam the signal, which, they assure him, is technically possible. Then, once she reaches the border fence, authorities can pick her up and learn something from her.

By contrast, in Scenario #2, a future variant of today’s IAI Harop – a so-called “loitering munition” – has been loitering in the skies above the protest. As soon as the suspect telephone number is input into the drone’s system, it geo-locates the number and flies into the woman, self-destructing with a high-explosive warhead.

An avoidable death, probably – but with the drone set free to do whatever it will do, the call to end that life was only the drone’s to make.
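
Sketched as code, the difference is again structural. The stubs below stand in for the capabilities described in the scenario – none of this corresponds to a real API – but note that Scenario #1 contains a branch that Scenario #2’s kill-chain does not have:

```python
# Illustrative stubs for the capabilities in the scenario; no real API implied.

def geolocate(phone_number: str) -> tuple[float, float]:
    """Stub for the EW triangulation fix."""
    return (31.4, 34.4)

def worth_more_alive(fix: tuple[float, float]) -> bool:
    """Stub for the human question the machine never asks:
    intelligence value, possible defection, inherent worth."""
    return True

def sniper_with_ew_team(phone_number: str) -> str:
    fix = geolocate(phone_number)
    if worth_more_alive(fix):
        return "jam the detonator signal; detain at the fence"  # Option #2
    return "engage"                                             # Option #1

def loitering_munition(phone_number: str) -> str:
    fix = geolocate(phone_number)
    return f"dive onto {fix} and detonate"  # the only branch that exists

print(sniper_with_ew_team("example-number"))
print(loitering_munition("example-number"))
```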

Case Study #3 – Conventional War

Now, finally, let’s move away from so-called “low-intensity conflict” and look at a hypothetical large-scale conventional war between two states – let’s say between a Western country and Donovia.

In Scenario #1, a helicopter gunship has been dispatched to destroy an enemy command and control position in support of an advancing infantry company. Taking out the central nervous system will leave the enemy nodes situationally blind. After the Hellfire is fired and the command post destroyed, the human pilot sees someone crawling away from the position. It is unclear who the individual is. He might, for example, be an enemy intelligence officer or even the commanding officer – a possible trove of information.

It is also unclear if the person is still armed. He may, for example, still have his sidearm on him.

Per Protocol I of the Geneva Conventions, the person would be deemed hors de combat if (i) he’s detained; (ii) he’s surrendering; or (iii) he’s incapacitated by his injury.

Because of the pilot’s physical remoteness from the gruesome scene, the level of incapacitation of the wounded man is unclear. It is also unclear whether the wounded man has surrendered. To whom would the man have surrendered? There is no one on-scene (another ethical problem posed by remote warfare).

Anyway, with the available information, the wounded man who might still be armed is probably not quite “out of combat” per se – killing him could be justified before a court of law. The pilot then has two options. Option #1 – the pilot could legally kill the man. Option #2 – the pilot could spare him, with the full knowledge that the man is a human being with inherent dignity and that the infantry company down the road will probably be able to pick him up for tactical questioning.

Discretion. The main job was to wipe out the CP anyway.

In Scenario #2, there is no helicopter but a lethal autonomous weapon. The dignity of a human doesn’t get a say here so the drone deems the wounded person a combatant and double-taps him.

Thanks for playing but you lost the war. All that time you spent reading Clausewitz doesn’t mean much anymore, champ.
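
To make the failure mode explicit, here is an illustrative sketch of the Protocol I test with the fog of war encoded as unknowns. The problem it captures: a machine that must output “engage” or “hold” has to coerce “unknown” into one of the two, and there is no discretion anywhere to absorb the ambiguity:

```python
# Illustrative only: the hors de combat test from the scenario, with the
# fog of war made explicit as Optional (i.e. possibly unknown) inputs.
from typing import Optional

def hors_de_combat(detained: Optional[bool],
                   surrendering: Optional[bool],
                   incapacitated: Optional[bool]) -> Optional[bool]:
    conditions = (detained, surrendering, incapacitated)
    if any(c is True for c in conditions):
        return True
    if all(c is False for c in conditions):
        return False
    return None  # the honest answer from a gunship at stand-off range

# Scenario #2's drone still has to act, so "unknown" collapses to a verdict.
status = hors_de_combat(detained=False, surrendering=None, incapacitated=None)
decision = "hold" if status is True else "engage"  # None falls through to "engage"
print(decision)  # -> "engage": the double-tap, with nowhere for discretion to live
```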

The point of these case studies is that every single option chosen would have been lawful according to the Geneva Conventions. But would the decision to kill have been ethical? Maybe, maybe not – it depends on the moral code of the individual. Either way, it seems like a good idea to have a human make that call.

Additionally, all of these case studies involve physical distance between the killer and the victim – the patrolman from the juvenile spotter, the sniper from the suicide bomber, the helicopter pilot from the enemy commander. In some ways, the remoteness – the physical separation between targeter and target – is part of the problem. If the spotter, the suicide bomber or the enemy commander were in close enough proximity that they could be bear-hugged and put in handcuffs, then there would be no moral dilemma.

In the last case study, it’s quite clear that remoteness made it particularly difficult for the pilot to see through “the fog of war”. Remember, it was unknown whether the wounded enemy was still armed – though it could be reasonably presumed.

It follows, then, that on some level, making war more and more remote will also make war more and more “grey” – exponentially increasing the number of unknowns when determining the morality of an action. But “remoteness” is not the silver bullet for lethal autonomous weapons that many anti-drone activists imagine it is. The bow and arrow, after all, made “remoteness” possible long ago.

Indeed, as seen in the above case studies, the real problem here isn’t the drone’s “remoteness” per se but rather the drone’s “lack of humanness” – the drone’s inability to ask the big questions about the worth of a human, factoring in the inherent dignity of Man.

As a collective action that involves the deliberate killing of individuals by other individuals, war is the most intimately violent act of social behaviour observed in humans. Why one would wish not only to maximize the remoteness of warfighting but also to vest a silicon-based life-form with the in-built autonomous capacity to kill a living, breathing human without asking permission first is beyond me. Stranger yet, why one would think it a good idea to remove “dignity” from the discussion altogether – to nullify human discretion and to treat humans as simply another carbon-based life form with no inherent worth – is even more alien to me.

But such is the moral bankruptcy of the philosophical discourse surrounding drones – a demiurgic piling-on of increasingly bad ideas.

No doubt, on some distant, future battlefield, a soldier will be lying on his back, machine gun just out of reach, blood and viscera slopping from his stomach cavity onto the ground. The drone bearing down on him has not determined that the soldier is hors de combat. A mere momentary glitch.

“Have you no dignity?” the wounded man will ask the machine as a gun barrel is levelled at his head. “And what about my dignity?”

But the word will not register with the machine. The “dignity objection”, after all, was struck off during the proceedings of the International Panel on the Regulation of Autonomous Weapons. The machine knows nothing of human dignity.

Then, echoing Baker’s words perhaps, the machine will reply: “an invisible dignity violation is an uncomfortably ethereal basis for regulating as crude and base a practice as the use of violence in war”.

[Image: MQ-1 Predator, armed with AGM-114 Hellfire missiles]

*** = Side note: you’ll be hearing more about Eichmann in Jerusalem from me shortly.
