Killer Robots are closer than you think! Thomas Nash: Campaign to Stop Killer Robots

As recently as a decade ago, many would have considered the use of pilotless drones to fight wars, controlled by military personnel on a separate continent, to be within the realms of science fiction. Yet the deadly use of this technology has become so commonplace, reported hourly across many news channels, that the moral question of its use has slipped away as the word "drone" becomes part of the everyday lexicon.

Sidestepping the overarching question of whether military intervention is justified in the first place, the moral battle over using drones for remote killing has already been lost: their use has become routine.

It is important to note that this is a political issue rather than a technological one; the same robot that drops bombs on a pre-defined target is equally capable of dropping aid over a disaster zone. How this technology is applied is a decision for our governments; it then falls to the pilots, sitting in front of monitor screens, often thousands of miles away, to pull the trigger.

But how long will it be before the pilot, along with his or her human instinct to judge the appropriate use of deadly force, is removed from this sequence altogether? How far away are we from that responsibility being handed to an autonomous robot?

“Closer than people think,” says Thomas Nash, Director of Article 36 (a UK-based organisation promoting public scrutiny over the development of weapons) and joint Coordinator of the International Network on Explosive Weapons.

As Coordinator of the Cluster Munition Coalition from 2004 to 2011, Nash led the global campaign resulting in the Convention on Cluster Munitions, having previously worked for the New Zealand and Canadian Foreign Ministries in Geneva and Ottawa.

Through Article 36, Nash co-founded the Campaign to Stop Killer Robots in April 2013. The Terrestrial caught up with Thomas to discuss autonomous weapon development, the role of the AI sector and how the fight against killer robots can be won.

Interviewed by Marcus Lawry

 

TT: What prompted the formation of this coalition?

TN: Scientists, as well as human rights and humanitarian campaigners, had begun to express concern about the potential development of autonomous weapons as early as the 2000s.

The concern was, and still is, a fundamental moral objection to weapons systems that can fire missiles and drop bombs without a human being pressing the button. It wasn’t until 2012 that things really got going, though. Article 36 called for a ban in March 2012, and Mary Wareham at Human Rights Watch led discussions throughout that year towards the establishment of an international coalition. After a key meeting in New York in October 2012, the Campaign to Stop Killer Robots was launched in London in April 2013.

 

TT: How close are we to seeing autonomous weapons introduced to the arms market?

TN: Closer than people might think. We already have automatic systems that can fire at incoming missiles or attack enemy radar, but they aren’t really selecting their own targets; they are detecting a stimulus and automatically attacking it. That’s much the same as a landmine, really.

We also have systems like “automatic target recognition” – a sophisticated set of sensors and software on board drones that gathers data and can suggest targets to human operators. So it’s not hard to see how we could move from this situation to one in which the weapons system doesn’t simply suggest the target, but actually selects the target and fires the weapon at it. That’s really a political decision rather than a technical one. That’s why we need a legally binding treaty to make that political decision impossible.

 

TT: What sort of weapons are we talking about?

TN: Autonomous weapons could take a variety of forms – aerial drones, armed vehicles, boats, humanoid soldiers. We usually see articles on this topic illustrated with pictures of the Terminator or Robocop, but it’s more likely that aerial drones are going to be on the frontline of developments towards greater autonomy in weapons systems.

 

TT: Is it possible to keep tabs on what arms developers are producing? Do you rely on whistleblowers, or do you have to wait for developers to announce a new product?

TN: This is a good question and it’s a key concern really – there is just so little transparency in the way weapons are developed. It happens behind closed doors, through discussions between arms manufacturers, the military and other parts of government, with no public scrutiny. The good news in relation to autonomous weapons is that there is now an international spotlight on the issue, with talks happening at the United Nations and the media watching. So I think the terrain is already a little less conducive to the development of autonomous weapons, and that in itself is a good thing.

 

TT: What sort of response have you received from governments to your lobbying?

TN: The response from governments has varied quite a bit. Some have been very vocal against autonomous weapons, including Pakistan, which has obviously had a lot of experience with drones hovering over its territory and firing missiles at people. Others, like Israel, the UK and the US, which already use armed drones, have been sceptical about the need for new international rules in this area. Then there are a number of states – Austria, Ireland, Germany, the Netherlands, Switzerland and lots of others actually – that are talking about the importance of meaningful human control over attacks.

This concept of meaningful human control has become a bit of a cornerstone for discussions at the UN. How can we ensure meaningful human control, how do we know when we have it, and should we prohibit weapons systems that operate beyond it?

 

TT: What is the Artificial Intelligence industry’s view on the development of autonomous weapons?

TN: The AI community set out its views pretty comprehensively against autonomous weapons in an open letter in July 2015. Over 20,000 people have signed it, including over 3,000 researchers in the fields of AI and robotics. The letter explicitly calls for “a ban on offensive autonomous weapons beyond meaningful human control.”

 

TT: Even if the AI industry is against the development of autonomous weapons, it will be on the back of their work that such weapons are created. What can they do, practically or politically, to stop this happening?

TN: I think the AI and robotics communities can do a lot to stop the development of autonomous weapons. Getting involved with the Campaign to Stop Killer Robots is a first step, and there is also the International Committee for Robot Arms Control, which gathers academics and thinkers in this area and is also part of the wider Campaign. When you have a broad consensus within an industry that a certain direction is unacceptable, this can have a major impact on what society as a whole believes is appropriate.

In the end, autonomous weapons will be prevented because political leaders in different countries can see that their people don’t want them. Then it will be a question of showing that there is a feasible way for them to work as a group of countries to ban these weapons, and to develop a forum of meetings to ensure that the technology is scrutinised and discussed and that killer robots are never developed.

 

TT: What’s the next move?

TN: The next move for the Campaign is to get countries to develop national policies that embrace and explore the concept of meaningful human control and reject the development of autonomous weapons. These national-level discussions are extremely important now, and the AI community can be a part of them: in parliament, with the military, in whatever committees and scientific advisory bodies various governments have, and so on. Members of the Campaign to Stop Killer Robots will be increasingly active on this front.

At the international level, the next set of UN talks will take place from 11 to 15 April 2016 in Geneva, and we want to see countries coming along and raising their flag to call for a ban on “lethal autonomous weapons systems,” which is the term used at the UN.

The crunch time for decisions will be in November 2016, when the UN body discussing autonomous weapons has a major five-year review meeting. States could decide then to start negotiations on a legal instrument to ban autonomous weapons, and that’s what we will be pushing hard for them to do.

 

TT: Where will this battle be won or lost?

TN: I think the battle against autonomous weapons will really be won on moral grounds. It’s a question about humanity and what sort of world we want to live in. I think most people, including most political leaders, have a sense that the principle of humanity is the common thread amongst all people and that this principle means something.

Weapons systems that can select targets and fire missiles themselves, based on some pre-programmed algorithm, are morally repugnant. They would be an affront to our very understanding of humanity and human dignity. People just don’t want killer robots, and that’s what gives me confidence that political leaders will decide to ban them.

 

Want More?

Visit the Campaign to Stop Killer Robots website for updates on their battle. 

Read the history of the first armed drone over at Wired.

 

