Image: TriStar Pictures | Carolco Pictures | Pacific Western Productions | Lightstorm Entertainment

In conjunction with a major AI conference in Stockholm, Sweden, leading AI researchers and business leaders announced that they have signed a pledge not to develop killer robots.

The war between man and machine was firmly established in our collective minds by The Terminator some thirty years ago. But fears of artificial intelligence probably first became a film phenomenon with the premiere of 2001: A Space Odyssey, and many modern movies have depicted robots against a backdrop of dystopia and general doom and gloom.

“What they show in the movies is definitely still science fiction. It’s still 50 or 100 years away. But there are much simpler technologies that will be used in the next 5 or 10 years that we should be concerned about,”

– Toby Walsh, professor of Artificial Intelligence at the University of New South Wales, Australia.

The development of killer robots has been called the third revolution in warfare. The first revolution was the invention of the gun; the second, the atomic bomb. Now the killer robot is waiting around the corner. Leading AI researchers and business leaders from around the world have for the first time signed a pledge to prevent it from happening – a pledge not to develop killer robots.

“I’m excited to see AI leaders shifting from talk to action, implementing a policy that politicians have thus far failed to put into effect,”

“AI has huge potential to help the world — if we stigmatize and prevent its abuse. AI weapons that autonomously decide to kill people are as disgusting and destabilizing as bioweapons, and should be dealt with in the same way.”

– Max Tegmark, president of the Future of Life Institute, which organized the effort.

The letter of intent was signed by Elon Musk; DeepMind’s Demis Hassabis, Shane Legg, and Mustafa Suleyman; Skype founder Jaan Tallinn; and the well-known AI researchers Stuart Russell, Yoshua Bengio, and Jürgen Schmidhuber.

The pledge does not advocate a total ban on artificial intelligence in arms or military contexts. Many researchers and business executives still consider such uses acceptable, as long as a human makes the decision to kill. The issue has been discussed regularly in the United Nations since 2014, but so far only 26 countries have supported an international ban and regulation.

Still, it will probably be hard for military R&D to resist the pull of full automation: a dogfight between a fully autonomous drone and a semi-autonomous one will almost certainly be won by the former.

And in practice, it may prove tricky to prohibit autonomous weapons. A few fully autonomous weapon systems are already available, and many others have some degree of partial autonomy. The underlying technology is also already widely available, and many companies are eager to fulfill lucrative military contracts.


The full text of the pledge reads:

Artificial intelligence (AI) is poised to play an increasing role in military systems. There is an urgent opportunity and necessity for citizens, policymakers, and leaders to distinguish between acceptable and unacceptable uses of AI.

In this light, we the undersigned agree that the decision to take a human life should never be delegated to a machine. There is a moral component to this position, that we should not allow machines to make life-taking decisions for which others – or nobody – will be culpable.

There is also a powerful pragmatic argument: lethal autonomous weapons, selecting and engaging targets without human intervention, would be dangerously destabilizing for every country and individual. Thousands of AI researchers agree that by removing the risk, attributability, and difficulty of taking human lives, lethal autonomous weapons could become powerful instruments of violence and oppression, especially when linked to surveillance and data systems.

Moreover, lethal autonomous weapons have characteristics quite different from nuclear, chemical and biological weapons, and the unilateral actions of a single group could too easily spark an arms race that the international community lacks the technical tools and global governance systems to manage. Stigmatizing and preventing such an arms race should be a high priority for national and global security.

We, the undersigned, call upon governments and government leaders to create a future with strong international norms, regulations and laws against lethal autonomous weapons. These currently being absent, we opt to hold ourselves to a high standard: we will neither participate in nor support the development, manufacture, trade, or use of lethal autonomous weapons. We ask that technology companies and organizations, as well as leaders, policymakers, and other individuals, join us in this pledge.

In 2015, Musk donated $10 million to the Future of Life Institute for a research program focused on ensuring AI will be beneficial to humanity. And last year, Musk, Hassabis, and Suleyman signed a Future of Life Institute letter sent to the UN that sought regulation of autonomous weapons systems.