Robots fighting wars. Science fiction? Not anymore. If machines, not humans, are making life-and-death decisions, how can wars be fought humanely and responsibly? Humanity is confronted with a grave future: the rise of autonomous weapons. Autonomous weapons are those that select and attack targets without human intervention. After the initial launch or activation, it's the
weapon system itself that self-initiates the attack. It's not science fiction at all; in fact, it's already in use. The world is in a new arms race. In just 12 countries, there are over 130 military
systems that can autonomously track targets. Systems that are armed. They include air defense systems that fire when an incoming projectile is detected; “loitering munitions,” which hover in the sky, searching a specific area for pre-selected categories of targets; and sentry weapons at military borders that use cameras and thermal imaging to ID human targets. It’s a pretty far cry from a soldier manning
a checkpoint. Militaries are not turning to robotics and increasingly autonomous robotics because they think they're cool. They're doing it for very good military reasons. These systems can take in
greater amounts of information than a human could, make sense of it quicker than a human could, and be deployed into areas that might not be accessible to a human, or might be too risky or too costly. In theory, any remote-controlled robotic weapon
— in the air, on land, or at sea — could be adapted to strike autonomously. And even though humans do oversee the pull
of the trigger now, that could change overnight. Because autonomous killing is not a technical
issue — it’s a legal and ethical one. We’ve been here before. At the beginning of the last century, tanks,
air warfare, and long-range missiles felt like science fiction. But they became all too real. With their use came new challenges to applying
the rules of war, which require warring parties to balance military necessity with the interests
of humanity. These ideas are enshrined in international
humanitarian law. In fact, it was the International Committee
of the Red Cross that pushed for the creation and universal adoption of these rules, starting with the very first Geneva Convention in 1864. These rules have remained flexible enough
to encompass new developments in weaponry, staying as relevant today as ever. But these laws were created by humans, for
humans, to protect other humans. So can a machine follow the rules of war? Well, that's really the wrong question, because
humans apply the law and machines just carry out functions. The key issue is really that humans must keep
enough control to make the legal judgments. Machines lack human cognition, judgment, and
the ability to understand context. You can think of the parallels with how we deal with pets. The dog is an autonomous system, but if the dog bites someone, we ask: Who owns that dog? Who takes responsibility for that dog? Did they train that dog to operate that way? That’s why the International Committee of
the Red Cross advocates that governments come together and set limits on autonomy in weapons
and ensure compliance with international humanitarian law. The good news is that the ICRC has done this
work for over a century. They’ve navigated landmines and cluster
munitions, chemical weapons and nuclear bombs. And they know that without human control over life-and-death decisions, there will be grave consequences for civilians and combatants. That’s a future no one wants to see.