Autonomous weapons and international law

We've picked what I think is one of the most compelling and complex topics, not just for lawyers, ethicists, and military planners but for society itself: the rapidly increasing role of artificial intelligence, and in this particular context, with respect to autonomy in the use of force. We've been spending a lot of time on this here at Brookings of late, more in the security sphere, watching very closely how China is moving very rapidly in this space, how Russia is moving, and how the United States is moving as well. Modern militaries are already very far advanced in how they're automating everything from personnel systems and equipment maintenance to surveillance drones, to robots, and, at the farther end of the spectrum, defense systems like Iron Dome.

The focus of today's discussion, however, is going to be on the development of weapons systems that could have fully autonomous capability to target and deploy lethal force, in other words, to operate in a dynamic environment without human control. The prospect of developing such weapons is not a matter of science fiction; this is a real-time debate. I think we can already talk about this as the next arms race, the arms race of the 21st century. It's in front of us now; this is not a future question. The United States, China, Russia, and France, among others, are investing tens of billions of dollars to determine who will win this race. Both Vladimir Putin and Xi Jinping have made it very clear that they intend to win. I think that claim is more credible in the case of Xi Jinping than of Putin, but we are certainly going to see huge investments on both of their parts to outmaneuver the United States on AI. I was struck, by the way, the other day to read that this year, for the first time, China has more dollars in start-up funds in AI companies than the United States. If we think about the role of the private sector in generating AI advances, we tend to think of China as lagging behind the United States technologically; that is really no longer true in the computational sciences.

The question for today's session is whether and how fully autonomous weapons systems can comply with the rules of international humanitarian law and human rights law. At the heart of this debate is the notion that such systems should have, quote, "meaningful human control" to comply with these legal requirements: distinction, proportionality, precautions against attacks on civilians, et cetera. A number of scientists and technologists, as well as NGOs, are leading a campaign to argue for a binding international treaty banning the development of weapons that don't have meaningful human control. Others are suggesting a more measured, incremental approach under existing rules of international law to ensure that humans remain in the decision-making loop. In our discussions in Beijing, it seems clear that the Chinese do not share our normative starting point on this question, so this will be a fraught debate in international politics. Just next week, governmental experts and officials will meet in Geneva under the auspices of the Convention on Certain Conventional Weapons to continue to try to find consensus on next steps in regulating these weapons, and we hope that today's debate will provide useful input to their deliberations as well as raising public attention.

To help us, we've assembled a very distinguished group of people. I'll ask Ted to introduce our panelists, but I will first briefly introduce our keynote speaker, Mary Ellen O'Connell, who is the Robert and Marion Short Professor
of Law and Research Professor of International Dispute Resolution at the Kroc Institute at the University of Notre Dame. Her research expertise is on international law on the use of force and international legal theory. She is the author of numerous books, too many to mention, but they include The Prohibition of the Use of Force and Unlawful Killing with Combat Drones, and a number of books that relate directly to the topic. She has a rich intellectual and professorial history, but I also want to highlight that she was a professional military educator for the US Department of Defense at a certain part of her career, and I think that combination of the applied practice of this and the theoretical and ethical debate is exactly where we want today's discussion to go. So she'll be our keynote speaker. That will be followed by a panel that Ted will introduce, but it will comprise Jeroen van den Hoven and Charles Dunlap; my welcome to both of them. Before turning to that, I'm going to ask our special guest, the Mayor of The Hague, Pauline Krikke (did I get that right?), to come to the stage, and join me in welcoming her.

Excellencies, ladies and gentlemen, thank you so much for the invitation to attend this Justice Breyer lecture. I consider it a privilege, not least because the Brookings Institution is the world's foremost think tank. The theme of this meeting is the influence of artificial intelligence on weapons and warfare, and the challenges that these developments bring in the areas of international law, ethics, and human rights. You will appreciate that this is very much a topic of interest to The Hague too. It is certainly not something new: even when The Hague was just starting to evolve into a center for law, the impact of technology on warfare was already a major concern. Conventions were adopted at the first Hague peace conference banning the use of gases and the dropping of explosives from balloons. The conference in 1907, an initiative of President Theodore Roosevelt, addressed various technical aspects of warfare. It was a time when industrialization was transforming the world, just as today's digital revolution has enabled countless innovations, creating opportunities to do good and to save people's lives, as well as new technologies that are used to attack and kill people.

The Hague, the international city of peace and justice, is dedicated to pursuing technological innovation for the benefit of humanitarian aid, peace, and justice, a deliberate choice, also in support of the UN Sustainable Development Goals. The Hague Humanity Hub is a specific example of such a new initiative in the field of humanitarian aid: scholars, professionals, business leaders, and students have joined forces in this organization to find solutions to the problems surrounding humanitarian aid. UN OCHA is now also a part of the Humanity Hub, and during his visit to The Hague last December the UN Secretary-General was very impressed by what was being done in our city in this domain. All the more reason to continue along our chosen path, a path which in my view also has another purpose, one which has to do with the other side of the digital revolution: establishing boundaries, rules about what is and what is not acceptable. It is clear that digital technology is changing our world in countless ways; nothing stays as it was. But this also means that measures need to be taken to prevent these technologies from falling into the wrong hands or being used in the wrong way. Or, to put it
more strongly, it is vital that applications of artificial intelligence, big data, and quantum computing are facilitated that will benefit mankind. The Hague is delighted to offer a platform for that, and this is also something that we are very capable of doing, first and foremost because of the knowledge and expertise available in our city, and I am referring not just to The Hague itself but also to the world-renowned universities of Leiden and Delft, with whom we work closely together and who are present in our city. Together with the Netherlands Ministry of Foreign Affairs, The Hague is preparing for the arrival of a unique centre for artificial intelligence and ethics. Apart from this concentration of knowledge, over the past hundred and twenty-five years The Hague has proven itself to be a unique place for confidence-building between nations. The Hague will do whatever is necessary to ensure that it continues to be an international center of legal expertise in a hyper-connected world, also for the next 125 years: a place for international dialogue on peace, justice, and security. And just as binding agreements about warfare were made for the first time at The Hague peace conferences more than a hundred years ago, so today we need to establish rules for the digital world. And given that everyone is affected by this digital world, everyone has a responsibility to ensure that it is a safe digital world. This is also why The Hague has specifically chosen to involve private-sector organisations in this process. Given the breadth and the scope of the digital world, it will be somewhat of a Herculean task, but I'm convinced that humanity can do it. In that respect we could take an example from your countryman Andrew Carnegie, who donated the Peace Palace to The Hague. He was an idealist but also a realist, and it is realism which forces us to act, because it is not just about us; it is also about future generations, and we hope that they will benefit from whatever we do today. Thank you very much.

First, my compliments to Ted Piccone, to the Brookings Institution, to Mayor Krikke, to the city of The Hague, and to all of you gathered here today for recognizing the importance and timeliness of this issue of lethal autonomous weapon systems. In my view there is no issue of greater importance on the international law and the use of force agenda today, with the possible exception of the use of force in the nuclear weapons context, and that problem is in some ways an easier one for those of us in international law to deal with, because we know what the law is; it's a problem of compliance. When it comes to lethal autonomous weapons, we're not so convinced of what the law is, or perhaps even what it should be. Yet getting to answers on both of those points is essential. We are facing a real challenge, as has already been indicated by the first two speakers today. Just in February, the former head of NATO, Anders Fogh Rasmussen, said at the Munich Security Conference that he fears swarms of AI-enabled robots attacking cities. Add to that concern what's going to happen when computers go beyond current artificial intelligence to actually learn, or when you combine nanotechnology, or when you think about all the hackers in the world who can't wait to reverse-engineer the inventions of those Department of Defense research labs. I'm actually beginning to discuss lethal autonomous weapons as entering into the category of weapons of mass destruction. Unlike existing weapons of mass destruction,
nuclear, biological, and chemical weapons, the international community has already begun to discuss regulation of autonomous weapon systems, and begun to discuss it in advance of a truly harrowing use of such weapons, as was the case for nuclear, chemical, and biological weapons. So in that sense this is a more hopeful talk than it might have been. And there's more ground for hope in that this discussion around lethal autonomous weapons and their regulation already has a basic concept, which Bruce Jones mentioned: meaningful human control. So there is a topic around which various positions are taking shape, and meaningful human control will be the focus next week in Geneva as parties to the Convention on Certain Conventional Weapons begin to move toward their meeting in November. Before November, as Bruce said, there will be a meeting next week and then another one in August of the Group of Governmental Experts, and they'll be talking about both defining autonomous weapons and what the human-machine interface is, so that they can better understand how to come up with some kind of legal instrument devoted to meaningful human control.

So I actually have a different question than those two. I'm not so worried about defining, or even about what the topic for discussion in Geneva will be, meaningful human control. I'm interested in the next question, perhaps a bigger question, and that's whether the international community as a whole will demand compliance with any legal developments in Geneva on autonomous weapons, or compliance with the existing law we already have that's implicated by this new technology. We have the United Nations Charter and other principles restricting the use of military force; we have principles of international humanitarian law to govern conduct on the battlefield; and we have human rights law. It's all relevant, and I suggest to you that human rights law, often left out of these discussions of autonomous weapon systems, is all the more important because nothing is going to keep this technology within the military sphere. It is going to move out.

So in these remarks, which will be quite brief, let me talk about three points: first, a little more about the emerging understanding of what autonomous weapon systems are; second, the current debate over meaningful human control; and finally, the role of international law in the regulation of these weapons as a general matter.

So, as I said, there is a point of helpful consensus. You already heard the definition that people are agreeing to. You might not know that the definition Bruce Jones mentioned of autonomous weapons actually originated in a document from the US Department of Defense: weapons systems that, once activated, can select and engage targets without further intervention by a human operator. I began my own research on autonomous weapon systems around 2010, as part of the ongoing research and writing I had been doing about drone systems and their role in targeted killing. In that first research I was doing on autonomy, I found a 2005 public document of the US Department of Defense reporting on the possibility of autonomous weapon systems, in a report from the Committee on Autonomous Vehicles in Support of Naval Operations. I think this issue of autonomy came to the wider public's attention around 2011, in a Washington Post article by Peter Finn titled "A future for drones: automated killing." That article revealed what the Department of Defense was doing in terms of what we thought then was going to be the next generation of lethal drones. In 2012 the US
DoD followed up with its directive, and then in 2013 a dramatic new development occurred: the UN Special Rapporteur on extrajudicial killing, Christof Heyns, issued a report on autonomous weapons, and he called for a moratorium on further research on these weapons, because he was not convinced they were going to meet international law standards for lawful weapons. From there the developments moved to Geneva and to the Convention on Certain Conventional Weapons, where the role of States Parties is to review new weapons technologies to ensure that they comply with international law, especially around two particular principles: that new weapons, or means and methods of warfare, are able to discriminate on the battlefield, discriminate between civilians and combatants, so that they do not have indiscriminate effects; and that they do not cause unnecessary suffering to either combatants or civilians. Lethal autonomous weapons went onto the agenda, and they're now entering their fifth year of review.

So with these two meetings of the Group of Governmental Experts scheduled, next week and in August, on their agenda is definitions as well as this human-machine interface, and I believe that they are actually going to come up with something, that the meeting in November of the States Parties will result in a tangible legal development. It will either be a protocol, like the protocols that the CCW has regularly added to its convention, including, for example, a recent one to preempt a future technology that doesn't meet international law standards, and that was the prohibition on blinding laser weapons. Or, the other thing that the CCW does is okay new weapons technologies: they reviewed drone technology and found that a drone was a mere launch system, that it was not per se unlawful, and drones were allowed to fly. But I think in November we're going to see some kind of limitation on autonomous weapons, either in the form of a new protocol like the blinding laser protocol, or at least a declaration that brings meaningful human control into our thinking, our legal thinking, about weapons.

Where the different States Parties to the Convention on Certain Conventional Weapons come out on the shape and content of either of those possible developments can, I think, be linked back to how they see the history and development of this new technology. There are really two kinds of groups that we're seeing around the technology. There are the groups that see it as a mere incremental development in the computerization of weapons: it's just the next drones; autonomous weapon systems are going to be a positive addition to arsenals; they will replace soldiers on battlefields and, like drones, they will save soldiers' lives, and that of course is a good thing. But the researchers are promising us that they'll do even more, that they've got more advantages, because drones operate with a remote human operator: somebody who can get fatigued, who can get angry, who can seek revenge, who makes mistakes. Those problems are removed when you've got a fully autonomous select-and-destroy function that's already been programmed into the weapon. Researchers are telling us that in fact autonomous technology will be better than human beings at being consistent in complying with international law. In a major new article in the South Carolina Law Review, US Army Major Christopher Ford, taking really the US
position on these questions, presents the view that autonomous weapons, like drones, will not be per se a violation of any weapons standards, because they'll be able to discriminate on the battlefield and they will not cause unnecessary suffering; they can deliver the kinds of weapons that will comply, and they will be better able to comply with international humanitarian law targeting rules. With one exception, and this is another place where we're finding consensus among the different parties: Major Ford understands that you cannot hold a robot weapon accountable for mistakes that could happen on the battlefield. That is conceded. Still, he and others taking this incrementalist position are very strongly arguing that research should continue to improve the technology we now have, to bring this potentially very helpful new form of weapon onto the scene. In fact, they'll go even further and say that given the context of the battlefield today, with the rapid changes in AI and with competitors around the world doing this kind of research, it's imperative that advanced nations also continue this research and develop these weapons.

Now, the other perspective sees autonomous weapons not as a mere continuation of computerized military developments but as a true paradigm shift. Jakob Kellenberger, the former president of the International Committee of the Red Cross, has said that a major qualitative change has taken place. He and others agree that this is a true revolution in military affairs, and that it's a revolution that will result in greater danger: more risk, more loss of life, more militarized solutions to humanity's problems.

Well, these two perspectives have been meeting and debating in Geneva for, as I said, more than four years, and it seems that the incrementalists do concede that there is some need for a human role, because of this point about accountability. They understand the need for somebody to be held accountable for mistakes, for the whole law of armed conflict to really function. Amos Guiora, a longtime JAG officer with the Israel Defense Forces, has also written a new article in which he comes out in favor of a ban on fully autonomous weapon systems because of this point of accountability; he says that unaccountability is in direct contrast to the profession of arms. In January 2017, it seems that DoD in the US also took this position: they issued a statement saying that autonomous and semi-autonomous weapon systems shall be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of armed force. Major Ford describes how, I think, DoD is projecting the role for humans in this area within an accountability context, and he says that we will now start thinking about how we can hold a human being somewhere "in or near the loop," as he says, for joint enterprise liability with the robot.

Obviously, for the revolutionaries this is much too little, quite inappropriate. They want real and meaningful human control. At the first meeting of the CCW, Monsignor Tomasi of the Holy See's delegation said that decisions over life and death inherently call for human qualities such as compassion and insight to be present; while imperfect human beings may not perfectly apply such qualities in the heat of war, these qualities are neither replaceable nor programmable. The Holy See sees support of a ban on autonomous weapons as the only way forward, and they are joined in that effort by such powerful NGOs as Human Rights Watch and a number of newly emerging NGOs devoted
just to this topic, including the Campaign to Stop Killer Robots but also Article 36. They want a ban on weapons without meaningful human control, or, put affirmatively, they want a legal regulation requiring meaningful human control. And as the CCW agenda suggests, what counts as autonomous weapons and what a human control regulation would be is yet to be decided. It could very well be human control "light," along the American version, or it could be human control "heavy," as groups like Article 36 want. Article 36 has described what they want as deliberative moral reasoning by human beings over individual attacks, and the computer scientist and co-founder of the Campaign to Stop Killer Robots, Noel Sharkey, has described the requirement as a commander or operator having: one, full contextual and situational awareness of a specific attack; two, the ability to perceive unexpected changes in circumstances; three, retention of the power to suspend or abort the attack; and four, time for deliberation on the significance of the attack. In 2010, I also argued that there has to be some human ability to abort an attack prior to impact, but I quickly realized that that would actually have implications for existing weapon systems such as Israel's Iron Dome, a defensive system, but one that can result in people being killed. Amos Guiora would leave Iron Dome out. So you see there are many questions, even around this, among those of us who agree there has to be meaningful human control. So, meaningful human control light, human control heavy: we'll have to have further discussions in these next months.

Despite the hurdles, I still believe something will result in November, at least on the points of agreement. There is simply too much popular opposition to unregulated development. Currently there are 22 states announced in support of a ban; the Non-Aligned Movement, which represents many times 22, wants action. Last August, the founders and CEOs of 116 robotics and AI companies, including Elon Musk, sent an open letter to the CCW imploring states to prevent a LAWS arms race. High-profile AI experts, including Stephen Hawking, sent a similar letter in 2015. Just this morning, National Public Radio was reporting that a coalition of AI researchers around the world said that they would no longer partner with a South Korean university that had a joint relationship with a weapons manufacturer, unless there was a promise by the university to ensure meaningful human control in any AI developments with a military application. Talk to your families and friends; ask them what they think about killer robots. You already know what they'll say: nobody likes it. Opposition is widespread.

So the debate heavily involves international law. As we've heard already, the rules are part of every step of this discussion, but not only the specific rules on the use of weapons on the battlefield: the wider context of international law, the very sources of international law, are coming into play. The human-control-light group is relying on positive international law, on state consent, and on the role of affirmative treaty provisions and so forth, and they are taking the position that the technology can meet these positive requirements of international law, that there's been no showing it cannot measure up, and that without such a negative showing states are free to continue unimpeded in developing AI, with this one workaround for accountability. And I think that that's actually true: positive law is not necessarily restricting future
developments. The moral philosopher Rob Sparrow, who's been very active in this field, has recently written that if our main concern is to reduce noncombatant deaths, it becomes easier to imagine autonomous weapons being ethical: they would simply have to be better than human beings at distinguishing between legitimate and illegitimate targets in some given domain. But he goes on to say that the deeper issue concerns whether the use of these weapons is compatible with the requirement of respect for the humanity of our enemies, which underpins the principles of international humanitarian law. He admits that further questions are beyond his expertise; he just doesn't know whether we can have a positive prohibition in international law, whether one could even come into force or could even be effective in this wide-open area of research. He's right, I think, about the prospects in positive law, but he may not know of this whole other realm of international law which is deeply influencing the meaningful-human-control-heavy group, and that's natural law.

The proponents of a ban have been relying principally, in the first instance, on the Martens Clause, an agreement of states from the 19th century that says the dictates of public conscience and the principles of humanity have a role in restricting new military developments in terms of weapons and techniques. I agree with Diego Mauri of the Holy See's delegation to the CCW that the Martens Clause is part of natural law, and that it requires no consent by states to be binding. What it needs, as Daniele Amoroso has written in an absolutely great new article on this topic, what the Martens Clause, what natural law, needs in order to bind, is the emergence of a global trend in favor of the principle, and I think that's absolutely right. Just Tuesday of this week, the International Committee of the Red Cross released its report on lethal autonomous weapon systems, and it is basing its conclusions on the Martens Clause.

There are other concepts beyond the Martens Clause which are found in natural law and which, I suggest to you, will be very important in this debate going forward, very important in Geneva and beyond. For example, the human right to life, as found in the Universal Declaration of Human Rights, is part of natural law: it is durable, it is unchanging; you cannot move away from it through treaties or through the consent of states. Equally, the prohibition on the use of military force, found of course in the UN Charter but also considered a peremptory norm, a rule of higher law that states cannot walk away from, that they cannot change, is deeply embedded with principles of morality that have to endure no matter how badly human beings mess them up.

The momentum building on these concepts is moving forward to a regulation, and that kind of a regulation, out of these sources as opposed to a mere treaty provision, can have an impact on norm formation that we're only just beginning to understand. Remember, natural law binds regardless of consent. Second, action at the CCW will have a further impact, being based on these kinds of norms, in creating a worldwide, all-encompassing, and all-impacting set of principles. Those Chinese researchers that Bruce talked about: they'll know about this norm regardless of what China does, whether it joins a treaty or not. They will be told about it by their fellow AI researchers around the world. This will follow the technology as it leaves military laboratories, and it will create the sense that developing autonomous technology that can kill or harm human beings or
the environment is anathema, is beyond the pale, is not what civilized people, what moral people, do. There's one other, I think, very tangible impact from such a norm emerging out of this debate, and that's to create the appetite for counter-technology. Where are the antidotes? Where's the AI that reduces the harm of lethal AI? Where is that coming from? With this kind of moral context around the debate in Geneva, I think we're going to have a rich new set of incentives for people to be involved in that kind of virtuous research. I've called for a very similar kind of thing with respect to cyber weapons.

And now my last point, on the impact such a norm could have. Maybe in this country, and in the UK and a few other places, we will also, in light of this new norm against fully autonomous robotic weapons, have a clear-eyed national security debate about what we have gained in the national security realm from drones and targeted killing, from cyber weapons, and what we might gain from lethal autonomous weapons. Is the record of what computerized militarization has done for us really so great? Are we safer? Is there greater respect for human life around the world? Is there work being done to protect the environment, which truly is an existential crisis for us? In my view, we have been misdirecting our research dollars, and our very thinking about what gives us national security, in this direction. So a new norm could bring about such a new, and I think extremely necessary, discussion, certainly in the United States. Okay, so now all eyes will turn to Geneva over the next few months, but long before then, in the next few minutes, our eyes in this room will turn to my colleagues Charlie and Jeroen. Thank you very much for your attention.

That was a lot to chew on, and it was deep and powerful, so thank you very much, Mary Ellen, for setting that kind of encompassing frame. I think it really brings us back to some really fundamental points about the use of weaponry and technology in today's world. I'm Ted Piccone, a senior fellow here in the Foreign Policy program; I work on international order and strategy issues, and I'm here to help moderate this discussion. We won't have enough time; this is going to be frustrating, because I can imagine we'd want to talk about this for a long time. But with very short introductions to our co-panelists (you should have bios for everyone): Jeroen van den Hoven is a professor of ethics and technology at Delft University of Technology in the Netherlands. He has many activities in the fields of ethics, science, and technology; he's editor-in-chief of the journal Ethics and Information Technology; and he is, among other things, very involved in the deliberations of the European Union on ethically aligned design of technology and autonomous weapons. A report came out just last month in which he was very involved, trying to lay out what the main principles are from a European point of view, and the European Union is preparing a new policy document, I believe later this month, that you've been involved with as well; we'll hear more about that in the discussion. We'll then turn to Major General Charles Dunlap, who retired from the Air Force as Deputy Judge Advocate General after a long career in the field of military law, ethics, and national security. He currently runs the center on these topics, law, ethics, and national security, at Duke University Law School, and he is a professor of the practice of law. So we will hear from both of these speakers and engage in discussion here, and then with all of you. So let me first turn
to Jeroen and hear a little bit more of the European perspective. We have heard an increasing number of comments from different leaders in Europe, including most recently an interview that the president of France, Emmanuel Macron, gave in Wired magazine, which I encourage you all to read. It was in the context of a French national policy and a European policy on AI, in which he called for an outright ban on fully autonomous weapons. Let's be clear about the terminology: I think in this debate we are talking about the future prospect of fully autonomous weapons. We already have a lot of automation in our military technology, and as you pointed out, Mary Ellen, some of it is already in use, and it's proceeding very rapidly. So in that context, can you tell us a little bit more about the European view on these issues?

I certainly do recommend everyone to read that interview in Wired magazine with Emmanuel Macron. I think he captures the European point of view, to the extent that there is such a view, and I would characterize it as follows; I have a quote here: "if you do not want to block innovation, it is better to frame it by design within ethical and philosophical boundaries." That's the idea of the European Commission: that ethics is a design issue. We're confronted in so many areas by these kinds of stalemates, these kinds of conflicting values: on the one hand sustainability, on the other hand efficiency; privacy and security; all of these things, we keep confronting them over and over again. But innovation is a way of dealing with those kinds of conflicts in values, and we can actually design systems in such a way that they can perhaps help us to overcome those conflicts of interest, by changing the world in such a way that it becomes more amenable to, you know, what we want to have. He also remarks in passing that there is already, as you said, a lot of automation going on in the military domain, and he leaves a little bit of space for that. But he says clearly, and that is the European perspective, that delegating the power and the discretion, the choice to kill and take a human life, to a fully autonomous machine is just beyond the pale. And that is also the position that we have taken in that European document on behalf of the European Commission that has been published, the statement on artificial intelligence and autonomous systems, and there we take that position as well, and it is almost as strong, because the Holy See and the southern European countries are also represented there, and they take the position that you quoted and referred to. That is basically our position.

Having said that, I think, and this is the other thing that is also a European approach: because we are so interested in design and innovation, we tend to complicate things in such a way that it may in the end help us. At first you think, oh, but this is only complicating things, and I say: when in doubt, complicate. So on the one hand we have to look at these systems. It's not a gun; it is a very complex system of systems, a socio-technical system: you have people, the software they are interacting with, infrastructure, communication protocols, incentive structures, manuals. All of these details count. It is that unit that you need to think about how to design, in order to get out of it what you want out
of it. So that's one thing: you have to look at the critical technical functions, all the functionalities, in the loop, on the loop, related to the loop, very remote from the loop, et cetera. All of these distinctions you need to bring into play, and then you have to see how you can design it in such a way that you can have the functionality that you want out of it, in this case military functionality, for all good reasons, without the drawbacks of giving up on accountability and responsibility altogether.

And therefore the second complication: not only do we need a much more fine-grained account of these systems, with their critical technical functions and the way they are aligned, but we also need a much more detailed, fine-grained, and innovative theory of responsibility. Because what we do is, we come to these subjects, which are very complex and dynamic, and we bring a concept that is two thousand years old, that has been with us for quite some time, and we try to apply it to a very new, dynamic, and very complex world. It's as if you wanted to repair a Swiss precision watch with a sledgehammer: you cannot do that; it won't work; it will never give you the right results. So what you need is a kind of fine-grained theory of responsibility. And one thing to bear in mind is that in other areas we are also shifting towards a new conception of responsibility for a new high-technology world, because we are no longer talking about someone punching someone else on the nose and being blamed for that. We're talking about groups of people together omitting certain things which will later negatively affect people living in completely different places, remotely, in future times. So how far can you be removed from that paradigm case of harm, where, in a very simple causal setup, in the here and now, you inflict some damage on someone? But this doesn't mean that we have completely given up on the idea of responsibility. We still hold on to the idea of responsibility, although we're talking about groups of people jointly omitting something that will affect people further down the road. So I think the same applies here: we need a more complex, more adequate view of those military systems of systems, and we need a more fine-grained account of responsibility, and we have to bring them together to squeeze out of them what meaningful human control could mean. Because I was very fascinated by Mary Ellen's speculations about what will happen in November: if that happens, or we move closer to it, we will have to know what it means, meaningful human control. For the first time now, we've published a paper on a philosophical, conceptual analysis of meaningful human control, and we have some people in the audience who can testify that it's been a very hard job to help ourselves to a decent definition of it. We have to do some more work in the coming months, basically. So yes, these are all kinds of, I think, contextual considerations that may make it more complex, but may also open up the opportunity of arriving at more adequate solutions.

I think that's very helpful, because, as you talked about, we're in a paradigmatic shift and we're still wrestling with what it all means, so the contextual points are very important. I want to turn to Charlie to give us a military perspective on some of these issues. Let's talk about grounding this
in the reality of how weapon systems actually are developed and deployed, and how law is handled by military units in real time, because these issues, as we've discussed, are not new. We're already seeing automation raising all kinds of different legal and ethical questions. So maybe you can say a few points about that, and about compliance, and whether we think, in the international domain, the current standing of international law is strong enough to constrain the race.

Well, thank you very much, Ted, and thanks to Brookings for hosting. I can't think of a more important or more timely topic than this one. And I want to say, although I'm a retired Air Force officer, I'm not speaking for the US government, or indeed all militaries everywhere. But allow me to say a couple of things. Number one, I do think we have law: the law of war is well established, and whatever system we have, whether it's an autonomous system, a human-directed system, or whatever, it's going to have to comply with the fundamentals of the international law of armed conflict, period. Now here's the thing: we think that this is a revolutionary matter, but remember, in the history of warfare we've had other revolutionary matters: the internal combustion engine, aircraft, radios; there have been lots of things. Remember when submarines first came out: they tried to have a specific kind of law limiting them, and they found out during warfare that it's not going to work, and it never was implemented. I'm also very cautious; I've always been an advocate of strictly complying with the law of war, as opposed to trying to pick out certain technologies and ban them, because inherently you're looking at a snapshot in time. Mary Ellen used blinding lasers as an example: yes, there's a convention against the development and employment of weapons intended to blind. You can have a weapon that is intended to incinerate you, but you can't have one intended to blind. Why is that? Because in the 1990s, if you go back (I think it's still on the ICRC website), they said that if you're blind you have no life, or something like this; that was their argument. Down at Duke University right now they're developing bionic eyes. What I'm saying is, there's a big risk in trying to capture a snapshot in time of technology and trying to ban it.

What we're really talking about here is weapons that don't exist. As has already been said, we do have weapons now that select and engage targets, and we've had them for thirty years: the close-in weapon system on ships has a fully autonomous mode. Why? Because a human being cannot react to a swarm of things coming at a ship; there's human supervision, but the mode is autonomous. What we're really talking about now, and correct me, fellow panelists, if I get this wrong, is machine-learning autonomous weapons, in other words neural systems: the machine will continue to learn and adjust in ways that some would say are unpredictable. And we have to remember that this technology is not limited to the military, or even to the use of force; it's going to be in all aspects of our lives, and so we have to keep that in mind. So we will see a progression, in areas completely unrelated to the military, of restricting and controlling these technologies so that they don't do things we don't want them to do. I'm an advocate of a robust testing system. There are different arguments out there, and I'm not enough of a technologist to give you all the answers, but in neural networks there are certain nodes;
you're going to have to engineer into those nodes the law, and the other limitations, social or whatever, that you want to have, so that as it builds, it will not learn in a way that will do something at the end of the day that you don't want it to do. And there's always going to be responsibility; I've written a law review article on personal accountability with autonomous weapons, so you can do it. And in any event, international law doesn't require that you be able to criminally prosecute an individual in order to have a lawful weapon. There is going to be state responsibility, and it may be that we need to tweak the state responsibility doctrine; my friend at Yale, Rebecca Crootof, is doing some work on that. So we may have to do that. But what I'm saying is that other nations are going to be developing these weapons, and are developing these weapons.

Mary Ellen posed to you a question about what your friends and neighbors would think of killer robots. Here's how I would phrase that a little differently: ask your friends and relatives what they would think if their daughter in the Army was killed trying to do something as a human being that a machine could have done just as effectively, just as safely, and just as ethically. Or ask them what they would think if their daughter was killed by an enemy autonomous system that reacted and operated so much faster than she was able to respond, when there could have been a weapon that would have defended her. Absolutely, there's a moral issue here; there's a moral issue every time you use force, and real people who've been in the military never take pleasure even in killing the enemy, other than the fact that it may save other people. That's why I'd like to bring up the moral hazard of inaction. We should never think that because we don't implement a technology, ipso facto civilians and the innocent will be safe from harm. No, we're living in a world where there are very evil, bad people who are going to get these technologies, and they're going to use them to do things to people unless we stop them, and unfortunately sometimes force is the only way of doing that. We have to be able to meet it.

So do we need new norms? Yes, in this sense: I think we need a norm as to what the test and evaluation protocol ought to be. Does that have to be an international treaty? No, because we're never going to get an international treaty. Keep in mind, China has not yet accepted, in the cyber realm, that the law of armed conflict, the Geneva Conventions and the protocols, even applies to cyber operations. General Gerasimov, two weeks ago, was speaking at a conference where he was talking about Russian technology and their interest in autonomous weaponry, and as an aside he said, yes, our first targets are going to be the political and economic structures of the enemy, much of which, by the way, is not targetable under the existing law of armed conflict. So the idea of getting a treaty, I think, is not palatable. I think the better thing is to develop the norms that you must go through in the testing process, so that a commander will have a reasonable basis to believe that the application of this particular technology (and it's a whole range of technologies) in this particular instance will do what he or she believes it will do, and that it will comply with the law of war. Is that going to be hard to do? Yes, because we don't even know what the weapon is yet, because we don't have fully autonomous neural-system-
based weapons. But, you know, lots of things are hard to do, and if we can't come to the point where we can reliably say that the weapon is going to operate as intended, lawfully, then we can't field it, and there may be some iterations of that kind of weapon that we can't field. But in the meantime, I think it would be very dangerous, very dangerous for the security of this country and the world, not to proceed. You know, we've heard a lot about the Vatican; I'll close with this. I was at a conference mostly about pacifism; you can imagine where I fit in on that, not very well. And there was a discussion, and I'm Catholic, but there's a very strong Catholic pacifist movement, that's correct, and I really respect them. But then I told them, I said, you do know the Swiss Guards are more than ceremonial, right? They looked at me; they didn't know, evidently, that the Swiss Guards are trained with modern weapons, and they have MP5s, and they have machine guns, and they have other use-of-force technology, and they're very sophisticated, to protect the Pope, as I want them to do, because ISIS has said they want to kill the Pope. And so we live in a world, unfortunately, where, as Plato's remark goes, only the dead have seen the end of war. And as long as that's the case, I'm kind of a John Stuart Mill sort of person: we have to be prepared to use force, and this is the next stage.

So you've raised a lot of really key points for debate, and I really appreciate it. I mean, when you think about the Pope example: their job is to be defensive, so the defense/offense division, I think, is very important in this discussion. Are we going to restrain militaries to designing and developing defensive weapons using more autonomous systems, or go further and go on offense? And where does that end? I mean, artificial intelligence seems to have no limit at this point, and it's happening so fast that I can imagine a world in which it's AI against AI, and it's so far removed from human control that it's somewhat meaningless in terms of actually having a military advantage. So I'd like you to consider that, and also to give Mary Ellen the chance, if you want, to respond to some of these points. And I want to raise one other point, which your deliberations touch on: this point that you raised about design, and at what point in the chain responsibility should go back. Technology doesn't just happen; it's human beings who are actually writing this code. So at what point in that design phase do you bring responsibility back and make sure that the ethics and law are truly baked into the writing of the code?

I want to agree with much of what Charlie and Jeroen both said, and I want to pick out, I think for our further discussion, whether along with the paradigm shift in technology we aren't also seeing an accompanying paradigm shift in the law. Both Jeroen and Charlie talked about what we need that's new in the law, but both also brought forward some old concepts of law, and I want to really raise with you whether they're right on that second part. One of the things that I think has happened as a result of drone technology is that the United States no longer sees the clear division between war and peace that it used to see. We're carrying out targeted killings in areas that 25 years ago we would have said are zones of peace, where you cannot do intentional killing with military force. That's gone. But the US
then tried to bring along a permissive regime from the military sphere to this new idea beyond war and peace distinctions, and I think that's what's got to change. That's what's got to be invested in Jeroen's idea of a new legal paradigm going forward. We've got to see, when there's no longer a limitation on the battlefield that we used to be able to see, what happens and what's the default, the right default position for law to build forward from. It's not these permissive rights to kill, but rather: if all the world is a battlefield, then none of the world is a battlefield; it's all a zone of peace. So I think the way forward is to actually think about the world the way the Chinese are thinking, in Charlie's example, when they refuse to see the Geneva Conventions applied to cyber. They are absolutely right, in my view: the cyber sphere is the communications and economic sphere; it's not battle space. We've got plenty of law, and we need more peacetime regulation, that kind of thinking, and to stop thinking about all these zones as free-fire zones where the US can use its snappy new technology to kill without limitation. We are causing ourselves to increasingly not only be the exception but to create a dangerous world, and that is not good for us, for Americans. So isn't the better way forward to shift that mentality to defensive, peaceful uses? And you can protect that soldier; my husband's a combat veteran; I'm all about defense. But think about what people like Andrew Bacevich have been saying for years: the best way to protect our troops is not to put them in harm's way so often, and in such no-win situations. If we could begin to shift to what we were supposed to do at the end of the Second World War and have a Department of Defense, I think you'd see many of these legal issues go away. Yes, we'll always have the need for a defense; I agree with that. But we certainly don't have to be doing the kind of really non-productive, counterproductive offense that we've been doing. And I think that answers your question, Ted: yes, let's shift that money, both for norm development but also tech development, into what we need for protection.

Yeah, I mean, we're facing, as I said, these problems in all sectors. Big data, you know: there's an ecosystem, a multiplicity, numerous actors involved in producing, using, analyzing, sharing the data. Who is responsible for the outcomes? So somehow we have to help ourselves to a more sophisticated and richer notion of responsibility, and an ability to share it.

Jeroen, can I just say briefly: do we have such notions in regular peacetime law? We have notions of product liability. We already know how to hold a bad designer responsible, but it's a whole different thing if you're holding her responsible for an unintended failure of a defensive system versus actually killing people unlawfully. You get out of that very difficult second parameter if you start thinking in these more day-to-day terms. But I don't think the Europeans have made that jump, despite, you know, we all remember the "they are Venus, we are Mars." If you get out of that, you know, I don't see the Europeans truly making the paradigm shift that the Holy See and others have in mind, and I think you're capable of it. I invite it; especially what you're doing, Jeroen, is so important. You've got this lagging "NATO, we want to be as big as the US in all these whiz-bang new technologies." That is destructive thinking, and I think you're the person who can
persuade your colleagues.

I agree, definitely. At this moment in time, as was also mentioned in the introduction, there is a kind of geopolitical battle going on, and so we see a rivalry in ideologies, rivaling conceptions of man and of society, that are being harnessed and supported by this new digital technology and played off against each other. And so we have a Chinese model, we have a European model, a Russian model, an American model, and I think that Europe is sticking to its guns, excuse the pun. It is really investing in what you were mentioning: every investment in R&D and technology should be a kind of responsible innovation; it should be aiming at problems to solve, at positive causes and aims, first and foremost, and then we have to proceed and work towards that in a responsible and careful way. That is the official policy in Brussels, and they are repeating and reiterating it with respect to this new digital dossier, including AI. And read the interview with Macron; I don't want to build everything on the basis of that interview, but I do think that that is the spirit that is now in Brussels. The European Commission is going to launch, in two weeks' time, a big AI initiative; it's going to be a huge flagship, and one of the first things that they have been asked to do is to come up with a normative framework, and the normative framework is going to build upon the EGE statement, the European Group on Ethics statement on artificial intelligence, robotics, and autonomous systems.

On your offense/defense question: yeah, that's a false dichotomy, and that's what really struck me, I think, in the Elon Musk letter; they talked about being opposed only to offensive weapons. Trust me, they really don't know that much about how militaries operate. Is a policeman's pistol an offensive or a defensive weapon? What about the MP5 that the Swiss Guards have, that's being used to protect the Pope, and that's also used by special forces all over the world? If we attack ISIS because they're turning Yazidi women into sex slaves, is that an offensive or a defensive use? And so, if we use an autonomous weapon for those purposes, is that an offensive or a defensive use? I think Mary Ellen raises a whole question about, you know, when we should use force; that's a different discussion. What we need to focus on here is how, when we do decide through democratic processes that force needs to be used, we can do it most efficiently and effectively. And I would suggest to you that it may be unethical, and even illegal, at some future point, not to use an autonomous weapon system, because if you look at Article 57 of Additional Protocol I to the Geneva Conventions, it requires commanders to choose the means or method which will be most protective of civilians, and that often may be an autonomous system. Because if people think that we have full control: no military on planet Earth, since the beginning of time, has had full control of what the human beings at the pointy end of the stick, under extreme stress when people are shooting at them, are doing. Anyone who thinks soldiers are exercising all this deliberative compassion and everything else doesn't know what real war is about. And I would invite your attention to a video, and I have my students look at this all the time; it used to be on YouTube; it's called "I'm Hit":
it's a camera mounted on the helmet of a soldier in Afghanistan, where he's trying to draw fire away, and rounds are pinging in, and he's firing back in a general area. That's the way things really happen. And if you have an autonomous system that isn't going to be worried about being hit, it will have the opportunity to apply force in a discriminating manner. But what we have to do, and I would suggest in terms of the engineering, it has to start at the very beginning, and here's the challenge: you have to have legal experts who understand the technology. Right now we have all kinds of super-smart internet lawyers, but if you don't understand the technology, you're not in the game. But the good thing is, because there are so many commercial applications of neural network technology, this is not something unique like a Manhattan Project that's only making nuclear weapons. This is something where there will be people in the community who can understand the law and also the technology involved.

Those are very good points about this being everywhere. I mean, you have that stuff on your smartphone and in your car, right? So that gives rise to huge dual-use issues. And the other thing that is very important: I also have to look at my own profession, the ethicists. They were, you know, asleep for the last twenty, thirty years, and they're waking up now to a completely new digital world. They didn't address it, like some of the lawyers, not all of them, of course; you're the exception. They didn't address it, and therefore we have, in part, the problems that we have now.

I agree, absolutely, but I would say the bigger problem now is that we don't have the international lawyers. Our students know all about tech, but we in this country have been moving away from education in, and deep commitment to, international law. I think that's our bigger deficit.

That's why we're having this event. So, there are some hands; there are microphones coming. Please tell us your name and where you're from, if you'd like, and keep your questions short, because there are lots of hands in the air. We'll start right here with this woman, and then the gentleman at the end of the row.

Hi, my name is Netra Halpern, with Peace Films. I wanted to add to the discussion between Mary Ellen and Charlie about what is defensive and what is offensive, and I wanted to bring in President Eisenhower's warning about the military-industrial complex (he originally had "congressional" in it): when is a country saying that it's about protection when it's really about economic or resource theft, basically, that they're waging the war for? And I'm wondering, I know the crime of aggression was considered a crime, though I'm not sure exactly what the status is; and also, how can the ICC start having influence over countries such as the US
And also: how can the ICC start having influence over countries, such as the U.S., that have not signed on to it, and how can the crime of aggression be stopped worldwide?

Steve Winters, independent consultant — I'd direct this to Mary Ellen. This is a point that didn't really come up, but you mentioned the possible leaking of AI out of the Defense Department, and then you said, well, what about antidotes for those? So I wanted to ask about the point Elon Musk brought up when he started his OpenAI initiative, which was that it would be dangerous for one great company — he had somebody in mind — or perhaps one defense department, to have AI abilities way beyond everybody else. And yet there's a big debate and a lot of resistance to the idea of everybody having AI, because then, they say, the bad guys will have AI. How does the ethics of that come out? Oddly enough, even President Putin said he wanted AI for everybody, in effect — not just for one country.

And we'll take one more before we come back to the panel — this young gentleman right here. Oh, I'm sorry, someone already has the microphone.

Thanks. My name is Mark McCarthy; I teach a course on AI and ethical issues over at Georgetown. My question has to do with the principle of proportionality — the rule that says you can't attack a particular target unless the consequences in terms of civilian casualties are justified by the military gain. What's the role of an AI system in making that kind of judgment, and is this principle of proportionality relevant to the ongoing discussions in Geneva?

An AI system is going to have to comply with the principle of proportionality. There's a good article that just came out by Ian Henderson and Kate Reece, at the Stockton Center, where they talk about the reasonable-commander standard, which is the law now for human commanders; an AI system is going to have to meet that standard. How do you know it will? It's going to be an iterative process, where they're going to have to continually test scenario after scenario, so that they know the system is going to be at least as good as, if not better than, a human being making that judgment. The problem is that international law doesn't spell out a rule for every possible scenario; it comes down to a reasonable commander in the circumstances, much as American tort law does — at some point there are factors unique to that circumstance on which they'll be judged. And under international law, you judge it from the standpoint of the information a reasonably well-informed commander would have had, not what may emerge after the fact. Because, let's face it, we have weapons that go wrong. I can tell you we've had bombs where the fuse was very sensitive and didn't operate the way we thought, and in many other instances accidents happen — sometimes terrible things happen in war for which there is no responsibility, because accidents happen in life.
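To make the general's point about iterative, scenario-by-scenario testing concrete, here is a minimal, purely hypothetical sketch in Python. Every name, threshold, and data point in it is an illustrative assumption — nothing here reflects how a real weapon review or any actual system works. It only shows the statistical shape of the exercise: comparing a candidate system's proportionality judgments against a reasonable-commander baseline across many generated test cases.

```python
# Purely hypothetical sketch: scenario-based testing of an autonomous
# system's proportionality judgments against a "reasonable commander"
# baseline. All names, thresholds, and data below are illustrative.
import random
from dataclasses import dataclass

@dataclass
class Scenario:
    military_advantage: float      # anticipated military advantage (arbitrary units)
    expected_civilian_harm: float  # expected incidental civilian harm (arbitrary units)

def reasonable_commander(s: Scenario) -> bool:
    # Baseline judgment: strike only if expected civilian harm is not
    # excessive relative to the anticipated military advantage.
    # The 0.5 ratio is an invented stand-in for that legal judgment.
    return s.expected_civilian_harm <= 0.5 * s.military_advantage

def candidate_system(s: Scenario) -> bool:
    # Stand-in for the system under review; here deliberately a bit
    # more conservative than the baseline, purely for demonstration.
    return s.expected_civilian_harm <= 0.4 * s.military_advantage

def evaluate(system, baseline, scenarios):
    # The review question: across many scenarios, does the system ever
    # strike where the baseline commander would refuse? The reverse
    # (refusing where the commander would strike) is also counted.
    over = sum(1 for s in scenarios if system(s) and not baseline(s))
    under = sum(1 for s in scenarios if baseline(s) and not system(s))
    return over, under

if __name__ == "__main__":
    random.seed(0)  # reproducible test set
    scenarios = [Scenario(random.uniform(0, 10), random.uniform(0, 5))
                 for _ in range(10_000)]
    over, under = evaluate(candidate_system, reasonable_commander, scenarios)
    print(f"strikes the baseline would refuse: {over}")
    print(f"refusals where the baseline would strike: {under}")
```

In real testing, the "baseline" would be panels of experienced commanders judging rich simulated scenarios, not a one-line threshold; the sketch is only meant to show why the comparison has to be run iteratively over large numbers of scenarios rather than settled in the abstract.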
There has been a lot of talk about ethics coming out of the big tech companies. A lot of it, actually, if you look at it, is a little bit of ethics washing. In Brussels they say, you know, as long as it's ethics, it's not regulation — so for the companies that's good news. And with OpenAI, people are still working in their own kinds of spaces on this stuff, which is really interesting, but I'm not impressed by all of the ethics that I've seen.

Let me just add very quickly: in the history of arms control, trying to keep your technology a secret has always failed. The best way — the way you can keep your country safe — is to invest in defense. So that comes to your excellent question, and Charlie and I are just going to have to disagree on this. We know when you have used major military force offensively — when you have violated the rules against resort to force; when you are responding to such a violation, you are using it defensively. We have embedded in our law very different conceptions of offense and defense. So you can design — and I'm confident that our colleagues in the AI world can help us create — defenses to what they can imagine will be the offensive uses, and that I'm very much in support of, as long as it's also being done with a clear-eyed view to the ethics involved. And just very briefly: yes, the world continues to understand the importance of these rules restricting resort to military force — just last December they agreed to add the crime of aggression to the International Criminal Court statute. I was, however, somewhat disappointed; it's a good-news, bad-news story, because the U.S., which is not even a party to the International Criminal Court, persuaded the many other members to add only a very diluted version of aggression, in order to give continuing space for the U.S. The U.S. fears that even where we don't have an actual treaty, or are not a party to a treaty, these norms escape their bounds. Very unproductively, I think, for our future living as moral beings in the world, we used our credibility to limit the crime, and we persuaded the Europeans to go along. I hope it's going to be a different story for autonomous weapons.

We're a little over time, but I'd like to go at least five minutes late — we started a little late, and this is such a rich conversation. If you need to leave, please feel free; there are a few more hands in the air. But I want to build on this analogy to arms control when you come back in the next round: are there much more robust measures around transparency and confidence-building that we need to adopt and put in place here, so that we can start building a sense of trust among our competitors? I have a hand here — yes — and then we'll come to you.

Thank you. My name is Liron Shilo; I'm a doctoral candidate at Georgetown Law. I have two questions — I'll make them short. First, to the Major General: I heard two presumptions in what you spoke of — one, that these systems are weapons, and the other, that the law of armed conflict applies. And we heard a little bit of it just now: you talked about proportionality and distinction and how something needs to apply these norms. But normally speaking, weapons don't apply proportionality and distinction; commanders do. Isn't that telling us that these things are not really weapons — that maybe they are something else? That's one question.

We'll take two more, just in the interest of time — the woman in the green scarf and the gentleman in the blue shirt.

Hi, thanks. I'm Sasha Radin, managing editor of the ICRC Humanitarian Law and Policy blog. Thanks to all of you. I just have a quick comment based on my previous job at the U.S.
Naval War College's Stockton Center: Lieutenant Colonel Chris Ford did write that article in his personal capacity, as all the articles there are written. I just wanted to say that.

Hi, I'm Matt Console, a grad student at the University of Maryland in international security and economic policy. My question is: given the chief countries leading in AI — the U.S., China, Russia, the U.K., and Israel — how will the regulatory frameworks need to change in order to accommodate rising powers such as China, given that they see a lot of past international law as being against their interests?

Just real quickly: you're absolutely right that commanders are responsible for using means and methods of warfare in compliance with distinction, proportionality, and so forth. But look at the Article 36 reviews that are required under Protocol I — Article 36, and I'm not talking about the NGO, I'm pointing to Article 36 of Protocol I — it requires the weapon to be capable of being discriminate and of complying with proportionality. That's why they are weapons, in my view. There are other people who will argue that they are means and methods of warfare; you get to the same place, so if somebody wants to say that, I'm fine with it.

Just to your point: international law — as we said before, and especially in this realm of artificial intelligence — and coming back to The Hague's position, and your offer to be a platform and a place where we can accommodate these kinds of talks and discussions: I think this is very much needed. Public international law in all its branches, together with human rights law, has to rise to the occasion and step up to the plate, because I haven't seen very much on this. You've mentioned the sources — there's going to be a lot of debate. Together with the Asser Institute, we are setting up a study that explores the possibility of running a big conference in The Hague on this topic, and I think that's very much needed.

First, thanks, Sasha. My point was just that his article is quite consistent with what U.S. views were — not that they are the U.S.'s views. An important distinction; thank you.

And I wanted to say that just as this last question is challenging all of us — so again, compliments, Ted — these developments are challenging us; they're really showing where our failings have come about. Within my area, international law, we've been very focused on a narrow view of human rights; we've been very focused on creating new exceptions to the law of self-defense; and we've lost sight of what really makes law law — how we can have law for a global community. That's where our challenge is. We're being brought to that challenge through these new technological developments. It's long overdue, and I hope this is going to be one of those positive and constructive discussions on that road.

I was going to make a final point that's rather grim, but I do want to keep in mind this overwhelming concern I've heard repeatedly about China's advantages in this space, because they are able to use big data in a way that, frankly, our democratic societies are reluctant to: they're using it for social control, and it can be weaponized at a faster rate, because AI is all about learning from your data. So we do have, I think, a serious geopolitical problem on our hands. We here will continue to try to work on that angle, working with all of you — and thank you very much for coming. We
look forward to the next one.

Thanks for watching — be sure to like and subscribe for more videos from Brookings.
Info
Channel: Brookings Institution
Views: 9,267
Rating: 4.8055553 out of 5
Keywords: Brookings Institution, Justice Stephen Breyer, Justice Stephen Breyer lecture on international law, Autonomous weapons, international law, United Nations, artificial intelligence, AI, international codes of law, ethics, human rights, Bruce Jones, Pauline Krikke, Mary Ellen O’Connell, Ted Piccone, Jeroen van den Hoven, Maj. Gen. Charles J. Dunlap
Id: -vvryhsECno
Length: 88min 35sec (5315 seconds)
Published: Thu Apr 05 2018