2019 Drell Lecture: Autonomous Weapons and the Future of War

Transcript
Good afternoon, everybody. My name is Colin Kahl. I'm the social science co-director for the Center for International Security and Cooperation here at Stanford. Welcome to the Drell Lecture. This lecture series is named after Sid Drell, who you can see on the screen there, a theoretical physicist, arms control specialist, and a founding co-director of CISAC. The series was established in 1994 by Bud and Sicily Wheelon to honor Sid, with whom Bud had worked closely on satellite reconnaissance when he was the CIA's Deputy Director for Science and Technology.

Since its founding in the early 1980s, CISAC has focused on the intersection of science, technology, policy, and international security, bringing together scientists and social scientists to analyze some of the greatest challenges facing humanity. Initially CISAC stood for the Center for International Security and Arms Control, and the center's work focused primarily on issues of nuclear security, arms control, and disarmament. Over the years the center's name changed to the Center for International Security and Cooperation, although we were lucky enough to keep our acronym, and the research focus has broadened to include not just nuclear security but biosecurity, environmental security, and of course digital security. For years CISAC has been at the cutting edge of policy research on cybersecurity here at Stanford, and now our focus on digital security has expanded to include the implications of artificial intelligence and machine learning for international security.

It is in this context that we are thrilled to have Paul Scharre join us here at Stanford to deliver a Drell Lecture on autonomous weapons and the future of war. Paul, whom I'll welcome out here in a minute, is a good friend of mine and a senior fellow and director of the Technology and National Security Program at the Center for a New American Security, a nonpartisan Washington, DC-based think tank. He's the author of Army of None, which came out last year, the subtitle of which is Autonomous Weapons and the Future of War, and which Bill Gates called one of the top five books of 2018. Paul formerly worked at the Office of the Secretary of Defense at the Pentagon, where he played a leading role in establishing policies on emerging weapons technologies, including leading the working group that drafted DoD Directive 3000.09, which established the Pentagon's policy on autonomy in weapon systems. He is also a former infantryman in the Army's 75th Ranger Regiment and served multiple tours in both Iraq and Afghanistan. Paul will speak to us for the next 30 to 40 minutes, and then he'll engage in a conversation with Dr. Radha Iyengar Plumb from Facebook and Professor Jeremy Weinstein from the political science department here at Stanford, followed by your questions.
Hopefully you had a chance to pick up an index card on your way in. If you did and you have a question, I recommend you write it down at some point during the lecture; the cards will be collected afterwards and sorted by Jeremy and Radha. So without further ado, please join me in welcoming Paul Scharre to the stage.

Thanks, Colin, for that introduction, and thank you all for coming. I'm Paul Scharre from the Center for a New American Security. I want to talk to you today about how technology is evolving in warfare, and in particular about the very difficult challenge of what artificial intelligence and autonomy may mean in war.

We are already well into a military robotics revolution, which has been unfolding for the past two decades as countries around the globe build military robots of various shapes and sizes. This is a map of armed drone proliferation around the globe. It's in the book, and it's actually already a little out of date as more countries get hold of armed military robots. At least 90 countries have drones of some kind today, including many non-state groups, and well over a dozen have armed drones. And you can see where they're coming from: over ninety percent of international armed drone transfers come from China. So the US does not have a monopoly on this technology; we don't get to control how it develops or how it spreads, and there are really difficult questions we are going to face about how humanity deals with this very powerful technology going forward.

It's not just the spread of existing drones but their future evolution that is particularly concerning. This is the X-45 drone, now in a museum, but it was one of the first prototypes for a stealth combat drone that would be used in future high-end contested environments where advanced militaries fight against each other. With each generation we are seeing not only advances like stealth features (you can see this one is designed to incorporate radar-defeating stealth technology based on its shape) but also more advanced autonomy as a feature of all of these drones. Just as we see automobiles gaining more advanced autonomy with each generation, the same is true of robotic weaponry. That raises the question, as we get further down the line, of how comfortable we are with delegating some of these decisions to machines. What happens when a Predator drone has as much autonomy as a self-driving car? How do we feel about machines making life-and-death decisions in war? That's the topic of this book, and I want to walk you through some examples of things that countries are building around the globe today.

This is a totally uninhabited boat. It drives autonomously; you can see there's nobody on board. It was used in a demonstration by the US Navy on the James River in Virginia, about five years ago now, and in this particular demonstration it was not one but five boats used as a swarm, working cooperatively together. The Navy is interested because its ships face threats from other countries and from terrorist groups who might try to get close and attack. In this demonstration they had a US Navy vessel making a mock strait transit, as if it were, for example, transiting the Strait of Hormuz in the Persian Gulf.
They tasked five of these boats to intercept a suspicious vessel approaching. It wasn't actually a suspicious vessel, it was another US ship, but they said, okay, we're going to pretend there's this suspicious thing approaching, and sent the five boats out to intercept it, all autonomously, and then work together to encircle the other vessel. The demonstration was quite public, and it was pretty clear the Navy was doing some messaging to potential adversaries. They didn't say Iran, but I'll say it: probably Iran, which routinely harasses US Navy ships in the Strait of Hormuz. One of these boats, not this one but another, had a .50 caliber machine gun on it. As they were doing this, in the press briefing, one of the reporters asked: what's up with the gun? You've told me all about this autonomy and swarming; who's controlling the gun? And the Navy's answer, I kid you not, was "we haven't decided yet." That made a very exciting headline for the reporter, but it actually speaks to some of the challenges militaries are facing in this space: the technology is moving forward, and they sometimes haven't yet decided what the rules are going to be for how they will use autonomy in the future.

It's not just the United States building these technologies; they are being built all around the globe. This is an Israeli system called the Guardium, a ground vehicle that, again, drives autonomously with no one on board. It has reportedly been used along the Gaza border, and another version, not the one in this photograph, is also reportedly armed. Israel has said that while the vehicle may drive on its own, humans will always be in control of any weapons use on board. But not all countries may see it the same way. This is the Russian Uran-9 ground robotic vehicle. It's a much larger vehicle; if it carried people we might call it an armored personnel carrier. It carries a heavy-caliber machine gun and anti-tank rockets. The rockets are actually on little extendable arms that stretch out to allow it to hide behind a hillside or a berm and ambush tanks, presumably NATO tanks. Russia actually deployed this to Syria last year and had it operating in combat operations for a few weeks before pulling it back. According to press reports it didn't do very well; there were communications problems in urban environments within Syria, which suggests it's not very autonomous, at least today. But Russia is experimenting with the technology, and Russian military leaders have said their intention is to build fully roboticized units in the future that are capable of independent operations. So when we look at the way countries are talking about this technology, the way Russian or Chinese leaders talk about it is not the same as the more hesitant comments, or the comments about keeping humans involved, that we hear from Western countries.

This is the X-47B, now also in a museum, but at the time a prototype demonstration aircraft built by the US Navy. You can see it here landing on an aircraft carrier. It was the first unmanned aircraft, uninhabited, no one on board, to autonomously take off and land from an aircraft carrier and to do autonomous aerial refueling, so at the time a groundbreaking aircraft. As you can see from its shape, it is also intended to be a bridge toward a stealth combat aircraft in the future.
The Navy has since changed its plans, but a number of countries around the globe are building aircraft like this: not just the US but also the UK, France, Russia, China, and Israel, all building prototypes for combat aircraft that could operate inside very contested environments. This matters because a lot of today's drones are not very sophisticated; they could easily be shot down or have their communications jammed. These aircraft are supposed to operate inside very heavily defended areas, inside the enemy's airspace, in the same environment where enemies will have advanced air defense systems that would also likely be able to jam the aircraft's communications links. That poses very difficult questions about what you want it to do when it's out on its own. This simple question, when the thing is on its own and someone has jammed its communications, what should it do, is what motivated the Department's policy back in 2012, an internal policy guidance that shaped DoD design of these systems going forward. At the time I was working in the Pentagon, and I posed that question to my boss. I said, what do you think, boss, what should happen? He said, well, I think it should do this. I said, that's great, but that means nothing, because who are you? He said, well, I'm the Deputy Assistant Secretary of Defense. I said, that's great, but it's not written down anywhere, is it? And so that led to the policy directive.

Consider the variety of possible answers. You could say it is going to come home if it loses its communications links. You could say it will be allowed to take pictures and do surveillance but not drop any weapons; that's another option. Countries could decide to allow it to strike pre-planned, fixed targets, which is roughly how a cruise missile functions today, and call that okay. Countries could also go further: there are many targets that militaries care about that are mobile, that are relocatable. Militaries have learned to move their very high-value assets around so they can't be targeted from the air, and so if the aircraft finds what the military calls targets of opportunity, you might let it attack. That might sound very risky, and it certainly adds another element of risk, but there are some targets you really might care about, things like the nuclear-tipped mobile missile launchers that North Korea has. Having an aircraft like this operate inside North Korean airspace in the event of a war and strike those targets before they could be launched could save millions of lives. And what about the ability to defend itself? If someone shot at this aircraft, could it be allowed to shoot back? These are all very real, practical questions militaries will have to face as they put systems like this into the field in the coming years. There are no clear-cut answers, and in many cases countries have not yet decided for themselves what the answers will be.
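To make concrete what "writing the rules down" might mean, here is a minimal, purely illustrative sketch of how a lost-link rule set like the options just described could be encoded as an explicit, human-authored policy. The option names, the `LostLinkPolicy` enum, and the `weapons_release_allowed` function are hypothetical illustrations, not any actual DoD or vendor interface.

```python
from enum import Enum, auto
from dataclasses import dataclass

class LostLinkPolicy(Enum):
    """Possible human-authored rules for an aircraft that loses its comms link (illustrative)."""
    RETURN_HOME = auto()               # abort the mission and fly home
    SURVEIL_ONLY = auto()              # keep collecting imagery, release no weapons
    PREPLANNED_TARGETS_ONLY = auto()   # strike only fixed targets approved before launch
    TARGETS_OF_OPPORTUNITY = auto()    # may engage mobile targets matching its tasking
    SELF_DEFENSE_ONLY = auto()         # weapons release permitted only if fired upon

@dataclass
class Detection:
    """A candidate target the aircraft has sensed while out of contact (illustrative)."""
    preplanned: bool     # was this specific target approved by a human before launch?
    hostile_fire: bool   # is this target currently firing on the aircraft?

def weapons_release_allowed(policy: LostLinkPolicy, det: Detection) -> bool:
    """Apply the pre-delegated rule set; every branch was written by a human ahead of time."""
    if policy is LostLinkPolicy.RETURN_HOME or policy is LostLinkPolicy.SURVEIL_ONLY:
        return False
    if policy is LostLinkPolicy.SELF_DEFENSE_ONLY:
        return det.hostile_fire
    if policy is LostLinkPolicy.PREPLANNED_TARGETS_ONLY:
        return det.preplanned
    # TARGETS_OF_OPPORTUNITY: the widest delegation, and the hardest to bound in advance
    return True

# Example: under a surveillance-only rule, even a pre-approved target is off limits.
print(weapons_release_allowed(LostLinkPolicy.SURVEIL_ONLY,
                              Detection(preplanned=True, hostile_fire=False)))
```

The point of the sketch is simply that whichever option a country picks, the choice has to be made, and written down, before the aircraft ever loses its link.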
It's not just in the air; we're also building ships and ground vehicles, in this case a very large, sizeable ship. This is the US Navy's Sea Hunter. To be clear, many countries are building things like this, but for copyright reasons it's a lot easier for me to show you pictures of US systems, which are in the public domain. This is a totally robotic vessel. There are obviously people on it in this photograph, but it sailed from San Diego to Hawaii totally autonomously earlier this year, and it is now part of the US Navy's fleet. This one is designed to track enemy submarines, so there are no weapons on board at all. You'd think that would make things a lot easier, but now you have to worry about things like a hostile boarding. A couple of years ago there was an incident where the Chinese Navy simply seized control of a US Navy robotic vessel, a small underwater robot that was doing reconnaissance in the South China Sea; a Chinese ship came up and took it. In that case China gave it back after the US protested. But for something like this there is very sensitive equipment on board that you might not want an adversary to get their hands on. So would this be allowed to defend itself? Would you be comfortable with that, with either lethal or non-lethal weapons?

It's not just vehicles; we're also seeing more advanced autonomy in missiles. This is a screenshot from a video of an advanced missile called the Long Range Anti-Ship Missile, or LRASM. One thing it can do that's a little bit new, a little bit different: while humans still choose the target, in this case an enemy ship, the missile is allowed to change its route on the way to the target, so if a pop-up threat appears along the route the human indicated, the missile can navigate around it on its own. It's one example of the very incremental autonomy being developed with each generation of these systems.

Now let me talk about some things that have existed for a while, because autonomy is not a new phenomenon in military systems. This is the high-speed anti-radiation missile, a homing missile that goes after enemy radars. It's been around for decades, and missiles like it are widely used by countries around the globe. In fact, homing munitions date back to World War II, to the very first acoustic homing torpedoes that could zero in on the sound of a ship's propellers. Weapons of this type have a sensor on board; they can detect an enemy target and maneuver to hit it. That's very useful because lots of targets in warfare move. Imagine you're on a submarine trying to hit a ship: that ship is going to move, so these kinds of missiles and torpedoes are used widely by militaries around the globe. Many of them are what you might call fire-and-forget weapons: once released, the weapon is not coming back, and it can make some very simple decisions on its own. But by and large, whenever these are used, humans are still choosing the target; a human is deciding that this is a valid enemy target and launching the munition to attack it.

So in this diagram I'll distinguish between what I'm going to call semi-autonomous weapons, here in blue, where a human chooses the target, a human says I have some indication that there is a valid enemy target at this point in time and space, versus what I'll call fully autonomous weapons, in red, where a human might launch a weapon to go over a wide area and search for enemy targets, and then the weapon itself makes the decision about which ones to attack. The ones in blue are widely used today. The things in red are a little bit fuzzier, so let me give you some of the gray area. There are at least 30 countries today that have long had in use human-supervised autonomous weapons: weapons that can survey the environment, identify potential targets, and attack them all on their own, but humans supervise their operation and can turn them off if they're not happy with them. This is the US Army's C-RAM, the counter rocket, artillery, and mortar system.
It's a gun that was originally designed for US Navy ships and has been placed on land. Other examples include the Army's Patriot air defense system and the Navy's Aegis combat system. Again, at least 30 countries use things like this today, and they're used for situations where the speed of incoming threats might overwhelm humans' ability to respond. If you're of a certain generation you might remember the old Atari game Missile Command: no matter how good you are, there's some level where the incoming threats overwhelm your ability to respond and you lose the game. It's the same thing in the real world, just with much higher stakes. These systems are used for situations where someone might have to switch to a fully automatic mode to shoot down incoming threats to defend a ship, a land base, or a ground vehicle.

In an offensive capacity there are some isolated examples of weapons like this being used. This is the Israeli Harpy drone. It's a loitering munition: it flies over a wide area, hunts for radars, and when it finds one it attacks it all by itself. It's a kamikaze weapon; you don't want it coming back at you. It has a seeker in the front and a warhead, and when it finds an enemy radar it dive-bombs into it. It's been sold to India, Turkey, China, and South Korea, and perhaps not surprisingly, China has reportedly reverse-engineered its own version. This is not a particularly new invention; loitering munitions date back to the 1980s. This image is a little fuzzy because it's a photograph of a pamphlet from the US Navy from the 1980s, showing a missile called the Tomahawk Anti-Ship Missile, which was designed to go over the horizon and hunt for Soviet ships. It would be launched from a ship or a submarine, fly out over the horizon, and then search for ships. The way this might work is that a maritime patrol aircraft would identify that there was a Soviet ship in an area; by the time the missile got there, the ship would have moved, so the missile could fly its own search pattern.
This was in the US inventory for about a decade and was taken out of inventory in the '90s. Talking to US Navy officers from that time, one of the things they said is that there was a lot of discomfort with this weapon. People were concerned about two big things. One was: when it gets out there and turns on its seeker and starts looking, what is it going to find? The technology of the '80s was not very precise. It could distinguish between a ship and ocean water, but it couldn't accurately discriminate between a Soviet warship and, say, a merchant vessel. That was one concern. The other was a little more practical: if you don't really know where the enemy ship is, why are you launching the missile in the first place? There was some degree of uncertainty, and a concern that you might waste the weapon. For militaries these are not only expensive, they're also very scarce; a ship only carries so many on board. If you launch it and it misses its target, you've wasted a very scarce and valuable asset in wartime. So we have not seen this technology widely proliferate around the globe, even though some of it dates back, in crude form, several decades. What may change the equation is drones: you could send a drone out on patrol, and if it didn't find the target it would come back, and you could send it again the next day, and the next day, and the next. So we may begin to see more freedom in the use of autonomous systems now that we have recoverable drones.

There are also people pushing the boundaries of this technology in things like swarming applications. This is a demonstration of swarming aircraft done by the Naval Postgraduate School down in Monterey, California. They let me come out, observe some of the demonstrations, and interview them, and I open the book Army of None with a scene where they're doing a swarm-versus-swarm aerial dogfight. These are real planes, little Styrofoam planes they built very cheaply, where the most expensive thing on board is the GoPro camera, and they're flying them in the air. They're not actually shooting; that part is simulated, but the maneuvering between the aircraft is real, and they're trying to work up to a 50-on-50 aerial swarm dogfight. In the one I saw they had a red swarm and a blue swarm, ten versus ten, up in the air. The really interesting thing is that when the referee gave the countdown and said go, there was a human sitting at each laptop commanding each of the two swarms, and all the humans did was push enter. Once that was done, everything else was totally autonomous, all done by the aircraft individually. That's one of the areas where we're seeing warfare evolve: people are being pushed back from the edge of the battlefield, not just physically but also cognitively, as more and more decisions are made by these systems operating at machine speed.

It's not just physical systems; the same is happening in cyberspace. This is a diagram showing the spread of one version of Stuxnet across the internet. Stuxnet spread across USB drives and networks in search of what security researchers presume was its intended target, Iranian centrifuges at Natanz, at which point it deployed two encrypted payloads: one to sabotage the centrifuge operations, and another that recorded normal operations and played them back to the people monitoring them, like the looped bank surveillance video in a heist movie, telling everyone that everything is fine.
What's really interesting about this technology is that, unlike many systems where, if you got hold of a physical object, you would still have to find ways to replicate and rebuild it, these are effectively open-source weapons. This code has already been reused by others in malware and other attacks since then. And it's not just Stuxnet; we continue to see autonomy evolve in cyberspace. This is the winner of the DARPA Cyber Grand Challenge from 2016, built by a team out of Carnegie Mellon. The winning computer was named Mayhem. I understand folks from DoD were out here last week talking about ethics and principles; one of the challenges the department often faces in this space is that they are simultaneously telling people, don't worry, we're going to use this stuff responsibly, while they have people naming things Mayhem, which probably doesn't calm anyone's fears. In this case, what the system does is autonomously search computer networks to find cyber vulnerabilities and then either defensively patch them or offensively exploit them. It's the same technology, totally dual-use. Here it was being used in a competition among computers, but it has also been pitted against human hackers since then. It's not better than the best human hackers in the world, but in competitions it has ranked in the top 20. That's pretty good, and more importantly, it's much easier and cheaper to replicate this than to grow a new cybersecurity specialist, which takes a lot of time. This kind of technology could be used to help secure the billions of very insecure IoT devices now being put on networks, as an automated way to test technology beforehand and raise its level of security, much more cheaply than trying to create hundreds of thousands of new cybersecurity specialists. But it has offensive applications as well.

Now I want to talk briefly about some of the legal, ethical, and strategic questions surrounding this technology. Each of these is a whole chapter in the book, so I'm going to condense things to a couple of highlights. Let's talk about the laws of war for a second, since that's one of the first things that comes up: what do the laws of war say about this? What's interesting is that they don't say very much at all. There is nothing in the laws of war that comments one way or the other on autonomous weapons. There are two ways to view this. One way, and this is what I often hear from Western military legal experts, is to say that's because the laws of war are about effects on the battlefield. The laws of war lay down a set of principles that militaries have to follow in warfare: things like the principle of distinction, that militaries can only intentionally target the enemy and cannot intentionally target civilians, or the principle of proportionality, which acknowledges that some civilian deaths and collateral damage happen in war, that they are an unfortunate but very real and unavoidable part of warfare, but that any civilian deaths cannot be disproportionate to the military necessity of the target being attacked. What some of these experts will say is that if autonomous weapons could comply with these principles, comply with them better than humans, then we should use them. Maybe they're not there today, but if they can get there someday, then we actually have an obligation to use them.
It's just the same way that someday self-driving cars will be better than humans; we'll put them on the roads and we'll save lives, and maybe we can use the same technology to save lives in warfare. The other view is that the reason there is nothing in the laws of war about autonomous weapons is that it was simply obvious throughout all of human history that people were making these decisions, and that people still should be making them, and now we need to write that down, because now it's a choice. There is a variety of arguments for why that might be a good idea, anywhere from "machines aren't good enough" to the claim that even if machines were good, humans might still need to be in charge for reasons of accountability or moral responsibility.

I also want to talk about some of the ethical issues that come up. The laws of war give one framework for thinking about warfare, but there are also ethical concerns that lie outside the laws of war. To illustrate this, I want to talk about an experience I had as an Army Ranger when I was fighting in the wars in Iraq and Afghanistan. There was an incident early in Afghanistan where I was part of an Army Ranger sniper team sent out to the Afghanistan-Pakistan border. We were looking for Taliban fighters. We infiltrated that night under cover of darkness into terrain that was not too dissimilar from this; this is not a photograph of exactly where we were, but it was a place like this, very little vegetation, very little cover, and when the sun came up we were very, very exposed. A farmer came out in the morning to relieve himself in the fields, looked up, and saw about eight of us with our heads bobbing above a small rock outcropping; we had set up in what was basically this guy's backyard. He tried to pretend he hadn't seen us and then scurried back inside, and we knew we were compromised. We expected to see some Taliban fighters coming after us. What we did not expect, because this was still pretty early in the wars, was what they did next, which was to send a little girl to scout out our position. She came along, about five or six years old, with a couple of goats in tow, ostensibly as cover, as if she were herding goats, and she walked a long, slow circle around us. She was not very sneaky; it was pretty obvious she was there to scout us out. We heard what we later realized was the chirping of a radio she had on her, probably reporting back information about us. So we watched her, and she watched us, and eventually she left, and some fighters did come after that. We took care of them, and the gunfight that ensued brought out the whole valley, so we had to leave.

Later we were talking about how we would deal with a similar situation: what happens if we come across someone and we aren't sure whether they are a civilian or spotting for the enemy? One thing I can tell you never came up in the conversation was the idea of shooting that little girl. No one raised it; it was not a topic of discussion. What's interesting is that under the laws of war it would have been legal, because the laws of war don't set an age for combatants. Your status as a combatant is based on whether or not you are participating in hostilities, and by scouting for the enemy she was participating in hostilities, just the same as if she were an eighteen-year-old man doing so. So if you built a robot to perfectly comply with the laws of war, it would have shot this little girl. I think that would have been wrong in this instance, if not legally then morally.
It's certainly not consistent with the values I was raised with, or those of my team. But it begs the question: how would you design a robot to know the difference between what is legal and what is right, and how would you even begin to write those rules down ahead of time, if you didn't have a human there to interpret them and to bring that whole set of human values to those decisions?

Lastly, I want to talk about a really important issue: international stability. What would a world of autonomous weapons look like if countries move forward with this technology? We might have a situation where countries are using lots of automation, operating at machine speed, in competitive environments where countries aren't willing to share their algorithms with one another. We have one example of what this might look like, and it's stock trading, where we have an arms race in speed among algorithms and surprise interactions among them, for example flash crashes, and we've seen the accidents that come out of that environment. This is the stock market on the day of the big flash crash in 2010. What's interesting is that, first of all, there is still dispute among different researchers over what drove the event. It's clear there was a combination of factors: volatility in the market; algorithms with some brittleness in their behavior that continued executing their programming even after market conditions changed dramatically; high-frequency trading, which looks like it exacerbated things; as well as people deliberately manipulating and spoofing algorithms and finding ways to take advantage of them. All of those are conditions that might exist in a warfare environment. You may have heard less about this, because what regulators have done to deal with the problem is not actually to fix it; they have installed circuit breakers that take a stock offline if its price moves too quickly. That doesn't stop the event from occurring; mini flash crashes continue to occur, and in fact a few years ago there was an incident where over a thousand circuit breakers were tripped across multiple markets in a single day. What regulators have merely done is install circuit breakers to mitigate the consequences. Nothing like that exists in warfare; there is no referee to call timeout if things begin to spiral out of control. If countries are going to build in safeguards, they have to build them themselves, or actually work hand-in-hand with potential adversaries, which is very challenging.
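As a rough illustration of the kind of safeguard just described, here is a minimal sketch of a price-move circuit breaker. The 10% threshold, the window length, and the class and method names are illustrative assumptions, not the actual rules any exchange uses.

```python
from collections import deque

class CircuitBreaker:
    """Halt trading in a symbol if its price moves too far, too fast (illustrative only)."""

    def __init__(self, max_move: float = 0.10, window: int = 300):
        self.max_move = max_move            # e.g. a 10% move within the window trips the breaker
        self.prices = deque(maxlen=window)  # rolling window of recent prices (e.g. last 300 ticks)
        self.halted = False

    def on_price(self, price: float) -> bool:
        """Record a new price; return True if trading should be halted."""
        self.prices.append(price)
        if self.halted:
            return True
        reference = self.prices[0]          # oldest price still in the window
        move = abs(price - reference) / reference
        if move > self.max_move:
            self.halted = True              # take the stock offline; humans sort out what happened
        return self.halted

# Example: a sudden 15% drop inside the window trips the breaker.
breaker = CircuitBreaker()
for p in [100.0, 99.5, 99.0, 85.0]:
    if breaker.on_price(p):
        print(f"halted at {p}")
```

The contrast drawn above is that markets at least have a referee who can impose this kind of pause, while in warfare any analogous safeguard has to be engineered into the systems themselves or negotiated with a potential adversary.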
Countries have been coming together at the United Nations for several years now, since 2014, to discuss autonomous weapons. This is a photograph of me and a number of other folks in the international community: people from Human Rights Watch and the consortium of 60 nongovernmental organizations that have come together to argue for a ban on autonomous weapons, people from the ICRC and other international organizations, and around 100 countries that have come together to discuss this. But diplomacy is moving very, very slowly on this issue; it is certainly significantly out of step with the pace of technological advancement.

I'm going to close with a quote from General Paul Selva, the Vice Chairman of the Joint Chiefs of Staff, the number two person in the US military, seen here in the background next to former Deputy Secretary of Defense Bob Work, who during his time at the Pentagon was a major champion of robotics and autonomy. General Selva says: "I think we should all be advocates for keeping the ethical rules of war in place, lest we unleash on humanity a set of robots that we don't know how to control." I think this is really interesting because this is, again, the number two military official in the US talking about humanity, talking about ethics, expressing the sentiment that we want to keep humans in control of these decisions. But it's also interesting because of the question of how you put this into practice. Let's say you said, okay, this is it, we're going to do this, we've got it. How do you tell an engineer what to build? That's the challenge, not just for the US but for others: as you move forward with this technology, how do you find ways to use it that might make war more precise and more humane, without losing our humanity in the process? Thanks very much, thanks for coming today, and we've got lots of time for an interesting discussion to follow.

Great, thank you, Paul, that was terrific. I'd like to now ask Radha and Jeremy to join Paul on the stage for our conversation, and as they come up let me tell you a little bit about each of their backgrounds. Dr. Radha Iyengar Plumb is the head of product policy research at Facebook and an adjunct economist at the RAND Corporation. Previously she served in senior staff positions at the White House National Security Council, the Department of Defense, and the Department of Energy, and she has her PhD in economics from Princeton. Jeremy Weinstein is a professor of political science, a senior fellow at the Freeman Spogli Institute for International Studies, and a non-resident fellow at the Center for Global Development in Washington, DC. His research focuses on civil wars and political violence, ethnic politics, and the political economy of development, democracy, accountability, and political change. He also served as deputy to the US ambassador to the UN and on the NSC staff during the Obama administration. But perhaps most germane to today's discussion, Jeremy helped design and co-teach a core course in the Stanford computer science department to help students grapple with the ethical, social, and political implications of emerging technologies; in other words, how do we make sure that all those folks we're graduating who create those killer apps don't create killer apps. So with that I will turn it over, and they'll talk amongst themselves with us all eavesdropping for about 30 minutes, and then we'll go to your questions. I know the index cards have been collected, and Radha and Jeremy will curate those.

Great. So I thought I would start us off, and as Colin warned you, I'm an economist, so I'm going to start far away from the ethical discussion and talk a little bit about some of the pragmatics of what Paul is saying, and pepper him with a couple of questions. To start with, taking a step back from whatever environment we're looking at and fretting about today, it's helpful to think a little bit about the counterfactual, and again, I warned you I'm an economist, so I'm going to do this: what are we comparing this to? That's not to say there aren't things to worry about or that there aren't real concerns, but it's helpful to distinguish between the concerns and things we're going to have to deal with regardless of automation, regardless of technology and innovation, and the things that are unique to this environment.
Parsing those things has a couple of useful elements. One: at its core, when we talk about autonomous weapon systems or autonomous anything, what we actually mean is human-programmed machines, and I want to keep using that terminology, because ultimately the decision we want to make is how much of a rules-based, explicitly consistent decision architecture we are comfortable with. And at the end of the day, when you say it like that, you think, well, I kind of like the idea of consistent decision-making; a rules-based system might make sense. A lot of the constraints we want to put on these systems are ultimately themselves rules-based systems, like laws or regulations or circuit breakers; all of those are rules-based answers to a fundamentally rules-based problem. So in separating out this idea of autonomy versus human programming, it's helpful to understand what is fundamentally different, and how much the fundamental differences are driving our concerns, versus things like speed or scope, where problems that already exist might just exist more. That doesn't make those problems not bad, it doesn't make them not problems, but it makes thinking about how to deal with them potentially different from dealing with genuinely new problems. So question one for Paul, and I'll ask three and then stop, is: what is fundamentally different?

My second question is well illustrated by Paul's story about the little girl, so I want to return to it. We really like judgment when it protects little girls with goats. We don't necessarily like judgment when it means that we feel very uncomfortable with collateral damage in certain settings, with certain types of victims who maybe look more like us or feel culturally more like us, but are much more comfortable with collateral damage and civilian casualties in settings where we don't. In that sense, our judgment intuitions, and our tolerance for different types of collateral damage, might actually not be that useful. So the second question is: in the fundamental difference between these rules-based, human-programmed machines and human-executed operations, how much do we want this type of consistency, and where is it strategically valuable?

And the last one, and I'll close with this: at the end of the day, what we really need to think about is how to assess the risks and benefits of these systems in the context of what is fundamentally different. In particular, Paul, it would be interesting to hear from you on two kinds of assessments. One is the question of what the benefits and risks are, how we should think about valuing them, and how we should think about measuring them in a consistent way. But two, or B, whatever: how should we think about what that means for when we're comfortable launching new technologies? By that I mean that at some point we're just trading off risks; we have a benefit and cost analysis, and there are some fundamentally new things we're not going to know, that are not going to be revealed until launch. So how do we decide when it's okay to put this on the battlefield and see what happens, and when do we want to hold back, either for sort of strategic reasons or for sort of values-based, ethical, or legal reasons? I'll stop there.
Okay, I will try to remember all of those, but harass me if I miss one. I think you really hit the nail on the head in the way you framed this: it's not really that you're giving up human decision-making, it's that you're embedding human decision-making in the machine ahead of time, and so you're getting the benefit, or the handcuffs, depending on your point of view, of consistency. Someone once described autonomy to me as a flywheel for human agency. There are a couple of aspects to this. One is that there might be some things where you just don't have good enough data or good enough sensors to allow a machine to perform those functions reliably. Okay, that's fine; you can set those aside and say we're not there yet. That brings you to the question, as the technology evolves, of what you want to do when it's no longer a question of whether the machine can do it, but whether the machine should do it.

There are some things where I think the answer will be yes: when there are clearly defined parameters for what better performance means. If you're trying to do medical diagnosis, you want the right diagnosis. What the right decision is afterwards, what the right course of treatment is, may not have a clear-cut answer; it might depend on a discussion between you and your doctor; but you want the right diagnosis to start with. Self-driving cars are another good example: getting from point A to point B without hitting anyone is better performance, and that's a place where, once the machines are good enough, you're going to want to automate.

But then there is this other question about judgment, and I think there are clearly areas where you can see the argument on either side. Take sentencing, for example. You might have issues of fact in criminal justice, did this person kill this other person, there's a fact there; but then there's the question, once we've determined what the facts are, of what the right punishment is. That's a place where I can see strong arguments on either side. On the one hand, you might say, look, we don't want to be stuck with rigid rules about what the punishment is; we want a human who can look at the whole situation, who can try to understand whether this person feels guilty, whether they're remorseful, whether they've tried to make amends afterwards, and what the human cost of their actions was, and those are things you might want factored into the decision. On the other hand, we also know that humans have biases and humans make mistakes, and there are huge examples in the criminal justice system of racial injustice and bias in the way our system handles sentencing, and those are places where algorithms could help. So those are challenging questions. It's also worth pointing out that we don't have to make a binary choice between one or the other, and that's probably the best outcome in many of these situations, particularly when there isn't an element of speed and you can take the time to pause: could we use algorithms to correct some of the biases people bring, while also allowing humans to weigh in? People have looked at this in medicine, in trying to combine doctors and machines together. There are ways to do this wrong and get suboptimal outcomes, but there are ways to do it better, for example by allowing doctors to override the machine, but requiring that when they do, they explain why, what their reasoning is.
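As a rough sketch of that override-with-justification pattern, the workflow below accepts a machine recommendation unless the human reviewer supplies a written rationale for departing from it. The `Recommendation` and `review` names, the field names, and the clinical framing are hypothetical illustrations, not any real medical or decision-support system.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Recommendation:
    """A machine-generated recommendation awaiting human review (illustrative)."""
    case_id: str
    machine_call: str

@dataclass
class Decision:
    case_id: str
    final_call: str
    overridden: bool
    rationale: Optional[str]   # required whenever the human departs from the machine

def review(rec: Recommendation, human_call: str, rationale: Optional[str] = None) -> Decision:
    """Accept the machine's call, or override it only if the reviewer explains why."""
    if human_call == rec.machine_call:
        return Decision(rec.case_id, human_call, overridden=False, rationale=None)
    if not rationale:
        raise ValueError("Override requires a written rationale from the human reviewer.")
    return Decision(rec.case_id, human_call, overridden=True, rationale=rationale)

# Example: the second review fails because the reviewer overrides without explaining.
accepted = review(Recommendation("case-17", "benign"), "benign")
try:
    review(Recommendation("case-18", "benign"), "malignant")
except ValueError as err:
    print(err)
```

The design choice the sketch illustrates is simply that the override path costs the human something, an explicit recorded reason, which is one way to combine machine consistency with human judgment rather than choosing between them.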
In general, I am very much a fan of this paradigm of trying to combine humans and machines, because they are good at very different things. To your question about risks: there are some areas where, again, we can get good data on performance and then try to find a way to assess it. Look at self-driving cars, where Waymo has said it has clocked 10 million miles on roads and is doing another 10 million every day on synthetic data, in simulated driving, so we'll be able to get a sense over time of the performance of these systems. This is really hard in the space of warfare, because thankfully we're not fighting 10 million wars every day. War is a very unusual phenomenon; most militaries are engaged in peace on a day-to-day basis. What this means is that unlike self-driving cars or commercial airliners, where we can test things and achieve high reliability because we actually engage in day-to-day operations, in warfare we can do lots of testing in peacetime, but we're never really going to know how systems perform until we get into wartime, and then we sometimes have accidents. There was an incident in 2003 with the Patriot air defense system, one of the pictures I showed: it shot down two friendly aircraft during the opening phases of the Iraq war, and there was a combination of issues, some that came up in testing but weren't identified and corrected or communicated to the operators, and others that had never come up before and were novel in wartime. That's always going to be a challenge, in particular because your adversary is going to try to force those incidents on you.

So let me take us in a slightly different direction, which is, in some sense, to interrogate your use of the word "we." In your response you've been talking a lot about we can do this, we can do that, it doesn't need to be a binary choice, we can embed the rules we want. But of course, from the slides you showed us, "we" is not just we in the United States or just we in the United States government. You gave me some bad memories of my time in government with the picture of deliberations at the United Nations; that is a really difficult place to get agreement. So I want to start by asking you to describe what you think ought to be our objective in the international sphere, and then we can shift to what might actually be achievable. Let's assume we could write the terms of some sort of structure at the international level: what would we actually want out of that structure, when we think not only about the kinds of destabilization we might want to avoid in the international system, but also about protecting and defending the security of the United States?

Yeah, I think in the best possible outcome it would be great to see countries agree on some set of rules for how to use these systems that would avoid some of the worst possible outcomes. You're never going to entirely avoid the risk of accidents or harm, but we don't want to transition to a world where countries are deploying things that have a markedly greater propensity for civilian harm than people do today; of course people make mistakes, people aren't perfect, but we don't want something significantly worse. And we don't want something that leads to the stability risks I mentioned, where countries are incentivized to engage in an arms race in speed. The problem is a couple of things.
Most predominantly, nobody trusts each other. Countries have been coming together to be part of these discussions internationally, but other than agreeing to come to the table to talk, there is no agreement at all on where this is going. You have three broad camps. You have a group, predominantly led by the human rights community, by nongovernmental organizations, saying we need an international, legally binding treaty that would create a preemptive ban on autonomous weapons before they can be built, to take them off the table. They are looking to prior bans on land mines and cluster munitions as examples of what might be done in this space, and they're saying, look, countries cannot be trusted; you nations do a terrible job of complying with the laws of war anyway, so don't tell me about the laws of war; we need to just take these things off the table. There is a lot of validity to that reasoning. The problem is that there aren't a lot of countries that agree with them. About 27 countries have said they agree, but none of them are major military powers or major robotics developers, so that movement has not gained enough momentum to drive toward action internationally; there is certainly no consensus. You've got a handful of countries, prominently led by the US and Russia, that have said: look, we have this thing called the laws of war, they're just fine thank you very much, everything you're concerned about is already prohibited under the laws of war, so we don't need anything new, and we'll agree to talk, but that's it. Usually the US is a bit more tactful than Russia about how it communicates this, but that is really the position of both countries. And then there are a handful of countries in the middle; France and Germany have been advocating for something like a non-legally-binding code of conduct on this technology. But all of it rather misses the boat on what the rules should actually be. A lot of the debate is about what color of paper we write it on, whether it's legally binding or not, instead of really focusing on what we want to do with the technology. Unfortunately that has taken a back seat, partly because the politics of the issue has driven the shape of an agreement first, but also in part because many people don't really know. There is a shred of consensus, which has come out of these discussions and been written down in post-meeting documents, around the idea that humans should at some level be involved in lethal decision-making. I think that's a good starting point, but there is a long way to go from there to figure out what that really means in practice.

So I'm going to push a little further on this, because obviously you were involved in the creation of DoD policy, and DoD policy at this point requires "appropriate levels of human judgment over the use of force"; you probably typed that into the computer when the directive was being developed. So what I want to ask you is this: you suggested there ought to be a set of things that we potentially rule out; if you were authoring this document, what would be on your list of what gets ruled out? And given the world we live in, anytime you say the US and Russia totally agree on something, red flags go up for me.
Countries must be agreeing to something that enables them to do profoundly different things, given the different objective functions of the US and Russia. So even if you have a list of things you think we ought to agree to at the international level, why would you have any confidence in any mechanism that would constrain other countries with respect to the things you want to rule out? So what's on your short list, and what's the mechanism that gives us some confidence this could be meaningful?

Yeah, okay, there are a lot of questions wrapped up in there; let me talk about the mechanism first. I have very low confidence in the ability of international treaties, even legally binding instruments, to constrain countries when they don't want to be constrained. There is no enforcement mechanism for these treaties. The US just backed out of the INF Treaty, the Intermediate-Range Nuclear Forces Treaty, because Russia was cheating, and when another country is cheating, the only thing you can actually do to try to enforce the treaty is to say, all right, we're out too. That is a real, fundamental problem: there is no enforcement mechanism for any of these kinds of agreements. I see these treaties as ultimately a coordination mechanism between countries, which is to say that if we both actually agree it's in our interest to do something, it's valuable to write down on paper what that is, so we're clear on what the agreement is, but you can't force a country to comply if it doesn't want to. Maybe there can be pressure, and some countries are more or less susceptible to international pressure, but that's a real challenge. So I think the only way to get to any kind of agreement, whatever the rule set, is if countries see that it is in fact in their interest to set some things aside and say, we're not going to do that, for one reason or another.

One of the challenges internationally in this space is the way the conversation has been driven: it has been driven by people in the NGO community saying, you can't have these weapons because you nation-states cannot be trusted with them. Well, that leads every country to basically say, we're ethical, we're humane, we comply with the laws of war. That is of course not obviously true in all cases; many countries don't comply with the laws of war at all, but they all comply about as much as they want to. So that sort of argument doesn't have a lot of resonance with states. It might resonate with populations, it might resonate with people in the media, it might sometimes resonate with parliaments or others who can put pressure on governments, but certainly not with the security apparatuses of those countries. No one has yet made a really compelling case for why it is in militaries' interest not to do this, the same way the US and the Soviet Union actually did agree it was in their mutual interest to take things like intermediate-range nuclear forces and other types of weapons off the table.

I would come at this from the perspective of saying there is some limit; we do want humans involved. How we come at that, where we set that limit, is a little bit fuzzy, but I would say a good starting point is the laws of war themselves, and I would take the position espoused by the US Defense Department in its Law of War Manual, which says that there is an important asymmetry between humans and machines under the laws of war: humans are agents and machines are not.
An autonomous weapon is no more a legal agent than an M16 rifle is; it is humans who are bound by the laws of war and humans who must comply with them. That implies, in my mind at least, and this is my interpretation, DoD doesn't say this, that humans have to have some degree of involvement with these decisions to ensure that their actions are legally compliant. What that involvement is, how much information people need to have, is open to debate. This idea that humans should be involved at some level circulates through all of this conversation internationally. One of the current terms of art is "meaningful human control"; people say there needs to be meaningful human control over these weapons, over lethal decisions. The US government has latched on to the term "appropriate human judgment," which is in the DoD directive. It's not defined, so I can't tell you what it means; it means exactly what it says, and there's no definition written inside the Defense Department of what it is. But it does imply that at some level humans ought to be involved in these decisions, and I think going forward that's an important question all of us need to try to unpack, not just military experts working in this space and technologists, but also legal and ethical experts, to try to understand what that human involvement is and what it should look like going forward.

So before we transition to some of the questions you've passed up to the stage, let me push on one other issue, still in the spirit of interrogating the "we." One "we" was the United States vis-à-vis other countries around the world, but now I want to think about governments vis-à-vis technology companies, because ultimately, while government may play a role in spurring some of this research, a lot of the innovation you're describing is going to come from the private sector and potentially from partnerships between government and the private sector, either in the United States or in other countries. Of course you're aware of Project Maven and the debates about it at Google, where 3,000 employees signed a letter basically saying the company shouldn't be involved in working with the Defense Department and the national security agencies of the US government, that Google should not be in the business of war. In fact, in Google's AI principles, Google has come out and said: we want to be clear that while we are not developing AI for use in weapons, we will continue to work with governments in other ways. So Google has taken this strong position of not being involved in autonomous weapons, and one might argue that some of this is a function of the reaction that came internally at Google from workers in the tech industry who say they don't want to be part of technological innovation in this space. What do you make of these protests? What do you think of the perspectives of the tech engineers who don't want to be in the business of supporting the development of autonomous systems for the US government? Do you see that as unpatriotic? Do you see it as entirely justified? And ultimately, how do you see us navigating this challenge, where the government, at least in the United States, and other governments have strategic interests in developing technology, but private sector entities are going to have to make their own decisions that reflect their own values as companies, but also their need to maintain top talent?
maintain top talent? Yeah, that's a great question. For starters, I think it's great that we're seeing AI scientists and engineers be concerned about how the technology is being used. I think that's a really important conversation, I'm glad to see that people are engaged in it, and I think we need more of that overall. I think we need an open and multidisciplinary dialogue about the use of some of these technologies in a variety of settings, not just military applications but also domestic surveillance, things that we're already seeing AI technology being used for overseas for repression. Certainly China is doing some horrific things using facial recognition technology, using algorithms to track people in Xinjiang and elsewhere, and that I'm deeply concerned about. I do think there are a couple of facets to this particular debate between Silicon Valley and the Pentagon. Part of it is that DoD doesn't do a very good job of explaining what they're doing; Maven was a good example of this. I'm told by people who were close to this issue that most of the signatures came before people even knew what Maven was, and then afterwards, once there were more details about it, there was less of that. I think in this particular instance all of the angst and frustration from employees at Google was probably exacerbated by the fact that it was a secret project, which I don't think was helpful in this case. Whenever you do something in secret, people assume it's a secret because you're doing something wrong. If Google and the Pentagon had come out clearly and said, here's what we're doing, here's why we think it's important, here's why we think it will save lives in warfare, it'll save servicemembers' lives, it'll save civilian lives, it'll give us better situational awareness, I think that would have been a good thing. Part of it has to do with the fact that Maven, even though what it was doing was super innocuous and had nothing to do with weapons, it was learning more about the environment, was doing so on a drone, and anytime you talk about drones people get upset, because people are thinking about drone strikes and people are upset about drone strike policy, and it gets wrapped up in these broader kinds of things that are challenging. But there are a couple of different debates going on here. One is about what the military does with this technology. I think that's a really important conversation. I have confidence that in these instances what the DoD is doing is right, but I think we need to talk about this, and I could envision things that are not a good idea, and I think it's good that people from the technology community are involved. But a whole other facet of this is that there are lots of people at these companies who are not Americans. It's not just that they don't see themselves as having an allegiance to the United States; some of them are not from the United States, because these are international companies with researchers from abroad who are working here or in other countries. A lot of these are US-based companies, but they see themselves as global companies that are multinational. In some instances we've seen leadership from Amazon and Microsoft come out and say, look, we are headquartered here in the US, we grew up here in the US, we are gonna work closely with the
US government on these technologies, but not every company is gonna see itself that way. I do think there's a really important discussion that we need to have, "we" in the sense of, I'll say broadly, people who care about human rights, about how this technology is used, not just in the US but elsewhere, now that AI technology has become very, very real and is being used in military and other applications. I do think we need to have a conversation about who these companies are gonna work with. Microsoft's leadership has said that there was at least one instance where a foreign government, they wouldn't say which one, wanted to use their facial recognition technology, and they said no; they weren't comfortable with that country's track record on human rights. I think it's good that we have the right mechanisms in democratic countries to handle these kinds of issues in the military and other spaces, and the right institutional mechanisms to deal with this don't exist elsewhere. But I do think we need a broader conversation about who our individuals, our universities, and our companies are gonna work with as they use this technology.

So let me shift us into questions from the audience so we can read some of them. Let me start with this one, which is to you, Paul: do we really think the threat we face is sufficiently great to warrant a major growth in autonomous weaponry?

So I would say no, actually. I draw a distinction between military robotics and lethal autonomous weapons. I think there are lots of huge advantages to military robotics where you could still keep people in the loop, people making these kinds of decisions. The question is how much we really need to go to fully autonomous weapons, and it's not obvious to me that you need to. In fact, some of the most compelling reasons to do so are the fear that the other side is going to do so as well, and that's a bit of a problem. That's in fact what US Defense Department leaders have said: they've said their intention is to keep humans in the loop for these decisions, and the only reason they might change that, they've said, is if others do and they feel compelled to respond. The challenge there is how you get to a position of trust, because if other countries also believe you're developing these weapons in secret, and there's no good way to prove that you're not, that could lead to a world where countries are building autonomous weapons simply because of the fear that others are already doing so.

So let me return to this theme that you raised, with one of these questions from the audience about the laws of war. We read a lot in the paper about the decline of the liberal international order, so I'd say that this question comes from a place of real concern with respect to the international mechanisms and norms that govern interaction among countries at this moment in time. It's basically framed as follows: are there really any laws of war left that you need to think about when it comes to the design of autonomous weapons? I assume this means, in part, that you look at the behavior of Bashar al-Assad in Syria, where there's no sense in any way that proportionality or distinction are laws that are meaningful to the Syrian regime; you think about the authorized use of force at the international level that comes from the UN Security Council, and clearly Russia's invasion of Ukraine was not
authorized by the international community. So in what sense is there even a set of laws of war that need to be programmed into autonomous weapons, or are we reaching a point where these norms and rules, like the INF, are no longer binding constraints that the US and other countries should stick with?

Well, the term laws of war is kind of a funny term, because they're not laws the way that domestic laws are, in the sense that there's no one to enforce them: there's no police, there's no judicial system, there are no attorneys to try people. They're a set of rules and principles for appropriate conduct in war, and they exist for a couple of reasons. One is so that when countries fight each other they can agree that there's a set of rules for how we're going to behave, to constrain the worst excesses of war, to say, hey look, let's all agree that once someone surrenders we don't shoot them anymore, and if I agree to do that and you agree to that, then we'll both be a lot better off. Most of these are self-reinforcing, in the sense that if people violate them, the natural consequence is that the other side then reciprocates, so reciprocity is the real driver here. There are always going to be some actors that don't comply with the laws of war; there are always going to be terrorists and rogue regimes. That doesn't mean they're not important for countries who do care about human rights to comply with. They matter because they make war more humane and less terrible for everyone involved. They also matter, I think, because how we conduct ourselves as a nation matters. As somebody who fought in uniform, my actions mattered to me, and complying with American values, with things that I could live with, things that let me look at myself in the mirror afterwards, mattered to me. So I think those are important. There's another really important reason we have these rules, which is so that they can keep in check some of these worst excesses of war for ourselves.

So let me return to the example you gave there, if I surrender to you, you don't kill me, and vice versa, because I think it's a useful example, one that's echoed in this question but also in thinking about the pragmatics of having autonomous weapons on the battlefield. The question here is: what happens if troops try to surrender to an autonomous weapon? Must there not be some provision to make that possible? What if they surrender and then destroy the weapon? I think to some extent our intuitions and rules are based on this kind of, look, I don't want you to kill my guys when they surrender, so I'm not gonna kill your guys when they do, and that starts to break down when you have this sort of asymmetry of humans on one side and weapons on the other. So in a pragmatic sense, how do we think about either updating or programming or addressing those types of realities in the context of autonomous weapons on the battlefield?

Yeah, it's a really good question, because it's easy to say in broad strokes that if autonomous weapons could comply with the laws of war, then we could use them. That's not reality yet, and there are a lot of hard cases, things like capture and surrender, and this is one of those really tough problems. How would we make an autonomous weapon identify automatically whether someone was surrendering? You might write down a set of rules: if someone waves a white flag, then they've surrendered; if
they stick their hands up, then they've surrendered. Now that sounds okay, but it has a couple of problems. One is that there are situations where people are surrendering and they're not doing those things, and the machine might miss it and kill them. That's one set of problems: maybe the person's got their arm shot and can't lift it, so they can only get one arm up; maybe they don't have a white flag, or the white flag is dirty and it's not recognized. There's a whole set of problems there. The other interesting challenge is that all of these rules and conditions could be manipulated by some nefarious actor. Now, it is legal under the laws of war to engage in what the laws of war term a ruse, so you can sneak around, you can hide, that's fine; but it's illegal to, say, wave a white flag in false surrender and then, when people come, attack them. That doesn't mean people wouldn't do it, particularly if you had an autonomous weapon that was tricked by these things. And so that's a situation where humans will probably do a much better job of saying, they're not serious, I can tell they're not serious, they're ready to shoot us, this is fake; or they might also be able to say, look, these guys are surrendering even though they don't meet the specific criteria. This is one of many reasons why autonomous weapons that would target people, what the military calls anti-personnel autonomous weapons, are just much more problematic than things that would target machines.

Can I just push on that a little tiny bit? Which is to say, in some sense we have, to put it bluntly, type 1 and type 2 errors, and malicious and well-intentioned actors. That's the two-by-two problem you've drawn, where you have false positives, sometimes from people who just have a dirty white flag and sometimes from people manipulating the system, and what do you do about them? Those are answerable questions, but fundamentally there's gonna have to be a values decision: you're gonna have to accept risk on one side or the other. What I'm wondering is, why is that different than now? Right now there's certainly a differential. I suspect I might be better at guessing whether you really mean to surrender if your behavior is more culturally similar to mine, or your age is more similar to mine, or you're a traditional military versus a non-state actor. I might be able to comply in much more consistent ways in one setting than another, yet we don't feel like that consistency is critical to the way these rules operate with weapons. We're not saying, look, we can't deploy these soldiers, they might be biased; that's not our military decision-making. So why here? What is fundamentally different when machines are making that decision?

I think what's challenging in this particular instance is that these types of decisions about whether a human is surrendering depend upon a theory of mind. You look at this person and try to estimate what their motivations are. So, a person is approaching a checkpoint; this is a situation military servicemembers encounter: they're at a checkpoint, someone's approaching, is this person hostile or not? And it's not just about some checklist of, do I tell them to stop and they keep walking. If you follow the checklist, that might help you defend your actions to a commander, but ultimately there are a lot of things that bake into the decisions of people
looking at this, thinking about the context, what's going on, and trying to decide: is this person approaching with a suicide vest on, are they gonna blow me up, or are they just confused and didn't realize I'm telling them they need to stop? Machines are terrible at those kinds of things. Machines are good at recognizing faces, they can recognize posture, they can recognize objects, but the machines today, at least, are not able to put this together and make a prediction about how people are thinking, what their intention is. They don't have the context to do that. So that seems like a place where maybe someday machines could do better, but that's a real long way off from where we are today.

So I think you're being super generous to what machines are good at, even while outlining what they're not good at, in the sense that we know that even in recognizing faces, machines have extraordinary limits, especially as a function of what kind of input data is available to the machine learning algorithms. And so it motivates this follow-up question, which is: you have, I think, done a real service for the foreign policy community in helping to educate people about these new technological frontiers. Your recent piece in Foreign Affairs introduces people to what autonomy actually means and how it functions, and that has turned you, in effect, into an advocate for what you call safety first. You believe that the US government, as it approaches the use of new autonomous weaponry on the battlefield, has got to err on the side of caution with respect to deployment, because the potential harms that could be done by technologies that are undertested and underdeveloped are significant, and in a world in which this represents some step beyond human judgment and the kind of legal understandings that we have associated with human judgment, you want to go safety first. But I think you're also cognizant of the AI arms race and the fact that others may not play by the safety-first rules, so I think it'd be helpful for you to play out that strategic dynamic for us a little bit. You want this position of caution, safety first, dedicated testing; others may not be so willing to move slowly with respect to the advance of these systems. Does that represent some fundamental risk for the United States if we are too cautious with respect to testing and deploying these technologies, when other countries are building swarming aircraft and other countries are deploying autonomous submarines that basically carry payloads that can destroy our fleets?

Yeah, I think there's risk on both sides, and the question is how you effectively try to balance that risk, and how you avoid a situation where we create perverse incentives for your adversary, or yourself, to short-circuit and shortcut the test and evaluation behind these systems. Because one of the challenges is that there are huge vulnerabilities in a lot of these AI technologies today: when you put them out in the real world, they do weird things, they do surprising things, even more so when you start to bake in machine learning. You can have issues of biased data, you can have issues of surprising learned behavior, and all of these are really gonna be exacerbated in these kinds of competitive environments. Of course you know for a fact you're gonna have biased data, because your adversary is not gonna let you have the actual data of their weapon
systems. And one of the things that militaries do is hold in reserve whole modes for the operation of their systems. So if you build a radar: militaries have their radars turned on along their coastlines, the borders of their territory, on a day-to-day basis so they can see who's coming in, and if someone starts to encroach, they turn the radar on them and they might scramble some fighters to intercept. That's also a great way for countries to gather information about others' radars, what frequency they're operating at, what power, how they work, and then they use that to try to find ways to defeat that radar. So you bake in things like reserve modes, ones that you never actually show in peacetime and only use in wartime. So you know you're gonna have problems where something's gonna be different and novel in wartime. The way we deal with this now is we tell humans things like no plan survives first contact with the enemy; we tell people, you have training, but you need to be flexible and adaptable once you get out there on the battlefield. Machines can't do that. And so one of the reasons I wrote this article in Foreign Affairs was to bring into the conversation in the national security community all of these vulnerabilities that people are actually not super aware of in Washington. AI engineers understand them, but in Washington it's largely, right now, a one-sided conversation; it's people focusing on one element of that risk equation, saying we can't fall behind others. That's true, that's fair. If you really were just concerned about accidents, you'd say we're not going to use this technology; I don't think that's the answer. I do think the US needs to be using AI to defend itself, but we want to be thoughtful about how we do so. If you think about computers as an analogy, if you were to go back a couple of decades, would we want to use computers to make the military more capable, more advanced? I would say yes, but we should definitely think about cyber vulnerabilities, and that needs to be a factor in how we use them. Similarly with AI, we need to think about vulnerabilities in these technologies, ways they might fail or be exploited by others, and that should change how we use them.

So I'm gonna ask one last question from the audience and invite Colin to come up to conclude our discussion, and I think it's a very appropriate question to end with, given where you are and the message that you're bringing, which is: in the last minute, what are two or three questions that you think the Stanford academic community is prepared and capable of helping to answer around the issues you're raising? And relatedly, how do we energize a set of undergraduates and graduate students to think that issues in this space are what they should be focusing their attention on, rather than the naturally more attractive things that are happening in Silicon Valley, which are dancing around some of these really sensitive issues related to national security?

Yeah, I think in not just the military space but a whole variety of applications of AI, we're going to be confronted with this challenge of how we use this technology: where are the places where we want to use it to replace or augment human decision-making, and where are the places where we want to preserve space for human judgment, and how do we do that? All of those are interdisciplinary conversations where we need not just technologists but also lawyers and
ethicists and political scientists and sociologists and others in the room as part of that. On autonomous weapons I've been very fortunate to be a part of a very interdisciplinary group of people working on these issues, and we need that in a lot of other spaces as well, and folks at Stanford and elsewhere can contribute a great deal to that conversation.

Great. Well, first of all, join me in thanking Paul, Radha, and Jeremy for an enlightening, interesting, and at times terrifying conversation. I think that those of you in the Stanford community know that one of the top priorities right now for the university is to make sure that all those brilliant minds we're turning out, many of whom are going to work for companies in the Valley, are also equipped with the analytical and ethical tools to consider the full implications of the technologies they are helping to develop, technologies that are going to shape the rest of our lives. So we're committed at CISAC to engaging in conversations just like this, and you can expect more of this. We have a great partnership with the new Human-Centered AI Initiative here at Stanford, which is very much focused on making sure that social, economic, and ethical questions related to AI development are front and center as these tools evolve, to make sure that we're doing more good than evil, and I think that's probably a good mission statement for all of us; it's certainly what we hope to do at CISAC. It is a CISAC tradition that every time we have a Drell Lecture we have a poster, and we frame that poster and give it to our speaker. So I would like to present Paul with your poster, and thanks to all of you for joining us tonight, and have a wonderful evening.
Info
Channel: CISAC Stanford
Views: 972
Rating: 5 out of 5
Keywords: stanford, autonomous, HAI, artificial intelligence, security, weapons, CISAC
Id: qgGkFRNH2HY
Length: 82min 37sec (4957 seconds)
Published: Mon May 06 2019