UNIDIR 2023 Innovations Dialogue: The Impact of Artificial Intelligence on Future Battlefields

Captions
Welcome to all of you to the fifth edition of UNIDIR's Innovations Dialogue. Last year we took this event to New York, and this year we're delighted to be back in our institutional home town of Geneva, in this beautiful, futuristic Campus Biotech. The Innovations Dialogue at UNIDIR is one of our big flagship events, and it's dedicated to exploring the impact of emerging and new technologies on international peace and security. We're really trying to bring together stakeholders from all walks of life and disciplines: from governments, of course, from industry, from academia, and of course also civil society. The focus of the Innovations Dialogue is really on fact- and evidence-based discussions, balanced discussions. We're trying to capture a broad range of different perspectives in a very diverse global community, and the aim of the Innovations Dialogue is to build shared understandings and to dispel myths about these new technologies and about how scientific and technological developments impact the global security environment.

Now, when we were discussing in New York last year, we were focusing on responsible AI in the international peace and security context, but that really was in the pre-ChatGPT era, I would say. Since then, I think our perceptions about artificial intelligence have changed quite dramatically. Suddenly the topic has become real, and it has become real for a much wider audience: the global public, policy makers more generally. I think ChatGPT has given us, the global public, a mesmerizing glimpse of how good AI has already become, how transformative its impact on basically all aspects of society is likely going to be, and also how suddenly these groundbreaking technological developments will impact our lives. I think the Secretary-General of the United Nations was quite right when, earlier this year, he said that technology is not moving incrementally, and the time to act is now. I think that's clearly the case, and it's also the case for applications of artificial intelligence in the military context.

AI is already being used in a wide range of military applications. That use is only going to increase, and in many ways it's quite likely it'll revolutionize the way wars are fought in the future and how militaries operate. So what not long ago was an issue primarily for rather small expert communities is now attracting much bigger interest, and it's, I think, safe to say it's also gaining momentum in the multilateral governance space. We've seen a lot of activity in 2023 alone. If you recall, we started the year with the Summit on Responsible AI in the Military Domain, hosted by the Netherlands in February, which resulted in a call to action endorsed by more than 50 states. On the same occasion, the U.S. government launched a Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy, and there were then two international conferences, in Costa Rica and Luxembourg, that focused on the more specific issue of lethal autonomous weapon systems and supported the work of the dedicated Group of Governmental Experts here in Geneva. Now, this group of experts, the GGE, concluded its 2023 work with states agreeing on the so-called two-tiered approach, which prohibits the use of autonomous weapons whose effects in attack cannot be anticipated or controlled, and, for other autonomous weapon systems, establishes that specific measures should be developed and implemented to ensure compliance with international law.

In addition, industry and the scientific community at large have become increasingly vocal about these issues, not necessarily with a military focus, but on the governance of AI and the urgency of it more generally. Earlier this year, the Future of Life Institute published an open letter that now counts more than 30,000 signatories, including some
prominent voices and innovation leaders, asking to pause the training of powerful AI models while calling on policy makers to dramatically accelerate the development of robust AI governance systems. Also, last month, Microsoft published a proposed blueprint for a legal and regulatory response to AI governance needs.

Now, building on all of these activities, all this momentum, the 2023 edition of our Innovations Dialogue here at UNIDIR will provide a platform for military, technical and legal experts to discuss in more depth the impact of artificial intelligence on the various domains of warfare, including but not limited to autonomous weapon systems. AI of course holds great promise from a military perspective for improving the speed and accuracy of military decision making, in planning and in operations. It's a tool for process optimization, an all-purpose technology, and it will play out in a wide range of fields, of course also in the military domain, especially if it's combined with robotics or latest-generation sensor technology. There really is a wide range of potential applications, ranging from AI-enabled logistical support to early warning systems, intelligence gathering and surveillance, all the way to AI-supported command structures, AI-driven cyber operations and, at the end of the spectrum, AI-supported combat operations and autonomous weapon systems.

So far, the vast amount of existing work and discussion focuses on challenges that are common to artificial intelligence across the board, and what we're trying to do with our Innovations Dialogue this year is really to start taking the discussion to the next level: dig a little deeper and focus on how AI plays out in the different domains of warfare. Of course, AI is discussed now all over the globe, and increasingly also in the military context, but usually the focus is on rather general applications. We, as I said, want to dig deeper and ask: how will artificial intelligence impact land, sea and aerial warfare? What will it mean for outer space, for cyberspace, or the cognitive domain in the future? What are the specific military, technical, legal and maybe ethical challenges that apply to these different domains, and how can AI bring opportunities and benefits, as well as challenges and risks, to these different domains?

Today, many AI-supported military applications are still at a rather experimental stage, and it's difficult to know what will ultimately stick, especially when these things are taken out of the laboratories into a dynamic, real-world battlefield context. But I think it's clear that in the current environment of heightened geopolitical tension, with accelerating competition for first-mover advantages in the tech domain, and with an enabling technology as powerful as AI, there are some significant concerns and risks as well. The list is rather long: there are risks of inadvertent escalation, risks of misperception when entirely new systems are fielded in a battlefield context, risks of accidents and malfunctioning. Of course there are concerns about unpredictability, lack of transparency and bias in AI-supported systems. But above all, AI raises profound legal and ethical questions about human agency and human control, especially when we're talking about life-and-death decisions.

Now, it's clear these risks cannot be mitigated by individual actions alone, neither of states nor of technology developers. Unilateral measures certainly play an important role, but collective action at all levels, political and technical, will be necessary to achieve meaningful and sustainable impact. To this effect, today's Innovations Dialogue will hopefully provide an opportunity to discuss what these collective actions could and should look like going forward.

Now, with that, and before closing, let me just take this opportunity to express, on behalf of UNIDIR, our sincere gratitude to the sponsors of the Innovations Dialogue. The 2023 Innovations Dialogue is supported by the generous contributions of the Security
and Technology Programme's core donors: the governments of the Czech Republic, Germany, Italy, the Netherlands and Switzerland, and Microsoft. With that, I very much look forward to fruitful and stimulating discussions throughout the day, and without further ado I'll pass the floor to Giacomo Persi Paoli. He's the Head of UNIDIR's Security and Technology Programme, and he's also the master of ceremonies throughout the day. Over to you.

Thank you, Robin, for the introduction, and also on my behalf, welcome to the 2023 edition of the Innovations Dialogue. Whether you're joining us here in the room in Geneva or online, it is truly a pleasure to have you with us today. I don't have very long remarks; most of my speaking points were put into the script of the video you saw at the beginning of the event, so I'm just going to make a few logistical observations to help you make the most of the day with us.

First and foremost, the conference program: you see it on the slide, and if you're here in the room you should have it printed. The conference program is very rich. We're going to take you on a journey, really, across the land, into the sea, up into the sky and the stars, and then into the digital world. These are going to be four thematic panels, preceded by a scene-setting panel that will follow just after my remarks, and all of this will hopefully pave the way towards the last panel of the day, where we're going to do our best to try to answer the question: what's next, where do we go from here? The program, as I said, is very rich. We managed to gather excellent speakers from all around the world, and I invite you to consult the program online on the event page, where you can read the full bios of our speakers. In the interest of time, during the panels we will only be introducing them with their name and title, but please do consult their bios on the website.

As was also mentioned in the promotion leading up to this event, we really want this dialogue to be not only a dialogue among speakers, but also a dialogue with you, the audience. To do that, we have set up a platform called Slido. Whether you're here in the room, where you should have received the card with a QR code, or you're following us online, you can access this platform using the QR code, or just go to sli.do and enter the code #ID23. Through this platform you will be able to ask questions to our speakers throughout the day. These questions will then be published, and you can see whatever questions have been asked and vote on them. So help the moderators do their job: if you see questions that you think are particularly interesting to you, please do express your support, or add anything that you think should be added. This is really a great opportunity for us, and we're again very pleased that you were able to join us.

I think I've covered all of the operational and logistical notes that I had to cover, and what is left to do now is just to invite on stage Dr. Cécile Aptel, the Deputy Director of UNIDIR, who will be moderating the scene-setting panel and introducing the speakers. With that, I wish you a great continuation of the day. Thank you.

Thank you very much, Giacomo. Good morning, everyone, or good afternoon for those of you joining us virtually, and a warm welcome. Geneva is indeed very warm, and we are very much looking forward to intense and fruitful discussions today. I am absolutely delighted to have two esteemed speakers for this panel. Joining me here in the room is Professor Brian Lee. Professor Lee is the Dean of the College of Multidisciplinary Studies at Seokyeong University in the Republic of Korea; thank you very much for joining us, Professor Lee. And online, I hope that we will soon be able to see Professor Sabine Süsstrunk of the Images and Visual Representation Laboratory at EPFL, the École Polytechnique Fédérale de Lausanne. Thank you very much, and good
morning to you, Professor Süsstrunk.

As we will try to unpack how AI already impacts, and works across, different domains, the question is also whether we have matching governance efforts: whether governance can keep up to speed with the technology, whether we can make regulation effective, and whether regulation complies with international law. So, Professor Süsstrunk, I'll start with you, please. Clearly, as the Director, Dr. Geiss, was mentioning, 2023 seems to be the year of AI, where many applications are being discussed and new technologies are emerging.

Yes, thank you very much. I have trouble hearing you, you're going in and out, so I did not get the last part of your question, but I think I get the gist of it. No, 2023 is actually not a new year of AI; AI has basically been progressing all along. What is new, though, is that we now have applications of AI that have, if I may say, human-like character. The whole discussion in the wider public came up because of this program called ChatGPT, which basically seems to answer like a human, which of course it does not. And here I do make a plug of saying: artificial intelligence, in the sense of intelligence the way we understand it, does not exist. It is all still machine learning, that is, programs that learn based on data and then do something, and that can be as simple as answering a question, like ChatGPT, or flying a drone in a military setting. So the thing that has changed is that ChatGPT will give answers that are human-like; sometimes they're even correct. And this kind of interface with the general public has been democratized, if I may say so: it's much easier now for everyone to use artificial intelligence software, compared to before, when you needed to have at least some knowledge.

And let's not forget, ChatGPT cannot just answer questions; it can also code for you. So now this is a program that has multiple purposes, whereas before we were used to having a single purpose. It's still extremely limited: it can answer questions for you, it can code, it can actually write a poem for you, but it still cannot drive your car. So it's not yet a multi-purpose, general intelligence system, but it is one that can serve multiple purposes. But let's not forget: it's a machine learning system, it has been trained on data, and it's only as good as the training data it was given.

Thank you, Professor Süsstrunk. All right, let me turn to you, Professor Lee. As we consider integrating AI into military capabilities in a responsible way, and this is a term that we will be using repeatedly today, we know this is not easy, and we know that it is likely to require different sectors working together in close cooperation, especially governments, end users, namely military users, and industry and academia as well. Based on your own experience working across sectors, notably in the Republic of Korea, how do you think we can best manage this relationship?

Okay, just in time. I actually have prepared my answer for that question, but in continuation from Professor Süsstrunk's answer, I would like to add a few things. Yes, so far we know what AI is capable of: machine learning is trained on all the data that we give it. The question is not whether it can learn what we teach it; it is more about whether it can learn something that we are not teaching it. That is a very difficult question to answer, and there are glimpses of ideas here and there that it may be capable of doing so. If that is the case, we need to find the answer to the question: can we teach it to behave ethically and responsibly? If that is possible, then some of the problems that we may anticipate at this point would go away. But if
you want to do that, we have to create a core value principle from all the sectors: government, military, industry, academia. We all need to be on the same page, and we need to create something that we all can agree on as a core value that we pursue as human beings. Then we can teach it some of those things; if not, we can try to somehow regulate it, or govern it, or control it, and that's the direction that we may have to go. The thing is, no one sector, no one institution, can do all of that, and we cannot put together the core value principle through one person or one organization. I believe that's why we are here today: to discuss where we should go, find the direction and find the goal.

I have been in a position to facilitate or mediate the process between the government, industry and academia, and that wasn't easy. But if you can get together on the same page, and if you keep discussing toward the same goal, things actually get better, and we can look for what we call proactive facilitation. The more we talk, the sooner, and the more efficiently and effectively, we can find the insight. If we can somehow set the core value principle, then we can create a guideline that we can use on AI. It is not only about whether we can regulate AI. So far people have been thinking that AI only has a limited capability and that we just need to find a way to use it responsibly, but now we have to actually consider that we may have to code it in a way that it learns ethically and responsibly. The only way to do that is to use this platform and pursue multilateral discussion about where we should go. If we can somehow set the goal, then we can find a way to get there. I believe that's what we are going to do today. We all know what type of impact AI has on our lives, but can we actually answer: do we know where we want it to go, where we want to go together? So the only answer that I have to your question at this point is: keep talking and keep
self-regulation. Professor Lee, what do you think of that? Learning from experience in a number of other contexts regarding self-regulation by industry, how do you see the potential role of industry-led initiatives in this area?

Industry is going to have to do a lot of things, but self-imposed regulation might not be one of them. The reason I say this is: yes, we need to find a way to regulate it and control it, whichever way you prefer to call it, but industry is not going to go out and set the regulation for itself. That has to be done by consensus, in an agenda-neutral governance structure; that's how I see it. Industry has its responsibility, and academia definitely has its responsibility as well, but the government and the military have their own agendas; they have what they want. In a way, we all want sustained, sustainable development, not disruptive innovation by itself. Therefore, industry is going to have to do a lot of things to develop the technology, refine it, make it actually work, and make it viable and sustainable, but at the same time the regulation has to be done by the consensus of all the stakeholders together, in a governance structure. That's what I hope.

Thank you. So you both seem to agree on the importance of multi-stakeholder dialogues and initiatives, but both of you also insist on the responsibilities and the central role of governments in governance. Professor Süsstrunk, maybe just following up from that: where do you think, nevertheless, in these multi-stakeholder dialogues, the scientific community could be more involved than it has been so far?

The scientific community and regulation usually do not go that well together either. Basically, that's not our role; usually regulation is the role of governments and political systems, and that's not where we are naturally at home. Having said that, and here I think there is one thing that we really need to be aware of: it is actually progressing quite quickly, so we cannot wait with regulation for another 30 years. You know, I'm coming from a country, Switzerland, that does not regulate very often, but in this case we cannot wait another 30 or 40 years before we even start thinking about it. It is of course also not reasonable that each country regulates differently, so there needs to be coordination, and here I think that especially an organization like yours can play an incredibly important role, because it can pull governments together and actually help facilitate this discourse between politics and science.

What science can also do, of course, is develop new models. And let's not forget, we mentioned before that artificial intelligence might start to show some signs of intelligence, and a big "might", by the way: for as many pointers as there are towards this, there are many people who say this is not possible yet. These kinds of things can be looked at, and science can do a lot for the explainability of these models, because that's not necessarily in the interest of industry; they usually work on the output of the models.

Thank you. And as you pointed out, scientific research is going on, if not globally, at least in many different parts of the world at the same time. Maybe a question related to that: what is your view on the risk of proliferation of AI, and is "proliferation" even an adequate term to use when we apply it to AI?

Yes, it is the right term to use. I'm not worried about the proliferation per se; I'm worried about the misuse of it. Let's not forget, at this point in time it's humans using this AI for
something, and for every application that is for good, one can actually misuse it. This is what I said before. Let's use maybe an example that is not per se military: ChatGPT has gotten very good at writing emails, but that also makes it very simple to write a spam email; it also makes it very simple to personalize emails, send them to everybody and incite them to click on a link that might have a virus in it. So there's a lot of democratization in these applications, which can be used for good, but they can also be misused. But let's not forget: there are still humans behind it that actually use or misuse this technology. With this proliferation, unfortunately, it's not just ethical usage that will increase; misuse and unethical usage will also increase.

Thank you. Turning back to you, Professor Lee: speaking of misuse, if there is one domain in which there are many fears about the risks of misuse, it is indeed the military domain. Yet at the same time, you both recognized the potential, the huge potential, that comes with AI, and the fact that, as humanity, we certainly cannot afford to stop these important scientific developments. Looking at that balancing exercise, where do you think we could actually start in terms of limiting the misuse while keeping the potential?

Well, actually, I have to go with Professor Süsstrunk on this one: academia is not exactly well placed for this regulation. And I agree with her in a lot of ways: one, we cannot wait, and two, yes, there's a human factor. So far it's not AI that's causing the trouble; it's pretty much humans at this point. But at the same time, can we actually answer the question of how we are going to use AI? It is very irresponsible behavior if you just develop it and give it to anybody so they can use it however they see fit. Therefore, as I said about 10 or 20 minutes ago, the reason for us to be here today is to discuss and find the core value principle, and somehow we need to find a way to regulate it and control it. And as Professor Süsstrunk said, it has to be done internationally, in a coherent way, not country by country with different regulations. So we cannot wait, and we need to initiate something. But how do we balance the potential and the risks? At this point, that can be answered only after we figure out what exactly we want to do, how we want to use it. A lot of people say different things about how we are going to utilize AI, and unless we reach at least a semi-consensus on how we want to use it, it will be very difficult to answer those questions. Hopefully, at the end of today's discussions, we can somehow find a clue to reach that conclusion or consensus.

Thank you very much. I am tempted to ask the same last question to both of you: if you had a wish as to how we can really push this agenda forward, and you could have really only one wish, what would that wish be? How would you formulate it, maybe starting with you, Professor Süsstrunk?

Openness: open science, open data, transparency in AI. I don't think we can stop AI, because of human curiosity; everybody will think, oh, I can see this, so can I do this, and can I do that? So I think the research and development will just go on. But one thing we could ask for is openness. We could say: publish the code of your program, publish the data that you've used to train it, be totally transparent, so that all these AI applications are totally transparent. That would give a knowledge base to everybody in the world, and also the possibility to actually develop something else. The idea would be that everybody has to publish everything
and be totally transparent. That would actually mean that nobody has an unfair advantage.

As you speak of this, I am reminded of the development of the Internet, initially pushed by the academic community for openness, and it is extremely interesting to look at the historical trajectory since then and what it has led to. Thank you. Your own wish, Professor Lee?

In addition to openness, I would like to add empathy. The reason I say that is: if all the stakeholders can try to understand what the other parties are saying, then we can actually get to a consensus that much more quickly. So empathy, in my point of view, would be a very critical point. I would like to add empathy.

Thank you very much. So, openness and empathy: what a beautiful way of starting today's discussions. Many, many thanks indeed to Professor Süsstrunk, joining us from Lausanne, and Professor Lee, coming all the way from Korea to join us here today. Thank you for the very thoughtful contributions. In addition to helping us better understand the complexity of this journey that we have to take as we think of the future of multilateral governance regarding the use of AI in the military domain, this scene-setting has clearly helped us to unpack some of that complexity. It was interesting to see that both of you insisted on the risks of misuse, the importance of international regulations and the urgency of that regulation, and I think this is really something we have to bear in mind: we cannot only be fascinated by the speed of development of AI; we also have to question whether we are keeping up in terms of regulation, especially the regulation of those applications that we can consider highly risky or even unacceptable. And obviously both of you insisted on something that we at UNIDIR see as crucial and central, which is balancing the potential with a clear
for planning and conducting military operations on the battlefield. Given the increasing priority of information dominance and information advantage across the entire continuum of competition, crisis and conflict, AI is central to current and future intelligence efforts across all warfighting domains, especially the land domain. Besides automating the production of knowledge about threats, AI can also automate decision-making, which could dilute the role of human agency as an element of armed conflict; concerns at the core of the international debate about lethal autonomous weapon systems, or LAWS, will also enter this discussion. So this first panel focuses on artificial intelligence and the land domain, and you may wonder why the domain matters. Let me briefly explain three elements we can think of. First, the land domain itself is composed of macro-scale, micro-, nano- and other multi-scale materials, presenting constantly moving and changing characteristics of force, light, sound, heat, electricity and magnetism. Second, on the ground each soldier's decision can mean life or death, and it often collides with the opponent's will to fight; so multi-scale, highly human dynamics and uncertainty will characterize the actual application environment of AI. And lastly, recent rapid urbanization and new technology are forcing battles to move into urban areas to a larger extent than ever before, where AI systems are often required to work in extreme environments: nuclear radiation and chemical leaks, electromagnetic interference around high-voltage power grids, hazardous weather, light disturbances, and complex terrain with multiple obstacles, to name but a few. These environmental factors certainly pose specific difficulties and challenges for the use of AI on the battlefield. So, as you can see, on the one hand we have to deal with the issues surrounding the integration and application of AI itself on the ground; on the other hand, we also need to consider its governance and the policies set up
by the different actors involved in developing and applying this technology. To that end, we have with us today a fantastic panel of experts to discuss both the technology and its impact on land warfare. I will start by introducing them very briefly. First, we are joined by Ms. Georgia Hinds, a legal adviser with the International Committee of the Red Cross; she has extensive expertise on the legal and humanitarian implications of autonomous weapons, AI and other new technologies of warfare. Thank you very much for joining us. Next we have Colonel Takagi of the Japan Ground Self-Defense Force, a fellow at the Hudson Institute; he has solid expertise in future warfare, the military use of AI, cognitive warfare, and the relationship between emerging technologies and operational concepts. Thank you very much, sir. Next we have Mr. Michael Depp, a research associate at the Center for a New American Security, with expertise in AI safety and stability. Welcome. And last but not least, we have Dr. Kang, the director of the Unmanned Vehicle Advanced Research Center at the Korea Aerospace Research Institute. Thank you very much for coming.

So without any further ado, I would like to turn first to Colonel Takagi and Dr. Kang with the first question: from your perspective, what are the applications of AI in the ground domain? In other words, what issues is AI aiming to solve within the ground domain, and how is AI intended to be applied for these issues to be solved?

First of all, I would like to ask what the historians of the 22nd century will write about the changes in warfare in the 21st century. I think the most revolutionary changes could be brought about by AI. AI enhances the human brain, whereas the weapons developed over human history have enhanced human muscles, eyes and ears. Compared to primeval humans, modern humans have acquired powerful killing power, are able to see their enemies a thousand miles away, and can communicate with their allies thousands of miles away. But for the first time in the history of human warfare, the brain is being enhanced; the changes brought about by AI will be unprecedented and distinct in human history. The fog of war, one of the essences of war according to Clausewitz, originated from the inherent human characteristics of fear and fatigue on the battlefield. With the accurate computational results of AI, even such a nature of warfare may change, while the difficulty of verifying and assuring AI becomes a new fog of war. We will now be facing a question concerning the very nature of warfare.

Here we need to consider what AI will be used for in future warfare. I think four types of military use of AI need to be considered: information processing, unmanned weapons, decision-making, and information warfare. The first is the use of AI for information processing. AI will be able to process larger volumes of information at higher speeds, for example images taken by satellites. What are the applications of AI-based information processing in the land domain? In recent conflicts the battlefield is said to have become transparent: the massive and fast processing of satellite imagery has enabled forces to get a detailed picture of enemy forces. In the land domain, soldiers, weapons and positions are usually disguised, and AI image processing could be used to spot them. The second is the use of AI for autonomous unmanned weapons. As AI develops, control systems will develop and unmanned weapons will become increasingly autonomous. Small drones have been used in large numbers in recent warfare. The airspace at altitudes of ten to several hundred meters, where small drones fly, has not been utilized by air forces; it is the army that will use small drones over these low altitudes for reconnaissance, attack, and observation of artillery fire. The third is the use of AI for military decision-making. With the development of AI, in the future AI may be able to handle the
planning and decision-making of military operations. What are the applications of AI-based decision-making in the land domain? The planning, decision-making and execution of operations in the land domain have been much more complex and time-consuming than in other domains. Land combat is the most complex domain: each soldier has to make his or her own decisions, and thousands or tens of thousands of soldiers, weapons and sensors must be synchronized. This requires a great deal of time for planning and coordination and has to overcome a lot of fog. In the future land domain, large language models will assist in the planning of military operations; AI will optimize targeting and logistics, and planning and decision-making will be much faster. The fourth is the use of AI for manipulating human cognition. AI can produce highly accurate disinformation in large amounts and at high speed, and the judgment and cognition of the humans receiving this information can be manipulated. In the land domain, each soldier is required to make his or her own decisions, so disinformation produced by AI can be used to mislead the cognition of each soldier. Thus AI will lead to improved information processing, the development of unmanned weapons, faster decision-making, and the manipulation of human cognition. In the land domain, I think this will apply to the identification of disguised soldiers, weapons and positions; the use of unmanned weapons for reconnaissance, attack and observation at low altitudes; decision-making in complex land operations; and the manipulation of soldiers' cognition. That is all, thank you.

Yes. In recent military conflicts, what we observe is that a lot of drones are being used on the battlefield, but what most impressed me is the way they are used. What I focus on is their use for conventional artillery fire control: with the drone data, forces calculate the exact position of the targets. Before the appearance of drones there was no way to achieve such accurate targeting, but with drone data they can. One retired Russian general told the media that the drone has turned conventional artillery into a precision-guided weapon. I think that is the most important point and the one we should look at most closely. On closer inspection, what we find is that it is not the drone alone; there are three or four different nodes in what I would call a nodal ecosystem. One is the drones, gathering data. In the field, the soldiers carry Android tablets or handheld notebooks, and with the drone data they calculate the fire-control data on the tablet. Another is the GIS software platform. These three actors create a kind of battlefield data network, and to connect this network they use satellite communications. With this they gather data in real time, analyze it, and make conventional artillery far more sophisticated and technologically advanced. With this kind of data network we can speak of data-centric warfare; with drones and unmanned vehicles it becomes, in a real sense, data-driven. However, I should say that currently this system has apparent limitations, because each node (the drone, the Android tablet, and even the GIS software) depends heavily on manual operation. It is not fully autonomous: the drone is tele-operated or navigated by waypoints. That is not AI technology; it is just a human operator who sets the waypoint data and uploads it to the drone.
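The tablet-side arithmetic in the workflow just described (turning a drone's observation into target coordinates and gun-laying data) can be sketched roughly as follows. This is a hypothetical illustration under flat-earth, no-ballistics assumptions; the function names are invented and this is not a description of any fielded system.

```python
import math

def target_position(drone_lat, drone_lon, bearing_deg, ground_dist_m):
    """Estimate target latitude/longitude from a drone's GPS fix plus a
    camera-derived bearing and ground distance (small-distance,
    flat-earth approximation)."""
    R = 6371000.0  # mean Earth radius in metres
    b = math.radians(bearing_deg)
    dlat = ground_dist_m * math.cos(b) / R                              # radians
    dlon = ground_dist_m * math.sin(b) / (R * math.cos(math.radians(drone_lat)))
    return drone_lat + math.degrees(dlat), drone_lon + math.degrees(dlon)

def firing_solution(gun_xy, target_xy):
    """Azimuth (degrees clockwise from grid north) and distance (metres)
    from a gun position to a target on a local flat grid: the kind of
    arithmetic a handheld tablet can run from shared drone data."""
    dx = target_xy[0] - gun_xy[0]   # east offset
    dy = target_xy[1] - gun_xy[1]   # north offset
    azimuth = math.degrees(math.atan2(dx, dy)) % 360.0
    return azimuth, math.hypot(dx, dy)
```

A real fire-control computer would add ballistic tables, meteorological corrections and map-datum handling; the point here is only how little on-tablet computation the manual workflow requires once the drone data is shared.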
With this kind of manually operated network system, the human soldiers who operate it carry an additional burden on the battlefield. So here is the question about AI technology on the battlefield: because this is a manually operated system, the human soldier who operates it is quite vulnerable to enemy fire. To protect the human soldiers we need AI technology: autonomously flying drones, and autonomous data processing on the tablets. In the current system the Android tablets are carried by the soldiers, but that function should move onto unmanned ground vehicles, autonomous ground vehicles that can process on board the data the battlefield needs. With that kind of data network we can share the processed data with all the entities engaged on the battlefield. That is why we need AI in this case. Thank you.

Thank you very much. I think that is a really good starting point for the later discussion: the four types of military use of AI on the battlefield, which are information processing, unmanned weapons, decision-making and information warfare. Building on that, Dr. Kang gave us a detailed description of why we really need AI, and debunked some myths: the idea is that AI saves human resources, but the reality can be the opposite, and yet that is exactly why we develop AI, to save human resources on the battlefield. From that starting point I would like to continue with some questions to Michael and Georgia. Overall, what would you say are the main advantages, but also the challenges, which are specific to the integration and implications of AI within the ground domain? Maybe Michael first.

Sure. I'll start with challenges, because it's more fun. I think the biggest challenge for AI integration in ground warfare is complexity; you talked about this in your opening remarks. When it comes to air warfare or naval warfare, you are essentially looking for one big ship or one plane in an expanse that is fairly uniform, and that is definitely not the case in ground combat. People are much smaller than ships; they can change themselves, act differently, do different things in order to evade detection. My boss Paul Scharre's new book Four Battlegrounds has a really fun anecdote about this: there was a camera designed to identify Marines moving across a field, and after a couple of days of it being trained to identify Marines, they asked the Marines to try to beat the system, and none of the Marines failed, because they were all able to find different ways to beat it. One pretended to be a tree, one hid in a box, one somersaulted around. It is a lot easier for humans to fool these systems than it is for ships or planes. Secondly, as I hinted, air and sea are fairly uniform in their terrain and the ground is not. I live near a forest; about a kilometer from my house there is a swamp, and on the other side of that swamp is a farm and some grassland. That is three entirely different terrain types requiring different operational patterns, different vehicles, all kinds of different things to operate in them, and they are all within walking distance. So developing AI systems that can handle the multitude of tasks facing ground warfare is going to be a lot more difficult than in some other domains. As for advantages, ground warfare benefits because it is
kind of the main theater, so advantages added in other theaters also impact ground warfare; it can take advantage of the integrations that will be mentioned in later panels. If you can have AI co-pilots that help pilots achieve air superiority, that is great for close air support, so ground warfare benefits directly. If you have ships shooting down missiles to protect themselves, they can also protect units on the ground. Any advantage and value-add you get in other domains can be stacked onto the ground domain.

Thank you for the great interventions; I think we have canvassed a lot of the issues already. From the outset, the ICRC, the organization I work for, comes at this primarily from an international humanitarian law perspective, the law of armed conflict. Of course, IHL doesn't explicitly prohibit AI, and the ICRC is certainly not opposed to new technologies of warfare and AI per se. You asked about potential advantages: if we take one of the applications that Colonel Takagi highlighted, the incorporation of artificial intelligence into decision support systems, or DSS, we can certainly see an argument for a situation where an AI-supported DSS could facilitate widespread information collection and analysis, and in that way support IHL compliance. IHL requires planners and decision-makers to do everything feasible to verify their targets, and that includes assessing all relevant and available sources of information. So if you have an AI-enabled DSS that provides greater situational awareness, it could potentially assist in reducing or minimizing civilian harm, which is an ongoing IHL obligation as well. From that perspective we see that as a possibility. But you also asked about challenges, and I think we need to be very alive to the challenges and potential risks. With an AI-supported DSS, these can arise purely by virtue of the system's limitations. Michael has highlighted the complexity of land warfare, and I think it is fair to say that no DSS can ever take into account the full spectrum of that operational reality. Even a very high-performing machine learning system is prone to fail when it encounters inputs that differ from the data it was trained and tested on. That is a very real, inherent uncertainty in these systems, and when you pair it with, say, an urban environment, where uncertainty about the location of a target by just a few meters has very real consequences for civilians and civilian objects, it becomes a real challenge for ground warfare. Even if you have a perfect system, which I would say is a big if, you can still have problems around human-machine interaction. There are phenomena such as automation bias, where the human over-relies on a machine output, and that poses problems purely from an IHL compliance perspective, because IHL requirements are not only concerned with the results of an attack. Take the IHL requirement of proportionality: it requires commanders and decision-makers to engage in a forward-looking judgment process, weighing the anticipated military advantage against the expected incidental civilian harm. You can use outputs such as a collateral damage estimate, but they have to feed into that exercise of human judgment which is crucial for IHL compliance. So on the AI-in-DSS piece, those are some of the key risks we see. Just very briefly, to highlight autonomous weapon systems, because I know this is another area you asked about that could be AI-enabled: autonomous weapon systems, when we talk about
them, are weapon systems that select and apply force to targets without human intervention. Many of those deployed today are largely simple, rules-based systems; the concern is around introducing a level of unpredictability. There is a well-recognized black-box phenomenon with many AI algorithms, where you might be able to say that nine times out of ten the system does X, but you cannot say why it does X. Whether that is a sufficient level of predictability for you to adequately control the effects of the system, which is a requirement under IHL, is an open question. It also creates challenges for the requirement that states review their weapons: under Additional Protocol I, Article 36, states have to review all new weapons to make sure they comply with IHL, and with international law more broadly. How you actually do that if you cannot explain why the system functions the way it does, and so cannot predict its functioning, is, I think, an open question. And I won't go into it, but there are some very real accountability challenges as well: if an unpredictable system behaves in a way that was not intended, you start to run into problems with bringing home accountability for anything that goes wrong. I'll leave it there.

Thank you very much. During this discussion we have already surfaced some really important elements which we will dive into in the later part of this discussion. Michael addressed some of the complexity the land domain entails, and it will be interesting to see how the following discussions, for example on the sea, complement these points. But I really want to dive further into the battlefield characteristics, so my question goes to Colonel Takagi again: how do you see AI changing land warfare and ground operations, and what impact will it have? More specifically, what elements are currently merely assumptions? Over to you.

As I mentioned, the changes brought about by AI can be divided into four categories: improved information processing, the development of unmanned weapons, faster decision-making, and the manipulation of human cognition. Let us consider AI-based information processing in future land combat. Even if soldiers, weapons and positions are disguised in traditional ways, AI image processing will be able to expose them. In the future land domain, no one will be able to hide; we must fight on a transparent battlefield. Will surprise attacks be possible on transparent battlefields? AI image processing will make physical surprise more difficult, and the concept of surprise itself may change; surprise may be carried out only in non-physical domains such as cyber and the electromagnetic spectrum. Conversely, we may be able to attack the AI itself: AI depends on training data, and if the training data is contaminated, the performance of the AI degrades. So even the concept of surprise changes. On the use of unmanned weapons: armies will be able to use the small, low-altitude airspace, and the land domain will become three-dimensional. Ground forces have been heavily influenced by terrain; a three-dimensional land domain will be less affected by it. This fundamentally changes the very concept of land warfare. Let us also look at decision-making in the future land domain. AI will assist the planning of military operations and optimize targeting and logistics, so planning and decision-making will be much faster. However, AI is prone to erroneous decisions, and it is difficult to verify the plans and decisions produced by AI. In the extreme conditions of warfare, there is a danger of instantly making critical decisions, such as starting a war; relying on AI for military decision-making would create that risk. Fourth, the manipulation of human cognition by AI: in the land domain each soldier is required to make his or her own decisions, and land combat will
be fought in a fragile environment where the perceptions of soldiers can be manipulated and influenced. So is this specific to the land domain? In the land domain the effects are not linear; there are complex interactions, sometimes resulting in butterfly effects. Also, unlike the maritime and air domains, the land domain is not an empty space but a human-centered area where humans fight directly. For this reason there is an inherent unpredictability arising from the fear and fatigue of each soldier, and a strong fog of war. The relationship between AI and the fog of war is not simple. If AI assists planning and optimizes targeting and logistics, it may speed up decision-making in complex land operations. AI is also free from fear and fatigue, and may overcome the unpredictability that stems from human nature. Faster decision-making and reduced unpredictability may be advantages of AI in the land domain; however, the unpredictability of AI itself is hard to verify. Thank you very much.

That was a detailed explanation of the three-dimensionality of the land domain, the danger of reliance on AI-based decisions, and the vulnerability of soldiers' cognition; and I think you emphasized the human factor in the end, which aligns with the scene-setting conversation we had this morning. Building on that, I would like to ask Michael and Dr. Kang more about human decision-making. The anticipated impacts have been explained well; for the impacts on the observe-orient-decide-act loop, otherwise known as the OODA loop, some questions remain. Can you explain a little more what changes you see emerging from the integration of AI within the loop in the ground domain? Maybe Michael first.

Sure, thank you for letting me go first. I think originally the thought, or a better word would be fear, was about robots closing the loop on the A, the act. On the other hand, as we have been talking about, the effect on the second O and the D, orient and decide, is something that is happening rapidly. I think there is a reason that the United States' most well-known attempt to integrate AI is Project Maven, which is a network of computers that takes in information and spits out targets to help you plan operations. So I think that is mostly where you are going to see a lot of upcoming innovation in AI integration; particularly as states think through how to put these technologies to use, they are going to focus on that O and that D, trying to roll back the fog of war as much as possible. I tend to be a little skeptical about how far they can roll it back, but I think the goal is to use AI to mass kinetic force, to enhance the force you already have.

As a developer, I think the OODA loop is quite helpful, because with OODA we can define each function of AI clearly. Recently we developed a software package in a modular way, so we can define the modules separately. The O, observe, is where machine learning and data-driven methods fit, for example; whereas the act or control part is a well-defined area, so we can use a conventional deterministic algorithm to control the drones and do not need any AI algorithms for that. Others can take different approaches, of course, but the point is that we do not need to mix everything together and make it complicated; with a modular approach we can make it simpler. For each module we can decide whether to build it with an AI algorithm, that is, data-driven, or with heuristic equations, or with deterministic approaches. For the decision-making part, I personally think it is better to use a more deterministic approach.
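The modular split described here, a swappable data-driven perception module sitting behind a fixed interface, with deliberately deterministic decide and control modules, can be sketched as a minimal toy. All names and logic below are invented for illustration; this is not a description of any actual system.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Track:
    position: Tuple[float, float]
    confidence: float  # the perception module's self-reported confidence

# Observe/orient: in practice this could be a learned detector; here it is a
# stub behind a fixed interface, so it can be swapped without touching the rest.
def perceive(sensor_frame: List[Tuple[float, float]]) -> List[Track]:
    return [Track(position=p, confidence=0.9) for p in sensor_frame]

# Decide: deliberately deterministic and auditable, plain threshold rules.
def decide(tracks: List[Track], min_confidence: float = 0.8) -> List[Track]:
    return [t for t in tracks if t.confidence >= min_confidence]

# Act/control: a conventional proportional controller, no AI needed.
def control(vehicle_pos: Tuple[float, float], target: Track,
            gain: float = 0.1) -> Tuple[float, float]:
    dx = target.position[0] - vehicle_pos[0]
    dy = target.position[1] - vehicle_pos[1]
    return (gain * dx, gain * dy)  # velocity command

def ooda_step(sensor_frame: List[Tuple[float, float]],
              vehicle_pos: Tuple[float, float]) -> Tuple[float, float]:
    tracks = decide(perceive(sensor_frame))  # data-driven in, rules out
    if not tracks:
        return (0.0, 0.0)  # nothing confident enough: hold position
    return control(vehicle_pos, tracks[0])
```

The design point is that only `perceive` would ever be data-driven; `decide` and `control` stay deterministic and therefore explainable and testable in isolation.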
For the decision-making part we need more transparency; it should be explainable, so a purely data-driven approach is not a good fit there. So what I am saying is that we can divide the system, make it modular, and approach each module in a different way, combining conventional technology and AI technologies. One bias we have when we use the OODA cycle: with the cycle terminology there is a tendency to consider it a serial process, sensing, then situational awareness, then decision-making, then control, as if these happen sequentially in time. In the real field they occur as a more simultaneous process, so it is better to make a clear distinction between the modules and let them feed back and feed forward to each other with live data. That is what we are doing in the development phase.

Thank you very much. I wanted to continue the conversation with Dr. Kang, because from a technical perspective you can provide an alternative view on operational concepts in the military, and it would be really useful for us to understand the technical perspective and how you work with the users, in this case the military.

Yes. What we should note first is that we have very different backgrounds: in training, in work culture, even in values. Even when we use the same words or sentences, we can attach very different contexts and meanings. For example, when we say "bus", many military personnel and ordinary people imagine the vehicle we use for transportation, while engineers like me think about the bus that transports data and power to other subsystems. So we have quite different images in our heads, and confusion can easily arise when we work together. What I want to highlight is one instance that happened recently in the robotics community. Last year the ROS community announced an open letter in which they rejected the use of their software for military weapon development. What they said is that they did not develop their software packages for military weapon systems; that was not their intention. More and more, military weapons depend on open-source software; there is ROS, DDS, PX4, ArduPilot and other open-source packages developed by many civilian volunteers, and because of these social issues they do not share the same values about military development. It was because the United States military wanted to adopt ROS as a standard software tool for developing robots and unmanned ground vehicles that they announced the open letter. So one question I want to raise is this: will AI technology really make the future world more cruel or destructive? I think that view is not grounded in concrete evidence. Look at the recent military conflict: we see many loitering munitions, kamikaze drones, being used on the
battlefield. On that battlefield we have two different kinds of drones. One is the Shahed, which is navigated by waypoints; its targets are predefined. The Shahed is not capable of precision targeting of battlefield objectives, because it cannot hit moving targets; battlefield targets move continuously, so it has been used mainly against civilian buildings and civilian infrastructure, which are essential for civilian daily life. The drones that use vision-based navigation, with higher AI functions, are used mainly against military targets. That gives us the idea that AI technology can help distinguish between civilian and military targets. So with the help of legal experts, the UN, and civilian entities, we can build a framework in which AI technology distinguishes civilian from military targets; as the previous session said, the stakeholders together can make AI technology work for humanity. If we want to work together, civilian engineers and scientists and the military community should share common values and a common vision, and that, fundamentally, is humanity. I think it is quite good for me to speak about that in this panel. Thank you.

That reminds me that just before I came to UNIDIR, we worked together on dual-use drone standardization: Dr. Kang came from the developer's perspective and I came from the military perspective, and it took a lot of time to coordinate and come up with one shared idea of how to develop this kind of system. With that, I would like to turn to Georgia. Given the rapid development of AI in the military domain, that technical development definitely has an impact from your legal perspective; I would really welcome your view.

Thanks. Yes, definitely; let me say something about how the law bears on this. If we look at the application I spoke about first, AI in decision support systems, because there are so many different applications of AI in the military, I think the responses are going to be different for each. The key for us is that militaries and decision-makers need to be using these tools to support and facilitate human decision-making rather than to replace it. We really push back on the idea that AI makes decisions; that is a false equivalence. Decisions are a human act; they require human judgment, and as I said, IHL requires that exercise of human judgment. Maintaining that space for human judgment is really key, and that has design and technical implications when you are talking about decision support systems. They need to be designed and used in such a way that the human can engage with them: engage with the inherent uncertainties in the system, with the assumptions its functioning is based on, and with the potential biases that will appear either in the data or in the output. In addition, users have to be able to challenge the output. This requires not only time, time for deliberation to actually challenge the output, but also greater situational awareness, so as not to place blind reliance on one particular DSS output. And I think that fits with existing
principles around intelligence gathering and cross-checking, so I don't think this is a new concept in that regard. Specifically on autonomous weapon systems and the integration of AI there, the ICRC has made some very specific recommendations to states, and this is on the legal side, for new international legally binding rules. And I think it has come out in this panel as well that there is a need for this commonality, a need for an agreed set of limitations rather than each state acting on its own at the national level. From the ICRC side we have recommended prohibiting unpredictable autonomous weapon systems, systems whose effects cannot be sufficiently understood, predicted and explained. This comes back to the idea of the black-box effect that I spoke about before, which is particularly apparent when you incorporate AI. It would probably cover all machine-learning-enabled systems that change their functioning after deployment; continuous machine learning, I think, would be very problematic from a predictability perspective. We have also recommended prohibiting anti-personnel use, so targeting humans with autonomous weapon systems. This comes back to what Dr Kang was saying: there are still very real questions about whether AI would ever be able to recognize protected categories of persons, so not only civilians who are not directly participating in hostilities but also combatants who are wounded or sick at that time. These are very complex determinations even for humans to make, so expecting that an artificial intelligence algorithm could do that is, I think, currently a little bit unrealistic. And quite apart from those prohibited categories, there are very real ethical concerns around the targeting of persons with machines. Coming back to this uncontrolled, cluttered, complex environment, particularly of ground warfare, but I
would say of warfare in general, there will still be lingering uncertainties about the operational environment that need to be controlled, and that is where we have recommended very strict constraints on use: around the types of targets, and you drew this out quite well, how different weapon systems are limited to different types of targets. For us, we would limit autonomous weapon systems to objects that are typically military objectives by nature, and limit the geographical scope, the duration and the scale of use. This will vary depending on the operation, but it may need to be incredibly tight due to that volatility of the environment and the situations of use: in urban environments, perhaps, using them only in situations where there are no civilians present. And maybe this leads us into the naval warfare discussion later on. I spoke earlier about human-machine interaction in the other application, but I think it is relevant to autonomous weapon systems too: allowing the ability for a human to effectively supervise and intervene. It is not necessarily control over the weapon system itself at all times, but essentially an overall control required by IHL that needs to be maintained with these different limitations. So that is where we are urging states to start negotiations, we have been for quite some time, and we see the need for an international law response on this.

Thank you very much. Is AI impacting the military domain by forcing faster innovation? Anyone can jump in.

I'm happy to take a stab at that. I think one area that numerous governments have been interested in is automating logistical systems. A common example, citing self-driving cars, has been the transportation of munitions from bases to forward operating areas, and possibly the collection of [inaudible] from forward operating areas back. So I think there is a very real connection between self-driving cars, or self-driving vehicles, and some of
the kind of logistical functions of military systems. That has been the area of most interest, and I think the most one-to-one application that I can think of.

Yes, and from the military side the most important thing is where the associated tools come from, and we really come at this from the recipient side. To make an unmanned autonomous system work properly, we need very well designed, well developed software tools to operate it, and those come mainly from the civilian side. That is the key point, I think.

Thank you very much. The other question is also about the rule of law: how can AI be used proactively by non-state human rights actors to improve accountability or the rule of law in conflict?

You have brought up the issue of accountability, and the question is directed at how AI can improve accountability, but I must say that as soon as I start talking about accountability, what I see are challenges. You have different mechanisms for holding not only individuals but states accountable in warfare, for either breaches or violations of international humanitarian law, or violations of human rights law where that is applicable. And I think that AI integration brings some very real questions around how you trace back an act. I said before that AI does not make decisions, and I think that is very true, but at what point do you lose, especially when we talk about individual criminal responsibility, at what point do you lose the required intention, the mens rea, that is a feature of international criminal law? It is an open question, for example, the extent to which recklessness can meet criminal thresholds. So if you blindly rely upon an AI-enabled decision support system to carry out an act that results in an IHL violation, is there intention? I'm not sure that that would always be answered in the affirmative, and I think if it's not,
then we end up with situations where we cannot trace back accountability. In terms of the positives, there is increased monitoring, increased information: in many conflicts now, and not only in conflicts but in other situations of violence as well, we have more information synthesized and analyzed than we perhaps ever have before, and accountability mechanisms can draw on some of the fruits of those inquiries. So I would say there is potential, and again it is about managing the risk, particularly the risks that arise when you are talking about information manipulation, and I think you raised this earlier in the applications of AI to information operations, to misinformation and disinformation. The more information you have that is analyzed and produced by AI, the greater the potential for things like misinformation or deliberate disinformation. So how to educate people to query, to question information that is promulgated, whilst also leveraging it for better accountability in conflicts, that is a dilemma. I don't know if anyone else wants to add.

The next question concerns private companies in security: who gets authorization, and what is their role in land warfare? What do you think about that?

I think, in comparison with the naval and air domains, the impact of the private sector on the land domain could be larger. When it comes to autonomous tools and products, and even the data: in the military they still use old software languages such as Ada, but these days we don't have many new developers who can deal with those legacy military coding languages. So I think that is the main driver: the military should adapt to the private sector more.

In particular, what implications does this have for future urban warfare?
Thanks. I think we have covered a little of this question already, in terms of the direct challenges of urban warfare and the inherent unpredictability in AI systems, whether at the decision-making level or elsewhere, but I think this question goes directly to autonomous vehicles, and perhaps they are also implying autonomous weapons. The key characteristic of urban environments is the interconnectedness of the systems there, and that leads to specific concerns around the reverberating effects of an attack. What you are attacking might, at that time, be a military objective by virtue of its use, but its destruction could have flow-on effects. We particularly see this when we start to talk about the infrastructure on which the civilian population is reliant. Even a hospital can be a military objective, with many qualifications and the requirement for warnings and things like this, but its destruction is not just about how many civilians are present in that hospital at that time; it is about the flow-on effect of destroying a service on which civilians rely, especially during times of armed conflict. And this holds true for destroying parts of power grids that might be supplying both military and civilian facilities. Dams are obviously a very topical example, but these kinds of facilities that civilians rely upon are, I think, especially prevalent in urban environments. When it comes to things like dams, there are specific protections for works and installations that may release dangerous forces, and so IHL does try to address these challenges in an urban environment. But even beyond the baseline requirements of IHL, there are policies about militaries taking into account those reverberating effects in these environments, and
considering another level of unpredictability with autonomy does not seem to fit with that obligation necessarily. I don't know if anyone else wants to come in.

Yes, but for example, satellite images provided evidence of war crimes in Bucha, Ukraine. So, as I mentioned, AI is also used to process large amounts of data from satellites and so on, and that can help address war crimes. So I think there are two sides of the coin.

Thank you very much. Today this panel focuses particularly on artificial intelligence and land warfare, which is a very tactical level to discuss, and sometimes it is really challenging for the audience to really deep-dive into discussion of the battlefield, so I have posed tough questions to all our speakers, but thank you very much for your answers. Maybe one more question: is there a particular type of actor to consider, for example non-state actors, private military companies, etc.? Do you have any thoughts on this? It relates to what we have discussed in this panel on AI applications, but the reason we framed this panel around artificial intelligence and land warfare is that for the land domain itself it might be really difficult to come up with an exact package of AI-enabled systems for military use. So maybe some questions on the land domain itself: could you elaborate on that?

Maybe just to come back on Michael's comments earlier, around where we are at in terms of incorporating autonomy specifically into capabilities for the land domain: I do think it is at a more nascent stage than in the naval and air domains that will be discussed later. I was at a military industry conference in London earlier this year that focused on land warfare, and what struck me was that most of the time these capabilities, even at the logistics-chain level, were still
running into trees and falling into potholes, and time after time the presenters essentially said it is probably better to just remotely pilot them for the time being, because it is so much harder to plan for all the eventualities. At the risk of becoming a broken record, I think the key point to be drawn out of this panel is the cluttered, complex, uncontrolled environment of ground warfare, and the extent to which autonomy is actually assisting at this point. Some of the cited benefits, such as greater precision and removing soldiers from the battlefield, I would say are actually achieved to the same degree, if not more, with remotely piloted systems. Being able to have eyes on the target with a remote capability, I can certainly see the advantages; but removing the human entirely, so that they do not choose the specific target, the particular time or the location of the strike, actually lessens precision and increases risks, not only for civilians and for combatants who are hors de combat, but it also increases the risk of a failure in the capability. I would say that is probably particularly acute in the land domain.

Yes, I completely agree with everything you just said, fantastic point, and I don't know why I took the mic because I'm just going to repeat more or less what you said. I do want to speak a little bit about the stakeholders question, which is: who is going to use this? Because AI is really expensive and really, really hard to implement, as you have noted, that very much limits who can use it effectively and who wants to use it effectively: militaries that are very well funded, that feel a little behind in one area and so want to use autonomy or AI to catch up to rivals, or that look at a task and ask, this person is driving this truck when they could be doing X;
where does removing the person not actually cause any serious issues? With the amount of cost and the amount of know-how required, it is a pretty sparse group of actors who can hit that very small intersection.

Yes, and the leader-follower logistics applications: I think that kind of application comes first, so we can divide the problem up and think about it in different ways.

Thank you very much. I think we are heading to the end of this discussion. This topic will remain really challenging, even given the specificities of this domain, but I hope you can join us for the rest of today's sessions. [Applause]
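The leader-follower ("lead and follow") logistics idea raised at the close of the panel can be illustrated with a minimal sketch. This is purely a hypothetical toy, not any military system's actual logic: the function name `follow_leader`, the breadcrumb trail, and the `step` and `standoff` parameters are all assumptions made for illustration. A follower vehicle retraces waypoints ("breadcrumbs") dropped by a leader vehicle while keeping a standoff distance from the leader itself.

```python
import math

def follow_leader(leader_path, step=1.0, standoff=3.0):
    """Toy simulation of a follower trailing a leader's breadcrumb trail.

    leader_path: list of (x, y) breadcrumbs dropped by the leader.
    The follower moves `step` units per tick toward the oldest
    unvisited breadcrumb, but halts once it is within `standoff`
    of the leader's latest position (to keep convoy spacing).
    Returns the follower's trajectory as a list of (x, y) points.
    """
    fx, fy = leader_path[0]          # follower starts at the leader's origin
    trajectory = [(fx, fy)]
    pending = list(leader_path[1:])  # breadcrumbs still to visit
    lx, ly = leader_path[-1]         # leader's current (final) position
    while pending:
        # maintain standoff from the leader itself
        if math.hypot(lx - fx, ly - fy) <= standoff:
            break
        tx, ty = pending[0]
        dx, dy = tx - fx, ty - fy
        dist = math.hypot(dx, dy)
        if dist <= step:             # breadcrumb reached: consume it
            fx, fy = tx, ty
            pending.pop(0)
        else:                        # move one step toward the breadcrumb
            fx += step * dx / dist
            fy += step * dy / dist
        trajectory.append((fx, fy))
    return trajectory
```

The appeal of this pattern, as the panel noted, is that only the leader needs full perception of the route; the follower solves the much easier problem of retracing a known-good path, which is why logistics convoys are often cited as a nearer-term autonomy application than targeting.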
Welcome back. I hope you can hear me okay online as well. I'm going to do my best to speak as loudly as I can into the microphone, and I invite the speakers here on the podium, on the stage with me, to do the same. Welcome again. As you know, I already introduced myself: I am Giacomo Persi Paoli, the head of UNIDIR's Security and Technology Programme, and I'm truly delighted to have the opportunity to moderate this panel on AI and naval warfare. Full disclosure: part of the excitement is due to my past life, where I spent 15 years in the Italian Navy, so as I was preparing for this panel many great memories came back, and I'm really glad to have the opportunity to discuss this very interesting topic with our guests today. I will introduce the panelists who will be helping us through this panel, which is going to be very different: the conversations you are going to hear are likely to be very different from the ones you heard in the context of land warfare, for obvious reasons. Nevertheless, I think it is really worth trying to reflect on, and perhaps build on, some of the things that were said by colleagues before us, to highlight even more where these differences are. I hope that all the online speakers are connected. I'll start by introducing the colleagues who are here on stage with me. On my left we have Lieutenant Shannon Cooper, who is a
legal officer at the Australian Department of Defence, and sitting next to her we have Mr Guy Carmeli, who is the R&D leader for the image processing department at Israel Aerospace Industries. Joining us online we have [inaudible], who is the commander of maritime operations and protection of the Blue Amazon Command of the Brazilian Navy, with Ms Jennifer Parker, deputy director for defence at the Australian Strategic Policy Institute, and last but certainly not least Mr Abhijit Singh, who is the head of the maritime policy initiative at the Observer Research Foundation in India, and a former Indian Navy officer. So, a very diverse and very interesting set of perspectives. Just to say a few words at the beginning of this panel: as I was preparing for it, two anecdotes from my time in the naval academy came back, and came back strongly. First, I remember a professor from our weapon systems engineering class who told us an anecdote: if you had asked the average sailor who stood lookout during World War II, in the late 1930s and the first part of the war, what they would need to do their job better, most of them would have said, we need bigger binoculars, when what they really needed was radar. That is an important lesson for two reasons, first and foremost about how deeply users are attached to the way they do what they do. [Microphone issues.] Can you hear me now online? Okay, let's just forget the microphones from the podium for a minute; I'm going to do a thirty-second summary of what I just said. As we think about the impact that technology can have in various domains of warfare, remember that those sailors would have said they needed bigger binoculars. The second point is a pair of key concepts: sea control, where everything that you do at sea in wartime is to make sure you have control; and if you
don't have control, then the other part of the strategy is sea denial: denying the sea to the adversary. [Inaudible.] So, with a few more minutes to confirm the microphones work, let me start with the colleagues here in the room.

Thanks. Being a military member and a legal officer, these are my personal views and don't represent the Australian Defence Force or the Australian Government, but thank you for the question. Some of the opportunities, if I start there: we have heard about decision support systems, situational awareness, weapons development, threat management and logistics, and there are huge opportunities for the Navy to engage with AI to assist us. And one of the challenges, because I'll keep it short, is that with the adoption of AI come risks, technical and legal, and how you articulate those into system design and testing, which is quite a challenge, but not something that cannot be overcome.

Yes. First of all, there are limitations: we have actually had problems with computation at sea, and we have actually stopped some research because of this. [Inaudible.] There is no infinite computing on a battleship, so much of the work relies on simulation: we simulate the environment and simplify the way we see the underwater experience. As an example, take autonomous drones landing on a ship's deck. Because of the sea motion, today the landing is close to random: at some point the pilots don't look, they just decide, and two seconds after they decide, they just land on the ship's
deck. But this process is random, and we want to move from a random landing to a guided landing, because we want to decide as much as we can. And the final area this can lead us to is independent intelligence, for vessels or submarines, because in some situations at sea you don't have a second chance; it's like space, you must build in redundancy. [Inaudible.] Designing a reward function for such an agent is a very complex issue, because the environment is very complex, and at the end of the process, with that combination of uncertainty, I'm not sure the agent will behave as intended, so I must agree that it is difficult to design a reward function like this.

Thank you. Have you seen operational requirements for navies evolving, and what problems could AI help to fix?

Good morning to everyone present. I'm sorry, I can't hear your question fully, but I think it is the same one we were discussing before. My compliments to Mr Giacomo and the other colleagues on this panel. I will talk about how AI can help navies and the challenges we are facing today. AI has revolutionized naval warfare by providing immense capabilities in some key areas, among them situational awareness, decision support and autonomous vehicles. AI processes sensor data from various sources, including satellites, sensors on ships and unmanned systems, to provide real-time and comprehensive situational awareness. It enables the identification of threats, the monitoring of maritime activities and the prediction of enemy movements, enhancing the effectiveness of naval operations, and we can talk about this in maritime security applications. On intelligent decision support: AI analyzes complex maritime scenarios, considering factors such as weather conditions, vessel movements and mission objectives, generates recommendations, and supports human commanders in making
informed decisions, optimizing mission planning, tactical maneuvering and resource allocation. On autonomous vehicles: AI enables the deployment of unmanned surface vessels and unmanned underwater vehicles that can operate autonomously. These autonomous vehicles perform surveillance, reconnaissance, mine clearance and intelligence-gathering tasks, navigating complex maritime environments and extending operational capabilities. On enhanced situational awareness, as we said regarding maritime security, AI plays a crucial role in processing and analyzing vast amounts of sensor data, resulting in improved situational awareness for naval forces. AI systems offer valuable assistance to human commanders by analyzing complex maritime scenarios and generating recommendations, as intelligent decision support across different kinds of systems on board ships. And on autonomous vehicles, AI facilitates the deployment and operation of unmanned surface vessels, aerial drones and underwater drones, providing capabilities in conflicts at sea. Thank you very much.

Thank you. Can you hear me now? Can anyone online hear me? Yes, intermittently. What a relief. Okay, Jennifer, why don't you go next?

So look, thank you, and hello, good evening to everybody there in Geneva and online. It is wonderful to see my previous colleague Shannon Cooper there in Geneva, and I'm disappointed I can't be there with you. From what I've heard, my comments are fairly similar. When we talk about trends in naval warfare, we talk about concepts like the speed of relevance; hypersonic capabilities and the challenges in the ability to detect, track and intercept; we talk about challenges in dealing with data, which is relevant to all spheres of warfare but is particularly constrained in naval warfare when you are dealing with big data, especially for targeting and things like that. Generally, ships have limited
communication abilities, which is again one of the challenges in dealing with AI. You are also talking about trends in the sizes of fleets: as warships have become more and more complex, industrial bases really struggle to keep up with building them. We have seen that in a number of countries, so you are seeing trends where countries are looking to uncrewed capabilities, and uncrewed autonomous capabilities, to try to deal with that and increase their fleet size. You are seeing a real focus on logistics support to vessels at sea and the opportunities AI offers there, and I think Lieutenant Cooper flagged that previously. Again, there are constraints in the crewing of vessels: as vessels have become more complex, crew sizes have grown, and that is a challenge for many navies, and one that navies are looking to AI to try to solve. So I think there are a lot of key trends that can be addressed by the integration of AI. It is important, though, when we talk about AI and naval warfare, not to think of this as a future thing. AI, or simple AI, has actually been in use on modern warships for quite a while; I'm sorry, I'm not sure if some of my colleagues have flagged this, I couldn't quite hear, but certainly most combat management systems today have some degree of simple AI, and most navigation systems and most engineering systems have some degree of simple AI. So it is important to understand that AI is not a new concept for naval warfare; rather, it is an evolutionary concept when you start to think about strong AI, or deep learning and machine learning, which is the next evolution and is viewed as a remedy for some of those key trends in naval warfare that I have talked about. Thank you.

Thank you, Jennifer. I would now like to give the floor to Abhijit, if you can hear me. You have the floor.

Thank you, Dr Paoli. I hope I'm audible. Yes, you are. Okay, thank you. At the outset, thank you very much for inviting me to this
excellent panel. I would have liked to be there in person, but unfortunately I was not able to make it; I'm very happy to be here virtually and speak to all of you. The point has been very well made that AI is an asset for maritime forces; it is seen to have invaluable benefits if incorporated in naval systems. The point I'd like to highlight, however, is that the reason AI is seen as this game-changing technology is, of course, partly on account of the significant benefits it gives navies in war-fighting conditions, and a lot of that has been brought out: it improves situational awareness, it makes decision-making easier and smoother, and it allows you to use autonomous vehicles in ways they have not been used so far, so it makes autonomous operations more efficient. But beyond that, the fact is that it makes day-to-day manned operations more efficient, and this is an emerging discourse within many maritime forces around the world: you don't want to go into areas that are contentious. For example, AI in lethal autonomous weapon systems is a contentious area; you are not sure whether a machine should be in a position to take the decision to engage a target in lethal ways. But the one thing there is enough consensus on is that AI should help us in day-to-day tasks such as logistics, supply-chain management, training and preventive maintenance. So in a lot of navies, including the Indian Navy that I was a part of until a couple of years ago, the emphasis is to bring AI in at the margins, see how things pan out, and then try to bring it into combat systems as well. As regards combat systems, I'd like to point out that the future maritime scenario we are looking at is likely to be very complex. The issue is that we are going to be facing a multiplicity of threats and challenges, not issues that we have not faced up
until now; many of the threats we will face tomorrow are threats we already face today, but the difference is that the scale and tempo at which these threats are going to come at us will be much bigger, more complex and more complicated than at the moment. So the appreciation is that hostile forces will use sophisticated technologies, and they will create in the littoral what navies call A2/AD bubbles, anti-access/area-denial bubbles, or strike-reconnaissance complexes, and it is going to be very hard to burst these bubbles, to penetrate these bastions of control. The only way a navy can be effective is if it stays outside but makes bursts into that bubble in combat situations, carries out its operations, and comes out. So the speed, scale and tempo at which these navies will need to react or respond in war-fighting situations will need to be very different from what we are engaged in at the moment. In other words, AI will offer us, or at least the promise is that tomorrow AI will offer us, a competitive advantage over adversaries, and that is the promise all navies are working towards. However, on balance, I would say that every navy is at a different point on the evolutionary curve. Some are much further along; their conversation is more robust, and their activities towards developing AI are more developed. In some other navies it is taking a while, because a lot of technologies are sourced from abroad: the source codes and data links etc. for tactical systems are not indigenously developed, which means there is a great dependence on technology from abroad, and the AI story has not developed very fast. So that's how it is, but AI surely is going to be a key technology for navies and maritime forces going forward. That's where I'd like to stop. Thank you.

Thank you very much for your insights. I
still hope you can hear me online. I think it is useful to reflect on the point you just mentioned, about how modern warfare is moving towards a competition in time and space, which has probably always been the case, but now space is getting bigger and time is getting shorter. So modern warfare becomes the pursuit of that window of opportunity that would allow military forces to engage and accomplish their mission. And, reflecting on this, at the beginning we heard about how difficult it is to interpret the image produced by a sonar. It's interesting, because hunting a submarine can take hours or days: you can spend 15, 16, 17 hours trying to find the submarine, and then once you find it, or if you don't, once the submarine finds you and releases a torpedo in the water, you have seconds to react. So you can have a crew that has been on a submarine hunt for two days, and then there is a torpedo in the water and they have seconds to react, otherwise it's done. So thinking about how technology can help support humans in doing their job better is definitely important. On this note, I have my second cluster of questions, which focuses on the specific technical considerations that go into integrating AI in the naval domain, on warships. People think of artificial intelligence systems as the model, but the model itself won't take you far: the model has to be run by a computing system. So there is an infrastructural footprint in terms of computing power, the size of the servers and computers that are needed, energy consumption, cooling requirements. This infrastructural and physical footprint is clearly important in the naval domain, because warships are not land bases where you can have underground floors full of servers and
and stacks of chips. Space is limited; there are potential limitations on energy (captains may have to make a decision over which systems to prioritize if, for example, the ship is hit and the available energy falls), cooling might be a problem, and so on, plus the physics of the ocean, whether you are on it or under it. So how does that work with the technology that we have today in the naval domain? I'd like to start with Guy.
Thank you. [The answer is partially inaudible.] ... A lot of applications are also implemented on mobile devices, so we can take complex algorithms and adapt them; at the end of the day we need to adapt the technology, maybe make some changes for a more efficient algorithm.
I'd just like to add one point. One of the technical challenges of the use of AI, particularly in the military domain and in naval warfare, is that it must be explainable. We heard from previous panelists this morning about responsible and accountable use of AI, but a technical challenge will also be the explainability of that use. Thank you.
I'd like now to ... [connection difficulties] ... I can't hear you very well online, but I'm happy to offer some comments on some of the technical challenges; give me a thumbs up if that's useful. Perfect. Look, I did hear some of the comments, certainly from Lieutenant Cooper, talking about trust and the ability to explain. Although not specifically a technical challenge, one of the biggest issues with the integration of AI in naval warfare is actually encouraging commanders to trust it and to want to rely on those systems, and that goes to
them understanding how those systems work, and the explainability Lieutenant Cooper was talking about. There are also some challenges, as you noted in your introduction, around the limitations that ships experience, from energy to communications, and I think those things will be limiting factors for the integration of some of these capabilities. I think it is also important to talk about how you intend to use AI. We use it as a very general term, but we are actually talking about a whole spectrum of capabilities, whether that is assisting in engineering diagnosis, assisting in navigation, or autonomously operating an uncrewed vessel, and it is important to understand which element of that you are talking about when you discuss the technical considerations. For example, with an uncrewed capability that is autonomously operated, which is what a lot of people think of when they think about AI in naval warfare as a force multiplier, there are a lot of technical challenges in actually operating those capabilities, and it is important to work out how you intend to operate it, where you intend to operate it, and what your concept of operations is for this capability. In naval warfare, for example, with an uncrewed surface vessel that is autonomous, there are lots of fears about lethal autonomous weapon systems, about autonomous ships up-threat with missiles on board, but there are actually a number of technical constraints in using that, such as how you maintain an AI-led autonomous vessel up-threat, how you provide the logistics support. And again, on underwater capabilities, I think I heard you mention in your intro the impact of different environments on some of these capabilities. It may be all well and good to design an uncrewed underwater vessel with autonomy that can go off and can do these things
without needing to have a person in the loop, or in fact on the loop, but how do you actually use that in naval warfare, and what are some of those technical constraints? For example, an autonomous underwater vehicle doing ISR is a key ISR capability, but how does it actually get that information back to someone to action it? How do you actually engage with that vessel to get it to take action on a target? Some people may say the point is that it's autonomous, it doesn't need to, but I don't think we are anywhere near that concept of unrestricted warfare where most commanders would actually allow that to occur. So beyond the technical considerations it also goes back to trust, and there are also, and I think we'll talk about this in the next session, a number of legal considerations around that. Thank you.
If you can hear me, can you go next? Yes, now I can hear you, thank you very much. I think your question is about the challenges and considerations of AI in naval warfare. Some points I can put up for debate. First, data quality and reliability: AI algorithms rely on high-quality and reliable data for accurate decision-making, so ensuring the availability, integrity and security of data sources is crucial to avoid biased or erroneous outcomes. Another point is human-machine collaboration: maintaining effective human-machine cooperation is critical; in an armed conflict, human operators must maintain control over AI systems, ensuring that critical decisions related to the use of force remain subject to human judgment, ethics and accountability. Another point is cybersecurity: the integration of AI in armed conflict at sea introduces new cybersecurity challenges; protecting AI systems and networks from cyber threats is vital to prevent unauthorized access, manipulation or disruption that could compromise mission success and endanger human lives. Addressing these challenges and considerations is crucial to maximize the benefits
of AI while minimizing risks and ensuring compliance with legal and ethical frameworks. Thank you. Can you please go next? Thank you. I'd like to make the point that there is this notion that AI can transform operations in a jiffy, that if we were all just to switch over to AI we would be so much more effective in the way we carry out operations, which is really not true, because AI is not a thing; it is not like a software that you can upload onto your systems so that the system begins to perform miracles. What really happens is that maritime forces look at a problem, a challenge, a mission set, and they look at ways in which better data collection and a move towards digital transformation can help deal with that problem much better, and all of this takes time. One of the panelists made the point that AI is incremental; it is not transformational in the way it is often portrayed to be. This is a technology that is going to evolve over a period of time, and the main constraint really is not so much the machine; it is the man behind the machine. I completely agree with the Admiral when he talks about the man-machine partnership, the man-machine coupling; that is key. The machine will act in the way it is asked to act, but the problem is that the human behind it is used to a certain culture. Armed forces are used to a certain culture of operation, and that culture does not feel very comfortable dealing with seamless operations and cross-cutting ways of dealing with data; we are somehow conditioned to act in silos. So when this digital transformation occurs, and I must make the point that AI really has a digital substrate that runs underneath it, if the men are not used to the idea of working in an integrated
digital environment, AI cannot be the success story that we want it to be. If that does not happen, if you don't get the men to change their ways, to unlearn and condition themselves again, it is going to be very hard to work with AI. But I also want to point out that there is a big debate within navies, and I've served in one, so I know for sure that at least in India this is an ongoing debate about how effective AI is going to be. There are proponents and there are critics of this new technology. The proponents, who are very tech-savvy, often point out that the digital transformation that is happening with AI is not going to remain limited to the civilian world outside; it is not going to remain limited to commerce, health and other areas; it is going to be a part of the military, and we had better change our ways, because our adversaries are going to include AI in their systems and their data collection, and their autonomy and decision-making are going to become much more superior going into the future. But the opponents say there is a need to audit AI, because AI has opportunity costs: you can make a huge investment in AI, but there is no saying what returns it will beget you. They also point to the fact that AI may not be survivable in war: the adversary is going to target our communication systems, our data-collecting ability, our communications, and will AI then be helpful? So we must not let go of our native skills. The better way to do things is to continue doing what you are doing but integrate AI, as I was saying earlier, on the margins; see how it can help you do the basic tactical stuff, and then wait before you include it within bigger operations, within your combat management systems, etc. I am not saying that the critics have better convincing power; I think that both sides
make persuasive arguments, but I am saying that it is not as if all the decision-makers, the decision-making elite, have an agreement over how much AI to incorporate and in what ways to incorporate it. I'd like to stop here, thank you.
Thank you, you raised some very valuable points there, and I'd like to reflect also on a kind of underpinning argument: this sense of inevitability, that AI is coming and there is nothing we can do to stop it, nothing we can do to control it or to prevent it from having effects we don't want. That is not actually true; we are still fully in control in terms of deciding to what extent a technology that may well be developed in labs and in companies gets taken up. Military forces, just because a technology is available on the market, are not going to blindly acquire and deploy it without thinking through all of the steps and consequences that integrating that technology could bring. We are talking about a sector that is probably one of the most self-regulated sectors that exists, from how you dress to how you eat, let alone how you integrate a technology this powerful into military capabilities. And there is also the point about skills that was mentioned. I think it is important, as we see AI potentially substituting human operators in the conduct of certain tasks, that we make sure we retain those skills, so that when AI fails, human operators can (a) be the layer of redundancy that is needed to ensure the operation continues, but (b) also be able to detect that the AI system is not operating as it should. So the skilling problem is a very important problem in all domains, but potentially in the naval one even more so, considering that when you are out at sea, potentially under very strict communication-control or emission-control regimes, you can't pick
up the phone and call the developer and say, hey, I need help; the resources you have on the ship are the resources that you have to fix the problem. So it is really important that skilling remains at the core of what we do. The last cluster of questions that I have focuses more on the legal element here. International humanitarian law is IHL, and it doesn't matter where you are; however, there might be some specific challenges when it comes to the application of IHL, or of international law, in the naval domain, particularly when artificial intelligence and autonomous systems are included. So I just wanted to ask all of our speakers to reflect, from their perspectives, on some of the unique legal considerations, if not necessarily challenges, that need to be taken into account when thinking about AI in the naval domain. Shannon, as you can expect, I'm going to start with you on this.
So one of the fundamental things in terms of legal risk is that the law holds states and individuals responsible rather than the AI technology itself, because a machine cannot be held accountable; it is the human command and control that will be held responsible and accountable for the actions and effects generated by that particular capability. From a legal perspective, legal reviews of systems that contain AI, such as decision-support systems, the problem-solving that could assist with many military operations, hold an important part in that, because it is not just assessing that the particular weapon or capability complies with international law, as you've just mentioned; it is the use of the weapon itself and how that use can comply with international humanitarian law. As I said, we've spoken about responsible, accountable and explainable use, and the application, or the review of the use, of these particular
capabilities, as I mentioned before, is where we can get the most traction on responsible, accountable and explainable use of AI in naval warfare. We spoke about the land domain and the differences between naval warfare and the maritime domain, and it is platform-based; the law of naval warfare is very old. We have different considerations in the naval and maritime domain in so far as, particularly for targeting purposes or engagement, or using ISR to identify, locate and track a particular capability, it is always platform-based, and any use of AI capabilities to do that will still need to comply with international law.
Okay, I'm not a legal advisor, but generally the engineers will give the solution, so I think someone needs to supervise this from the top, because these are legal matters; that's all I'll say on the spot on legal issues.
I'm just going back online and hope that this time, third time is the charm, you can hear me. If so, I'll start with you. Okay, thank you. I'd like to talk specifically about lethal autonomous weapon systems, because when we talk about the legal dimension this is important. LAWS can operate with varying degrees of autonomy, from limited to fully autonomous decision-making, and the application of the law requires context-specific judgments, taking into account the complexity of naval operations and the unique challenges of the maritime domain. Legal requirements must be continuously assessed and updated to address technological advancements and evolving operational realities. Also, on the open sea, on the surface and in the waters below, the assessment of the three most important principles of the law of targeting, distinction, proportionality and precaution, may not be the same for engagements as in urban scenarios, in view of the remote possibility of civilian presence in the former scenario. For instance, naval engagements at sea, including air combat, represent small risks to civilians, but the
principle of human control asserts that humans must retain ultimate control and responsibility over decisions involving the use of force. Human judgment, ethics and accountability are essential to ensure compliance with legal and ethical standards. Ensuring meaningful human control in the context of LAWS is crucial, and it is crucial to establish mechanisms that ensure meaningful human control; these mechanisms may include pre-programming the rules of engagement, real-time human oversight, and the ability to intervene or deactivate autonomous systems when necessary. Another point: to ensure compliance with legal and ethical standards, robust accountability mechanisms must be in place. These mechanisms should attribute responsibility for actions taken by LAWS and hold individuals or states accountable for violations of the law. In the end, the responsibility for the effects of a military operation, including those resulting from the use of this kind of weapon, will at least fall on the commanders at sea. And another point: the introduction of AI raises challenges in allocating legal responsibility for actions and consequences. Traditional legal frameworks attribute responsibility to human actors based on notions of intent, causation and foreseeability; with AI-driven decision-making, questions arise regarding the allocation of legal liability. Thank you.
Abhijit, I'll come to you next. Thank you. Look, notwithstanding the compelling narrative that surrounds artificial intelligence, it does pose some legal, ethical and doctrinal dilemmas. Let me talk a little bit first about the ethical dilemma. The development of artificial intelligence is characterized by what some have called a predisposition to certain kinds of data, and what they are implying is that there is a bias in the way the data is collected and exposed to the machines. So it is almost as if you are telling the machine to act in a certain way,
given our own prejudices, and so we force the machine to act in the ways that we want it to act, and so it is not really, in a manner of speaking, independent in the way it makes decisions. The critics of AI also allege that the probabilistic outcomes AI comes up with are flawed: AI acts on probability; it does not have a human's brain; it has a lot of data, it sees probabilities, it identifies one solution as the most probable, most likely solution, but the manner in which it reaches that conclusion, in which it infers that this is the most optimal solution, the critics say, is flawed, and that can muddle even human decision-making; it can be a detriment for the commander and can undermine combat solutions. These are all allegations, but I am just putting the debate out there for you to consider and to think about. Second, AI automates weapons in ways that are inconsistent with the laws of war. That is a point that has been alluded to already, but let me just underline it: technology is clinical; technology does not understand the laws of war; it does not understand humanity, necessity, proportionality, etc.; it just does what it has been trained to do, and that can sometimes be a problem. But the bigger problem from a naval perspective, navies being armed forces, is that there is a doctrinal dilemma: how do you incorporate AI-fuelled solutions into machines when the technology being used is at a nascent stage of development? The technologies we deal with now have fully matured, and yet we expect nascent AI to deliver like them; that can be a problem. Doctrine is premised on a traditional understanding of conflict, and war is a normative construct, and in this normative construct there are rules, codes, regulations and ethical standards that need to be followed, and it is
going to be very hard for AI to live up to these expectations. Lastly, the legal questions. The legal question is as follows; let me give you a hypothetical scenario. Say you have an AI-enabled unmanned drone outside the territorial waters of a state, and say it is peacetime, or semi-peacetime, and that drone, depending on its own appreciation of threats, engages a warship which is within the territorial waters of another country. What are the legal ramifications going to be? Is that unmanned drone legally correct in engaging a target within the territorial waters of another state? And what if the ship within the territorial waters actually responded against the mothership, and took on the mothership, which is again outside the territorial waters? So the point I am trying to make is that when it comes to the legal dimension, when it comes to UNCLOS, there are many grey areas about the operations of unmanned vehicles, and a lot of these unmanned vehicles are going to be powered by AI in the future. So we have got to be very clear about how the rules around this artificial intelligence are going to be developed, and only then can we take a decision on whether we should enable AI to take the decisions that so far human beings, command teams, have taken. That, I think, is the core of the anxiety in the strategic community in a number of countries and in the decision-maker space. I'm going to stop here, thanks.
Thank you. Jennifer, I'll come to you. Thank you, and look, when we're talking about legal considerations of AI in naval warfare, it is certainly a really fascinating space. I mean, the law of naval warfare itself, for the geeks anyway, is a fascinating space, and I think my
colleagues touched upon it: when you're talking about the concept of the law of naval warfare, you need to remember that the law of naval warfare has not effectively been updated in over 100 years. Some people refer to the San Remo Manual, but that is not international law as such. So of course, the law of naval warfare, the law of armed conflict, even UNCLOS, did not contemplate some of the technological advancements that we're experiencing today, and AI is one of the things they didn't contemplate. But it is also important to remember that that does not mean that international law, the law of armed conflict, the law of naval warfare, UNCLOS, doesn't apply to these capabilities; it does, and there are some key considerations that need to be thought through when you're thinking about the application of these capabilities. Lieutenant Cooper touched upon the need for weapons reviews: under the Additional Protocols there is a requirement for any weapon that is introduced to be reviewed, and that is still applicable to a weapon system where AI is integrated. There are concepts like the law of targeting, for example, under Additional Protocol I; again, principles such as distinction, proportionality and the obligation to take feasible precautions need to be factored into AI, and there are some views that certain elements of that could be written into AI algorithms, and that may well be the case; that is something that would need to be considered during the weapons review. It is difficult to see, though, how the element required for precautions in attack could be complied with by a fully autonomous AI weapon system. Specifically, and I'm not going to have the exact words here, but under the requirement of precautions in attack there is a requirement for a commander to be able to stop the
engagement if the target is deemed not to be a proper military objective. So there are some challenges under the current law of armed conflict around the use of AI, and some things probably can't be overcome in the near future. When you look at specific capabilities, one of my colleagues here touched upon the use of uncrewed surface vessels in some sort of autonomous mode relying on AI; there are also really specific considerations around whether they are belligerents. Under international law warships can be belligerents, but could an uncrewed surface vessel be a warship? When you look at the definition of warship, it needs to be commanded by a naval officer; are they commanded by a military officer? Are they in fact vessels? There is a fascinating conversation that the IMO is going through right now, looking at UNCLOS: could you call an uncrewed surface vessel a vessel by definition? It is hard, because definitions vary. So I think there are a lot of fascinating elements that don't necessarily prohibit AI in the future but certainly are considerations in the design and the implementation, whether it is the person on the loop, the person in the loop, or fully autonomous. And there is certainly a view among a lot of commentators that international law needs to evolve to take into account these technologies, and whilst that is entirely true, I think we are unlikely to be in a geopolitical circumstance in the near term that would allow the re-litigation of concepts such as UNCLOS or the law of naval warfare. Those remarkable times in history where you had agreement between a large number of states are just not reflected in our current circumstances. So in understanding some of those challenges and moving ahead, I think it is important for states to put out their declaratory policy on how they view these capabilities under
international law. It is important for discussions like this to try and establish an understanding of norms, of how these capabilities are and will be operated, and I think that is really key. One of the concerns with the integration of emerging technology into conflict, into naval warfare, is not understanding how an adversary will react, because there are no norms and it is not clear how these capabilities are viewed under international law. So countries putting out their views and establishing how they view these capabilities is really important in influencing how they will be reacted to. Thank you.
Thank you very much for these observations. We still have a few minutes before our lunch break, and I do have a couple of follow-up questions, both for you colleagues in the room and for those joining us online. First, back to you, Guy: you specialize in image recognition and image processing, including in the naval domain, and linking to what Jennifer was just mentioning about the principle of distinction and how important it is to be able to distinguish potential targets. In the naval domain, image recognition, even human-led image recognition, still plays a key role today. It has been a while since I left, but lookouts, people on the bridge, are always given silhouettes of vessels of interest, potential adversary ships, potential adversary aircraft, because ultimately there is no radar screen, yet at least, that can match the human eye when it comes to the positive identification of a target; from the command center they will always ask, do we have positive identification of the target? Now, this seems like a potential area where technology is developing at a pace that could soon significantly improve the capability of doing that. So my question to you is: how is the technology in that field evolving, potentially in
an environment like the naval domain, which isn't as cluttered as the land, as we've heard before? And then, potentially a follow-up for you, Shannon, from a legal perspective: is a positive identification of a target by a machine equally viable, or what would be the, quote-unquote, legal consequences or considerations of a captain taking action based on a positive identification of a target made by an artificial intelligence system?
Yes, just one word about your last question. I think that when you have an autonomous entity, it doesn't matter if it is a vessel or a submarine, at the end of the day there is a soldier, who probably just finished his course yesterday, who will use the autonomous abilities, so we actually need to see how to supervise it; it is very difficult. As I see the picture for the future, and we were talking about this today, we are always connected, everything is connected all the time, so there is a lot of synchronization between battleships, airplanes and land. With image processing capabilities we can analyze and detect a lot of objects at sea and operate different tools, like aircraft weapons and land weapons. So I think there is a good future in which we will be able to operate different tools, different weapons, from different sites. That's what I'm thinking.
In response to your question about positive identification and the use of AI capabilities: international law does actually contemplate the use of such capabilities; UNCLOS, for example, is capability-agnostic, and for hot pursuit the use of an uncrewed system to continue the visual identification is acceptable in international law. Likewise, a commander can use any intelligence product available at the time; they can use all relevant, available and reasonable information to positively identify a particular target as a military objective. There is no prohibition in
international law from the use of that particular capability to do so. As we discussed this morning in a previous panel, there is no blind reliance on the use of an AI capability; it will always have some kind of human command and control, whether that originates from the decision of a commander to employ a particular capability or goes down to the operator level when there is an application of the use of force. So the law itself is technology-agnostic, and there is no prohibition or prevention on using an AI capability for positive identification, because it will not be the only consideration. Thank you.
Okay, I think it is unfortunately time to wrap up this very interesting panel; I could go on for hours talking about the naval domain. To our speakers online, I extend my apologies for all the technical difficulties; we will keep working on these. Thank you so much to Mr Singh and Ms Parker online, and to Shannon and Guy here on stage, for sharing your insights, and thank you again; we'll see you after the lunch break. Thanks. [Applause]
Info
Channel: UNIDIR — the UN Institute for Disarmament Research
Views: 4,029
Id: KEF6jKatW2I
Length: 233min 57sec (14037 seconds)
Published: Tue Jun 27 2023