The geopolitics of transatlantic policymaking on Artificial Intelligence

Captions
Frances Burwell: Hello, I'm Fran Burwell, distinguished fellow at the Europe Center of the Atlantic Council in Washington, D.C. Good morning to those of you in the United States and good afternoon to our European colleagues. Welcome, everybody, to this discussion of the geopolitics of transatlantic policymaking on artificial intelligence. For the past few months the policy community has been buzzing about AI, its opportunities, its challenges, and the many results of individual explorations of its capabilities that have shown up online, including the Pope in a designer jacket. But for a number of years now AI, and the capacity to develop it, has been seen in the foreign policy world as a key element of national security, not only because of the capabilities it may bring but also for what it says about the strength of technological innovation in any particular country. Two countries have stood out in this race for AI: the United States and China. Both have sought to lead in the development and deployment of AI in the commercial and military spheres. But there is another actor on the AI stage, the European Union, which is very likely to become the first major regulator of AI, even though the EU and European companies have not yet been major developers of AI. Nevertheless, from the beginning of the Biden administration, AI has been a priority topic of consultations between the US and Europe. We have heard a lot about trustworthy and human-centric AI, and this brings up the topic of values, which in turn brings up the topic of alliances and partners: do we deal differently with AI in a values-driven partnership than in a relationship dominated by national security considerations? In this panel we are going to explore this triangle of the US, the EU, and China and consider whether and how cooperation on AI should lead to some form of multilateral governance. We are not going to focus so much on the technology itself but rather on its impacts and the questions it poses in the foreign policy world, especially for the transatlantic partnership and our like-minded partners.

We have a wonderful panel to discuss these issues. First, Dragoş Tudorache is a member of the European Parliament representing Romania; most importantly, he has been co-rapporteur of the EU AI Act, so he is very much involved in the Parliament's changes to that act and the current negotiations; in fact, I think he has a trilogue later today. Karan Bhatia is the global head of government affairs and public policy for Google; he has held a number of senior US government positions, including serving as Deputy US Trade Representative. Elizabeth Kelly serves as Special Assistant to the President for Economic Policy at the White House National Economic Council, where she handles the financial regulation and technology portfolios; she previously worked on the Biden-Harris transition team and in the Obama White House, and she has also worked in the financial sector. Kenton Thibaut is a resident fellow here at the Atlantic Council's Digital Forensic Research Lab, where she focuses on Chinese influence operations; she has also worked in the private sector on government relations for companies operating in China.

I want to let our audience know that your questions are very much welcome: please go to askAC.org, that's "ask AC" for Atlantic Council, where you can select this event and put your question into the mix. We look forward to hearing from you.

So let's start with Dragoş Tudorache. You have been, as co-rapporteur on the AI Act, central to
getting the draft to where it is now. What are your expectations for the final shape of the act? One is about timing: will we see it by the end of the year? But also, one of the key principles initially voiced was that it should be the uses of AI, rather than the system or the actual AI itself, that determine what is high risk and how it would be regulated. Yet after the release of ChatGPT you seem to have shifted a bit in discussing how to regulate foundational or generative AI. Would you explain a bit about why you made that shift, and also give us a brief overview of how you think this is going to turn out? Over to you.

Dragoş Tudorache: Well, thank you very much, Fran, and thanks to the Atlantic Council. Good morning to those on the US side and good afternoon to everyone else here on the European side. Starting maybe with the simple part, which is where we are in the process and what the expectations are for the calendar: you rightly said that we have later today what is the second trilogue, the second political negotiation between the three institutions, the Council and Parliament as co-legislators and the European Commission, which initiated the legislation. It is in fact the first substantial discussion; the first one, just two weeks ago, was just the opening show. We do expect to finish by the end of the year, which is certainly not going to be an easy process. Nothing has been easy about this text so far, and I don't expect it to get easier now. But I am confident, and not only myself, my co-rapporteur as well, that we can actually wrap it up by the end of the year, because we feel the sense of urgency here. As you mentioned, the whole world is abuzz with AI and with setting some sort of guardrails in place, and since we have the legislation on the table, we said we want to finish it. And since we also have elections next year, we all agree that this is something we have to get done before the elections and the campaign. So again, there is a shared understanding and a shared commitment across the three institutions and the political establishment here in Brussels that we have to wrap this up by November or December at the latest.

There are of course several big debates still to have at this stage. We in Parliament have gone quite a bit further on a number of points from the initial proposal of the Commission, not only on the list of prohibited practices, where there has been a lot of buzz about biometrics, about law enforcement in general and the deployment of AI in a law enforcement context, but also going all the way to the flavor of the day, which is generative AI and foundational models. These were not at all in the scope of the proposal two years ago, but we in Parliament felt this was something we simply could not afford to leave out of the text. And this brings me to the very concrete question you asked: why did we deviate somewhat from the overall philosophy of the AI Act? Because indeed, for those who have been following what we were doing up to now, and also for those who did not, our approach is to look at the risks of actual use cases of AI, not to regulate the technology itself. We said from the very beginning not only that it would be impossible, because technology constantly changes, so you can't really capture it and attach rules to it, but also because it would be wrong:
technology is neutral, it doesn't do anything per se; it is how you use it that makes it good or bad, and therefore you have to focus on those uses and focus on the risks. That, in simple terms, is the fundamental approach of the text. It is true that with foundational models we have somehow changed that, but we did so after very thoughtful consideration of the options. First, as with everything, we looked at the option of doing nothing, and we asked ourselves the question: politically, can we afford to do nothing? The Commission did nothing; the Council, after actually considering it for two presidencies in a row, eventually decided to do nothing. This was before ChatGPT, and I think that is an important remark to make. We in Parliament asked ourselves again: can we afford not to do anything about it? And our answer was no, simply because it was not only us; many experts, and in fact many of the companies themselves developing generative AI, were saying: listen, the potency of this technology is such that it can really go in directions that we as developers would not want to see it go, but in order to prevent that from happening we need some standards, because simply relying on our good faith or moral compass to keep us from unleashing this technology in those directions is not enough. We are in a competitive market; if my fellow competitor is doing more by decreasing the ethical standard, then why would I do something different if I want to stay competitive? So again, we have had quite a number of developers of foundational models saying we want a race to a standard rather than a race to the bottom. That is number one. Number two, we looked at impact, and we said: this is too impactful a technology, too versatile, too potent, not to try to put in place at least some basic rules about transparency and accountability in its development. We have deviated in the sense that we have said: yes, this is a type of AI that has some characteristics that need to be curtailed, in the way these models use data sets, the amount of data they are trained on, and the kind of content they can generate in all directions. And on that, and I'll stop in a second, Fran, we said: okay, let's imagine some rules that are specific to these features of generative AI. Let's not simply label it as high risk, because that was also an option; many said just label it as high risk, it's the simplest thing to do. But we did not want to do that, because again, I think it would have been wrong; there is a lot of generative AI that is not high risk in itself, so why would we necessarily characterize it as such? Instead we looked at what, in our view, any developer of generative AI would need to do. We started with transparency and stopped at some basic accountability rules that we thought were not too cumbersome, that are based on the state of the art, and that we believe companies could live with and could also put on the table for their customers, to gain trust and confidence in the use of this technology. So that is a bit of where we have been and where we are right now in approaching the negotiations, and sorry for maybe taking longer than I should have.

Frances Burwell: That's okay, it's a complicated subject. Let
me go over to Karan Bhatia. Karan, Google recently released Bard in Europe after a few days' delay, which I understand was to deal with some privacy-related issues and European privacy regulation. As you move towards the public release of AI systems, what is Google's approach to regulation in Europe? How do you see the impact of the AI Act on you? I understand there was a meeting between Google executives and high levels of the Commission, and there is talk of guidelines that would basically take the AI Act, or whatever its final result is, not into force but into guidance for the two years it will take to implement the act after it is passed. Is Google looking forward to participating in that? How are you approaching regulation in Europe?

Karan Bhatia: Great. Well, Fran, first of all, great to see you and everyone on the panel, and thanks to the Atlantic Council for hosting this. It is certainly, as you say, a very topical discussion. I think we have seen globally an explosion of interest, particularly spurred by generative AI, in the topic of regulation. Maybe I'll start by stepping back a little bit and pointing out that artificial intelligence isn't new. It's not new for us at Google; it's something we've been building into our products and services for years now. Those of you who use, say, Google Search and start typing a query and have the remainder of it suggested, what we call autocomplete: that's AI in action. Or when you are searching through Google Photos for a picture of a loved one or some particular scenery, that is AI in action. Or when you check your Gmail and realize that a lot of spam has been filtered out, that's also AI in action. The point being, I don't want to leave people with the sense that this is the beginning of how we, or other companies operating in the space, have been thinking about this; we've been doing that for quite a while. But there is no question that we are at an extraordinary moment here, in part because of the amount of regulatory and policy focus this is rightfully getting in Europe and around the world.

Your question was how we are thinking about regulation, or the intersection between government and the private sector here, and I think the way both sides need to think about it is with three words: bold, responsible, and together. Bold is simply a recognition of the extraordinary opportunities that artificial intelligence is going to offer economies, governments, countries, and individuals to better themselves and to grow. We are already utilizing this tool in ways that address some really fundamental challenges, things like climate change, human health, and food security, and we are just going to see those uses replicate and grow in number. I mention this because it is so important, as we think about regulation, which inevitably is going to be a balancing act, that we give ample focus and credence to the incredible opportunities this technology is creating and will create. So that's the bold side. On responsible: look, we as a company, from the outset, from when the AI Act in Europe was in its inception, have supported the existence of regulation. My boss, our CEO Sundar Pichai, has said AI is too important not to regulate and too important not to
regulate well. And look, in the absence of regulation, and I think MEP Tudorache had this exactly right, it has been left to companies to figure out what those guardrails are. So we, for instance, going back to 2018, adopted what we call our AI principles at Google. They are effectively a governance structure that dictates what we will and will not do with AI, and as a result of the application of those principles, and there is literally almost an internal judicial system at work, where ideas are presented and a council thinks about whether they could create this risk or that risk, we have not rolled out technologies that could be very helpful in some settings, because we were worried about what the opportunities for mischief or nefarious uses might be. So clearly it would be better if we could agree on those guardrails, and that takes us to point number three: together. This is beyond any individual government, or even small groups of governments, to deal with. Simply by virtue of how broadly distributed the technology already is, it is going to have to be done in a multilateral, multinational setting, and, equally importantly, it is going to have to involve many stakeholders. This is not a government-only endeavor, and it cannot be effective as one; it is going to have to involve the private sector and it is going to have to involve civil society. What we have said is that this is a twenty-first-century technology that requires twenty-first-century ideas about how to regulate. So, to your last question, about how we feel about things like voluntary codes of conduct: we are quite open to them, and indeed we have been engaging in a discussion around such a code with the US government. I know, as you reference, there have been discussions around that kind of approach being adopted in addition to the AI Act in Europe; we would certainly be interested in participating in those discussions. But I do think that sort of multi-stakeholder process is going to be a key feature of how this effectively gets regulated going forward. I'll stop there.

Frances Burwell: Great, thank you very much, and thanks for putting all that on the table. Elizabeth Kelly, the Biden administration has put forward some executive orders and other announcements about AI, most notably the Blueprint for an AI Bill of Rights. There has also been the NIST AI Risk Management Framework, which was recently released, and there have been some other steps and measures the administration has taken. What are the priorities for this administration in addressing AI, especially since to date we lack any legislation? What are the administration's thoughts about the opportunities AI presents, but also the challenges?

Elizabeth Kelly: Thanks, Fran. I'm grateful to be here today, and thank you to the Atlantic Council as well. As an administration, we firmly believe that AI is one of the most powerful tools of our time and will bring extraordinary benefits. It will help solve challenges like disease and climate change, and it holds the potential to improve productivity and drive growth. But as the president has said, to seize AI's opportunities we must first mitigate its risks. We must make sure that the US is the leader not only in developing this technology but also in managing both the current and potential risks that AI poses to safety, security, human and civil rights, privacy, jobs, and democratic values. The Biden
administration has acted to mitigate these risks and ensure that we have responsible innovation. As you mentioned, last October we proposed an AI Bill of Rights, followed by the NIST AI Risk Management Framework this January, to help make sure that core protections like cybersecurity, data privacy, and anti-discrimination are built into AI systems from the start. In February, President Biden issued an executive order directing federal agencies to root out bias in their design and use of AI, and just this May the president announced new AI R&D efforts to drive breakthroughs in critical areas like climate, agriculture, energy, public health, education, and cybersecurity. Of course, we recognize there is much more to be done, and we are engaging with stakeholders across industry, academia, labor, and civil society to gather the best ideas for how we can support responsible innovation while managing the risks of this revolutionary technology. We wholeheartedly agree this is not a government-only endeavor. At the same time, we must ready our government for the changes that AI will bring, and we must set rules of the road that further protect Americans' rights and safety from the risks of AI systems. President Biden and Vice President Harris are committed to doing their part, including potentially advancing new regulations and supporting new legislation, so that everyone can safely benefit from these technological innovations.

Frances Burwell: Thank you for that. Let me just follow up with you. We have seen quite a bit of cooperation between the US and the EU; in fact, as an outsider watching, I have seen a strong convergence, at least on the rhetorical level, in the documents that have come out of the US administration and how they often reflect wording we have seen in Europe, not in specific regulations, but in terms of values and ambitions. We were talking about the EU's AI Act before, and there has been a lot of discussion of AI in the US-EU Trade and Technology Council. At the last TTC there was actually mention, not in the statement but by Executive Vice President Vestager, of a US-EU code of conduct on AI; I don't know if that is a real proposal or a rhetorical flourish. How is the cooperation going? Much of the cooperation, to those of us watching, seems to be at a very basic level, in terms of definitions, taxonomies, and standards. What is the importance of that, and how do you see it moving forward?

Elizabeth Kelly: Thanks, Fran. I think we are incredibly heartened and excited about the great collaboration we have with the EU here. Transatlantic cooperation on the development and use of AI, with both governments and the private sector working together, is going to be absolutely critical to striking the right balance between AI's benefits and risks, and we have a shared desire to drive responsible, trustworthy, and ethical innovation with needed safeguards. I think we are all very encouraged by the progress we have made together. As you note, in December the US and EU agreed on the Trade and Technology Council's joint roadmap for trustworthy AI and risk management. That roadmap focused on advancing shared terminology and taxonomies, leadership and cooperation in international technical standards development, analysis and collection of tools for trustworthy AI and risk management, and monitoring and measuring existing and emerging AI risks. Just two months ago the US and EU unveiled the first deliverables from the implementation of the
joint roadmap. Specifically, we set up a working group of experts dedicated to work such as identifying standards and effective tools for advancing trustworthy AI, and we also committed to extend our work to include foundational models, including generative AI systems, considering their emergent behaviors and risks. At the same time, we are working together to deepen our collaboration with other partners; for example, as part of the joint roadmap work, our expert working groups will be identifying and developing opportunities for the US and EU to increase contributions to global standards development writ large.

Frances Burwell: Thanks very much. I think the standards development piece is actually one of the most important going forward, or could be, but we'll come back to that. Let me bring in Kenton Thibaut now. We have talked about the EU and the United States, but China is also a huge player in the AI world and has just released its own version of rules governing the use of AI. Can you give us a short description of those rules and what you see as the impact on China's policies, both its ability to innovate in the area and its role in the world in terms of global AI?

Kenton Thibaut: Sure, and thank you so much for having me here. It is really a pleasure to be here and to share the stage with all of these panelists. This latest document, released last Thursday, is the interim measures for the management of generative AI services. This is the next iteration, an update from the previous draft of China's rules to regulate AI services, which came out largely in response to the appearance of large-language-model-based generative and general-purpose AI services like ChatGPT. This latest draft actually clarifies that the measures are meant to regulate only public-facing technologies, whereas previous drafts did not have that clarification. And back to the point on innovation, this seems to really be the party responding to industry and academic inputs into the process: the last draft before this one was a draft for comments, and you see a lot of the comments and suggested changes actually being implemented in this interim document. So it seems the party is really responding to industry and academic inputs to allow more breathing room for the industry to innovate. The hurdles for public-facing services are by design extremely high: companies have to register their algorithms with the CAC, the Cyberspace Administration of China, and training data and model outputs are required to be "true and accurate," which is a pretty impossible standard for an LLM trained on data from the web to meet. The interim measures follow a series of regulations from 2021, which covered recommendation algorithms, and rules from 2022 covering deep fakes. These regulations are forming a body of work that will later form the regulatory basis for a national AI law, which, it was recently announced, I think in June, is slated to be drawn up before the end of the year and submitted for review to the National People's Congress. So that is coming down the pipeline soon. In terms of China's ability to innovate, it may first be helpful to trace the origin point of these measures, which stem largely from the party's perceived need to control recommendation algorithms and control public opinion. For example, just to illustrate
the interconnection between these two things: bureaucratic ownership of these regulations has so far sat with the Cyberspace Administration of China, which is responsible for regulating content online and for what is referred to as public opinion management, basically making sure that broader public opinion is shaped and guided in a direction favorable to the CCP. As an illustration of this point, the current director of the CAC also serves as vice minister of the propaganda department. And so we see in these new rules on generative AI that the requirements apply to public-facing services, but those are defined as services with "public opinion attributes or social mobilization capacity." China's entry into this regulatory space was first very narrowly focused on algorithms, which is born out of this classic authoritarian need for social legibility and the need to control public opinion, and in the documents for all of these regulations you also see requirements relating to services needing to maintain the correct ideological orientation as well. Following this narrow entry, China began to widen the field away from just algorithmic recommendations and to think about how to develop the industry, how to innovate. This was in response to the natural development of the technology and also in response to growing restrictions on China's ability to import core technologies. So they passed a series of reforms centered around breaking the bottlenecks caused by these restrictions, and around aligning industry, the bureaucracy, and academia and the scientific community behind this goal of really pushing innovation. There has been a big propaganda push around this on domestic social media, with Chinese government accounts heavily amplifying and promoting narratives online touting China's progress and prowess on AI, and equally heavily censoring discussions of China's reliance on US technology. For example, in March, censors on Weibo were heavily removing posts stating that China's new AI chatbot, Ernie Bot I believe it is called, copied US technologies. This is an effort to downplay and control public understanding or awareness of the US-China tech gap. So in sum, this new iteration of measures shows that while China is still very concerned about being quite strict on the public-facing side of things, due to that social mobilization capacity, the recent moves to develop and spur the industry, and the receptivity with which China is adjusting these regulations in response to feedback from industry, the scientific community, and academia, show it is really trying to shape the system around promoting innovation.

Frances Burwell: Kenton, let me just follow up very briefly. Is it possible to maintain this dichotomy between the public-facing side, where there are really strict rules and algorithms have to follow rules and everything must be, what was the phrase you used, "true and accurate," and another space where there can be more exploration, or fewer rules, in order to foster innovation? Is that dichotomy possible to maintain?

Kenton Thibaut: I don't know if it is possible to maintain. I think there are some analogs in the past that have shown that China is pretty good at separating out its business-to-business or government-to-business sector from the public-facing technologies, but I'd say that it's
definitely designed around fostering that ecosystem that is more business-to-business or government-to-business, really pushing innovation there, while very much controlling public access to these technologies.

Frances Burwell: Right. So let me go back to Dragoş. In the AI Act there are some things that were banned, as you mentioned at the top. I don't know that China is actually mentioned in the act, but it is clear that there are some concerns about the way governments, particularly non-democratic governments, might use AI, and I wondered if you could say a bit about that ambition in putting together the AI Act. You mentioned before that companies were concerned about an ethical race to the bottom with AI, but is it possible that this competition is not between companies but between governments? Can you still have an ethical race to the bottom? I also want to ask you a question from our audience: does the EU, in developing the AI regulation, have the ambition that it may have a de facto or an actual extraterritorial effect? This is the GDPR question, I would phrase it that way. So if you could speak to that as well.

Dragoş Tudorache: Well, I can't state for a fact whether the Commission, when drafting Article 5, which is the article with the prohibitions, had China in mind, but certainly some of the models that China is using to deploy artificial intelligence in public life are examples of how you do not want, or at least how we do not want, to see AI being used. Take social scoring, which is one of the applications of artificial intelligence expressly prohibited by the AI Act. Who does social scoring, and who has perfected the idea of social scoring powered by AI? China has. And I will very much link it to what Kenton was saying earlier: you have a party, you have a state, that is not trying to protect its citizens but is trying to protect itself from its citizens, from their opinions and their possible criticism of or attitudes towards what the state is doing. So to a large extent these are the best examples, and in fact I use them quite a lot to illustrate why it is important to prohibit certain use cases of AI. I use China and ask: is that the kind of society we want to have, is this the kind of relationship between citizens and the state that we want to have? The same stands true for public surveillance twenty-four seven in the streets using AI profiling algorithms. It is the same debate about biometric use. I always say we are not prohibiting biometric technology in Europe; that is a false claim to put forward. It is going to be used, as it is used right now, from security all the way to unlocking our phones or passing an e-gate at the airport; that will continue to be done. What we do not want to have in Europe, again, is AI being used for public surveillance on a massive scale, which is exactly what China is doing. And I can go on and on, to predictive policing and so forth. So if we want to understand why we have chosen here in Europe to say that some applications of AI are just so detrimental, in our view, to our values and to the way we understand the rights of our citizens that we simply do not want to have them on our market, then yes, a good example for understanding practically what that means is to look at China. Now, on the extraterritorial question, I'll make the link to the question from the audience about our
ambition. We don't have an ambition as such, and I always say, at least for myself, I don't know how other colleagues of mine feel, that from the very beginning we did not want to force a Brussels effect like GDPR had. I think there is a certain inevitability to it, by virtue of the fact that we are going to be the first ones with legislation adopted, and we are also a big enough market to matter, which means that yes, if you want to play in the European market you will have to develop according to our rules. Will that have an extraterritorial effect? Of course it will, direct or indirect; you won't be able to play in this market, you won't be able to put your services or your products on this market, unless you comply with the rules. But what we have to do, and now I come back to the very good triad of ideas we had earlier about being bold, being responsible, and doing it together, and I think that is the essence of what we are trying to do, is not to count only on that de facto Brussels effect; we want to develop a set of standards together. And I am glad to say that this afternoon in the trilogue the first thing I hope we are going to agree on is the approach to standards. That is a very good thing; it shows that we have alignment, and I think tonight we will have confirmation that we have alignment between the co-legislators on the approach to standards. That very much speaks to how we understand convergence and how we want to make it practical, how we want to bring it down to earth and have a bottom-up approach where those developing AI, the companies that know best what AI is and what it isn't, participate in the standard-making. That is something that is aligned, and is open to alignment, with other like-minded partners, particularly the US. The same stands true for how we draft definitions: the definition that we have inserted in our proposal in the Parliament is almost word for word the definition the OECD has adopted, and so on and so forth. So again, this drive towards doing it together is something we have put at the heart of our work in Parliament so far, and it will remain there until the very end of the process.

Frances Burwell: So let me go now to Karan Bhatia. Karan, as a major corporate player in this world, this is clearly a very complex environment, within Europe, within the United States, but also very much globally, with different rules and guidelines emerging. Do you see alignment developing on AI between certain parties? How does a company like Google navigate this type of complicated environment? What are the roles and responsibilities of the private sector? You mentioned your own internal code earlier. And do you see a path towards some kind of multilateral or global governance?

Karan Bhatia: Yeah, Fran, it is a complicated time, and unfortunately I would say the history of the last number of years does not suggest that alignment is easily reached, particularly in the technology policy space, frankly even in the transatlantic area, let alone when you expand it out to the entire world. You have often seen coordination happening after action is taken, or after regulation gets pushed forward, on both sides; by the way, I think the US is as guilty of this as other countries and institutions. On the other hand, I do think AI potentially offers an exciting
possibility here, because regulation in this space is new, it is nascent, and when you are starting from effectively a whiteboard, as opposed to bodies of law and regulation that have evolved over years or decades, it can be easier to reach that common understanding. But look, the reality is you have 190-plus countries in the world, you have AI in one form or another operating in all of them, and you are going to need a very proactive, very thoughtful approach by, honestly, Europe and the United States acting together and acting jointly. So we would love to see that happen. I will say that even for Google, and we are a big company with a lot of resources to throw at compliance, the explosion in the number of regulations we have seen, not AI-specific but just generally, privacy, content, you name it, around the world is extraordinary. I was looking at the numbers just for APAC, the Asia-Pacific region: over the last three years, more than 1,200 separate regulations were adopted in that region applicable to online platforms. That is an average of one new regulation a day governing some aspect of the product. I defy any industry to think about readjusting and changing its products on a daily basis to meet new rules. So what ends up happening, obviously, is that some companies will not cross borders, they will not invest in localized products, and that can't be the right outcome here. The last thing I'll mention quickly: we are spending a lot of time, rightfully, talking about government approaches to placing guardrails on the products, and again we are fully supportive of that, but I would also flag that there are bodies of law and regulation and government activity that are going to focus on the bold side of the equation. In other words, countries are already interested, and I was just traveling in the Asia-Pacific region and in the Middle East, I can tell you there is a lot of interest, in how to get to the front of the curve, to be at the forefront of the deployment of AI for industrial purposes, for agriculture, to be as competitive as possible. That is going to require some very thoughtful policymaking around how you create the right infrastructure, including human infrastructure, skills, and so forth. So I do think we need coordination on multiple fronts, to make sure we don't end up in a race that makes it harder for the companies that come from a tradition, a set of values, that we are going to want to see embedded in AI products, and that we don't restrain that advancement while opening the door to other deployments that we are not going to like as much.

Frances Burwell: Thank you for that. I actually want to pick up on the way you ended there, about companies that come from a heritage of democratic backgrounds, and how we don't want to see a race to the bottom with companies that are less concerned about some of those values. I'd like to turn to Elizabeth. The Biden administration has put out not only the things we talked about before, but there have been some indications of a desire to think about how you do this in terms of multilateral governance. When Prime Minister Sunak was here recently, there was talk of the UK summit on AI safety;
there has also been talk in the G7 about AI. How are you seeing that? When the US looks beyond the US-EU world, what do you see in terms of ensuring that companies that are rooted in a certain set of values, values we are pushing to maintain, are not wiped aside by other companies that have fewer restraints?

Elizabeth Kelly: Thanks, Fran. As you know, the US is very actively involved in ongoing activity at multilateral fora like the OECD, the UN, the Global Partnership on AI, the Council of Europe, international standards organizations, and the G7 Hiroshima AI process announced in May, and we welcome the prime minister's plans to host a global summit on AI safety in the UK and will be participating at a high level. I think the really important thing here is that the core values that underpin the United States' and Europe's cooperation are shared by many countries around the world. It is part of why we saw such enthusiasm for the commitments enshrined in the Declaration for the Future of the Internet, which now has many, many signatories in just a short period of time. We are looking forward to working with the EU, with other like-minded nations, and with industry and civil society to develop really adaptable approaches to AI regulation that are grounded in a full understanding of AI's opportunities and risks. To go back to Karan's point, we are excited about collaborating on R&D to help solve global challenges like climate, health, and food insecurity, and that too is an important multilateral effort. I think we share with many countries a lot of the same areas of concern around safety and security, including risks related to cybersecurity and potential risks to national security from intentional or unintentional use of these powerful tools, as well as a focus on civil rights, equity, and privacy. The vice president and the president have met on numerous occasions with leaders in civil society to talk about the challenges and opportunities AI brings and how we can work to jointly address those challenges. Another category is democracy: we know that AI tools can be used to spread deep fakes, which can erode trust in democracy and public institutions, and we share with many countries a desire to combat that risk. The last point I'd make, which has not come up and which I think is deeply important, is the potential impact of AI on jobs and the economy. While it has the potential to bring significant economic opportunity, growing the economy and increasing productivity, it could also cause labor force disruptions, including by automating certain jobs or being used in ways that restrict workers' autonomy. We are excited to work with our allies and partners around the world to help address all of those risks and leverage the opportunities.

Frances Burwell: Thank you for that. Kenton, let me come to you, because I think we have heard from the others a portrait of a future world in which AI is regulated, or has guardrails, to use Karan's phrase, where there are certain things that are not done. Without getting into the specifics of those, it is a very different vision of AI, or approach to AI, that you are describing in China. As we reach out and try to bring others into our club, if I can put it that way, we can certainly see China reaching out and assisting countries that are part of its network on AI and technology development, and
we can see the same from the US and the EU, potentially. So what is likely to be China's reaction to an effort to build more multilateral governance, things like the UK summit on AI safety, and to create a space where AI is used in a responsible way? Is that something they will ignore, because they can do what they want internally, or is it something that would be of concern?

Kenton Thibaut: Sure. I'll first speak a little bit to how China engages in this space more broadly. China largely prefers to engage on these issues through the United Nations and through its multilateral engagements in forums like BRICS. China is heavily engaged, for example, in standards organizations like the ISO and the ITU, and often engages in block voting to get proposals passed that are preferable for its companies. These ways that China engages are important because these discussions on AI, as you mentioned, go beyond China, the US, and the EU: the standards and regulations China develops will impact not only the deployment of technologies in China but also abroad, as China exports its technologies, including those relating to content control, use cases for suppressing online mobilization, and so on, to countries especially in the Global South. China sees itself as a leader of what is known as the G77 in the United Nations and has championed itself as a kind of representative for developing countries. In its messaging in Global South countries, China depicts European and American countries, largely as represented by the G7, as using measures of AI governance and cooperation on AI, including for example the US-UK Atlantic Declaration, as designed to sideline China and prevent it from participating in the formulation of technical standards on artificial intelligence. They are also quite critical, and by "they" I mean official statements and messaging to elites, of the idea of embedding democratic values in AI standards and rules, saying that it is basically designed to widen the gap between North and South countries and aggravate the unequal distribution of power. So China has called for itself to lead in the UN and to promote alternative values to be baked into AI governance standards. For example, China is promoting its own Global Data Security Initiative, a China-led multilateral and bilateral mechanism to foster standards and agreement on security issues related to AI technologies, and one of the standards baked into it is the idea that economic development is the basic human right. This idea is really meant to delegitimize universal and democratic values as the basis for defining human rights. So this is how China is engaging on the issue, and, to Karan's earlier comment about countries around the world wanting these technologies to meet real development needs, this messaging of development as a primary right really resonates in many parts of the world. At the same time, when thinking about what values are baked into these regulations: Xinjiang was an R&D ground for Chinese AI. It is not a values-neutral model, as we have discussed; it is based on data that was harvested in an exploitative and rights-violating way. At the same time, however, China is one of the leaders in AI research, including on key questions relating to governance technologies and the role of human decision-making in shaping and controlling the use of these technologies, and, as the
examples of the Global South show, its actions in this space are already shaping how AI is regulated, deployed, and developed, and will continue to do so. So, on a concluding note: in a transatlantic conversation where there are varying competitive and values interests, I think on the agenda should be whether there are areas of AI where China and the rest of the world can collaborate and have conversations, and whether there are ways democracies can do a better job of distinguishing where we should not be having conversations with China, for example on the open internet, and where we should, for example on questions of lethal autonomous weapons or where humans should have decision-making power over these technologies in a society, especially as these regulations and technologies spread and gain influence in other areas of the world.

Frances Burwell: I want to build on that and go to Dragoş to ask about the AI Act and the EU. If confronted with this very different ambition by China, this economic development model, using AI for that without regard for some of the other, more authoritarian uses and whether those are engaged in, what can the EU do, as, let's just say, the first regulator of AI in a formal sense? Is there anything you can do to combat that, or to fight back against it? And do you see any areas, given what Kenton has said, where an EU-US-China conversation might be useful on a specific element of AI?

Dragoş Tudorache: Well, maybe I will start from that, because I think it is quite an important conversation to have, and I think pretty much everyone has said that in this endeavor to do things together we have to go past the EU-US framework. That certainly has to work as well and as efficiently as possible, because it is important, but I think both the US and the EU have to understand that this conversation needs to become global, and truly global: not only like-minded partners, like-minded democracies out there that understand values and technology in the same way, but also the likes of China, and certainly that huge area in between of countries, let's call them the Global South, though not necessarily only the Global South, those that are looking at the democratic model but are also tempted by some of the authoritarian perks that come from using Chinese-built models and approaches. And why is that important? Because many are comparing this technology with atomic energy or with genetic engineering; it is that powerful, it is that important for our societies and our economies going forward in the next decades. And because it is that important, and because it is the same everywhere, in China, in the Global South, over here, we simply cannot afford not to engage in a global conversation. Now, what is the right framework for it? Does it exist, or do we have to invent one? I think that is something that has to preoccupy the minds of our political leaders, and not only the political leaders; it is an important conversation to have with the businesses as well. I very much welcome any framework, whether it is going to be the UK summit in the autumn, whether it is going to be the G7 Hiroshima process with a code of conduct, whatever framework, but let's pick one and let's all start investing in it. We can label it as we wish, but the important thing is that we create the
space where we start discussing the potential but also the risks of this technology for the years and decades to come, as a global effort. Again, and now I come back to the first part of the question, that does not mean that all of a sudden we are going to have alignment with China; that is not going to happen. We have to find the common denominator that brings even the likes of China into that conversation, but at the same time we have to look at what differentiates us, also economically, because your question was economic: how do we look at products and services developed with the Chinese model, with databases that are China-built, that might then come and compete on our markets? There, first of all, we have to make sure that we screen all of these very well, and I think the TTC is a good framework for that, but we have also pretty much built in antidotes on both sides over the last couple of months and years, and we have to continue to invest in those antidotes. But also, as I said earlier, the rules for staying in this market are going to be the same for Chinese companies as well, and that is where rules and standards will be important, because operating in an area without rules also opens up the possibility for Chinese companies, products, and services to enter the market and play there, building on competitive advantages obtained in ways that are not congruent with our values. So it is a serious argument for why regulation is important.

Frances Burwell: We have just a moment left, but I'd like to do a quick lightning round with all of you and ask: what is the next thing, in just one sentence, that needs to be done between the US and the EU, or in a more multilateral framework? How do we move from not being together to all being together? Let me start with you, Karan, sorry.

Karan Bhatia: Look, I think that dialogue is going to have to happen multilaterally, I think it is going to have to happen bilaterally, and I think it is going to have to happen, as Tudorache just said, very much involving the private sector as well. Bold, responsible, together, and let a thousand flowers bloom; something is hopefully going to come out of it while we continue to innovate.

Frances Burwell: Right. Elizabeth?

Elizabeth Kelly: I would agree. I think we need to build on our great US-EU cooperation to have much broader multilateral and bilateral conversations with a broader group of like-minded countries and other countries, as well as really bringing in the private sector and civil society to get this right.

Frances Burwell: Kenton?

Kenton Thibaut: Sure. I'd say maybe figuring out, in our conversations, where there should be more of a global conversation and where we should pretty much stick to like-minded countries on these issues, and I think we've covered that pretty well.

Frances Burwell: Right. And any last word, Dragoş?

Dragoş Tudorache: Well, as a politician I am looking with quite a lot of concern at the political turbulence ahead of us. We have elections here in Europe, there are elections in the US, there are elections in Mexico, there are elections in many, many jurisdictions around the world in the next year alone, and I think what we have to do is try to think of anchors that we put in place, that we invest in, that will keep us on this course irrespective of how these elections turn out. And I think that is where the private sector has a crucially important role, that is
where businesses have a role, because they have to remain the whisperers in our ears, keeping us on a steady course even if, for political reasons, we might want to veer off it at some point in time.

Frances Burwell: Well, thank you for that. I think we have a full plate ahead of us in terms of thinking about AI, and in terms of thinking about how AI works within Europe, between the United States and Europe, with like-minded partners, and perhaps even with China. So I want to thank all of you for participating, I want to thank our audience, and I apologize to those whose questions I could not get to. Thank you all so much for joining us today.
Info
Channel: AtlanticCouncil
Views: 937
Keywords: geopolitics, transatlantic, policymaking, AI, artificial intelligence, atlantic council, europe center, global regulations on AI, AI cooperation, Elizabeth Kelly, Dragoş Tudorache MEP, Karan Bhatia, Kenton Thibaut, Frances Burwell
Id: sErTBUTs6OI
Length: 61min 32sec (3692 seconds)
Published: Wed Jul 19 2023