Davos 2019 - Setting Rules for the AI Race

Captions
It's a pleasure to be here for this panel on setting the rules for the AI race, from the World Economic Forum and WIRED. I'm Nicholas Thompson, the editor in chief of WIRED, and I'd like to start by saying that this panel and conversation has been endorsed by both Angela Merkel and will.i.am. You heard Merkel's speech yesterday: very little about tech, but a little bit, where she said, we have AI coming and we need to set some rules for it. And then will.i.am at lunch said basically the same thing, slightly more poetically. So there are two seats; I don't know if they're coming, but let's keep them open.

I'm joined here by five of seriously the smartest people you could possibly have in this field. They're all CEOs, and they're also all authors. Actually, Jim, I think next year, with the badges, if somebody is both a CEO and an author, you could have a little purple circle around the black dot, or something cool like that. What I'm going to do is introduce them, and then we'll get going.

So, on my left: Kai-Fu Lee, one of the pioneers of artificial intelligence, who built some of the first speech recognition software. He is the chairman and founder of Sinovation Ventures. He is also the author of a new book, "AI Superpowers," that you should all read; it is both brilliant and very emotional, very deeply moving. So, Kai-Fu.

We have Amitabh Kant. He's the CEO of NITI Aayog, the National Institution for Transforming India. He's also the author of several books, including "Branding India: An Incredible Story" and "The Path Ahead: Transformative Ideas for India," and if you want to know what he thinks about AI in India, you can check his very popular Twitter feed, where today he has been tweeting up a storm about it.

We have Jim Hagemann Snabe. He is the chairman of Siemens and of Maersk; he's also on the board of the World Economic Forum, so he has set up this whole thing. And I was just looking up Siemens today: its net worth, 113 billion dollars, matches the number of social media followers that Kai-Fu has.

David
Siegel. He's the founder of Two Sigma, which is, I think, the most successful and profitable algorithmic trading firm in the world. That's not a bad thing. But what I also love about him is that you go to his office and he's got this array of old computers and toys; it's just a wonderland. So: an extraordinary, extraordinary man with a great mind.

Amy Webb. She is the chief executive officer of the Future Today Institute. She's the author of an upcoming book that you can pre-order, "The Big Nine," on the companies shaping AI, and she also ran a very successful technology experiment where she famously hacked online dating, met her spouse, and now has an eight-year-old child, proving that understanding technology can lead to the most wonderful outcomes.

So what we're going to do here today is talk about the debates over how AI could be regulated and what the rules should be. There are a lot of questions inside of this: questions about whether algorithms should be explainable, questions about how you get competition in the field of AI, questions about military applications of AI. The way I want to structure it is, to start, I'll go around the room and ask each of the five to come up with one issue that they think is interesting, either a narrow one or a broad one. Then we'll debate the issues, then we'll talk a little bit about the geopolitical consequences of those debates, and then we'll get into what we can do: what are the obligations of the companies, what are the obligations of individuals, what are the obligations of the state. Then we will also have audience questions; you can type them in and I will get them on my iPad. The instructions: go to wef.ch/ask. There will be about fifteen minutes of audience questions at the end. I should remind you that, contradictorily, this is both under Chatham House rules and being live streamed on social media accounts with millions of followers, so do with that what you will. All right, Amy, let's begin. So, quickly, what is an issue where
we could set rules on AI?

Sure. Well, I think it's useful to just quickly level-set: there's a tremendous amount of misplaced optimism and fear when it comes to AI. "AI" is a pretty meaningless term that even the AI community itself at this point disagrees on. But I think the key issue, not just within a regulatory conversation but in general, for us all to bear in mind, is that there are nine companies that control the future of AI, and as AI is the next era of computing, we ought to be paying attention to them. Six of them are in the United States; three of them are in China. The three that are in China are the BAT: Baidu, Alibaba, and Tencent. The six in the United States I like to call the G-MAFIA: Google, Microsoft, Amazon, IBM, Apple, and Facebook. And the challenge is that there's a relatively small number of people who are making decisions on behalf of us all, and it's not just software at this point: they are building the frameworks, they are building custom silicon, and every single company has to align itself with one of these big nine, and every single consumer at some point is touching one of those companies. The G-MAFIA are publicly traded companies, and as a result they are very much beholden to Wall Street; in the United States we have no regulations, so there's an antagonistic relationship between D.C. and the Valley. And in China, though Baidu, Alibaba, and Tencent are also independent companies, it's China, so they are tethered to the will of Beijing. And that, I think, sets humanity and democracy as we know it up for some challenges in the years to come.

I should note that all the members of that mafia are right now in the strategic partners lounge right there, so I think Amy will probably need someone else to badge her in. David?

Well, you know, I'll start with the old joke about AI research from back in the '80s, when I first got involved with this stuff. It was basically that once you got it to work in a computer, it was no longer
AI; it was just software. Which discouraged us, because we'd have something really cool going on and they'd say, oh, that's nothing, that's just some code you wrote. And really, just to level-set again: AI today is really just code running in a computer, and it's very hard for anyone, I think, to distinguish between regular software and AI software. So I think the issue is not what to do about AI; it's a broader issue, which is what to do about software, and in particular the intersection of software and big data, because we're really, in my opinion, more in a big-data era than an AI era. I'm not discounting the incredible capabilities that machine learning has and so forth, but I think in the end most of the big issues are actually more focused on data.

Okay, fair enough.

Yeah, so let me start with an assumption and an observation. First of all, I believe AI is arguably one of the most powerful technologies that we have developed, and it will continue to be so for a number of years, so we must pay attention to this. My observation is that, so far, we're not very intelligent in how we're using artificial intelligence. We're solving irrelevant problems, like how to beautify a picture of a cappuccino cup that I can then distribute to digital friends selected by some algorithm. We allow that to happen on platforms that create data monopolies and steal our privacy, and the value that comes out of that gets concentrated in very few hands, and they don't pay any taxes. That is not very intelligent. I think we're here in Davos because we have some very significant problems to solve, which go far beyond the distribution of a nice picture of a cappuccino, and I'm arguing that it's time we begin to leverage this technology to solve some of those fundamental problems. Now, what is the issue that we need to set rules, or at least principles, around? For me it is access to data, and how we leverage
platforms in open ways. Because we will need platforms, and there will be few of them, but if we don't allow everyone to access the platforms, and have equal access to the data, we will create a monster and not solve the problems that I am aiming at. So that's the issue that we need to solve, in my opinion.

Okay, Amitabh.

Unlike the West, where data is owned by the top six companies, and unlike China, where it's owned by Tencent and Alibaba, in India data is owned by public entities. Every Indian has a biometric, so you have over 1.3 billion biometrics. We have the new health scheme, which is providing health insurance to 500 million Indians; it's all digital. We have the new goods and services tax; it's all paperless, cashless, digital. Digital payments are all owned by public entities. So, because it's not owned by companies, we put out public data in the public domain for academicians and researchers to work on, and we put out data for young startups to work on. The challenge for India, to my mind, is very different from the challenges which the Western world is facing. Our view is that AI is going to have a very profound impact on the lives of citizens; it's going to transform the lives of countries in the years to come, and therefore you need to use artificial intelligence to transform the quality of life of human beings. You need to use this data for enhancing the productivity of agriculture, giving data to your farmers on a real-time basis depending on weather and soil conditions; for providing better data, better images, to your doctors, so they can deliver a better quality of life and better health outcomes; or for tracking individual students who are not performing well, longitudinally and latitudinally, like you're tracking Uber cars, so that you can improve learning outcomes across a vast number of states in India. So my view is that artificial intelligence needs to be used in a very, very scientific manner for transforming the lives of
citizens. And if you do that, you transform not merely the lives of 1.3 billion people in India, but of the 7.5 billion people who will be moving from poverty to the middle class in the next decade, and this will have a very, very profound impact on transforming the world.

Okay. I'm going to challenge two of your assumptions. The first is that, presumably, something is terribly wrong, so we need to have rules. And the sense that something is very wrong is a result of multiple things. Obviously there are incidents and errors by certain companies; there are events that shouldn't have happened. But largely there is a misunderstanding of what AI is, and there's paranoia, and hype, and there's science fiction; all of that has contributed to a degree of fear that I think is excessive. If you look at the actual value that AI has created, and can and will create, it is tremendous. Are there some things that need to be watched out for? Yes. But basically, when we talk about AI, we're talking about machine learning, largely deep learning, and what that is, is just a tool. It's a software tool: you pump a lot of data into it, it only works in one domain, humans set the objective function, and it does things pretty well. But it is not something that is all-capable, and the errors that are made are also often human errors. So it is about governing the people who are using it, rather than the algorithm; AI is just a tool. The other issue is that talking about rules presumes we can set rules, and also presumes that rules don't already exist. But rules already exist: AI applied to banking, there are banking rules; AI applied to vehicles, there are transportation rules. So let's first note that we have those, that we can use them, and that they have been proven over decades if not centuries, before moving on to new rules. I agree with David that there are some data rules that are probably at the center of this. And on that issue, I think it's important to note that different countries
and cultures have different views on whether rules should be made and, if so, what those rules should be. And that consensus, I think, needs to start with the entities that can make the rules, that is, each country; and then I think they can get together, share their findings, and improve everything. That's why I'm helping the World Economic Forum with the AI Council, which I think of more as mapping and sharing; the WEF certainly does not have the authority to set the rules, but I think we can share, so that there's less misunderstanding, hoping that more countries can talk and more companies can talk, leading to a better outcome.

Excellent. Well, I entirely agree with the last point, and I was going to challenge your assumption about my assumption: just because you want to set rules, or discuss rules, doesn't mean you think the thing is bad. I'm glad we have rules about water; water is undeniably good, and I'm glad they put fluoride in it. But okay, let's leave that one here. It seems, listening to all five of you, that one theme that came up a lot is concern about centralization in AI. In fact, Kai-Fu, in your book you write that there is a possible centralizing force: to the extent that AI, in some forms, in particular machine learning, is based on data, more data can lead to better AI, which can lead to you getting more data. So you could have a centralizing force. If we're basically all worried about that, what do we do? Macron has said we should have data sharing so that other companies can challenge the big companies; other Europeans have said we should pursue antitrust against the big companies.

So, "data" is kind of another meaningless word, and here's what I would say. If there has been a persistent theme at the Forum this year, it has to do with data governance, but oftentimes we talk about data at a very, very top level. Here's a concrete example of why rules and regulation... I'm not in favor of regulation, but here's why we have to shift our thinking.
The data, the corpora, that are used to train not just the machine learning and deep learning algorithms, but the people who are learning those algorithms, are a handful of common datasets. ImageNet is one of them. Humans created the images that are in this dataset, and it was a very small group of people, with a much smaller statistical worldview than a much bigger sample size might have had, and that image set itself is riddled with bias and is not representative of the whole. And newer corpora that are being used are pulled from Wikipedia. Why? Because Wikipedia is pre-structured data and you don't have to clean it. Nobody in this room may care about all the weeds of the data pieces, but here's why it matters: these are the systems that, on a very fine, minute level, are being trained to make teeny tiny decisions that govern your everyday lives. Everybody is looking for some big event horizon where the AI takes over and things go horribly wrong. We are already living with systems that make these decisions on our behalf. I drive a car; when I back that car up into my garage, it automatically turns the stereo down, because it senses objects around me. But I've never been in a car accident; I've never hit my garage backing in. Yet I no longer have any decision-making authority when I'm backing up my car. And that is because AI is not about finding a single solution; it is about optimization. That is why we have to think about some kind of framework, because if we just allow governments to make these decisions on our behalf, our governments are also optimizing, and the end result is an amalgam where honestly nobody wins and all of us lose, bit by bit. It's like getting a handful of paper cuts: over time your whole body winds up covered in paper cuts, and we're still alive, but we're living much different lives than we did before.

David?

You know, I
would say, in partial answer to your question, and to broaden it just a bit: before we start thinking about new rules, to Kai-Fu's point, we should start by really looking at whether or not we're using the old rules properly. You don't want to pile rules upon rules upon rules. So, for example, if we're worried that monopolies are going to form, well, we've dealt with that in the past, and we should just continue to think about monopolies the way we always have. When it comes to rules about how AI should work, look, I think what we're really worried about, to kind of clarify your opening question, is that people are becoming uncomfortable with computers automatically making decisions. I think that's very different from AI, and I don't think it's good to conflate the two. Now, remember, computers have been automatically making decisions for a long time. When you hop into an airplane and fly to Davos, there's a computer automatically making decisions, piloting the aircraft, and if that computer isn't programmed correctly, you might be dead. And when you use virtually any modern technology, computers are increasingly making decisions for you; some of them might very well be life-threatening, others might be more mundane. So the question is, how do you handle that? Well, we've handled it very well so far. With, for example, the computers controlling airplanes, there's a very in-depth certification process; you can't just roll some software out in an autopilot system and see if it works. The FAA and other organizations have developed extensive testing protocols for that problem. And then drones were invented, and it turns out drones can be misused, so now people are inventing rules for drones, which often have AI in them, to allow them to fly on their own. But it's not about the AI, or whatever's going on inside the controller and the drone;
it's specific to the drone. In those examples, though, the key difference is that those are systems that make automated decisions. This next era is systems making decisions, then learning, and then creating a next generation of autonomous decision-making.

I think you're assuming AI is going to be all-capable. AI is just a tool that learns on data. A lot of other hypotheses have been made, but so far all we have is a machine that learns on data; it doesn't really invent new capabilities, it doesn't learn new concepts, it is not creative. And the tasks that AI does for us, I'm happy to delegate to it. I like to think my life has a higher purpose than, you know, how my car gets there. I love the automation; I love the autopilots. Then I can spend my time creating new algorithms and loving the people I love, and these are not tasks that we were meant to do. So I think automation is great. Now, there are some risks involved, and we can look at those risks one at a time. Also, on the large amounts of data: I think that is very, very fixable. We tend to get fixated on, you know, Amazon made some mistakes in hiring, and some other company couldn't recognize African Americans. Those are simple mistakes that are very fixable by having a large dataset. Largeness partly fixes the problem; guaranteeing a balanced demographic match, I think, fixes the rest. And when we challenge an AI system by saying, well, you have all this bias, we should ask: do humans have less or more bias? I'm not condoning AI for having bias, but I guarantee that if we have the right dataset, on simple decisions there will definitely be less bias from AI systems than from the average person doing the same thing.

You have to fix those corpora, those datasets.

Yes, and my point is, it is fixable.

Yeah, but nobody's fixing them.

No, I think, you know, I'm just cautious that we shouldn't be naive about this technology. There are kind of two extremes which are concerning. The early extreme of a
new technology is that we don't know what we're dealing with, so we'll make mistakes because we don't know. When we invented the car, suddenly it ran pretty fast, and when it ran into people, people died, and so we had to figure out how to deal with that; now we have almost-autonomous vehicles, and we have airbags and what have you. And the other extreme is when you use the technology to its extreme: you want access to data, and data feeds the intelligence, but you lose your privacy. So where's the borderline between giving access to data and losing your privacy? You want to have platforms, because platforms allow you to collect and leverage the breadth of the platform and create gravity around it, but you don't want to create monopolies. So again, there's an extreme. And on the algorithms, I think you have a very important point: when they begin to self-learn, do you lose control? I do think that's a risk: we might have a confined area where we ask it to do a certain thing, but eventually you may lose control, because where's the limit to what it learns? So yes, that's a hard one.

Yeah, and bringing in too many rules and regulations ahead of time will stifle innovation. My view is that artificial intelligence is a different kind of challenge, because you're dealing with individuals' private data, one, and you're dealing with a black box of ethics in algorithms, two. And it's very important, because your objective is really to use artificial intelligence to transform humanity, to transform it for the benefit of all, and therefore to ensure that artificial intelligence doesn't remain an elitist force. It's very important to build a global alliance cutting across countries, cutting across companies, corporates, individuals, academicians, researchers, much like what was done for particle physics, when CERN came out with the broad principles, the
norms, the ethics, which were then subsequently followed by all countries and all corporates. So you need some principles, some guiding force, some broad norms, but too much regulation at this stage will stifle the big force of innovation.

One point on what everybody's getting at: the self-learning hypothesis. AI was started in 1956, so we are sixty-three years into AI. There has been one breakthrough, deep learning. That is huge; much of the commercial success is built on it, and it was invented, arguably, ten or eleven years ago. There hasn't been anything that is really a breakthrough in self-learning, so to presume that a breakthrough is coming, I think, is way over-optimistic. When we see signs of it, we can start to think about it.

Scientists are working on it. AI has been in some form of development for hundreds of years; there was an AI winter in the '80s; there's a resurgence now.

Yeah, I was in this.

Generative adversarial networks are one way that we've started to push forward, and so deepfake photos, fake videos...

I'm not talking about that. AlphaZero only works in a very controlled domain. What we're talking about today is a single algorithm capable of learning multiple things at once, multitask learning, all with absolutely concrete, definite feedback of right and wrong. It doesn't apply in financial markets, which is arguably closest to it; it doesn't apply in autonomous driving.

Yeah, so, sorry, I think much of the debate really comes down to a disagreement about how rapidly this technology is going to advance.

That's fair.

So I would agree that if RoboCop, or something like that, were five years away, then we would all have to start working really hard to solve the problem of what to do about that kind of potentially destructive technology. But, and I'm with you on this, I think the advances will be substantially slower than most people think. That doesn't mean that we won't
be able to make medical breakthroughs with machine learning, and maybe partially solve the self-driving-car problem with machine learning, and so on. There will be plenty of great advances; it's all going to be good. But we're not going to get, in my opinion, probably in our lifetimes, to this sort of diabolical state where the machines are taking over.

I don't think it will happen.

I don't want to spend too much time debating how fast this is going to go, because that's an awesome debate and it would take all our time. Let me go to one specific question where I think we might have a difference of opinion. One thing that artificial intelligence is really good at is image recognition, right? We all agree on that. One thing that image recognition is really useful for is drone warfare: a drone can identify a person, can see who they are. So there's the question of whether a drone with highly powerful artificial intelligence, manned entirely as a machine, without a human, should be able to make a kill decision, should be able to fire a missile at someone it identifies as someone who has killed various people. Ash Carter has said no, we would never do that in the United States military; Macron has said no, we would never do that in the French military; they have made very specific statements about that. But it sounds like maybe the two of you think that, look, there's an ethical question as to whether or not we should use drones to assassinate people, but should there be a human in the loop?

If there's a human in the loop, then yes.

Yes, that's why I'm saying we can't lose control.

Well, there are humans in the loop. If you say yes...

Oh yeah, there should be humans in the loop too, but all of these humans are writing the algorithms; they don't have to push the button. So we can want humans in the loop; there's also something called guardian algorithms. And one of the things we keep getting stuck on is immediate practical applications. I'm a quantitative futurist; my job is
to model the future using data. So while I'm fascinated by the practical applications, and I understand that there's a business incentive, I'm thinking much farther down the road. And one of the challenges is, we can want humans to be in the loop all we want, but in order for the advancements to be made to get to those practical applications, uncertainty is key, and we need systems to start behaving in unpredictable ways. Once we have systems behaving in unpredictable ways, we're not entirely sure where things might go next. You can want to keep a human in the loop, but the reality is we're already designing systems that are predicated on creating and maintaining unpredictability, so that we can learn from the results.

But back to the drone question specifically. Just to clarify your question: who is picking the target? Is a target identified by some human, and then the drone system is going to figure out who that is?

Perfect. It depends.

Okay. You know, remember, I just want to be clear to everyone that I think there are ethical issues with using drones this way, period. But I don't think it's a question of what the software is doing; I think it's a bigger question about whether drones should be used this way, period.

Yeah, I actually don't think we disagree. We obviously don't want that to happen, and I would think it is controlled by the broader laws that manage weapons and assassination.

Okay, so let's flip it. If we all agree you need a human in the loop for a kill decision...

I'm not sure I do.

No, I don't either. I'm not sure, because even if all the governments can agree to have a human in the loop, what about non-state actors? What about terrorists?

Well, we can't. So let's just assume we all agree drones are okay to assassinate people.

Yeah, so I don't agree with that, but... what if
I could prove to you, through scientific testing, that by using machine learning and some software I developed, it was much less likely that the AI-based system would make a mistake and kill innocent people than a human would? Which one would you pick?

That would... I mean, that makes it even more complicated.

You can flip it around and say, what if it's for missile defense? If the AI will definitely be better at missile defense and save lives, as opposed to killing people, you have a very different moral question. But it does seem like we disagree slightly on this, in interesting ways. So I'm going to go back to something that Amitabh said, and I think this is one of the key things in this conversation, which is data sharing and cooperation. We also seem to all agree that there can be ways that data should be shared, that datasets can be opened up. How could this work across companies, across countries? What is the role of government?

Well, I think datasets work best when they're built in a closed loop; I'm just stating a technical point. Just getting a lot of faces and speech isn't going to push you way ahead of the others. The reason the Amazon data and the Facebook data are so powerful is that they're taken in context and used in context for business. That's one thing we need to be aware of: that's these nine companies' advantage in business. But having said that, I think sharing data in a way that doesn't affect privacy is a great thing, because if we want these nine companies not to dominate, we want to give the smaller entrepreneurs and the universities a chance. So collecting datasets, following some rules so that we don't hurt people's privacy, I think, will advance research, and that would be a good thing.

And how do we do that? You know, I think that's a very important point, and this is the positive conversation here, rather than the one about killing people. It is the fact that we are
collecting data today, and I totally agree with you: once it's in context, the data gets so much more valuable. At Siemens we took the data scientists and put them in the factory where we actually design and build the trains, and because they sat next to the people who actually design and build the trains, the speed with which we understood the data was dramatically improved; it became much more relevant. Today we can predict, with 98 percent likelihood, ten days before a door jams, that it is going to jam, because the engineer was now part of the equation. So I totally agree. The point I'm making is that if we want to accelerate innovation, and we argue that AI can solve many of the problems of this world, sustainable energy, water, what have you, since most of the SDGs have a technical solution where AI plays a role, then, to accelerate innovation to solve those problems and improve humanity, we've got to share the data. We collect more data on physical things than most companies; I'm sure Google would give an arm and a leg to get to our data. But I would like to share it with everyone and not monopolize it, because if I do that, then more companies can build on top of my data and better my solutions.

Is there any data that you wouldn't want to share?

Well, there's a privacy issue, so I do want to make sure that if I share the data, I don't create a privacy issue. For instance, in healthcare, we have a lot of data on people's health; I would like to share that so that we can improve our therapies, but I don't want to share the information about who the patient is, or the record or the history of that patient. But I don't see a reason why we would want to monopolize the access to data. Now, being a businessman, that might sound crazy, but I actually believe it's super important that we set a set of rules whereby small companies also get access to data, because otherwise we will kill all the small
stuff.

Small stuff. So, just to give you an example from India, where a lot of iteration has been done on data held by public entities: Aadhaar, which is the biometric identity of all individuals, or digital payments, where all the banks are connected through the Unified Payments Interface. But we've opened it up. We've allowed Google to come in and do Google Pay; we've allowed WhatsApp to come in with a payment interface. Our belief is that data is like a public road: allow the private sector to come in and innovate on it, allow startups to come in and innovate on it. You can't allow your 400 startups working on artificial intelligence in India to be starved of data. So you open up all that data to them, and allow academicians and researchers to work on it. So our belief is that all data is held by public entities, but innovation is allowed to take place on top of it.

So you're saying all public data, all government data records, should be opened up in some anonymized way? Should the government also write to private companies saying, you must move your data over?

I think that is necessary, because of the network effect of the digital world.

And where am I in all this?

The ultimate beneficiary is the consumer. When you're providing medical records in an anonymized manner to a medical science institute which is going to do further research on them, the ultimate beneficiary is the consumer, the citizen.

I need to pause for a second to make a PSA: if you are in the audience or on the web, we'll move to audience questions soon. If you go to WEF dot c h dot ask, they'll appear on my iPad and you can vote up people's questions. Now back to the regularly scheduled programming.

Yeah, just a couple of things. This is where country restrictions come into play. In the United States, we have some companies working
on tremendous machine learning systems that can detect cancers, for example, strange cancers that nobody knows about. The problem is that those data are locked up in EMRs, which are electronic medical record systems. So while we do not have GDPR issues in the United States, we do have HIPAA compliance regulations, and everybody has their own proprietary system where those data are locked up. As a result, researchers have had to develop synthetic datasets, because you've got to train these things so that they continue to learn, and those synthetic data are created by people, and we've seen lackluster results at the end of that. But at the end of the day, we are the ones generating the real data, just by virtue of being alive in the year 2019. All of us sitting in this room are generating data right now, every single one of us, and we are all smart people, and most of us are not aware of what all of those data are, how they are being mined, refined, productized and monetized. Ultimately, a lot of the big players are funneling us into a system where they own all the data. So we are moving into a future in which we have personal data records, a single data set that is likely to be owned, again in the future, not today, by one or two entities, and we, the individual consumers, are not the custodians. Those data are not heritable, but they can be changed.

David? Well, I don't think you can answer this question in a very general way; I think it depends on the data. Current AI, deep learning, one form of it, requires very carefully labeled data for training; that's how the technology works. So creating the data sets for certain AI applications is extremely expensive. Imagine a company spends a hundred million dollars building a specialized data set to train an AI network, and now they have a great system for identifying something or other, maybe cancers in patient x-rays, who
knows what. So now, should the company have to contribute this data to the general public after it's spent 100 million bucks building the data set? Well, if that's the way it works, no one will invest in this.

Or could one new paradigm going forward, if we were to offer some solutions, be a rule, and I hesitate to say a rule, that part of that investment must include shoring up, creating those new data sets, with some risk and oversight and modeling to see what the next-order implications might look like? I know that to investors that's not a popular idea, but it's one hedge against some challenges that we could be facing in the future.

It sounds a little like socialism.

On the "how": I think health is a good area to discuss, because we all want that to improve, right? And I'm a cancer survivor. I will donate my data to any researcher, anonymized or not, and actually 90 percent of cancer survivors are willing to donate that data. But in many countries that's not possible; I think in the U.S. it's very difficult, and as a result there's just not enough data. So the countries that have rules like HIPAA should start to find ways so that when people willingly forgo privacy, they let them do so. If they don't, China and India will provide better solutions.

There's a principle you just mentioned which I think is a very fundamental one: you choose. Of course. And that's where I have a problem with today's world: you don't choose, you don't even know. Yes, and that's unacceptable, and we've got to grow up. So I think there is one paradigm when it comes to the consumer: the consumer actually decides. You can decide to opt in. At Allianz we have 40,000 cars basically connected with telemetry. We know how people drive, when they drive, how fast they accelerate, where they drive, how dangerous that is, but they all opted in for that. They get a monthly discount if they drive well; they don't get one if they don't drive well. It's
their choice. If you don't want to be part of this, no problem, you get the normal deal. And that, I think, is a principle; call it a rule or a principle, but if we applied principles like that, we would have a better use of data and AI.

I've got to cut you off, because we could stay on this forever, but I have to move to audience questions. We've got a lot of questions coming in, and we only have 20 minutes and 52 seconds. What are the consequences, this is from Johan, what are the consequences of outsourcing our moral decisions to AI? Does this undermine our moral decision-making?

Very complicated question. I want to take a crack at it without challenging the assumptions. I think there is a role for efficacy and results. So yes, one has to consider what things we want to delegate, and we can have a debate, but we also cannot ignore the outcome. Suppose we have an autonomous vehicle that makes very different decisions from people: it hits some people that humans would never hit, but it doesn't hit many, many more people that humans would hit, and the result is 50 percent fewer lives lost. But in every case where it hits someone, the human would say, wow, that was really dumb, why did you do that? So I think it is a worthy debate to have, rather than writing off the possibility of using AI in that case.

I have a solution to the problem, actually. I believe an algorithm should always be able to tell how it came up with its conclusion.

Explainability. This is going to get feisty; keep going.

Well, this is key, because then we actually don't lose control. In many of the examples we had earlier, the airplane and so on, there was actually a pilot as well who could take over if we wanted, and we are not at the stage yet where we should be giving up control. Not giving up control is about understanding how you came to that conclusion. If you understand the algorithm, you can fool any AI, and therefore you've got to be able
to ask the question, for any result you get from an AI: how did you come to that conclusion, what was your algorithm, and what were the likelihoods that caused you to come to that conclusion?

I have a few things to respond to on that. One is that then you have to basically give up on deep learning as we have it today, because it's beyond the state of the art right now to have explainability in these systems; it's just not technically known how. It's a research problem; we can fund AI research and hopefully we'll find a way to do it, but who knows, it might not exist.

Is it impossible? No? Can't you approximate something?

Well, but... No, it does not today exist in a form that you could use.

That it does not exist, is it impossible?

Anything's possible. The other point is, let's be careful about a double standard. Humans generally cannot explain their decision-making. Our brain is very complicated, and most of the time things just pop into your head. The words that I'm saying to you right now just popped into my head; I have no idea where they came from, really. This is the miracle of the human mind: it's amazingly sophisticated. And of course, after I make a decision, I can tell you why I made the decision, but that's just overfitting the data, some kind of rationalization. So even we can't explain most of the decisions we make.

The problem with these questions about morals, and there are like five big questions that everybody loves to talk about when they talk about AI, and morals is one of them, and the trolley problem often comes up, which is, who do we kill, the problem is that if you want to come to some kind of concrete plan for the future, we have to unpack these questions and get to a much more granular level of conversation. And on the question about morals: I am an American, I lived in Japan for many, many years, I lived in China for a while, and I can tell you that if you get a group of computer scientists
from those three countries together, I can guarantee you everybody's going to say, in the trolley problem situation, let's figure out a way not to kill people. However, when we're talking about optimization, you can't generalize. So here's a really quick one. I had a very, very close friend in Japan come up to me one day and say, you look like you've gained some weight, how much do you weigh? I was like, what are you talking about? And the reason she was asking me was not to chide me; it was a sign of affection. This was her showing she cared about me; she was concerned that I was sick. In the United States, by contrast, you can't even ask a woman if she's pregnant, right? These seem like sort of meaningless conversations to be having, but those kinds of questions are the questions. We have to get down to that level of detail when we're talking about autonomous decision-making and morals, and the problem is there's a ton, I mean a ton, to be thinking about. It's an important conversation to have, but we've got to get to some level of granularity when we have those conversations.

But should there be a principle that we should at least try to think through how to make algorithms explainable, to make whoever created the algorithm able to explain what it did, or the algorithm itself explain why it made a decision? Should we not try that?

We should try. Definitely try.

We should try, but you think it's hard. There's also an IP issue; this is not a conversation that people on the investment side would like to have. But I would argue that there's tangible value there for investors: there could theoretically, in the future, be a way to pursue explainability without bleeding IP.

Let me ask another audience question, which falls exactly on that: who gets to judge the robustness of explainability? The courts? That's a good
one. Well, if we think explainability is hard, let's try to do something first before we try to judge it.

Judge it for what? In a liability case? So again, I try to look at existing laws, existing rules and regulations. You have to consider normal product liability law: if a product is malfunctioning, how would that be handled today? If you buy a car today and your cruise control fails and you go ramming into the vehicle in front of you, how is that handled? You have sectoral rules and regulations existing already in different countries, and we should allow them to play out. I don't think you need universal norms and rules and regulations at this point in time; allow the sectors to evolve and grow.

I think it's fair to say the first step is to have a way of examining that there's not a software error or a machine error; those things are doable. But once it gets down to there being no other errors, it just made that decision, explain it, I think that's a hard problem.

I'm going to move to an even easier audience question, directed at you, Kai-Fu: what are the results of algorithmic decision-making for humanity and democracy? Let's leave humanity out and settle on democracy. Do you think the world we are heading into, in which AI becomes much more powerful and has much greater economic impact, makes democracy more likely in the world, less likely, or is it neutral?

I have not thought about the problem. I think one can make a case either way, right? You can argue that because some entity, whether a government or a company, becomes too powerful, it makes democracy more difficult. You could also argue that you have great tools to give people greater access, transparency, and understanding of how people think, and then people can have better platforms, using AI to have a better democracy. I mean, I think
these are somewhat distant things, but it's an important question to ask.

Let me rephrase, because I don't know what the audience member's intent was, but one question that is often asked is this: it looks like the United States is doing quite well in AI, but China is really prioritizing it and doing it really well. So a world in which AI is an extremely powerful economic engine is a world in which China rises. China is not democratic. Does that have an effect on the way the world is governed?

Well, those are two independent issues, right? Chinese AI became powerful not because of the form of government in China, but because of the large amount of data, a very competitive environment, winner-take-all dynamics, lots of venture capital. That's what caused it. And to draw a conclusion from strength in AI to complete dominance by a country, and then to its form of government, is adding a lot of things to the equation.

We can have a situation where, you know, data misused from Facebook led to influencing elections. Is that good for democracy? It comes back to my point about having an understanding of where and how data is being used, and a level of transparency on how algorithms come to conclusions, because at the end of the day democracy depends on trust, and if we lose trust because this is getting out of control, we will lose democracy as well. That's why I think those safeguards matter.

You know, India is a great example of a very lively, vibrant democracy: huge debate, and we've had a massive debate about privacy of data. It went right up to the highest court, about biometrics, about the use of individuals' data, and that has been held legally valid by the Supreme Court. My belief is that any democratic form of government, because the pressure on the government to perform within a five-year term is enormous, is going to use technology to change the lives of the citizens and the people of that country. And I find in our
country, state governments and the central government are all pushing for the use of technology and artificial intelligence to change agriculture, to change health, to change education, all of them. So the pressure on democratic forms of government to use technology will be much greater.

We don't have an answer to that question; however, we have optimistic, pragmatic and catastrophic scenarios. A challenge when we think about any technology is that we tend to think of it in silos. So if we think about China, we would have to think about the future of AI as it relates to the BRI, the Belt and Road Initiative, and the 58 countries that are currently part of the Digital Silk Road. We've got a country heavily investing in things like actual 5G, not the made-up 5G that we talk about in the U.S., and in fiber, and also in the means for collecting, mining, refining and productizing data. And if we think about that within an even broader framework: 2049 is the hundred-year anniversary of the CCP. We have a president in China who I think is brilliant and who is also effectively president for life because of so many rule changes. We have significant investment at the very top levels. And ultimately, I think one direction things could head is a new alignment, a new world order, with different countries and different interests moving in one direction, and a bunch of countries moving in a different direction, with autonomous decision-making and some of this data mining and refining as the connective tissue. We don't know what it all looks like, but this is why we have to ask the questions. We're not going to have answers right now, but we can model out what could be plausible.

Real quick, I would worry about democracy. I don't think it's AI that is going to undermine it; I think it's just the internet broadly, as it's being used. The internet is of course wonderful, and there's no going back, but one thing that the
internet did was dramatically lower the cost of communication, so basically anyone can be a publisher, and you can connect one person to many at virtually no cost through various platforms. The internet has made it easy for all sorts of information, real, in between, and totally made up, to be spread throughout the planet, which is leading to a loss of credibility for the news media. No one knows what to believe, and this is what's undermining not only democracy but any system.

Jim, let me ask you a question about Europe. In the conversations we've been having about different ways of regulating and thinking about AI, the strongest advocates tend to be the European leaders: when it comes to arguments about data sharing, about explainability, about breaking up tech companies. Is it possible that Europe will, in the near future, in the next five to ten years, pass a whole series of rules and regulations on AI that slows the industry down so much that there is never a large AI company from here? Because you'll notice, of the nine big AI companies, none of them is in Europe.

Well, I think it's fair to say that Europe is in a pretty bad state when it comes to this problem. First of all, until recently we had 26 different data protection regimes, and it is still a country-by-country decision what data you can share with whom. Actually, until recently it was not even allowed to take a Tesla and drive over the border from Germany to other countries, because then you would be taking data that was produced in Germany into another country. This has now been changed, but we still have a situation where the U.S.
arguably has the biggest platforms offering AI algorithms if you donate your data, we have China with the biggest data pool and probably more central governance around it, and so Europe is the loser in that game. I think there will be a need to pull back and set some rules or principles for how data is used; that could be Europe's comeback. But if you over-regulate, you kill innovation in Europe, and that would be a disaster. So this is a dilemma that Europe has, but I think the whole world has that dilemma. And we use the word rules and so on, but I think it's about simple principles. I'm looking for the adult principles: do not misuse data, don't take away people's privacy, be responsible in how you go about using these data, create transparency in what you do. And eventually, I hope, we will grow up and trust the companies that have adult ways of dealing with this, and kill those that don't.

You've seen the power of technology to really, truly transform India: a billion mobiles, a billion biometrics, a billion bank accounts. India used to be a very inefficient country; just 15 percent of spending used to reach the ultimate beneficiary. Suddenly, because you're transferring government money straight into the bank account of the beneficiary, and he's withdrawing it using his biometric through his mobile, it's transforming the lives of citizens. And digital payments: India is not using debit cards and credit cards; it's all mobile banking. So to my mind, bringing in rules and regulations too early in the game will really hamper economic growth, it'll hamper technology, it'll hamper the lives of citizens.

All right, we have five minutes left. What I'd like to do is go around the circle, one minute each, on what you think we should be thinking about from all the things we've discussed. What should we think about in the next year? Because surely we're going to be having this
conversation again next year. What is the thing that has stuck out for you from this conversation that we should continue to think about and try to think through? We'll go around the room, and that'll be rapid.

Yeah, I think we need to really broadly educate people on what AI is capable of and not capable of, and on which things we might be able to have rules about and which not. I think there's too much hype and paranoia out there, and once that is done, I think there can be more rational progress. I also think we have to understand and accept that there are different views, not just among us but among countries, as Amy said, and that trying to find one universal answer isn't the solution. It's about bringing in new ideas, letting each country and each company try things and share best practices. So in a year, hopefully, both the WEF AI Council and this forum will have some good news and some good practices that people are willing to share, because ultimately a company or a country can't be forced to do something; it has to see efficacy, benefits or benign goals in order to have the self-discipline to do it.

So I think you need a great global alliance, a great global alliance of countries, corporates, academicians and researchers, to start debating some of these issues and learning from the best practices. It can't be done in one televised debate and discussion; it requires a whole lot of debate and discussion on issues of privacy of data, on ethics and morality, and on technology. And I think learning from each other's best practices is how we will learn to make AI the really, truly transformational force to change the lives of citizens.

So I am truly excited about the technology, and when I'm cautious, it's because I see how far it could go potentially. I have great respect for exponential curves; we tend to overestimate things in the beginning but then underestimate them at the end. We still have a
chance to actually get this one right. For me, it's about using the technology right, and that is not about what's technically possible; it's about what's desirable from a societal point of view. So my hope is, number one, that we start using this technology to solve relevant problems. We talked about healthcare; there are so many other relevant problems to solve. Let's accelerate that. Secondly, we should try to use this technology to enhance human capability, not replace it. We haven't even talked about the job implications if we just replace humans instead of enhancing them. And finally, I hope to see more responsible leadership around this technology, the data and the platforms associated with it.

I think it will be hard to find agreement on many of the topics that we talked about, but one thing I think we should be able to agree on is healthcare: saving lives, curing diseases. I think the world should get together and just decide that this is a terrific application for machine learning, and establish rules under which every country agrees to contribute its health data to a global repository that AI researchers can openly use. This would be motivating to the world, and it's something that machine learning technology, I'm convinced, is very applicable toward. So we should form a global alliance and get the job done, save millions of lives. We can do this in a decade.

All of my colleagues on the panel, you've now heard them say that the practical applications are still a ways off, and that is absolutely true. AI may be here in some form, but this is a multi-decade journey. The challenge is that, whether you're a regulator, you're in government, you're a startup, you're an investor or you're a large company, we can't continue to put these conversations off, and you've heard a lively debate. So the best possible thing that you can do, while the World Economic Forum is sorting this out over the next year, is
to take what you've heard back to your respective organizations, and to your spouses and your children and your friends, and try to advance your own thinking on this topic. Because 2019 is the 30th anniversary of the kernel of the idea of the World Wide Web, and a lot of those people are now looking back 30 years later and saying, maybe, as we were developing this, we should have thought through the implications. So the best possible thing everybody could do is to think out the longer-term risk and opportunity scenarios in a concrete way.

That is a marvelous note to end on. This has been a lively, informative, feisty panel. If anybody sees Chancellor Merkel or will.i.am, tell them they missed something special. Thank you everybody who came, and thank you to all these amazing panelists.

[Applause]
Info
Channel: World Economic Forum
Views: 5,602
Keywords: World Economic Forum, Davos, WEF2019, Davos 2019, politics, finance, economy, news, leadership, democracy, education, 4IR, technology, tech, AI, automation, work, future
Id: Lzqw5c0Myqw
Length: 60min 18sec (3618 seconds)
Published: Sat Feb 09 2019