Should You Be Afraid Of AI?: Yann LeCun And AI Experts On How We Should Control AI

Captions
Can we play video number one? "All open-source AI must be banned, because it threatens the profits of monopolistic tech companies." Number two: "Universities should stop doing AI research, because companies are much better at it." And number three: "To be honest, I don't care about all of this AI safety stuff. It doesn't matter. Let's focus on quantum computing instead." All right, let's give it up for our panel.

There are many current challenges with AI, of course, that we need to deal with. Deepfakes are very relevant this year because of elections and fraud and all sorts of other things. If you want to learn more about that, we have a deepfake demo where you can deepfake yourself back there and share your ideas for what to do about it. But that's not what we're going to talk about now, because we are going to look a little farther into the future. The ultimate goal of AI from the beginning has always been to really solve intelligence and figure out how to make machines that can do everything humans can do, ideally much better. That is both exciting and terrifying, and I have taken the prerogative as moderator of sorting the panelists from least worried to most worried. Oh wait, you switched; you should switch with Stuart, please switch places. We're not seating you by your deepfake opinions now but by the real ones.

The main goal we have here is not to have yet another debate about whether we should worry or not, but rather to brainstorm about solutions, in that spirit. I have the very radical and heretical belief that you all actually agree on a lot more than the average Twitter user probably realizes, and we're going to see if we can find some of those shared positions that we can all go out and act on. So I'm going to warm you up with some lightning-round questions where you can basically just answer yes or no.

First question: are you excited about improving AI so that it can be our tool and really complement and help humans, yes or no? Yes. Yes. Yes. Yes. Next question: do you believe that AI in a few years will be a lot better than it is now? Yes. Yes. Yes. Yes. All right, now let's make it a bit harder. If we define artificial general intelligence as AI that can do basically all cognitive tasks at human level or better, do you feel that we already have it now? No. Absolutely not. No. No. Okay, four nos. Do you think we're probably going to have it within the next thousand years? Maybe. Sure. Yes. Yes. Do you think we're probably going to have it within the next (did I say thousand?) 100 years? Maybe. Quite possibly. Very probably, barring nuclear catastrophe. Yes. All right. So if you were to put a number on how many years we'll need to wait until we have a 50% chance of getting it, what would you guesstimate? Not anytime soon. Decades. Decades, a lot less than I used to think. Okay, and you? 5.4. 5.4 years? Okay, a lot of precision there.

So I think you'll all agree we put them in the right order, and you can see that their level of alarm is correlated with how quickly they think we're going to have to deal with this. Clearly, if leading world experts think it might happen relatively soon, we have to take the possibility seriously. So the question is: how do we make sure this becomes the kind of tool AI that we can control, so that we get all the upside and not the downside?

One thing that has really struck me here in Davos is that the vast majority of what I hear people being excited about in AI (medical breakthroughs, eliminating poverty, helping with the climate, creating great new business opportunities) doesn't require AGI at all. So I'm actually quite curious whether there is a way we could all just agree to say: listen, do all the great AI stuff, but maybe don't build superintelligence until 2040 at the earliest, or something like that. Is that something you all feel you could live with, or do you feel there is great urgency to make something superintelligent as fast as possible? We'll go in this direction this time. What would you say? I can live with that. Can you say it again? I can live with that. You can live with that. What about you, Stuart? You can elaborate a bit more this time. I could live with that, but it's not actually relevant what I think. What's relevant is the economic forces driving it, and if AGI is worth, as I've estimated, 15 quadrillion dollars, it's kind of hard to tell people no, you can't go for that. Yann, what about you?

First of all, there is no such thing as AGI. We can talk about human-level AI, but human intelligence is very specialized, so we shouldn't be talking about AGI at all. We should be talking about the kinds of intelligence we can observe in humans and animals that current AI systems don't have. There's a lot that current AI systems don't have that your cat or your dog has, and cats and dogs don't have anything close to general intelligence. So the problem we have to solve is how to get machines to learn as efficiently as humans and animals. That is useful for a lot of applications, and it's the future, because we're going to have AI assistants that we talk to, that help us in our daily lives, and we need those systems to have human-level intelligence. That's why we need it, and we need to do it right.

Daniela? Well, I'm with Yann, but let me first say that I don't think it's feasible to say we're going to stop science from developing in one direction or another. Knowledge has to continue to be invented; we have to continue to push the boundaries, and this is one of the most exciting aspects of working in this field right now. We do want to improve our tools. We do want to develop better models that are closer to nature than the ones we have right now. We want to try to understand nature in as great detail as possible, and I believe the feasible way forward is to start with simpler organisms in nature and work our way up to more complex creatures like humans.

Stuart? I want to take issue with something. There is a difference between knowing and doing, and that's an important distinction, but I would say there actually are limits on what it is a good idea for the human race to know. Is it a good idea for everyone on Earth to know how to create, in their kitchen, an organism that will wipe out the human race? Daniela: no, of course not. Of course not, right? So we accept that there are limits to what it is a good idea for us to know. I think there are also limits on what it is a good idea for us to do. Should we build nuclear weapons large enough to ignite the entire atmosphere of the Earth? We can do that, but most people would say no, it's not a good idea to build such a weapon. So there are limits on what we should do with our knowledge. And then the third point: is it a good idea to build systems that are more powerful than human beings that we do not know how to control?

Well, Stuart, I have to respond to you. Every technology that has been invented has positives and negatives. We invent the knowledge, and then we find ways to ensure that the inventions are used for good, not for bad; there are mechanisms for doing that, and there are mechanisms the world is developing for AI. With respect to your point about machines that are more powerful than humans: we already have them. We already have robots that can move with greater precision than you can, robots that can lift more than you can, and machine learning that can process much more data than we can. So we already have machines that can do more than we can.

But those machines are clearly not more powerful than humans, in the same way that gorillas are not more powerful than humans even though they're much stronger than us, and horses are much stronger and faster than us, yet no one feels threatened by horses.

I think there is a big fallacy in all of this. First of all, we do not have a blueprint for a system that would have human-level intelligence. It does not exist; the research doesn't exist; the science needs to be done, and this is why it's going to take a long time. If we're speaking today about how to protect against intelligent systems taking over the world, or about their dangers, whatever they are, it's as if we were talking in 1925 about the dangers of crossing the Atlantic at near the speed of sound, when the turbojet had not yet been invented. We don't know how to make those systems safe because we have not yet invented them. Now, once we have a blueprint for a system that can be intelligent, we'll probably also have a blueprint for a system that can be controlled, because I don't believe we can build intelligent systems that don't have controlling mechanisms inside of them. It's true of us as humans: evolution built us with certain drives, and we can build machines with the same drives. So that's the first fallacy. The second fallacy is that an entity being intelligent does not mean it wants to dominate, or that it is necessarily dangerous. It can solve problems; you can set goals for it, and it will fulfill those goals. The idea that the system is somehow going to come up with its own goals and take over humanity is just preposterous. It's ridiculous.

What is concerning to me is that the danger from AI does not come from some bad property it has, some evilness that must be removed from the AI. It's because it's capable, because it's powerful: that is what makes it dangerous. What makes a technology useful is also what makes it dangerous. The reason nuclear reactors are useful is the same reason nuclear bombs are dangerous; it is the same property. As technology has progressed over the decades and centuries, we have gotten access to more and more powerful technologies: more energy, more control over our environment. What this means is that the best and the worst things that can happen, either on purpose or accidentally, grow in tandem with the technology we build. AI is a particularly powerful technology, but it is not the only one that could become so powerful that even a single accident is unacceptable. There are technologies that exist today or will exist at some point in the future (let's not argue about whether it's now, in 10 years, or in 20; my kids are going to be alive in 50 years, and I want them to live in a world where not a single accident can be the end). If you have a technology, whether it's AGI, future nuclear weapons, bioweapons, or something else, you can build weapons or systems so powerful that a single accident means game over, and our civilization, in how we currently develop technologies, is not set up to deal with technologies that don't give you retries. This is the problem. If we have retries, if we can try again and again, and we fail and some stuff blows up and maybe a couple of people die but it's fine, then I agree with Yann and Daniela: I think our scientists have got this. I think Yann's lab will solve this; I think these people will solve it. But if one accident is too much, I don't think they will.

To that point, and to the point that Stuart and Connor just made: you can imagine an infinite number of scenarios in which all of those things go bad. You can do this with any technology. You can do it with AI, obviously; sci-fi is full of it. You can do it with turbojets: turbojets can blow up. There are lots and lots of ways to build those systems that would be dangerous or wrong, that would kill people, and so on. But as long as there is at least one way to do it right, that's all we need. And there is technology that was developed in the past, at the prototype level, where it was then decided it should not be deployed because it would be too dangerous or uncontrollable. Nuclear-powered cars: people were talking about them in the 50s, there were prototypes, and they were never deployed. Nuclear-powered spaceships, same thing. So there are mechanisms in society to stop the deployment of a technology if it's really dangerous, and to not deploy it. And there are ways to make AI safe.

I actually do agree that it's important to understand the limitations of today's technology and to set out to develop solutions, and in some cases we can develop technological solutions. For instance, we've been talking about the bias problem in machine learning, and we actually have technological solutions for it. We're talking about size; we're talking about interpretability. The scientific community is working on addressing the challenges with today's solutions, and also on inventing new approaches to AI and machine learning that have other kinds of properties. In fact, at MIT a number of research groups are really aiming to push the boundaries and develop solutions that can be deployed on safety-critical systems and on edge devices. This is very important, and there is really excellent progress, so I am very bullish about using machine learning and AI in safety-critical applications. So I would say I agree with one thing Stuart said, but also with a lot of the observations Yann shared.

Several of you independently said that we need new architectures, new technical solutions. So, to wrap up, I would love it if some of you would share, very briefly, some thoughts on this: what kind of new architectures do we need that are more promising for making the kind of AI that complements us rather than replaces us? Do you want to go first?

Sure. I can't really give you a working example, because this is work in progress, but these are systems that are goal-driven: at inference time they have to fulfill a goal that we give them, but also satisfy a bunch of guardrails. So they plan their answer, as opposed to just producing it autoregressively, one word after the other, and they cannot be jailbroken unless you hack into them or something like that. This would be an architecture that I think would be considerably safer than the current types we are talking about, and those systems would be able to plan and reason, remember, and perhaps understand the physical world: all kinds of things that current LLMs cannot do. So future systems will not be on the blueprint that we currently have, and they will be controllable because they'll be objective-driven. [A rough code sketch of this inference-time planning idea follows the captions.]

Liquid networks, which are inspired by the brains of small creatures: they are provably causal, they are compact, they are interpretable and explainable, and they can be deployed on edge devices. Since we have these great properties, we also have control. I'm also excited about connecting some of the tools we're developing in machine learning with tools from control theory, for instance combining machine learning with tools like BarrierNet and control barrier functions to ensure that the output of a machine learning system is safe. [A minimal control-barrier-function sketch also follows the captions.]

The actual technology that I think is most important is social technology. It is very tempting for tech people, tech nerds like all of us here on this panel, to try to think of solutions that don't involve going through humans, but the truth is that the world is complicated, and this is both a political and a technological problem. If we ignore either the technical or the social side of this problem, we will fail, reliably. So it is extremely important to understand that techno-optimism is not a replacement for humanism.

Great. Let's thank our wonderful panel for provoking us. I hope you also take away from this that even though they don't agree on everything, they all agree that we want to make tools that we can control and that complement us, and that they're all very nerdy and have exciting technical ideas for doing this. Thank you all.
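As a rough, purely illustrative sketch of the objective-driven architecture described in the panel (planning a whole answer at inference time subject to guardrails, rather than emitting it one token at a time), here is a minimal Python example. Every callable in it (propose_candidates, objective_score, the guardrail checks) is a hypothetical placeholder, not the API of any real system.

```python
# Minimal sketch of inference-time, objective-driven generation with guardrails.
# All callables here are hypothetical placeholders, not any real model's API.
from typing import Callable, List, Optional

def plan_answer(
    goal: str,
    propose_candidates: Callable[[str], List[str]],  # proposes whole candidate answers
    objective_score: Callable[[str, str], float],    # how well a candidate fulfills the goal
    guardrails: List[Callable[[str], bool]],         # each returns True if the candidate is acceptable
) -> Optional[str]:
    """Search over complete candidate answers, discard any that violate a
    guardrail, and return the one that best satisfies the objective. Contrast
    with autoregressive decoding, which commits to one token at a time."""
    candidates = propose_candidates(goal)
    safe = [c for c in candidates if all(check(c) for check in guardrails)]
    if not safe:
        return None  # refuse rather than emit a guardrail-violating answer
    return max(safe, key=lambda c: objective_score(goal, c))

# Toy usage with stand-in functions:
if __name__ == "__main__":
    print(plan_answer(
        goal="summarize the panel",
        propose_candidates=lambda g: ["short summary", "long rambling summary"],
        objective_score=lambda g, c: -len(c),        # toy objective: prefer brevity
        guardrails=[lambda c: "rambling" not in c],  # toy guardrail
    ))  # -> short summary
```

The point of the sketch is the control flow: the guardrails act at inference time as hard constraints on the output, rather than as preferences baked into the generator.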
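The mention of control barrier functions can likewise be made concrete. Below is a minimal sketch of a CBF "safety filter" for a one-dimensional single integrator (x' = u): a learned policy proposes a control, and the filter minimally modifies it so the barrier condition h'(x, u) + alpha * h(x) >= 0 holds, keeping the state inside the safe set. The dynamics, policy, and constants are illustrative assumptions; this is not code from BarrierNet.

```python
# Minimal control-barrier-function (CBF) safety filter, illustrative only.
# Dynamics: x' = u. Safe set: x <= X_MAX, encoded by h(x) = X_MAX - x >= 0.
# Barrier condition: h'(x, u) + ALPHA*h(x) >= 0, i.e. -u + ALPHA*(X_MAX - x) >= 0,
# which bounds the control: u <= ALPHA * (X_MAX - x).

X_MAX = 1.0   # boundary of the safe set (assumed)
ALPHA = 2.0   # class-K gain: how fast we may approach the boundary (assumed)
DT = 0.01     # Euler integration step

def h(x: float) -> float:
    """Barrier function: nonnegative exactly on the safe set."""
    return X_MAX - x

def safety_filter(x: float, u_nominal: float) -> float:
    """Control closest to u_nominal that satisfies the CBF condition.
    For this scalar system the usual CBF quadratic program reduces to
    the closed form u = min(u_nominal, ALPHA * h(x))."""
    return min(u_nominal, ALPHA * h(x))

def simulate(x0: float, steps: int) -> float:
    x = x0
    for _ in range(steps):
        u_learned = 5.0                         # stand-in for a learned policy pushing at the boundary
        x += DT * safety_filter(x, u_learned)   # Euler step of x' = u
    return x

if __name__ == "__main__":
    x_final = simulate(x0=0.0, steps=2000)
    print(f"final state {x_final:.4f} stays within X_MAX = {X_MAX}")
    # CBFs guarantee forward invariance in continuous time; the small
    # step size preserves it in this discretization as well.
    assert x_final <= X_MAX
```

The design point is that the learned component stays free to optimize its task, while a small, analyzable layer enforces safety as a hard constraint on its output.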
Info
Channel: Forbes
Views: 36,279
Keywords: Forbes, Forbes Media, Forbes Magazine, Forbes Digital, Business, Finance, Entrepreneurship, Technology, Investing, Personal Finance, Davos, AI, AI limit, latest AI developments, afraid of AI, are robots taking over, robots and AI, is AI moving to fast
Id: Wb_kOiid-vc
Length: 18min 54sec (1134 seconds)
Published: Fri May 24 2024