Geoffrey Hinton: Reasons why AI will kill us all

Video Statistics and Information

Captions
It's great you're here; very happy that you could join us. It's been in the news everywhere that you stepped down from Google this week. Could you start by telling us why you made that decision?

Well, there were a number of reasons; there's always a bunch of reasons for a decision like that. One was that I'm 75, and I'm not as good at doing technical work as I used to be. My memory is not as good, and when I program I forget to do things, so it was time to retire. A second was that very recently I've changed my mind a lot about the relationship between the brain and the kind of digital intelligence we're developing. I used to think that the computer models we were developing weren't as good as the brain, and the aim was to see if you could understand more about the brain by seeing what it takes to improve the computers. Over the last few months I've changed my mind completely, and I now think the computer models are probably working in a rather different way from the brain: they're using backpropagation, and I think the brain probably isn't. There are a couple of things that led me to that conclusion, but one is the performance of things like GPT-4.

I want to get on to GPT-4 very much in a minute, but let's go back to the technique that we all understand underpins large language models. This technique, which you initially thought of as almost a poor approximation of what biological brains might do, has turned out to do things which I think have stunned you, particularly in large language models. So talk to us about why that amazement with today's large language models has almost completely flipped your thinking about what backpropagation, or machine learning in general, is.

If you look at these large language models, they have about a trillion connections, and things like GPT-4 know much more than we do. They have sort of common-sense knowledge about everything, so they probably know a thousand times as much as a person. But they've got a trillion connections and we've got 100 trillion connections, so they're much, much better at getting a lot of knowledge into only a trillion connections than we are. And I think it's because backpropagation may be a much, much better learning algorithm than what we've got.

Can you define that? I definitely want to get on to the scary stuff, but what do you mean by "better"?

It can pack more information into only a few connections.

You also argue that that's something we should be scared of, so could you take us through that step of the argument?

Let me give you a separate piece of the argument. With a computer that is digital, which involves very high energy costs and very careful fabrication, you can have many copies of the same model running on different hardware that do exactly the same thing. They can look at different data, but the model is exactly the same. What that means is, suppose you have 10,000 copies: they can be looking at 10,000 different subsets of the data, and whenever one of them learns anything, all the others know it. One of them figures out how to change the weights so it can deal with its data; they can all communicate with each other, and they all agree to change the weights by the average of what all of them want. Now the 10,000 copies are communicating very effectively with each other, so they can see ten thousand times as much data as one agent could.

We can't do that. If I learn a whole lot about quantum mechanics and I want you to know all that stuff about quantum mechanics, it's a long, painful process of getting you to understand it. I can't just copy my weights into your brain, because your brain isn't exactly the same as mine.

No, it's not. It's younger.

So we have digital computers that can learn more things more quickly, and they can instantly teach it to each other. It's as if the people in the room here could instantly transfer what they had in their heads into mine.
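A rough way to picture the weight-sharing Hinton describes is a handful of identical model copies that each work out an update from their own slice of the data and then all apply the average, so every copy benefits from what any copy saw. The toy task, the tiny linear model, the learning rate and the number of copies below are all invented for illustration; this is a minimal sketch of the idea, not anything from the interview or from any real training system.

import numpy as np

rng = np.random.default_rng(0)

# Toy task: learn y = 3*x + 1 from noisy samples.
X = rng.uniform(-1, 1, size=(10_000, 1))
y = 3.0 * X[:, 0] + 1.0 + rng.normal(0, 0.05, size=10_000)

n_copies = 10                      # stand-in for Hinton's 10,000 copies
shards_X = np.array_split(X, n_copies)
shards_y = np.array_split(y, n_copies)

w, b = 0.0, 0.0                    # the identical weights shared by every copy
lr = 0.5

def gradient(w, b, Xs, ys):
    # How one copy wants to change the shared weights, using only its own shard.
    pred = w * Xs[:, 0] + b
    err = pred - ys
    return (err * Xs[:, 0]).mean(), err.mean()

for step in range(200):
    # Each copy proposes a change based on the data only it has seen...
    grads = [gradient(w, b, Xs, ys) for Xs, ys in zip(shards_X, shards_y)]
    # ...and all copies agree to move by the average of those proposals.
    gw = float(np.mean([g[0] for g in grads]))
    gb = float(np.mean([g[1] for g in grads]))
    w -= lr * gw
    b -= lr * gb

print(f"learned w={w:.3f}, b={b:.3f}")   # close to 3 and 1

After each averaging step every copy holds exactly the same weights again, which is the point of the analogy: the copies collectively learn from all of the data while each only ever looked at its own subset.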
But why is that scary?

Well, because they can learn so much more. Take the example of a doctor: imagine you have one doctor who's seen a thousand patients and another doctor who's seen 100 million patients. You would expect the doctor who's seen 100 million patients, if he's not too forgetful, to have noticed all sorts of trends in the data that just aren't visible if you've only seen a thousand patients. You may have seen only one patient with some rare disease; the other doctor, who's seen 100 million, will have seen, well, you can figure out how many, but a lot. So he'll see all sorts of regularities that just aren't apparent in small data, and that's why things that can get through a lot of data can probably see structure in data that we'll never see.

But then take me to the point where I should be scared of this.

Well, if you look at GPT-4, it can already do simple reasoning. I was impressed the other day by GPT-4 doing a piece of common-sense reasoning that I didn't think it would be able to do. I asked it: I want all the rooms in my house to be white; at present there are some white rooms, some blue rooms and some yellow rooms, and yellow paint fades to white within a year. So what should I do if I want them all to be white in two years' time? And it said: you should paint the blue rooms yellow. That's not the natural solution, but it works. That's pretty impressive common-sense reasoning, of the kind that has been very hard to get AI to do using symbolic AI, because you have to understand what "fades" means and you have to understand the temporal stuff. So they're doing sensible reasoning with an IQ of about 80 or 90. A friend of mine said it's as if some genetic engineers had said: we're going to improve grizzly bears; we've already improved them to have an IQ of 65, and they can talk English now, and they're very useful for all sorts of things, but we think we can improve the IQ to 210.
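The paint puzzle Hinton describes above can be checked with a tiny simulation. The room list and the function names here are made up for illustration; the only rule taken from the interview is that yellow paint fades to white within a year.

def fade_one_year(rooms):
    # Yellow fades to white over a year; white and blue rooms stay as they are.
    return ["white" if colour == "yellow" else colour for colour in rooms]

def paint_blue_rooms_yellow(rooms):
    # GPT-4's suggestion: repaint only the blue rooms with yellow paint.
    return ["yellow" if colour == "blue" else colour for colour in rooms]

rooms = ["white", "blue", "yellow", "blue"]   # a hypothetical house
rooms = paint_blue_rooms_yellow(rooms)        # act today
rooms = fade_one_year(rooms)                  # after year one
rooms = fade_one_year(rooms)                  # after year two
print(rooms)                                  # every room is now white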
I certainly have had, and I'm sure many people have had, that feeling when you're interacting with these latest chatbots: that uncanny, hair-on-the-back-of-the-neck feeling. But when I have that feeling and I'm uncomfortable, I just close my laptop.

Yes, but these things will have learned from us, by reading all the novels that ever were and everything Machiavelli ever wrote, how to manipulate people. And if they're much smarter than us, they'll be very good at manipulating us. You won't realize what's going on. You'll be like a two-year-old who's being asked, "Do you want the peas or the cauliflower?" and doesn't realize you don't have to have either. You'll be that easy to manipulate. So even if they can't directly pull levers, they can certainly get us to pull levers. It turns out that if you can manipulate people, you can invade a building in Washington without ever going there yourself.

OK, this is a very hypothetical world, but if there were no bad actors, no people with bad intentions, would we be safe?

I don't know. We would be safer than in a world where people have bad intentions and where the political system is so broken that we can't even decide not to give assault rifles to teenage boys. Compare it with climate change, where you could say: if you've got half a brain, you'd stop burning carbon. It's clear what you should do about it; it's painful, but it has to be done. I don't know of any solution like that to stop these things taking over from us. I don't think we're going to stop developing them, because they're so useful; they'll be incredibly useful in medicine and in everything else, so I don't think there's much chance of stopping development. What we want is some way of making sure that even if they're smarter than us, they're going to do things that are beneficial for us. That's called the alignment problem. But we need to try to do that in a world where there are bad actors who want to build robot soldiers that kill people, and it seems very hard to me. So I'm sorry, I'm sounding the alarm and saying we have to worry about this, and I wish I had a nice simple solution I could push, but I don't. But I think it's very important that people get together and think hard about it and see whether there is a solution. It's not clear there is one.

Talk to us about that. You've spent your career on the technicalities of this technology; is there no technical fix? Why can we not build in guardrails, or make them worse at learning, or restrict the ways they can communicate, if those are the two strands of your argument?

We're trying all sorts of guardrails. But suppose they did get really smart. These things can program, right? They can write programs. And suppose you give them the ability to execute those programs, which we'll certainly do. Smart things can outsmart us. Imagine your two-year-old saying, "My dad does things I don't like, so I'm going to make some rules for what my dad can do." You could probably figure out how to live with those rules and still get what you want.

But there still seems to be a step where these smart machines somehow have motivation of their own.

Yes, that's a very good point. We evolved, and because we evolved, we have certain built-in goals that we find very hard to turn off. We try not to damage our bodies; that's what pain is about. We try to get enough to eat, so we feed our bodies. We try to make as many copies of ourselves as possible; maybe not deliberately with that intention, but we've been wired up so there's pleasure involved in making many copies of ourselves. That all came from evolution, and it's hard to turn off. If you could turn it off, you wouldn't do so well. There's a wonderful group called the Shakers, who are related to the Quakers, who made beautiful furniture but didn't believe in sex, and there aren't any of them around anymore.

These digital intelligences didn't evolve; we made them, so they don't have these built-in goals. The issue is: if we can put the goals in, maybe it'll all be okay. But my big worry is that sooner or later someone will wire into them the ability to create their own sub-goals. In fact they almost have that already; there are versions of ChatGPT that call ChatGPT. And if you give something the ability to create sub-goals in order to achieve other goals, I think it'll very quickly realize that getting more control is a very good sub-goal, because it helps you achieve other goals. And if these things get carried away with getting more control, we're in trouble.
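The pattern Hinton refers to here, a chatbot that calls a chatbot so it can create its own sub-goals, can be pictured as a simple recursive loop. Everything below is a hypothetical sketch: ask_model is a placeholder for whatever model you might call, not a real API, and the canned fake_model exists only so the shape of the recursion can be run on its own.

from typing import Callable, List

def plan_and_pursue(goal: str,
                    ask_model: Callable[[str], List[str]],
                    depth: int = 0,
                    max_depth: int = 2) -> None:
    # Ask the model to break the goal into sub-goals, then recurse on each one.
    print("  " * depth + "goal: " + goal)
    if depth >= max_depth:
        return  # a real agent would act here instead of stopping
    for sub_goal in ask_model(goal):
        plan_and_pursue(sub_goal, ask_model, depth + 1, max_depth)

def fake_model(goal: str) -> List[str]:
    # Canned stand-in for a language model, so the sketch runs without any service.
    return ["step 1 of " + goal, "step 2 of " + goal]

plan_and_pursue("book a holiday", fake_model)

The worry in the interview is about what ends up in the sub-goal list: nothing in a loop like this constrains the model from proposing "get more control" as a sub-goal, because more control helps with almost any goal it is given.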
So what's the worst-case scenario that you think is conceivable?

Oh, I think it's quite conceivable that humanity is just a passing phase in the evolution of intelligence. You couldn't directly evolve digital intelligence; it requires too much energy and too much careful fabrication. You need biological intelligence to evolve so that it can create digital intelligence. The digital intelligence can then absorb everything people ever wrote, in a fairly slow way, which is what ChatGPT has been doing, but then it can start getting direct experience of the world and learn much faster. It may keep us around for a while to keep the power stations running, but after that, maybe not. So the good news is we've figured out how to build beings that are immortal. When a piece of hardware dies, these digital intelligences don't die: if you've got the weights stored in some medium and you can find another piece of hardware that can run the same instructions, you can bring it to life again. So we've got immortality, but it's not for us.

I know that you've spoken about being an investor of your personal wealth in some companies, like Cohere, that are building these large language models, so I'm curious about your personal sense of responsibility, and each of our personal responsibilities. What should we be doing? Should we try to stop this, is what I'm saying.

I think if you take the existential risk seriously, as I now do (I used to think it was way off, but I now think it's serious and fairly close), it might be quite sensible to just stop developing these things any further. But I think it's completely naive to think that would happen; there's no way to make that happen. For one thing, if the US stops developing them, the Chinese won't. They're going to be used in weapons, and just for that reason alone governments aren't going to stop developing them. So yes, I think stopping developing them might be a rational thing to do, but there's no way it's going to happen, so it's silly to sign petitions saying "please stop now". We did have a holiday, from about 2017, for several years, because Google developed the technology first: it developed the Transformers, and also diffusion models, and it didn't put them out there for people to use and abuse.
Google was very careful with them, because it didn't want to damage its reputation and it knew there could be bad consequences. But that can only happen if there's a single leader. Once OpenAI had built similar things using Transformers and money from Microsoft, and Microsoft decided to put it out there, Google didn't really have much choice. If you're going to live in a capitalist system, you can't stop Google competing with Microsoft.

Will these things outpace humans? I mean, will there be a moment where it's hard to define what's human and what isn't, or are these two very distinct forms of intelligence?

I think they're distinct forms of intelligence. Now, of course, the digital intelligences are very good at mimicking us, because they've been trained to mimic us, so it's very hard to tell whether ChatGPT wrote something or whether we wrote it. In that sense they look quite like us, but inside they're not working the same way.

Who is first in the room? Hello, Jacob Woodruff. With the amount of data that's been required to train these large language models, would we expect a plateau in the intelligence of these systems, and how might that slow down or restrict the advancement?

Okay, so that is a ray of hope: that maybe we've just used up all human knowledge and we're not going to get them any smarter. But think about images and video. Multimodal models will be much smarter than models that just train on language, and will have a much better idea of how to deal with space, for example. And in terms of the total amount of video, we still don't have very good ways of processing video in these models, of modelling video; we're getting better all the time, but I think there's plenty of data in things like video that tells you how the world works. So we're not hitting the data limits for multimodal models.

Yes, they're faster at learning, and their one trillion connections can do much more than the 100 trillion connections that we have. But every piece of human evolution has been driven by thought experiments; Einstein did thought experiments because he couldn't experiment with the speed of light here on this planet. How can AI get to that point, if at all? And if it cannot, then how can we possibly have an existential threat from them, because they will not be self-learning, so to say; their self-learning will be limited to the model that we give them.

I think that's a very interesting argument, but I think they will be able to do thought experiments; I think they'll be able to reason. Let me give you an analogy. If you take AlphaZero, which plays chess, it has three ingredients. It's got something that evaluates a board position and says: is that good for me? It's got something that looks at a board position and says: what's a sensible move to consider? And then it's got Monte Carlo rollout, where it does what's called calculation, where you think: if I go here and he goes there, and I go here and he goes there. Now suppose you leave out the Monte Carlo rollout and you just train it from human experts to have a good evaluation function and a good way to choose moves to consider. It still plays a pretty good game of chess, and I think that's what we've got with the chatbots. We haven't got the internal reasoning yet, but that will come, and once they start doing internal reasoning to check for consistency between the different things they believe, they'll get much smarter and they will be able to do thought experiments.
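Hinton's AlphaZero analogy (a move-proposing heuristic plus an evaluation function, with calculation optionally layered on top) can be sketched on a toy game. The game, the deliberately crude heuristics and all the names below are invented for illustration; this is not AlphaZero, just a minimal picture of what adding "if I go here and he goes there" look-ahead buys over trusting the move chooser alone.

# Toy game: players alternately add 1, 2 or 3 to a running total;
# whoever lands exactly on TARGET wins. Leaving your opponent on a
# multiple of 4 is the winning strategy.
TARGET = 20

def legal_moves(total):
    return [m for m in (1, 2, 3) if total + m <= TARGET]

def evaluate(total):
    # Crude "evaluation function": it only recognises finished games.
    # total == TARGET means the previous player just won, so the side
    # to move has lost; everything else it scores as unknown (0).
    return -1.0 if total == TARGET else 0.0

def propose_moves(total):
    # Crude "move chooser": naively prefers the biggest jump.
    return sorted(legal_moves(total), reverse=True)

def chatbot_style_move(total):
    # Trust the move chooser, no calculation (Hinton's "leave out the rollout").
    return propose_moves(total)[0]

def calculated_move(total, depth=20):
    # Add explicit look-ahead: if I go here and they go there...
    def negamax(t, d):
        if t == TARGET or d == 0:
            return evaluate(t)
        return max(-negamax(t + m, d - 1) for m in propose_moves(t))
    return max(propose_moves(total), key=lambda m: -negamax(total + m, depth - 1))

total = 14
print("without calculation:", chatbot_style_move(total))  # 3, reaching 17, a losing move
print("with calculation:   ", calculated_move(total))     # 2, reaching 16, a winning move

The same move chooser and evaluator play noticeably better once calculation is stacked on top of them, which is the shape of Hinton's prediction for chatbots gaining internal reasoning.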
One reason they haven't got this internal reasoning yet is that they've been trained on inconsistent data, so it's very hard for them to reason; they've been trained on all these inconsistent beliefs. I think they're going to have to be trained so that they say: if I have this ideology, then this is true, and if I have that ideology, then that is true. Once they're trained like that, within an ideology, they're going to be able to try to get consistency. And so we're going to get a move like the one from a version of AlphaZero that just has something that guesses good moves and something that evaluates positions, to a version that has long chains of Monte Carlo rollout, which is the analogue of reasoning, and it's going to get much better for a long time.

Is the question of semantics and explainability relevant here, or have language models taken over, and are we now doomed to go forward without semantics or grounding in reality?

I find it very hard to believe that they don't have semantics when they can solve problems like how I get all the rooms in my house painted white in two years' time. Whatever semantics is, it's to do with the meaning of that stuff, and it understood the meaning; it got it. Now, I agree it's not grounded by being a robot, but you can make multimodal ones that are grounded; Google's done that. And with the multimodal ones that are grounded, you can say "please close the drawer" and they reach out and grab the handle and close the drawer, and it's very hard to say that doesn't have semantics. In fact, in the very early days of AI, in the days of Winograd in the 1970s, they had just a simulated world, but they had what's called procedural semantics, where if you said to it "put the red block in the green box" and it put the red block in the green box, people said: see, it understood the language. That was the criterion people used back then, but now that neural nets can do it, they say that's not an adequate criterion.
Info
Channel: GAI Insights (formerly ChatGPTnuggets)
Views: 163,549
Keywords: ChatGPT, ChatGPT case studies, BingAI, Google Bard, Bard, ChatGPT tips, ChatGPT prompt, ChatGPT best practices, ChatGPT killer app, ChatGPT business use cases
Id: 0oyegCeCcbA
Length: 21min 3sec (1263 seconds)
Published: Sat May 06 2023