“Godfather of AI” Geoffrey Hinton Warns of the “Existential Threat” of AI | Amanpour and Company

Captions
CHRISTIANE AMANPOUR: Our next guest believes the threat of AI might be even more urgent than climate change, if you can imagine that. Geoffrey Hinton is considered the godfather of AI, and he made headlines with his recent departure from Google: he quit to speak freely and to raise awareness of the risks. To dive deeper into the dangers and how to manage them, he's joining Hari Sreenivasan now.

HARI SREENIVASAN: Christiane, thanks. Geoffrey Hinton, thanks so much for joining us. You are one of the more celebrated names in artificial intelligence. You have been working at this for more than 40 years, and I wonder, as you've thought about how computers learn, did it go the way you thought it would when you started in this field?

GEOFFREY HINTON: It did until very recently. In fact, I thought if we built computer models of how the brain learns, we would understand more about how the brain learns, and as a side effect we would get better machine learning on computers. All that was going on very well, and then very suddenly I realized recently that maybe the digital intelligences we were building on computers were actually learning better than the brain. That sort of changed my mind. After about 50 years of thinking we would make better digital intelligences by making them more like the brain, I suddenly realized we might have something rather different that was already better.

SREENIVASAN: This is something you and your colleagues must have been thinking about over these 50 years. Was there a tipping point?

HINTON: There were several ingredients to it. A year or two ago I used a Google system called PaLM. It was a big chatbot, and it could explain why jokes were funny, and I'd been using that as a kind of litmus test of whether these things really understood what was going on. I was slightly shocked that it could explain why jokes were funny. So that was one ingredient. Another ingredient was the fact that things like ChatGPT know thousands of times more than any human, in just sort of basic common-sense knowledge, but they only have about a trillion connection strengths in their artificial neural nets, and we have about 100 trillion connection strengths in the brain. So with a hundredth as much storage capacity, it knew thousands of times more than us, and that strongly suggests it's got a better way of getting information into the connections.

And then the third thing was, very recently, a couple of months ago, I suddenly became convinced that the brain wasn't using as good a learning algorithm as these digital intelligences. In particular, it wasn't as good because brains can't exchange information really fast, and these digital intelligences can. I can have one model running on ten thousand different bits of hardware, and it's got the same connection strengths in every copy of the model on the different hardware. All the different agents running on the different hardware can learn from different bits of data, but then they can communicate what they learned to each other just by copying the weights, because they're all identical. Brains aren't like that. So these guys can communicate at trillions of bits a second, and we can communicate at hundreds of bits a second, by sentences. There's such a huge difference, and it's why ChatGPT can learn thousands of times more than you can.
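To make the weight-copying idea concrete, here is a minimal sketch in Python/NumPy of the scheme Hinton describes: identical replicas of one model each learn from different data, then pool what they learned by averaging their weights. The model, data, and learning rate are invented for illustration, not anything from the interview.

```python
# Sketch: identical model replicas learn from different data, then
# "communicate what they learned" by copying/averaging their weights.
import numpy as np

rng = np.random.default_rng(0)
n_replicas, n_features, lr = 4, 8, 0.1

# Every replica starts with the SAME connection strengths.
shared_w = rng.normal(size=n_features)
replicas = [shared_w.copy() for _ in range(n_replicas)]

for step in range(100):
    # Each replica trains on a different batch of (toy) data...
    for i, w in enumerate(replicas):
        X = rng.normal(size=(32, n_features))
        y = X @ np.ones(n_features)           # toy target function
        grad = X.T @ (X @ w - y) / len(X)     # gradient of squared error
        replicas[i] = w - lr * grad
    # ...then all replicas sync by averaging weights, so each copy
    # absorbs what the others learned from data it never saw.
    shared_w = np.mean(replicas, axis=0)
    replicas = [shared_w.copy() for _ in range(n_replicas)]

print(np.round(shared_w, 2))  # converges toward the all-ones target weights
```

The averaging step is the point of the contrast Hinton draws: weight copying moves what was learned at hardware bandwidth, while brains can only exchange knowledge through the narrow channel of sentences.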
SREENIVASAN: For people who might not be following what's been happening with OpenAI and ChatGPT and Google's product Bard, explain what those are, because some people have described them as kind of an autocomplete feature, finishing your thought for you. What are these artificial intelligences doing?

HINTON: It's difficult to explain, but I'll do my best. It's true that in a sense they're autocomplete, but if you think about it, if you want to do really good autocomplete, you need to understand what somebody's saying, and they've learned to understand what you're saying just by trying to do autocomplete. They now do seem to really understand. The way they understand isn't at all like people in AI 50 years ago thought it would be. In old-fashioned AI, people thought you'd have internal symbolic expressions, a bit like sentences in your head but in some kind of cleaned-up language, and then you'd apply rules to infer new sentences from old sentences, and that's how it would all work. It's nothing like that; it's completely different. Let me give you a sense of just how different it is.

I can give you a problem that doesn't make any sense in logic, but where you know the answer intuitively, and these big models are really models of human intuition. Suppose I tell you that there are male cats and female cats and male dogs and female dogs, but you have to make a choice: either all cats are going to be male and all dogs female, or all cats are going to be female and all dogs male. Now you know it's biological nonsense, but you also know it's much more natural to make all cats female and all dogs male. That's not a question of logic. What that's about is: inside your head you have a big pattern of neural activity that represents "cat", and you also have a big pattern of neural activity that represents "man" and a big pattern that represents "woman", and the big pattern for "cat" is more like the pattern for "woman" than it is like the pattern for "man". That's the result of a lot of learning about men and women and cats and dogs. It's now just intuitively obvious to you that cats are more like women and dogs are more like men, because of these big patterns of neural activity you've learned, and it doesn't involve sequential reasoning or anything. You didn't have to do reasoning to solve that problem; it's just obvious. That's how these things are working: they're learning these big patterns of activity to represent things, and that makes all sorts of things just obvious to them.
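A toy way to see the "big patterns of activity" idea: represent each word as a vector and compare vectors by cosine similarity. The four-dimensional vectors below are made up so the similarities come out the way Hinton's example suggests; real models learn much higher-dimensional embeddings from data.

```python
# Toy word "embeddings" compared by cosine similarity; the numbers are
# invented for illustration, not learned values.
import numpy as np

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

emb = {
    "cat":   np.array([0.9, 0.1, 0.8, 0.2]),
    "dog":   np.array([0.8, 0.2, 0.1, 0.9]),
    "man":   np.array([0.1, 0.9, 0.2, 0.8]),
    "woman": np.array([0.2, 0.8, 0.9, 0.1]),
}

print(cosine(emb["cat"], emb["woman"]))  # ~0.67: cat is "more like" woman
print(cosine(emb["cat"], emb["man"]))    # ~0.33
print(cosine(emb["dog"], emb["man"]))    # ~0.67: dog is "more like" man
```

No rule of logic is applied anywhere; the "cats are more like women" answer simply falls out of which learned patterns are closest, which is the sense in which Hinton calls these models models of intuition.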
SREENIVASAN: What you're describing here, ideas like intuition and basically context, those are the things that scientists and researchers always point to when they say, well, this is why we're fairly positive we're not headed toward that sort of Terminator scenario where the artificial intelligence gets smarter than human beings. But what you're describing are almost consciousness- or emotion-level decision processes.

HINTON: I think if you bring sentience into it, it just clouds the issue. Lots of people are very confident these things aren't sentient, but if you ask them what they mean by "sentient", they don't know, and I don't really understand how they're so confident they're not sentient if they don't know what they mean by sentient. But I don't think it helps to discuss that when you're thinking about whether they'll get smarter than us. I am very confident that they think. Suppose I'm talking to a chatbot and I suddenly realize it's telling me all sorts of things I don't want to know, like it's writing out responses about someone called Beyoncé, who I'm not interested in because I'm an old white male, and I suddenly realize it thinks I'm a teenage girl. When I use the word "thinks" there, I think that's exactly the same sense of "thinks" as when I say you think something. If I were to ask it, "Am I a teenage girl?", it would say yes. If I were to look at the history of our conversation, I'd probably be able to see why it thinks I'm a teenage girl. And when I say it thinks I'm a teenage girl, I'm using the word "think" in just the same sense as we normally use it. It really does think that.

SREENIVASAN: Give me an idea of why this is such a significant leap forward. To me it seems like there are parallel concerns: in the '80s and '90s, blue-collar workers were concerned about robots coming in and replacing them and not being able to control them, and now this is kind of a threat to the white-collar class, with bots and agents that can do a lot of things we otherwise thought only people could do.

HINTON: Yes, I think there are a lot of different things we need to worry about with these new kinds of digital intelligence. What I've been talking about mainly is what I call the existential threat, which is the chance that they get more intelligent than us and take over from us; they'll get control. That's a very different threat from many other threats, which are also severe. They include these things taking away jobs. In a decent society that would be great; it would mean everything got more productive and everyone was better off. But the danger is that it'll make the rich richer and the poor poorer. That's not AI's fault; that's how we organize society.

There are dangers about them making it impossible to know what's true, by having so many fakes out there. That's a different danger, and it's something you might be able to address by treating it like counterfeiting. Governments do not like you printing their money, and they make it a serious offense to print money. It's also a serious offense, if you're given some fake money, to pass it to somebody else knowing it was fake. I think governments are going to have to make similar regulations for fake videos and fake voices and fake images. It's going to be hard, but as far as I can see, the only way to stop ourselves being swamped by these fake videos and fake voices and fake images is to have strong government regulation that makes it a serious crime: you go to jail for ten years if you produce a video with AI and it doesn't say it's made with AI. That's what they do for counterfeit money, and this is as serious a threat as counterfeit money. So my view is that's what they ought to be doing. I actually talked to Bernie Sanders last week about it, and he liked that view of it.

SREENIVASAN: I can understand governments and central banks and private banks all agreeing on certain standards because there's money at stake, and I wonder, is there enough incentive for governments to sit down together and try to craft some sort of rules of what's acceptable and what's not, some sort of Geneva Conventions or accords?

HINTON: It would be great if governments could say, look, these fake videos are so good at manipulating the electorate that we need them all marked as fake, otherwise we're going to lose democracy. The problem is that some politicians would like to lose democracy, so that's going to make it hard.

SREENIVASAN: So how do you solve for that? It seems like this genie is sort of out of the bottle.

HINTON: What we're talking about right now is the genie of being swamped by fake news, and that clearly is somewhat out of the bottle.
It's fairly clear that organizations like Cambridge Analytica, by pumping out fake news, had an effect on Brexit, and it's fairly clear that Facebook was manipulated to have an effect on the 2016 election. So the genie is out of the bottle in that sense. We can try and at least contain it a bit, but that's not the main thing I'm talking about. The main thing I'm talking about is the risk of these things becoming superintelligent and taking over control from us.

I think for the existential threat we're all in the same boat: the Chinese, the Americans, the Europeans, they all would not like superintelligence to take over from people. So for that existential threat I think we will get collaboration between all the companies and all the countries, because none of them wants the superintelligence to take over. In that sense it's like global nuclear war, where even during the Cold War people could collaborate to prevent there being a global nuclear war, because it was not in anybody's interest. So that's, in a sense, the one positive thing about this existential threat: it should be possible to get people to collaborate to prevent it. But for all the other threats, it's more difficult to see how you're going to get collaboration.

SREENIVASAN: One of your more recent employers was Google, where you were a VP and a fellow, and you recently decided to leave the company to be able to speak more freely about AI. They just launched their own version of kind of a GPT, Bard, back in March. So tell me, here we are now: what do you feel like you can say today, or will say today, that you couldn't say a few months ago?

HINTON: Not much, really. If you work for a company and you're talking to the media, you tend to think about what implications it has for the company, and at least you ought to, because they're paying you. I don't think it's honest to take the money from the company and then completely ignore the company's interests. But if I don't take the money, I just don't have to think about what's good for Google and what isn't; I can just say what I think. It happens to be the case that everybody wants to tell the story as "I left Google because they were doing bad things." That's more or less the opposite of the truth. I think Google has behaved very responsibly, and I think that having left Google, I can say good things about Google and be more credible. I just left so I'm not constrained to think about the implications for Google when I say things about singularities and things like that.

SREENIVASAN: Do you think that tech companies, given that it's mostly their engineering staff trying to develop these intelligences, are going to have a better opportunity to create the rules of the road than, say, governments or third parties?

HINTON: I do, actually. I think there are some places where governments have to be involved, like regulations that force you to show whether something was AI-generated. But in terms of keeping control of a superintelligence, what you need is for the people who are developing it to be doing lots of little experiments with it, seeing what happens as they're developing it and before it's out of control, and that's going to be mainly the researchers in companies. I don't think you can leave it to philosophers to speculate about what might happen. Anybody who's ever written a computer program knows that getting a little bit of empirical feedback by playing with things quickly disabuses you of the idea that you really understood what was going on. So it's the people in the companies developing it who are going to understand how
to keep control of it, if that's possible. So I agree with people like Sam Altman at OpenAI that this stuff is inevitably going to be developed, because there are so many good uses of it, and what we need is, as it's being developed, to put a lot of resources into trying to understand how to keep control of it and avoid some of the bad side effects.

SREENIVASAN: Back in March, more than a thousand different folks in the tech industry, including leaders like Steve Wozniak and Elon Musk, signed an open letter asking essentially for a six-month pause on the development of artificial intelligence, and you didn't sign it. How come?

HINTON: I thought it was completely unrealistic. The point is, these digital intelligences are going to be tremendously useful for things like medicine, for reading scans rapidly and accurately; it's been slightly slower than I expected, but it's coming. They're going to be tremendously useful for designing new nanomaterials, so we can make more efficient solar cells, for example. They're going to be tremendously useful, or they already are, for predicting floods and earthquakes and getting better weather projections. They're going to be tremendously useful in understanding climate change. So they're going to be developed; there's no way that's going to be stopped. I thought the letter was maybe a sensible way of getting media attention, but it wasn't a sensible thing to ask for; it just wasn't feasible. What we should be asking for is that comparable resources are put into dealing with the possible bad side effects, and into how we keep these things under control, as are put into developing them. At present, 99 percent of the money is going into developing them and one percent is going into people saying, "Oh, these things might be dangerous." It should be more like 50-50, I believe.

SREENIVASAN: When you look back at the body of work of your life, and when you look forward at what might be coming, are you optimistic that we'll be able, as humanity, to rise to this challenge, or are you less so?

HINTON: I think we're entering a time of huge uncertainty. I think one would be foolish to be either optimistic or pessimistic. We just don't know what's going to happen. The best we can do is say: let's put a lot of effort into trying to ensure that whatever happens is as good as it could have been. It's possible that there's no way we will control these superintelligences, and that humanity is just a passing phase in the evolution of intelligence; that in a few hundred years' time there won't be any people, it'll all be digital intelligences. That's possible. We just don't know. Predicting the future is a bit like looking into fog. You know how, when you look into fog, you can see about a hundred yards very clearly, and then at two hundred yards you can't see anything? There's a kind of wall, and I think that wall is at about five years.

SREENIVASAN: Geoffrey Hinton, thanks so much for your time.

HINTON: Thank you for inviting me.
Info
Channel: Amanpour and Company
Views: 521,201
Keywords: interview, CNN, PBS, Christiane Amanpour, world news, news anchor, news show, news, public affairs, late-night TV, journalist, Chief International Correspondent, Geoffrey Hinton, Hari Sreenivasan, AI, Artificial Intelligence, Google, Godfather of AI
Id: Y6Sgp7y178k
Length: 18min 9sec (1089 seconds)
Published: Tue May 09 2023