He helped create AI. Now he’s worried it will destroy us.

Captions
Geoffrey Hinton, sometimes called the godfather of artificial intelligence, is trying to explain its risks to the world. "This is just something somebody sent me from Twitter." Which means explaining something very real that can sound entirely made up. "I'm lost. I don't know."

He's eager for good communication of the urgency — and look who might be offering it: Snoop Dogg. "It's blowing my mind, because I watched movies on this as a kid. I heard the dude, the old dude that created AI, saying this is not safe, because the AIs got their own minds, and they're gonna start doing their own thing. I'm like, is we in a movie right now or what, man? Do y'all know? That's what people are saying."

"He's like, are we in trouble? He gets it. He's smart."

"Are you offended he called you the old dude?"

"No, I call myself the old dude now."

Now imagine working your entire adult life to build a better future. "I think we're gonna see the learning methods we've got already have a dramatic effect on many industries and solve lots of problems" — a world where computers and machine learning make life better for humans. These chatbots answer complicated questions, draft emails and speeches. Then imagine realizing that the creation is nearing a point where it could do a type of harm that cannot be undone. That's the realization that hit Hinton when he was working for Google. Because of his job, he couldn't talk. So he quit his job, and now he talks. I caught up with him in London a few days ago.

"I think people are ultimately excited to hear from you, and a bit afraid. Should they be?"

"I think there are things to be worried about. There's all the normal things that everybody knows about, but there's another threat that's rather different from those, which is: if we produce things that are more intelligent than us, how do we know we can keep control?"

"And what tends to happen then?"

"Well, if we're talking about evolution, all these species are evolving, and what tends to happen is it doesn't go well for the less intelligent species. The other one kills it. Not necessarily — ants look after aphids because they produce honey. But ants are in charge."

"Ants are in charge."

"Yes."

Ants, in this analogy — in case that wasn't ominously clear enough — are not the humans.

"It made me realize that these digital intelligences have something we don't have that makes them much better. When one of them knows something, it can tell all the others. That's what we don't have with people. So imagine you had 10,000 people, and imagine if, when one person learns something, everybody knew it. You could learn a lot more stuff."

"Right."

"Right. And that's why things like ChatGPT know like 10,000 times as much as any one person. It's because when you train it, there's lots of different copies looking at different bits of the data and learning stuff, and they can all combine what they learn instantly, with a bandwidth of like trillions of bits."

"So can they think?"

"Yes. So imagine the following scenario: I'm talking to a chatbot, and we talk for a bit, and the answers it's giving me seem a bit strange to me, and I suddenly realize that it thinks I'm a teenage girl. And I say, 'What demographic do you think I am?' and it says it thinks I'm a teenage girl. So the question is: when I said 'I suddenly realized it thinks I'm a teenage girl,' was that a metaphorical use of the word 'think,' or was that just the same way as we use 'think'? And I strongly believe that use of the word 'think,' when I said 'it thinks I'm a teenage girl,' was exactly the same way of using 'think' as we do with people."

"And so that was enough to make you say, what — this has accelerated beyond my comfort level?"

"I suddenly realized maybe they already are better, and making them more like real neural nets isn't the point. They're already better than us. They have a better way of doing learning, and if we make them bigger, they'll get much smarter than us. They already know more than any one person."

"I understand that things could go awry, but I still think that people hear the notion of danger and they dismiss it as hyperbole."

"I thought it was hyperbole for a long time, because I thought these things were a long way off. I thought there would eventually be danger, but I thought focusing on it now was unnecessary, because it'll be 30 to 50 years before these things get more intelligent than us. But this combination — of realizing that they might have a much better way of learning than we have, because they can share knowledge instantly, and seeing things like ChatGPT, or PaLM at Google, that can explain why a joke is funny — made me realize these things are already pretty intelligent. And if they've got a better form of intelligence than ours, then it gets to be much more urgent."

Probably still hard to see the threat, right? Some changes are clear. As ChatGPT, for example, gets smarter, as AI gets more advanced — yes, some jobs will disappear, and some may shift. There can be pluses. For example, an AI doctor may have data from hundreds of millions of patients — far more knowledge than an actual human. But what if that machine, that AI doctor, stops recommending treatment for people with a low chance of recovering? That can happen with humans too, but as machines learn and supersede human learning, it is the unintended consequences that haunt.

"Can we give these machines a moral code, a code of ethics? You can't kill people, you can't hurt people."

"It would be nice if we could do that, but just remember that one of the main players in developing these machines is defence departments. And defence departments — I mean, Isaac Asimov said if you make a smart robot, the first rule should be: do not harm people. Well, I don't think that's going to be the first rule in a robot soldier produced by a defence department."

"Right. But is there not some language we can give them so that they can police themselves?"

"How does it work out when humans police themselves?"

"Not well."

Where's your mind going in this conversation? Is it going to that terrible place of past creations that threatened humanity — the nuclear bomb, for example? It's not a bad example, because it so terrified that fear motivated a type of global togetherness: treaties that have kept the threat at bay until now. The hope, says Hinton, is that — where we say China, Russia, we can't stand each other, all these countries, they're angry —

"But we have a common concern."

"Exactly. For the superintelligence taking over — not for all the other things, but for that — we're all in the same boat. It's like a global nuclear war: we all lose. And so that's the situation in which warring tribes cooperate. An external enemy that's bigger than them will force them to cooperate, because they get the same payoff as each other. And so this threat is like that."

"Do you think China understands it?"

"Yes."

"What makes you think that?"

"There's researchers in China who are talking about this."

"Do the Americans understand it?"

"They're beginning to, I think, yes. Senior political leaders in the States are paying attention now, and they're getting very interested. So it's not just things like fakes and job losses, which are the sort of immediate concerns; they're also becoming interested in this existential threat of how do we stop these things taking over."

The White House is indeed talking about a moral obligation for tech companies to consider the risks of AI, not just the benefits. Where the planet agrees on so little, just maybe it can agree on this.
Info
Channel: CBC News: The National
Views: 56,430
Keywords: The National, AI, artificial intelligence, geoffrey hinton, geoffrey hinton ai, godfather of AI, geoffrey hinton google, google ai, artificial intelligence risks, AI risks, dangers of AI, ai risk to humanity, AI threats, existential threats, geoffrey hinton interview, geoffrey hinton adrienne arsenault, chatgpt, palm, ai programs, ai robot, ai soldier, CBC, CBC News
Id: CkTUgOOa3n8
Length: 8min 8sec (488 seconds)
Published: Wed May 10 2023