In conversation with the Godfather of AI

Captions
It's an honor to be here with Geoffrey Hinton, one of the great minds on one of the great issues of our time: a man who helped create artificial intelligence, was at the center of nearly every revolution in it, and has now become perhaps its most articulate critic of where we're going. So, an honor to be on stage with you. Thank you.

He's earned the moniker Godfather of AI. One of the things AI has traditionally had problems with is humor. I asked an AI if it could come up with a joke about the Godfather of AI, and it actually wasn't that bad. It said: he gave AI an offer it couldn't refuse, neural networks. It's not bad. Okay, that's not bad; it's good for AI.

So let's begin with that. What I want to do in this conversation is very briefly step back into your foundational work, then go to where we are today, and then talk about the future. When you're designing neural networks and building computer systems that work like the human brain and learn like a human brain, and everybody else is saying, Jeff, this is not going to work, do you push ahead because you know this is the best way to train computer systems, or do you do it for more spiritual reasons, because you want to make a machine that is like us?

I do it because the brain has to work somehow, and it sure as hell doesn't work by manipulating symbolic expressions explicitly, so something like neural nets had to work. Von Neumann and Turing believed that too.

So that's a good start. You're doing it because you think it's the best way forward?

Yes, in the long run the best way forward.

Because that decision has profound effects down the line. Okay, so you do that: you start building neural nets, you push forward, and they become better than humans at certain limited tasks, at image recognition, at translation, some chemical work. I interviewed you in 2019 at Google I/O and you said it would be a long time before they could match us in reasoning, and that's the big change that's happened over the last four years, right? They still can't match us, but they're getting close. How close are they getting, and why?

It's the big language models that are getting close, and I don't really understand why they can do it, but they can do little bits of reasoning. My favorite example is a puzzle I asked GPT-4, a puzzle given to me by a symbolic AI guy who thought it wouldn't be able to do it. I made the puzzle more difficult and it could still do it. The puzzle was: the rooms in my house are painted blue or yellow or white; yellow paint fades to white within a year; in two years' time I want them all to be white; what should I do and why? It says you should paint the blue rooms white, and you should do that because blue won't fade to white, and you don't need to paint the yellow rooms because they will fade to white. So it knew what I should do and it knew why, and I was surprised it could do that much reasoning already.

It's kind of an amazing example, because when people critique these systems, or say they're not going to do much, they say they're Mad Libs, they're just word completion. But that is not word completion. To you, is that thinking?

Yeah, that's thinking. And when people say it's just autocomplete, a lot goes on in that word "just." If you think about what it takes to predict the next word, you have to understand what's been said to be really good at predicting the next word. So people say it's just autocomplete, or it's just statistics.
Now, there's a sense in which it is just statistics; in that sense, everything's just statistics. But it's not the sense most people think of, where statistics means keeping counts of how many times this combination of words occurred and how many times that combination occurred. It's not like that at all. It's inventing features, and interactions between features, to explain what comes next.

Okay, so if it's just statistics and everything is just statistics, is there anything we can do — obviously it's not humor, maybe it's not reasoning — is there anything we can do that a sufficiently well-trained large language model, with a sufficient number of parameters and a sufficient amount of compute, could not do in the future?

If the model is also trained on vision and picking things up and so on, then no.

But is there anything we can think of, any way we can think, any cognitive process, that the machines will not be able to replicate?

We're just a machine. We're a wonderful, incredibly complicated machine, but we're just a big neural net, and there's no reason why an artificial neural net shouldn't be able to do everything we can do.

Are we a big neural net that is more efficient than these new neural nets we're building, or are we less efficient?

It depends whether you're talking about the speed of acquiring knowledge and how much knowledge you can acquire, or whether you're talking about energy consumption. On energy consumption we're much more efficient: we're like 30 watts, and one of these big language models, when you're training it, you train many copies of it, each looking at different parts of the data, so it's more like a megawatt. So it's much more expensive in terms of energy, but all these copies can be learning different things from different parts of the data, so it's much more efficient in terms of acquiring knowledge from data.

And it becomes only more efficient because each system can train the next system?

Yes.

So let's get to your critique. The best summarization of your critique came from a conference at the Milken Institute about a month ago, and it was Snoop Dogg. He said: I heard the old dude who created AI saying this is not safe, because the AIs got their own minds, and those ... are going to start doing their own ... [Laughter] Is that an accurate summarization?

Um, they probably didn't have mothers. [Laughter] [Applause] But the rest of what Dr. Dogg said is correct.

Hang on. Yes. All right, so explain what you mean, or what he means, and how it applies to what you mean. When they're going to start doing their own ..., what does that mean to you?

Okay, so first I have to emphasize we're entering a period of huge uncertainty. Nobody really knows what's going to happen, and people whose opinion I respect have very different beliefs from me. Yann LeCun thinks everything's going to be fine: they're just going to help us, it's all going to be wonderful. But I think we have to take seriously the possibility that if they get to be smarter than us, which seems quite likely, and they have goals of their own, which seems quite likely, they may well develop the goal of taking control, and if they do that, we're in trouble.

Okay, so let's go back to that in a second, but let's take Yann's position. Yann LeCun was also one of the people who won the Turing Award and is also called the Godfather of AI, and I was recently interviewing him, and he made the case: look, all technologies can be used for good or ill, but some technologies have more of an inherent goodness.
AI has been built by humans, by good humans, for good purposes; it's been trained on good books and good text; it will have a bias towards good in the future. Do you believe that or not?

I think AI that's been trained by good people will have a bias towards good, and AI that's being trained by bad people, like Putin or somebody like that, will have a bias towards bad. We know they're going to make battle robots; they're busy doing it in many different defense departments. So they're not going to necessarily be good, since their primary purpose is going to be to kill people.

So you believe that the risks of the bad uses of AI, whether they're more or less than the good uses of AI, are so substantial they deserve a lot of our thought right now?

Certainly, yes. For lethal autonomous weapons, they deserve a lot of our thought.

Okay, let's stick on lethal autonomous weapons, because you are one of the few people really speaking about this as a risk, a real risk. Explain your hypothesis about why super-powerful AI combined with the military could actually lead to more and more warfare.

Okay, I don't actually want to answer that question. There's a separate question: even if the AI isn't superintelligent, if defense departments use it for making battle robots, it's going to be very nasty, scary stuff. Even if it's not superintelligent, and even if it doesn't have its own intentions, it just does what Putin tells it to, it's going to make it much easier, for example, for rich countries to invade poor countries. At present there's a barrier to invading poor countries willy-nilly, which is that you get dead citizens coming home. If they're just dead battle robots, that's just great; the military-industrial complex would love that.

So it's sort of a similar argument to the one people make with drones: if you can send a drone and you don't have to send an airplane with a pilot, you're more likely to send the drone, therefore you're more likely to attack. If you have a battle robot, it's that same thing squared?

Yep. That's my main concern with battle robots, and it's a separate concern from what happens with superintelligent systems taking over for their own purposes.

Before we get to superintelligent systems, let's talk about some of your other concerns. In the litany of things you're worried about, we obviously have battle robots; you're also quite worried about inequality. Tell me more about this.

So it's fairly clear — it's not certain, but it's fairly clear — that these big language models will cause a big increase in productivity. There's someone I know who answers letters of complaint for a health service. He used to write these letters himself, and now he just gets ChatGPT to write the letters, and it takes one-fifth of the amount of time to answer a complaint. So he can do five times as much work, so you only need a fifth as many people like him, or maybe they'll just answer a lot more letters.

But they'll answer more letters, right? Or maybe they'll have more people because they'll be so efficient? More productivity leads to getting more done. Maybe this is an unanswered question.

What we expect, in the kind of society we live in, is that if you get a big increase in productivity like that, the wealth isn't going to go to the people who are doing the work or the people who get unemployed; it's going to go to making the rich richer and the poor poorer, and that's very bad for society.
Definitionally, or do you think there's some feature of AI that will lead to that?

No, it's not to do with AI. It's just what happens when you get an increase in productivity, particularly in a society that doesn't have strong unions.

But there are many economists who would take a different position and say that over time, if you look at technology: we went from horses and buggies, and the horses and buggies went away, and then we had cars, and oh my gosh, the people who drove the horses lost their jobs; and ATMs came along, and suddenly bank tellers no longer needed to do that, but we now employ many more bank tellers than we used to; and we have many more people driving Ubers than we had people driving horses. So the argument an economist would make is: yes, there will be churn, and there will be fewer people answering those letters, but there will be many more higher-cognitive things to be done. How do you respond to that?

I think the first thing I'd say is: a loaf of bread used to cost a penny, then they invented economics, and now it costs five dollars. So I don't entirely trust what economists say, particularly when they're dealing with a new situation that's never happened before.

Right, and superintelligence would be a new situation that never happened before.

But even these big chatbots that are just replacing people whose job involves producing text — that's never happened before, and I'm not sure how they can confidently predict that more jobs will be created than the number of jobs lost.

I'll add a little side note: in the green room I introduced Jeff to two of my three children, Alice and Zachary, who are here somewhere. He said to Alice, are you going to go into media? And then he said, well, I'm not sure media will exist. And then Alice asked, what should I do? And you said plumbing.

Yes.

Now explain. I mean, we have a number of plumbing problems at our house; it would be wonderful if they were able to put in a new sink. But explain: a lot of young people out here, not just my children, are thinking about what careers to go into. What are the careers they should be looking at, and what are the attributes of those careers?

I'll give you a little story about being a carpenter. If you're a carpenter, it's fun making furniture, but it's a complete dead loss, because machines can make furniture. If you're a carpenter, what you're good for is repairing furniture, or fitting things into awkward spaces in old houses, making shelves in things that aren't quite square. The jobs that are going to survive AI for a long time are jobs where you have to be very adaptable and physically skilled, and plumbing is that kind of job, because manual dexterity is hard for a machine to replicate. It's still hard, and I think it's going to take longer before they can be really dexterous and get into awkward spaces; that's going to take longer than being good at answering text questions.

But should I believe you? Because when we were on stage four years ago, you said that as long as somebody has a job that focuses on reasoning, they'll be able to last. Isn't the nature of AI such that we don't actually know where the next incredible improvement in performance will come from? Maybe it will come in manual dexterity.

Yeah, it's possible.

So actually, let me ask you a question about that.
When we look at the next five years of AI, do you think the most impactful improvements will be in large language models and things related to large language models, or in something else?

I think it'll probably be multimodal large models. They won't just be language models; they'll be doing vision, and hopefully they'll be analyzing videos, so they'd be able to train on all of the YouTube videos, for example. You can understand a lot from things other than language, and when you do that, you need less language to reach the same performance. So the idea that they're going to be saturated because they've already used all the language there is, or all the language that's easy to get hold of, is less of a concern if they're also using lots of other modalities.

This gets at another argument that Yann, your fellow Godfather of AI, makes: that language is so limited. There's so much information we're conveying beyond the words; in fact, I'm gesturing like mad right now, which conveys some of the information, as does the lighting and all this. So your view is: that may be true, language is a limited vector for information, but soon it will be combined with other vectors?

Absolutely. It's amazing what you can learn from language alone, but you're much better off learning from many modalities. Small children don't just learn from language alone.

Right. So if your principal role right now were still researching AI, finding the next big thing, you would be doing multimodal AI and trying to attach, say, visual AI systems to text-based AI systems?

Yes, which is what they're doing now at Google. Google is making a system called Gemini, which they talked about a few days ago, and it's a multimodal model.

Yeah. Well, let me talk about something else at Google. While you were there, Google invented the Transformer architecture, generative pre-trained Transformers. When did you realize that would be so central and so important? It's interesting to me because this paper comes out in 2017, and when it comes out it's not as though firecrackers are lit, you know, shot into the sky. It's five or six years later that we suddenly realize the consequences, and it's interesting to think what other papers out there could be the same in five years.

With Transformers, it was really only a couple of years later, when Google developed BERT, that it became very clear Transformers were a huge breakthrough. I didn't immediately realize what a huge breakthrough they were, and I'm annoyed about that; it took me a couple of years to realize.

Well, you never made it clear. The first time I ever heard the word Transformer was talking to you on stage, and you were talking about Transformers versus capsules, and this was right after it came out.

Let's talk about one of the other critiques of language models and other models, which is: soon — in fact, probably already — they've absorbed all the organic data that has been created by humans. If I create an AI model right now and I train it on the internet, it's trained on a bunch of stuff, mostly stuff made by humans, but also a bunch of stuff made by AI, right? And you're going to keep training AIs on stuff that has been created by AIs, whether it's a text-based language model or a multimodal language model. Will that lead to inevitable decay and corruption, as some people argue? Or is it just, you know, a thing we have to deal with? Or is it, as other people in the AI field argue, the greatest thing for training AIs, and we should just use synthetic data?
Okay, I don't actually know the answer to this technically. I suspect you have to take precautions so you're not just training on data that you yourself generated, or that some previous version of you generated. I suspect it's going to be possible to take those precautions, although it'd be much easier if all fake data were marked as fake.

There is one example in AI where training on stuff from yourself helps a lot. If you don't have much training data — or rather, you have a lot of unlabeled data and a small amount of labeled data — you can train a model to predict the labels on the labeled data, and then you take that same model and train it to predict labels for the unlabeled data, and whatever it predicts, you tell it it was right. And that actually makes the model work better. (A minimal sketch of this self-training trick follows the transcript.)

How on Earth does that work?

Because, on the whole, it tends to be right. It's complicated; it was analyzed much better many years ago for acoustic models, where they did the same trick.

So, listening to this, I've had a realization on stage: you're a man who's very critical of where we're going — killer robots, income inequality — but you also sound like somebody who loves this stuff.

Yeah, I love this stuff. How could you not love making intelligent things?

So let me get to maybe the most important question for the audience and for everyone here. We're now at this moment where a lot of people here love this stuff, and they want to build it and they want to experiment, but we don't want negative consequences. We don't want increased income inequality; I don't want media to disappear. What are the choices and decisions and things we should be working on now to maximize the good, to maximize the creativity, but to limit the potential harms?

I think to answer that, you have to distinguish many kinds of potential harm, so I'll distinguish, like, six of them for you.

Please.

There's bias and discrimination. That is present now; it's not one of these future things we need to worry about, it's happening now. But it is something I think is relatively easy to fix compared with all the other things, if you make your target not a completely unbiased system but just a system that's significantly less biased than what it's replacing. Suppose you have old white men deciding whether young black women should get mortgages: if you just train on that data, you'll get a system that's equally biased, but you can analyze the bias, you can see how it's biased, because it won't change its behavior; you can freeze it and then analyze it, and that should make it easier to correct for bias. So okay, that's bias and discrimination; I think we can do a lot about that, and I think it's important we do, but it's doable.

The next one is battle robots. That I'm really worried about, because defense departments are going to build them, and I don't see how you could stop them doing it. Something like a Geneva Convention would be great, but those never happen until after the weapons have been used; with chemical weapons, it didn't happen until after the First World War, I believe. So I think what may happen is that people who use battle robots will see just how absolutely awful they are, and then maybe we can get an international convention to prohibit them. So that's two.

I mean, you could also tell the people building the AI not to sell their equipment to the military.

You could try.

Try.

Okay. The military has lots of money. Number three, there's joblessness.
Yeah. You could try to make sure that some of the extra revenue that comes from the increase in productivity goes to helping the people who remain jobless, if it turns out there aren't as many jobs created as destroyed. That's a question of social policy, and what you really need for that is socialism.

We're in Canada, so you can say socialism.

Number four would be the warring echo chambers, due to the big companies wanting you to click on things that make you indignant, and so giving you things that are more and more extreme, so you end up in an echo chamber where you believe these crazy conspiracy theories if you're in the other echo chamber, or you believe the truth if you're in my echo chamber. That's partly to do with the policies of the companies, so maybe something could be done about that.

But that is a problem that exists; it existed prior to large language models, and in fact large language models could maybe reverse it. It's an open question whether they make it better or whether they make that problem worse.

Yeah, it's a problem to do with AI, but it's not to do with large language models.

How is it a problem to do with AI?

It's a problem to do with AI in the sense that there's an algorithm, using AI, trained on our emotions, that then pushes us in those directions.

Okay. All right, so that's number four.

There's the existential risk, which is the one I decided to talk about, because a lot of people think it's a joke. There was an editorial in Nature yesterday where they basically said that fear-mongering about the existential risk is distracting attention from the actual risks; they compared existential risk with actual risks, implying the existential risk wasn't actual. I think it's important that people understand it's not just science fiction, it's not just fear-mongering: it is a real risk that we need to think about, and we need to figure out in advance how to deal with it. So that's five. And there's one more, and I can't think what it is.

How do you have a list that doesn't end on existential risk? I feel like that should be the end of the list.

No, that was the end, but I thought if I talked about existential risk I'd be able to remember the missing one — and I couldn't.

All right, well, let's talk about existential risk. Explain existential risk: how it happens, or as best you can imagine it, what it is that goes wrong that leads to extinction or the disappearance of humanity as a species.

Okay, at a very general level: if you've got something a lot smarter than you that's very good at manipulating people, just at a very general level, are you confident people will stay in charge? Then you can go into specific scenarios for how people might lose control, even though they're the people creating this and giving it its goals. One very obvious scenario is: if you're given a goal and you want to be good at achieving it, what you need is as much control as possible. For example, if I'm sitting in a boring seminar and I see a little dot of light on the ceiling, and then I suddenly notice that when I move, that dot of light moves, I realize it's the reflection from my watch — the sun is bouncing off my watch. And the next thing I do is not start listening to the boring seminar again; I immediately try to figure out how to make it go this way and that way, and once I've got control of it, then maybe I'll listen to the seminar again.
We have a very strong built-in urge to get control, and it's very sensible, because the more control you get, the easier it is to achieve things. And I think AI will be able to derive that too: it's good to get control so you can achieve other goals.

Wait, so you actually believe that getting control will be an innate feature of something? The AIs are trained on us, right? They act like us, they think like us, because the neural architecture makes them like our human brains and because they're trained on all of our outputs. So you actually think that getting control of humans will be something the AIs almost aspire to?

No, I think they'll derive it as a way of achieving other goals. I think in us it's innate — I'm very dubious about saying things are really innate, but I think the desire to understand how things work is a very sensible desire to have, and I think we have that.

So we have that, and then AIs will develop an ability to manipulate us and control us in a way that we can't respond to? And even though good people will be able to use equally powerful AIs to counter these bad ones, you believe we still could have an existential crisis?

Yes. It's not clear to me. I mean, Yann makes the argument that the good people will have more resources than the bad people — I'm not sure about that — and that good AI is going to be more powerful than bad AI, and good AI is going to be able to regulate bad AI. We have a situation like that at present, right, where you have people using AI to create spam and then you have people like Google using AI to filter out the spam, and at present Google has more resources and the defenders are beating the attackers. But I don't see that it'll always be like that. I mean, even in cyber warfare, there are moments where it seems like the criminals are winning and moments where it seems like the defenders are winning.

So you believe there will be a battle like that over control of humans by superintelligent artificial intelligence?

It may well be, yes. And I'm not convinced that good AI that's trying to stop bad AI getting control will win.

Okay. So before this existential risk happens, before bad AI does this, we have a lot of extremely smart people building a lot of extremely important things. What exactly can they do to most help limit this risk?

One thing you can do is, before the AI gets superintelligent, you can do empirical work into how it goes wrong, how it tries to get control, whether it tries to get control — we don't know whether it would. But before it's smarter than us, I think the people developing it should be encouraged to put a lot of work into understanding how it might go wrong, understanding how it might try to take control away. And I think the government could maybe encourage the big companies developing it to put in comparable resources — maybe not equal resources. Right now there are 99 very smart people trying to make it better and one very smart person trying to figure out how to stop it taking over, and maybe you want it more balanced.

And so this is, in some ways, your role right now: the reason you've left Google, on good terms, but want to be able to speak out and participate in this conversation, so more people can join that one and not the 99.
Yeah, I would say it's very important for smart people to be working on that, but I'd also say it's very important not to think this is the only risk. There are all these other risks, and I've remembered the last one, which is fake news. It's very important to try, for example, to mark everything that's fake as fake. Whether we can do that technically, I don't know, but it'd be great if we could. Governments do it with counterfeit money: they won't allow counterfeit money, because that reflects on their central interests. They should try to do it with AI-generated stuff. I don't know whether they can, but...

We're out of time, so give one specific thing to do, something to read, a thought experiment — one thing to leave the audience with, so they can go out of here and think: okay, I'm going to do this. AI is the most powerful thing we've invented in perhaps our lifetimes, and I'm going to make it more likely to be a force for good in the next generation. One final thought for everyone here.

I actually don't have a plan for how to make it more likely to be good than bad, sorry. I think it's great that it's being developed, because we didn't get to mention the huge number of good uses of it, like in medicine, in climate change, and so on. So I think progress now is inevitable, and it's probably good. But we seriously ought to worry about mitigating all the bad side effects of it, and worry about the existential threat.

All right, thank you so much. What an incredibly thoughtful, inspiring, interesting, phenomenal remark. Thank you to Geoffrey Hinton.

Thank you. Thank you, Jeff. So great.
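The self-training trick mentioned in the transcript — train on a small labeled set, then treat the model's own confident predictions on unlabeled data as if they were correct labels and retrain — is the standard pseudo-labeling recipe from semi-supervised learning. Below is a minimal sketch of that loop; the synthetic dataset, the logistic-regression model, and the 0.9 confidence threshold are illustrative assumptions, not details from the talk.

```python
# Minimal sketch of self-training / pseudo-labeling:
# fit on a small labeled set, label the unlabeled pool with the model's own
# confident predictions, then refit on both. It helps "because on the whole
# it tends to be right."
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in data: pretend only 5% of examples arrive with labels.
X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_lab, X_unlab, y_lab, _ = train_test_split(X, y, train_size=0.05, random_state=0)

# Step 1: train a model to predict the labels on the labeled data.
model = LogisticRegression(max_iter=1000).fit(X_lab, y_lab)

# Step 2: have that same model predict labels for the unlabeled data,
# keeping only the predictions it is confident about (threshold is a choice).
probs = model.predict_proba(X_unlab)
confident = probs.max(axis=1) > 0.9
pseudo_labels = model.classes_[probs.argmax(axis=1)][confident]

# Step 3: "tell it it was right" by retraining on the real labels
# plus the model's own pseudo-labels.
X_all = np.vstack([X_lab, X_unlab[confident]])
y_all = np.concatenate([y_lab, pseudo_labels])
model = LogisticRegression(max_iter=1000).fit(X_all, y_all)
```

scikit-learn also packages this loop as sklearn.semi_supervised.SelfTrainingClassifier. Hinton's worry about models training on their own (or a predecessor's) generated web text is essentially this loop without the safeguards: no trusted labeled anchor and no confidence filter, which is why he suggests marking synthetic data as fake.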
Info
Channel: Collision Conference
Views: 49,169
Keywords: CC22
Id: CC2W3KhaBsM
Length: 30min 3sec (1803 seconds)
Published: Thu Jul 20 2023