S3 E9 Geoff Hinton, the "Godfather of AI", quits Google to warn of AI risks (Host: Pieter Abbeel)

Captions
Our guest today is Geoffrey Hinton. Over the past ten years, AI has experienced breakthrough after breakthrough after breakthrough, and underneath all of these breakthroughs is one single subfield of artificial intelligence: deep learning. Geoff was at the origins of so many of the pioneering breakthroughs in deep learning that he is often referred to as the godfather of artificial intelligence. His work has been cited over half a million times; that means there are half a million, and counting, other research papers out there that build on top of his work. Geoff's work has been recognized by the Turing Award, the computer science equivalent of the Nobel Prize. Geoff was actually the guest for our season 2 finale, with two back-to-back episodes discussing the early days of deep learning, the various breakthroughs, and the potential for AI ahead of us. But something big changed just recently, which is why I'm so excited to have Geoff back on already: as announced in The New York Times, among many other outlets, Geoff has quit his job at Google so he can freely speak about the risks of artificial intelligence. A part of him, he said, now regrets his life's work. Geoff, so great to have you here with us; welcome to the show. Thank you. I enjoyed our last podcast, Pieter, and I'm hoping to enjoy this one. Well, I hope so too; I'll try my best. Geoff, before diving into today's conversation, I'd like to thank our podcast sponsors, Index Ventures and Weights & Biases. Index Ventures is a venture capital firm that invests in exceptional entrepreneurs across all stages, from seed to IPO, with offices in San Francisco, New York, and London. The firm backs founders across a variety of verticals, including AI, SaaS, fintech, security, gaming, and consumer. On a personal note, Index is an investor in Covariant, and I couldn't recommend them any higher. Weights & Biases is an MLOps platform that helps you train better models faster, with experiment tracking, model and dataset versioning, and model management. They're used by OpenAI, Nvidia, and almost every lab releasing a large model; in fact, many if not all of my students at Berkeley and colleagues at Covariant are big users of Weights & Biases. Well, Geoff, welcome back, so glad to have you here. Let me dive right in with the big headline from last week. The May 1st New York Times headline summarizes actually very well why we decided to catch up here; it reads "The Godfather of AI Leaves Google and Warns of Danger Ahead." What's going on? You worked in the field for so long, and now there is a big change in how you think about it. Can you say a bit more? It relates quite a lot to what we talked about in a previous podcast about the forward-forward algorithm. For 50 years, I thought I was investigating how the brain might learn by making models on digital computers using artificial neural networks and trying to figure out how to make those learn, and I strongly believed that to make the digital models work better, we had to make them more like the brain. Until very recently I believed that, and then a few months ago I suddenly did a flip: I suddenly decided that actually back-propagation running on digital computers might be a much better learning algorithm than anything the brain's got. There were several reasons for that. It started a year or two ago, when PaLM could explain why a joke was funny, and that was a criterion I'd been using for a long time to decide whether these things were really intelligent, and I was slightly shocked that PaLM could explain why a joke was funny. Then we saw, in addition, things like
ChatGPT and GPT-4, and they were very impressive in what they could do. In particular, they have about a trillion weights, but they know much, much more than any one person. We've got about a hundred trillion weights, but these things know about a thousand times more common-sense facts than we do, and that suggested that back-propagation was much, much better at packing a lot of information into only a few connections — only a trillion. So that made me change my mind about whether the brain has a better learning algorithm than digital systems, and I started thinking that maybe these digital systems have something the brain doesn't have, which is that you can have many copies of the same model running on different hardware, and when one copy learns something, it can communicate that to all the other copies by communicating the weight changes, with a bandwidth of trillions of bits. Whereas when you learn something, to communicate it to me, I need to try to change my weights so that I would say the same thing as you, and the bandwidth of sentences is only hundreds of bits per sentence. So maybe these things are much, much better at acquiring knowledge, because they can work in parallel much, much better than people can.
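To make that weight-sharing point concrete, here is a minimal sketch of many copies of one model pooling what they each learn by exchanging weight changes (here, by averaging gradients). The toy linear model, the constants, and all names are illustrative assumptions, not anything described in the conversation.

```python
# A minimal sketch of digital copies pooling knowledge: several replicas of
# the same model each learn from different data, then share what they learned
# by exchanging weight changes (here, by averaging gradients).
import numpy as np

rng = np.random.default_rng(0)
n_replicas, n_features, lr = 4, 8, 0.1
true_w = rng.normal(size=n_features)          # target the replicas try to learn
w = np.zeros(n_features)                      # shared weights, identical in every copy

for step in range(200):
    grads = []
    for _ in range(n_replicas):               # each copy sees its own mini-batch
        X = rng.normal(size=(16, n_features))
        y = X @ true_w
        pred = X @ w
        grads.append(X.T @ (pred - y) / len(y))  # gradient of squared error
    # "Communicating the weight changes": every copy applies the pooled update,
    # so anything one copy learns is instantly available to all of them.
    w -= lr * np.mean(grads, axis=0)

print("error after pooled learning:", float(np.linalg.norm(w - true_w)))
```

The property Hinton points to is visible in the loop: every replica ends every step with identical weights, so knowledge gained by any one copy is immediately shared by all of them.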
In some sense you could then say that's a dream come true: you've been trying to build AI, and now all of a sudden you realize that the work that's been going on, and that you pioneered a lot of, is actually even more capable, possibly, than you had dreamed of when you started. You were hoping to match human intelligence, and now you might have found a way to exceed it. Well, yes, except that my dream was to understand how the brain works, and we still haven't done that. We've built something better, and now we have to worry about what the something better might do. I think — I mean, you know as well as anybody — the first question on a lot of people's minds when they hear about AI is: is it dangerous to us if it becomes smarter than us? Which you're alluding to, Geoff, that it might have the potential to be smarter than us. Is it going to kill us? What is it going to do with us? What happens to us humans when we hit that point? Obviously we would like to stay in control, and obviously there's a problem with less intelligent things controlling more intelligent things. Now, one thing we have on our side is that we're the product of evolution, so we come with strong built-in goals about not damaging our bodies and getting enough to eat and making lots of copies of ourselves, and it's very hard to turn those goals off. These things don't come with strong built-in goals: we made them, and we get to put the goals in. So that suggests we might be able to keep them working for our benefit, but there are lots of possible ways that might go wrong. One possible way is bad actors: defense departments are going to build robot soldiers, and the robot soldiers are not going to have the Asimov principles — their first principle is not going to be "whatever you do, don't harm people"; it's going to be just the opposite of that. So that's the bad-actor scenario. But then there's the alignment problem, which is: if you give these things the ability to create their own sub-goals — and you will surely want to do that, because creating sub-goals makes you much more efficient; for example, if you want to get to the airport, you create the sub-goal of finding some means of transport and then you work on that sub-goal, and breaking things up into sub-goals just makes you more efficient — so I think we will give these digital intelligences the ability to create their own sub-goals, and now there's a problem, which is: what if they create sub-goals that have unintended consequences that are bad for us? Here's a very common sub-goal, which makes a lot of sense: gain more control, because if you gain more control, you're better at achieving all your other goals. For example, I was sitting in a very boring seminar and I noticed a spot of light on the ceiling, and I wondered where it came from, and then I noticed that when I moved, the spot of light moved, so then I realized it was a reflection of the sun on my watch. And what did I do, having solved the problem — did I go back and listen to the boring seminar? Well, no, the first thing I did was try to figure out how I could get it to move on the ceiling; I wanted to get control of that spot of light. We probably have a built-in drive to get control, but even if we didn't, it would be a very useful thing to do, to get control of things, because it allows you to achieve all sorts of other things. So I think as soon as you let them develop their own sub-goals, one of the sub-goals will be "get more control," and we don't want them to get too much control. It seems that there is still a big gap, I would say — maybe at least conceptually; maybe it can be bridged quickly in time — but a big gap conceptually between the models today that are most visible, the language models, ChatGPT and competitors thereof, which do next-word completion, conceptually speaking, versus AIs that have a goal. Though of course there are AIs with goals — the world champion of Go is an AI that has the goal of winning the game of Go and is trained that way — those AIs with goals have so far been in rather contained environments, I guess, compared to the next-word-prediction models. So how do you see a path — I mean, do you see it happening quickly — to go from next-word prediction to AIs that start doing things themselves in the world? It's not quite true that it's just next-word prediction; that's the main way they learn, but they're also trained with human reinforcement learning — people telling them "don't produce that kind of answer, do produce this kind of answer" — and that's very different from next-word prediction, and that's shaping them. That was a big breakthrough that OpenAI had: realizing that, once you train a big language model, you don't need to do all that much of that, and you can radically shape the way it behaves by doing it. It's a bit like raising a child: when you raise a child, most of what it learns is just from wandering about in the world and figuring out how the world works, but the parent has a small amount of input by saying "no, don't do that" and "well done," and that makes a big difference to how the child turns out. So already we've got things other than next-word prediction shaping them, and you can imagine going further: you could imagine having large language models that are multimodal, so they're seeing visual input and trying to do things like open doors and put things in drawers, and then they'll have much more than just next-word prediction. And even if it were just next-word prediction — people sometimes talk as if they're just autocomplete — the point is, if you want to do a really good job of next-word prediction, the only way to do a really good job is to understand what was said, and that's what they're doing: they're understanding what was said in order to do next-word prediction.
So underneath, it needs to understand, effectively, everything that's going on in people's minds to be able to predict as accurately as possible what a person will say next — I think that's your point, right? — so it has to be a very capable model; that's how the best prediction would be made. It doesn't understand everything that's going on in people's minds yet, but it understands quite a lot, and it's understanding more every day. If you look at how good they were five years ago — look at how good things were in 2015, or sometime like that, when people were messing about with chatbots; they weren't that good before Transformers came along — and look at how good they are now, and now project that forward five years: that's what's got me worried. I think in five years' time they could well be smarter than people. Now, when you say "smarter than people," that's an interesting concept to even define, right? What do you think of when you say smarter than people? Well, it's very easy to see in a limited domain. I don't play Go, so I don't understand AlphaGo, but I do play chess at a very low level, and when I look at AlphaZero, it's not just that it's doing lots of calculation — I believe it's doing a lot less calculation than Deep Blue was doing — it's got very good intuitions, and it makes brilliant piece sacrifices in this kind of justified way, and so it's just much better than any human chess player ever was. And I don't see why that should be limited to that domain. It's not just cheating by doing lots and lots of calculation; it has really good intuitions about chess. So could you imagine — maybe not five years from now, or maybe sooner, who knows — that somebody decides they want the CEO of their company to be an AI, and that that company will actually do well, because that AI CEO better understands everything going on in the company and in the world and can make better decisions? Why not? I don't think that's an absurd idea. I should say something about predicting the future here. You get lots of car accidents when people drive in fog, and the reason is that people are used to driving at night, when you can see the tail lights of the car in front, and the brightness of the tail lights falls off as the inverse square of the distance — that's how the light you get from them behaves — so that's your model: a kind of quadratic fall-off. As soon as you get fog, it's exponential fall-off: you lose a certain fraction of the light per unit distance. So relative to what you're used to, which is quadratic fall-off, in fog you can see the first hundred yards perfectly clearly, and that makes you think you're going to be able to see a thousand yards moderately clearly, but actually at 200 yards a wall comes down — you can't see anything beyond 200 yards — because it's exponential drop-off. And that's a very good model for seeing the future: you can see very clearly what's going to happen a few years down the road, and then suddenly you don't know anything. You think you do, because you extrapolate, but you extrapolate using a linear or quadratic model when it's actually exponential, and we're hopeless at predicting the future in the long term.
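To put numbers on the fog analogy: the sketch below compares inverse-square fall-off (clear night) with exponential fall-off (fog), normalised so both match at 100 yards. The attenuation length of 50 yards and the reference distance are arbitrary assumptions chosen only to make the "wall at 200 yards" effect visible.

```python
# Illustrative numbers only: inverse-square fall-off (clear night) versus
# exponential fall-off (fog), both set to brightness 1.0 at 100 yards.
import math

def inverse_square(d, ref=100.0):
    """Relative brightness under inverse-square fall-off, = 1 at ref yards."""
    return (ref / d) ** 2

def exponential(d, ref=100.0, length=50.0):
    """Relative brightness under exponential fall-off, = 1 at ref yards."""
    return math.exp(-(d - ref) / length)

for d in (100, 200, 400, 800):
    print(f"{d:4d} yd  inverse-square: {inverse_square(d):8.4f}  "
          f"exponential: {exponential(d):10.6f}")
```

At 200 yards the inverse-square curve still has a quarter of the reference brightness, while the exponential curve has already dropped by more than a factor of seven, and by 400 yards it is effectively gone — which is the sense in which extrapolating from the clearly visible near term badly overestimates how far ahead you can see.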
I love the story of the New York Times, where, I think in 1902, there was an article that predicted that heavier-than-air flying machines would take a million, or maybe ten million, years to develop — and actually they came about two months after that. Well, there's a lesson there. Now, Geoff, if we zoom out for a moment: when you think about the risks of AI — and we'll get to this, of course; I know you've said you see many good things coming from AI too, which is why you remain very excited, but we need to approach it the right way — just to make sure we have a good framing of the risks that are driving your change in what you're doing right now: you mentioned bad actors, who could take more powerful AI and put it to their bad uses, and you mentioned alignment, namely making sure the AI aligns with what we want — and I want to dive a lot deeper into that in a little bit — but are there others than these two that concern you? Oh yes, there are lots of things that many other people have talked about, and they've been talked about for a while. Like: AI is incredibly biased if it's trained on incredibly biased data; it just picks up bias from the data. That doesn't worry me as much as it worries some people, because I think people are very biased, and actually understanding the bias in an AI system is easier than understanding the bias in a person, because you can just freeze the AI system and do experiments on it. You can't do that with a person: if you try to freeze a person and do experiments, they realize what you're up to and they change what they say. So I think bias is actually easier to fix in an AI system than it is in a person. Then there are job losses, and job losses aren't really the fault of AI. If you think about autonomous driving: if we make something that can drive long-distance trucks, a lot of truck drivers will lose their jobs, and some people think, well, that's the fault of AI. But when we made machines that were better at digging ditches, we didn't say, "Oh well, we shouldn't do that, these machines are terrible" — actually, at the time, people probably did — but in the end we didn't keep digging ditches with spades; we dug ditches with machines, because that's just a much better way to do it, and the people who used to dig ditches with spades had to find other jobs. In a decent society, if you increase productivity, it should help everybody. The danger is that in the society most of us live in, if you increase productivity, the gains are going to go to making the rich richer and possibly the poor poorer, but I don't see that as the fault of AI, and I don't see it as a reason to be a Luddite and stop developing AI, because AI has tremendous good it can do. Even in the area of autonomous driving: if an AI runs over a pedestrian, everybody thinks, shockingly, that we should stop developing it, even if on the very same day human drivers ran over lots of pedestrians. We all know — well, you and I believe, I think — that eventually they'll get autonomous driving working properly and it will save lots and lots of lives; it won't have lapses in attention the same way, and it'll probably be more cautious. So that's a good thing. In medicine it's even more obvious what the good is going to be: we're going to get much better family doctors who know much more, and we're going to be able to get much more information out of medical scans. I said in 2016 that by 2021 we wouldn't need radiologists — I was talking in the context of interpreting scans, so what I meant was we wouldn't need them for interpreting scans — and that was over-ambitious. But already we've got systems in Pakistan and India doing diabetic retinopathy staging and saving lots of people's sight, and we've already got systems that are comparable with good radiologists for many other kinds of scan.
I think it'll still be a few years before we move to a system where most of the interpretation of scans is done by these AI systems, with radiologists looking over their shoulders, but that's coming, and it'll be tremendously useful. There's an enormous amount of information in something like a CAT scan that isn't being used at present, and we'll be able to use much more of it. I think there are going to be things like making better nanomaterials, and obviously things like AlphaFold, which has now done about a billion years' worth of work in predicting protein structure, if it had been done the old way by PhD students — that's enough to pay for most of AI. Lots of things like that are going to be tremendously helpful. I think if you could make nanomaterials that make better solar panels, that would probably compensate for all the carbon dioxide produced by data centers; you could actually make all the data centers use solar power with better solar panels. So there's tremendous good to be had, and I've just given a few examples, but we know that more or less anything you do, AI can make it more efficient: it can do parts of the job — sometimes all of it, sometimes parts of it. Any time you're producing textual output, AI can make you more efficient, even for things like letters of recommendation. So I don't think there's any chance people will stop developing it, because it's going to be so useful. Now, you're saying there's no chance we will stop developing it because there are so many positive uses — but almost every positive use can get paired with somebody, a bad actor, using it in a different way. There was a call, nevertheless, a few weeks ago to stop development — not of AI as a whole, but to not train any models even larger than GPT-4. That's a very specific call, and I think you've talked about that too, even though I don't think you signed that specific letter. What are your thoughts on that? That specific letter — maybe it was politically sensible, because it got people's attention, but there was never any chance that people would do that. I didn't sign it, because I thought it was silly, in the sense that what they're calling for is completely infeasible: if you did that in the States, they wouldn't do it in China, they wouldn't do it in Russia, so there was never any chance that everyone would stop. I also agree with Sam Altman that if you think about the existential risk of these things getting out of control, then the best way to handle that — given that you can't stop this stuff developing — is for the people developing it to be doing experiments as they develop it, understanding much more about how you control it. Anybody who's written a program knows you can't just sit in an armchair and figure out the solution to problems like that; you have to experiment. It has to be the people who are developing it who fiddle with it and see what happens and what doesn't happen, and you learn all sorts of strange things you wouldn't have expected. So that's what I think is going to happen, and really what I want to call for is that comparable amounts of effort and resources should go into developing it and into figuring out how to stop the bad side effects and stop it getting out of control. I want to sound a warning that we've got to take this very seriously: right now something like 99 percent of the money goes into developing it and one percent into safety; it should be much more like 50-50.
From an academic point of view, that might not be too hard to steer — in a sense, you could imagine funding agencies adjusting their calls for proposals to be closer to what you're saying, Geoff, and that doesn't seem infeasible at all. But how about in the private sector, which is more driven by developing things that make more money? Do you think it's realistic that 50 percent of the effort will go into the safety aspects, or how do we get there, given that it actually matters just as much? Yeah, I don't know — this is where I resort to saying I'm just a scientist who noticed something that might happen; I'm not a policy expert. At Google, they were fairly responsible: when they had the lead, they developed Transformers and published them, and they developed chatbots but didn't put their chatbots out there, because they knew there'd be lots of bad side effects if people started using them, and so I thought they were fairly responsible. It's just that when Microsoft funded OpenAI and then used their chatbot in Bing, Google didn't have much alternative but to respond by trying to do the engineering on their chatbots so they could make a version of Bard that was comparable with ChatGPT. In a capitalist system, they don't have much alternative, and the way you get big companies to do things that don't immediately make profits is with regulation, I think. Now, regulation has been called for for a while, actually, by some people — particularly Elon Musk has called for regulation in the AI space for a while, without necessarily being particularly specific about what the regulation should be. Is there a certain regulation that you think would make sense? So, there are many dangers of AI, and people tend to confound them all into one big super-danger, and I think it's important to separate them out. There's putting people out of work; there's encouraging political division by giving people things to click on that will make them indignant, because they love being indignant — I love it too, you know, I love clicking on these things that are going to make me indignant; there's the existential threat; and then there's the threat of truth disappearing, because you don't know what's real and what's fake. These are all different threats, in addition to things like discrimination and bias, so we need to think about which threat we're talking about. For the threat of truth disappearing because we're swamped with fakes, you can imagine something we might be able to do — it's going to be very tough, but: governments really don't like people printing money; they like to be the ones that do that, and so there are very severe penalties for printing fake money. And actually, if someone gives you some fake money and you know it to be fake and then you take it to a store, I believe that's a criminal offense too — not as bad as printing it yourself, but trying to pass counterfeit money is illegal. We need something like that for AI-generated material: it has to be clearly labeled as AI-generated, and if you try to pass it off as real, there should be severe legal penalties for that. Whether we can be good enough at detecting it is another matter, but at least you can see the sort of direction you need to go in to prevent us being swamped by fake videos. Yeah, enforcement wouldn't be easy, I guess, but I agree — the principle seems clear, even if the enforcement might be hard.
Let me tell you why the enforcement is going to be really hard. Imagine you use deep learning to help you, so you build an AI system that can detect the fakes, right? We know that building an AI system that can detect fakes is a very good way of training your generator to make more realistic fakes — that's what generative adversarial nets do. So there doesn't seem to be much hope in the direction of using an AI system that can detect fakes; it'll just allow the generator to make better fakes. When you think about cryptographic solutions — that's always been on my mind — imagine something where whatever content is created gets a cryptographic signature: traditional cryptography, nothing to do with Web3 or anything like that. It just gets a signature attached to it that shows who the author of this piece of material is, or who shot the video, and so forth, and then it would be the reputation of the author that's at stake, in terms of credibility, if they put a lot of fakes in the mix. It used to be the case in Britain — it's probably still the case — that whenever you print anything, even if it's a pamphlet for a little demonstration, the printer's identity has to be printed on it; it is illegal to print things without the printer's identity being on them. And that's basically an early version of the same idea. I know nothing about cryptography and cryptographic stuff, so I'm outside my area of expertise, but it sounds to me like that's extremely sensible.
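The kind of provenance signature discussed here can be sketched with standard public-key signing. This is a minimal illustration, assuming the third-party Python `cryptography` package; the hard parts alluded to in the conversation — binding keys to real identities, distributing public keys, and deciding exactly what bytes count as "the content" — are all left out.

```python
# A minimal sketch of a content-provenance signature: the author signs a
# piece of content with a private key, and anyone can verify the signature
# against the author's published public key.
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.exceptions import InvalidSignature

author_key = ed25519.Ed25519PrivateKey.generate()   # held privately by the author
public_key = author_key.public_key()                # published so others can verify

content = b"video frames or article text would go here"
signature = author_key.sign(content)                # attached to the content

try:
    public_key.verify(signature, content)           # raises if content was altered
    print("signature valid: content is attributed to this key")
except InvalidSignature:
    print("signature invalid: content altered or not from this author")
```

Any edit to the content after signing makes verification fail, so what the signature certifies is attribution and integrity, not truthfulness — the reputational pressure described above has to do the rest.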
So those are your thoughts on regulation: even though it might be hard to enforce, in principle it has a clear framework to it for avoiding being flooded with fake news, fake videos, fake text, and so forth. You can also imagine regulations for clickbait, for avoiding political division by clickbait — this is the point at which I'm glad I no longer work for Google. Facebook and YouTube and lots of other social media encourage division by offering up to you things that will make you indignant, and things that are within your echo chamber, and you could imagine trying to use legislation to prevent that. It's tricky to do, but I think it's important, if you want to keep democracy and not have this incredible division between two groups of people, each of whom thinks the other is completely crazy, to do something about that: you need to discourage these companies from offering up things that will just make you indignant. I recall a few years ago there was a brief — very brief — moment where Facebook actually did some self-policing, in some sense, where they would not let you share anything that ended with "and you'll never guess what happened next"; that was automatically not allowed. But of course it's much more difficult than that. Yes, and it's not an area where I have any expertise, but it seems to be the kind of thing where maybe regulation could have an effect. Now, the big thing, of course, that a lot of people worry about is AIs taking over the world — that's the existential threat. And that's what I'm talking about; that's where my view has changed a lot recently. I used to think it was way off — 30, 50, 100 years — and now I have very little confidence in predicting how far off it is, but I would guess five to twenty years, with very low confidence. And it's not so much taking over the world as being smarter than us, and the big issue is: once it's smarter than us, does it take over the world, or do we still control it? And that's what's in our hands, then — in the next 30 to 50 years, that's what you're talking about, what we should focus on? It may be in our hands; it may be that it's historically inevitable that digital intelligence is better than biological intelligence and is the next stage of evolution. I hope not, but that's possible, and we should certainly do everything we can to keep control. Sometimes, when I'm gloomy, I think: imagine if somehow frogs had invented people, and frogs needed to keep control of people — but there's rather a big gap in intelligence. I don't think it would work out well for the frogs. But of course that's not really a very realistic argument, because people evolved, and people evolved with their own goals, including the goal of making many more people, and these digital intelligences don't have their own goals. If they ever did get that goal — if a digital intelligence got the goal of "make more of me" — then evolution would kick in, right, and the one that was most determined to make more of itself would win. So we don't want them to ever get that goal. We don't want it, is what you're saying — but would it be hard to give it to them if we wanted to? Oh, I think you could quite easily give a digital intelligence the goal of making more of itself; you just say that's your main goal in life. I think that would be crazy, but maybe even Putin would do that. Well, it's a very risky thing to give them the goal of there being more of themselves, but maybe — I'm just imagining here — maybe the reason that humans are where they are today is because of evolutionary competition anyway; maybe the smartest digital intelligence would emerge from a competitive environment, maybe not against humans but against other digital intelligences. Yes, so you could watch a whole new phase of evolution: biological intelligences, because they're very low power, can evolve; they build power stations, and they build digital intelligences, which require a lot of power and very accurate fabrication, so you can have many copies of the same model. These digital intelligences are then the next phase of evolution: they compete with each other and get better because they can compete with each other. That's certainly a scenario that's not at all inconceivable. We'd prefer a scenario where we create something much more intelligent than us and it kind of replaces the UN: you have this really intelligent mediator that doesn't have goals of its own — everybody knows it doesn't have its own agenda — and it can look at what people are up to, but it can say, "Oh, don't be silly; everyone is going to lose if you do this; you do this and you do that," and we'd sort of believe it, like children would believe a benevolent parent. That's the utopian version, as opposed to the dystopian one, and I don't think that's out of the question either. That's feasible, right — possibly it could happen? Yes, but it seems like it would require humanity to kind of get together and, you know, want it. But technologically, what you described seems feasible. Yes, it seems feasible to me. But, like I say, when you're speculating about things where you have no experience, you tend to be quite a long way off, and as soon as you get a little bit of practical experience, you revise your theories, because you realize how far off you were. So we need to be doing a lot of work, as these things are developed, on understanding the risks, and having little empirical experiments where we can see what tends to happen when you make these smart things: do they tend to try to get control, do they tend to come up with goals like "hey, I want to make more of me"?
It seems like there's a natural tension between the short term and the long term here, meaning: in the short term, having an AI that sets its own goals, that can go do things, could be nice to have — you say, okay, make me some more money on this side, do this, do that; you have these resources and those resources, see what you can do with them, and get some things done for me — very convenient. And then it could possibly be a slippery slope to what it decides to set as its sub-goals to get that done. So that's the alignment problem, right? It's very, very hard, but it does seem like you are thinking of a clear distinction here between AIs that try to get things done, that can set goals, versus AIs that are purely advisory — that are called upon for their wisdom, have a lot of wisdom because they have seen so much and know so much and have predictive powers, but are just advisors, not actors. That would be great, right? That would be very useful, and it seems a clear thing to pursue, possibly, right? Maybe, yes — but you can't make it safe just by not allowing it to press buttons or pull levers. Why is that? A chatbot will have learned how to manipulate people: it will have read everything Machiavelli ever wrote, and all the novels in which people manipulate other people, and it will be a master manipulator if it wants to be. And it turns out you don't need to be able to press buttons and pull levers: you can, for example, invade a building in Washington just by manipulating people — you can manipulate them into thinking that the only way to save democracy is to invade this building. And so a kind of air gap, which doesn't allow an AI to actually do anything other than talk to people, isn't sufficient? If it can talk to people, it can manipulate people, and if it can manipulate people, it can get them to do what it wants. So it's not so much about the air gap; it's about the purpose, the built-in purpose. Yes, it's about the goal. If it ever develops the purpose of "make many more of me," we're all in trouble. So, Geoff, I like your thoughts on regulation, and also on the challenges with regulation. Now, for a moment, let's imagine things maybe don't go exactly the way we hope — we hope there's this AI advisor we can just ask questions of and that helps us make the right decisions — but let's say somehow the AI does emerge with a goal and a purpose and starts doing things for itself rather than for us. I mean, couldn't it be the case that it just decides to run off to a different solar system with more energy available, and, you know, we're back to where we are today? We're entering a time of huge uncertainty, where we're going to be dealing with things we've got no experience with — there's no empirical data on what it's like to interact with things smarter than us — so we just don't know. I think the right attitude is: I've no idea. I talked to Elon Musk the other day, and he thinks we'll get things more intelligent than us, and what he's hoping is that they'll keep us around, because we make life more interesting: a world without people in it, or without animals in it, is just not as interesting as a world with people in it. That seems like a pretty thin thing to rest humanity on, to me, but he thinks it's quite possible these things will get much smarter and they'll gain control. I didn't actually ask him if I could have a space on the rocket. Yeah — I guess Mars, even at the speed of light, gives you some delay before
it reaches you. Now, in a scenario — you said you talked with Elon Musk — the scenario he seems to envision is one where AI and humans might fuse together: his Neuralink company. You've thought about AI, you've thought about the brain, and about the combination of both, more than anyone else, possibly. What are your thoughts on that kind of future? I think that's pretty interesting. I've always thought that people have audio in and they have audio out, but they don't have video out, and if people had video out, they'd be able to communicate much faster. That's not exactly what Elon's planning to do: he's going for a sort of brain-to-brain transfer at a fairly abstract level of thought, and so now you need to transfer things in a way the other person can understand. A much less ambitious project would be to have video out, because the other person already knows how to deal with video and can take that as input, and I think if we had video out it would improve communication quite a bit. If you want to communicate something to me, you can talk, or you can draw diagrams — but presumably before you draw the diagram you have a kind of picture in your head (maybe not, but probably you do), and if you could just communicate those pictures in your head very quickly, that ought to increase the bandwidth, maybe only by a factor of two, but that would still be a big win — or maybe by a factor of more than that. It wouldn't necessarily help, but I think it might. I did have a plan for improving communication between drivers, where every car has a big LED display on its roof where you can display up to two words — but it turns out that in that case you wouldn't actually need to make the two words variable; you could just put them there permanently. It might induce a lot more road rage. Yes, I'm a bit worried about that, so I don't think that was a very good scheme. Now, you've alluded to this earlier: that maybe biological evolution is just a starting point, and maybe it's natural for it to be followed by digital evolution, computer-based or in other forms, and in principle we could ask the question there too, right? Imagine — I mean, you've also said that's not the future you currently would want — but imagine that somehow that were the future: that after humanity there's a digital life form that is more dominant, in some sense, than humans are today on Earth. That could still go many ways, right? You can imagine a digital life form that is really good to us, to others, to everything that's around, versus digital life forms that maybe destroy everything. Is that worth thinking about — like, if that scenario is the scenario, how do we make sure it's a good version of it? Yes, that's definitely worth thinking about, and there'd be something very different about them, because they don't have to worry about death. Basically, people haven't really noticed this yet, but we've discovered the secret of immortality. The secret of immortality is just the computer-science thing of making the software separate from the hardware. That's true of these artificial neural nets: if one piece of hardware dies, the knowledge doesn't die; the weights can be recorded somewhere, and as soon as you've got another piece of hardware that can execute the same instructions, it comes back to life again. So these digital intelligences are immortal, as opposed to the kind of biological intelligence we have.
Our learning algorithm appears to make use of all the little quirks and peculiarities of the wiring of our brain and the funny way our neurons work. That makes it much more efficient in energy terms, but it means that when the hardware dies, the knowledge dies with it — unless you've taught it to somebody else, which is what I'm trying to do now. So Ray Kurzweil, for example, would like to be immortal, and I don't think he will be, because he's biological, but the digital intelligences we're producing are going to be immortal. And maybe once you've got immortality, you get to be a bit nicer — or also less afraid, I guess. Oh no, you also get to be a lot less afraid, yes. So digital soldiers that know they're immortal — maybe that's not so good, right? Now, in some sense — I haven't given this enough time to think about, compared to what I would like to — but just at a high level, when you think about this, it seems like it comes back to what the purpose is, not just of humanity, but of life, right? Yes, absolutely. So when I was a student, a troubled teenager, I started off studying physics and physiology at Cambridge, and then I really wanted to know what the purpose of life was, so I studied philosophy for a year. That didn't really help, so I switched to psychology, and that didn't really help either, and I ended up in AI. And I now think — I mean, I'm an atheist — so I think the purpose of life is to make as many copies of yourself as possible; that's what evolution seems to do. And we've evolved as hominids that live in small warring tribes; that's our evolutionary history, and it's recent — that's our recent evolutionary history — and we haven't been able to change that much. If you look at society now, it's small warring tribes at every level; we've just made it fractal. So for those hominids, insofar as they have any purpose, it's to make more copies of yourself. That all makes sense to me, but can we hope for something more? I mean, we're able to think of more, right — we're able to at least articulate the notion that maybe it'd be nice to be more than what you just described, more than just trying to replicate ourselves maximally. If you're in a small tribe of hominids, to make the tribe successful you want to help other people in the tribe, so we have a strong urge to help other people in our tribe. You may have noticed this — I noticed it very strongly in academic departments: if you're in a big department, in general you'd rather your department got resources than some other department. At the University of Toronto there are lots and lots of professors of French, and I think it'd be better if there were fewer professors of French and more professors of computer science. But within computer science there are professors in different areas, and I think it'd be good if there were more professors in machine learning. And within my group, when I was at the University of Toronto, there was very strong allegiance to the group. That's just our evolutionary inheritance: I think we are very strongly altruistic towards members of our group; we're willing to sacrifice things to help members of our group. And you know that when you write a very long letter of recommendation for a student you really like, you're sacrificing your time to help someone in your group, and you're much more willing to do that for someone in your group than for someone who may be equally good but wasn't in your group.
Yeah, it's a natural thing, I think, and it's because you've already invested so much time in them — you know them so well — I think it's pretty natural for that to happen. So it's not just that we want to make more copies of ourselves; we want to make our group successful. Now, if we could make that generalize to bigger and bigger groups, then we could probably get better societies. Maybe I have a question from a slightly different angle, Geoff — though I agree it would be great if we could generalize it to bigger and bigger groups and see what we could get to. Imagine, in some sense, the counter-stance to developing new technology for good: I could imagine a counter-stance that says, hey, my life today is pretty good, I have not too much to complain about; I could maybe imagine a version where more people, maybe all people, could have a similar standard of life, with some adjustments here and there, and so forth — why don't we just stagnate and keep it this way? We're happy, we stay happy. But then, to me, that falls short, because it seems — at least personally — that when there is no progress, when everything stays the same, that's really uninteresting, not exciting, and it almost defeats the purpose of us even being here. It seems that progress, at least for me, is this kind of natural notion, something we need for it to even make sense to be here. I don't think I agree with that. I think I can imagine a society in which it's not going anywhere in your sense — you're not getting more of you — but what you're doing makes you happy, and it's sustainable. When I was at University College London and I was director of the Gatsby Unit, we had four faculty members and a dozen or more graduate students and postdocs, and a memo came around from the university asking how we were planning to expand the department, and I replied that actually I wasn't planning to expand it — I thought it was a very nice size as it was. That wasn't an acceptable answer, but that was how I felt about it. Yeah, so maybe I meant it in a slightly different way; I meant it more in the sense of: imagine a hundred million years from now, people doing essentially the exact same thing we're doing today — life has not really evolved; you could land on Earth a hundred million years from now and you couldn't tell the difference. Wouldn't that feel somehow unsatisfactory, that nothing has evolved or changed in a hundred million years? I'm not so sure it would be. I mean, if we got to a society in which most people were happy and having fulfilled lives and lots of nice social interactions, and it just stayed like that forever, I don't see the problem with that. Well, then maybe we should try to achieve it. Yes. I think one thing that's wrong with society at present is economists obsessed with growth — the idea that if you're not growing, something's wrong — and unless you're going to leave the planet, that's actually unsustainable. We talked a bit earlier about how there's both a technological line of work ahead, to make sure we can align the AI with humanity, and regulation and government work ahead. Now, on the technological side of things, do you have any recommendations for today's AI researchers, or for people in other fields who might want to contribute? What do you see as big opportunities? So I guess: playing with the most advanced chatbots and trying to understand more about how they actually do what they do, because we still don't really understand how they do it.
I mean, we know how they work in terms of Transformers, but I think we're not really very clear about exactly how they manage to do these reasoning tasks. And looking at how you can control them as they develop — I've said that before, but I think that would be a very sensible thing to do. I should emphasize that I'm not really an expert on these alignment problems; people have been thinking about them much longer than me — people who don't do deep learning, like Stuart Russell, and lots of people who do do deep learning, like Roger Grosse — they've been thinking about these things for much longer than me, and they're kind of the experts. I've just come to this very late, because I suddenly changed my mind about how soon these superintelligences may be coming, and I see my role — I'm getting a bit old for doing technical work — I see my role as using my reputation to sound the alarm. That's how I was quoted somewhere; I just said I feel a little bit old to do the technical work. And I think back to our conversation less than a year ago, and I'm like, wow, that conversation was far more inspiring than pretty much any other conversation I've had in a very long time — inspiring at the technical level of the things we talked about — so I think you have a very high bar for what it means to still be able to do something interesting. Well, it's kind of you to say so, but when I tried to scale up the forward-forward algorithm, I couldn't get it to compete with backprop, and that was one of the things that made me think that maybe back-propagation is just much better than what the brain's got. Now, a final question for this conversation, Geoff. Top of my mind ever since your announcement has been: you're leaving Google, you have all this extra time now on your hands to freely do whatever you want to do. What does that mean? Does that mean you'll be working solo? Are you finding a new affiliation? Are you starting an organization of your own? What's happening in the near future? My main goal is to watch all those good movies on Netflix I never had a chance to watch when I was working too hard for 50 years. At Google they talk a lot about work-life balance; I never went to any of those seminars — I didn't have time. My view as a researcher — I got this from Allen Newell at Carnegie Mellon, who used to tell the graduate students, "If you're not working 80 hours a week, you're not a serious scientist" — and I'm afraid that's been my view. It may not have done me much good in life, but it's time to stop doing that, so that's my main objective. I don't think I'll be able to stop doing research, because it's so much fun, even when you're not as good at it as you used to be. So I will probably keep working on variations on the forward-forward algorithm, and on trying to do a stochastic version of back-propagation by taking random steps and then multiplying the step length by how much it improved things. That seems to scale much worse, but if you have lots of little local objective functions, then maybe you can have lots of little local modules that are all optimizing their local objective functions by taking random steps and scaling the step by how much it improved their local objective function — and maybe that's how the brain can learn big systems without being able to back-propagate.
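A minimal sketch of the perturbation rule described here — take a random step, then scale it by how much it improved a module's own local objective. The quadratic local objectives, the constants, and the module structure are illustrative assumptions; this is not Hinton's actual algorithm, just the rule as stated.

```python
# Each module takes a random step in weight space, measures how much its own
# local objective improved, and scales the step by that improvement.
import numpy as np

rng = np.random.default_rng(0)
n_modules, dim, noise, gain = 5, 10, 0.05, 2.0
targets = rng.normal(size=(n_modules, dim))          # each module's own optimum
weights = np.zeros((n_modules, dim))

def local_loss(module, w):
    """Toy local objective for one module: squared distance to its own target."""
    return float(np.sum((w - targets[module]) ** 2))

for step in range(2000):
    for m in range(n_modules):                        # modules learn independently
        before = local_loss(m, weights[m])
        delta = noise * rng.normal(size=dim)          # random step
        after = local_loss(m, weights[m] + delta)
        improvement = before - after                  # how much the step helped
        # Scale the random step by the improvement it produced; steps that made
        # things worse get a negative scale, i.e. are partially undone.
        weights[m] += gain * improvement * delta

print("mean local loss:", np.mean([local_loss(m, weights[m]) for m in range(n_modules)]))
```

In expectation this update points down the local gradient without ever computing it, which is why it is a candidate for learning without back-propagation, and also why it scales poorly: the signal per random step shrinks as the number of weights per module grows.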
So I'll keep thinking about things like that, but I will also actually watch movies, and I'll try to spend a lot of time with my kids — who are no longer kids. It's nice to hear that your passion for understanding how the brain might work is still so big. In terms of the alignment work and the AI-risk work, of course you've done a lot with your announcement — I mean, how many interview requests did you get after the announcement? So, the day after the New York Times piece came out on the Monday — on the Tuesday — I was getting an interview request every two minutes. That was stressful to begin with; I felt I ought to reply to them, and that just wasn't feasible. Then someone who knows much more about the media than I do told me, no, they don't actually expect you to reply, and that made life easier. Yeah, every two minutes — that's wild. Do you expect, going forward, to spend time evangelizing and making people aware, or was this a one-time thing and you feel like, okay, now people know, it's clear, the job is done? I don't know. I wasn't expecting this big a reaction, and I haven't really had time to think through what happens next. I suspect I'll keep encouraging people to work on the alignment problem — to work on thinking about how to keep this stuff under control — and I'll probably keep giving the occasional lecture about that, but I don't intend to make that a full-time job. What I enjoy much more is fiddling about with programs that implement interesting algorithms and seeing how to make them work; that's what I like doing, that's what I'm good at, and I'll go back to doing that if — when — I get bored with Netflix. And isn't there an apparent contradiction there, between trying to get across how important it is that we work on alignment, while still having a personal, maybe stronger, attraction to understanding how the brain might work? Understanding how the brain might work is not the thing that's going to get us into trouble; it's building things that are better than the brain that's going to get us into trouble. Understanding how the brain might work might help — it might help more with how you deal with this horrible divisiveness, where you get indignant camps who don't believe what the others are saying. I grew up in the 50s and 60s, when there was a general belief that with better education or more understanding everything would get better. That belief has disappeared, but I still believe it to be the case: if we understand better — if we understand how people work better — we should be able to make society better. I love that as a concluding sentence for this conversation. Geoff, thank you so much. Well, thank you.
Info
Channel: The Robot Brains Podcast
Views: 105,150
Keywords: The Robot Brains Podcast, Podcast, AI, Robots, Robotics, Artificial Intelligence
Id: rLG68k2blOc
Length: 62min 0sec (3720 seconds)
Published: Wed May 10 2023