The Problem with Human Specialness in the Age of AI | Scott Aaronson | TEDxPaloAlto

Captions
Okay, thanks a lot. So after a career spent mostly in quantum computing, I'm now moonlighting at OpenAI. They asked me to think about how theoretical computer science could be used to help prevent AI from destroying the world. I haven't figured it out yet; I do still have another six months. But I find myself thinking more and more, not just about how we prevent this from going wrong, but also about what happens if it goes right. What if it goes exactly like it's supposed to, and AI can produce any intellectual product as well as we can, or better? What are we for in the resulting world?

I don't have to belabor, for this audience, what has happened in AI over the past few years. To some approximation, we now have the science-fiction machine from Star Trek: you talk to it in English, you ask it what to do, and some percentage of the time it does it. And this is despite how unlikely it seemed to almost all of us five years ago, so unlikely that many, many people are still in denial about it.

But I think the even more surprising thing than what has happened is how it has happened. What maybe not everyone appreciates is that the core ideas powering the current AI revolution have been known for generations: neural networks, backpropagation, gradient descent, prediction via regression. I learned all this stuff when I was an undergrad in computer science in the '90s. But we also learned then that neural nets were just not that impressive; they didn't work that well. And all of the wisest people said: if you just take something that doesn't work and scale it up by a factor of a million, it's still not going to work. The true key to AI is going to be to deeply, deeply understand the nature of intelligence, and once we've done that, we'll see why a human-level AI could have fit on a floppy disk.

There were just a few nutcases like Ray Kurzweil who would go around showing graphs that said: look, the amount of compute you can do per second per dollar is on an exponential trajectory (that's one form of Moore's law), and if you just extrapolate forward, then by the 2020s or so there should be about as much compute available as some crude estimate of what the human neocortex is doing, and that is when we should expect magic to happen: computers will suddenly understand language and be intelligent. Almost all of us said that this sounded like the stupidest thesis we had ever heard, that there was no theoretical principle for believing that the sheer amount of compute alone is sufficient.
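As a rough illustration of the kind of extrapolation just described, here is a toy calculation. Every constant in it (the starting compute per dollar, the doubling time, the budget, the neocortex estimate) is an assumption of mine for illustration, not a figure from the talk.

```python
# Toy Kurzweil-style extrapolation: compute per second per dollar grows
# exponentially (one form of Moore's law); see roughly when an affordable
# machine crosses a crude estimate of the neocortex's processing rate.
# All constants are illustrative assumptions, not figures from the talk.

OPS_PER_DOLLAR_2000 = 1e9      # assumed: ~1 giga-op/s per dollar around 2000
DOUBLING_TIME_YEARS = 2.0      # assumed: rough Moore's-law doubling time
BRAIN_OPS_PER_SEC = 1e16       # assumed: one crude neocortex estimate
BUDGET_DOLLARS = 1_000         # assumed: a consumer-scale hardware budget

def ops_per_dollar(year: float) -> float:
    """Exponentially extrapolated compute per second per dollar."""
    return OPS_PER_DOLLAR_2000 * 2 ** ((year - 2000) / DOUBLING_TIME_YEARS)

for year in range(2000, 2041, 5):
    affordable = ops_per_dollar(year) * BUDGET_DOLLARS
    flag = "  <-- crosses the brain estimate" if affordable >= BRAIN_OPS_PER_SEC else ""
    print(f"{year}: ~{affordable:.1e} ops/s for ${BUDGET_DOLLARS}{flag}")
```

Under these made-up constants the crossover lands around the late 2020s; the point is the shape of the argument, not the specific numbers.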
Now, I'm a firm believer that one of the key dicta of science is that you let the world tell you when you're wrong, and you don't make up some elaborate justification for why you actually weren't wrong. I think that's the situation here. And Moore's law hasn't ended; we're still getting more and more compute. So you might wonder where this is going. What will GPT-8 be able to do? Will I just be able to ask it to solve any of the greatest unsolved problems in math or science, like proving the Riemann hypothesis, and it'll say "Sure, I can help you with that" and spit out a proof? By the way, I asked GPT to cooperate with me in illustrating that, which it happily did, but then it hastened to add that it was only kidding and that the Riemann hypothesis remains an open problem.

So that's one possibility, but what about beyond that? What if, as some people predict, AI becomes to us as we are to chimpanzees? Well, how well do we treat chimpanzees? So then you're led to the Terminator scenario, of course. It's been amazing to watch: I've known the little subculture of nerds on the internet who have worried about AI doom for twenty years, and just within the last year, because of ChatGPT, this became something that is discussed in the White House press briefing and in Congressional hearings. Now, an AI wouldn't necessarily have to hate us or want to kill us; we might just be in the way, or irrelevant to whatever alien goal it has. But I think that's not the only possibility on the table here.

My colleague Boaz Barak, who's now also on sabbatical at OpenAI, and I tried a while ago to make a decision tree of the major possibilities being discussed. The progress in AI that we've seen over the last few years could fizzle out: there might be diminishing returns to more and more scale, or we might find it too expensive to get the necessary compute, or we might run out of training data. We're already sort of running out; there's still all of YouTube and TikTok and so forth that you could feed into the maw, but that might just make the AI dumber rather than smarter. But if it doesn't fizzle out, and it just continues the way it has over the last few years, then you have to imagine it's just a matter of, what, ten years, twenty years, until it can do just about everything as well as we can. And what then? Does civilization recognizably continue with humans more or less in charge? And whether it does or doesn't, is that good or bad from our point of view? Or maybe it depends who you ask.

Now, a lot of people don't want to have this discussion; they still don't want to speculate about these things, including many distinguished colleagues of mine. A lot of them are immersed in what I like to call the religion of Just-ism. They'll say: look, ChatGPT, however impressive it might look, actually isn't, because we know that it is just a stochastic parrot, it is just a next-token predictor, it is just a giant function approximator, it is just a huge autocomplete. And I always want to say to these people: okay, and what are you? Aren't you just a bundle of neurons and synapses obeying the laws of physics? And what about your mom? If you're going to use these reductionist or deflationary ways of talking, then I think you at least have to be symmetrical about it.
A closely related and well-known tendency in AI is the endlessly moving goalposts. I still remember when Deep Blue beat Kasparov at chess, and very smart people said: okay, but this is not impressive, because chess is really just a search problem; wake me up when computers can beat the human grandmasters at Go, because that's an infinitely deeper and richer game. Then we had AlphaGo, and people said: fine, but it's just a game, everyone expected this would happen; wake me up when large language models can win a gold medal at the International Math Olympiad. I actually have a bet with a colleague that that will happen by 2026; there was some progress on it just this past month. Now, I might be wrong; it might happen by 2036 instead. But it seems clear that this is just a question of years at this point. And after AI can get gold medals in math competitions, which goalpost should we have next?

We might even be tempted to formulate a general thesis here, which I'll call the "game over" thesis: given any task with a reasonably objective metric of success or failure (games, competitions) on which an AI can be given suitably many examples of success and failure, it's only a matter of years before AI, indeed AI on our current paradigm, will match or beat the best human performance. Now, that might not exhaust everything we care about; there might be things that aren't quantifiable in this way. But if even this much is true, I think it already forces us to some uncomfortable places in thinking about what we tell our kids about what kinds of jobs will be available for them, and about what our role in the world is.

And it's clear that what ChatGPT and DALL-E and so forth can already do has created for us, for real, the Blade Runner scenario: we are confronted with the problem of distinguishing human outputs from AI ones. One of the main safety projects I've worked on during my time at OpenAI has been a scheme for watermarking the outputs of GPT and other large language models. What this means is replacing the randomness in the model with pseudorandomness, in a way that inserts a secret statistical signal into the choice of words or tokens, by which you can later detect that yes, this was generated by ChatGPT; this did not come from a human. Now, I should caution you that this has not been deployed yet; OpenAI, along with Google and Anthropic, has been moving slowly and deliberately toward deployment of text watermarking. And even if and when it is deployed, someone who is sufficiently determined will be able to evade it, just as with schemes for preventing piracy of music or software. So it's not a perfect solution, but I hope that this and other measures will eventually make it less convenient for students to use ChatGPT to cheat on their homework, maybe one of the most common misuses in the world right now, or for people to use it for spam, propaganda, impersonation, and all sorts of other bad things like that.
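To make that mechanism concrete, here is a deliberately simplified toy sketch of the general idea just described: keyed pseudorandom sampling plus a detection statistic. It is my own illustration, not the scheme actually built at OpenAI, and every name in it (the key, the `prf`, the stand-in language model) is hypothetical.

```python
# Toy watermark: replace the sampling randomness with pseudorandomness keyed by a
# secret, so the chosen tokens carry a statistical signal that the key holder can
# later detect. A simplified illustration, not any deployed scheme.
import hashlib
import math
import random

SECRET_KEY = b"demo-key"                     # hypothetical secret held by the provider
VOCAB = [f"tok{i}" for i in range(50)]       # tiny stand-in vocabulary

def prf(key: bytes, context: tuple, token: str) -> float:
    """Pseudorandom number in (0, 1) derived from the key, recent context, and token."""
    digest = hashlib.sha256(key + repr((context, token)).encode()).digest()
    return (int.from_bytes(digest[:8], "big") + 1) / (2 ** 64 + 2)

def toy_lm(context: tuple) -> dict:
    """Stand-in for a language model: some distribution over VOCAB given the context."""
    rng = random.Random(hash(context) & 0xFFFFFFFF)
    weights = [rng.random() for _ in VOCAB]
    total = sum(weights)
    return {tok: w / total for tok, w in zip(VOCAB, weights)}

def sample_watermarked(context: tuple) -> str:
    """Pick the token maximizing r ** (1/p): distributed like p, but keyed and detectable."""
    probs = toy_lm(context)
    return max(probs, key=lambda tok: prf(SECRET_KEY, context, tok) ** (1.0 / probs[tok]))

def score(tokens: list, window: int = 4) -> float:
    """Average -ln(1 - r) over chosen tokens: ~1.0 for ordinary text, higher if watermarked."""
    total = 0.0
    for i, tok in enumerate(tokens):
        context = tuple(tokens[max(0, i - window):i])
        total += -math.log(1.0 - prf(SECRET_KEY, context, tok))
    return total / max(1, len(tokens))

# Generate 200 tokens with and without the watermark, then compare detection scores.
wm, plain = [], []
for _ in range(200):
    wm.append(sample_watermarked(tuple(wm[-4:])))
    dist = toy_lm(tuple(plain[-4:]))
    plain.append(random.choices(list(dist), weights=list(dist.values()))[0])
print("watermarked:", round(score(wm), 2), " plain:", round(score(plain), 2))
```

The trick is that choosing the token that maximizes r ** (1/p) still samples from the model's distribution when the r's are uniform, but it correlates the chosen tokens with the keyed pseudorandom values, which is what the detector looks for and what determined paraphrasing can wash out.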
But when I talked to my colleagues about watermarking, I was surprised that often they had an objection to it that was not technical at all. It wasn't about how well it can work; it was about whether we should even still be giving homework at all. If ChatGPT can write the term papers just as well as the students can, and that's still going to be true after the students graduate, then what's the point? Why are we still teaching these skills? I think about this even in terms of my eleven-year-old daughter, for example. She loves writing short stories. ChatGPT can also write short stories on the same themes, say an eleven-year-old girl who gets recruited to a magical boarding school, which is totally not Hogwarts and has nothing to do with Hogwarts, or whatever other theme like that. Now you could ask: if you look at today's cohort of eleven-year-olds, are they ever going to be better writers than GPT? It's a race; which one is going to improve faster?

So you could imagine that even what we think of as the greatest products of artistic genius, say the music of the Beatles, could in principle be produced by some AI model. But when you think about that enough, you start wondering what we would even mean by an AI that created music as good as the Beatles. And that forces you to ask what made the Beatles so good in the first place. I'm not a music expert, but roughly we could decompose it into two components: first, new ideas about what direction music ought to go in, and second, technical execution on those ideas. Now suppose you had an AI where you just fed it the Beatles' whole back catalog, and it generated more songs that the Beatles plausibly could have written but didn't, songs that sounded kind of like "Hey Jude" or "Yesterday." I think most people, if they saw that, would just move the goalposts again. They'd say: no, that doesn't really impress us; this is just extrapolation. As Schopenhauer said, talent hits a target that no one else can hit, but genius hits a target that no one else can see. What we want to see is the AI deciding for itself to take music in some new direction.

Okay, but now imagine we had that as well. Imagine an AI where every time you hit the refresh button in your browser window, you got a brand-new, radically different, Beatles-like direction that music could have been taken in in the 1960s, and each time you run it you just get another sample from that probability distribution. Even then, there's something kind of weird about it. You could say the Beatles were there at the right place and time to pick a particular direction, and not only that, but to drag all the rest of us along with them, so that our whole objective function changed: we can't judge music anymore except by a Beatles-influenced standard, just as we can't judge plays except by a Shakespeare-influenced standard. And so now there's what I like to call the AI abundance paradox.
As soon as you have an AI that can produce a new artwork, however good that artwork is, the AI can produce a thousand similar artworks just by being run more and more often; it can always rewind and try again. That radically devalues the worth of that kind of production, just as the price of gold would crash if someone towed a ten-mile-long golden asteroid to Earth: it wouldn't actually be worth what you thought it was. And so you could say, well, at least humans will always have this one advantage, the advantage of being frail. There's only one of us; you can't back us up and run us over and over on the same input. When we make a decision, we really mean that decision; we're sticking with it, and that's the only one you're going to get out of us. Which is a weird place to stake our claim of human specialness, but it might be the place we're forced to.

But as soon as I've said that, I have to confront a sort of exotic objection: is it really true that humans cannot be rewound, cannot be copied, cannot be saved as backups, and so forth? It is possible, some people think so, that our own cognition is happening in some sort of digital computational layer in the neurons and synapses, and that once brain-scanning technology gets good enough, the next iteration of Neuralink or whatever, we could all just back ourselves up to the cloud, rewind ourselves, restore from backup. And that leads to all these strange questions, like: would you agree to have yourself faxed to Mars, just sent as information and reconstituted there, with the original meat version of you painlessly euthanized, don't worry about it? Or would you back up your brain before you go on a dangerous trip?

I don't know whether these things will ultimately be possible. It's a question, ultimately, about the biology and physics of the human brain: is the digital layer the relevant one, or is our identity bound up with the unclonable, not fully knowable, chaotic details of the molecules, down in the individual sodium-ion channels of the neurons? If you had to go all the way down to the molecular level, then the famous no-cloning theorem of quantum mechanics would say you can't make a perfect copy: if you try, you'll have to make measurements that fail to tell you what you want, and that even destroy the original you had.
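For what it's worth, the standard linearity argument behind the no-cloning theorem being invoked here fits in a few lines; this is a textbook sketch, nothing specific to the talk.

```latex
% Textbook sketch of the no-cloning theorem (not specific to the talk).
% Suppose a single unitary U could copy an arbitrary unknown state onto a blank register:
%   U(|psi>|0>) = |psi>|psi>   and   U(|phi>|0>) = |phi>|phi>.
% Since U preserves inner products,
\[
  \langle\psi|\phi\rangle
  \;=\; \bigl(\langle\psi|\langle 0|\bigr)\, U^{\dagger} U \,\bigl(|\phi\rangle|0\rangle\bigr)
  \;=\; \bigl(\langle\psi|\langle\psi|\bigr)\bigl(|\phi\rangle|\phi\rangle\bigr)
  \;=\; \langle\psi|\phi\rangle^{2},
\]
% so the overlap x satisfies x = x^2, forcing x = 0 or x = 1: a universal copier could only
% work for states that are identical or perfectly distinguishable, never for arbitrary
% unknown quantum states.
```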
So I don't know whether our identity is bound up in these unclonable physical degrees of freedom. But even without knowing whether that's true, it does seem like a difference between us and any existing AI that we're buffeted around by chaos, such that no external agent, as far as we know, can have all the information relevant to predicting our behavior. So then, to circle all the way back to AI safety, this leads to a very exotic AI safety proposal: why don't we teach our AIs, indoctrinate them in, a religion that venerates the universe's unclonable, ephemeral, analog loci of creativity and intelligence, wherever they might be found? A religion that says: protect them from destruction, defer to their preferences; those are the ones that matter, because they're the ones that only get the one chance. Now, I don't know whether this is a good idea. In a different universe, maybe I would have fallen in love with a different idea, but here I kind of fell in love with this one, and unfortunately you don't get to back me up and see a different one. So, all right, thanks.
Info
Channel: TEDx Talks
Views: 56,868
Keywords: AI, Data Science, English, Ethics, TEDxTalks, Technology, [TEDxEID:55171]
Id: XgCHZ1G93iA
Length: 19min 34sec (1174 seconds)
Published: Fri Mar 08 2024