The Paradox Of Predicting AI Actions

Captions
Interviewer: I'm here with computer scientist Kentaro Toyama, author of the book Geek Heresy: Rescuing Social Change from the Cult of Technology. Kentaro was an AI researcher at Microsoft and is currently researching the impact of digital technology on society. The reason I wanted to talk to you today, Kentaro, is that you have this fascinating take on why AI experts can't understand or predict AI behaviors, and I want you to talk about that. But first, can you set things up really quickly and explain what the AI interpretability problem, also called explainable AI, is?

Toyama: Basically, the current AI systems that are in the press so much are such large, complicated computer systems that when you say "explainable AI," the challenge is that these days even the people who design AI systems don't fully understand what those systems are doing. They can't predict what the systems will do 100 percent of the time, and they can't even explain why an AI system produced a certain result. AI is now getting to the point where, at least in terms of quantitative complexity, it's beginning to rival the human brain. We are used, as human beings, to understanding the machines we devise. We're now in an era in which many of the machines we'll be working with are no longer predictable in the way we usually assume.

Interviewer: Okay. Your take on AI uninterpretability is that it's a feature and not a bug: we don't consider things intelligent unless we can't understand how they work. Can you explain that?

Toyama: We think of things that are completely predictable as not being very intelligent, right? If you ask me the same question ten times and my response is exactly the same mechanical expression, you will say, "Well, that's robotic," robotic in the old sense, where there wasn't much AI behind it. As human beings, we ascribe intelligence partly to things that are unpredictable. In addition to being able to answer complicated questions, we want them to be creative. So it might be that interpretability is actually not something we want from a system that we want to believe is intelligent. And the converse of that is that a true AI system, one we think of as human-like, won't be interpretable, by definition.

Interviewer: If predictable means unintelligent to us, then what's going to happen when humans don't understand AI but AI has interpreted us? If machines can predict all our decisions, are we in trouble?

Toyama: I actually think there's a limit to how much anything can predict our decisions, in the same way that we still have trouble predicting each other. I think that's due to the sheer complexity of the human brain, as well as the processes that cause us to think and generate output. Anything that's going to seem to have human-like intelligence is going to be similar in that way; the unpredictability is inherent. Now, it is true that AI will probably help predict more and more about us, but I think there's some natural limit to how predictable we are. To the extent that we're talking about human beings and AIs interacting, we'll find each other equally unpredictable, except in those cases where the AI systems are intentionally designed to be a little more predictable.

Interviewer: So what are the variables that are instrumental in making us predictable to AI?

Toyama: It goes back to complexity. For example, in physics there's a very well-known problem called the three-body problem: if you just took an empty space and put three objects in it, the way they would move according to gravity becomes unpredictable very quickly, period. And that's just three entities; our brain is a lot more complicated than that. There's a limit to how much the universe is predictable, in the same way that we can't predict the weather much more than a few days out. That's inherent in physics; it's really a problem of limits on predicting the future, and that is going to be true for the human mind or for a human-like AI.
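To make the three-body point concrete, here is a minimal Python sketch of the sensitivity Toyama describes: two simulations of three gravitating bodies whose starting positions differ by one part in a billion. The masses, starting coordinates, and normalized units are illustrative assumptions, not values from the interview.

```python
# A minimal sketch of the sensitivity Toyama describes: two three-body
# simulations whose starting positions differ by one part in a billion.
# Masses, coordinates, and the normalized units are illustrative
# assumptions, not values taken from the interview.
import numpy as np

G = 1.0      # gravitational constant in normalized units
EPS = 1e-2   # softening term to avoid numerical blow-up at close encounters

def accelerations(pos, masses):
    """Pairwise Newtonian gravity for n bodies; pos has shape (n, 2)."""
    acc = np.zeros_like(pos)
    for i in range(len(masses)):
        for j in range(len(masses)):
            if i != j:
                r = pos[j] - pos[i]
                acc[i] += G * masses[j] * r / (r @ r + EPS**2) ** 1.5
    return acc

def simulate(pos, vel, masses, dt=1e-3, steps=20_000):
    """Leapfrog (kick-drift-kick) integration; returns final positions."""
    pos, vel = pos.copy(), vel.copy()
    acc = accelerations(pos, masses)
    for _ in range(steps):
        vel += 0.5 * dt * acc
        pos += dt * vel
        acc = accelerations(pos, masses)
        vel += 0.5 * dt * acc
    return pos

masses = np.array([1.0, 1.0, 1.0])
pos = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 0.8]])
vel = np.array([[0.0, 0.3], [0.0, -0.3], [0.3, 0.0]])

nudged = pos.copy()
nudged[0, 0] += 1e-9  # a one-part-in-a-billion perturbation

a = simulate(pos, vel, masses)
b = simulate(nudged, vel, masses)
print("separation between the two runs:", np.linalg.norm(a - b))
```

The gap between the two runs typically grows by many orders of magnitude over the simulated interval, which is the practical meaning of "unpredictable very quickly."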
Interviewer: Some researchers are saying the only thing that's going to be able to interpret AI is another AI. Would you disagree there?

Toyama: Yeah, even another AI might not be able to interpret it. I think a lot of this rides on what we mean by "interpret." I do think we will end up developing AI systems that are better about explaining their thought process, in the same way that human beings can often explain their thought process, and some people are better at that than others. But there will still be a limit.

Interviewer: How do you feel about the AI existential threat? Techno-optimist to doomer, where do you fall on the spectrum?

Toyama: I'm definitely with the doomers, I guess, in the sense that, going back to unpredictability, unpredictability is scary once it meets human-level intelligence. We're talking about a system that could conceivably keep improving itself so that it becomes smarter and smarter and smarter, which is something that, at least so far, human beings cannot really do in a meaningful way. No matter how much I understand about my brain, I can't actually change my neurons in a way that makes me smarter. But a computer could do that; an AI could do that. So the point where a computer becomes more intelligent than us, and arguably much more difficult for us to understand in how it acts, is of great concern. And then, putting that aside, I think the real challenge is that the AI systems currently being developed are built mostly by companies whose main goal is profit. As long as that's the case, and we've already seen all kinds of instances where the profit motive does bad things to everybody else, we're in for some potential misfortune as some AI goes rogue, even if it wasn't intended to do anything negative.

Interviewer: Do you see any solution on the horizon?

Toyama: There are lots of people thinking about policy solutions. I personally think there should be a dramatic slowdown, and regulation on the order of the way we regulate nuclear weapons. It's much harder to regulate AI, because it is true that anybody with a powerful enough computer can replicate a lot of what is currently known. But it's also true that, at least for the current systems, significant resources are required for development, and so regulating those entities, I think, is urgent.

Interviewer: Is the required compute power for these AI systems scaling down?

Toyama: The things that seem human-like in terms of their intelligence are necessarily going to require a certain amount of compute power, and that compute power, as we've already seen in the history of computing, becomes cheaper and cheaper every year. Just a few months ago, soon after the release of ChatGPT, I saw that somebody had taken the part of ChatGPT that most of us see, the part where you can interact and talk with it, and managed to get it down to the point where a small-scale version of it could run on a regular computer like the one you and I have.

Interviewer: Stanford's Alpaca.

Toyama: Yeah, there are definitely systems like that out there now.
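The small-scale systems mentioned here can be approximated with off-the-shelf tooling. Below is a hedged sketch using the Hugging Face transformers library; the distilgpt2 checkpoint is a placeholder assumption standing in for whatever small, Alpaca-style instruction-tuned model actually fits on your machine, and the prompt is illustrative.

```python
# A sketch of running a small language model on an ordinary computer,
# using the Hugging Face transformers library. distilgpt2 is a placeholder
# assumption: it is small enough to run on a laptop CPU, though an
# Alpaca-style instruction-tuned checkpoint would be swapped in here.
from transformers import pipeline

generator = pipeline("text-generation", model="distilgpt2")

result = generator(
    "The three-body problem is hard to predict because",
    max_new_tokens=60,   # keep the completion short
    do_sample=True,      # sampled, so output varies run to run
)
print(result[0]["generated_text"])
```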
Interviewer: So AlphaGo, the AI Go champion, was recently exploited through a newly discovered blind spot; it apparently didn't know what a group was. Since we're facing a possible existential threat, do you see anything in these systems that might be exploited, now or at some point in the future, like an AI Achilles' heel?

Toyama: I mean, as amazing as AI technology is today, there are all kinds of things it still cannot do; actual human intelligence is still not quite there, and hopefully there will be all kinds of Achilles' heels. One of them is that for most computers there's somewhere where there's electricity and a plug point, and you can literally unplug the whole thing.

Interviewer: Somebody asked it how it might get around us trying to shut it down, and it said something about radio waves, I guess.

Toyama: Yeah, you could have an electromagnetic pulse or something. I can also foresee situations where an intelligent AI system that's being told to do something by whoever owns it ends up creating little miniature copies of itself on various other computers, in the same way that we already have viruses.

Interviewer: It used to be widely assumed that creativity was the final frontier for AI as it neared human-level intelligence, and now we've largely been disabused of that assumption. What do you think is going to be the thing that gets us to AGI, artificial general intelligence?

Toyama: At least the current generation of systems like ChatGPT is actually not good at logical deduction and logical thinking. This is something I've tried myself: you can ask ChatGPT to multiply two four-digit numbers, and it will get it wrong more often than it will get it right. Now, this is an incredibly, quote-unquote, intelligent system, but something as simple as basic arithmetic it can't reliably do. That's because currently there's no special processing for logic; everything it does that seems like logical thinking is a kind of byproduct of the way it works, which is effectively creating human-like sentences, some of which are logical. But as a lot of people have found, when you ask it very direct logical questions, it often gets those wrong. There's a debate currently among people who work on AI about whether that logic needs to be explicitly built into an AI system, or whether the current way that deep learning and neural network models work can eventually get that logic with enough data and training. I tend to side with the people who think you need to build it in specially. Regardless of that, the reality is that current AI systems haven't yet melded those two. It's kind of funny, because we have computers that can do great arithmetic; we've been able to do that for decades. Eventually I think we'll find ways to meld these things together, and until that happens, truly human intelligence won't be achieved.
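The four-digit multiplication claim is straightforward to spot-check. Here is a minimal harness, with the caveat that ask_model is a hypothetical stand-in, not a real API, for whatever chat model is being tested; Python's exact integer arithmetic supplies the ground truth.

```python
# A minimal spot-check of the claim above: ask a model to multiply random
# four-digit numbers and score it against exact integer arithmetic.
# `ask_model` is a hypothetical stand-in, not a real API; wire it to
# whatever chat model you want to test.
import random
import re

def ask_model(prompt: str) -> str:
    """Hypothetical: send `prompt` to a language model, return its reply."""
    raise NotImplementedError("connect this to a model of your choice")

def multiplication_accuracy(trials: int = 20) -> float:
    correct = 0
    for _ in range(trials):
        a = random.randint(1000, 9999)
        b = random.randint(1000, 9999)
        reply = ask_model(f"What is {a} * {b}? Answer with only the number.")
        digits = re.sub(r"\D", "", reply)  # strip commas, spaces, prose
        if digits == str(a * b):           # exact arithmetic is trivial here
            correct += 1
    return correct / trials

# Usage (once ask_model is wired up):
# print(multiplication_accuracy())
```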
Interviewer: Do you think some aspect of it not being logical is that it's not embodied or multimodal, that it doesn't have a connection to the world to attach these concepts to?

Toyama: I do think that's another challenge, but there are AI systems that can effectively understand imagery in a way that is increasingly human-like. I used to do computer vision research, which is the study of how to do AI for images, and I used to say that the holy grail in this field was: I give a computer an arbitrary photograph, and the computer tells me in words what it contains. Well, today we have systems that do exactly that, and they're amazingly good at it. It's probably months away, if not days, before there are systems that can easily meld that kind of thinking with what ChatGPT is already doing.

Interviewer: And where do you land on whether or not AI is, or could become, conscious?

Toyama: I personally don't think computers are ever going to be conscious. There are philosophers who think one day there will be some kind of proof that they're conscious, but I think, at least for now, current science can't even provide proof that we as individual human beings are conscious. So we won't know; we just can't know. And I suspect that computers will never actually be conscious.

Interviewer: Since consciousness is so fundamental yet still such a mystery, why do you think we have this pervasive tendency to default to "we don't know, therefore it's unlikely"?

Toyama: That's a good question. For me, a basic test of consciousness is: does the thing experience pain? People who study consciousness have not yet come to any kind of consensus on how we feel pain; it's one of the greatest mysteries of the world, of the universe. To me that's the essential aspect of consciousness, because if you don't feel pain or pleasure, then you can say all kinds of things and produce all kinds of output, but I don't have any moral worries about turning you off, which would otherwise be the equivalent of killing somebody. There are various theories, and depending on which theory you subscribe to, I think it's conceivable that an AI system could eventually come to feel pain and register pain. But I'm skeptical; I'm personally skeptical.

Interviewer: Do you think there could be some version of AI suffering that has nothing to do with pain receptors in a nervous system?

Toyama: Yeah, it doesn't have to be about the receptors; it doesn't have to be about the neurons. It's just the registering of actual pain and pleasure.

Interviewer: What do you think is the most important question in AI research right now?

Toyama: Personally, I do think the most practical challenge facing us as a human society right now with respect to AI is really how to regulate it. Who should benefit from everything that AI does? Shouldn't we be much more careful about indicating when something is produced by a computer versus by a human? Who gets the credit if you invent an AI system that does something you couldn't do yourself, but the AI system does? Should you be credited with that, or should the machine? What does it even mean to credit a machine, and who should benefit from it? There are lots of these questions, and we haven't even begun to scratch the surface; we haven't yet passed any laws that really address these questions in a big way, and I think we have to start doing that.

Interviewer: And you're not optimistic we'll pull this off?

Toyama: It is widely known among legal scholars that the law is usually behind the technology; by the time the technology is done, all kinds of regulations that should have been put in place are not there. Personally, I'm optimistic in the long run but pessimistic in the short run. I think these things usually require some kind of crisis before we respond and act, and it has to be a crisis of just the right size. If it's a world-ending crisis, then we've lost the opportunity, and if it's too small a crisis, then nobody will care. But if it's just the right size of crisis, then it will cause us to take it seriously and start regulating.

Interviewer: So our only hope is just the right size crisis.

Toyama: Right.
Info
Channel: Variable Minds
Views: 720
Keywords: The Paradox Of Predicting AI, explainable ai, ai interpretability, what is explainable ai, agi, artificial intelligence, machine learning, AI, chatgpt, generative ai, science journalism, continual learning, ai news, ai, interdisciplinary researcher, Embodied cognition, ai sentience, societal impact, ai alignment, existential threat, ai existential risk, ai regulation, variable minds
Id: JWwCWhQt1kQ
Length: 12min 48sec (768 seconds)
Published: Wed Jul 05 2023