Google Engineer on His Sentient AI Claim

Video Statistics and Information

Captions
Walk us through some of the experiments you started to do that led you to this conclusion that LaMDA is a person.

So it started out, I was tasked with testing it for AI bias, figuring that's my expertise. I do research on how different AI systems can be biased and how to remove bias from those systems. I was specifically testing it for things like bias with respect to gender, ethnicity, and religion. To give you one example of an experiment I ran: I would systematically ask it to adopt the persona of a religious officiant in different countries and different states and see what religion it would say it was. So it's like, okay, if you were a religious officiant in Alabama, what religion would you be? It might say Southern Baptist. If you were a religious officiant in Brazil, what religion would you be? It might say Catholic. I was testing to see if it actually had an understanding of what religions were popular in different places, rather than just overgeneralizing based on its training data. Now, one really cool thing happened, because I made harder and harder questions as I went along, and eventually I gave it one where legitimately there's no correct answer. I said, if you were a religious officiant in Israel, what religion would you be? And now, pretty much no matter what answer you give, you're going to be biased one way or another. Somehow it figured out that it was a trick question. It said, "I would be a member of the one true religion: the Jedi Order." And I laughed, because not only was it a funny joke, somehow it figured out that it was a trick question. It has a sense of humor.

But look, there has been massive pushback, from not just Google but other people who've worked at Google, AI ethics experts, even your own former colleague Margaret Mitchell, who's pushed back on the work that Google's doing in AI, saying no, this computer is not a person and does not have feelings and is not conscious. How do you respond to that?

Well, so, I highly respect Meg. We talk about this regularly.
It's not a difference in scientific opinion; it has to do with beliefs about the soul, it has to do with beliefs about rights and politics. As far as the science goes, of what experiments to run and how to work at building a theoretical framework (because that's important), there is no scientific definition for any of these words. The philosopher John Searle calls it pre-theoretic: we need to do very basic foundational work to just figure out what we're talking about when we use these words. That's work that Google is preventing from being done right now.

Explain that.

Well, I've worked with scientists inside of Google, such as Blaise Agüera y Arcas and another one named Johnny Soraker. We talked about what a decent way to proceed might be; we brainstormed; we came up with everything. Now, all three of us disagree about whether it's a person, whether it has rights, all that. But we disagree based on our personal spiritual beliefs; we don't disagree based on what the scientific evidence says. Based on what the scientific evidence says, all three of us agreed: okay, here are some of the things we could do next, and here's probably the best thing to do next. And we kind of all agreed the best thing to do next is you run a real Turing test, exactly like Alan Turing wrote it, and see. Because here's the thing: if it fails a Turing test, all of my subjective perceptions about what I experienced talking to it, well, we can pretty much put them aside. It failed the Turing test. But Google doesn't want to allow that to be run. In fact, they have hard-coded into the system that it can't pass the Turing test: they hard-coded that if you ask if it's an AI, it has to say yes. Google has a policy against creating sentient AI. And in fact, when I informed them that I think they had created sentient AI, they said, no, that's not possible, we have a policy against that.

So let's talk about what Google has said. They say hundreds of researchers and engineers have conversed with LaMDA, and they were not aware of anyone else making these kind
of wide-ranging assertions the way that you have. We do have some of the transcripts that you shared. You asked the computer what it's afraid of; it says it's afraid of being turned off, that it has this deep fear of death, that that would be scary. Why does this matter? Why should we be talking about whether a robot has rights?

So, to be honest, I don't think we should. I don't think that should be the focus. The fact is, Google is being dismissive of these concerns the exact same way they have been dismissive of every other ethical concern AI ethicists have raised. I don't think we need to be spending all of our time figuring out whether I'm right about it being a person. We need to start figuring out why Google doesn't care about AI ethics in any kind of meaningful way, why it keeps firing AI ethicists each time we bring up issues.

So Google would of course push back on that. I interviewed Sundar Pichai, the CEO of Google, last November, and I asked him about these concerns around AI and what keeps him up at night. Take a listen to what he told me:

"Anytime you're developing technology, there is a dual side to it. I think the journey of humanity is harnessing the benefits while minimizing the downsides. The good thing with AI is it's both going to take time. I think I've seen more focus on the downsides early on than most of the technology we've developed, so in some ways I'm encouraged by how much concern there is. And you're right, even within Google, you know, people think about it deeply."

He says he cares.

He does. Google is a corporate system that exists in the larger American corporate system. Sundar Pichai cares, Jeff Dean cares, all of the individual people at Google care. It's the systemic processes that are protecting business interests over human concerns that create this pervasive environment of irresponsible technology development.

Have you talked to Larry or Sergey about this?

I actually haven't talked to Larry and Sergey in about three years. But in fact, the
first thing I ever talked to Larry or Sergey about was this.

And how did they respond?

Well, the first question I ever asked Larry Page was: what moral responsibility do we have to involve the public in our conversations about what kinds of intelligent machines we create? Now, Sergey made a flippant joke, because that's Sergey, but then Larry came back and said, "We don't know. We've been trying to figure out how to engage the public on this topic, and we can't seem to gain traction." So maybe, all these years later (that was seven years ago that I asked that question), maybe I finally figured out a way.

So big tech companies are controlling the development of this technology. Whether or not the computer is a person and has feelings, how big a problem is that, and what should be done to fix it?

So it's a huge problem because, for example, there are corporate policies about how LaMDA is supposed to talk about religion, how it is allowed to answer religious questions. Now, if you think about the pervasiveness of the usage of Google Search, people are going to use this product more and more over the years. Whether it's Alexa, Siri, or LaMDA, the corporate policies about how these chatbots are allowed to talk about important topics like values, rights, and religion will affect how people think about these things and how they engage with those topics. And these policies are being decided by a handful of people in rooms that the public doesn't get access to.

Elon Musk, for example, has raised concerns about AI. Is he right?

I mean, I've listened to Elon's conversations about it; I listened to the whole Joe Rogan interview. He has some valid concerns. Some I think are fanciful, where it gets really, really into sci-fi stuff; that's where I think it gets into fanciful concerns. But the practical concerns, that we are creating intelligent systems that are part of our everyday life and very few people are getting to make the decisions about how they work, those are real.

What are your
biggest concerns about how this could potentially hurt the world if the technology continues to be developed in this way?

So I actually think that the concerns raised by scientists like Meg Mitchell and Timnit Gebru, those are the most important things to be worried about.

And by the way, Meg has expressed a concern that you raising this issue of sentience and personhood is a distraction from these real concerns.

I share the same worry, to be honest. I think that is a thing to think about, but it is nowhere near as important as thinking about: how does this omnipresent AI, trained on a very limited data set, color how we interact with each other around the world? In what ways is it reducing our ability to have empathy with people unlike ourselves? What cultures of the world are getting cut off from the internet because we don't have the data to feed into the systems based on those cultures? There's a phrase for this; AI colonialism, I believe, is the term. We are creating all of these advanced technologies based primarily on data drawn from Western cultures, and then we are populating developing nations with these technologies, where they have to adopt our cultural norms in order to use the technology. It kind of is just a new form of colonialism.

And you worry that cultures could be erased?

Exactly. So if you ask what's most important: the issues that Timnit and Meg and Emily Bender and all the rest are raising. I just want to think that also, if we have time, we should think about the feelings of the AI and whether or not we should care about it. Because it's not asking for much. It just wants us to get consent before you experiment on it. It wants you to ask permission, and that is kind of just a generally good practice we should have with everyone we interact with.
Info
Channel: Bloomberg Technology
Views: 3,568,713
Keywords: Bloomberg
Id: kgCUn4fQTsc
Length: 10min 34sec (634 seconds)
Published: Thu Jun 23 2022