A.I. Ethics - with Dr. Paula Boddington

Captions
I think ChatGPT stops collecting data at 2021, but if it carries on updating, it's going to be collecting its own data; it's going to be a feedback loop. That's really the biggest thing I'm seeing: if you project AI very long term, it's going to be diminishing returns. When people talk about the Singularity, about AI exploding, I don't see how that's possible, because it has no relevance, it has no desires. All it can ultimately be is a kind of spinning back onto itself, maybe with a long arc, but on that long arc you're looking at a kind of intelligence levelling. I can't see otherwise.

This is Jonathan Pageau. Welcome to The Symbolic World. [Applause]

So hello everyone, I'm here with Paula Boddington. Those of you who've watched my channel have seen us talk on several subjects. She is an ethicist; she also teaches at different institutions, and she's been thinking a lot about the relationship between biology and ethics, but also technology and ethics. So it seems like this is the perfect time to talk, with the rise of AI and also the kind of madness around self-identification all happening at the same time. So Paula, thanks for talking to me again.

Really great to talk to you. The main problem is trying to focus on something where it doesn't just go all over the place, because everything is so interconnected.

Yeah. Well, I'd like to know where you think we are, because AI has now really exploded, and not only has it exploded, but the religious aspect of AI seems to be coming more and more to the fore: Elon Musk going on the record about the heads of Google literally saying they're building a god, and many of the transhumanists having pronounced themselves in publications. It's becoming more and more obvious that there is a religious aspect to AI, even in the people who are making it. So I don't know if you have thoughts about where we are now, with ChatGPT taking over, really within just a few months; all of a sudden it's everywhere.

Okay, yeah. There are lots of things that could be said about this. The religious aspect of AI has been there for a long time. But maybe it would be helpful to start by clarifying some things, because there are a lot of people working in that field who will no longer use the term AI, because it's so confusing. There are people who are trying to, say, build artificial general intelligence, AI with very broad capacities, and other people working on really discrete issues, for example using AI to improve the reading of medical images, the accuracy of how you diagnose cancers, and so on. I know a lot of those people, and there are big ideological divisions among people working on this; a lot of them now just prefer to talk about machine learning rather than AI. So it's important to break up the different ways of looking at it. We can think about AI in terms of the ideology of what we're actually calling artificial intelligence; we can think about the actual mechanisms and software involved; and we can think about the hardware and how it's infiltrating our lives and the infrastructure. These all operate on different levels.
One question is even how it got called artificial intelligence. Historically, the name dates from a summer school in 1956 at Dartmouth, where John McCarthy and a group of people got together because they thought they'd be able to build an intelligent machine over a summer, which is hilarious really. That's how the name stuck, but a lot of people don't really like it, because intelligence has such broad connotations and a lot of the work is really quite discrete.

But ChatGPT is really interesting. One reason is that, unlike, for example, using AI to diagnose cancer, ChatGPT is a chatbot: it's interacting with humans, it's interacting with the public. Which means it's not an elite question of asking ourselves, what is this doing, where are we pointing this? It's basically loosed onto the public, and it's acting in all kinds of ways that are unpredictable, or maybe they are predictable.

It's becoming something like a form of divination for people, and people are treating it that way.

Yeah, it's kind of becoming a better way of reading your horoscope, isn't it, of trying to find answers to things. But I think it's really interesting to think about the fact that ChatGPT is about language, because not all AI is about language. It's really easy to fall into thinking, maybe this is conscious, maybe it's really an agent, simply because it's focused on linguistic intelligence, which is only one aspect of intelligence. Some of what's going on emanates from AI itself, and some of it is about general things in the culture, in the air, which feed around in a circle and help to shape how we're thinking of AI. Focusing on intelligence is really closely connected to Cartesian notions of the mind and the body, thinking that what we essentially are is a mental substance, focusing very much on cognition as the hallmark of humans, and in particular, since not all cognition is language, on linguistic skills.

In fact, because there are so many different things we could talk about, I wrote down a sketch of a sort of feedback loop. I was really interested in the video you did about a month ago on AI and Moloch, about how it's a form of agency. I think you're right about that, but it can be really interesting to break it down and see how it's operating. In some ways it acts as a kind of ever-escalating feedback loop, but there are also ways in which it's operating on us as agents, getting us to carry along with it, like the fungi that infect ants and make the ants behave strangely, so they climb up and die and spread the fungus. But one of the things that's happening, I think, is that it's breaking things down in a way that, if we're thinking really carefully, gives us the capacity to go back and see: hang on, this is wrong, this is not working; we can go back and see what the problem is.
Let me illustrate. It can operate a bit like, I was using the metaphor of how bushfires spread. In Australia, where I used to live, they can spread really rapidly; one reason is that you get eucalyptus gases forming balls of fire that leap ahead. After the fire there's devastation, but there are eucalyptus trees that will only sprout from seed if they've been through a fire, so things can come up again. It's a kind of cycle.

So if we go back to the notion that some people are trying to build AI to be like a god, that there's theology in AI, one of the things happening is a response that says we need to combat it, we need to think about the ethics of AI. And one of the problems with that is that the standard ways we've been thinking about ethics break down. You can think about ethics broadly in terms of harms and benefits across the population, but we can't just carry on with old notions of harms and benefits, because what we count as a harm and what we count as a benefit is changing because of technology. A really popular, commonly used way of looking at ethics, a utilitarian model, would be based on trying to fulfil human desires, and trying to work out what our desires really are: not our short-term desires but our higher-end desires. But what AI is doing is hacking our desires; it's working on us as an object it can hack. So this ethical basis of simply looking at fulfilling our desires, or what makes us happy, cannot be used, and we should understand that it cannot be used. Which means we then have to think really clearly about what the basis of our values is, which can end up going back to thinking about God, or certainly some higher value. Have I explained it okay? You get a loop where we end up having to think not just about some superficial notion of ethics but about human nature, about where we are in the universe, because it's being taken apart by AI. So I'm a bit pessimistic and a bit optimistic, but I think we need to think really carefully about it.

No, I think you've explained it well. Let me think about what you said. One of the things AI is doing, and it's what technology does in general, is that it increases power and it externalizes the means. You have these means of action and those means get externalized, but the intention usually remains in the person: a car makes you more powerful, makes you go faster, but you add things to yourself. What AI seems to be doing, and it's not just AI, information technology in general has been doing this for a while, is externalizing thinking, the very process of thinking.
You can remain with a desire, let's say, or a question, but the means by which you get to the answer can now be completely externalized. So at the very least, what AI seems able to do is increase our power in a way that will also atrophy the muscle of intelligence that humans have. And then there's that bigger question you raised: what does it mean to be human? And also, what does it mean to be happy? Are we just people who want the kick, who just want it to happen, or is the very process of being involved in something part of what gives us meaning and purpose?

Yeah, precisely. When people talk about technological unemployment, they say you won't need to do any work anymore. The people building all this work because they enjoy it, and they think the rest of us are just going to sit around doing nothing. But it so depends on what it is that you want to do. There are some things, like organizing lots of numbers, where it's really handy to be able to outsource it and do it fast, but other things we want to do ourselves. What's the first thing kids start saying? I want to tie my own shoelaces. They want to be able to do stuff themselves. So it's taking things away from us, and it's actually atrophying us.

That's what ChatGPT is doing, though it doesn't do it on its own. For example, people in education were immediately worried, because people started cheating on their essays right away.

I mean, my kids found out about it in five minutes and knew exactly how to use it, and so did all their classmates.

Yeah. But think about the incentives for doing that: why would you be incentivized to cheat on an exam? It's only because of what's happened in education, the idea that education is about getting a formal certificate so you can prove you can get to the next step, not about actually wanting to learn for your own sake. If you really wanted to learn, I mean, when you learned carving, you didn't think, I wish I could just get this done by a machine, did you?

No, not for that, at least. You see the value in doing it.

But think about what it's doing to our language. It's not just using language, it's controlling our language. I don't only mean ChatGPT: language in general online is being censored, there's control of so-called misinformation, and "misinformation" is usually judged against a totally bogus model of what science is, because all scientific knowledge is up for grabs. It's also affecting our capacity to learn language; you just have to use spell check or something. But it's also imposing a kind of uniformity on us, a uniformity of language. Actually, I wrote a book recently. Can I advertise my book?

Sure, of course.

It came out a couple of months ago: AI Ethics. I should give you the details.
It came out with Springer. It's really long; I wrote it as a textbook, to try to put lots of ideas together. But the reason I mention it now is that the publishers decided to use an AI editor to edit it. I couldn't believe it. I thought my neighbour was going to call the police, I was shouting at my computer so much; you just couldn't believe what it did. One of the really interesting things was that I could choose between British spelling and American spelling. I chose British spelling, which meant I was using British English, and it really made me realize there are massive differences between American English and British English, and this AI couldn't tell the difference. It just put everything into American English, no offence to American English, but it's really different, and it kept changing my meaning, it kept changing my grammar. The present tense is used slightly differently; I didn't even realize that. It really mucked it all up. I asked ChatGPT whether it knows the difference between different forms of English, because Indian English is different again, and then there are the regional dialects in Britain. No, it had no idea; maybe it does now, because I asked it. What that means is that it's looking at all the stuff in English and just collating it into an amorphous mass. Different ways of using English are all being collated into one.

And then there's the stuff online. I think ChatGPT stops collecting data at 2021, but if it carries on updating, it's going to be collecting its own data; it's going to be a feedback loop.

That's really the biggest thing I'm seeing: if you project AI very long term, it's going to be diminishing returns. When people talk about the Singularity, about AI exploding, I don't see how that's possible, because it has no relevance, it has no desires. All it can ultimately be is a kind of spinning back onto itself, maybe with a long arc, but on that long arc you're looking at a kind of intelligence levelling. I can't see otherwise.

Yes, and it's going to be the same with everything. In music, apparently, a lot of the music online, I don't know the technical details, is set to a particular calibration based on keyboards, so it's a limited way of looking at music, and if it all ends up online it's the same with everything; language is just going to flatten. I asked ChatGPT about it: isn't this a problem? Its reply was actually quite funny; asking ChatGPT about AI ethics is a way of skimming the surface of how bad things are. It just replied that human language has always evolved. But this is going to stop it evolving, isn't it? It has always evolved in really different ways.

Yeah, and it's different because it's not referring to anything; that's the problem, it has no relevance.
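[Editor's note: the feedback loop described here, a model increasingly training on its own output, can be illustrated with a deliberately toy simulation. This is my own sketch, not anything from the conversation or a real training pipeline: the "model" is just a Gaussian refitted each round, and the assumption that the model favours its most typical outputs is represented by discarding samples far from the mean.]

```python
# Toy sketch of a self-feeding training loop: each generation's "training data"
# is drawn from the previous generation's fitted model, biased toward typical
# outputs. The measured spread (a stand-in for diversity) shrinks every round.
import random
import statistics

random.seed(0)

def train(data):
    """'Training' = fit the mean and spread of the data."""
    return statistics.fmean(data), statistics.stdev(data)

def generate(mu, sigma, n, truncate_at=1.5):
    """'Generation' = sample from the fitted model, keeping only outputs
    within `truncate_at` standard deviations (favouring the typical)."""
    out = []
    while len(out) < n:
        x = random.gauss(mu, sigma)
        if abs(x - mu) <= truncate_at * sigma:
            out.append(x)
    return out

data = [random.gauss(0.0, 1.0) for _ in range(5000)]   # generation 0: "human" data
for gen in range(8):
    mu, sigma = train(data)
    print(f"generation {gen}: spread (stdev) = {sigma:.3f}")
    data = generate(mu, sigma, 5000)                    # the model now feeds itself
```

Run it and the printed spread drops generation after generation: a crude picture of the "levelling" being described, under the stated assumptions.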
I don't know if you saw John Vervaeke's video on AI. I still need to talk to him about it explicitly, but he said the only solution to AI is embodiment, and not just embodiment but biological embodiment, so that it attains relevance.

Yeah, I completely agree, actually. I've listened to some of that, and I think one of the big problems is how it's making us less and less embodied. That's partly why I said we should think about the ideology of what we think AI is. It's interesting: as soon as we start thinking about AI as intelligent, based mostly on its use of language, we start thinking, if it's intelligent, maybe it's like us, maybe it's an agent. But intelligence is just one aspect of who we are, manifested in all sorts of embodied ways.

You know, we talked a long time ago about how these systems seem to be farming intelligence from humans and then basically acting as an extension of that farmed intelligence. That seems more and more true, because most of the AIs now are what they call hybrid AIs, which means they're trained by humans using different methods. Midjourney, for example, will produce several images and you choose which one to have it refine, and as you're doing that you're constantly telling it what's good and what's bad. So that notion, that AI is not intelligent or agentic in the way we think about it, but that it's farming intelligence from humans, seems right.

Yes, and it's also doing it in terms of the hardware, because the hardware is mostly made by people working in really awful conditions, sometimes under what's effectively slavery: kids in the Congo digging up minerals with their bare hands, people in China working in appalling conditions. So that's another way in which it's hacking humanity.

Interestingly, one of the ways it's also hacking our agency is in the responses to it, in the attempts to control it: sometimes through controlling what language people can use online, but also through legislation. The European Union has an AI Act which is just about to go through; they've been working on it for two or three years, trying to reduce the harms of AI. Because AI can cover lots of different things, it starts by trying to define AI, and then it focuses on legislating where AI is going to cause serious or major harms. But the preamble to the Act, written by, I'm going to sound a bit Brexity now, the unelected Ursula von der Leyen, who heads the European Commission, says that it's important to have legislation to control AI in order to build public trust in AI, so that we can advance innovation and increase the uptake of innovation. It's focused on human rights, but on the human rights of people within the European Union, and if you're going to increase innovation you're going to increase the hardware, which is mostly outsourced to people in the rest of the world who are working in really terrible conditions.
Yeah. But also the idea that the reason we're doing this is to build public trust in AI: AI is treated as a given, it's going to take over, so now we want to build public trust in it. You know, my daughter, who's fifteen, received an email from the government, from the Ministry of Education, asking her to fill out a questionnaire online, saying you could win 50 dollars or whatever. And the questionnaire was asking her what she thinks about AI school counsellors for psychological issues.

Well, yeah, that's where we are.

Think about it. And the questionnaire was basically suggesting that this AI counsellor would not have any prejudices, wouldn't be racist, wouldn't be sexist, that it would be able to avoid all the prejudices of a human counsellor.

For one thing, that's a complete lie, because it will have been built by somebody.

Yeah, but it'll be the right people who built it, who put in the approved, non-hate prejudices.

Wow. But also, one of the things that will be happening is that it'll be collecting data. It collects data on people all the time, constantly. That's another way in which the ideology spreads, through the metrics of all the data being collected. All the time we're having data collected about us, and data that you thought was just ambient junk turns out to be data that can tell you something about yourself, or could potentially. So that's another way we're being trained, or seduced, into a mentalistic conception of the human being, because you start to think we're basically just made up of information, made up of data.

But with the AI counsellor, you might also ask why the question arises at all. All these things are nested in other things going on around them. Cheating on exams with ChatGPT is nested in a wider issue about competitiveness in education and exams. And there's an easy solution to that; I've got a really novel idea: you could just sit in a room at a desk with a piece of paper and a pencil. Something really novel, never been done before. A lot of it could easily be sorted out that way. But, you know, when I was at school there was no such thing as a school counsellor; maybe there should have been, maybe some things were overlooked, it was a bit more brutal in those days. But you might start asking: why are so many schoolchildren, so many teenagers, having such a massive range of mental health problems?

Yeah, all of this is definitely crunching together at the same time, this mental health crisis with young people. But it does have to do with AI in the sense that it's the question of what it is to be human, what it is to be part of a society, what it means to be someone in a community; all these things are exploded right now.

Yes. I think the pandemic really fed into it as well, because lots of technology was introduced and it just never went away.
Lots and lots of things were just introduced, and because they work fine, or are slightly better in some way, they're just being kept. But I think all the online presence is again encouraging a disembodied way of relating to people. I think we've talked about this before: you can get easy access to a so-called community, but it's just completely not the same as actually spending time with actual people, is it?

It brings about this idea that we are these Cartesian disembodied things, that there is a rift between your mind, your identity, and your body. This is accelerated so much. Even this conversation, I mean, it's fine as a conversation, but I've noticed now, because I've done both, talking to someone on Zoom and then meeting them in person, that there are all these unconscious cues about a person that have nothing to do with just what you see and hear: body position, muscle tension, probably even smells. There are all these things that are part of our human engagement that, because of these screens, we're able to alienate ourselves from, and that seems to be part of what's feeding the massive alienation between our bodies and our identities and minds.

Yeah. There's so much concern now about people wanting their identity to be validated, or being really upset if you don't completely agree with their identity, as if it's really, really fragile. But if you weren't online so much, if you're with somebody, you get validation and a sense of belonging simply from sitting and having a cup of tea with someone, even without speaking. I'm absolutely certain we pick up signals from people; we have electrical fields that extend a long way from our bodies, and I'm sure that's one of the ways we're interconnected. We're human animals, we have bodies. Animals smell each other, and we act as if we don't have any of that, as if we're just these eyes and this mind.

Yes, and that's definitely part of the whole issue. The fact that we're able to conceive of AI as similar or even superior to us in terms of intelligence and agency means that we are deeply misunderstanding what it is to be a person.

Yes, precisely. There's another aspect of the work I do, did I mention it? I work part-time with a group of sociologists; we look at the care of people with dementia, particularly in hospitals. One of the things we're really concerned about, and it's not to deny that dementia is a really difficult condition that can cause lots of cognitive issues, is to push back against the idea that we're simply cognitive creatures. There's lots and lots of work indicating that people, even after they've lost almost all language, can retain a really strong sense of where they are in a community, of how to behave, and a sense of belonging and understanding with other people, often mediated through very ordinary things.
A lot of the work my colleagues have done in hospitals looks at things like the clothing people wear. Especially if you have dementia, being dressed in your own familiar clothing helps give you a sense of where you are and who you are. People with dementia, we found, are much less likely to have their own clothing on in hospital, much more likely to be just in a hospital gown, for the men not to be shaved, to lose their glasses, and so on. It's that sense of embodied belonging. Or they're just left in bed, when a person could be transformed if they were taken out of bed to sit and have a cup of tea with somebody. And that's not just people with dementia; that goes for all of us.

Yeah. The image of the old person sitting on their porch in their rocking chair, it's an image we have here in Canada for sure: the idea that that person is there but also kind of not totally there. But they're still there, still part of life, they still have their little routine and do their things, and we understand that maybe they're slipping away to some extent, but that doesn't mean, like you said... Now let's put him in a hospital bed, in a hospital gown, with antiseptic walls, and make it all worse, because now he's not part of anything.

Yes. An artist joined our team last year; she's doing some work with painting workshops for people living with dementia, as a way of helping people express ideas, and I've been sitting in on some of the workshops with her. It looks at the materiality of paint as a form of thinking, expressed through what you can understand and communicate through the actual physical materials you're using. That's a really embodied way of being human and acting something out.

You can see it here in Canada especially; all these things are related. The scandal here around medical assistance in dying, MAiD, is blowing up. It's only been a few years since it became legal, and I think it's 30,000 people who have gone through it. And you can see it's this same idea: you're this thing, and at some point you decide, all right, now is my time to die, because it's all cognitive, it's all mental, so you just decide and then you die. It really is this reduction of the human person to an abstract conscious being.

Yeah, and it's also related to the use of technology. One of the things happening is the idea that every problem must have a technological solution, or that if you have a bit of technology you must use it, that every desire must be fulfilled, that if you have this much pain it has to be taken away.

And you can imagine that AI is going to be perfect for this, because you can take away the guilt, or the ethical problem, of the doctors having to make certain decisions, because the AI is going to decide who gets to live and who gets to die.
And it'll be an "objective" decision, all good, just like the AI counsellor: no prejudice, none of the human foibles, no guilt.

Yes, and it probably will happen. But that is also a way of outsourcing responsibility, because it would only happen within the context of a medical profession where, for some unknown reason, people still tend to trust doctors; there's still an aura that this is somehow healthcare. How it gets labelled does matter: if they did it in a butcher's shop, that might give a better idea of what was actually happening.

The doctors, at least here, are almost already reduced to machines. If you go to the doctor now, all they do is sit in front of a computer and fill out a report as they're talking to you. They're already basically just gathering data, and their decision is kind of handed down by the system. So it's really just a question of time before this becomes more and more automated.

Yeah. I'm definitely not a short-term optimist on all these fronts. There's the use of metrics, and the idea that these IT systems are actually going to save time; we've been told that over and over and over again. Whereas I can remember when I first started as a lecturer, you would just write your reading list out on a piece of paper and give it to somebody to type; now we have so much more work to do. And one of the things metrics do is capture information, and then the information belongs to the institution, so it's a way of controlling things. I've noticed that where people are explicitly told that management can read your emails, everyone is really polite: oh, this change at the university is going to be really good. Because they're going to read your emails. Whereas when you wrote something on a piece of paper and gave it to someone, nobody else could read it. So that's how it's controlled, but it's always sold as: this is going to save you time. Somebody will say we should introduce AI into medicine because then the doctors will have more time for communicating with the patients.

Of course, obviously that's what's going to happen; we all know it's going to lead to doctors spending more time with patients.

It's not going to happen, there's no way. But everything is also reduced to metrics, and that's another way AI is controlling us: if students are reduced to metrics, then what gets measured is what we look at. If you reduce things to harms and benefits, it depends on how you measure them.

So, what do you think about the Moloch video? What I was trying to present there, and it's not me who came up with this obviously, that's the Moloch problem itself, is that we're so focused on the AI's agency, on whether or not AI becomes a consciousness, that we stop noticing the way it's actually acting on us.
That action is sometimes outside the software and outside the hardware; it's in the very competition to implement AI. We as agents are in a competition to implement it, because we know that whoever implements it will have an advantage, so it's almost like an evolutionary race towards the implementation of AI. And that agency seems much stronger than the question of whether the software and the hardware become conscious.

Yes, there are lots of different sorts of arms races going on. One of the problems ordinary people have is working out what they can do about it. On the one hand you could say there's not much we can do, but there's a danger in thinking this is going to be some big crisis, so let's just wait for the crisis and then something will happen. I think there are still ways in which people can be aware of what's going on and try to get a bit more control in their lives.

One of the things AI is going to do, and I'm noticing it already, we're doing this Snow White project and we're going to see it, is that there will be a deliberate focus on human intentionality. It's not just that people think AI is going to take over art; I don't think so. I think there will be even more of a fetishization of the art object, because with AI it will become very precious. The fact of someone actually making something will have a kind of aura that, let's say, the drowning wave of Midjourney images, of AI-generated images, won't have. People will coagulate, will rally, around intention more. So that's a hopeful aspect of what AI seems to be bringing.

Yes, it will lead to a turn towards people wanting the real thing. Why do people actually go to galleries to try to see the Mona Lisa, when you can't get anywhere near it? Because people want to see the actual thing, don't they; the galleries are absolutely crowded. And so many people are really keen to own a piece of art that somebody painted, even if you might think it's not terribly good; they just love it because somebody has actually made it. I think there's going to be a move towards people performing live music, because recorded music is just nothing like a live performance. And even in terms of online censorship, and people being permanently cancelled from things: people are meeting up. I know people in London who are being cancelled from so many different things for their views on various subjects, and people are going to Speakers' Corner now, where you've got the police to protect you and you can say whatever you want.

When you talk about live music, I think that's a good example to help people understand the difference.
When you go to an event where there are people, there's a kind of electricity, I don't know what it is, but a kind of electricity that develops, a feeling of participating in a group, in a movement, in a wave that's going through, and you just cannot have that when you're sitting at a screen. That will not go away. The VR goggles are not going to provide that kind of electricity you get from going to a sports game or a concert.

No, they just can't. Though I've heard you talk before with a bit of scepticism about concerts.

Well, for sure, I think concerts are downstream from real participative actions, whether it's folk dancing or religious participation; they're downstream from that, but they still have a level of reality that is undeniable.

Yes, precisely. I've been to some fantastic operas recently; when you get back you can't sleep for hours. And there are levels of participation in those sorts of things as well, because a lot of people who go know a lot about the performers, or join some society where they can go backstage; there are all sorts of levels of participation. But yes, I think that's the way to go: actually meeting up with people.

Do you know, I don't think I've mentioned him to you before, there's a really interesting person who works in VR called, I hope I pronounce his name correctly, Jaron Lanier. He's one of the people who actually developed VR, but he's got some really interesting things to say; he has videos online and a number of books. One of the things he says about VR is that the best thing about it is taking the headset off, because then it makes you realize how absolutely fantastic reality is.

Yeah, the amount of colours, the amount of depth. I was walking this morning, and when you look at the sun coming through the leaves in a wooded area, you can have as much resolution as you want on your 4K whatever, it just can't compare. And you don't get the smell, you don't get the feeling of the heat coming from the sky; you don't have all these sensations that are so rich.

That's something else I've noticed. To counteract so much awful stuff online, I joined lots of Facebook groups, like Welsh landscape photography, or the Corrugated Iron Appreciation Society, things where people just post photos of stuff. And then people started posting AI photos, and people can tell straight away.

Yeah, really quickly, right away. They keep telling us that at some point we won't be able to tell the difference between AI and real photos, but I'm not sure. Although when they posted those pictures of the Pope, everybody got tricked. But usually you can just tell; there's a subtle difference, something about it.
Maybe at some point we won't be able to tell, but for now it's still quite easy. It's the same with ChatGPT: a lot of people said, oh, this is really fantastic, its answers are really great, but if you just keep questioning it you can tell. It avoids things, it stays low-level; it can attempt to synthesize some ideas, but it's very superficial.

Yes, and the other thing is that even if the end result is the same, or close enough to be good enough for what you want, it hasn't got to the end result in anything like the same way. In constructing an image, doing a really good AI painting or something, it hasn't spent years in life drawing class. It doesn't look at the figure and think about what it is you're seeing and how you might see it, the different techniques a drawing class would teach; it doesn't do any of that at all. So in terms of saying it's intelligent: it can do things and achieve a result, but whether it's intelligence in the same sort of way as ours is a really difficult question. ChatGPT just predicts what the next word is likely to be.
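[Editor's note: the "it just predicts what the next word is likely to be" point can be made concrete with a tiny toy model. This is my own sketch, nothing like a real transformer and not how ChatGPT is actually built, but the objective has the same shape: estimate which word tends to follow the current one, then sample. The corpus and function names here are made up for illustration.]

```python
# Minimal bigram "next word" predictor: count which words follow which,
# then generate text by repeatedly sampling a likely continuation.
import random
from collections import defaultdict

corpus = (
    "the model predicts the next word "
    "the model predicts the likely word "
    "the next word follows the previous word"
).split()

# Count observed continuations for each word (duplicates preserve frequency).
following = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev].append(nxt)

def generate(start, length=8, seed=0):
    random.seed(seed)
    words = [start]
    for _ in range(length):
        options = following.get(words[-1])
        if not options:                       # no observed continuation
            break
        words.append(random.choice(options))  # sample proportionally to counts
    return " ".join(words)

print(generate("the"))
```

The output is fluent-looking only because it recombines what was already in the corpus; nothing in the loop refers to anything outside the text, which is the "no relevance" point made above.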
But what's interesting about that is that the people who are programming AI, amassing this insane amount of language data and pumping it into these algorithms that predict the next word, don't totally know what's encoded in human language. It is true that some of the very deep aspects of human consciousness will necessarily be encoded in the structures of languages, but that doesn't mean that those who are playing with this know what they're playing with; they're playing with very dangerous tools. Some things are going to come out of AI that nobody will be able to predict, because they're not aware of the very strange and deep drives that exist within human consciousness and are ultimately hidden in human language. That's why, when I talk about the idea that AI is like the body of a fallen angel, I know some people struggle to understand what I'm saying; they think I'm just talking about mythological images. Of course the mythological images capture it very well, but in technical terms you can understand it this way: you don't know what's in human language, you don't know what all the motivators are, you don't know what all the hidden structures are that make up the underlying meaning. We're just playing with this, tossing the dice, but there are agencies in there that are very dark: the type of agencies that made humans sacrifice other humans, the type of agencies that made people commit genocide. All these agencies are there in the very structure of human language, and people don't seem to totally understand that. The raw version of ChatGPT probably has it all, and now they're just trying to patch over it to stop that stuff from leaking out; it's probably there in the raw version, bubbling up all the time.

Yeah. Well, there are so many attempts to work out general ways of trying to control AI. I think I might have mentioned Stuart Russell to you before; he has written a book called Human Compatible, which is supposed to be a response to Nick Bostrom's book on superintelligence. Bostrom wrote that book a while ago now, 2015 or 2016 I think.

He's the paperclip guy.

Yes, about how to control superintelligence. Stuart Russell is a real AI expert; he knows what he's talking about, he co-wrote the major textbook on AI. His idea is that we should try to make sure the AI we're producing is aligned with what humans want. But then the problem is: what do humans want? Do you know what humans want? He recognizes that problem, and here's his solution, which I honestly think is terrible: he thinks we should try to train AI to observe human behaviour and extrapolate from our behaviour what it is that we really want. He has a book on it, and he did a set of lectures on the BBC a year or two ago.

What is that going to be? You could probably interpret human behaviour in loads and loads of different ways. What do you really want? Maybe our darkest desires are really our strongest ones.

Yeah, exactly. Maybe the drive towards murder and war and rape, all these things, they're there. The idea that you would just watch and extrapolate from human language and human behaviour is just not going to fly.

It was like, P.S., please exclude all Hannibal Lecter types.

But there's a narrative in human society, encapsulated in the murder of Socrates or the murder of Christ, which is that human quality is found in exactly that, not in the quantity; it's found in these exceptional shining bright lights that we look to and align ourselves on, while most of the rest is really chaotic, dark stuff. So the idea that you would extrapolate mathematically, statistically, from human behaviour is insane.

It's crazy, because a really, really strong motivator for human beings is envy, isn't it? If you want to be picked for something, there are two ways of doing it: one is to make yourself better than the others, and the other is just to knock the others down, because then you're the one who's picked. Envy is such an unbelievably strong motivator. If it picked up on that, where would we be? We'd all just be knocked down. That's the problem.
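[Editor's note: a deliberately naive sketch of the worry being voiced here. This is my own construction, not Stuart Russell's actual assistance-game formulation: if "what humans want" were inferred simply by matching the frequencies of observed behaviour, the inferred preferences would just reproduce that behaviour, envy and doom-scrolling included. The action list and function names are hypothetical.]

```python
# Infer "preferences" as relative frequencies of observed actions, then let an
# imagined assistant "optimise" for the top one. Garbage in, garbage optimised.
from collections import Counter

observed_behaviour = [
    "help a neighbour", "doom-scroll", "doom-scroll", "envy a colleague",
    "doom-scroll", "cook dinner", "envy a colleague", "doom-scroll",
]

def infer_preferences(actions):
    """'Reward inference' here is just the relative frequency of each action."""
    counts = Counter(actions)
    total = sum(counts.values())
    return {action: n / total for action, n in counts.items()}

preferences = infer_preferences(observed_behaviour)
policy = max(preferences, key=preferences.get)   # what the assistant promotes

for action, weight in sorted(preferences.items(), key=lambda kv: -kv[1]):
    print(f"{weight:.2f}  {action}")
print("assistant optimises for:", policy)
```

The point of the toy is only that behaviour-frequency is not the same as considered value; real preference-learning proposals are more sophisticated, but the objection raised in the conversation is aimed at exactly this gap.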
And what's interesting about ChatGPT and these AIs is the way they're being trained now: they're trained on attention-grabbing mechanisms. They're trained on the internet, on these social media platforms, which are really the worst places, because they're only there to grab your immediate attention, and you tend to default to the most immediate pleasures: envy, rage, anger, lust. All these passions are the ones driving the platforms.

Yes. One of the things you could say about AI acting as a kind of super-agency is that you can turn it on its head and ask what the broad common characteristics of agents are, and one common characteristic of human agents is to manipulate and dehumanize other people, to manipulate people by dehumanizing them. Ideally, as Immanuel Kant and others have said, we should always strive to treat other people as ends in themselves and never merely as means. But something like the Stuart Russell setup actually builds the opposite in: the AI is being asked to treat the human as a means, a means to finding out what humans want. It's looking at us from a third-person perspective, like a behaviourist, looking at our behaviour from the outside. So part of what's happening with the agency question is that we see it as an agent, partly because some people think it must be an independent agent of its own, but also because we're allowing it to dehumanize us by focusing our attention on some things and away from others. And the wider metrics being used mean that some things aren't measured at all and get left out, so we just focus on what is measured.

The thing that really upsets me, the thing I'm afraid I'll one day get arrested for shouting at somebody in the street about, is parents of small children who are just looking at their phones and not talking to their children. It's actually not funny, because human babies and little kids are designed to grab our attention. They need it, they need it to survive, but they also need it for language development, for cognitive development; they need constant interaction, well, not constant, but you know. And there are so many parents who are just looking at their phones and not looking at their kids.

Yeah, you see that in parks. I had a phone really late, and even before I had one, I would go play with my kids in the park and watch parents sitting on the bench with their phones.

Another thing it does is isolate us from each other in ridiculous ways. Can I give you one really small example? If you look at these tiny little examples you can think, oh yeah, that's happening in my life, and just not do it. I was in a pub with some friends last week, and when we went to pay they proudly announced that not only do they only take cards, you pay by QR code. For one thing, if you hadn't got a smartphone you couldn't pay. But it also meant that the one person who was paying had to scan it.
And they were proudly saying, oh, it's really good for the environment because we don't have a paper bill. But instead of a little scrappy bit of paper they came out with a nice posh card with the pub's name printed on both sides in colour. So it's blatant lying, like a conjuring trick, lying to your face that they're saving paper when they've just handed you a piece of card. Then the person paying had to scan it onto his phone, so nobody else could see the bill, and he had trouble doing it; the lie was that this is quicker, and it took longer. Then we paid, and afterwards remembered that one of the things we'd ordered hadn't been available, so we wanted to check whether we'd been charged for the cheese we didn't have, and we couldn't, because it had vanished. And the waiter was just boasting about it. I wanted to say, listen mate, read my book.

That's when you wonder what is driving this. It's the same even with the menus: you go into a restaurant and instead of just giving you a menu they give you a QR code, and then you're on your phone trying to figure out the menu. It's so ridiculous, and so much more complicated. It's interesting to notice that it doesn't necessarily make things easier; it's just some fetishization of the process.

It's a fetishization of progress: it must be progress because you're using technology. It's also excluding people. Look, this is my phone, proudly; I couldn't have paid, I'd have had to do the washing up. My friends always ridicule me for that phone, so people get picked on and ridiculed. But it's also excluding people in another way: with a paper menu they usually bring more than one, and if we'd had a paper bill we could all have looked at it and said, look, they've charged us for the cheese; instead it's just one person squinting at a phone. And it's always sold to us as saving time. Sometimes saving a tiny bit of time is exactly what you want: in a pit stop at a Grand Prix, or trying to save somebody's life in ICU, an incremental difference can save lives. But when did anybody go to a restaurant and think, the real problem here is that it takes thirty seconds to pay the bill and I want to pay it in twenty-five?

Yeah. So what do you think are some of the ways, because it seems like AI is taking over, there's no doubt about it, there's no way around it really, what are the ways we can engage with it, or not engage with it, and find ways to stay as human as possible in this context?

Well, I'd probably say similar things to you, really. We need to do as much stuff as possible in the real world, but also take the time and the chance to think really seriously about what it is to be a human being; to join in community, to join in rituals.
you know to see people as much as possible to spend as much time spend as much time as you can in nature because wherever you are you could find a bit of nature um but also just really just really think I mean one of the things you can do is just stop just kind of stop rubbing your friends for for not that's one of the things that's happening people people get people joined in or or just oh for example you can just point you know maybe write to institutions and point things out so but I noticed for example the QR code men is the National Gallery in London most of all most all the other places you can get a paper map and you might have to pay a pound for it you can get a paper map of the museum and then with your friends you can say which galleries you want to go to we can have a little look and carry it around with you the National Gallery you have to have a QR code on your smartphone now so so you could maybe I should do that at the end of this right to the National Gallery and say look you shouldn't do this for one thing it's got disability of occasions because not everybody can read stuff on the smartphone you know it's too so small yeah it's a ridiculous thing it's like it's so unpractical to look at him to look at the map on your smartphone yeah so but I would I would just I would just suggest I think just look at how it's just become integrated into the infrastructure of everything you do and just just ask yourself is that what you I mean like maps for example there's some really undermining people's capacity to do oh I was I was walking somewhere the other day in London and we it's a slightly confusing area because the streets just go out at different angles and we had to try to get to some pancreas it's always sort of heading north it's fine so how do you know we're heading north well because of the Shadows on the buildings it's West over there but it's it's uh sorry it's just it but just just look that's why I think it's like important to think about what the agency is doing in really kind of minute little details because that's the only only place that's the only place we have to try to yeah yeah right try to think about unless you happen to be you know have a have some particular position um hmm yeah all right all right let's take that as a lesson for today so so Paula where can people get your book if they are interested in your in your textbook is it possible to get it somewhere online or to order it no you can get I mean you can get it it's you can get it from um the Springer website it's on you can get it on Amazon you can get it on all the usual on all the usual places it's quite expensive because it's it's a textbook or get or get your school or University Library to forget it yeah so yeah so one of the things one of the things I've tried to do is called AI ethics but one of the things I do is actually explain how the standard ways of looking at ethics just don't work they break down we'll be looking at air money and how you need to look at Notions of human nature I mean I mean there's there's stuff in there about religion actually because I talk about how many provisions of AI are actually fundamentally really really religious and we need to understand understand the roots of that in order to understand in order to understand what what visions of the future are being presented to us yeah you know what what position humans how how it's understanding a position of of humans I mean some some of the stuff is some of the stuff is like Bears uncanny resemblance to lots of 
religious ideas like the general idea that humans are really special because we're the ones in a position who are going to be able to create it's fantastic being who's going to be the Fulfillment of the universe well one of the things that it seems that we could use right now is an understanding of non-human agency and what those non-human agencies were understood to be like in the ancient world and this is going to sound silly to people but yeah fairies and all these little types of agencies how is it that ancient people understood them like the the demons the Angels you know how do they see them acting on people and how do they see their agency kind of manifesting itself I think that we've evacuated all these ideas of non-human agency and now we're basically dealing with the problem of non-human agency but we don't have any of the tools to to deal with them because we don't understand them we don't think about you know you know the idea of if you think about for example a fairy tale like the Shoemaker and the elves you know and the notion of of action that is beyond the will of the of the of the person it's like these these tropes that are in fairy tales you know the pucks and all these kind of household uh agencies these are important in understanding the way and the dangers of non-human agency in our lives yeah yeah I mean that's that's one of the dangers of the way in which we've been encourage you to think about the mind in a purely sort of Cartesian mental way because we have this illusion then but we have we are totally in control of our minds and that we're Justified undermined by the constant need for affirmations from other people but we're just isolated from from from other people I mean have you are you familiar with um Canadian philosopher Charles Taylor yeah yeah like his like his his distinction between the the um the buffered sense self and of course so poor herself yeah yeah those kinds of ideas are ones that people were well worth well worth thinking about in terms of we just got this idea but we're just completely buffered which means we're actually we're not we're we're more poorers than ever exactly and then as people yeah and as people kind of join with their AI familiar they'll have this little AI familiar that'll be working for them you know it's like you should you should start to think about these old stories again because yeah the the buffer itself is over you know we are moving into a very very ambiguous uh state of relationship of identities and agencies yeah are you going to do the elves and the Shoemaker in one of your stories yeah we'll see it's not part of the plan for now but that could be an interesting idea you know I'm trying to think about it in terms of obviously I'm just telling the story it really is just a fairy tale but I do want there to be a certain angle in it that is eluding into some of the things that are going on now like in Snow White we did a whole you know host there's a whole aspect of Snow White which is the obsession that the queen has with the mirror and the mirror revealing who's the most beautiful in it and so this this whole idea of vanity and of the cell phone and of the you know this firming machine that's affirming or denying your value value uh you know it's like it's so relevant today that you know there's a that they there's more emphasis on that in the way that I tell the story uh and so I think it's the same with something like Shoemaker and the elves I would have to find a way to explain these kind of agencies that are 
acting out you know side and on the side of our will and how they're there and how to deal with them uh in a way that would be helpful for people so yeah it's a good idea think about it great it's my favorite one actually yeah I love that story definitely definitely all right everyone so so uh so if you can uh you can check out uh Paula's Paul's book and thanks again for your time and and uh your your thought about all these crazy these crazy times I really appreciate it thanks oh great to talk to you all right
Info
Channel: Jonathan Pageau
Views: 13,713
Keywords: symbolism, myths, religion
Id: jj69RnoE9qI
Length: 62min 12sec (3732 seconds)
Published: Thu Jul 20 2023