#043 Prof J. Mark Bishop - Artificial Intelligence Is Stupid and Causal Reasoning won't fix it.

Captions
Do you remember the taste of the last orange you ate? Do you remember the warmth of that welcoming cup of tea you had when you came in from the cold November rain? Do you remember the smell of the hair of the first love that you kissed? I'd like you to try and hold on to these sensations, because I'm going to come back to them. Welcome back to the Machine Learning Street Talk YouTube channel and podcast with me, your host, Dr. Tim Scarfe. If you want to be exposed to completely new ideas and challenged intellectually, then this is probably the episode for you. Last time we had a philosopher on the show we had absolutely atrocious viewing numbers, but I'm a big believer that we need to be challenging our preconceptions on absolutely everything. Pedro Domingos said that there were only five tribes in artificial intelligence, and he didn't even consider the other tribe which not many people talk about: cybernetics. Cybernetics is the science of communication and automatic control systems in both machines and living things. We're going to discuss AI across three key dimensions today: computability, understanding, and the phenomenological experience, or consciousness. When we talk about artificial intelligence, what we basically mean is the science and engineering of trying to build machines that do things we might call clever. Professor Mark Bishop does not think that computers can be conscious or have phenomenological states of consciousness, unless we're willing to accept panpsychism, which is the idea that mentality is fundamental and ubiquitous in the natural world, or, put simply, that your goldfish and everything else for that matter has a mind. Panpsychism postulates that distinctions between intelligences are largely arbitrary. Mark's argument is distinct from Searle's argument that computers cannot understand, and also from Roger Penrose's view that some tasks which humans perform are simply non-computable. He thinks that there is no objective fact of the matter about which computations a physical system is computing, because of the observer-relative problem, which Mark will outline in great detail in today's episode. Many of the ideas we're going to discuss today are anathema to the current modus operandi in artificial intelligence research; just the reading list we got from Mark today will keep us busy for the next year. Mark's work in the philosophy of AI led to an influential critique of computational approaches to artificial intelligence through a thorough examination of John Searle's Chinese Room argument, and we'll be discussing that in great detail later. Mark is also the scientific advisor to FACT360, a startup deploying artificial intelligence using natural language processing for e-discovery, detecting malicious insiders, people who might pose a threat to your organization, by subtle changes in language in human communication networks, and they use sophisticated graph analysis to do that. Mark just published a paper called "Artificial Intelligence Is Stupid and Causal Reasoning Won't Fix It". He makes it clear in this paper that, in his opinion, computers will never be able to compute everything, understand anything or feel anything. For much of the 20th century the dominant cognitive paradigm identified the mind with the brain: you, your joys and your sorrows, your memories and your ambitions, your sense of personal identity and free will, are in fact no more than the behaviour of a vast assembly of nerve cells and their associated molecules; you're nothing but a pack of neurons.
That was Crick, writing in 1994. The Church-Turing hypothesis stated that every function which would naturally be regarded as computable can be computed by the universal Turing machine. If only computers could adequately model the brain then, so the theory goes, it ought to be possible to program them to act like minds, with their myriad features running the gamut from causal learning to reasoning and understanding. Even Bengio observed in 2019 that we know from prior experience which features are the salient features, and that this comes from a deep understanding of the structure of our world. The Church-Turing hypothesis has triggered an explosion of interest in biologically plausible neural networks; we had Dr. Simon Stringer on the show last week talking about the spiking dynamics and the temporal binding circuits which emerge when you create some of these biologically inspired neural networks. But I'm not just talking about the biologically inspired versions: even the relatively pedestrian vanilla neural networks that we all know and love on this channel have been a huge focus for the last 50 years or so. AI eschatologists like Ray Kurzweil and Nick Bostrom believe that there might be an intelligence explosion in which all of humankind will inevitably be crushed like ants, although viewers of this channel will well know François Chollet's response to that view. Alan Turing devised an effective method to play chess in 1948, many decades ago, but since then we've seen little progress in getting machines to genuinely understand, to seamlessly apply knowledge from one domain to another. Judea Pearl believes that we won't succeed in realizing strong AI until we can equip systems with a mastery of causation. He thinks we need to move away from simplistic probabilistic associations to machines which can reason causally, and he even proposed a so-called ladder of causation: seeing, doing and imagining, which I feel is almost self-explanatory, actually. Unfortunately for Pearl, DeepMind have already demonstrated several times a reinforcement learning system which can perform causal reasoning and counterfactual analysis, and it seems obvious to me, because if you're interacting with a system then of course you can learn the causal factors. I'd completely take Pearl's point that with traditional machine learning, where you're not interacting with a system, you can't learn any causal factors; that seems quite intuitive. Anyway, all of this is small fry compared to the point which Professor Bishop wants to make. The idea that these silicon-ensconced algorithms can become thinking machines becomes a little bit bizarre once you realize that a machine has no choice in what it does. Computation is not an objective fact of the world; it's observer-relative. Even Wittgenstein said that the meaning of a computation is in its use. He thought that understanding could not be a process, and therefore it cannot be a process of symbol manipulation; whether a given individual understands is often external to that individual. Mark's intuition is that evolution, autonomy and environmental interactions give rise to phenomenological consciousness. He thinks that we cannot be living inside a computer simulation, because he can feel the sensation of cool air on his face. So Mark thinks that the meaning of computation is relative and lies in its use by humans, and he gives several examples in the show this evening demonstrating precisely why he thinks this. I think Mark's main contribution is his Dancing with Pixies reductio ad absurdum.
He draws on Hilary Putnam. In 1988 the influential American philosopher published an argument in which he showed that, under the influence of gravitational waves and cosmic rays, the subatomic particles that make up all the objects of our world, your seat, the very clothes you're wearing, the room that we're in, are all undergoing a rich dance of subatomic particles, a dance that never repeats itself. Putnam realized that this is analogous to a state machine going through an infinite series of non-repeating states. So it then seems to me that if a computer, a Terminator perhaps, is conscious purely as a result of moving through a series of computational state transitions, then, if I know the input to that machine, with the input fixed I can generate exactly the same series of state transitions with any large counter, like a car's milometer, or, following Hilary Putnam's move, with any open physical system. So if a machine is conscious merely as a result of following some computation, then consciousness is everywhere: in the bricks of this building, the clothes that you're wearing, the very seat you're sitting on. They are all experiencing the zing of that orange, the warmth of that cup of tea and the memory of your first love's kiss. If machine consciousness is possible, everything, even the smallest grain of sand, is filled with an infinitude of conscious experiences. Bishop interprets Putnam's result to mean that computationalism demands that every physical system is host to a multitude of conscious minds, which he refers to as little pixies. Since a computationalist believes that to be a conscious mind is just to implement the right kind of computation, not only would we be surrounded by pixies, but the vast majority of conscious experience would be realized in these pixies: since any physical system is implementing any and all computations simultaneously, all possible conscious minds must be instantiated simultaneously in every physical object. For Bishop this is the most patently absurd manifestation of panpsychism, and thus demonstrates that computationalism must be false. It seems like a contrarian position: Bishop is saying that computation is very much in the eye of the beholder, whereas most of us think that computation goes on inside our brains. Anyway, the key takeaway from the Dancing with Pixies reductio ad absurdum is that computation doesn't have those phenomenological conscious states; a finite state automaton cannot give rise to conscious experience unless conscious experience is in everything. Bishop says that he's an embodied entity, which is to say he's not just thinking in his brain; he thinks with his body, and his body is in the world. In today's episode we also talk about some of the greats of computability, mathematics and logic, starting with Alan Turing on computability. He described a machine called a discrete state machine; I now call it Turing's discrete state machine, because that was the first time I read about it, in his work. Over any short time period we can replicate the behaviour, the different state transitions, of Turing's discrete state machine with any other device, such as a digital milometer, but when we add input the number of possible state sequences grows exponentially, so we can't easily do the same thing when you have a machine with input. But then I realized that if I know the input to one of these machines, that combinatorial state structure collapses again to a simple list of state transitions.
Turing invented this interesting thought experiment, the discrete state machine, and he had this physicalist desire to explain all of humanity via a computer program; interestingly, what he learned later in his career about the non-computability of numbers led to a significant amount of tension for him later in life. The American philosopher John Searle was so exasperated that anyone might seriously entertain the idea that computational systems, purely based on the execution of appropriate software, no matter how complex, might actually understand; to him it was ridiculous. He formulated the now infamous Chinese Room thought experiment, and we'll go into this in some detail in the show, but essentially he said that syntax is not sufficient for semantics, that programs are formal and minds have content, and that therefore programs are not minds and computationalism must be false. Most of the debate around the Chinese Room argument concerns the first proposition, that syntax is not sufficient for semantics, and we will come back to that later. Another interesting character is Gödel. Gödel's first incompleteness theorem famously stated that any effectively generated theory capable of expressing elementary arithmetic cannot be both consistent and complete; in particular, for any consistent, effectively generated formal theory F that proves certain basic arithmetic truths, there is an arithmetic statement that is true but not provable in the theory. Such a statement can be constructed for almost any such theory, it's often referred to as the Gödel sentence for that theory, and it was used in anger by Roger Penrose. He made the Gödelian argument that mathematical insight cannot be computable: he said that the mental procedures whereby mathematicians arrive at their judgments of truth are not simply rooted in the procedures of some specific formal system, and he followed up by saying that human mathematicians are not using a knowably sound argument to ascertain mathematical truth. Anyway, I really hope you enjoy the show today. I'm absolutely honoured that Mark came on to discuss this with us; I'm very interested in the philosophy of AI and the philosophy of mind. Make sure you read some of the material that Mark has signposted, and I'll link it in the description. Remember to like, comment and subscribe, and we'll see you back next week. Welcome back to the Machine Learning Street Talk YouTube channel and podcast with my two compadres, MIT PhD Dr. Keith Duggar and Alex Stenlake. Today we are speaking with Mark J. Bishop, Professor Emeritus of Cognitive Computing at Goldsmiths, University of London. Mark is interested in the philosophy of mind and artificial intelligence... sorry, that's my Siri; it seems to be very interested in getting involved in this conversation. Mark is interested... now it's playing music. Because AI is stupid, that's why; AI is really, really stupid. And actually that's a great segue for our conversation today, because Mark, our guest today, also thinks that AI is really, really stupid. Anyway, Mark is interested in the philosophy of mind and artificial intelligence and rails against what he calls computationalism (we'll get to that in a sec), machine consciousness and panpsychism. In 2010 Mark was elected chair of the Society for the Study of Artificial Intelligence and Simulation of Behaviour, the world's oldest AI society. He's been invited to advise on policy at the UN, the EC and the UK government. He's published three academic books and 200 articles, and won three million pounds' worth of research funding. He serves as associate editor of nine international journals.
His research has spanned the practice and theory of artificial intelligence, and he's regularly asked to comment on AGI, particularly in response to these AI eschatologists we were speaking about a few weeks ago, folks like Hawking and Musk and Kurzweil, who warn of the existential threat of an intelligence explosion. In one of Mark's recent papers he concluded that cognitivism, the whole idea of viewing the brain as a computer, and its concomitant computational theory of mind, is inappropriate, and that instead we should emphasize the role of foundational processes such as autonomy, exploration, autopoiesis (a strange word, isn't it: it means a system capable of reproducing and maintaining itself by creating its own parts and, eventually, further components) and social embeddedness in giving rise to a genuine understanding of our lived world. So, in summary, Mark thinks that computational theories of mind cannot explain human cognition; he thinks the claims of AI research that genuine conscious mental states can emerge purely in virtue of carrying out a specific series of computations are egregious. Now, I discovered Mark a few months ago because he'd published this paper called "Artificial Intelligence Is Stupid and Causal Reasoning Won't Fix It". It's actually a really cool paper, because it's a tour de force of all of the computational and philosophical issues surrounding AI at the moment. He kicks off in the paper by saying that AI is a brand tag that's becoming ubiquitous, but a corollary of this is that there are widespread commercial deployments where AI gets things wrong, whether it's autonomous vehicle crashes, chatbots being racist, or automated credit scoring processes discriminating on gender. And of course we have a whole load of people saying that we can improve it: Judea Pearl and Gary Marcus say that deep learning is just curve fitting, just reasoning by association, and that we only need to build computer systems that take things a step further and think about time and space and causality. But Mark takes the AI skepticism to a whole new level, because he thinks that machines cannot and will never understand anything. Professor Mark Bishop, welcome to the show. In your paper you talk about Crick, and you talk about Church and Turing giving rise to computationalism; what do those folks say? Well, in my paper I start off with the idea that has become known as Francis Crick's astonishing hypothesis: that you, and everything that we are, are defined by a particular set of neural firing patterns at any one instant. If we run with that idea to its logical conclusion, it would seem that if we have an appropriately high-fidelity simulation of just the brain, we can abstract away all the dirty chemicals, the neurotransmitters, the serotonin and the like, and just look at the neural firings, and we've got everything; everything else drops out for free. A lot of people, surprisingly, buy into this idea, and in fact it's one of the hypotheses that pushed the Human Brain Project, one of the biggest European Union funding grants of all time a few years ago; it was over a billion euros. That courted a lot of controversy, with some people saying the EU was putting a lot of its eggs in one basket, and a lot of people had doubts as to how much real science that programme would actually deliver.
I gave a very interesting presentation to the Human Brain Project group a number of years ago, because I was arguing, as Tim outlined in the introduction, that we were not likely to get conscious states; in fact I think there are good a priori arguments for believing we won't get conscious states in any computational simulation, no matter what that simulation is, no matter how fast it is or what algorithm it is, unless we bite the bullet and accept a very vicious form of panpsychism, the idea that conscious phenomenal states are living in everything: the very cup of tea or coffee that I'm drinking at the moment has its own mental states. As somebody who likes to pin my colours to the scientific mast, I find it somewhat implausible to believe that my coffee is conscious of me drinking it, and so we're led to reject that horn; and if we reject the horn of panpsychism then, unfortunately, in my opinion we're led to reject the idea that the mere execution of a computer program can bring forth conscious states. So that's, in a nutshell, the executive summary, if you like, of an argument I described as Dancing with Pixies, which purports to show that unless we're willing to accept panpsychism, computers will never have phenomenal states of consciousness. Now that is a distinct argument from the argument that people like Searle make, which says computers can't understand, and it's also distinct from people like Penrose, who say there are certain things that people can do, and Penrose very famously talks about mathematical insight, that are fundamentally non-computable. So that's my own minor contribution to the debate: I don't believe that computers can be conscious unless we're willing to accept a very nasty form of panpsychism. So I think some of that impulse, right, to break things down and to find the absolute minimum component that can implement everything human, that's a very Western, analytic, scientific approach to start with. So if we don't want to throw out analytics as a whole, what gap is there, or let's say what do we need to add to, say, artificial neurons, or even to the computational paradigm itself, to the Turing machine or the lambda calculus concept, in order to be able to create consciousness? What's missing? This unfolds hugely complicated stories, as you can imagine, and I'm hoping that at some point we'll get to engage in a little more depth with what the Dancing with Pixies reductio actually says, because otherwise it just sounds like an airy, hand-waving philosophical statement that I'm making; it might be quite interesting to go into the nuts and bolts of why I believe that argument works. But to come back to Keith's point: like many people working in the field, as a young teenager back in the 70s I taught myself to program and had an unhealthy interest in science fiction, and if you put those two things together you're led to the belief, which I held very strongly as a teenager, that we would build thinking, conscious machines, and that one day they would come to tyrannize mankind, enslave us, and go on to be the next stage of evolution. I'm not alone in that; I'm sure many people have entertained similar fantasies.
It wasn't until I went to university that that changed. My choice of degree at university was informed by these interests of mine, the peculiar interests of a teenage male, and I went to read cybernetics at the University of Reading; it was the only place in the UK where you could read cybernetics at the time, and that was a great education, not least because of some of the people there. My tutor, for example, Alex Andrew, was an early, first-wave neural net pioneer from the 40s and 50s, and famously gave a couple of great papers at the Mechanisation of Thought Processes conference, where people like Minsky and Rosenblatt were also presenting. So I was surrounded by old-school academics from that first wave of people working in neural networks, and that was a really interesting place to work. Cybernetics touches on all sorts of things, but where I was taught, in the UK, it was very much an engineering discipline, and so one of the things we had to do from year one was build computers. Not perhaps as you might think of building a computer, getting a board and plugging a few chips in, but literally starting out with TTL: building your own half adder, building your own address decoders, building a computer, literally, from individual TTL chips. Once you've done that, you get a very low-level but real engineering perspective on what's going on in a computer, and you don't tend to think anything too fancy. The idea that these things are thinking machines starts to become a bit bizarre, because you realize the machine has no choice in what it does. Imagine a balance, a ruler on a pivot: press the ruler down on one side and the other side comes up; it can't do anything but that. When you see that the logic gates in the computer operate in exactly the same way, that seems to be a very different mode of operation from the one we're used to entertaining when we think about human cognition. That said, all those thoughts were further down the intellectual line for me; as an undergraduate I was just interested in building these things. I guess it wasn't until my third year, when I was doing a joint degree with computer science, that one of my lecturers in computer science, a guy who went on to be a professor of computing at Oxford, called Richard Bird, introduced me to the notion of Turing non-computability, and also introduced me to Gödel, Escher, Bach, but that's another tale; a very interesting book which I'm sure you're all very familiar with. That was quite an intellectual shock, because prior to doing that course I'd had this idea, as a very modest young man, that you give me a problem and enough time and I'll bash you out a computer program that'll solve it. The idea that there could be problems that were fundamentally non-computable was a shock, and it took a while for that shock to actually seep in. I was a bit skeptical; even though I knew what to write to get a reasonable mark in the exam, I can't say my heart was completely wedded to the notion of non-computability, not least because the proofs are kind of weird when you get into them, there's lots of self-reference, and they seemed a bit bizarre to me as a young undergraduate. But then there was Roger Penrose. I came back to do a PhD at Reading, and I got to meet Roger Penrose just around the time he was publishing The Emperor's New Mind, or just before that time.
We got chatting, and I realized that my intuitions about the horror that non-computability might pose were being echoed by someone who even at that time was known as a bit of a polymath, the guy who taught Stephen Hawking, amongst other things, and worked with him. The fact that this guy was also voicing serious reservations, based on Gödelian ideas and on Turing non-computability, struck a chord with me. But the real big intellectual shock was this: again, I started out on the PhD in neural networks with the intention of building a thinking, living, breathing, conscious machine, and over the period of doing that PhD my position changed 180 degrees. After meeting Penrose, the next big thing was a conference I went to at Oxford; God, when would it be? It was around the time that Parallel Distributed Processing first came out, and Rumelhart came over to the UK for the first time to describe backprop. That's how long ago it is; I'm really quite old-school in all these things. It was the first presentation about backprop at Oxford, and it was a massive conference: the main room, which probably held about seven or eight hundred, was sold out, and they had two overspill theatres; we were in the second of these, with video links to the main stage. We heard these presentations about backprop, which was all very exciting, but the thing that blew my mind, having been brought up in an engineering discipline, cybernetics, was listening to two philosophers, because that was quite odd. The two philosophers were Searle and Dennett, and I'd never heard a presentation like it, because unlike the measured, dry presentations of how to control a servo mechanism, or a new algorithm for quicksort, or whatever it happens to be in our computer science and engineering talks, it turns out that philosophers argue in a much more fisticuffs kind of way. It was a shock, but also very engaging. And so I came across Searle describing his Chinese Room argument, which resonated with me at the time, and heard Dennett pooh-pooh it in his own unique way. Around that time, mid-thesis, I began to question where I was going: am I, and are my team, and are the people in the world doing exciting things with neural networks, really going to build these thinking machines? And gradually, over that time and in the few years that followed, I reversed that opinion, and the core intuition that drove all that, I guess, really was the Chinese Room argument. I think you'll find a lot of sympathy from a lot of people who actually work very close to AI, or to what's referred to as AI when we're trying to sell things; we'll raise an eyebrow in skepticism, because we know GPT is just a big pile of linear algebra, and the talk about the transformational effects really comes from the business side. You're among good company here. But to go back: you said you wanted to touch on this Dancing with Pixies argument that you raised earlier, and this links into the notions of panpsychism and how this relates to the idea that if you want to say a computer can be intelligent, that it can think, that it can understand things, then you end up concluding that anything can have this. Can you run us through it very briefly, for people who haven't read the paper:
the panpsychism argument, the Dancing with Pixies reductio, and how this all flows into the conclusion that computers cannot be intelligent? Yeah. One of the axioms on which this argument is built is the idea that computation is not an objective fact of the world; it's observer-relative. So I first want to give you a couple of examples that I think will underscore that axiom, because a lot of people would reject it as nonsense: surely what a computer does is a fact of the matter. Again I'll go back to my undergraduate days when, before we used TTL, we literally had to build logic gates out of transistors, so it doesn't get much more basic than that. Imagine you've built a set of transistors to perform the following electronic logic. You have two inputs to the circuit, call them A and B, and one output. If both inputs A and B are 0 volts, the output is 0 volts; if A is 5 volts and B is 0, the output is 0; if A is 0 volts and B is 5, the output is 0; and if A and B are both 5 volts, the output is 5 volts. What logical function, guys, is that performing? You might say, and you'd be right, that it's performing an AND, if we assume that 0 volts is false and 5 volts is true. Now, if I tell you that you were actually wrong in your assumption, and that 0 volts is true and 5 volts is false, what logical function is it performing? NAND? Who remembers their Boolean logic... there we go: it's performing an OR. In other words, the computational function this bit of electronics is performing is contingent on the observer-relative mapping between the electronics and the world. If I use a 0-volts-false, 5-volts-true mapping it does an AND; if I use the inverse of that, it's doing an OR. A Martian couldn't look at that circuit and say "that's an AND gate" or "that's an OR gate" without knowing the mapping, and that mapping is subjective: I might have one mapping, Alex might have another, Keith might have another. In fact a great Israeli scientist, Oron Shagrir, extends this argument pathologically and looks at multi-level logics, and the problem gets really weird if you go down that route, but I'll just stick to the simple case with the AND and the OR. So that's one reason why I say it's fundamental; it seems to me just axiomatic, and I'm baffled when people tell me this is not the case, but I still occasionally meet people who dispute it.
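To make the point concrete, here is a minimal sketch in Python (the table and helper names are illustrative, not from the episode): the same physical input/output behaviour, expressed in volts, reads as AND under one voltage-to-truth mapping and as OR under the inverted mapping.

```python
# The circuit's physical behaviour: output voltage for each pair of input voltages.
physical_truth_table = {
    (0, 0): 0,
    (0, 5): 0,
    (5, 0): 0,
    (5, 5): 5,
}

def interpret(table, mapping):
    """Read the circuit as a logical function under a volts -> bool mapping."""
    return {(mapping[a], mapping[b]): mapping[out] for (a, b), out in table.items()}

high_is_true = {0: False, 5: True}   # 0 V = false, 5 V = true
high_is_false = {0: True, 5: False}  # the inverted convention

print(interpret(physical_truth_table, high_is_true))
# output True only when both inputs are True  -> the circuit "is" an AND gate
print(interpret(physical_truth_table, high_is_false))
# output True whenever either input is True   -> the very same circuit "is" an OR gate
```

Nothing in the electronics changes between the two readings; only the observer-relative mapping does.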
So there's a second, follow-through argument, and it's built on the work of Winograd and Flores in their book Understanding Computers and Cognition, where they start to think about what a word processor is. I've reframed their argument a little, and I think about what a chess program is. I don't know whether any of you are old enough to remember, but in the 70s we used to have these little plastic chess computers: a board with little holes in the squares and tiny plastic chess pieces, and when you made your move you lifted a piece up and popped it where you wanted it to go, and then the computer would light up the piece you had to move and where it was to go. Using one of these gadgets I could quite happily play; I'm not very good at chess, so I could happily get thrashed by these machines day in, day out, and enjoy that thrashing, so to speak, and I could argue that I was using that piece of computational equipment to play chess. Now, in the UK there's a famous conceptual artist by the name of Tracey Emin, who does a lot of work with neons; I don't know whether you've come across that work at all. Also, in the 60s there was a big movement in what's called kinetic art, or cybernetic art, where people interacted with art pieces. So now, unbeknownst to me, Tracey has sabotaged my chess computer: she's ripped the innards out and she's wired all the inputs to pressure pads in an exhibition in an art gallery, and all the outputs to neon strips, so that when people walk over these pressure pads different neon lights come on and off. Now there's no sense, it seems to me, in which you could possibly say that when I walk around the art gallery I'm playing chess; I'm interacting with a bizarre piece of abstract art, certainly not playing chess. So it doesn't seem to me that there's anything intrinsically chess-like in this device. Yes, it was engineered very carefully so that, if I knew what I was doing, I could play chess with it, but I could use it in other ways as well. And the problem gets even worse if you've come across isomorphic games. Let's imagine plain old noughts and crosses: imagine you've got a noughts and crosses game on your iPhone. I have a six-year-old daughter who has just about got her head around noughts and crosses; I can keep her occupied for, I was going to say half an hour, but without exaggeration for five minutes, by giving her this thing, and she'll happily play noughts and crosses against the computer. Then she says, "Oh Daddy, I'm bored, what am I going to do?" "Ah well, I've got another game I can show you." "But Daddy, you've only got one game on the computer." "Don't worry about that: besides noughts and crosses I've got another game; I'm going to call it computer whist. Imagine you lay the deck of cards out, ace through to nine, and we take it in turns to pick cards from the deck; the winner is the first player to hold three cards that add up to 15." It transpires that if you've got a program that can play noughts and crosses, then with a suitable mapping you can get it to play a perfect game of computer whist: the grid becomes a magic square, where all the rows, columns and diagonals add up to 15, so when the computer makes its go it's marking one square, and when you choose your go it tells you which card to pick next, and you can play a perfect game of computer whist.
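A minimal sketch of the isomorphism Bishop is describing (the magic square is the standard one; the code is illustrative, not from the show): every three-card hand summing to 15 corresponds exactly to a three-in-a-row line on a magic-square board, and vice versa.

```python
from itertools import combinations

# 3x3 magic square: every row, column and diagonal sums to 15.
magic = [[2, 7, 6],
         [9, 5, 1],
         [4, 3, 8]]

rows  = [tuple(r) for r in magic]
cols  = [tuple(magic[r][c] for r in range(3)) for c in range(3)]
diags = [tuple(magic[i][i] for i in range(3)),
         tuple(magic[i][2 - i] for i in range(3))]
winning_lines = {frozenset(line) for line in rows + cols + diags}

# Every three-card hand (cards 1..9, no repeats) that sums to 15 ...
sums_to_15 = {frozenset(c) for c in combinations(range(1, 10), 3) if sum(c) == 15}

# ... is exactly a winning line on the board, and vice versa.
assert sums_to_15 == winning_lines  # the two games are isomorphic
```

So the very same program, unchanged, is "playing noughts and crosses" or "playing computer whist" depending only on the mapping the human user brings to it.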
So I can use that same computer program, with my mapping, to play a perfect game of computer whist, and you cannot say in advance, without knowing what I'm going to do with that program, whether I'm going to play tic-tac-toe or computer whist. I think those arguments together make, to me, a persuasive case, to paraphrase Wittgenstein, that the meaning of a computation is in its use by human computer users. The phrase, as you're all aware, is from Wittgenstein, and I'm paraphrasing: it's in the Investigations that he makes the claim that the meaning of a word is its use by humans in human language games, and I think the same applies to computation. The meaning of a computation lies in the use that we, as human users of computers, put it to. So that's setting the stage: there's always going to be this mapping at the physical level, and then there's always the question of what we're going to use a computation to do, and that's a very social, human activity. That's setting the stage for where I want to go with Dancing with Pixies. Now, the next move I make: I know you'll all have read Computing Machinery and Intelligence, Turing's famous 1950 paper; everyone's at least looked through it. As well as first outlining what became known as the Turing test, in that paper Turing outlines the operation of a very simple machine, Turing's discrete state machine as it became known. It's a beautifully simple machine: a disc-like device that can rotate in 120-degree intervals and can stop at the 12 o'clock, the 4 o'clock and the 8 o'clock positions as it moves around, and it can exist in each one of those discrete positions. We can describe the operation of that machine as a finite state automaton: if the machine's in state A, at the next clock tick it's going to go to B; if it's at B, at the next clock tick it's going to go to C; and if it's at C, it'll go back to A again. If we want, we can arrange that when the machine is in a given computational state it will do something; Turing imagined that when it's in computational state A a light would come on. You can also imagine there being a simple input to the machine, like a big lever brake mechanism that you could have on or off: if the machine was in a state and the brake was on, it would remain in that state; if the brake was off, it would go to the next state. One of the first interesting things, again, as with any computation: when you're given a Turing discrete state machine, to read off the computational state A, B or C we need a mapping between the physical position of the machine and the computational state to which it refers. We may define computational state A to be the 12 o'clock position, in which case when the disc is there we're in A, or we may define it differently, and the rest follow suit. We always need that mapping; we've always got to do these mappings between the physics of what's going on and the computational state that we're instantiating. So now we've got this machine: without the brake it just goes through A, B, C, A, B, C, A, B, C forever.
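A minimal sketch of the discrete state machine as Bishop describes it, assuming one particular (and, as he stresses, observer-relative) choice of which disc position counts as which computational state; the names and the mapping are illustrative.

```python
# Observer-relative mapping: which physical disc position counts as which state.
position_to_state = {"12 o'clock": "a", "4 o'clock": "b", "8 o'clock": "c"}

def step(state, brake_on):
    """Brake on: the disc stays put. Brake off: it advances a -> b -> c -> a."""
    if brake_on:
        return state
    return {"a": "b", "b": "c", "c": "a"}[state]

def lamp(state):
    # Turing imagines a light coming on in one particular state.
    return state == "a"

state = "a"
for brake in [False, False, True, False]:
    state = step(state, brake)
    print(state, "lamp on" if lamp(state) else "lamp off")
# b lamp off / c lamp off / c lamp off / a lamp on
```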
That's interesting enough as far as it goes, but inputless finite state automata are not very exciting machines: all they can do is go through a cyclic series of states forever, in an unbranching series of state transitions. What is interesting is that in the appendix to Hilary Putnam's Representation and Reality there's a little-known proof that shows how we can map the operation of any inputless finite state automaton onto a large digital counter. Actually, Putnam goes further and shows that we can map it onto any open physical system, an open physical system being one that's open to gravitational waves, the electromagnetic spectrum and the rest of it impinging on it, but for simplicity, and without loss of generality, let's just imagine we can map the operation of any inputless FSA onto a bloody large digital counter. How does Putnam do that? Well, take Turing's machine. He just says: if the computation is in state A, I'm going to map that to digital counter state zero; if it's in state B, I'll map that to counter state one; if it's in computational state C, I'll map that to counter state two; then A again goes to three, B again to four, C again to five, and so on. Then, over any finite time period, we can replicate the state transitions of our discrete state machine with the numbers we're cycling through on our digital counter. Again, you might say: so what? That doesn't seem a particularly threatening result for computationalism at first sight, because real computations are much more complex devices than inputless finite state automata. Well, in a paper called "Does a Rock Implement Every Finite-State Automaton?", David Chalmers responds to this argument in an interesting way. He says: yes, I'll concede, if you like, that we can implement really trivial machines like these inputless finite state automata using Putnam's mapping, but when we want to look at machines with input this breaks down, because we get a combinatorial explosion in the states that we need. Chalmers introduces a very neat construction called the combinatorial state automaton, which we could implement using Putnam's mapping, but at an exponential increase in the number of states required. The combinatorial state automaton is sensitive to initial conditions, and so, if we could implement it with Putnam's construction, it could genuinely be said to be implementing a computation with input, but at the cost that at every time step the number of states you need grows exponentially. Chalmers makes the point that after a very short number of steps the number of states needed is bigger than the number of atoms in the known universe, and hence Putnam's mapping must fail.
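A simplified sketch of the counter-mapping move Bishop attributes to Putnam (this only illustrates the trace-matching idea for the toy machine above, not Putnam's full construction over open physical systems): over any finite window, a plain counter plus an observer-relative mapping reproduces the inputless machine's run.

```python
states = ["a", "b", "c"]

def inputless_fsa_trace(n_steps, start="a"):
    """The inputless machine just cycles a, b, c, a, b, c, ..."""
    trace, i = [], states.index(start)
    for _ in range(n_steps):
        trace.append(states[i])
        i = (i + 1) % len(states)
    return trace

def counter_trace(n_steps):
    # The "bloody large digital counter": it just counts 0, 1, 2, ...
    return list(range(n_steps))

def mapping(counter_value):
    # Putnam-style mapping: 0 -> a, 1 -> b, 2 -> c, 3 -> a, 4 -> b, ...
    return states[counter_value % len(states)]

n = 9
assert [mapping(v) for v in counter_trace(n)] == inputless_fsa_trace(n)
```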
And that's where I entered the debate, because I made an incredibly trivial modification; all the hard work had been done long before I came to play this game, so to speak. My only trivial modification to Putnam's argument, which to me makes it robust to what Chalmers says, is this: look at any real machine which is claimed to have genuine mental states, conscious states, as it interacts with the world. This intuition was made real for me because, again, some people dispute it, but there are serious scientists who believe in the machine consciousness programme; there definitely are. My head of department at Reading cybernetics was one of those people, a guy called Kevin Warwick, and we, at Reading, built these little simple robots that moved around a corral, controlled by a neural network. Kevin said, well, these have got roughly the same number of neurons as a slug, and it's pure human bias if you say a slug has conscious experience and these robots don't. I thought that was a ludicrous claim, and that inspired me to develop this Dancing with Pixies reductio. So, to come back to the case, I said: right then, Kevin, you say your robot, as it moves around the corral over a finite time window t1 to tk, experiences something that it is like to be a robot bumbling around the corral, not bumping into things. I don't know what that is, but let's just imagine it has some conscious experience. What I can do is log all the inputs to that machine and then play them back to it. So I now lift the robot out of the corral, I disconnect all its sensors and actuators, and I just inject into the robot the states it would have got were it whizzing around the corral under its own steam. Does the machine still have conscious states? Well, yes, of course it does: it's reading the numbers from a latch; the data was originally taken from an A-to-D converter, for argument's sake, and we're now plonking that data in there from an injection system, but the computer still has the phenomenal states. Or so Kevin Warwick asserted, and that, unfortunately, led him to the problem he was going to encounter. Because if that is the case, we can collapse the exponentially growing number of states that Chalmers showed we would need if we actually wanted to implement fully all aspects of the computation using Putnam's mapping. If we just look at the particular computational trace, we just need the inputs to that machine that pertained over that time as the machine did its little thing, and then we can remove all the counterfactual states, and once we've done that we've got a linear series of state transitions that we can reliably map, using Putnam's mapping, onto a counter. Hence, if it's the case that Kevin Warwick's little robot was conscious, then so must our counter be conscious, and then, after Putnam, so must any open physical system. So that, in a nutshell, is the DwP reductio. It's interesting that in a lot of these seminal debates about AI, about Penrose, about Searle, about my own small contribution, there's a lot of confusion; people can very easily misinterpret what's being said. A lot of people get hung up about whether a rock genuinely implements a computation, and I think Chalmers has completely proved that it does not; I have no problem with that. Can we make a rock, with a suitable mapping, implement an arbitrary series of state transitions? Yes, I think we can, and I think we can make any counter do that. And because we always use a mapping, whatever system we use, I don't think there's any sleight of hand involved here, because all computational systems involve an observer-relative mapping somewhere along the line to get them to work, if only in assigning logical truth to five volts and logical falsity to nought volts. So the use of a mapping is nothing underhand, given that all I've produced is an arbitrarily complex series of state transitions.
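A minimal sketch of the collapse Bishop describes (illustrative code, not his): once the input log is fixed, the machine-with-input traces out one linear series of states, and a counter plus a lookup mapping reproduces it step for step. The mapping here is read off the trace itself; that circularity is exactly the observer-relative move Bishop says every computational system already relies on.

```python
def fsa_with_input(state, brake_on):
    # Same toy machine as above: brake on holds the state, brake off advances it.
    return state if brake_on else {"a": "b", "b": "c", "c": "a"}[state]

logged_inputs = [False, True, False, False, True, False]  # the replayed input log

# Run the machine against the fixed log: every untaken (counterfactual) branch disappears.
trace, state = [], "a"
for brake in logged_inputs:
    state = fsa_with_input(state, brake)
    trace.append(state)

# A counter plus a mapping reproduces the very same linear series of states.
counter_mapping = {i: s for i, s in enumerate(trace)}
assert [counter_mapping[i] for i in range(len(logged_inputs))] == trace
```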
So then the question is: if you're a physicalist, and I approached this problem originally as someone who likes to think of himself as not a mysterian, I don't want to appeal to supernatural forces to bring forth my consciousness. At one point in time it seemed that if you didn't believe in functionalism or computationalism then you had to believe in supernatural effects. Well, that is no longer the case: cognitive science has moved on a lot since the 1960s, there's an awful lot of new tools in town, and these are really exciting tools in my view. You highlighted a few in the introduction, Tim: things like the embodied, enactive, embedded and ecological approaches can go a long way towards answering, or at least giving us insights into, these questions without having to bring forth particularly supernatural notions. So I'm going to put that to one side: we're no longer faced with a choice of either accepting computationalism or accepting Putnam's rock. There are a lot of interesting responses to it, but I want to point out a couple of things, or maybe just ask you about a couple of things. One is that, first, I think what your goal is, and correct me if I'm wrong, with the Dancing with Pixies argument, is to say simply that if we accept, say, Turing-complete computation, or even in this case finite state machines, but I think we can probably go one step further: if we accept that effectively computable systems can implement consciousness, then we also have to accept panpsychism. Correct? That's what I try to show, right, because once you have a system that you claim, as my boss Kevin claimed, is conscious, I can look at what happens as that machine interacts with the world and log all the inputs to it; I can trace the execution flow of the machine code that controls the robot, and then I can implement that arbitrary series of state transitions, exactly that, with an appropriate mapping, on a digital counter. So if that machine is conscious, then my digital counter plus this mapping must be conscious. That's the position I arrived at. Now, when I was chatting to David Chalmers about this, he said: oh no, no, no, you've gone off the road, because we need the full potential of the computation to be there for functionalism to hold. This is quite a mysterious view; in fact it was so mysterious that when he first said it he had to repeat it about three times, because I'm not the quickest on the uptake and I found what he was saying so bloody bizarre. But when I did unpick what he was saying, Chalmers' claim is that you actually need the counterfactuals. Once you effectively slice off the potential counterfactual actions, by saying, well, I know the input at this point in time, so I'm going to replace the counterfactuals in my program by direct go-to statements, if you like, or just snip them from the program, to me that couldn't possibly affect the phenomenal state of the system, because otherwise you're saying that non-entered branches of a computer program have a causal effect on the phenomenal state of your machine. But the bizarre thing is, that's exactly what David said: no, you've got to have the potential for counterfactuals there, otherwise the machine isn't genuinely instantiating phenomenal states.
So if we just assume, for the sake of argument, that we don't accept panpsychism, and that these arguments prove that accepting computationalism implies panpsychism, then we also have mathematical results that say that Turing computation, or effective computation, encompasses all computation; there is no other kind, unless there's hypercomputation. And so I think what we're saying, and correct me if I'm wrong, is that there exists hypercomputation and that human minds are performing hypercomputation. Just to take this back to the rock for a second: one issue with that mapping is that rocks actually have physical states that may be real numbers. They may have values that in and of themselves are not computable; they can have positions and quantum states whose values are essentially defined by infinite-precision real numbers, and which are therefore not even accessible to computability to start with, not even describable. So are we saying, because I'm always looking for where consciousness is hiding, if you will, that it's hiding in real-valued states, maybe quantum states, as Penrose would say, perhaps in microtubules or something like that? Is that a form of hypercomputation, and is that where our consciousness derives from? Where do we draw the dividing line? Because that's something that, and I've read quite a few of your papers in preparation, as you do when you're going to speak with someone, this dividing line, where we go from the computational, and essentially impossible-to-achieve intelligence or understanding or any measure thereof, to the point where we have an intelligent system that can understand its world and redefine itself and redefine its world, that distinction is not terribly clear, and as we dive into this question I want to drive towards where this distinction lies, if it exists. Well, to pick up on Keith's point first: I think I'm neutral. Penrose has given a positive thesis as well as a negative one. Famously, in The Emperor's New Mind he gives his first version of the Gödelian argument that purports to show that mathematical insight is non-computable, and then says, well, this suggests to me that non-computability lies at the heart of what it is to be human; and then, with Stuart Hameroff, he outlined a positive thesis which proposes to show that non-computability can arise in the brain through the orchestrated quantum collapse in the microtubule skeleton of brain neurons. I'm perhaps neutral on this. I know that when Penrose held a Psyche symposium on his work in 1995 it attracted a lot of responses, well over twenty, and I don't think any of his logical work was seriously brought into question, that is, his interpretation of Gödel, even though that first version was actually naive compared to the work he put out in Shadows of the Mind, which takes a much more nuanced approach to the argument in my opinion; nonetheless it wasn't seriously criticized, whereas nearly everybody criticized his positive thesis. So, yeah, Penrose is a clever guy and it's interesting, but I'm not going to nail my colours to that flag particularly; if it works, great. I've been more drawn to modern approaches to cognitive science, and there isn't, unfortunately, a very quick six-page paper that can lead people gently into this.
But there are four schools. All this work really started out with a working roboticist at MIT called Rodney Brooks, who wrote a classic paper, which I guess you guys are familiar with, called Intelligence Without Representation. You see, when I did robotics, old-fashioned, old-school representational robotics, as a young postgrad, a lot of our work was trying to take data from sensors and build rich internal models of an out-there world. I remember we spent all our budget on buying the biggest, fastest computers we could possibly afford at the time and strapping them onto these poor little autonomous vehicles; they were laden down with computing power and they moved absolutely tragically slowly. We're going back to the early 90s now, but they were pathetically slow things, really embarrassingly bad, because a lot of the work was trying to build up these models of the world. I know that these days we can do that kind of thing bloody quickly, but back in the day you couldn't, and Brooks thought: well, do we need to do it at all? In that paper he argued that we didn't: why build the representation when we can use the world as its own representation? And in a sense that paved the way for thinkers like Francisco Varela, in a book called The Embodied Mind with Evan Thompson and Eleanor Rosch, to start thinking about different ways of doing cognitive science. The Embodied Mind is a mind-blowing book; it throws over your whole view, in the same way that Gödel, Escher, Bach can be quite mind-blowing when you're a young kid, but in a kind of weirder way, because it actually questions the existence of a fixed, pre-given, out-there world, and that was quite a shock to me when I first came across these ideas, not being a trained continental philosopher. My first mode of engagement was with Varela, who incidentally started out as a theoretical biologist and was then very active in the A-life community, so I think his initial work also came from a sciencey perspective, but he engaged quite deeply with European philosophy, of which at that point in my life I was totally ignorant. So this led to the development of alternative schools of thought about what cognition is all about. The enactive school is one that I'm interested in, and it itself is these days split into numerous sub-approaches. One of these, developed by Kevin O'Regan and Alva Noë, argues that visual consciousness is something that we do. They're moving away from the idea that vision is like interpreting a scene the eye gets from the world, with your brain housing some little cinema whose contents you then interpret; they make the case that vision is more akin to an activity, something we do, a guided sensorimotor exploration of the world. Varela himself was particularly interested in ideas of autonomy, in how things can become meaningful. There are all these very complicated debates that, to try to come back to your question, Keith, and your question, Alex, we'd need to touch on, but it's really challenging to convey them in an intelligible way in a relatively short period of time.
You would at least agree, though, that your contribution, Penrose and so on, point very strongly to there being something embodied, something physical, that we haven't quite figured out yet; maybe it's microtubules, maybe it's something else, you're agnostic on that, but there's something physical that allows it. My intuition is that it's to do with autonomy, and this is why we bring in the idea of autopoiesis that Tim mentioned at the beginning. Autopoiesis goes back to the 70s, when the Chilean biologists Humberto Maturana and Francisco Varela came up with a theoretical device for delineating life from non-life, because, astonishingly, this has been a really difficult problem. You'd think it was solved a hundred years ago; it hasn't been, and even as recently as when Margaret Boden was writing on this, one school tried to say "life is..." and then enumerate a list of properties: it has to metabolize, it has to reproduce, blah, blah, blah. Maturana looked at it from a different perspective: fundamentally, life is a system that has a circular organization. It's got to be able to maintain its own boundary between itself and the other, and it has to encapsulate the rules, the autopoietic rules, that maintain that boundary in the face of a changing environment. We spoke to Karl Friston quite recently and he was talking about Markov blankets, which is quite interesting: how do you define the boundaries of a physical system, does a hurricane have a Markov blanket, and what we're talking about here, in a very general sense, is defining boundaries between what lives and what doesn't live, what is meaning and what isn't meaning, what is understanding and what isn't understanding. It's very philosophical, and quite difficult to pin down. If you look at Maturana and Varela's book, their original treatment, Autopoiesis and Cognition, from the 70s, it's about 60 pages and it is very dense; it's not a waffly philosophical book, it's quite hardcore, quite mathematical. I think they do an interesting job of pinning down what it might be for something to be alive, and this challenge, which was explored in The Embodied Mind, has since been developed by Evan Thompson, an interesting American philosopher who wrote a book called Mind in Life, where the argument is laid out that life is a continuum, and wherever you have this continuum of life you have a proto-mentality. I'm kind of drawn to that; I think it's a very persuasive argument. Then we have to look at the question of what constitutes autonomous systems, and why it should matter whether an autonomous system has a phenomenal sense of what it is like to be. Here you can link to the work of a guy I've just recently come across, who wrote to me a few weeks ago; he used to run a big AI lab in France, Mikhail, I think his name is, and he argues that we need phenomenal consciousness to arbitrate between different actions. If you're sending a robot to Mars, it absolutely has to be completely autonomous, and it has to react appropriately in unknown environments to all sorts of different threats. Effectively the robot has to know when it's in a state that feels pleasant and when it's in a state that's horrible, dangerous, something that might cause its death; we use the phenomenal sense of what that feels like to arbitrate between different actions.
and this is actually by the way an idea i first came across through a paper by daniel dennett called cognitive wheels the frame problem of ai where he looks at what must be known to arbitrate on what he called the cookie problem imagine you've got a big jar of cookies and some little kids like my six-year-old daughter and two families next to each other in one family when the child goes for a cookie the family beats it smacks it relentlessly until it's in tears and it doesn't go for any more cookies after that and in the other family they're very touchy-feely oh no please don't have another cookie tarquin and occasionally tarquin just goes and has another cookie dennett asks the question why is it that beating your kid causes that child not to go for the cookie jar anymore and we know why because being beaten is something deeply unpleasant and unless you're a masochist you don't particularly want to do things that are going to bring that feeling of pain about but then you might say well we can arbitrarily hard-wire such facts in so i could hard-wire into my computer program if something biffs me then i'll increase my pain by one and if pain gets over a certain threshold i won't do that action again but that's incredibly brittle it's literally arbitrary unless we have phenomenality unless we have access to phenomenal states and know that getting biffed hurts or that going over rough terrain if you're a mars robot shakes you around a bit you have to sidestep all that by hard-wiring effectively hard-coding it as engineers and then the system is no longer autonomous because now we're having to define what it has to do for all these different possible states that's the price we have to pay so i think you can argue that evolution has blessed us with phenomenal consciousness so that we can act autonomously and that's the view i guess after evan thompson and varela that i come to so we need consciousness to enable us to succeed evolutionarily
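as a rough illustration of the brittleness being described, here is a minimal sketch of the hard-wired pain-threshold policy, assuming a hypothetical rover class whose actions, damage signals and threshold are all invented for illustration and stand in for whatever an engineer would have to enumerate by hand

```python
# minimal sketch of the hard-wired "pain threshold" policy described above
# (hypothetical example: the actions, damage signals and threshold are chosen
#  by the engineer in advance, which is exactly the brittleness being criticised)

PAIN_THRESHOLD = 3

class HardWiredRover:
    def __init__(self):
        # pain accumulated per action, hand-coded rather than felt
        self.pain_by_action = {"cross_rough_terrain": 0, "approach_cliff": 0}

    def report_damage(self, action):
        # "if something biffs me then i'll increase my pain by one"
        self.pain_by_action[action] += 1

    def is_allowed(self, action):
        # "if pain gets over a certain threshold i won't do that action again"
        return self.pain_by_action[action] < PAIN_THRESHOLD

rover = HardWiredRover()
for _ in range(3):
    rover.report_damage("cross_rough_terrain")
print(rover.is_allowed("cross_rough_terrain"))  # False: this action is now ruled out
print(rover.is_allowed("approach_cliff"))       # True: no damage was ever anticipated here
```

the point of the sketch is that any hazard outside the enumerated dictionary simply does not register so it is the designer rather than the robot who is doing the arbitration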
well haven't we just fallen foul of the kind of prime axiom of software engineering at this point that every problem is just a level of abstraction away because i forget which paper it was but you had a really great conclusion in one of your papers where you essentially said if we built a bunch of robots that looked and behaved just like us they're automata they laugh at our jokes they respond but the thing that defines the difference between us and them would be that when we're laughing and feeling we feel it it's phenomenological and when they do it it's not it's a simulation but leading into this argument it's like well for these martian robots to have autonomy and to succeed they need to have this phenomenological state so isn't this really just a software engineering problem away from being solved i think when i wrote that paper i hadn't come across mikhail's work i only came across it very recently literally and i'm hoping to reach out to mikhail to see if we can do something together so i'm quite excited by these essays he sent me i don't know why it just hadn't occurred to me that this could be a reason why consciousness has evolved he makes a very persuasive case i'd love to claim it was my idea but it absolutely isn't i think it's a very beautiful one i need to understand more and either he will push it further or perhaps we might do something together i don't know but yeah i'm a little bit skeptical that without the glue of consciousness we could get a machine to act as a simulacrum of you in all possible cases in that paper i was arguing from the stronger position i guess well let's just assume that we can engineering-wise i'm a little bit skeptical now i'm following mikhail's work that that is going to be possible but also to come back to another point are we going to talk about the chinese room at all or are you assuming that's boring for your viewers no no i think that's one of the most important things because we're talking now about consciousness and the various boundaries between what is and what is not conscious but the other one that's really important is understanding and the boundaries between what is and what isn't understanding you say in one of your papers what does it mean for a central processing unit to understand does it understand the program and its variables in a manner analogous to searle's understanding of his rule book and this rule book of searle's describes a procedure as you say in one of your papers that if carried out correctly allows searle to participate in an exchange of uninterpreted symbols squiggles and squoggles which to an outside observer looks as if searle is accurately responding in chinese to questions in chinese about stories in chinese in other words it appears as if searle in following his rule book actually understands chinese even though searle trenchantly continues to insist that he does not understand a word of the language so this has been debated i think very thoroughly and you had a paper recently introducing a couple of other responses because there were four responses to searle's argument right there was the robot reply the systems reply the brain simulator reply the combination reply and in your recent paper you talked about robots and animats well in the target bbs article there were a lot more than four when searle wrote the paper he came up with four possible counter-arguments which are the classic ones that tim just outlined but from memory there must have been over twenty people really big names in philosophy and ai who responded to that bbs target article people like marvin minsky mccarthy dennett obviously god i've forgotten some of the names but big big names who gave different responses so there were a lot more than the four that searle discussed but i mention those because what's been bizarre is that i edited a volume with john preston on the twenty-first anniversary of the chinese room argument twenty-one years on we put together an edited collection of responses to it from leading ai scientists cognitive scientists and philosophers and i still think that's a good collection of essays and a good collection of people to contribute to that volume and in the intervening years between the chinese room argument coming out and that volume coming out in 2002 and now i've talked about this as you can imagine in many places most of the uk universities quite a few in europe and one or two in america and nearly always the most formidable responses that i've come across really go back to the responses that searle actually predicted and by far the most common and probably the strongest response is
some variant on what became known as the systems reply that you mentioned tim can you just quickly define what the systems reply is do you mind if i just go over the argument again because we went over the essence of it really quickly and i'd like to unpack it slightly more slowly so to set the scene searle is a monoglot english speaker shamefully like myself shamefully because i'm married to a greek lady and i still can pretty well only communicate in english and searle imagines himself locked in a room and this room has effectively got a letterbox instead of a door through which he can communicate with the outside world and in the room are three piles of paper and on these papers are strange symbols that searle doesn't know anything about we the people reading about the experiment know that these are actually chinese ideographs but to searle they're just uninterpreted squiggles and squoggles he's got no idea what they are so you've got these three piles of things and on the desk there's a big grimoire a book that tells searle how to correlate the symbols from the first pile with symbols in the second and other rules that tell him how to correlate symbols on the first pile with the second pile and also link in symbols on the third pile and further rules that tell him how to take symbols from one of these piles and post them through the letterbox to the outside world well unbeknownst to searle the first pile defines a script in chinese the second pile describes a story in chinese and the third pile contains questions about that story in chinese and the symbols searle is told by the book to post out to the people in the outside world are answers in chinese to the questions about that story and searle's point arguing again from the strongest position is okay let's concede that rule book however it's defined and again a lot of people went wrong here because they thought searle was purely talking about a naive pattern-matching program if this symbol then do that but searle actually makes it clear if you read the paper carefully that he wants the book to stand for any conceivable computer program this was the first reaction that i had i had an allergic reaction to it because as we know from talking to walid saba we can't write the damn compiler for language it's too complicated so it's not really possible even for us to explicitly understand and verbalize the rules that we use in language and you said yourself that artificial intelligence practitioners were incredulous at the extremely simplistic view of searle that you could have language described by these low-level rules yeah but that's because they didn't read the paper carefully because searle makes it absolutely explicit that he generalizes he gives a simple example just to get you thinking about the problem and yes some people have tried to do language understanding in this very naive way but searle wants the book to stand for any possible program all we're doing is following the rule book which tells searle how to manipulate uninterpreted symbols and put uninterpreted symbols out of the door how it does that whether it's implementing a neural network whether it's implementing a genetic algorithm whether it's implementing walid's sentence-based compositional approach to natural language understanding whether it's doing gpt-3 kind of operations is irrelevant searle says whatever your program is that's what's in the book and at the end of the day that program will tell me how to respond to questions in chinese with answers in chinese if i follow that program carefully and don't make any mistakes i'll give answers out the door and if your program's any good it will give answers that are indistinguishable from those a native chinese speaker would give even though as searle trenchantly insists i in following this program have not acquired even an iota of chinese semantics all i've been doing is acting like a mega-fast idiot savant manipulating uninterpreted symbols around and sticking symbols i don't know what the hell they are through a letterbox to the outside world
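to make the shape of the claim concrete, here is a minimal sketch of the room's input-output behaviour, assuming a hypothetical rule_book interface all the names here chinese_room table_book reversing_book are invented for illustration the only point is that the operator executes whatever program is handed over as blind symbol manipulation whether that program is a lookup table or something arbitrarily sophisticated

```python
# minimal sketch of the room's input/output behaviour (hypothetical interface):
# the "rule book" can encapsulate ANY program over uninterpreted tokens, and the
# operator just executes it step by step without ever interpreting the symbols.
from typing import Callable, List

RuleBook = Callable[[List[str]], List[str]]  # tokens in -> tokens out

def chinese_room(rule_book: RuleBook, incoming_tokens: List[str]) -> List[str]:
    # the operator's entire job: apply the book to the squiggles and post the result
    return rule_book(incoming_tokens)

# two very different "books" that present the same kind of interface
def table_book(tokens: List[str]) -> List[str]:
    canned = {("squiggle", "squoggle"): ["squoggle", "squiggle"]}
    return list(canned.get(tuple(tokens), ["squiggle"]))

def reversing_book(tokens: List[str]) -> List[str]:
    # stands in for an arbitrarily sophisticated program (n-gram model, neural net, ...)
    return list(reversed(tokens))

print(chinese_room(table_book, ["squiggle", "squoggle"]))
print(chinese_room(reversing_book, ["squiggle", "squoggle"]))
```

on searle's telling swapping one book for a cleverer one changes nothing about whether the operator or the room grasps what the tokens mean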
does that imply then that it's just observationally impossible to determine whether a black box is conscious well i wrote a counter-argument to susan i can't pronounce her surname her turing test for machine consciousness where i make that exact claim this was in frontiers in robotics or one of the frontiers journals a couple of years ago because susan says if we ask questions that are about particularly human activities relating to phenomenal experience we'll be able to tell whether the machine is conscious and she gives a procedure for doing this i would say well take whatever set of questions susan you have for deciding whether your machine is conscious unbeknownst to you i'm going to sit in and listen to what you ask then i'm going to go away and write a little program in basic because i'm good at writing in basic that says if the question is susan's first question give this answer which is the answer that a really complicated machine consciousness program gave in effect a look-up table and because susan doesn't know because i sneakily switch the machines she asks her questions thinking she's talking to a really complicated machine consciousness system she asks the questions which she claims will tell her whether this machine is conscious but she's actually just interrogating a really simple program and at the end it gives the answers she wants and she says yes that's conscious but she's just been talking to a look-up table so yeah i think you're quite right keith it isn't obvious to me how we're going to be able to do a test for machine consciousness purely on the basis of external observation in the absence of anything else because if we're machiavellian we can always cheat
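the little program described here is easy to picture a hedged sketch follows assuming a hypothetical fixed question list the canned answers below are invented and stand in for the outputs of whatever elaborate system was quietly swapped out

```python
# minimal sketch of the look-up-table cheat described above (hypothetical questions
# and answers; the canned replies stand in for whatever the "really complicated
# machine consciousness program" would have said to the same fixed question list)
CANNED_ANSWERS = {
    "what is it like for you to see red?": "a warm, vivid presence i cannot fully describe",
    "could you survive the permanent loss of your body?": "i feel i would still be me, somehow",
}

def fake_conscious_machine(question: str) -> str:
    # no model of experience at all -- just string matching on the expected questions
    return CANNED_ANSWERS.get(question.lower().strip(), "that is a deep question")

print(fake_conscious_machine("What is it like for you to see red?"))
```

any purely behavioural test whose questions can be anticipated can be gamed this way which is why external observation alone is argued not to settle the question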
the thing is when you were talking about it not having semantics i want to unpick that a little bit because you said with the robot rover on mars the semantics there is the state space of all of its sensory experiences and then you said it's brittle and there's an alignment problem and i can understand that it's very similar to the ai alignment argument but this is different if you can replicate the behaviour let's say you're talking to a black box and you can't distinguish whether or not it has consciousness or whether it has understanding why is there an issue with semantics in that case ah you've wrong-footed me i thought you were going to say how can i tell that anybody understands or is conscious which is actually one of the core responses that searle anticipated in the chinese room paper that is the obvious question as well like presumably do you think that we are conscious because we might exist in a computer simulation right well i don't think we can be because i know that if i slap my face it hurts and i know that machines can't instantiate phenomenal consciousness i wrote a paper called refuting digital ontology which came out of an invited talk at the royal society workshop on the incomputable hosted by barry cooper who was the leader of the turing centenary celebrations in the uk and worldwide and i made this very argument then it's clear to me that we're not in a computer simulation because i feel and if my dancing with pixies reductio argument is correct then unless i'm willing to accept panpsychism computations can't realize sensation so either my dancing with pixies reductio is wrong in which case i'll be sad but in another sense i'll be happy when you show me where i'm going wrong or if i'm right then we're not living in a computer simulation so i don't think we are in computer simulations furthermore it's an axiom of cognitive science that other minds exist so it's not for me to have to explain why i believe that you three guys have phenomenal conscious states part of cognitive science is acknowledging that you do and trying to come up with a theory that explains why and how these states occur to perhaps play devil's advocate a bit here when i first came across the dancing with pixies argument it wasn't an argument that for me implied rocks had understanding or intelligence to me it implied that nothing has intelligence or understanding right but the conclusion i wanted you to come to is that computation doesn't have those phenomenal conscious states interestingly a reviewer said this of one version i think of the 2009 computing paper aren't you arguing too strongly doesn't this prove that nothing can have consciousness no it doesn't it just says that the operation of a digital computer a finite state automaton to pin it down more precisely cannot give rise to conscious experience unless conscious experience is in everything that's all it says now if you don't bite that bullet well i think i'm more than a finite state automaton thank you very much alex there's more to me i'm an embodied entity i don't just think in my brain i think with my body and my body in the world and again there's a beautiful result that came from a guy who's a professor at goldsmiths and i'm going to take his work and extend it probably in ways that he would be uncomfortable with so this is my interpretation of work that's published in a number of papers in nature by jules davidoff on colour perception by the himba tribe in his work in africa i'll just describe this beautiful experiment because i would argue jules is a very cautious academic and he doesn't make the wild proclamations that i'm more comfortable making when looking at his work but basically jules did this beautiful set of experiments over a long period of time working with the himba on colour perception and when i first saw them they blew my mind because they appeared to support the idea that language informs not just the way that we package the world and how we label it that's kind of obvious but the way that we see the world so how did jules's work show this well he took a series of
colour slides like munsell colour slides if you're familiar with these precise blocks of colour and on one sheet for argument's sake there were different shades of yellow and on another there were different shades of green all with the same colour difference between one slide and the next so they went uniformly from say a dark green to a light green with the same colour difference between each of the slides and on the yellowy sheet if you showed that to europeans and said which is the odd one out then if you looked really carefully not always but often you'd say the one at the two o'clock position but it took you a lot of time you literally in your mind's eye compare all the slides with each other and if you were lucky you said yeah it was that one but i've done this test and sometimes i don't even see it myself it doesn't jump out it's not obvious it doesn't pop out and some of the time i make that decision wrongly now you show that to the himba and they sort of smile it's the two o'clock one immediately no problem it leaps out at them and then we get to the green ones now among the green ones there's a blue tile so you show that to europeans and ask what's the odd one out and it's the blue one obviously you show that to the himba and the guy's scratching his head and stroking his imaginary beard and he can't work it out now when you look at that from a western linguistic perspective how can the person not see that the blue one is the odd one out well it transpired the himba have got a very restricted or very limited colour vocabulary compared to europeans and no doubt if you took someone from the himba and dropped them into london then after a period of time there they would perceive the blue pop-out just as quickly as we do it's nothing to do with the himba's visual system it's an effect brought on by language but to me what makes this really powerful is that it's a pop-out effect right the himba immediately get the one at the colour boundary that's important to them it pops out at them because they've got a name for it i can't remember what the word was but whatever it's an important boundary and it pops out at them straight away for us the blue green colour boundary is important and that pops out to us straight away and because it's a pop-out effect that says to me that it's actually about what it is like to see the slide the himba cannot possibly be seeing green and blue in the way that i am otherwise the blue would pop out at them they must be seeing something quite different they aren't seeing those green and blue tiles like i am in their mind's eye for want of a better word and similarly when we look at the yellow ones they must be seeing those quite differently from me because i can't tell the odd one out anywhere near as quickly as they can now i think this gives very strong evidence to me at least and this is the way that i would like to use jules's work and i must underscore that i don't think he would be comfortable with any of this but this is the way that i like to use it to say that language is actually affecting how we see the world so when i say that our understanding of the world is not just what's going on in the brain the brain is instrumental to how we see and perceive and interact with the world but it's not just that it's our entire body and the body in the environment and also within a social network of
language users all these things come together to enable us to form perceptions of what it's like to see and what it's like to speak so the idea that we could just reduce this to mere neural firings or even worse to some bloody manipulation of symbols by a computer is just ludicrous as far as i'm concerned i can see the logical thought train here so you're saying jules's work shows that language informs the way we see the world well that's one potential reading my interpretation of his work that's fine that's fine but i suppose where i'm going with this is that it's starting to get towards a very ultra-relativist constructivist view of the world and i want to put an anchor down the reason why it's interesting is that when you're talking about sensory states it doesn't actually seem like such a bad thing because everything you're saying there completely makes sense of course depending on where you are in the world and your language and so on you might have very different sensory states and you might experience colour differently but then there's like a topology isn't there because then you start getting into understanding and semantics and then you start getting into knowledge and truth and at some point people might start to disagree with you that we should have a kind of ultra-relativist view on what informs those things so where do you draw the line right well one of the lovely things about being an academic is that you get to work with some clever people i'm not clever but i've been blessed by working with a number of people who bloody well are and at the moment i've got a postgraduate who's mind-boggling i'd like to say it's the other way around but we go for a discussion and i come out feeling i've had my mind blown away and he's sitting there looking quite chuffed with the way the discussion's gone now his particular thesis raises some really interesting questions and he runs with these very post-modern ideas in extremis and in one particular supervision we were discussing i'm diabetic and he was trying to make out that he could give me this argument where my diabetes is just an enactment of a certain practice and i'd say that's nonsense i'll die if i don't do this how can you decide you know the sokal thing the sokal controversy or if i leap off a tall building i'm going to bloody well die it's not a social construct this but anyway paulo's work is very very nuanced and he does make some radical claims and when you make radical claims you've got to be very careful about how you use the language so i'm certainly not going to try and paraphrase his thesis in five minutes here because it's an incredibly nuanced piece of work but i just want to take one aspect from it that i think is interesting because i think it reflects on the world that we're living in now and gives us a possible insight into how trumpism and the echo chamber culture has taken root because paulo and i'm not asking that you go along with this i'm just trying to report it as best i can from our discussions paulo thinks that we don't just have epistemic perspectives on a universally shared world each of us can ontologically have our own distinct ontologies and why does that matter well it matters says paulo if you start looking at how this is reflected on twitter if you have a community of people and one of my other postgrads a
guy called kris de meyer did a film called right between your ears which i can't recommend highly enough a full-length documentary feature on an end-of-the-world cult in america and how these guys were saying that from their reading of the bible the world was going to end on a certain day and of course it didn't and how they then dealt with that and then the leader said it was going to end again in six months' time and yet again it didn't and kris's film was just looking at how people can arrive at these beliefs and i think there's something akin to that with the qanon movement you have this idea that somehow trump was going to win and the result was going to be overturned and then it wasn't and if you're not in that bubble you think how can people have these bizarre ideas well if you're in a community of like-minded people and if we go back to our basic philosophy and don't buy into a correspondence theory of truth so much as a coherence theory of truth where a proposition is true because it coheres with the body of other propositions that you take to be true then when you're interacting on a daily basis with people in your facebook bubble who all share the same beliefs beliefs that from outside that bubble look bloody bizarre they're reinforcing this thing so your ontological view of the world becomes really real you're drawn into this and that's why people get so aggressive about it because it's not as an academic might abstractly discuss the truth or falsity of i don't know newton's laws of motion or something this is what the world is you're questioning something fundamental about these people's experience of the world we've covered a couple of big names tonight we've had turing come up and searle the other big name that's been floating in the background is gödel in terms of gödel escher bach the famous book on human creativity and you point to the gödelian argument i will skip the other guys but the gödelian argument against the possibility of machine intelligence just for the guys out there that want to get a bit of a grounding on why his name has been coming up so much can you give us a very brief primer on how gödelian incomputability relates to the impossibility of machine intelligence and how that ties into the other arguments we've been hearing tonight yeah i think in the arxiv paper which you guys helpfully signposted at the beginning artificial intelligence is stupid there's as best as i can give it a one-paragraph summary in mathematical form of the gödelian argument and i don't think it would be helpful to go through that line by line now if your viewers are interested i refer them to the detail in that paper but to take a slightly more abstract wider view i'm sure gödel and turing were both aware of the implications of their work on logic in gödel's case and on computability in turing's case and in fact some people have made the claim that for turing and possibly for gödel as well this disconnect between turing's avowedly physicalist desire to explain all of human mentality via a computer program and what he'd learned professionally about the existence of non-computable numbers led to a big tension in turing's life and there's a documentary dangerous ideas
i think it was called that explored gödel turing and cantor in the context of people who came up with ideas in their own work that challenged intuitions they had about the world but coming back to gödel although i feel certain that gödel and turing had both in their own minds gone through the implications of their work as far as i'm aware one of the first times this was actually cashed out in an academic paper was by the oxford philosopher john lucas in a series of exchanges with another philosopher called paul benacerraf in the 60s and to cut to the chase i think that lucas's argument is much more blunderbuss and less sophisticated than penrose's version of it so i commend anyone interested to look at shadows of the mind rather than go back to lucas but lucas basically says for any consistent mathematical system there will be sentences in that system that we outside of it can see to be true the gödel sentence of that system but which can never be shown to be true by that system unless the system is inconsistent and so lucas says you give me any version of computation that you like and i as a human can step outside of it and see things about that computational system that it provably cannot know for itself and that's i think the first place where this argument took off penrose's own take on this is a little bit more nuanced in shadows of the mind he looks at computations of one parameter and effectively at the halting problem if you're familiar with that for a function of one variable that is is there a set of rules that will allow us to tell of any computation on one parameter whether that computation terminates and he shows how assuming there is leads to a contradiction and if we follow the lines of the proof there comes a point where for the particular computation k given k as input we can see something about that computation namely that it cannot possibly terminate which cannot be shown to be true by following the rules of that system itself and i think that's an interesting argument
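for readers who want the shape of that contradiction on one page, here is a compressed sketch of the standard halting-style diagonalization that this version of the argument builds on it is a paraphrase of the textbook construction under the stated soundness assumption not a quotation from the paper or from shadows of the mind

```latex
% sketch of the diagonal step (paraphrase of the standard construction)
\documentclass{article}
\usepackage{amsmath}
\begin{document}
Let $C_q(n)$ denote the $q$-th computation applied to input $n$, and suppose
$A(q,n)$ is a sound procedure in the sense that
\[
  A(q,n)\ \text{halts} \;\Longrightarrow\; C_q(n)\ \text{does not halt}.
\]
Since $A(n,n)$ is itself a computation of one parameter, there is some index $k$
with $C_k(n) = A(n,n)$ for all $n$. Taking $n = k$:
\[
  A(k,k)\ \text{halts} \;\Longrightarrow\; C_k(k) = A(k,k)\ \text{does not halt},
\]
a contradiction. So $A(k,k)$ never halts, hence $C_k(k)$ never halts. We can see
this, but $A$, if it is sound, can never certify it, since certifying it would
require $A(k,k)$ to halt.
\end{document}
```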
as i said it seems to me when i've read commentaries on penrose that most people have criticized his speculations on quantum physics rather than what he has to say about gödelian logic in fact penrose was famously invited to give the keynote at the vienna conferences honouring gödel for the centenary so i think if there were some schoolboy error and you occasionally hear people go oh penrose is wrong well penrose is not stupid and if there were a schoolboy error in his work i'd like to think it would probably have been found by now but i can recall when i was at the university of reading having an immensely long exchange with someone about penrose's ideas i won't name them but they were at the university of sussex and after this had gone on for months and months i was saying look at this look at this paper and in the end the guy turned round and said life's too short to waste it reading what penrose actually wrote and i've come across this so often people think they know what penrose said or they think they know what searle said and they haven't bothered looking at the source material for either and that was quite a shock to me when this guy did eventually concede that he hadn't actually read penrose at that point i hope he has by now and if he has then fantastic well professor bishop it's been an absolute pleasure honestly thank you so much for joining us today and i hope to get you back on the show soon well thank you i feel this has been a very rambly and unfocused discussion and i can't think how you're going to get something interesting out of all this but that's your genius behind the editing machine i feel honestly it's absolutely fascinating i think we don't often have an opportunity to talk about philosophy of mind and some of these deeper issues in ai and i think this is a really fascinating framework to think about some of the various different focus points i mean we had pedro domingos on last week talking about this that was awesome i really enjoyed that paper i read the paper that you foregrounded and also watched the little video that you did accompanying it and yeah i thought that was a deeply interesting paper effectively making the case that any gradient-descent-trained neural network is doing nothing more than interpolating between its training points which is a profound statement and i need to let that settle but that was brilliant if nothing else reading that was a great thing for me with you guys it's good for deflating some of the deep learning hype too just to contextualize but also with regards to tonight you've fleshed out probably ten people's reading lists for the next ten years i wouldn't worry too much about rambling all that means is that there's a lot of ground to cover and not a lot of time to cover it in this is the reason why i invited you on because a lot of these things are very impenetrable and when i read your paper it's like a google maps point-by-point list of stepping stones through all of the important things that i need to know in order to get my head around this and i think it's actually a really great starting point for people that are interested in philosophy of mind and understanding and consciousness to read that paper that you wrote if i said read one book on modern cognitive science i'd say look at evan thompson's mind in life unfortunately it's a bloody fat book but i think that is a genius book i've been watching a number of your podcasts now and i'm really enthralled by all of them
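as a closing aside, the remark just above about gradient-descent-trained networks interpolating between their training points can be given a toy illustration here is a hedged numpy sketch of the intuition only a prediction formed as a kernel-weighted combination of training targets this is a hand-rolled kernel smoother meant to convey the flavour of the claim not a reproduction of the actual derivation in the paper being discussed

```python
import numpy as np

# toy illustration of the "interpolation between training points" intuition:
# a prediction formed as a kernel-weighted average of the training targets
# (a simple kernel smoother, not the kernel-machine derivation itself)

def rbf_kernel(x, xi, gamma=10.0):
    # similarity between a query point x and a training input xi
    return np.exp(-gamma * (x - xi) ** 2)

x_train = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
y_train = np.sin(2 * np.pi * x_train)

def predict(x):
    # weight each training target by how close its input is to the query
    w = rbf_kernel(x, x_train)
    return float(np.dot(w, y_train) / w.sum())

print(predict(0.6))  # a blend of the nearby training targets, nothing more
```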
Info
Channel: Machine Learning Street Talk
Views: 10,419
Rating: 4.7934785 out of 5
Id: e1M41otUtNg
Length: 95min 23sec (5723 seconds)
Published: Thu Feb 18 2021