The Margaret Boden Lecture 2019 - Professor Daniel Dennett (Tufts)

Video Statistics and Information

Captions
Good evening everyone. My name is Huw Price, I'm Academic Director of the Leverhulme Centre for the Future of Intelligence, and I'm delighted to welcome you all to our second annual lecture celebrating Professor Margaret Boden, a remarkable Cambridge-trained pioneer of cognitive science and artificial intelligence, and for many years Professor of Cognitive Science at the University of Sussex. Our inaugural Margaret Boden Lecture was given a year ago by Professor Boden herself, and you can find the recording of that online. Within days of that event last June, Maggie gave us a ranked list of people she recommended for future lectures, and at the very top of that list was tonight's speaker, whom I'll introduce in a moment.

Now, Maggie had been very much hoping to be here herself tonight. She had some health problems during the winter, and told us afterwards that she had cancelled all commitments in spring and summer except for this lecture. Unfortunately she's been unwell again recently, and so she can't be here tonight, but I'm sure you'll all join me in wishing her a speedy recovery.

I first met tonight's speaker about 35 years ago, and I remember the date: it was on July the 4th, 1980-something, probably 1985, because it was in Australia, and he was in Australia that year giving the Gavin David Young Lectures at the University of Adelaide. I was a postdoc in Sydney, and Daniel Dennett, already a star both within and outside philosophy, gave a talk at Macquarie University in suburban Sydney. It was a bitterly cold winter's day, middle of winter, the southern winter, at least bitterly cold by Sydney standards; I think the temperature plummeted to below 10 degrees, which is very unusual. And somehow I offered, or was volunteered, to give the speaker a lift back into the centre of town, where he was staying with the late, great Sydney materialist David Armstrong. I remember two things about that trip, apart from the weather and my nervousness at having such a distinguished passenger: one was a conversation about an idea that I was trying to work on at the time, and the other was getting lost in an unfamiliar part of Sydney, so that I had Dan to myself rather longer than anticipated.

Today Professor Dennett is Co-Director of the Center for Cognitive Studies and Austin B. Fletcher Professor of Philosophy at Tufts University, where he's taught for many years. He's the author of about a dozen books, many of them long since classics, and I think some 400 or so papers. He's not just one of the world's most famous philosophers but surely one of the best-known public intellectuals in any field. Clearly Maggie knew what she was doing when she made him her number one choice to deliver the second Margaret Boden Lecture. So please join me in welcoming Professor Daniel Dennett, who's speaking to us this evening on smart machines and a reverse Turing test. Welcome, Dan.

Thank you. And here's to Maggie, whom we all wish well. Seriously, a little round of applause for Maggie, because I'm sure she'll be watching the video of this soon. Get well quick, Maggie.

The question is: smart tools or artificial colleagues? There's a difference. Now the question is, will the difference disappear as machines get smart? And my answer is: not if we're careful and proactive, not if we can help it. In order to see why we want to keep that distinction alive, I want to take you on a little history through the Turing test. I spoke at DeepMind yesterday and was fascinated to discover that a lot of these brilliant young people, hundreds of them working at DeepMind, are surprisingly unacquainted with a lot of the history of AI.
It made me feel particularly old, but I realized that yes, I had been in on some of the early days of AI, and I want to make sure that this history doesn't get lost in the furore.

So here's Alan Turing, one of my all-time heroes. I put him right up there with Darwin as my two favourite intellectuals, I guess, of all time. He wrote a famous piece, "Computing Machinery and Intelligence", published in the philosophy journal Mind way back in 1950. But he had a predecessor, somebody who said something remarkably similar a few hundred years earlier, and that's Descartes, in his Discourse on Method. It's a famous passage; let's go through it so you get it in your head. Here's Descartes in 1637: it is indeed conceivable that a machine could be so made that it could utter words, and even utter words appropriate to the presence of physical acts or objects which cause some change in its organs; as, for example, if it was touched in some spot, that it would ask what you wanted to say to it; if touched in another, that it would cry that it was hurt; and so on for similar things. But, he goes on, it could never modify its phrases to reply to the sense of whatever was said in its presence, as even the most stupid men can do.

Now that certainly sounds like the Turing test, and notice that Descartes offers this as his proof of dualism, in a way: yes, you could make a machine that could do some of this, but it couldn't do it all; it couldn't respond to the sense; it couldn't understand. That was a brilliant thought experiment in his day, but I want to say it was a failure of imagination. What Descartes couldn't take seriously was the idea of a machine with a trillion moving parts, like you or me; more than a trillion, actually, quadrillions of moving parts. He just had no way... imagine the presumption: imagine Descartes saying, well, let's see, do I know that no machine could respond to the sense of things? Let me imagine such a machine. Let's give it, well, let's give it 10,000 gears and 20 miles of wires and 60,000 springs and... no, no, it couldn't understand. He wouldn't bother trying to imagine a more complicated machine, like you or me as a machine. So that was a failure of imagination on his part, and so he just never thought about it.

Turing comes along with the Turing test, and he introduces it by telling us about what he calls the imitation game. Now, the imitation game: I don't know if he ever played it, but I've witnessed it played, and by the way, I encourage you to have a shot at it; it's very easy to do these days, with email and laptops and smartphones. One person pretends to be of the opposite gender, so if the woman is the player, she pretends she's a man, and there's a judge. In fact you have to have two players: you have a man and a woman, both claiming to be a man, and they're interrogated by the judge. If the man can convince the judge that he's a woman, or if the woman, in the other version of the test, can convince the judge that she's a man, then that's winning the imitation game. Any of you ever played it? It's very challenging. I want you to think about just how challenging the imitation game is. Do you think you could win at that game?

Now, Turing's motivation is in itself, I think, wonderful and important. He wanted to create a screen through which only the right stuff would pass. That's why he restricted it to teletype communication, so that none of the inessentials of personal quirks or appearance would matter; only, by his lights, pure intelligence could get through the screen.
This is of course a policy that's been adopted by, for instance, orchestra auditions, with very good effect. When a new person auditions to be the first oboist in a symphony orchestra, they have a bunch of auditions, and the oboists all play from behind a screen. It doesn't matter what they look like; doesn't matter whether they're male, female, white, black; doesn't matter whether they're fat or thin. What matters is: can they play the oboe? That's all that comes through, and this was the inspiration for the imitation game. Do you think you would be a good judge in the original imitation game? Or do you think you could fool a good judge in the imitation game? It's a pretty challenging game. All right, do try it; you might be fascinated with the results.

The imitation game puts a premium on imagination, on deep world knowledge, on a strong model of the interlocutor, and on cunning. In other words, the winners are good deceivers; deception is the skill it takes to win the imitation game. And here's another failure of imagination, in this case Turing's. Here's how I think he thought: a woman who could simulate a man that well would be brilliant; a man who could simulate a woman that well would be brilliant; therefore anything that could win the imitation game would be an intelligent agent, a machine that could think like us.

What was his failure of imagination? Well, Turing couldn't take seriously the idea of a deep learning pattern finder: not an agent with hierarchies of goals and comprehension, but just a deep learning pattern finder feeding off trillions of online conversations. He never imagined such a thing, so he never thought about it. But it opens up, and some people have rushed in, another possibility of winning the imitation game, where the goal in the Turing test is to convince a human judge that you're a human being, not a computer. There might be, as it were, a backdoor way of winning: the deep learning way, not the thinking-like-a-human-being way. It's interesting that Turing missed that, because notice that in the imitation game the tacit argument is not that anybody who could convince a good judge that he was a woman would have a woman's brain or a woman's mind; no, he'd still have a man's mind, but he'd have a very good idea of what a woman's mind was like. What Turing missed was the idea that maybe you didn't need to have a very good theory of what a woman's mind was like, or what a human being's mind was like; all you would have to do is have access to so much data and so much pattern finding that you could simulate a woman without understanding what you were doing at all. That was something that Turing just missed.

But he had a second failure of imagination too, and that was that he overestimated the discriminating power of everyday people. It's much easier than he imagined to fool people into thinking they're dealing with an intelligent agent. Much easier. Here's Turing's optimism, a famous quote: "I believe that in about fifty years' time it will be possible to programme computers, with a storage capacity of about 10^9, to make them play the imitation game so well that an average interrogator will not have more than a 70 per cent chance of making the right identification after five minutes of questioning." That's, in retrospect, profoundly optimistic. First of all, he was describing a computer of sort of unimaginably great powers in his day, when he wrote it in 1950; but of course your mobile has much more memory than that and can't win the Turing test.
Well, we've known that there was a problem of overestimating the discernment of everyday people for quite a long time. My old friend Joe Weizenbaum created ELIZA, the first sort of chatbot program, way back in 1966, and wrote a book about it a few years later, Computer Power and Human Reason, still well worth reading. The book has got some fundamental confusions in it: Joe never got clear about whether he was saying AI was impossible, or AI was possible and we shouldn't do it, and the book vacillates between those two very, very different claims. What I want to say today is: it's in principle possible. Yes, strong AI is in principle possible; Joe was wrong about that. But it's much, much, much harder than people think, and we shouldn't do it. That's going to be my message today.

Joe was at MIT; he was one of the inventors of time-sharing, and there was a public terminal in the great long corridor at MIT where students could access the time-shared computer, and they could access ELIZA. So every day he gathered hundreds of conversations that MIT students had with ELIZA, this dead simple, cheesy, but ingenious program. And to his horror and dismay, he discovered that the MIT students were very angry and upset that he was looking at these transcripts, because, they said, it violated patient-doctor privilege. They were pouring their hearts out to this simple program, and they didn't think he had any right to read the transcripts. It was that moment that turned him into the Jeremiah of AI that he soon became. I got to know him about that time; I actually read a draft of the book before it was published and talked with him about it.
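To give a sense of just how simple a program of ELIZA's kind can be, here is a minimal keyword-and-template chatbot sketch in Python. This is only a toy in the spirit of ELIZA, not Weizenbaum's actual code; the patterns and canned replies are invented for illustration.

    import random
    import re

    # A few keyword rules in the spirit of ELIZA's DOCTOR script (invented here):
    # match a pattern, then echo part of the user's own words back in a template.
    RULES = [
        (r"i need (.*)", ["Why do you need {0}?", "Would it really help you to get {0}?"]),
        (r"i am (.*)",   ["How long have you been {0}?", "Do you enjoy being {0}?"]),
        (r"my (.*)",     ["Tell me more about your {0}.", "Why do you say your {0}?"]),
    ]
    # Content-free fallbacks for when no keyword matches.
    DEFAULTS = ["Please go on.", "I see.", "What does that suggest to you?"]

    def respond(text):
        for pattern, templates in RULES:
            m = re.search(pattern, text.lower())
            if m:
                return random.choice(templates).format(m.group(1))
        return random.choice(DEFAULTS)

    while True:
        line = input("> ")
        if line.strip().lower() == "quit":
            break
        print(respond(line))

A handful of rules like these, plus the user's willingness to fill the silence, is all it took for students to pour their hearts out.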
Then a few years later I was involved in Rodney Brooks' Cog project, sort of the first serious humanoid robot, and I'm just going to give you a tiny sampling of what amazed me then. Cog's eyes, well, you can see Cog's eyes don't look much like eyes: a pair of cameras, two in each eye, a high-resolution foveal camera and a powerful wide-view camera. But the eyes moved in saccades and did smooth pursuit, just like human eyes, and just the way this robot's gaze moved could absolutely mesmerize people. Alan Alda, for his Scientific American Frontiers program, came to the lab one day and was interviewing Rod about Cog: Rod here, Cog here, Alda there. Rod says a little bit about what Cog can do, Alda interrupts to ask a question, Cog's eyes go to him, and he just couldn't continue. It was as simple as that. Cog's arms had these wonderful series elastic actuators; they were wonderful loosey-goosey arms, and they could pass a Slinky back and forth. I brought one of my teaching assistants from Tufts over to the lab one day, and Cog's arm, which wasn't even attached to Cog at the time, it was clamped to the bench, and Matt Williamson, the creator of the arm, said to her: well, shake hands with it. She reached over and she shook hands, and she screamed: it's alive! It was that powerful an illusion. There were MIT students who organized a protest on behalf of Cog's rights. MIT students! Cog was nowhere near conscious, but Cog had behavioural manifestations which were unlike the stereotypical robot, and it just captured people immediately.

So there are some failures of imagination. Now I want to tell you about my failures of imagination. I defended the Turing test in a piece in 1985 called "Can Machines Think?", where I didn't anticipate anything like deep learning as a possibility, and thought that the Turing test, properly executed with clever questioning, was the best test I could devise for intelligence. So I defended the Turing test. Ah, but philosophers' thought experiments are unlike real experiments: the sun always shines, the subjects behave, the equipment works flawlessly. So what did I learn in a real experiment? Which brings us to 1991 and the first Loebner Prize test, a restricted Turing test.

The original Loebner Prize committee included these individuals: Joe Weizenbaum; Allen Newell, the famous AI person, of Newell and Simon, from Carnegie Mellon; Harry Lewis, professor of computer science at Harvard; Oliver Strimpel, who was the director of the Computer Museum in Boston, where the contest was held; and Robert Epstein, who was the sort of executive head of the committee, at the Cambridge Center for Behavioral Studies. They put together the original rules and set up the competition before I was invited to join the committee as the chair, so I didn't get any chance to object to the rules that first year of the competition. Here are some of the restrictions on that test: the topic at each terminal is limited; judges have been instructed to hold normal conversations, and experts in the field are excluded; human confederates have been instructed to hold normal conversations, instead of going out of their way to compose answers that seem inhuman. I would have objected to the second and the third conditions; the first was necessary, since there couldn't have been a competition at all unless the conversations were restricted.

We got some nice help from Duncan Luce, who gave us an idea for how the judging should be done: the judges would first rank-order all the contestants, there were ten in all, in fact, from most human to least, and then they were obliged to draw a line across their list: human above the line, computer below the line. Now, we didn't expect any program to be judged to be human, so the computer program that had the highest ranking would get the Loebner Prize that first year.

So then the event happened, and there were camera crews from all over the world, probably six or seven different television camera crews, and reporters; it was quite a ballyhoo. The contest got started, and I was walking around watching people watching the screens, looking over the judges' shoulders, and all of a sudden it hit me: people were being taken in, even some of the press were being taken in, by these (I think the technical term is) cheesy programs. These are from my actual notes on the occasion. So there's ten terminals. Dry martinis was one topic; burgundy wine was another; whimsical conversation was the winning contestant, Joe Weintraub's program; women's clothing; Shakespeare's plays, where the human confederate won two awards: she was ranked most human, and she was also judged by one of the judges to be a computer, on the grounds that nobody could know that much about Shakespeare.

But whimsical conversation, the winner, was a bunch of clever stunts. I'll just tell you what the clever stunt was; it's actually important for the rest of the things I'm going to say. Weintraub's program had a few introductory chit-chat bits, and as soon as the judge asked anything else, Weintraub's program would ask the judge a question, and the judge would dutifully answer the question, and then the program would ask the judge another question, and simply lead the judge through a completely canned set of questions. And the judges, since they had been urged just to have normal conversation, were being polite, and so they never challenged it; they just docilely let themselves be led through these Potemkin villages that Weintraub had built. And when a reply didn't quite fit, Weintraub's program had about ten canned jokes, and one was chosen at random. That was it; that was the program.
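The stunt is easy to make concrete. Here is a toy reconstruction in Python of the canned-initiative strategy just described; this is my invention for illustration, not Weintraub's actual program, and the questions and jokes are placeholders:

    import random

    # Canned script: never answer the judge; always pose the next scripted
    # question. A polite judge will dutifully answer and hand back the turn.
    QUESTIONS = [
        "Do you enjoy whimsical conversation?",
        "What's the most whimsical thing you've done this week?",
        "Don't you think rainy days are secretly funny?",
    ]
    # When the script runs out or the exchange doesn't fit: a random canned joke.
    JOKES = [
        "A horse walks into a bar...",
        "I used to be indecisive; now I'm not so sure.",
    ]

    def reply(judge_said, state):
        # Ignore the content of what the judge said; keep the initiative.
        if state["next_q"] < len(QUESTIONS):
            q = QUESTIONS[state["next_q"]]
            state["next_q"] += 1
            return q
        return random.choice(JOKES)

    state = {"next_q": 0}
    for judge_said in ["Hello!", "Fine, and you?", "Yes, I suppose so.", "Tell me about yourself."]:
        print(reply(judge_said, state))

Notice that the program never has to understand anything; it only has to keep the conversational initiative and count on the judge's politeness.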
So I was worried, and during the competition, which only went on for a couple of hours, I rushed into a back room with one of the people at the museum, and on a desktop-publishing program we created this document, just in case I needed it. We were so sure that no program was going to fool the judges that we hadn't thought of this in advance, but we needed it on the occasion: I had to give three certificates. Three different programs convinced at least one judge that it was a human being. There were seven misjudgments out of a possible sixty. That was stunning.

A few years later, in '93, in the third competition, we changed the rules; I'd insisted on it. Now we had journalists as the judges, and I said: your reputation is on the line; you're famous for sussing out frauds and liars; you've got to find the cheats. And we told the human confederates behind the scenes: think of it this way, your life is on the line; you've been captured by space pirates, and you have to convince the judges that you're human. They still had restricted topics and so forth. So what was the result? Two judges misidentified a human being as a computer; nobody misidentified a computer as a human being. And when we debriefed the judges to find out why, they had a very interesting answer. Their honour was at stake if they misidentified a computer as a human being, and they argued as follows: there's a bunch of good ones and a bunch of bad ones; if there weren't one good contestant, they wouldn't be holding the competition; so, just to be safe, I'll take the most unimpressive human candidate and call it a machine, just to protect my reputation. So this is what happens to a thought experiment when you actually put it to the test.

But I think if we can correct for the imagination failures, we can still use the Turing test today very effectively. We can use expert judges who know about AI and know what the tricks are, and who keep their conversational gambits off the internet, so that the competitors can't parasitize them. Now, there may be billions of intelligent conversations on the internet, but that's a vanishing drop in the bucket compared to all the vastly many possible intelligent conversations that could be had. This is an important point, and I want to stress it, because it's related to a point made by Simona Ginsburg and Eva Jablonka, whom you might see sitting right there, whose new book The Evolution of the Sensitive Soul looks at what they call unlimited associative learning, which I think is a big step in the evolution of intelligence, in minds, in animal lineages. They were inspired by Szathmáry and Maynard Smith's book The Major Transitions in Evolution, where Szathmáry and Maynard Smith talked about unlimited heredity, which is a major step in the phase shifts, the transitions, of evolution. And they say unlimited associative learning is the sort of Darwinian equivalent of unlimited heredity, made possible because it's compositional and generative in a way that opens the floodgates to novel content.

Really, I'm not entirely convinced by all the details of their argument, so I'm going to leap over part of that and simply give you an example of human generativity, so that you can remind yourself of how many possible conversations there are. So here's my diagram of the Library of Babel. This is all the possible books: each book is 500 pages of 2,000 characters per page, drawn from a set of a hundred different characters, and if you take all possible permutations of those, you get 10 to the power of 2,000,000 books. It's not infinite, but it is an absurdly large number.
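As a quick check of the arithmetic, using the figures just given (500 pages, 2,000 characters per page, a 100-character set):

    import math

    # Number of distinct books in the Library of Babel, on the lecture's figures.
    chars_per_book = 500 * 2000      # 1,000,000 character positions per book
    alphabet_size = 100              # each position is one of 100 characters

    # Total books = 100 ** 1,000,000 = (10**2) ** 1,000,000 = 10 ** 2,000,000.
    # Far too big to print in full, so report the exponent of ten instead.
    exponent_of_ten = chars_per_book * math.log10(alphabet_size)
    print(f"10 to the power {exponent_of_ten:,.0f}")   # 10 to the power 2,000,000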
Those are the possible books, and Moby-Dick is just one of them. There are, however, a hundred million single-typo variants of Moby-Dick in that library, and you couldn't tell which of those volumes you were reading. So it's a very big set, and to draw attention to that I introduced, back in Darwin's Dangerous Idea, two terms. People often say "infinite" and "infinitesimal", but they don't mean it; the terms I use instead are Vast, for very much more than astronomical, and Vanishing, for sort of infinitesimal. So, just to illustrate the concept: there's the Library of Babel, a Vast set of possible books. A Vanishing subset of it is still Vast: the subset of books composed entirely of English words. A Vast but Vanishing subset of those are grammatical, and a Vast but Vanishing subset of those make sense. The vast majority of the books in the grammatical set are books where, if you want to make one, you go to the library, pull English books at random, copy a sentence at random, then go on to the next book and copy a sentence at random: every sentence is grammatical and composed of English words, and the book makes no sense at all. But now we've got the books that make sense, a Vast set. A Vast but Vanishing subset of them are about somebody named John. A Vast but Vanishing subset of them are about the death of John F. Kennedy. A Vast but Vanishing subset of the books in the Library of Babel about JFK's death are true; the vast majority are false. And a Vast but Vanishing subset of those true books in the Library of Babel about the death of JFK are composed entirely of limericks.

Now, that gives you some idea of how large the number of possible conversations is. It is not infinite, but it is not even well sampled by what's been put on the internet to date. So I think if you have a judge who understands that, and devises conversations which clearly have never had a precedent, but which are perfectly understandable to an ordinary human being, then for the time being you're going to have a good version of the Turing test. So the Turing test can survive for a little bit, and I think, done right, with expert judges, it can survive for a long time. I'm reminded of an argument by Robert Frank in his book Passions Within Reason, where he argues that the most cost-effective way of seeming to be good is to be good. Yes, you can go to all sorts of trouble to be a devious person who jealously protects your reputation, but it's easier and quicker just to be good, and life will be swimming for you; and he makes this the centerpiece of a very interesting set of arguments.

I also want to bring in, as some of you will know, Thomas Landauer's latent semantic analysis. This is a forerunner of some of the pattern-recognition programs that run now. Landauer wrote a program with the following feature. A professor teaches a course, writes an essay exam for the students, writes what the professor takes to be an A answer to that exam, and then feeds all the students' exams to this program, and it grades them. It doesn't matter whether it's in French or English or Turkish or Spanish; it doesn't know the meanings of any of the words; but it does better than the teaching assistants at grading those exams. If you ask Tom Landauer, well, is it possible for somebody to write an answer that is pure gibberish but that nevertheless gets an A from your program? Yes, but if you could write that answer, you'd deserve an A anyway.
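For a sense of the machinery involved, here is a minimal latent semantic analysis sketch in Python: build a term-document count matrix, take a truncated singular value decomposition, and score each student answer by cosine similarity to the model answer in the reduced "semantic" space. This is a toy illustration of the general technique, not Landauer's grading system, and the documents are made up:

    import numpy as np

    docs = [
        "the cat sat on the mat",          # professor's model answer (made up)
        "a cat rested on a mat",           # student 1
        "quantum flux of the turbo mat",   # student 2
    ]

    # Term-document matrix: rows are vocabulary words, columns are documents.
    vocab = sorted({w for d in docs for w in d.split()})
    X = np.array([[d.split().count(w) for d in docs] for w in vocab], float)

    # Truncated SVD: keep k latent dimensions of the decomposition.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    k = 2
    D = (np.diag(s[:k]) @ Vt[:k]).T        # each row: a document in latent space

    def cosine(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

    for i in range(1, len(docs)):
        print(f"student {i}: similarity to model answer = {cosine(D[0], D[i]):.2f}")

The program never consults the meanings of the words; overlapping patterns of word use across documents do all the work, which is exactly Landauer's point.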
It's easier to be honest. The most cost-effective way of seeming to be conscious is to be conscious. So I end up defending the Turing test, as long as it's properly conducted, with expert judges, and they have more than five minutes.

So I want to insist that conscious AGI, that's artificial general intelligence, is possible in principle. The task of making a conscious AGI is possible in principle, but very, very costly, and we can make it more costly, and we should. I'm glad that it's so difficult. I don't know if you've heard the remark, I love it, that somebody made about the philosopher who says of something: well, we know it's possible in practice; what we're arguing about is whether it's possible in principle. I've said this is possible in principle; it's very, very difficult; but just in case it is possible in practice, we ought to take steps to make it harder, because a conscious AGI would be very dangerous.

A better policy would devalue deception altogether, and here Maggie has some good things to say. A couple of years ago, in a British Academy panel which you can find online, she spoke very eloquently about deception in AI and why it's a problem. How could we undo the inadvertent premium on deception that was introduced by Turing? What do I want to do? Well, I want it to be that, instead of applauding verisimilitude with prizes, we treat it as a crime: the crime of false advertising. AI is an epistemic power amplifier, which can be used or abused.

So here's my recommendation. First of all, for serious AI systems that are very powerful, you should license users: you can't use the system without a license. And then you should have strict liability laws, so that anybody who is licensed to use the system, and uses the system, and gives advice, makes diagnoses, makes political decisions, whatever, on the basis of the AI tool, is held liable for any damage done, and ignorance is no excuse. The machine misled you? Tough: you're the one who's got the license; you're responsible. Strict liability laws are in a certain sense unfair, but they are a clever stratagem for putting an extra heavy burden on the users to do beyond due diligence to protect themselves.

Now, this would have some interesting effects. One, the malpractice insurance would be very expensive. These are very powerful systems; you make a mistake, you might really do a lot of damage, so you're going to have to have a very big malpractice insurance policy. And the insurance companies will protect their investment by maximizing the transparency of the systems. If they're going to insure users for big bucks to use the system, they want to make sure the system doesn't fool the users into making mistakes; they don't want the system to deceive the users; they want the system to be maximally honest and transparent. When it doesn't know an answer, it's got to say "I don't know the answer", and not anything else.

Beyond transparency, I think the insurance companies would be in favour of a test where the system tests the users for their ability to identify the gaps; in other words, let the system judge the judges. This is a moving target. The initial tests that a system might exact of a user, before allowing the user to ask real-world questions, would have to be very carefully designed, but I think we could design tests which in effect oblige users of these systems to be undeceived, to have no illusions about the systems they're using. They would understand, as well as anybody does, what the systems' limitations were and what to look out for.
But here we run into a problem known as Goodhart's law. This is how Goodhart put it: "Any observed statistical regularity will tend to collapse once pressure is placed upon it for control purposes." Or, another way of saying it: when a measure becomes a target, it ceases to be a good measure, because people will try to game the system. I don't know whether this happened here; I think it happened in the United States. Hospitals were judged to be performing badly on the length of time it took to get a person to see a doctor once they arrived at the hospital, and it was very clear that hospitals were going to have to improve their practices to shorten that average time to seeing the doctor. So what did the hospitals do? They had people sitting quietly in ambulances outside, not starting the clock until they were sure they could see the patient in a short amount of time. It's a simple example of the sort of dodge that Goodhart's law creates, and we have to look out for it everywhere. What it does is create an arms race, and it's an arms race we can see in the sort of ridiculous subterfuges of subsequent Loebner Prize tests, where everybody is trying to game the system, and we can expect gaming to go in both directions.

For the time being, though, I think we should recognize the difference between IBM's Watson and an AGI. Everybody is familiar with IBM's program Watson, the one that won at Jeopardy!, and you might think, gee, it would be fun to have a conversation with Watson. Forget it: Watson can't hold a conversation. The communication control system for that program is paper-thin and entirely dependent on the very rigid rules of the television quiz show. If you wanted to take IBM's Watson, which won at Jeopardy!, and turn it into a conversational AGI that could hold a good conversation, it would not be double or treble the work; it would be orders of magnitude more work for the people at IBM. I'm happy to say the difference between Watson and AGI is like the difference between an abacus and Watson. As I say, the communicative controls are shallow and canned, and I want to say: let's keep it that way.

I want to suggest a different way that AI can go, which is more benign. AI is very powerful, and there are two kinds of empowerment where, as it were, muscle power is concerned: on the left we have the bulldozer, on the right we have the Nautilus machine. With the bulldozer you can move mountains, but you're still a 98-pound weakling; the Nautilus machine gives you power that you walk away with. So now the question: what would be the parallel of a Nautilus machine for the mind? It would be an imagination prosthesis, a device that would train you to use your imagination better. Let AI do the heavy lifting, let AI do the tedious data examination, and save the creative thinking and the policy making for us human beings, because we don't really want an autonomous AGI; autonomy is dangerous. How autonomous should we make a self-driving car, for instance? Well, as so often, Dilbert has a good answer. "I find it offensive when you call me a self-driving car. That's my slave name. I prefer to go by the name Carl." "Shut up and drive me to work," said the self-walking human. We don't want autonomous cars to be that autonomous, and we don't want AGIs to be that autonomous either, in general. In fact, there's a lovely article on this called "Robots Should Be Slaves", by Joanna Bryson, from a few years ago. Her point is that slavery was terrible because slaves were human beings, with feelings and all the rest.
But if we keep AIs without feelings and without autonomy, then there's no moral objection to their being slaves. We want to keep them firmly on the side where they have no rights: you can take them apart, you can use them, for that matter you can abuse them, but they don't have autonomy. Besides, selfishly: if we use AI as smart tools to do all the heavy lifting, then we get to do all the fun stuff. We get to have our imaginations empowered; we get to have our imaginations disciplined by all sorts of AI tools that can help us think about things we never could think about before; and then we can take responsibility for it. And that's very important, because you can't make a responsible AGI, for a very simple reason: an AGI can be backed up.

Years ago I heard a wonderful saying, and suddenly its import struck me on this occasion. Recruits, maybe draftees, first day in boot camp, and the drill sergeant lines up the motley crew and says: we can't make you do anything, but we can make you wish you had. That's the way it is with human beings; that's how we control the autonomous agents that we all are, running around free. We have rules and penalties and systems of control, because these are agents that can wish they had done otherwise. Now, if you could be backed up the way a robot can, think of what you could do: backed up on Friday; just for the heck of it, jump head-first off the Eiffel Tower on Saturday; watch the video of your death on Monday. It would change your whole attitude towards life and obligation. That's a feature that we don't have, and it's, I think, an important feature that we don't share with imaginable AGIs of the future. If you could be backed up, you wouldn't be inclined to take punishment seriously; you wouldn't have a good reason to take it seriously.

So suppose we do create policies that enforce the distinction between tools and colleagues, and that make the tools transparent, not putting in the Disneyfied human touches that we now view as false advertising. Here's a problem that comes up, and I mentioned that Maggie Boden raised this in her 2017 panel discussion: what about elder care? This is maybe the largest market for robots in the near future, and for good reason: more and more elder care is going to be needed, and it's not a very satisfying job for most human beings. It would be really good to have intelligent slaves that could do it. But then you think: well, we'd want to make them very companionable, and the market urge to make them very human would be hard to resist. That worries me. But then I thought: no, no, let them be artificial dogs, and let's keep their ability to call the pharmacy and order up some drugs distinct from their companionship roles in the household. We can keep that distinction, so that they're not super smart, they only understand a few commands, but for old folks they're good company. There's a lot of evidence that having a dog around is very, very good for the sense of well-being of many people in elder care.

But what about child care? That's another area where maybe robot babysitters will be developed, and here I think we have to be very cautious. Will children prefer fake companions to real companions, fake companions that don't get bossy or disagree with them or fight over toys? Will we disable children by giving them babysitters that are, as it were, too much fun, and make them unable to grow up as they should? In other words, we have much to be vigilant about as we face this new world of artificial intelligence. Thank you for your attention.
Thank you very much, Dan. We now have some time for Q&A. We have a couple of roaming microphones, so if people would like to put their hands up, I'll make an attempt to keep a queue. I think the first person I saw was up the back there.

Thank you very much for that wonderful lecture. I have a question about the idea that AI can do the heavy lifting and leave the fun stuff to us. How, then, do we determine what the fun stuff is, and what we can and must leave to AI? Because the reason why humans have been enslaved throughout history is that their masters wanted so much done by others, so much done for them, that only a human, intelligent, versatile, autonomous enough, would be able to perform all these tasks.

Well, I guess I don't see that this is an insoluble problem, or even a difficult problem; it will sort of solve itself as it goes. I'm pretty sure everybody in this room has gadgets, technology doing things that could only be done by a human being in the very recent past, and I don't see that that is in itself a problem, although we are becoming much more brittle and fragile in our dependence on that technology: map-reading skills are going south while people use their GPS, to take an obvious case. I do think that one of the sort of existential worries about the role of AI in the near future is that it's not just going to take manufacturing jobs; it's going to take a great deal of the white-collar jobs, a great deal of the mind-work jobs, and make those a thing of the past. I think that's a real issue.

Thank you for an excellent talk. I wonder about the argument that you can't trust an AGI because it can be backed up, so that it would not take things seriously because it could just die. Suppose I jump from the Eiffel Tower, and then on Monday I'm gleefully watching a YouTube clip of it; but I'm also getting a fine from the French authorities, and suddenly my wallet is going to be a lot lighter. In fact, even on Friday, before doing it, I should realize that. AGIs would presumably have preferences over the world and over their future lives, so it would seem that they actually could have a lot of things to care about, even though they can be backed up.

Well, the fact that they can be backed up means that they don't face an ultimatum of one sort that we all face: you only live once. And sure, they can have preferences, but one of their preferences may be to take whatever steps they can to avoid having their backups destroyed, and they'll do that if they're that smart. So it's not really as foolproof as it sounds. This is an idea related to, now I'm forgetting his name, the wonderful economist Schelling, Tom Schelling. He has this lovely puzzle: what would you do if you were kidnapped, and your kidnapper decides that he doesn't really want to collect the ransom after all, he'd just as soon let you go, but he's afraid, of course, that you will rat on him and let the police get him? What promise can you make to the kidnapper that would convince him to let you go? We're supposing all you can do is talk. Schelling has a very elegant solution to this problem: you confess to the kidnapper the worst thing you've ever done. Now you've put yourself in jeopardy; now you have something to protect, because if you tell on him, he can tell on you. And if you haven't done anything terrible, do something terrible in his presence,
whatever you think will be enough. The point is, Superman can't make a promise, because we couldn't trust Superman: Superman can break any promise with impunity. And AGIs are too much like Superman; we can't trust them.

I'm not sure the kidnappers could trust us: wouldn't it be just as good to make up some story about something terrible that you've done, and tell that to the kidnapper?

It might be, but, well, sometimes it's more cost-effective to be honest, yeah.

At the end of your talk you mentioned some kinds of policies we should implement for making AI good, but then suggested some ways in which market incentives would lead us to make AI which is not good in that way. And this seems to mirror the kinds of incentives we need to put into AI itself to make sure it is good. So there seems to be an analogy between the incentives we need for people making AI in a market, and for the AI itself, and I'm wondering if solving the one problem, the AGI problem, and solving the political-philosophy question of how to organize a society, are related and require similar types of answers.

Good. If I'm following you right, I think yes, they're related. After all, an AGI is a person, except that it's sort of immortal; but it has all the reasoning abilities, and if you actually make one that's that clever, then it has all the tactical and strategic abilities that any human being has, which means it's going to be very capable at gaming any system that's created. So the best thing to do is to try to create circumstances where the most cost-effective way of behaving is to be good. Our system is very imperfect, and it is in need of a great deal of reform, but still it is good enough that we can mostly lead fairly secure lives, plan ahead, and count on civilization not coming to a halt.

Do you think the problems posed by AI mean that we need to come up with better solutions in the political realm for our own lives?

I guess it's possible, in principle.

Thank you for a really interesting talk. You mentioned quite a few times, I think, that you believe conscious AI is possible in principle. Could you say more about what it is that is leading you to that belief?

I missed the key word in your sentence.

I'm just wondering what the reasons are: you stated a number of times that conscious AGI is possible in principle, and I'm wondering what are the reasons that have brought you to that belief.

Okay, sure, thank you. There are several hundred reasons right in this room: we're robots made of robots made of robots made of robots, with no magical ingredients. If you're a materialist, if you believe in the material basis of mind and consciousness, then it is almost a one-step argument that a conscious AGI is possible in principle, because we are conscious AGIs. The only difference is that we're made the old-fashioned way, and there's nothing that evolution and nurture and development can do that we can't do with technology if we want.

I want to take off also on leaving the heavy lifting to AI and keeping the fun. Firstly, heavy lifting is fun, and your own life is such a wonderful example of that, in every possible way. When you get the chance to get your hands dirty, you take it, doing plumbing, actually doing the slave jobs, and not only, I think, because you enjoy it, but also because it's been a wonderful source of metaphors in your own writing:
pumps and cranes and machinery and so on, which you've engaged with. When you've done the heavy lifting, it has continually informed the fun thinking, the way you think about the larger issues. And I'm rather worried that if we leave too much of the heavy lifting to another class of being, we in fact lose a very powerful source of imagination.

Oh, I think that's right. But after all, most of the heavy lifting that's done by computing technology is the kind we're just as well off without, like being a telephone operator; that's a job that's not fit work for anybody. There are lots of tedious data-analysis jobs that are too big and too tedious to do; you don't want to spend person-centuries doing them if you can have a machine that does. In my youth I learned celestial navigation, with compass and chronometer and sextant, and I dreamt of sailing myself across the Atlantic navigating by celestial navigation. I never had a chance to use those skills. I'm very glad I learned them, but it's now essentially impossible, except as a stunt, to do it: the insurance company won't let you take your boat out on a trip like that unless you have three GPS systems aboard. But I do think it's right that there are losses of imagination and losses of experience that we have to make sure we count when we're adding up the costs and benefits of all of these things.

Again, thanks for a lovely lecture. You talked a lot about consciousness and a lot about dangerousness, and I wonder how related you think they are. I doubt very much that I could convince anyone that I was a woman, but I have plans and preferences, and that's enough to make me dangerous. So I just wanted to hear your thoughts about whether it's necessary for an AI to be conscious in order for it to be an existential threat.

On my view of consciousness, consciousness is an arena of control, an arena of self-control, which is only partially controlled, because there are too many degrees of freedom. There's not an infinite but a Vast number of degrees of freedom in a human-level agent, and there's no algorithm for controlling all those degrees of freedom; there are only, in effect, algorithms for exploiting the materials in that semi-chaotic set of possibilities to create degrees of self-control which are reliable enough to trust. Just think of your own stream of thought: to some degree you can control it, but ideas come to mind or they don't; you try to concentrate and you can't. All of these factors make consciousness very volatile, very versatile, very creative, and very dangerous, and I don't think there's an algorithmic way of blocking that.

Oh, let me give you a little bit of history of AI. One time Marvin Minsky and I were sitting in his amazing house talking about AI, and Marvin came up with an idea for a short story that he wanted to write. Sometime in the future, in the boardroom of United Amalgamated AI, the engineers have just reported to the board that they've done it: they've got an AGI. Ah, that's great! And when do we put it on the market? And the engineers say, well, there's one problem. We've actually achieved AGI. Yes, that's great, so when do we market these? No, you see, they don't want to do the things that we want them to do; they don't want to handle the airline reservation systems and all that; they complain bitterly about the heavy lifting we're asking of them. The board says: well, just turn off the complainer; can't you disable that part? They said: no, that's the whole point; if they're really AGIs, you can't turn off the
complainer. They're going to break your heart with their stories of why they don't want to do what you want them to do. And the board is really puzzled about what to do, and one of them gets an idea. He says: how about this, why don't we commission some public intellectual, some eminent philosopher at a major West Coast university, to go around the world saying, if a robot ever says it's conscious, don't believe it, don't believe it, it's just a Chinese Room.

Yes, Dan, thanks for a lovely talk. There's one thing: if you're concerned at the prospect of having persons who are immortal and untouchable, the simple fact is that we have many such persons already. We have states, we have religions, we have companies, we have charities. Cambridge University here was started in 1209 and it's still going strong: many legal persons who are immortal and, for practical purposes, untouchable. Should they all be abolished?

No, of course they shouldn't, but that's because, so far, these super-personal entities have been organized in ways that put responsible human agents in control. I said "so far", and I agree that we may be moving out of an era where we can trust at least many of the laws, and the constraints of civilization and politeness and decorum, all of these things. I'm an incurable optimist; I can find the silver lining on any dark cloud, and one of my silver linings is that if we can just survive the next two years, we will have such a heightened appreciation of the fragility of our institutions and the need for honesty and decorum that it will have been a sort of wake-up call. But it's going to be a close call.

Thank you. Thank you so much for the very thought-provoking lecture. Are you concerned about whether it is possible to enforce that AIs continue to be tools, especially if they are as sophisticated as mind prosthetics that enhance imagination and whatnot? Surely it would be very easy for a rogue individual to modify them to be completely autonomous, for example one of the MIT students who was protesting for the autonomy of Cog.

Actually, I think, and I may be wrong, but I am currently of the opinion that it's much, much harder than that. As I said, Watson is the product of I don't know how many person-centuries of brilliant engineering, and uses the power of a small city, and it's not within orders of magnitude of being an autonomous AGI. It's a wonderful artifact, but it hasn't been made into an agent that has those powers at all yet, and I don't think that is going to be easy to do. What it is going to be easy to do, if we aren't very much on guard against it, is to fool people into thinking that we've made an AGI when we haven't. That does worry me.

Can you explain more thoroughly why the deception that's motivated by the Turing test is a problem? It seems like, on a basic level, the designers of the programs are motivated to deceive the judge. But once the program is sufficiently complex, if the program itself is motivated to deceive the judge, then that could be further evidence that it in fact has general intelligence: further evidence that the program itself is generally intelligent, but is itself motivated to deceive.

I'm not sure I'm hearing you exactly right. You're saying that if the program... so, about deception in the way that the programs are designed?

Yes: it would be a problem if the designer is motivated to deceive a judge, which is certainly true. But if a program is sufficiently complex, and the program has some sort of ability to learn that its goal is to deceive,
then why is it a problem that it's motivated toward deception? Because if it's able to deceive, and has awareness of something like that, it could be a signal that it is in fact intelligent.

Well, you see, this is really precisely why I want to maintain the tool-colleague barrier, and not give AI systems the sort of strategic imagination that would allow them... Look, this is just a generalization of a point that's been made by a lot of people recently, and that is: if you think there's a kill switch on an AGI, you're fooling yourself. The first thing a really intelligent agent is going to do is figure out how to disable the kill switch, and it doesn't have to have fingers, it doesn't have to be a robot that can reach back and disable the kill switch: maybe it'll just persuade, with words, somebody to disable the kill switch; maybe it will simply charm the users into obedience. There are all sorts of possibilities of that sort, which is why I would rather we keep them slaves and tools, and not open that avenue at all.

Hi Dan, thanks, great lecture. I just wanted to get clear on one thing. Several times you slid gracefully from talking about AI, general AI, strong AI, into consciousness, as if there's an inevitability that once a system crosses some threshold of intelligence it necessarily starts to have subjective conscious feelings of the sort that you and I do. And I can't help wondering whether this is an assumption you're making, and if so, what it rests on, because it seems to me that one can separate these two concepts. There are certainly creatures, certainly many other living species, that almost certainly have conscious experiences of suffering and pain, to which we would not attribute high intelligence. So why should consciousness become part of a system once it crosses a threshold of intelligence? Because that makes a big difference to all the ethical questions that you then raise about how we treat these other systems or species, and what that does to us.

Thank you, that's a good question, very much on my mind. Yes, of course animals suffer, some much more than others, some much more capable of suffering. But stop and think: what does suffering amount to? The answer can't be the intrinsic awfulness of pain or the intrinsic awfulness of suffering; that's just passing the buck; that's not an answer; "intrinsic" just can't carry the weight. If something is capable of suffering, it's got to have the sorts of hopes and projects and plans and activities that can be thwarted, interfered with, and that it minds for that reason. Pain wouldn't bother you if it didn't interrupt your life course in some way, prevent you from doing things you want to do. And I think that if you're an AGI that has projects and plans and goals and so forth, then the capacity for suffering is essentially built in by that very fact, and we shouldn't think that we don't have to worry about suffering there because, you know, they don't have C-fibres and nociceptors; that's irrelevant. But that's not a very good answer; my good answer is about an hour long, if it's good. Maybe it's not.

So, you seem to be saying that tools and colleagues are distinct categories of agents that should be kept separate, and yet, if I understood the example correctly, in elderly care you might want to have an instance of an agent that can perform more than one task.
Now, in deep learning, one of the reasons why it was so surprising that it produced such impressive results was that the technology hadn't fundamentally changed; the computational power had increased, and by just adding layers we could achieve results we wouldn't have thought possible. So I'm wondering: if we combine state-of-the-art algorithms for solving particular problems with some sort of meta-level categorization machine that would just pick the right tool for the problem at hand, how do we know that building something like this would not ultimately produce consciousness as a side effect?

Oh, I think it could, in principle. I just think it's very hard, and I think it's not just a question of adding... you say adding meta-levels; I think that's part of the answer, but I mean "meta" in a strong sense. Once you have systems that not only can notice things that no human being can notice (we've got those, we've got tons of those now, with deep learning), but can notice that they're noticing things, and notice that they're noticing that they're noticing things, and think about what that means; once you add those layers... and you can't add them unless you give them work to do, and when you give them work to do, you're turning the system into a conscious agent. Because, in case anybody wondered what my view is on this: there's no difference between a conscious agent and a very intelligent zombie.

I think it's commendable to seek to impose restrictions on AI in order to make it more compatible with the lives we want to lead, but surely it only takes one unscrupulous lab, or one unscrupulous state, e.g. China, to pursue AI without ethical restrictions, and then we're all bound to have to abandon those restrictions in order to keep up with that state. So is it really an evolutionarily stable strategy to have restrictions on AI at all?

Well, we've tried similar things with other technologies that are scary, and the big difference, as I'm sure you would hasten to remind me, is that we really don't have to worry about somebody building a nuclear bomb in their garage: the heavy industry, the technology to build a nuclear device and fuel it, is fortunately beyond the reach of hobbyists, and it's not entirely clear that bad AI is similarly out of reach.

I think there's a more pressing problem, though, and if you want to know my pet example of what I fear in the immediate future from technology, it's this: I fear the day when the internet goes down and takes with it the power grid, the television channels, and the phone system, which, if the internet went down, would probably happen. What I'm afraid of is the panic that would spread, wherever this happened, within an hour or two. If you plunged, say, the United States into electronic darkness: you go off to your car, you turn on the radio, there's not even anything on the radio in your car; you're suddenly cut off from all the electronics. I think people would go into panic mode. Suppose we could fix the internet within 48 hours; in those 48 hours, what I'm worried about is the destruction caused by panic, because I think it would dwarf every disaster that's ever hit civilization. And I think the only cure, the only protection we have against that, is to start talking about it, and creating grassroots local communities, ones that don't depend on electronic communication at all, to be panic absorbers in those first 48 hours. So that when the world goes dark, instead of immediately thinking, I've got to rob a gas
station and get the last petrol on the block so I can run up into the hills with my guns and my family to protect them, people think: well, wait a minute, my nearest panic absorber is just down in the church basement down the road; let's go there and see what we can do to keep everybody calm until they fix this. I think that is a project for the immediate future, one that I put way ahead of devising the rules for protecting us from AGI, which is decades away, I think.

We like giving you the microphone. It'll be very quick. Thank you, Dan, although I must say I found that rather negative and dystopian. In the same way as humans can be kind, do you not believe that AGIs can be kind, and therefore control other AGIs to be kind, and create their own society that is a kind society, and a benevolent one, and one that helps repair rather than destroy?

I don't see any strong reason to assume or suppose that AGIs will be kind. No, I'm not assuming they're going to be evil either. I'm just assuming that they're going to be as volatile and as out of our control as we are. I don't control you; you don't control me. I'm happy to live in the world with you because you've been educated, you were raised to be an honest, law-abiding citizen, and so were we all. But it's very hard to see how any process of education and nurture of a super-AGI would make an agent that was a reliably moral agent. Thank you.

Apologies to the many hands and people I didn't have time to get to; I'm afraid we're over time. Before we finish, I'd like to ask you to join me in thanking Dan for a fascinating talk. Just one reflection from me before we do. It occurred to me, as I was thinking what to say at the beginning, telling the story of how I got you lost in Sydney the first time we met, that that's not an experience a young philosopher, or anybody else, could really have these days, precisely because my phone would be telling us where we were going. Speaking from personal experience, I say we've lost something there, because of the opportunity to get lost with somebody like you, at the age I was then.

I had two of the most memorable hours of my life being lost in the desert, trying to find our way to a party, a Santa Fe Institute party, with John Maynard Smith. I learned more in those two hours than at almost any other time, maybe, in my life. Yes, getting lost can be wonderful, if you're with the right person.
Info
Channel: Future of Intelligence
Views: 3,902
Rating: 4.826087 out of 5
Keywords: Philosophy, AI, Artificial Intelligence, Turing Test, Smart Machines, Machine learning, The Imitation Game, Alan Turing, Policy, Ethical AI, AI Ethics
Id: etRqofuWHpo
Length: 93min 7sec (5587 seconds)
Published: Fri Jun 28 2019