Prof. Brian Cox - Machine Learning & Artificial Intelligence - Royal Society

Captions
Oh, good evening. Well, welcome to the Royal Society's third in the Science Matters series, here at the Royal Festival Hall on London's South Bank. Now, tonight we're discussing machine learning and artificial intelligence, and the way in which they are already shaping the world around us, as well as discussing how they might transform our future. Artificial intelligence: that's a term that we are all probably aware of, usually, I think, via science fiction, so we tend to think of things like HAL in 2001, or Deckard in Blade Runner, or Skynet in The Terminator, which certainly gives us, in popular culture at least, a rather bleak outlook on the future of artificial intelligence. So tonight is really about the scientific reality. Now, a lot of the current advances in AI have been made possible by a scientific field called machine learning, which is one of the key things we're going to discuss. It's a field that's grown rapidly in recent years, and there are many applications you're probably familiar with, although you might not know it. Google search, for example, uses machine learning, as does spam filtering on email, Netflix film recommendations, Facebook feeds; all of those use this technology, although perhaps we don't know it when we use it. And this is why the Royal Society has carried out a major policy project on machine learning, and it's been investigating the potential of this technology in the near future, and the barriers to safely realising that potential. Now, speaking to the public has been a big part of that; it's a big part of what the Royal Society does, which is why we've asked you to send in questions, and all the questions I'm going to ask tonight are going to be questions that you've sent in. And to answer them I'm joined by a panel of some of the world's leading experts in computer science, statistics, robotics and artificial intelligence ethics. We're also going to get a chance to see just one way in which machine learning is being used and will be implemented in our everyday lives in the
not-too-distant future. So without further ado, let me invite SAP to the stage. I think... there we are. [Applause] [Music]

And I'm going to get the panel to introduce themselves and give a brief description of their areas of expertise, one by one, and we'll start.

Thanks, Brian. My name's Peter Donnelly. I'm based at the University of Oxford, where I lead a large research centre of about 400 scientists. We're trying to understand the genetic basis of diseases: why people who have one particular letter at one place in their DNA code are more likely to get heart disease, whereas a different letter at another place in the code might make someone more likely to get arthritis, say. And we're trying to use those insights to understand the diseases better, and once we understand the diseases better, to use that biological knowledge to develop new drug therapies and new treatments for the diseases. I'm a little bit unusual in working in genetics in that my background actually is in mathematics and in statistics, and in the early years of my research career my research was statistically based. Statistics has never been a subject that's at all easy to get people excited about. There's an aphorism that statisticians are people who like working with numbers but don't have the personality skills to become accountants. And it's never been easy to explain in a succinct way what I do. Many years ago my then-girlfriend, later my wife, did a much better job. I was working in America, and I was coming to visit her; she was based in the UK. She mentioned to some of her colleagues that I was visiting, and one of them said, well, what does your boyfriend do? And Sarah obviously thought a little bit about how I'd explained that I use mathematical models to understand evolution, so she paused for a while and then said to her colleague: he models things. Well, the colleague got rather interested at this point, and the woman said to Sarah, what does he model? Sarah thought a bit more and then
said: genes, he models genes. As you might imagine, when I did arrive and the woman met me, she was rather disappointed, but for a moment it was an interesting area. And in some sense my work today, and our research in Oxford, is about modelling genes. It's trying to make sense of the very large amounts of data available in modern genetics. So, as you probably know, we each inherit 3 billion letters of DNA code from our mother and another 3 billion letters from our father, and in modern experiments we can measure those 6 billion bits of information in tens of thousands of people, and the challenge is to try and relate that to health outcomes, to what diseases they get and how they suffer and so forth. And those challenges involve statistics, they involve machine learning, and tools that will be the subject of the discussion tonight. And finally, apart from my day job, Brian's mentioned the policy project the Royal Society has been involved in on machine learning, and I was asked to be the chairman. [Music]

So I'm Sabine Hauert. I'm an assistant professor at the University of Bristol and the Bristol Robotics Laboratory. I usually say I'm a swarm engineer. I'm fascinated by swarms: if you look at birds and ants, they can do these beautiful, complex behaviours, and I take inspiration from these swarms to engineer solutions for real-world applications. So half of my lab does robot swarms, and we have a thousand coin-sized robots in the lab that we can use to think about how robots might be helpful for environmental monitoring, to monitor animal populations. And the other half of my lab does nanomedicine: how can you control trillions of nanoparticles so that they can be more efficient in cancer treatments? I'm also fascinated by science communication, so I've spent the last 10 years helping the robotics community communicate about their work, and I run Robohub.org, which is a non-profit to do that. And on the machine learning side, machine learning is really helpful for me because in the
case of swarming you don't really know how to design individual behaviours, and machine learning can help us figure that out. So I'm part as well of the Royal Society's working group on machine learning.

Hi, I'm Jon Crowcroft. I'm a computer scientist, a reformed physicist, from the University of Cambridge, where we work on building large computing systems: networks, cloud computing, which are used by a lot of people to run machine learning algorithms, things like the analytics that companies like Google and Facebook and Microsoft and Apple run on all of you, which are the origin of some of the cooler, and some of the perhaps not so cool, more recent things. And so these are things that we built because we were interested in seeing how well we could scale computing systems to tackle really large problems. We've heard about large problems of DNA analytics or swarming robots; there are plenty of them out there, which you'll hear about some of; there are also some interesting small problems nowadays. Half of my day job is at a new institute called the Alan Turing Institute, which is the national institute for data science, where I'm rapidly learning machine learning, and one of the goals I have is to make machine learning and AI a democratic process, a DIY thing. One of the roles I could take here, as the token geek, is this: I have around 5,000 science fiction books, and in the back of early science fiction books or magazines there would be advertisements for growing things, Triffids maybe, but I kind of like the idea of having an advert that says you too can grow an AI in your bedroom. And so we'd like to make that something that was very easy to do, and understandable what the consequences were. So hopefully I can offer some comments based on that background.

Right, my name is Joanna Bryson. I am a reader, or associate professor, at the University of Bath. I study artificial intelligence. My first degree was actually psychology, and so my main interest in artificial intelligence is using it for
making scientific simulations for understanding human intelligence, animal intelligence, what intelligence is for, what cognition is for. So about half of my PhD students work on that question. The other half work on something called systems AI: it's about how do you build intelligent systems, how do you make sure they're safe, how do you make sure they're robust, how do you make it easier to build a human-like, real-time AI. And we mostly had been applying that to computer games, and a little bit to robotics, until recently. When I was a PhD student I was working on a humanoid robot at MIT, and I was just one of the many people working on it, but I got interested in why it was that people came up and said, well, it'd be unethical to unplug that, and I'd be like, well, it's not plugged in. And they'd say, well, if you did plug it in, it would be unethical to unplug it, and I was like, well, it doesn't work. And they said, well, if it worked... I eventually realised that people had no idea why it was that they thought AI was something that they owed an obligation to, and so since the early 90s I've actually been doing a lot of work in AI ethics, and I think that's one of the reasons I was invited here: I'm one of the few people that has been doing AI, human social science, and AI ethics for some time. [Applause] [Music]

So I'm Murray Shanahan. I'm professor of cognitive robotics at Imperial College London, and I've been working in artificial intelligence for about thirty years or so, in different flavours of AI. So for about 10 or 15 years I was working in what's now called classical, symbolic, or good old-fashioned AI, which was really based around building systems that used language-like representations and reasoning and so on. I got a bit disillusioned with that, and then I moved into thinking about how the brain works, how the biological brain works, and I spent a good ten years or so thinking about that and trying to build computer models of how
brains work, animal and human brains. And then recently, because AI has become very interesting again, really largely thanks to machine learning, I've been looking at how I can bring all these things together in order to create increasingly sophisticated artificial intelligence. And along the way I had the good fortune to get involved in the making of Ex Machina; I was the scientific advisor on Ex Machina. And I'm very interested in the cultural responses to artificial intelligence, and the implications of AI, the social and philosophical implications of AI, and where it's going in the future. [Applause]

Well, thank you all. So let's go straight to some of the questions. We've heard two terms there: machine learning and AI, artificial intelligence. There's a question here from Lisa, which I'll address to Peter first of all, which is: what are the main differences between those two things, or the difference between machine learning and artificial intelligence?

It's a great question, and I think it's a good place for us to start. Let me start by talking about machine learning. So traditionally, when someone programs a computer, they give the computer a series of instructions: the computer should do this, and then do this. And they can foresee various possibilities, so there can be if statements: if this happens, then the computer does this, and if something else happens, it does that. So in that sense, traditional computer programming is about foreseeing what might happen and giving the computer a series of instructions to cope with all of those eventualities. What's different about machine learning is that in machine learning algorithms, the computer programmer doesn't tell the computer how to solve the problem; it tells the computer how to learn the solution to the problem, how to learn its own solution to the problem. So you can think of machine learning algorithms, and in many cases they are rather like this, as being about learning from examples. The program helps the computer
to spot features in the examples and to derive its own patterns and rules, and to use those rules to solve the problems, based on many, many examples. And one of the reasons there's been a revolution in the last few years in the power of machine learning is because there's so much data; there are so many examples available of certain sorts. So machine learning, as Brian mentioned, is already here. Although the term isn't so well known, we interact with machine learning in many different forms already in our day-to-day lives. So email spam filters are based on machine learning; systems to detect credit-card fraud through unusual patterns of use are typically based on machine learning; when Netflix or a retailer recommends a film or something you might buy, that's based on machine learning algorithms, which are trying to predict, on the basis of what you've done so far, what you like, what you might be tempted to buy. When we speak into an Apple phone, to Siri, or to an Android phone and say "OK Google", the phone's ability to recognise the words we're using, that's a machine learning algorithm, and it's been enormously successful recently. And when Facebook and other programs tag people in photographs, because they recognise that this image looks rather like the person that's called Rebecca in the photograph you posted a week or two ago, again that's based on machine learning. And in a number of those examples, so image recognition and voice recognition, the best machine learning algorithms are at the stage where they have comparable, and in some cases by some metrics stronger, abilities than individual humans. Now, what's typical of all of those examples, the machine learning examples, and it will be true I think in the foreseeable future, is that machine learning algorithms are trying to solve very specific and very narrow tasks: recognise this image, work out what this word is, spot an unusual pattern in financial transactions. And if that's artificial intelligence at all, it's what's called narrow artificial
intelligence: good at a very specific task. By artificial intelligence more generally, and I think we'll probably talk more about it, I think we mean a much broader thing: we think of the idea of machines or computers that have cognitive abilities, across many different tasks, that are comparable to humans'. So machine learning is great at specific, narrow tasks, but it's not yet anywhere near the broad range of tasks that humans could undertake.

Joanna, do we sometimes mean more than that by artificial intelligence? Because the colloquial view would be, as you said, a humanoid robot, or, if you've ever seen things like Westworld or something like that, these things that are essentially completely multi-purpose in the way that we are.

Right, well, I actually have a slightly different definition. I totally agree on the definition of machine learning, but I have a slightly different definition of artificial intelligence; but then we'll get to the question you just asked, about why we think it has to be human. So one of the things that people mean when they say intelligent is they mean human-like, but we have a lot of words for human-like, so let's forget about that for right now. I like to think of it in a very computer science kind of way, which is that intelligence, for me, and actually psychology is like this too, is about being able to do the right thing at the right time. And that's assuming the world is changing, so we call that a dynamic world; then it's not always that easy to figure out the right thing to do at the right time. But by that definition, plants are intelligent. So a plant knows how to grow towards the sun; it knows to drop off branches that aren't getting enough light. Trees drop branches if they don't get enough light; they grow new leaves and foliage where there is light. So that's a kind of action, but a tree can't learn to get smarter, so we say that trees aren't cognitive. All right, so what does all that mean to do with intelligence? So I would define intelligence as something that
takes actions in the real world, and I would be happy to include, say, a thermostat that knows how to turn on the heat when it gets too cold, or the air conditioning when it gets too hot. That's like plant-like artificial intelligence, but we've had that for centuries, actually; we've had simple machines, called governors, that helped us for centuries. When people talk about artificial general intelligence, one of the things they do is they make the mistake of thinking humans are perfectly intelligent and can solve all problems, which we know we can't; we can't even necessarily figure out the right person to date, you know, there's all kinds of things we can't do, we can't memorise phone books, whatever. So that doesn't quite make sense. But the other thing is this assumption that if something was as smart as us, it would want to take over the world like we do. Well, first of all, most people don't actually try to take over the world; even very smart, and some very wealthy, people don't necessarily try to take over the world. But secondly, your phone, for those of you who have smartphones, I bet your phone can play chess better than you, and it can do better arithmetic than you can, but your phone hasn't even taken over your pocket yet, right? That's not one of the things we build into AI; we don't say, take over the world; that's not one of these pieces. So I think that's why there's some confusion: we project into things, because intelligence is ours. But actually, you know, chimpanzees are pretty smart too; we need to be more generous.

Does anyone else have anything to add on those definitions, or descriptions, of artificial intelligence and machine learning?

Well, just to expand a little bit on Joanna's definition: a definition of artificial intelligence that I quite like, or intelligence generally, is the ability to make decisions and achieve goals in a wide variety of
environments. I mean, it's actually that ability to do things in a wide variety of environments that's really important to general intelligence, and that's the sense in which a lot of our AI systems today are very special-purpose, such as the system that's in your self-driving car. Well, maybe not your self-driving car, but the one you'll have in a few years is a very specialist bit of technology; all it can do is drive cars. Even the amazing system that defeated Lee Sedol at the game of Go back in March, all it can do is play Go. Whereas humans have a kind of generality in their intelligence; they can make good decisions and achieve goals in an enormous variety of environments, and that's something we don't really know how to put into computers and robots yet.

No, I don't think humans are that good at generality. We are better than most other species; in fact, primates in general are great generalists, right? So that's why, up until 10,000 years ago, the macaques were doing better than we were: there were more macaques than hominids of any kind, you know, Homo erectus or whatever else. So monkeys are really good; they get into all these different continents. Suddenly, about ten thousand years ago, we got agriculture, we got writing, we got a bunch of urbanisation, and then we started taking off. So this general capacity is relatively recent, and it is correlated to intelligence, but I still think individual humans can only learn so much in their lives; I don't think it's entirely...

You promised to disagree with me.

I don't really disagree... well, I disagreed with it because I still felt like the case you were making was that we're this incredible generalist. But I still think that actually AI can learn as much. We already have, just in the last few months, AI has got to the point where it can do better transcription: that's, like, listening to us and typing it down, and a computer can do a better job than a
human. That's just stuff in the last few months. I mean, there are so many things; I don't think that there's a limit. Again, AI is an artifact, it's something we built, and so if we put all these things into one box, that wouldn't turn it into a human.

Peter... I'm sorry, Jon?

I was going to take the middle ground here. The thing is that machine learning has advanced so far with a large amount of data, and AI uses a bunch of machine learning techniques, so there isn't really a distinction except who's driving. So when we use machine learning, typically machine learning tools have been devised for humans to use, for fun and profit, almost literally for fun and profit. One of the steps that happens when you build an AI is that it has a level of intent, or autonomy, and it may be a very simple intent: it's been given an address to reach, in your self-driving car, or a disease to treat. But it will use a collection of machine learning algorithms that have been trained up variously to avoid colliding with things, or doing the wrong thing. But it's a goal-seeking system, and that is actually an extra step, which has always been around, even from 40 years ago in SHRDLU, where we could go back to the ancient history of AI: someone says, we want you to solve this problem, not just create a really good model. Where the machine learning tools have succeeded is curve fitting, better than anything previously, faster, more accurately. AI is doing that and then putting it in an artifact, which could be a robot, or it could just be a decision-support system, and having it make decisions or do things.

There's actually a perhaps related question, from Peter, certainly about this idea that our view of something that's artificially intelligent can be coloured by the way we essentially imagine that the more human it gets, the more intelligent it is. It's a question about science fiction, so I'll ask Jon. You said you're a science
fiction fan: is our relationship, this long relationship with the concepts of AI in science fiction, has that given us a loaded concept of what artificial intelligence is and how it will interact with society?

In a word, yes. This is very problematic, and the way I mentioned science fiction and being a geek, this is very unfortunate: the technology is quite often created by people because they're in a culture which says, oh, this is the next technology to build, not because it's the thing we should be putting all our effort into. And I would say that that's not entirely fair to the early strong-AI community, but certainly some of the more recent proponents seem to be hell-bent on creating Skynet, as far as I can tell. When you look at the creation of autonomous weapon systems, what else is it? And that's where a lot of the money is, and unfortunately it's devoid of, you know, an ethical background, which some people do have; they may run with it, but they're handed these models. Now, having said that, we were also handed, very early on, some strong ethical positions, and I'm sure many people are aware of Isaac Asimov's Three Laws of Robotics, which he revised later on to include a fourth law, a zeroth law actually, if you're a computer scientist, which was that the robots, in other words artificial autonomous beings, should do the best for society, not just prevent humans from being injured or coming to harm, or indeed each other coming to harm. It's a very extensive exploration through many stories, and this goes back 50 years; it's not recent. And so many people who have read that will come at the world from that perspective. And Asimov was a very interesting person: he was actually deeply religiously educated, wrote commentaries on religious texts as well as around 5 million words of science; he was a professor, I think of biochemistry, at Columbia University in New York. And so when he wrote science fiction he was informed by both sides, which I think is very rare. And
when you look at the bulk of the films and literature, it tends to be dystopic visions rather than an offer of the potential benefits.

There's actually a question about how we might control or manage artificially intelligent systems. For Sabine, actually, a question asked by Sam, which is: when you build these things, an artificial intelligence, how do you keep track of what the machine learns and, crucially, how that information is informing its decisions?

So, as Peter mentioned, machine learning is all about learning from lots and lots of examples, and this has been around for a long, long time. But the reason we're talking about it now is that we finally have the right amount of data, and the right quality of data, we finally have the computational power, and we finally have algorithms that allow us to put this all together. And so the question of how these algorithms work is going to inform what we can see of how this is happening. So let me just give you an example. Let's say you want to create a machine learning algorithm that can take a picture from a biopsy and, based on that, say: is this cancer or not cancer? Well, the way you would train this machine learning algorithm is you would give it the first image, which has been labelled by an expert as cancer or non-cancer. Your machine learning algorithm would make a guess, because it doesn't know anything, and then it would check: was I right or was I wrong? And it would go back and change what that algorithm has learned, so that for the next image maybe it makes a better prediction, and the third image it makes a better prediction, and over time it becomes better and better at making predictions about cancer or not cancer. Now, the patient, if the doctor is using this as a tool to say is this cancer or not, will want to know why that decision was made, and so that's where we want to be able to have a peek at what this machine learning algorithm is doing. And so some machine learning algorithms might tell you, well, the cancer cells on this
image are this size, and there are this many of them, and so I think there's this probability that this might be a cancerous tumour. Or it might say nothing, essentially, because a lot of the very powerful machine learning algorithms that we have today are black boxes. You might have heard of artificial neural networks; they are inspired by what the brain does, which is lots of connections, lots of neurons, and just like we don't really understand the brain, it's sometimes hard to understand what's happening in these black boxes. But we can do lots of tests, right? We could test this algorithm many, many times, and that might give us confidence that usually it's pretty good at predicting cancer. But I think what we need to remind ourselves of is that this is an important question, and we need to really figure out how to approach it, and that's going to depend widely on what type of algorithms we're using.

That was a great answer, and I'm not going to even pretend to disagree with you about that. But I will say another part of that, besides transparency and safety, is also this question about the whole system. So there's the Skynet scenario: you created it to do one thing, is it going to suddenly start doing something else? And the thing that we just heard about: maybe, again, if you had a human that could solve cancer, you would think, wow, that human is amazing, they might be scary, they might be going to take over the world. If you have a machine that takes pictures and then says yes or no, that can't take over the world. There's no way it can take over the world: all it has is pictures you've given it, and it doesn't have any arms or legs or anything, right? So what we call that in computer science in general is architecture. So there's an architecture about what the system is capable of knowing and what the system is capable of doing, and so a lot of the fears of AI are really ungrounded, because again we think it's going to be this
general-purpose ape like we are, that could go and, you know, deploy itself somewhere else. It's not the machines that are going to do that. What is possible, so one of the things I do worry about in terms of safety, is privacy. So if you can recommend what is the movie you want to watch, and you can recommend what is the book you want to buy, then you can also guess how you are going to vote, all right? And then you can decide whether to try to encourage that person to go to the polls or to stay home; you can tell them, oh, all the parties are the same, don't worry about it, stay home, and then that affects whether or not that person goes and votes. So this is a mechanism of social disruption, if people or organisations use it that way. But the machines themselves are not the things to be afraid of, because we can build them with limits on what they can do and what they can know.

That especially raises the question: are you suggesting that we shouldn't then try to build generalist AI machines, for that reason?

OK, first of all, again, this goes back to where Murray and I disagree. So the algorithms that we were talking about are very, very generalist. Deep learning is a very, very generalist algorithm; you can apply it to an awful lot of data if you format the data the correct way, and that's great, it's really useful. Just like, you know, synaptic learning is really useful for brains, and you can use it in little sea slugs or something, or you can use it in apes, and that's great. So it's not that I fear generality. It's that, I mean, maybe I could fear little nanobots that are somehow programmed to construct armies or something like that, but that's really unlikely, and it would be really expensive too; it's cheaper to use humans. So it's not the generality that's the concern. The concern is that while people are standing around thinking that you don't need to be afraid of AI unless it looks like
Arnold Schwarzenegger, we aren't noticing the other ways that we're changing our society, like these issues about losing our privacy and losing our anonymity.

We have a question, actually, about the near future. Perhaps I could ask Peter first. The question is: where might we see machine learning being used over the next five to ten years? Now, we've given some examples, but if you speculate, say, in a decade's time?

I think it'll be very, very widespread: in some places in things we interact with directly, but often behind the scenes, as things that go on in our world. So we've had some examples already. Healthcare, I think, is one area where machine learning has a huge amount of promise: helping doctors get better at diagnosing particular conditions, choosing the right treatments, potentially even doing operations. We've talked about driverless vehicles; I think they'll be part of our lives within 10 years. We haven't talked so much about public services, so there are opportunities for public services using machine learning algorithms to get better at targeting interventions, to try and be better at spotting a child who might be at risk in a particular care situation, or a young adult who might be at risk of reoffending, and amongst the many such cases that they have to deal with, with limited resources, to target those rather better. There are examples in New York: the New York City Fire Department is using machine learning to get better at picking houses or buildings which are in danger of fires, and again targeting interventions, or tailoring services, working out where the real needs are for the different public services and tailoring those. And there are lots of changes I think will happen in business and industry. Machine learning is already a big part of finance, financial trading; we've talked a bit about credit card fraud; also many lending decisions machine learning plays a key part in. I think the pharmaceutical industry will use it to get better, they're pretty bad
currently, but they'll use it to get better at developing drugs. Manufacturing will use it; logistics too. It's already the case that some companies - Amazon is one example, Ocado another - use machine learning algorithms to design their warehouses in a way that minimizes the effort involved in filling particular orders, and I think that will become much more widely used. And finally, academic research: people like myself in genetics are trying to solve problems that involve large and complex amounts of data, and I think machine learning will be an increasingly powerful tool for us as well.

Sabine, you've described yourself as a swarm engineer - presumably that's about how these machines can interact with each other. Do you see that being used in the mainstream in the next decade or so? For the mainstream in the next five to ten years, it's worth reminding ourselves that machine learning is really hard. I spent a whole PhD solving one simple task in a laboratory setting, and five to ten years is a really short time - that was almost ten years ago, when I started my PhD. So what I see for the future are things that solve specific tasks - things that are very, very narrow in what they can achieve. The way I like to think about it is: if you were to dream, in your day-to-day life, what would be helpful for you - as a doctor, as a caregiver, as someone who needs this technology to unload some of the burden you carry, so that you can focus on the things you're most interested in? All the tasks Peter mentioned I think are very good ones. On the swarm engineering side, there is definitely interest in making robots work together, but swarm engineering is still a long way off.

Any other speculative contributions? I'd like to add personal assistants to the list of applications - the kind of thing we've seen just very recently: Amazon's Alexa became a huge bestseller just over the Christmas period, and of course we already have Apple's Siri, Google's Assistant, and Microsoft's Cortana. These kinds of personal assistant technologies, which basically involve a speech interface with a computer, are going to become increasingly prevalent. Anybody who's tried out these assistants will pretty quickly realize their shortcomings: they don't really, in any sense at all, understand the things we refer to in conversation. For example, when I got one of these Amazon Alexas and asked it who was the world record holder in the women's marathon, it came up with the answer Paula Radcliffe. But when I then asked "how fast did she run?", it had no idea what I was referring to by "she". And if you went on to ask another question, such as "how many legs does she have?", it would have no real clue that you were actually talking about a human being, who normally has two legs, and so on. So what I think we're gradually going to see is these kinds of personal assistants gaining a better understanding of the everyday world - they'll be able to understand the terms we use in language much better than they do at the moment - and I think that's going to be a big growth area.

There's actually a question in from Jackie - and I don't envy anyone in the audience who is a professional personal assistant, who might be getting worried about this - which is: any recommendations for tech-proof or AI-proof careers, given that it probably won't be a personal assistant? A plumber, I think, actually - and I'm not entirely joking. Jobs that involve physical manipulation in, say, a house - where every house is completely different, every plumbing job is completely different - and many other manual jobs like that are, surprisingly, I think, the kinds of things
that will be among the last to go. And then of course there are many creative occupations that will also be among the last to go - I'm sure we've all got our favourite examples. Another set of examples are careers where personal interaction is important - many of the caring professions as well; I don't think those are going to be quickly overtaken by machine learning. And obviously another career worth considering in an age of machine learning is going into machine learning or computational intelligence itself.

But it isn't just about throwing every possible learning machine into a box - it's about human-likeness, about being an ape and having the physical experience, and we can actually learn a lot about what that is and reproduce some of it with AI. To generalize from what you were saying, though: of all the models we build, we have a lot of ideas about what it's like to be a human. One of the great examples for careers is automatic teller machines. What happened to bank tellers when you got a machine you could punch numbers into and get your money from? Well, initially you don't need as many tellers per branch any more, because a lot of the easy things are taken care of. But there are actually more bank tellers now than there were before there were ATMs. Why is that? Because every bank branch is cheaper now, so banks have more branches. And what are the humans doing? They're not counting notes very much any more; instead they're working with the other humans who come in - helping, being empathic, understanding their problems and channelling them towards the right kinds of services, if it's a nice bank (otherwise they're upselling). So many jobs have some kind of utilitarian value but very little enjoyment, and that's a perfect case where replacing them with machine learning and modest intelligence seems eminently suited.

Most jobs and professions have things in them we really enjoy doing. On this stage, for example, there are people who play music or act, in fiction or even documentary drama, and that's an amazing thing - those are things I can't see why you would replace. You could, but why would you? I'd go back to some of my personal experience. Before I went to study at university, I spent a year teaching catch-up maths classes, and the most enjoyable part - I think I was okay at it - was figuring out why somebody didn't understand something, in what way they didn't understand it, and then working out, with them, a way for them to understand it: actually trying to put myself in their shoes and find the strategy they could use to tackle the problem they were struggling with - which is difficult to do in a class. So this is the thing: if eighty percent of people no longer had boring jobs, we could all be in that situation, in small groups, learning more and more things. We would extend the range and scope of human knowledge in each lifetime by factors of maybe two, three, four, five - I don't know; we don't know, we haven't tried it - but I suspect it would be a really worthwhile thing to do. In a lifetime you could learn to play five instruments, act in three productions - or maybe you'd be tone-deaf and terrible at that, but really good at gardening or sculpting, or at teaching somebody else to do those things. So I don't see a shortage of things to do: if you remove all the mundane, utilitarian parts of labour, you leave the things that are enjoyable, and you free up time for doing more of them. I may be wrong, but that's how I see it. To add to a lot of what's been said: I think machine
learning is about doing tasks, not necessarily jobs, so it frees up more time for us to focus on the aspects we want to do: caregivers could spend more time caring, doctors could spend more time with their patients. But there is a need, in terms of being future-proof, for more engineers - and especially we need more women and more minorities within this field; that's something we should be mindful of. Not everyone needs to be a programmer, but I think everyone needs to be literate in the technology, so that the doctor sees machine learning as something that could make their job easier, and the caregiver can see something in machine learning that could make their job easier. It's this mix - more people who can design the technology, and more people who can use the technology in the proper way - which hopefully builds things up so that we have more time for the creative side, for what we really care about.

I want to add to something John said - this very important point about whether or not we still have drummers. In artificial intelligence, one of the things I've been saying for decades is that we make a big exception about it, when sometimes it's not that big of an exception. This was already a question when people brought in recorded music: how is this putting drummers out of business? And this issue about whether or not we have enough money to pay people to come and paint our house - first of all, it's a decision that we make, when we choose whether to have live music at our wedding or a DJ, or whether we want custom-made furniture or factory-made furniture. This is not something driven by some exterior force; these are decisions we make as individual consumers. And then there's another issue, about how the economy is shaped: whether the money is distributed - all going up to just a very few people, a very few companies - or whether we apply taxation and redistribution such that more people can afford the capability to hire real drummers for their wedding, or have people come and paint their house by hand and have interesting murals instead of plain white walls like we have right now. So I want to point out that this is not just about "what are evil technologists going to do to us": this is a decision that we as a collective have to make about what we want our future to look like.

Yes - most of the discussion so far has focused on these things as very specific tools, almost glorified screwdrivers in a way, for doing specific tasks. But we have a series of questions which I think point to the broader idea of building very intelligent systems. For example, one from Ramen, who asks: what ethics ought to govern the development of artificial intelligence? Perhaps I could ask Murray - I suppose that really is beginning to talk about sentient machines, ultimately. Well, you can ask those ethical questions about the shorter-term, tool-type AI as well, and they're probably the more urgent questions to ask. Sometimes people discuss the possibility of having some kind of regulation of AI research, and I always think that's a somewhat misguided idea. But what you can do is have regulation and ethics that govern particular application areas of artificial intelligence. So you can look into privacy in the context of personal assistants, for example, which are getting a lot of data about the things that you say to them, and you can imagine all kinds of ethical and regulatory frameworks governing how that data is used. Similarly with autonomous weapons - you may
think banning them altogether might be a very good idea. So I think you can consider ethical questions one application area at a time in the short term. In the longer term - well, I think some of these questions are alluding to the possibility we're drawn towards, and see in science fiction, of something that is very human-like and seems to have consciousness and so on. I don't see any theoretical reason why it's not possible to build something that has those attributes. But if we get to the point where it looks like we're capable of building those things, we should ask ourselves whether we really do want to bring them into the world - whether that's a good idea or not - because if we brought something into the world towards which we had some kind of moral duty, then we should surely think twice about that.

That perhaps prompts a question: does anyone see a fundamental reason why we couldn't, at some point in the future, build a thing you would say was conscious - something that passes the Turing test, say? Well, there's a difference. You may not believe this, but remember how I said we thought "intelligent" just means being human? The same goes for "conscious" - there are so many things we mean by that word. If by conscious you mean self-aware, then the basic problem with computers is that they're too self-aware: they have access to every bit of their memory, and that doesn't do them any good, whereas we don't have access to all of our memories. In computer science terms there are combinatorial problems with knowing too much, and the whole point of machine learning is a sort of generalizing - shrinking all that knowledge down to a representation that can be used in more situations. So I don't want to talk about consciousness, but, to go back to actually being like a human, as Murray was just discussing: the British were actually ahead of the game here; this is something on which the UK is world-leading. We had a policy document, six years ago now, called the EPSRC Principles of Robotics - the EPSRC is the Engineering and Physical Sciences Research Council, the body through which the British government funds AI research. It had five principles, the first three of which correct Asimov's laws. First of all, to make them computationally tractable: Asimov was not a computer scientist - he had PhDs, I believe in biology and chemistry, but not in computer science - and Asimov's laws are not computationally tractable. It's impossible to foresee everything you're going to do and all the consequences of your behaviour; we can't do that. But secondly, and more importantly, we were looking at AI as a manufactured artifact, and asking what you want from a legal, commercial product. We said: we want to disrupt British law as little as possible, and we think AI just fits in - we can think about AI as a product, and then make the choices to ensure it is a good and safe product. In that case, making an AI that could suffer - and I think there are real issues with whether you can make an artifact suffer, not because suffering is magic, but because by definition suffering is something you would avoid at all costs; if you build a safe system you build it modularly, you tend to put things in in a modular sense, and if something were to be avoided at all costs you'd take the module back out again. So I think there's a problem with suffering in particular, but for most things it's not that big an issue. I'm going on too long. Well - there are two linked questions here; perhaps just very briefly: you started
answering this one. It's from Adam - perhaps I could ask Sabine and John: do you believe we need to start building a framework of governance for AI technologies? I suppose, in a sense, we've just heard that we have started - do you think it needs more work, or is it sufficient? I think at this stage what we need is more data, more information about where we're going with machine learning and what we want to use it for. I think standards are wonderful. From the point of view of government, making data open, and explaining why decisions are made, is also best practice. And what I really like about what we're doing at the Royal Society is actually asking the public what they think, because a lot of what we're seeing happening this year is because the public don't feel they have a say in the way technology is progressing. So the discussions we're having, asking the public what they think - only nine percent know what machine learning is, which is a very small percentage - are, I think, going to help us understand how government should build this framework, and inform the direction machine learning takes in the future.

Peter - is it similar to questions of, for example, drug regulation? Yes - there are many, many areas where scientific input is required, but where the scientific advice is one component of a much wider debate that you absolutely have to have in order to build these regulatory frameworks. John? I think there's a missing step there. There's a similarity, but also a difference, which is - as I mentioned, I think we're roughly on the same page that AIs incorporate machine learning - if we're just talking about machine learning: machine learning algorithms are, to a greater or lesser extent, transparent in how they operate. Some of them create models which are explicit and visible; some of them could, if whoever wrote the algorithm would let you see it. So what Facebook uses to order its news feed, or what Netflix uses to make recommendations, is not transparent - it's commercial-in-confidence - but some algorithms are not even amenable to that. At the moment, that means the system is trained on some data, the data may then be discarded for privacy reasons, and from then on you actually don't know why the system's outputs behave the way they do. And this is the kind of new area where, I think, computer science has led us into a rather weird situation. We could say, well, actually it's not obvious why people do things the way they do either - you only have to watch the last episode of Sherlock to be struck by this: there are many moments of "what on earth is that guy doing?", and he's the detective. However, we're constructing these systems, and we have some kind of responsibility to make them effectively legible - in other words, so that people can see why they make the decisions they do - and perhaps to give people agency: power over, and access to, them. And in fact there is a new European law, the General Data Protection Regulation, just coming in - maybe just in time, or not - which actually says you will have the right to have a decision about you, made by an algorithm, explained to you. That doesn't necessarily mean you get to download all of the training data and run the neural net again with your data taken out, but somebody has to go and figure out what it will mean. And I think that's a level of governance that isn't just being explored - it's quite mature, in that somebody is trying to cast it into law right now, without spelling out the technical way it will be done in every case. In many cases it can be done; in many of the older machine learning cases it can be done quite easily, which is a good thing. So if we want to remove recommendations that are essentially unfairly based in
race or gender, or some other factor that shouldn't come into it, then we can do that in most machine learning. But there are other systems where - and this is a rather weird thing to say as a computer scientist, part of a community of people who make these things - it's mysterious how they work. We're building a thing, and then we could get into a discussion about making things, and being like gods who don't know how their own creations work.

We have to pause there, because we're going to have a demo in a minute, but I just wanted to bring up something we spoke about backstage: you said to me that these algorithms can do things that are not only unpredictable but surprising, unusual - because we don't fundamentally understand how they're making their decisions. Yes. The problem, specifically with neural nets - which are the deep learning technology of choice, and the most effective - is that they do a trick which seems odd, called dimensionality reduction. In the way they work, they are effectively discarding information because it appears to be irrelevant. This maps rather well onto the physical-world problems we talked about earlier, like image processing or voice recognition, because the inputs have physical properties that are local. But it means they're fooled by weird things. There are classic examples of images you can feed into a sort of Google image search: you can basically give it a Rorschach diagram and it will find penguins, because the algorithm will faithfully match on random black and white blobs. And that, unfortunately, is not explainable. It's actually a little like the optical illusions humans are susceptible to - glitches in how we've evolved to detect things. This is similar, I think, but we don't have obvious ways to fix it. In the older machine learning systems, we have ways to look at the model - it's explicit, inside the code even. If it's a Bayesian inference system that learns latent variables, you can say: "oh look, it's learned this association between these things; we didn't expect that; there's a medical cause of something" - great, that's a system that explains itself. These current deep learning systems don't quite manage that, but there's a lot of work going on to fix it. Can I give a positive example of a bit of behaviour that's hard to explain? This goes back to the AlphaGo program - DeepMind's AlphaGo program that defeated Lee Sedol back in March at the game of Go, this extraordinarily complex Eastern game where you place black and white stones on a board. In one of the games, it played a move that all of the Go masters - very advanced masters - thought was a mistake, because it's not the kind of move that any human Go champion, any great Go player in history, has ever played. It turned out this was a masterful move that opened up a whole new way of playing Go, which they've been very interested to explore since. AlphaGo went on to win that game, and to defeat Lee Sedol, and that was an example of a kind of creative move from artificial intelligence: it did something extraordinary, the programmers didn't understand why or how it did it, and even the masters couldn't explain why it was such a good move.

We're going to break into the Q&A now, because we have a demo - quite a spectacular demo - of how machine learning can be used. So, lights up, and welcome on stage Mirko Kovac. [Applause] Mirko, as you will see, is going to give a brief talk and then demonstrate drone technology. Okay, thank you very much - and thank you also for the interesting discussion; it really inspires me to think of the future and how it will look. In fact, it connects a lot
to what I've been thinking about for many years - particularly future cities, and how intelligent future cities could be: what your life would be like, how we all get around, how our homes look. There is one vision you see here on the screen, which shows how a future building could actually be an intelligent system - similar to an organism - with the capability to sense and to interact with humans, where drone technology could interact with the building and with humans, and in that way form an entire ecosystem of systems that interact and provide us with wellbeing and our everyday needs. That is what my group at Imperial College is focusing on: developing the flying robots, and in general the robotic technology, to enable that.

Let me share two examples with you today. The first is this. The idea here is really to overcome one of the limitations of drones, which is flight time - you can only fly for about ten or twenty minutes. One approach to that, and to reaching hard-to-access areas, is to go there and perch: to attach to a structure and stay aloft for a much longer time. You see the concept here: one drone goes and builds a support net - like a spider's net - for the next drone, and then they collaboratively attach to it and observe the environment for longer durations. You can imagine this really extending the duration of sensing, or simply providing for needs we have: to sense pollution in cities, to sense different ecological markers, to protect environments as well. We've built this in a laboratory environment - the drone you see here is about as big as my palm, so it's a very small drone - but what you see is that autonomous control systems can allow it to build this kind of spiderweb structure. The philosophy behind it is that we look at nature - at spiders - and extract the key principles of how they operate, how they build, how they sense. In fact, spiders sit in their nets and even sense the environment through the net they have built: they don't just use it for perching and catching, but also for sensing. That is one approach we've developed here, at the concept level. Good - so like this it attaches, it can dangle, and it flies out again. Another example would then be to use several of the drones to build larger spider webs, to catch other drones, or even to construct different structures - make a bridge, for example - using these tensile elements to build those kinds of strategies, as you see here.

The second project, or second idea, I'd like to show you today is to use the same approaches that nest-building birds use to repair structures - to repair pipelines, for example. This is the "build drone" concept. You see there is a pipeline leak, and we have a drone that goes and deposits material, like a bird, to repair the leak. We've shown this conceptually here: it flies in and deposits polyurethane foam, for example, to do that very precisely. This is a classical control system built in here, so it doesn't learn how to do it - but it could, and learning could actually improve the performance; we also need to develop the mechanics to make it better. One aspect of making it more precise - because wind might perturb it, for example - is this small delta-arm system you see here, which balances out the movement of the drone. There might be wind, there might be inconsistencies in the sensing - sensor drift, for example - and the delta arm can compensate and stay on the spot. Here again it's the philosophy of the animals, but it's the best of engineering
that we have, built in. Machine learning really sits in between the two: it also looks at nature in some way - at how humans and animals learn - and then builds that in using the best of our engineering knowledge.

Now, we have our artificial agent with us today, which is the build drone, and if you don't mind I'll reveal it for you. So - it's one that lives with us in the laboratory; it's our friend. In many of these visions we see drones as friendly technologies. Just to illustrate how this looks in flight, hopefully we can make it fly. Given that it's your friend, does it have a name? Yes, it's called the build drone, but I'm not very happy with the name - in fact, if you have a better idea, please tell us. Well then, at the risk of it getting called Boaty McBoatface - there's a hashtag, RS science matters, so if you want to tweet possible names for this - what is it? I was going to say "this chap over there" - anyway, if you tweet names, I don't know whether he'll accept the most popular - it's going to get called Boaty - but if it's a nice one, I'll think about it. We really are looking for one; we need a very nice name - "Bob" is not interesting enough, really. So: hashtag RS science matters, and we'll look at those. Okay - can we try to make it fly? I hope it's in a good mood today. You never know what it's learned.

So here you see, generally, how it works: the stability to stay on one spot is done, again, with a PID controller, for example - classical control. But even this, how to balance, could be learned - machine learning could be an approach to that, and in fact some people are looking at doing exactly this for the next generation of these systems. The other thing it can do is use this delta arm - you see the movement it can make, so it can move around a bit, as you see. Okay - so it could also fly and not just do patch repair: it could manipulate the environment, solder things or cut things, and really be an extension arm - like an elephant's trunk - that goes and manipulates the environment and really makes a difference in industrial inspection or home maintenance tasks. Okay, thank you. [Applause] And I should say that colleagues over there are actually standing by, controlling the drone - so there's some remote control involved, of course. How autonomous is this one, and how autonomous could you imagine them becoming? This one is autonomous on the control level, so the things it does are autonomous. But one point I think is very important in the whole discussion is: what is intelligence? Humans are a bit brain-focused - many of us, especially scientists, including myself, I guess - but it's not just the brain that constitutes intelligence. There's also emotional intelligence, intuitive intelligence, and physical intelligence: the physical body has an intelligence built in. For example, if you take your hand and press in the middle of your palm, the hand will close passively - there's no control needed to do that; it's really the mechanics of your hand. In a similar spirit - I have a slide on that, if you don't mind sharing it - we look at ways to attach to objects, to perch on walls for example. If you have a lot of control, a lot of weight, a lot of sensors - like all the way on the right - you can do a complex manoeuvre to attach. But when you get down to the size of insects, or very small systems, you cannot afford all that sensing, so you need physical intelligence. What you saw before is the spider-inspired perching, where the string itself makes the intelligent attachment to the structure. I think this embodied intelligence is another layer that we have to incorporate in our machine learning approaches and
philosophy as well well I said does anybody have any any questions father yeah you said the robot was your friend how many of your friends do you keep under blankets like that like the birds you also make them sleep at nights right so yeah we do that with pets and dogs too we say our dog is our best friend then we leave it locked in a cage all day again these are issues that will probably come up in the second half of questions yeah but depending on its personality it might get scared when it sees all the people so that's my - well yeah I mean you are referring to you and throw poor Martha sizing is that the right word yesterday I mean how how fast do you see these drones we hear about delivery drones for example in particular from Amazon I think how how soon will it be before we it's absolutely commonplace to see drones like this around in our environment as you showed on that slide so if you can go back maybe to the slide I think a lot of that we are to the vision slide I think so there is a lot of this already happening so drone deliveries are reality today in Rwanda for example there was a large project on delivering medicine and they are becoming ours or part of our cities Lord Norman Foster developed this drone ports as part of a city so how do they interact how to drones interact with cars with humans with buildings and I think we need to think of it as an ecosystem again looking at the biologic biology how biology works and how it has worked for many millions of years and if you do this right if it gets these interactions right then we can create a technology that really is the ethical supportive symbiotic reality and technology where humans can increase their well-being and the drones and robots taking care of all the things they there is dangerous for humans for example because we talked briefly about the the regulatory framework that's been put in place partly already but what also needs to be done I mean I noticed that we talked early but that one's 
tied down to a big weight, in case it goes crazy and flies out into the audience and injures somebody. So, in terms of drones, at the moment drone flying is quite heavily regulated. Do you work on developing that framework, particularly, I'm thinking, for autonomous drones that are not controlled by people? Yes, and there's a large push in that direction in different countries. The US in August released a new framework for drone operation, where you can simply do an online test and sign up to operate your drones; in the UK there's a similar movement happening just now, and a few weeks ago a similar initiative was launched at the EU level as well. So now it's a bit of a drone race of who creates the ecosystem, across the economy, regulation and education sectors, to really attract the startups of today in this sector, which are the Airbuses and BAE Systems of tomorrow. So the question is still open, and it's still small companies, so a lot of it comes from the grassroots type of development, but it needs to be worked on, and it needs to be supported innovation, I think. Do you think the interaction between governments and the public and research scientists like yourself is adequate at the moment, or does it need to be improved? I think a lot is already happening, and what is important is to think a bit out of the box about what can be done with drones. Delivery is one thing; it's the first, most obvious step in some ways. Photography is another one. But what else could you want to use a drone for? What is your vision? How would you do it? And I think if we all work together on the ideas, on the collaborations, and create this from all of us, today and tomorrow and over the following years, then I think it starts from the right place. So it's on us; it's like a crowdsourcing-of-ideas initiative that can make a difference in this space. Honestly, regulation; one of the things
about regulation is that it's not only about stopping you from doing things; it's also about encouraging, so regulations can include things to encourage innovation. But speaking of such regulations, one of the things Sweden has done is to say there cannot be cameras on drones, because they thought that was the only way they could solve the privacy problem. So it isn't just about scrambling to get the innovation there first. There was actually something similar about mobile phones: mobile phones were taken up in Europe much faster than in the United States, and the reason is that in Europe the person who called you was charged for having called your mobile phone, even if they didn't know they were calling a mobile, and in America that was seen as unfair. The consequence was that people here were much faster to buy them, because they were cheaper to own; but then you had really outrageous costs for calls for quite a long time, until the EU saved the day a year or so ago by clamping down on these cross-country charges. Oh yes, you mean roaming calls; that was the most recent thing. But yeah, so anyway, I just want to say that we should think about regulation and government as a way that we come to an agreement among ourselves about where we want drones, how much we want to hear them, how much we want them flying over our own yards, looking in our own windows. It's something where we can see the positive advantages of drones and we can see the negative consequences, and we can't just individually guess all the consequences, as we were discussing with the surprising things earlier, so we have to keep exploring, and we use the government; the government is the way we coordinate our desires. So it's very important that we're invested in trying to make this stuff work. Maybe just one example is the Red Flag Act; you may remember that back when cars were invented, there was a regulation that
someone had to walk in front of the car holding a red flag, and of course all the innovation then happened in France, not in the UK. So I think that's one of the dynamics that can happen. Now, today, I think a lot of it is really about the fear of people not accepting new developments: is it going to crash or not, privacy; but a lot of other risks we accept quite easily. So I think once we overcome this fear, this initial acceptance fear, it will just bootstrap completely. So you think that this image of drones being absolutely part of our environment is going to be the way it is in a decade, two decades or so? Well, I have no doubt it will be an integrated system. Everything around us will become more robotic, more autonomous, more sensorial, more interactive, and it will get to the point where we won't call it a robot anymore, we won't call it a drone; we'll call it a jacket, or a house, or a car, and we'll treat it like mail delivery, you know. So I think this will happen, but we are not there yet, and I think we need to keep in mind that there is a future. One or two hundred years ago people wouldn't have thought about iPhones or Android phones, or a competition between the two, so I think there is a lot more to come. And Sabine's is a very good point, isn't it: if you go back 50 years, we would be having this ethics discussion about mobile phones, or about cars if you go back a hundred years or so. So do you all think that really we're having a discussion here that's going to seem arcane, or rather quaint, in 20 years' time? Well, you know, if you took a mobile phone back 500 years you'd probably be burned at the stake, and not just because there were voices coming out of it, but also because it could do maths; maths was something that only humans could do, it separated us from the rest of the world, it was part of the evidence of our divinity, right? So having a machine that did maths would have been a big issue. I think that, and this is again, it's not
determined for sure, but I think the most coherent thinking is, as you've said, that it becomes part of the infrastructure. But that isn't to say that we shouldn't be wary of it. It is an issue then that our clothes, our forks; you can get a fork on the Internet, I don't know why, but people sell connected forks, right; so do you want to know whether or not your fork is sniffing your passwords and passing them on? One of the problems with the Internet of Things is that a lot of these devices are super cheap, and so they aren't going to get upgraded when people find security problems with them; they're just going to say, well, buy another light bulb; and then a lot of people aren't going to buy another light bulb, and then we have all kinds of security holes in our houses. I actually just got one of those today; not the baby camera, the delightful one where you talk to your phone and make the light change colour; it's the most amazing thing, it just arrived this morning, I don't know why. Yeah, good fun. Actually, Sabine, thank you very much, and also thank you to your colleagues; thank you very much. It's a very cool thing, the light bulb. Actually, you know, it was ironic: Princeton had a meeting on the security of the Internet of Things, and it was the exact day that the entire Internet was brought down on the east coast of America; well, not the whole of it, Google is really good at defending itself, but most of the internet was brought down; Netflix was brought down, Twitter was brought down, by a botnet that was running on baby cameras. You know, you want to have your baby camera on the internet so you can go party and then still look at your phone and see if your baby is still there, right; that's what they buy them for. But again, those are unsecured CPUs; I mean, CPUs are so cheap and so easy to make, and people were doing denial-of-service attacks by making botnets, and it was two weeks before the
American election, and they took down the internet on the East Coast and left it down; I wonder if it had an effect. Now, we've got about 15 minutes or so left and I've got a lot of questions to get through. One from Paul, which is much related to what we've spoken about: should we expect some backlash against AI technology, developers and companies, in a similar way to what we currently see against globalization? So I think it goes back to this question of bringing a benefit to the public. I was recently at a conference on machine learning, and if you looked at the name tags, they were Facebook, Google, Amazon, Uber, Tesla; a lot of the big players, because they're very excited about machine learning now, and they're growing, and they have the three components right: they have the data, they have the computational power, and now, increasingly, they have the talent, because a lot of researchers are going into industry. And so I think concentrating all that power in the hands of a few people is worrisome to people in the general public, and so what I think needs to be done is to think really carefully about how the benefits can be shared; whether it's monetary benefits, whether it's the services, because they do provide services that we use every day and that are a real benefit to us, or whether it's actually making it more open in terms of how they do it. A lot of them have been really good at sharing their algorithms, which I think we should really applaud them for, and that has the potential to democratize the technology, so that the vibrant startups that are coming up can use it, so that the person who has a bit of data could use it for themselves as well. But it's a tricky thing to do well, and so that's something that people are mindful of right now. I think it's one of the biggest challenges we have as a society, and I think we need to think hard about it and act quickly, probably, as Joanna mentioned earlier and
as Sabine just said. So machine learning algorithms will improve efficiency, at least in the areas where they're good; that's likely to improve productivity, and that will probably lead to increases in living standards. So there's a kind of benefit there, some increased productivity, and I think the opportunity for us, and the challenge for us, is to think about how that productivity dividend of machine learning algorithms is going to be used and how it will be spread. I suspect that if we don't do very much, if we're not active, what will happen is that that dividend will end up in the hands of a relatively small number of companies and individuals, and the obverse of that is that there will be people whose livelihoods don't exist anymore because they've been replaced; and the kinds of trends around globalization, which are already much more of an issue than I think at least our politicians had realized until fairly recently, all of the disaffected people, will just be exacerbated by that. So if we don't do very much, I think that might well happen, and the sort of backlash, Brian, that the questioner foreshadowed may well occur. But, as several other panelists have said, it doesn't have to be that way. I think we as a society have the opportunity, if we act quickly, and we think hard about it, and we act with some gusto and some courage, to make sure that productivity dividend is spread so that everyone benefits; so that instead of increasing inequality in society, the benefits of machine learning could in principle, and I think many of us would agree should, but could in principle, be used to help society, to help equalize things, rather than to increase inequality. We spoke earlier backstage about how you begin to get into political choices, such as, for example, universal incomes, because what you're really talking about, if this works in the way that we're all imagining it
could, is that, in the golden scenario, you give people free time to do creative jobs and not menial tasks. Jumping in really quickly to say: exactly what was just described happened in the late 1890s and the early part of the 20th century. The same thing happened then; it wasn't AI then, it was oil and things like that, but we had this massive income inequality; it also led to massive political polarization and mayhem, and it led to the world wars, right. So this is a problem we have faced before, and exactly what you just said is incredibly important: it isn't something special about AI, it's something about technology in general that tends to allow for these pilings-up, and so we need to figure out how to damp that back down. People are just thrashing right now, looking for good solutions, because it's not clear, and that's partly because of the way the economy is. We've been pretty bad as a society at all the previous opportunities to do things like this, and I think we just need to get better, and we need to get better fast, because I think the thing about AI is that the developments are happening quite rapidly, so we really need to get on the case. Yeah, we did solve this once before: we had a period between 1945 and 1978 where we did keep wages pretty much equal; the rate of wages went up at about the rate of productivity. But that was a weird exception, and I think it was probably because people were trying to solve the problem of communism and all this chaos that we'd had in the previous 50 years; but we need to get back to making that a priority, because that worked. But you're right, it's moving faster now with digital stuff. And I think a vital part of the right sort of strategy for this is education: educating people to be very flexible in their careers, and very creative as well, creative and innovative; and, you know, we need new kinds of people to live in a
different kind of way, perhaps, and I think that depends upon education being right. Now, we've only got about six or seven minutes left, but the last bunch of questions are all quite similar; they're much more speculative, and we've sort of touched on this. Beyond this idea that we've got these useful machines that can increase productivity and enhance our lives, hopefully, there is this question of having real intelligence. I know we've touched on it, but there's a question here from Angie, which is: how realistic is it that a machine will (a) have an agenda, (b) be capable of acting on it, and (c) act to our detriment? So there are three parts; I don't know who'd like to address that. Let me have a first go. I think in the short term, as we've heard, machine learning algorithms are very focused on specific tasks, so they can't have their own agenda to do something else. There are still challenges there, though: those algorithms can in principle do harm, even though the machine learning algorithm is trying to do what you told it to do, because of the law of unintended consequences. We've talked a little bit already about the possibility; machine learning algorithms learn by example, and there are obvious examples in algorithms that have been developed to vet applications for jobs, or applications for university courses, where they're given a set of examples of what humans have done, looking at a CV and saying yes, go to the interview stage, or no, reject them, and the machine learning algorithm then spots the patterns and develops from that. Humans have implicit biases, sometimes explicit biases, but often implicit biases, and machine learning algorithms, if there are implicit biases in the examples they're learning from, will continue with those biases, and so there are those sorts of harms we need to be careful of. There, I think, there's some hope: there's the possibility, and it's an area of ongoing research, of developing algorithms which are demonstrably fair,
where you can say it's provable that no aspect of a person's ethnic background, for example, is related to the decision; there's research on that. So machines could do better than humans, but again we need to be careful of them inheriting biases; so they can do harm in those sorts of ways, but I don't think, in the short term, there are issues of them having their own agendas, for example. There's actually a question which is a more positive take on a very advanced AI, John, and the question, from Vicki, is: could an AI eventually answer questions about the universe that scientists spend their entire lives trying to solve? Right, could they replace us? It could be your retirement plan. Yeah, mine too. Unfortunately, yes; and in fact we're further along there than on some of the other hard AI problems. So, computer theorem provers have already coped with Fermat's Last Theorem and the four-colour map problem, which defeated humans for hundreds of years, and those are really, really hard mathematical problems; and a large amount of physics, certainly at the sharp end, incorporates a lot of maths. But also we have algorithms now that are fairly good at analyzing where the gaps are in our knowledge, which is what science is trying to plug, and then forming hypotheses, which is what you start with, usually, not always, and then you do an experiment to disprove your hypothesis; my quick-and-dirty take on the scientific method. And basically for a large fraction of the sciences that are relatively well cooked, so quite a lot of chemistry for example, if you want to make a new plastic you can pretty much turn the handle on this. And I think this is true increasingly, where we can mechanize large parts of science, right the way through to verifying a whole bunch of the steps, and that's quite scary. But that's the bad news; the good news is the universe is really, really big, to quote Douglas Adams, and there's an awful lot of difficult problems, and in fact doing the
empirical studies to disprove, or, I hope not prove, but verify, maybe, that your hypothesis looks statistically plausible, can take a really, really long time, and lots of clever people as well. It seems humans do make intuitive leaps, and we haven't quite codified what an intuitive leap is; somebody who does could correct me on that, but I think there's a missing step. There are also problems which are computationally intractable; that doesn't mean there aren't solutions; this is a hardcore bit of old computer science due to Alan Turing; but it basically means there may be a solution, but you're not going to find it by one of these automatic means; you may find it by some weird mix of luck and leaps in the dark, and people seem to be better at that. So there is room; but yes, a lot of science can be fed through some machinery. Some of it is in fact drudgery; you know, staring at a petri dish counting the number of green and brown blobs is not fun; but some of it is actually quite fun, even if you spend your whole life and don't solve something. Quite often, along the way, you come up with some interim things which turn out to be useful for somebody else, which is another fun bit of science: the things you did that may be a dead end for one person turn out to be the beginning of something else. So I don't think we're there yet, but AIs are already helping with this a lot, in particular at the more theoretical end of things, where we can actually go through this theorem proving, and I think that's quite advanced now. There is actually, something you said in there, about how we don't quite know what these intuitive leaps are, these things that we might say are innately human; there is a question by Janus, which is: how far away are we from being able to render a sort of self-consciousness in a machine, and how necessary do the panel consider
the task is to develop a true artificial intelligence? I think we've kind of already answered that before, when we were talking; well, we may not all agree on that; maybe we should go around. I want to say something very quickly about the agenda problem: a driverless car already does set its own agenda; the only thing that we do is say, I want to go to, you know, the Southbank Centre, and it's meeting all kinds of goals and actualizing all kinds of things; it decides which side streets to go down, all that sort of thing. How is that different from us? Well, you know, maybe humans are a little different; with a lot of other species the main goal is to make sure that your genes go into the next generation, and then you have a whole lot of side goals that you pursue to do that. Humans are a little different because we also want our ideas to persist. But I don't know if that matters; the point is that because we build these things, we're responsible for them, and, as people have said already, we could make the choice to say, OK, we want the machine to choose a random goal for itself, but I don't think we should build a machine like that; but that's another question for us. Could I go around the panel very briefly; we've nearly run out of time; there's one from Hamish which is related again: what is the panel's opinion on an AI being given emotions and free will? I suppose the sense is: not only can we do it, but would we do it, or should we, if we could? Maybe Sabine first. So, I once had a beautiful pink flying robot that I ran into a tree, and it broke, so for a moment you're sad and you feel, ah, poor robot. You do give emotions to robots, because we anthropomorphize them, but they don't have emotions, and they don't have free will, and I think we're very, very far from anything that would have emotions or free will. And so part of me thinks that that shouldn't be the main question, if you're really focusing on what we need to develop in terms of frameworks, in terms
of public discussion, because it's much easier for the public to focus on that question than on the real questions that are the more relevant ones in the near term. I know Murray's got a really good answer, so I'll go before you and then you can do the better one, but I really disagree; I'm sorry to disagree with some of the people on the panel; on all these things, even about intuition. For example, Murray's example from the Lee Sedol match: you could call that intuition, when AlphaGo made the jump into a part of the space that no human happened to have wandered into before, and that goes back to this combinatorics thing I was trying to explain before. So Go is a more complicated game than chess, but even in chess, if you think of all the possible games; even a short game, not just moving the rook back and forth, but, you know, a 35-move game of chess; there are more of those than there are atoms in the universe, right. So the combinatorics is gigantic, and so things look like intuition. It's absolutely true that there are problems that we can't solve quickly, but the way we solve them is by having seven billion of us working on them together, and now we're using computers to solve them too; it's called concurrency; and so we are solving all kinds of problems; we're doing things like insight. We can even build in things kind of like emotions: we can build things in that say, OK, good things have been happening, I've accumulated that, OK, I'm going to be excited, I'm going to go out and try to do things; or, bad things have been happening, I seem to be damaged, OK, I'm going to be depressed, I'm going to hold back. We can build synthetic emotions like that, but that doesn't mean we're obligated to, and I think this goes back to some of the stuff we said earlier: I think we're able to build AI; we're not obliged to. And so, things like, for example, we can make it not unique; we can back it up in real time by wireless and make sure that if our drone
gets stuck in a tree, then, you know, it can burn; it can do whatever it wants; so you don't have to worry about rescuing it. Do you rescue the child, or the last set of Shakespeare, or the robot from the burning building? Well, you figure out the child and Shakespeare thing, you get outside the burning building, and the robot comes up and says, oh, by the way, how do you like my new body, and here's the bill from Apple, right; because basically it's got its brain backed up. And so I think there are a lot of very simple things we can do to keep AI from being an ethical conundrum. I wanted to pick up on the self-awareness question. So there are different senses in which we might build an AI or a robot that has a kind of self-awareness, but the problem is that whenever we use that kind of phrase, I think it conjures up the idea of something that, like in the Terminator franchise, supposedly Skynet, becomes self-aware, and that's when it wants to take over the world and wipe out humanity. So this notion of self-awareness, for us, seems to bring up ideas of selfishness and the need to protect yourself and accumulate resources and be greedy; but that's to anthropomorphize artificial intelligence far too much. There certainly are useful senses of self-awareness that subserve cognition and cognitive capability, that we could imagine, and that we do want to build into computers and robots. So, for example, a very simple notion of self-awareness is that the system that controls a robot might be aware, in some sense, of that robot's body and where it is in space, where the parts of the robot are in relation to each other, so that they don't collide with each other, and so on; and that's the basis, indeed, of self-awareness in animals as well: awareness of our own bodies and where we are in space. So that's an important thing you might indeed want to build into your robots and the AI systems that control robots;
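[Editor's aside: that minimal, bodily sense of self-awareness, knowing where your own parts are so they don't collide, can be made concrete in a few lines. This is a hedged sketch only, not anything the panel described: a planar three-link arm whose controller computes its joint positions with forward kinematics and flags poses where non-adjacent links come too close. The function names, link lengths and clearance value are all illustrative assumptions.]

```python
import math

def link_points(joint_angles, link_len=1.0):
    """Forward kinematics for a planar arm: returns the (x, y)
    position of each joint, starting from the base at the origin."""
    pts, x, y, theta = [(0.0, 0.0)], 0.0, 0.0, 0.0
    for a in joint_angles:
        theta += a
        x += link_len * math.cos(theta)
        y += link_len * math.sin(theta)
        pts.append((x, y))
    return pts

def seg_dist(p, q, r, s):
    """Approximate minimum distance between segments pq and rs,
    by sampling points on each segment against the other."""
    def pt_seg(pt, a, b):
        # distance from point pt to segment ab
        ax, ay = a; bx, by = b; px, py = pt
        dx, dy = bx - ax, by - ay
        t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) /
                         (dx * dx + dy * dy)))
        return math.hypot(px - (ax + t * dx), py - (ay + t * dy))
    on_pq = [(p[0] + k / 10 * (q[0] - p[0]), p[1] + k / 10 * (q[1] - p[1]))
             for k in range(11)]
    on_rs = [(r[0] + k / 10 * (s[0] - r[0]), r[1] + k / 10 * (s[1] - r[1]))
             for k in range(11)]
    return min(min(pt_seg(pt, r, s) for pt in on_pq),
               min(pt_seg(pt, p, q) for pt in on_rs))

def self_collision(joint_angles, clearance=0.1):
    """True if any two non-adjacent links come closer than `clearance`."""
    pts = link_points(joint_angles)
    links = list(zip(pts, pts[1:]))
    for i in range(len(links)):
        for j in range(i + 2, len(links)):  # skip adjacent links
            if seg_dist(*links[i], *links[j]) < clearance:
                return True
    return False
```

A fully extended arm passes the check, while a pose folded back on itself is rejected; a controller with this check "knows" its own body well enough to refuse poses that would make it hit itself, which is the modest sense of self-awareness being discussed.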
then you can build on that. There's also a sense of self-awareness in which you might want a system that is aware of its own internal reasoning processes and can critique those reasoning processes; so, for example, with a machine learning system you might want a meta level of machine learning that learns how to learn, by judging how well it has learned in the past and improving on that. So there are these different senses of self-awareness that are very, very important, but we must be careful not to anthropomorphize the concept too much in the context of computers and robots. Well, sadly we've run out of time; we're at the final question, which is in some sense a trivial question, but it will give an interesting insight into your views on AI and machine learning. It comes from Laurie, and we're going to ask everybody on the panel in turn, so we'll start over there. The question is: which film or television series do you think most accurately depicts the future of AI? I'm going to give two answers, which are both sort of cop-outs. The first one is the film Iron Man; the reason I chose it is because, as you've seen if you've seen the film, Iron Man has an absolutely fantastic personal assistant; as an aside, Hollywood gives the personal assistant a posh English accent, in the way it does. And we've already talked about, I think Murray made the point, that personal assistants are one area where machine learning will have a big impact relatively soon; Mark Zuckerberg, who you'll all have heard of, has a major project making a system a bit like Jarvis, Iron Man's AI. And my other example, because I think it's a great film, is Ex Machina; I really enjoyed it; it's great that Murray was involved. Not that I think it's realistic, but there is the fact that the guy who builds the extraordinary artificial intelligence starts off by hoovering up, effectively, all the data that's available in the world, and it really makes the point very well; obviously
with great scientific advice, it makes the point very well that data is the key, I think, to success in this field. I loved Robot & Frank. I loved Robot & Frank because the idea is that you have a robot, which is still quite limited, that's there to help an elderly person with dementia stay in their home longer, essentially; and this person is a burglar, and so he very quickly understands the limitations of the technology and somewhat tricks the robot into thinking that it should become his sidekick, so that he can go back to his old activities of, essentially, stealing. And I really like this example because, within its limitations, it understands what it's for, and what they create over the film is a partnership that, even though the goal is a little bit dubious, is really the way you'd want technology and humans to work hand in hand. I would go with Humans; I quite like Humans for several reasons. One is that it is a TV series made here, not a Hollywood movie. But two, it depicts a very strong ambiguity about which direction people and replicants, androids, robots, whatever you want to call them, would prefer to go; for some people in it, it's not clear at some points whether they are one or the other, or which they would prefer to be, one side or the other, and that's a good exploration of the ethics. But, absolutely crucially, instead of the artificial beings being lonely AIs, they're social, and humans are social, and the only way you're going to make something that resembles anything like that is to deal with the fact that humans cope with the biggest social net of any of the primates: Dunbar's number, 150-plus; I mean, people have 1,500 friends on Facebook, and they cope with it because we have language and we can form models of other people's behaviour. And that ability is part of not just being self-aware but being mutually aware: you can have me tell you about a Shakespeare play, and I'd be telling you about Shakespeare telling us about people who are five or six degrees of separation from
any of us; that would mean you're understanding why Iago did to Othello what he did, because Shakespeare told me, in my seeing the play, and I explained the plot to you. I haven't done it very well just now; I could if you gave me a minute more, but I won't. But that's a human capability, and if we made an artificial intelligence that did that, it would not be a lonely AI, and it would almost certainly be indistinguishable from ours except in very subtle ways, and we might prefer to be it. OK, I'm going to be short, but I'm going to cheat, because I'm going to give you my favourite book, which is not yet a movie, which is Ancillary Justice; it's a really nice portrayal of what it is to be in a situation where humans no longer have any privacy whatsoever and the AI and the humans are interacting. However, it doesn't meet the criteria of the question, because I don't think, I hope, that's not where we go. So I think the best one, where I actually think it's really realistic, is a slightly obscure movie called Moon. I don't know if you've seen it, but there's a robot in it that absolutely looks like the kind of robot you're going to be stuck with: it's, like, some crappy thing with writing on it and yellow stickies and whatever, and it has as its emotional interface this face that does, like, three pictures; there are, like, three smiley faces it can do; it can look concerned, or it can smile, or it can be neutral. And yet you get incredibly attached to it, and you recognize that its entire program is to protect the people; and so even though you know it's a machine, and you know it's a simple machine, and there's not much complicated going on, you can still love it, and it still has a huge impact on the plot of the movie. So I think that's sort of the right future, and quite accurate. Yeah, I think a lot of the AI that we see in science fiction is necessarily very futuristic, and that's the role it takes in the movies, typically. So I
don't think much of it; we don't really know what the AI in 50 years' time is really going to be like, so I wouldn't like to say what's realistic and what's not. But I can certainly say which movies are favourites; of course Ex Machina is a favourite, because I particularly like the films that raise the largest philosophical questions, and I think Ex Machina does a very good job of that, in relation to consciousness and intelligence and so on; and it also is a British film, by the way, I should add, not a Hollywood one. But if I can move away from my own personal interest, then I love Ghost in the Shell; the manga, and the two animated films of Ghost in the Shell, I think, were wonderful. I love the whole cyberpunk feel of those films, and they also raise a lot of really deep philosophical questions, about consciousness and memory and so on, and I'll be very interested to see what the live-action version is going to be like, which should be released this year. Well, thank you; apologetically, we're ten minutes over, but I'm sure you'd like to thank the panel again for a fascinating discussion; so thank you again, Sabine, Murray and everyone. [Applause] And just very quickly, let's thank Mirko and the Imperial robotics group for the drone and the wonderful presentation, and the Royal Society and the Southbank Centre. I should say also that this whole thing has been filmed; it was streamed live, but it's going to be available on the Royal Society website and on the Royal Society YouTube page, so you can go back and watch it all again if you want. Thank you all for coming and for your excellent questions; thank you very much, good night. [Music] [Applause] [Music] [Applause]
Info
Channel: The Artificial Intelligence Channel
Views: 290,658
Rating: 4.6003261 out of 5
Keywords: singularity, transhumanism, ai, artificial intelligence, deep learning, machine learning, immortality, anti aging
Id: YvEIEXE_NL0
Length: 104min 7sec (6247 seconds)
Published: Wed Sep 27 2017