Q&A: Augmented Intelligence

Captions
[Music] I've really enjoyed these different perspectives on us, our brains, and how we work with machines, and I'd love to open us up with quite a cosmic question. A friend of mine sent me a link recently saying that researchers at MIT had developed a brain-scanning machine, a bit more sophisticated than the one you were wearing there, James, and that if two people wore them, one machine would interpret the text that someone was thinking about, and the person wearing the other headset would then be able to hear and ingest that text; you'd have to focus really hard on the words. I thought, wow, my god, they've really done this. It took me about 45 minutes to realize that it was the first of April when the friend had sent it, and that it was in the Daily Mail, which should also have given me pause. It struck me that maybe I'm just very credulous, but it didn't seem far off in the world of science fiction. So my slightly cosmic question to open up is: if we carry on on the trajectory we're moving along at the moment, how do we think our lives as humans will be different in, let's say, ten years' time? Maybe we go in order along here.

I think there are going to be huge advances. It's hard to give a good sense of what ten years will look like, but what I imagine is that the technology you've described probably will exist. One of the areas I'm most interested in is brain-computer interfacing, so it's really exciting to see James's technology, and we do have the technology now where you can fly drones with your mind, essentially, and thought-controlled limbs already exist. So I would project a closer merge between humans and machines, and even direct lines of communication with the brain; what that means and what that will look like is hard to predict, I think.

Yeah, and you made some really good points in your presentation. One thing that really stuck with me was this idea that we might see technology plugging the gap for people, and I really like that idea. I really hope that in ten years' time we see a world where we're working more effectively and meaningfully with technology. One of the challenges we've got at the moment is that the rate of development and change has been so rapid that we've not really established the norms we need to make the best use of these really powerful tools, so I'd hope that in ten years' time we've wrestled through some of those tensions and challenges and actually found a way to work more meaningfully, more effectively, more efficiently with technology, to benefit everybody. Now, that's quite utopian; I think the reality is going to be neither utopian nor dystopian, and we'll be somewhere in the middle. But if our technology could plug some of those gaps in our humanity, I think that would be fantastic.

What kind of algorithms will we be writing in ten years?

In terms of algorithms, I think it's great that we believe we're going to see fully autonomous algorithms that can solve really difficult problems, and the reason we imagine this future is that the technology we have now is making it possible. But that doesn't mean we're going to be there in ten years. It will be possible one day; what we will probably see in the next ten years is machine learning and these kinds of technologies being applied at scale, which we don't see yet, and more of the tasks that people do being taken over by machines, leaving more time to work on the harder cognitive tasks. At the same time, of course, this can be a big change, so we have to absorb that change as a society. I think we will definitely see some of the more mind-blowing discoveries eventually, but in the next ten years, scaling up is probably what we will focus on.
My other thought is how we can apply this in daily life. James, you probably touched on this most directly in your talk. My question for all of you is: if we as humans could take an action, or a couple of actions, to benefit from augmented intelligence as much as we can, what would that be? To give you a moment to think, I'll share mine. I'm a big music lover, and I recently passed my grade one piano exam, as I'm learning piano with my kids, and one of the investments of time I've made over the last few years has been to train both Spotify and Apple Music to understand what music I like. They now suggest music to me that I really, really love, and I have this incredible sensation of: where have you been all my life? How did this algorithm know that this song was going to be so meaningful for me? So mine is where I felt most tangibly the real human benefit of this coming together of the human and the machine: the human training the machine, and the machine then being able to go off and search almost infinite spaces, in this case of music. What kinds of things could we be doing to bring these technologies into our lives?

Yeah, I think there are more and more opportunities in terms of health, and applications are being developed that people can make use of, that can either tell you about yourself or very consciously free up time for yourself. My personal example is applications like 1Password: a chore of mine used to be remembering all the passwords we need, and little tricks like that end up freeing up a lot of your space. Again, I think what we can achieve here is to allow ourselves to focus on what we're good at and what's meaningful for us, and a lot of that can be things like building relationships, spending time actually with people, because that is something we're very good at: social cognition, emotional cognition, and building things like empathy and understanding. Those are things that, even over a ten-year time gap, it doesn't seem like a computer is going to be able to do. So I think all the different ways in which you can mindfully offload can be helpful ways of freeing up your own time.

James, if I can ask you to build on the tips you've already given us from your talk, what other tips do you have in your pocket?

I sometimes wonder whether we'll look back on this period of human development as perhaps the most disruptive time in human history since the emergence of language. Imagine what it was like when language emerged, and the chaos that happened; but during that time, it appears that people started to form structures and norms, and eventually we ended up with grammar in language, these agreed ways of interacting.
So one of the things it maybe encourages us to do, individually, in families, in groups and in societies, is to think about how we create a grammar for a digital age, and what that would look like. In grammar, we define precise roles for different words, different structures, and different forms of punctuation, for example, and I think one of the challenges at the moment is that we don't necessarily use our tools in very precise ways. So if I was going to give one tip, it would be about being more mindful and precise about which tools we use and when we use them, because at the moment these things, like the smartphone in my pocket, just leak into every aspect of our lives. I hope we start to move towards a phase where we get the best out of them, rather than them getting the best out of us. So that's another thing to muse on.

And Martha?

I'll give a different perspective, which is that in the future we're going to interact a lot more with these algorithms, so I think it's good for us to obtain a better understanding of them, and, as we interact with them more, to start to think about what their place in our lives is and which parts of what we do on a daily basis we could automate. That's the first part: in a new business, for example, which are the areas where we don't really need to spend a lot of time, and where we can use artificial intelligence to let us redirect attention to something more interesting or more valuable? But at the same time, we should also think about how this shapes the environment we operate in, and how we can make sure that we operate safely, that the change we're making is predictable, and that it is in the direction we wanted.

Exactly. So I'll open up for questions from the floor. Do we have mics, or should I... yes, we have brave roving microphone people. Let's start on the front row, and then we'll go one back.

Thank you to all the speakers, really interesting topics. I was just thinking about the idea of augmenting your intelligence, and to an extent we already do that using other people: other people can augment our skills in certain ways. I was wondering whether you think we might make the same kinds of mistakes when we augment with machines as we've made in the way we organize our groups, the way we rely on the wrong people or maybe over-rely on certain people, or whether we might learn from how we've done it with humans, bypass that stage, and go to a much more optimal state from the beginning.

Great question, thank you. Karina, maybe you could pick up first on the idea of humans as the extended brain, and then, Martha, I'll turn to you for how we can learn when we should trust or distrust machines.

Yeah, great question. There is this idea of a socially extended mind. Think about a highly interdependent couple: one person knows everybody's name in their social group, the other one handles all the planning, and together they're able to navigate this complex social world we find ourselves in. So I think you're right that we do this all the time already with other people, and that it sometimes goes really wrong.
In cases like borderline personality disorder, for example, one of the symptoms is that people become overly reliant and form very close bonds too quickly, and then when that person leaves their life, they're left with a feeling of great loss. You could see something really similar happening with machines, I think: we trust too quickly, and we think they can handle and give us more than they can, both emotionally and maybe cognitively. Optimistically, I think we should try to learn from this past experience. Maybe what's going to be key here is getting the right type of interdisciplinary work, so that people are informed of the limitations of the technologies they're using, including from a social-science perspective; right now there's not always enough cross-disciplinary talk with the people developing these technologies, and I think that's a worthwhile pursuit.

Martha, other than the obvious case, when an autonomous vehicle hits a pedestrian or something like that, how can we tell when we should be getting suspicious?

Maybe we can think about two things. The first is how the machine performs, and being able to interpret what that means. As with any medical test, we want to know how often it gets it right and how often it gets it wrong, and we also want to know what we have to lose by an error, because what we have to lose is not the same everywhere. If Netflix doesn't recommend you the right movie, you have nothing to lose, but for a self-driving car, everything is at stake. So the importance of what is at stake in a given application gives us an idea of when we really need to put a very robust fail-safe mechanism into the machine. And when it comes to trust, I think that, at least for a while, especially in applications where the stakes are high, we will probably be looking at enforcing some form of transparency and explainability in these machines: essentially trying to understand why they make the predictions they make, and then having them reviewed by a human. This may not be necessary for everything, but until we fully understand how these machines operate and what exactly is at stake, we will probably have to do that.

There was a question just in the second row.

I'd like to express a word of warning; I think the situation is almost apocalyptic, actually. Already, when we interact with any computer system, which may mean, as we saw earlier, even a system that can talk to us without our being aware of it, we have to form a model of it, the way we form models of other people in order to interact with them, which means we have to understand it. And we're already being forced to think in ways that are not human ways. Just last night, in the early hours of the morning, I was trying to open a new corporate bank account, and I found it almost incredibly difficult to do, because I couldn't understand the intention the system had behind its questions. I would answer them in a natural way, and then find that the computer actually meant something entirely different, so I had to model my responses and adjust them according to what the system was doing. And that's just a pre-programmed system.
Imagine when we have AIs: to communicate with AIs effectively, we'll have to be able to model them, and currently AIs are inscrutable. That's one of the problems you were talking about, making them understandable. I think we're going to lose our humanity, to be honest; I think we're going to lose ourselves if we're not very careful, and I think we need to put a stop to a lot of this right now, otherwise catastrophe isn't far down the road. I'm talking about stopping until we've solved these problems. For instance, I think we need to stop using AIs to control cars right now, because those systems are inscrutable: if there's an accident, we don't know why, and there are all kinds of issues there. Until we've got AI systems that can explain to us, in our terms, why they're making certain decisions, they will remain inscrutable, and if we adopt AIs that are inscrutable, how on earth can we model them in our minds so that we can interact with them effectively? They're going to be doing really crazy things, and we're not going to know why. So I think we need to be incredibly cautious, and we need to put a stop to a lot of the application of this until we've got these systems way, way better than they are now.

Maybe I can translate that into two questions. One question for Karina would be: we touched on the Apple and FBI case as one where legislation, regulation, or policies from a large corporate organization drew a boundary. What kinds of boundaries like that might we see? Uber have stopped their experiments for now, but other carmakers are obviously still experimenting. And then, James, what kinds of boundaries can we draw for ourselves? So maybe Karina first.

Yeah, I agree that the technology is advancing so quickly that our legal systems and our normative systems are playing catch-up. The law is typically reactive, so in a sense that's nothing new, but whether it will react fast enough, and whether policymakers can stay in front of things, is a big question. In the case of self-driving cars, for example, there's going to be a trade-off between things like safety and understanding. We can imagine that self-driving cars will have accidents a fraction of the time that human drivers do, or that a computer algorithm can diagnose diseases with higher accuracy than a human doctor, but that the algorithm won't be able to explain how it came up with its result. Say it has an accuracy of about 98%, whereas a human doctor has an accuracy of 80%, but the doctor can give you a reason why she thought the outcome was one or the other: which would you prefer? I might prefer the algorithm, even if it's got a 2% chance of being wrong and can never explain itself in a way that I'll understand. So there's sometimes a trade-off between competing virtues, or competing demands, and I think the same is true in the case of self-driving cars: they might be a lot safer than us, but those few times when there are accidents, we'll never understand why, and that can be highly unsatisfying for a person who's been killed, and for their family. But there's, I think, a reasonable case to be made that the trade-off has to be there; it's something to consider.
James, how can we impose boundaries on ourselves in terms of how we're interacting with technology?

I think it's a really good question, and it's something I struggle with myself, being a fairly heavy technology user. To go back to that grammar metaphor, or analogy, I think one of the things we need is a form of punctuation with technology: to pause for clarity, regularly. The trap we've often fallen into is that, either intentionally or accidentally, we've created these highly addictive systems which continually improve and which are optimizing for harvesting our attention, whether we like it or not. So, as human beings, we need to rediscover this amazing capacity we have to pause between our perception of a situation and our action, to engage those executive functions, seek some clarity, and be more intentional about what we do next. One of the things about human beings is that we do have this quite long gap between perception and action. My wife and I took our dog for a walk in the Lake District a couple of months ago; the dog saw a squirrel and was gone. But we have the luxury that we don't have to chase every squirrel we see in the digital forest, though often we do. Just taking a step back, pausing for clarity, and getting into the habit of that, or rediscovering the habit of it, I think could help us be more mindful, more intentional, and have better boundaries around how we interact with these systems.

Other questions? There's one right at the back there, and then we'll go to the lady in glasses. It's a good form of exercise for our microphone runners. Yes, exactly.

There was a kind of recurring theme about task automation in your talks, and how one of the huge benefits of AI in general is that it will free up our time to focus less on spreadsheets and email and boring rubbish like that, and more on the big issues in the world, and the arts, and creativity. But our economy, and our culture in general, doesn't really incentivize those things very well at the moment, and a lot of people have jobs that are pretty much made up of email and spreadsheets and things like that, and they're going to be very worried about computers coming in and taking their jobs. Do you think that kind of fear is going to be a big inhibitor of the adoption and development of this technology, at least for the short-term future?

Fantastic question, and a very hot topic. I was reading just recently that the number one employer of adult males in the US who didn't go to college is long-distance truck driving, and that obviously dovetails very neatly with one of the principal areas of research in these fields. Maybe Karina first: we've been at this for a few thousand years now and we've come through, so is it different this time, and if so, why?

Yeah, I think it's different in that this shift could happen very rapidly, so we could see it changing industries more quickly than an individual human can change their education and what they can offer to the job market. And given that the kinds of jobs that are going to be opening up are highly skilled jobs, we're going to have a gap between what is in demand and what the skill set of the labor market is. So I do think that's a legitimate concern.
Now, the question was phrased something like, "do you think this fear is going to inhibit that change?", and my concern is that the fear alone won't, so the change will happen, because it's not always the people who are at risk of losing their jobs who will be making those decisions. If we look at the big picture and zoom out, I think there's the possibility that things look very nice: there's a nice outcome which is achievable. But I do think that our systems and governments need to step in to make that possible future the actual future.

Yeah. Martha, please.

One perspective is that the transformation is going to be quite big, but at the same time it will generate a different type of economy and different types of jobs, because as we try to automate all these different tasks, machine learning itself is going to become, to an extent, commoditized, and it will require a lot more people with the skills to build these systems. So in the short term it might even grow the economy, and then in the longer term there is that transformational capacity. But it is a debate that society in general needs to have: how do we want to absorb the change, what do jobs really mean, and how do people generate income?

We've debated these topics a lot within QuantumBlack, as you can imagine, and we have a lot of debate internally about ethics. I also spent a long weekend with a group of CEOs in New Zealand recently where this was the sole topic of conversation, and because New Zealand is such a small country, the 20 or so people in the room represented most of the government departments and most of the economy of New Zealand. All of them, I think, were admirably forthright in facing up to their responsibilities, and there are various perspectives on these questions. Where we landed as a group, I think, was that each of these solutions individually can seem glib. "People can retrain": great, so all of the truck drivers are going to retrain as machine learning engineers? That's a pretty high bar. Equally, "we can automate away the boring stuff so that people can focus on the creative stuff": we did a project recently, for example, where a large group of people was classifying patients' negative responses to drugs; the concerns would come in in the form of text and then have to be classified, and so on, and a lot of that could be automated. Actually, all of those people kept their jobs, and had much better jobs afterwards, because they were doing much more creative work. And there's also a lot of discussion about redistribution: a lot of TED talks and debate about things like a basic income, and about how we can redistribute the incredible concentration of capital and value that's accruing in a smaller number of technology companies. So each of those individually can seem glib, but if we can navigate our way through some combination of all of them as a society, and, James, to your point about grammar, if we can find the right frameworks and the right language to think about this, then maybe we have a hope. The next question was from the lady in the glasses; can we bring the microphone back on its epic journey? And then we'll just have time, I think, for probably one or two more questions after this.
One of the things that resonated with me most was the theme around this requiring humans to be at our best, at our most human, and the things that struck me were our ability to make judgments and use our intuition. But if you think about the tools that we're now being asked to use to make judgments with, the tools are very different: black boxes with millions of data sets behind them. And the thought that struck me was that surely this will trigger different things in us when it comes to making judgment calls and using intuition; will there be odd kinds of unconscious biases, or second-guessing of ourselves? So my question is: when you look at humans being at their best, and some of those qualities, what challenges are these tools going to put on those qualities, and how do we need to tweak them, or be wary of things we're wired to do, to use these tools in the right ways?

James, if we turn first to you: how can we be at our best, in terms not of driving a race car around the track as fast as it can go, but at our best in terms of intuition?

I think there are a number of layers to that question. One of the layers is temporal: what technology might look like in the future. I think we will probably look back at how we interact with these devices and technologies now and say how quaint and archaic it was, because in one sense the smartphone is a pretty rubbish form; we're forced to carry this block with us, and to look down at it and focus on it, and I hope that in the future we start to discover more natural interfaces that help us to work in more human ways. But in the meantime, I think some of it is about challenging some of the ways that we use these tools, experimenting with old ways again, and maybe coming up with new ones. So, very practically, one of the things I try to do is get away from my devices and back to pen and paper at times, and to interact with people in more natural contexts. I do some coding and some fairly boring data-analytic tasks as part of my research, some of which I can automate, but one of the things I do on a rare day at home is make sure I intersperse that with natural activities: time talking to people, going for walks and thinking about problems away from the device and the technology before I go back and interact with it. I think that often we get stuck in this artificial world and never take a step back to consider how we're interacting with it and whether that's optimal, and we never take the chance to experiment with something new. So I'd encourage everybody to do their own self-experiments, challenge some of these norms, and see what they come up with.

I was in Silicon Valley recently, and there's a bit of a theme among the startups around this almost trademarked term "deep work"; Cal Newport is trying to brand this. One of the deep-work startups was talking about the shower moment: that moment in the morning in the shower when you sort of half think about something and half don't, and you've had time to decompress, and that's when the insight comes. They were asking how we could recreate that shower moment at work, and it was really interpreted as: how can we put more showers into offices? Because that will lead to more insight.
I think it's kind of good, in a way. There's a really interesting thing that happens when we go task-negative: a distinct network of interconnected brain regions called the default mode network seems to activate when we're in a state of what we call wakeful rest, when we're not focused on anything. When was the last time you were in a state of wakeful rest? It doesn't happen very often, and that shower moment, I think, is the wakeful rest that a lot of us instinctively crave. Maybe we don't need showers in our offices, but we do need to find some ways to rediscover that.

Let's go here and then here. Thanks, and sorry, I'm conscious we won't have time for everyone.

Augmented intelligence poses a huge challenge for accountability. When an augmented system, a human and a machine, makes a harmful decision, who are we supposed to blame? This is very hard to answer, because if you take the human side, there are many humans involved in the decision process: there's the user, there's the designer, there's the person who provided the training set, et cetera. And if we go to the machine side, it gets even more awkward and very weird, because, as the machines get smarter and more involved in the process, can we blame them for the decisions they took? What does that even mean? Can we go back through a chain to a developer, or something? There's this basic concept of "don't give kids guns", right, but how can we actually get at this? Maybe Karina first.

Yeah, it's interesting. Right now we don't really have that many, I would argue maybe not any at all, truly autonomous machines, so there's almost always a human in the loop. In the case you described we have many humans in the loop, which makes it hard in the way the standard cases in moral philosophy are hard, where you have an overdetermination of something bad and the question is what the real cause is, and to whom we attribute responsibility; say we have two assassins that shoot at the same time. So, a couple of things. I think a legitimate case could be made for holding a machine responsible, although that also raises serious questions. A case might be made when, let's say, a team of researchers develops a technology which has for the most part good outputs, but somehow something goes wrong at a certain point, and it's hard to hold the team responsible because the algorithm has outcomes that are truly unpredictable: they really can't predict what it's going to do, and that suggests that the algorithm might have a kind of agency which is truly independent of its creators. So you might ask how we can hold the creators responsible in a case like that. The problem, of course, is that holding a machine responsible is difficult, because machines don't suffer, they don't feel guilt, they don't feel shame; do we just ban the algorithm? It's not really clear what holding it responsible would look like. One thing philosophers are interested in is looking at other, non-traditional forms of agency, such as corporations: there are ways in which we can punish corporations even though they don't have the feeling of suffering, or maybe of guilt, that humans do. So a new paradigm may have to emerge for how we hold these things responsible, and what that looks like I can't really say.
There's an incredibly emotional scene in the movie 2001, where he's gradually pushing down the blocks that give HAL 9000 his mind. I'm just a little bit curious about how we're supposed to deal with the difference, at a taxonomical level, of two different ontological kinds coming together on a metaphysical level. Are we talking about reconstructing our conception of what it means to be a human? Are we supposed to introduce a third kind, the machine-extended human, as opposed to machine versus human? Or are we supposed to keep the binary of machine and human, where human-assisted humans replace non-human-assisted humans? Are we moving from a binary to maybe a tertiary, or to a spectrum? Because if we're talking about machine versus human as separate kinds, what is this human-assisted third? Is it a third, or is it an amped-up version of the already existent second, so that we keep it as a binary, zero and one, human and software? I don't know.

I'll shoot for some rapid-fire answers from each of our panel, and then maybe that's a good note to end on. Are we going to be living in a binary world in the future, a tertiary world, or a spectrum, maybe a multi-dimensional spectrum of richness?

I think there is a spectrum. Right now, artificial intelligence as we know it is good at a limited set of activities, and in those activities it is conceivable that one day it might fully replace humans. But for the majority of tasks that we know, either machines are partially better than humans or humans are partially better than machines, so there we see synergies; and then there are a lot of other tasks where humans are essentially the ones responsible, but they might be helped by an intelligent device providing them with more data and more information to be more effective.

I hope that we see a spectrum, and I hope it's a dense spectrum, because my fear is that we'll just see an increasing polarization into the assisted elites and a kind of underclass. I hope we continue to see a great degree of diversity across the spectrum, from people who choose not to engage fully with technology to others who want chips implanted everywhere, and hopefully, if it's dense enough, there might be a chance that we can understand each other a little bit better.

I'll opt for the spectrum too. I think "man versus machine" needs to be put to rest. The philosopher I mentioned in my talk, Andy Clark, has a book titled Natural-Born Cyborgs, and he thinks we're all cyborgs already, even if the devices aren't implanted in us. So there's already a picture emerging of that, whether the devices are implanted or not, or whether they're just worn on our heads, like James showed us: we already have high degrees of interactivity with our various devices, and I think that's going to develop into a whole colorful array of different ways of being.

Thank you, everyone, for coming along tonight. We've had a pretty extraordinary race through history, from Socrates a few thousand years ago complaining about the kids of today with their pens and their paper, all the way through to realizing that we are already cyborgs, and racing ahead to an unknown future that could be anywhere between apocalypse and a utopian spectrum of coexistence. So please join me in thanking our speakers, and thank you. [Applause]
Info
Channel: The Royal Institution
Views: 4,302
Rating: 4.5955057 out of 5
Keywords: Ri, Royal Institution, augmented intelligence, lecture, psychology, robots, artificial intelligence
Id: WNHy6Fqc4xg
Length: 39min 22sec (2362 seconds)
Published: Wed Jun 27 2018