Artificial Intelligence: An Inhuman Future? | Full Panel Discussion | Oxford Union

Captions
[Music] [Music] [Applause] so then dr. Morris you want to start off by saying a few words No thank you very much should I should I stand or should that whatever you whatever you think thank you very much few months ago I went to an unnamed country the reason is because I named the country and the Ambassador came to have a special visit in my office of that country so it was four of us it was bongani it was Peter Peter Vail Oscar fan here than in me so at the airport as we enter in this country you put your passport on on some piece of machine and you have a camera that is looking at you so as a person was leading the delegation was the first one to go I did exactly that it could not recognize me it denied me accent access then came Peter Vail it avails ancestry is actually English did the same put this passport on this machine the camera looks at him and allows him then came from Ghana no longer please don't try to pronounce you are going to actually chew your tongue and then it could not recognize him with deny him access then Oscar phone here then of course when Hilton is a Dutch name so you can actually see the ancestry of Moscow it allowed him access of course this is a machine learning system it verifies the picture on the passport that it is the same as the picture that it is seen so I was wondering why is this device discriminating against me and Volga no longer and actually allowing access for Peter Vail and Oscar fannia the reason for that is because the data that was used was collected mainly from Europe North America and Asia not from Ampang gain where bongani comes from or due to new air come from and why is that the case the reason for that is because the economic distribution of resources are such that you have more resources at Cambridge than in my village in Indu Tony and this is really what is happening we are having these devices that have been designed there have been fed with data and this data is actually reflecting the economic patterns of our society and the result of that is that Wang Danny and I are denied access Peter and Oscar given access and I come from South Africa so I know what it actually means you know it actually means that you know the behavior of machines is coming to the behavior of people so what is actually happening is that these devices even though they are more rational than a human being they're actually very very rational they are consistent they make decisions that are consistent but they actually are biased so how do we design this machine so that they can reflect the future as we decide rather than the future that the present as it exists and all its a symmetries and and contradictions and so on and so forth that is really my take on artificial intelligence I'm reminded of a famous British author Charles Dickens a child became says this The Tale of Two Cities you know hope against despair and I think artificial intelligence is gonna give us a lot of hope it's going to solve many many problems very important problems in medical sciences but it is also going to to give us some grief it certainly gave me some griefs when I went to that town a particular country and I'm not going to reveal it I was that there that was at the roads house and there I revealed it by mistake thank you very much [Applause] hello everybody good evening thank you very much for coming here tonight to hear us speak and talk and thank you so much for the invite so I've been working for a long time to help computers think about the world more like people and less like 
I first got into this particular area of AI because I run a very large crowd-sourced resource — in fact the first use of crowdsourcing — called ConceptNet, at MIT. ConceptNet can be used for a lot of different things; it's largely created through crowdsourcing and through reading the web. Years ago, back when this kind of thing was new, we were using it to look at restaurant reviews near the university — and if there's one thing that really gets people interested, it's whether the AI can discover where we should go to lunch. But we noticed something really interesting really fast: the AI, using its ideas of how the world works, was more likely to recommend certain types of ethnic restaurants than others. That started us down the path of looking at bias in natural language processing, and at how to remove it, as best we can, from the systems that we all use and program on every day.

I also work in computational creativity, so my goal is to help computers think, act and understand context in ways that are much more human-like than what we're doing today. There have been a lot of advances in my field — deep learning and all these things you've heard about have really changed the way we think about the world and the way AI can work — but that's not really going to be enough. If we talk about the sort of goals we have right now and the limitations of deep learning, we're really going to need something new and something different in order to jump over those hurdles, and I hope we'll explore some of that through the questions today. We need to do it because, for AIs to achieve goals like being able to explain their actions and to make more unbiased decisions, we're going to need a deeper type of AI than what we're creating. So thank you all so much, and we'll talk as we go through this panel.

Thank you very much — those were great remarks. I'm so pleased to be here. I'm also quite pleased that it's not a debate: I don't have to wear a tuxedo, I don't have to make a perfunctory joke at the beginning at the expense of Cambridge, and I don't have to treat my fellow speakers as adversaries, with winners and losers voted on at the end. The issue of bias has come up twice, and it's a very relevant one, because we're just at the outset of the AI era — we're at the Aristotelian moment, if you will, emerging from a dark period. Sure, AI has been around for 60 years, but what we called AI in 1956, when the term was coined at John McCarthy's famous Dartmouth seminar, and what we use the term to mean today, are radically different things. Before, we tried to create rules; now we use math, and it's doing far better — we can actually achieve it, and it's not as brittle as before. It's so important to identify these biases at the outset, because if we don't get the foundation right, you can extrapolate and see the mess that's going to be created: when AI fails, it will fail catastrophically, but when it works, it works really, really well. So I'm the optimist on this panel — not to say the others aren't optimistic.
Although I think it's really important that we recognize these biases, and we can see them today as real impediments, we're going to get over them. Not entirely — we'll always have problems, but we always have problems with biased statistics; that's almost the first thing you learn in class. There will be a huge community working on these problems, so we should focus on the benefits of how AI works, not on all of its shortcomings, even though it's easier, and in our comfort zone, to talk about those shortcomings. In fact it's almost a malaise of ours: if we talk purely about the huge benefits, it looks like we're uncritical of the technology. I don't think that's right. There is so much to cheer about in the technology that a balanced way of looking at it is to glorify what's special about it while being mindful of the criticisms.

But I want to conclude my introductory remarks, and start off the panel, by talking about the two curses of AI: the first is privacy and the second is explainability. Why are these the two curses of AI? Because in order to get the benefits of AI we have to overcome these two stumbling blocks. In the case of privacy, the techniques that are really showing promise, like deep learning, only work when you can use all of the data and make no presumptions about what data is relevant or not, because the algorithm can actually surface the relevant variables and covariances that matter. The problem is that privacy law, as it's construed today, is very hostile to this approach — I can go into more detail in the panel about exactly how, and about how we may need to loosen privacy law in order to get the benefits of AI. The second is explainability. We know these techniques work — they work better than traditional statistical approaches, in which we have a model we've actually thought through in advance — because we can measure them against ground truth and validate them. However, we don't know why they work; Catherine mentioned that as well. And the problem is that we can get explainability a bit, but not entirely. So what do we do as a society if we know we can use a high-performance algorithm only by giving up on explainability? We know we can better diagnose someone for a disease, but we don't know why a given diagnosis was made. Do we accept a lower-performance algorithm — a non-AI, or not-full-AI — so that we can have explainability, knowing that some people will be mischaracterized, and that some people may die who might not otherwise have died, because we wanted to be able to explain something? It's going to be a very difficult decision. I'll leave it at that, and I look forward to a great, spirited conversation.

Thank you all for those introductions. My question to kick this discussion off: most artificial intelligence today is based on parameters largely unchanged since the 1950s — namely deep learning neural networks that use large data sets to create generalizations, as we just discussed — and all of these neural networks are supervised: programmed and set up to learn. So do you think we're beginning to reach a saturation point in what this type of artificial intelligence can achieve?

I think one of the really interesting things about this question is what deep learning actually is. There are a lot of people in the audience who have probably been hearing the buzzword quite a bit and really wondering what it means.
So I'm going to start with a really brief explanation. Deep learning is pattern matching. We people are very good at understanding the world because we learn patterns, and if you take that ability to learn patterns and drive it out to its conclusion, what you get is deep learning. It's a great tool, something we've been working toward for a long time; if you look at the mathematics behind deep learning and behind so many different algorithms, we've been heading in this direction for a while, and we're getting closer and closer. But when we think about real intelligence, we're going to need a lot more than pattern matching. There has been such focus on the amazing short-term gains we can get with deep learning — the low-hanging fruit — that we haven't spent as much time thinking about how to get to the next level. My advisor at MIT was Marvin Minsky, and when he looked at these problems, one of the things he thought about was low-hanging fruit versus high-hanging fruit. If we want to think about how to get AI to the next level, and make sure we don't run into the roadblocks ahead, we really need to start looking at the high-hanging fruit — and part of that is understanding limitations. Deep learning requires a lot of data. Look at the wonderful StarCraft result we saw in the past week: the amount of money it would take for someone in this audience to buy the compute needed to train that system is over two million dollars. If every time we need to solve a problem we need to spend that kind of money on compute, that's not going to scale. We generalize when we learn — we take things we learn in one context and apply them to another — and we need to be able to jump to that next level. That's not to say we're not on the right track, but I think we need to be focused on understanding how to take this further.
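As a minimal illustration of "deep learning is pattern matching," here is a tiny neural network, written from scratch in Python, that learns the XOR pattern from four examples. Every choice in it (layer sizes, learning rate, iteration count) is an arbitrary illustrative assumption, not anything described on the panel:

```python
import numpy as np

# A tiny two-layer network learning XOR: pattern matching in miniature.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)   # hidden layer weights
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)   # output layer weights

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

lr = 1.0
for step in range(5000):
    h = np.tanh(X @ W1 + b1)       # hidden activations
    p = sigmoid(h @ W2 + b2)       # predicted probabilities
    # Backpropagate the cross-entropy gradient by hand.
    dp = (p - y) / len(X)
    dW2 = h.T @ dp; db2 = dp.sum(0)
    dh = dp @ W2.T * (1 - h**2)
    dW1 = X.T @ dh; db1 = dh.sum(0)
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

print(np.round(p, 2))  # approaches [[0], [1], [1], [0]] -- the pattern is matched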
Okay, maybe if I could just come in here. One of the things we have not taken seriously enough is that these deep learning methods are actually correlation techniques, and we are using correlation techniques to make causal inferences. There is of course a danger in that: correlation is certainly not causality, and causality is not correlation. The second thing — which is what we encountered when we worked a great deal on using deep learning to predict the risk of HIV in an individual, for an insurance company, for example — is that the input data is not always complete and not always perfect. Statisticians have spent a lot of time studying decision-making with incomplete information, and because of the enthusiasm about the abundance of data sitting in North America and some parts of Asia, I think we have forgotten the question: how do we design machines that can still make decisions with incomplete information? The third thing is that the imperfection of the algorithms is actually forcing a new paradigm in the design of algorithms — I don't know if I'm making any sense. And we are even forgetting other types of AI, because deep learning is not AI in its entirety. I know many people don't like this, but fuzzy systems still exist, and some form of hybrid that blends the two systems is actually quite important. The last thing I want to talk about is the issue of transparency, because these things are very difficult to explain — what exactly is happening inside them? If we are going to put these systems in charge of critical decisions — in medicine, and some are even thinking of putting them in weapons — who is responsible and who is accountable for the decisions they make, especially if they are going to make those decisions with the human out of the loop?

Yes — that's very interesting, because it queues up my views very neatly. I respect everything you've said, but I disagree with three of the four points; let me talk about each one specifically, and I think it's healthy that we're having this debate in the Union. The first is correlation. The fact that the outputs are correlations rather than causal inferences is, I think, not a problem whatsoever; it depends on how it's going to be applied. In some instances you need causality, but causality is really hard to get — you really need to run an experiment, a trial, specifically to get the causality, and even then it's highly imperfect. Think of how we have actually applied statistics — traditional, plain-vanilla statistics — in society since the mid-1800s (you could go earlier, but let's say the mid-1800s, with Galton and later Fisher and Pearson): we're all in a world of correlation. We've put men on the moon and brought them back based on correlations, without really understanding the causality. We take aspirin without understanding how it actually works in the bloodstream, but we know that it works when we take it. And when you think of how AI works — the correlations on a deep learning neural network that has many layers and millions of cross-connections — the magic of it is that we can take in lots of examples of an image, say pathology exams, ask the machine to compare them against, for example, patient survival rates, then look at the output and predict which patient has severe cancer and which doesn't. We don't have the causality, that's true, but the machine learning algorithm can spot the tell-tale signs of cancer better than the human practitioners. In a 2012 study, Daphne Koller and Andrew Beck, at Stanford and Harvard, did just that, and the machine learning algorithm identified the eleven tell-tale signs that best predict severe cancer. The medical practitioners only knew of eight of them; three were spotted by the algorithm that the humans didn't know about and the medical literature didn't know to look for — because, going through machine vision, it could spot patterns that humans couldn't see. So you didn't need the causal inference; you could rely on the correlations and do a better job. Likewise, in the case of imperfect or incomplete data: the law of large numbers, in a world of big data, suggests that's not the most important thing. If anything, we can infer what the imperfect or missing data is based on the existing data — that's almost axiomatic in this technique.
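To make "inferring the missing data from the existing data" concrete, here is a minimal sketch using scikit-learn's k-nearest-neighbours imputer, which fills each gap from the most similar complete rows. The data is entirely made up for illustration:

```python
import numpy as np
from sklearn.impute import KNNImputer

# Toy data: rows are individuals, columns are measurements; np.nan marks
# values that were never collected. All numbers here are invented.
X = np.array([
    [1.0, 2.0,    np.nan],
    [0.9, 1.8,    3.1],
    [1.1, np.nan, 2.9],
    [8.0, 7.5,    9.0],
    [7.8, np.nan, 9.2],
])

# Fill each gap from the two most similar complete rows -- the missing
# values are inferred from the patterns in the existing data.
imputer = KNNImputer(n_neighbors=2)
print(imputer.fit_transform(X))
```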
In the case of transparency, although I completely agree with you that it's an important issue, I think that over time, on the question of who is responsible and who is accountable, we will answer those questions — and it will be human beings, ultimately. Where I agree with you is that the imperfections of today's systems are leading to a new paradigm in the design of the algorithms, and I think that's a good thing.

I want now to merge two questions together before opening up to the floor. Following on from the points we just discussed: solving the hard problem and creating artificial intelligence that can think and take decisions for itself has been the goal of artificial intelligence research since its conception. Do you think we're any closer to creating conscious artificial intelligence, and if we were, would that possible world scare you?

Maybe I could take this one. Most of the algorithms that have been designed are designed to do a specific task. I still have not seen an artificial intelligence algorithm that can predict the weather, make coffee for me, and at the same time remind me to call my mother — and the reason is that we are training these models to do specific tasks. The crucial point is that consciousness is not going to emerge out of a system that is narrow in its prediction, because what differentiates us from machines is that we are conscious. Maybe I shouldn't talk too much about consciousness, because we don't really know what it is. But I don't think we are close to consciousness, and that is why the idea that machines are somehow going to conspire against us is, I think, a little exaggerated. Unless machines are conscious, you can sleep well; you shouldn't be worried about machines, and at the moment I don't think we are close to seeing machines that are conscious.

I would definitely agree that the AI we use right now is what someone like Forrester, the analyst firm in the US, would call pragmatic AI: AI that's very good at solving a specific task. There's a lot of work being done on AI that's good at types of tasks, or groups of tasks, but at this point we're talking about AI that can summarize a document and also figure out whether that document is positive or negative — not AI that can make you coffee and also predict the weather. So in that sense, yes, we're further away. But I guess I'm a little more of an optimist: I think the tools we're building now are really amazing, and we'll be able to use them to get incrementally closer to strong AI. There's a lot between here and there, though, so I wouldn't worry either; there's a lot we have left to do.

I hate to dull the panel by agreeing with everyone, so allow me to at least say that AI does predict the weather, and AI does remind us to call our mothers. However, I agree with you on consciousness — it's a non-issue.

Yes, but normally it's not one system: you would have to design one that helps you call your mother, and another that makes coffee for you. Human beings are able to do all those things at the same time, and it is really when the structure of what machines can do becomes that complicated that there is going to be hope that consciousness will ultimately emerge out of these machines.

Well, I would just build on that.
The test wouldn't be what it can actually do, of course, because AI is able to do lots of different things, and all it would take is to cobble many algorithms together and you'd have the coffee-making, remind-you-to-call-your-mother system. It's going to be about how it actually has goals in and of itself that are auto-generated, and a sense of self, an awareness of self. I agree we don't actually know what consciousness is — we could debate it for a long time and still not come to a conclusion. But from what we know about how human beings make decisions, consciousness comes partially from the brain, partially from the biome, from the gut, from emotion, from hormonal elements — so much of it is based on our fears, loves, hopes, dreams, ambitions, and epigenetics. So if we don't actually know any of these things, and the test of machine consciousness is going to be somehow about its self-awareness, I just don't see it. I've heard lots of arguments that there will be machine consciousness, and about what happens then, but I've never heard a credible one.

Maybe I could just add something, because that is another limitation of AI. One of the things that makes humans creative is that we are able to imagine things that do not exist. Imagination is what differentiated us from the other sapiens species that did not survive: we were able to imagine things that do not necessarily exist. You can imagine a blue banana in your mind — in philosophy they call these counterfactuals, the ability to construct counterfactuals: what if, what if, what if. I don't think machines are able to do that unless they are driven to do it, and I don't think they are able to do it as well as a human being can construct counterfactuals.

I think — to get ahead of what we're going to get from the audience — AI can definitely come up with things it has never seen before: when you give some of these image-generation systems a sentence or an idea you want drawn, they can come up with a picture. But they need to see a lot of pictures before they can do that. And that really comes down to something in between what you two are saying, which I think is really important to remember: AI needs to be able to take things it knows already and generalize them to other things. When all of you leave this building and leave Oxford, you'll go off and do something else, and you might not apply the things you learned here directly to what you're doing, but they will give you tools — ways of reasoning, information, metaphor — that you'll be able to apply to whatever you do next, in a way that helps you learn it faster and more efficiently. Right now that ability to generalize is something we're starting to work on, and we're starting to get better at it. Not to get too technical, but there's something called transfer learning that has really taken off in the past year of AI, and it is going to be very critical to doing this kind of thing — though even with transfer learning, there's a lot we still need to do.
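To make "transfer learning" concrete, here is a minimal sketch of the common recipe: reuse a network pretrained on one task and retrain only a small head for a new one. The model choice, class count and dummy batch are illustrative assumptions, not anything described on the panel:

```python
import torch
import torch.nn as nn
from torchvision import models

# Start from a network that has already learned general visual patterns
# on ImageNet -- the "things it knows already".
model = models.resnet18(pretrained=True)

# Freeze the pretrained backbone so its general features are kept as-is.
for param in model.parameters():
    param.requires_grad = False

# Replace the final layer with a fresh head for the new task
# (here, a hypothetical 5-class problem).
model.fc = nn.Linear(model.fc.in_features, 5)

# Only the new head is trained, so far less data and compute is needed
# than training the whole network from scratch.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch standing in for real data.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 5, (8,))
optimizer.zero_grad()
loss = loss_fn(model(images), labels)
loss.backward()
optimizer.step()
```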
So I'd say there's a lot between this and consciousness. I don't know what a test for a conscious AI would look like — I don't think we know enough to be able to build that test. I think we have to ask how we know when we're getting closer, and when I talk to other practitioners we can almost agree on a set of things that would mean we're closer: generalization, a bunch of the things we've talked about, being able to do two things at once without having trained for one of them. At that point, maybe we should start having discussions about what a test would look like, but I don't think we know enough yet to know.

I think now is a good time to open up to questions from the audience, so if you'd raise your hand, we'll get a microphone to you. Yes — what's the first question?

[Audience member] Thank you. Essentially two questions. First off, you haven't really talked about the possible social implications of vast automation. What is this process going to look like? Are we going to be a species that suddenly only does art and stops working, or are the rich going to keep all the money while the rest of us can't work? And a second question: if we lower the level from consciousness and talk a bit about understanding — you're probably familiar with the Chinese room thought experiment. The idea, for anyone who hasn't heard of it, is that you're in a room with a Chinese dictionary; you're given inputs and you produce outputs, and to someone outside it looks like you know Chinese, but you have no idea what the symbols mean. Is there a chance that computers can escape this and achieve understanding, if not necessarily consciousness? Those are my questions, thank you.

Okay, maybe I can take this one. On the issue of work — what is the implication of AI for the world of work? There is a concept in artificial intelligence called Moravec's paradox. It basically says that, on an evolutionary scale, skills that are older are much harder to automate than skills that are younger. For example, we have only been writing and reading for about six thousand years, but we have been climbing trees since before we became human beings; it is therefore much more difficult to construct a robot that can climb a tree than a robot that can read. So my view of the impact is that it is actually the white-collar jobs that are most at risk, and then the blue-collar jobs. The second point is that when you put people out of work through automation, then from an economic perspective aggregate demand goes down — and we know production is driven by demand, so who is going to buy all these things that are going to be made? I think our whole system of economics is under threat.

Now, on the Chinese room — that is what the philosopher Searle calls it — I think it is a rather simplistic way of evaluating intelligence. Human beings can show you intelligence even when you are not seeing what is happening on the other side. It's a good thought experiment, but even as a Turing test, it shouldn't be an event — it should be a process, where you are interacting with the machine over a longer period of time; then I think you will be able to find out whether it is actually a human being or a machine.
I guess I'm probably one of the optimists in the room about the future of work and automation. To start out, I'll give a little grounding in the statistics: according to McKinsey and Deloitte studies, about 5% of the jobs currently out there can be completely automated right now, but about 30% of jobs can be about half automated. We've been through this cycle a bunch of times before — every time there's a big new technology or an industrial revolution, we see this happen, and we see new jobs being created. If you think about the jobs created over the past 20 years, most of them are things that didn't exist before. So there are really two classes here. For jobs that are completely eliminated, what happens is up to us: new jobs are going to be created, but they're going to require different skills than the jobs that were destroyed, so we need to help workers — the population as a whole — move careers. This is where job retraining comes in, which we talk about and which I think is tremendously important. On the flip side, for the other group, where part of what they do on a daily basis is automated, we're going to have to learn to work with machines. As much as Ken talked about the study where doctors were able to discover new ways of detecting cancer, a lot of studies have shown that people and computers work better together than either does separately — and people sometimes work better in groups than they do separately. When you see all these new benchmarks where an AI has beaten a person, if you take a group of people, they often beat the AI's benchmark as well, which is interesting. So we have to learn to work better with machines, and that is going to open up tremendous possibilities, as long as we do it right.

Yes, I agree with that, and I'd build on Catherine's points in a couple of ways. The first is to say that if you think about how AI can interact with the economy, although on the surface it looks like it's going to be destructive to many jobs — both blue-collar and white-collar, so I sort of agree with everyone, though I'd stress blue-collar as well — it's also going to lead to an incredible boom in productivity. Take the one example of the pathology exam. The cost of a pathology exam today is probably about a thousand dollars, and it goes down to, we'll call it, twenty pence — because it's a machine, it's an algorithm, it's just some cycle time in a processor. And it's not that we're going to have the same number of pathology exams that we have today, just cheaper: we're going to be running pathology exams all the time, on our biochemistry. It'll be in our saliva when we brush our teeth at night; it'll be in the stool sample. Maybe we'll learn something new about the progression of disease that we never knew before, and we'll know about problems before they become so serious that we need to cut open the body to get at them.
It will be mystifying, in 50 years' time, that today we always hear these terrible stories of someone diagnosed with four months to live, or diagnosed with a tumor the size of a golf ball in his back. And — like you, I'm not an expert — you have to wonder: how did that happen? How did something that was once the size of a grain of sand become a grain of rice, and then only when it became the size of a golf ball or a grapefruit did the person present at a clinic and have it diagnosed? You'd think something is wrong in society — well, that's how the system works. But if the cost goes down so much, we'll see a lot more pathology exams. And we've got lots of examples all through economic history: when the price falls, and a technology seems job-destructive, we actually see the use of that good increase. One small example is fabric. Two hundred years ago it was very typical for people — well, people who went to Oxford — to have several items of clothing, but most people would have maybe two: one for everyday work, the other for Sunday. Nothing was upholstered, and only the rich could have curtains. By the mid-1800s, because of the mechanical looms, fabric was going everywhere and being applied to everything — we had fabric on the walls of middle-class houses, wallpaper in effect. We're going to see the exact same thing here. In terms of how algorithms interact with society: it will stress jobs, but jobs, as Catherine has pointed out, will change, and I think we'll see the centaur — the person who works with the algorithm to get the job done.

Let's take another question — yes, the raised hand there.

[Audience member] If we take our objective for AIs to be to create beings that can make entirely objective and rational decisions, how are we going to create such machines if the data set we feed them, and the data set the development is based upon, already contains inherent biases? Obviously the latter claim is based on the assumption that, due to our socio-economic constitution, this data already contains inherent biases that we haven't overcome and won't overcome in the future. So if we feed an AI a data set that already contains biases, how can we create artificial intelligence that will ever make entirely objective and rational decisions?

I can give a very practical answer to that. As I mentioned in my intro, I run probably the largest open knowledge graph for AI, and we've been at it for 20 years; we started with crowdsourcing. When we started discovering that there was bias in our data, we first came up with some tests to measure and understand that bias — some of them based on tests from Microsoft, which does tremendous work in this field as well. Once we understood it, we were able to go into the model and remove those biases. In the end we had a model that was very much like the one we started with, but with those particular biases removed — and that model actually performed better. So the answer is that there are computational ways to look at bias in data, and there are ways of looking at the training data set and editing it after the fact. In some ways I'm very much an optimist about bias in data, because I think it's a machine learning problem now.
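As one concrete illustration of "computational ways to look at bias and remove it": a common technique for word embeddings — in the spirit of published debiasing work, though not necessarily what ConceptNet itself uses — is to estimate a bias direction from paired words and project it out of otherwise-neutral vectors. The tiny vectors and word list below are made-up stand-ins:

```python
import numpy as np

# Toy 4-dimensional "embeddings" -- stand-ins for real word vectors.
vecs = {
    "he":       np.array([ 1.0,  0.1,  0.0,  0.2]),
    "she":      np.array([-1.0,  0.1,  0.0,  0.2]),
    "engineer": np.array([ 0.4,  0.8,  0.3,  0.1]),  # carries a spurious association
}

# 1. Estimate the bias direction from a definitional pair.
bias_dir = vecs["he"] - vecs["she"]
bias_dir /= np.linalg.norm(bias_dir)

def debias(v, direction):
    """Remove the component of v that lies along the bias direction."""
    return v - np.dot(v, direction) * direction

# 2. Measure the bias before and after: projection onto the bias axis.
before = np.dot(vecs["engineer"], bias_dir)
after = np.dot(debias(vecs["engineer"], bias_dir), bias_dir)
print(f"bias component before: {before:.2f}, after: {after:.2f}")  # after -> 0.00
```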
I mean, the issue of rationality is obviously quite a difficult one, and what happens with a machine is that, in order to train it, it's not just the data that is an issue — the optimization process is also an issue. In mathematics we say that this optimization problem is non-convex — much of it is non-convex — which means the solution you get is only the best you are able to get, not necessarily the true optimum. And before I talk about the implication of this: these are models, and I think the British statistician Box once said that all models are wrong, simply because they are approximations — "all models are wrong, some are useful," exactly. So if we agree that the optimization process is not optimal, and that all models are wrong but some are useful, then you can never have full rationality: what you are going to have is bounded-rationality machines. This is exactly what Herbert Simon said — rationality is bounded. You cannot use all the variables you would need to use; you have to use the ones that have the most impact, because otherwise the model becomes too complex and you run into the curse of dimensionality, so you are balancing that against the complexity of the model. We have to accept that the machines are not fully rational. But one thing is for sure — and we have done some work on this — machines actually seem to be more rational than human beings. The behavioural economists have shown that human beings are fundamentally irrational; the AI machine is slightly better at rationality, but it is certainly not fully rational.
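A minimal sketch of what "non-convex" means in practice: gradient descent on a function with two valleys lands in whichever local minimum the starting point happens to fall toward, so the answer you get is only locally best. The function and step size are illustrative choices:

```python
# Gradient descent on the non-convex function f(x) = x**4 - 3*x**2 + x.
# It has two valleys; which "optimum" you reach depends on where you start.

def grad(x):
    """Derivative of f(x) = x**4 - 3*x**2 + x."""
    return 4 * x**3 - 6 * x + 1

def descend(x, lr=0.01, steps=2000):
    for _ in range(steps):
        x -= lr * grad(x)
    return x

print(descend(-2.0))  # converges near x = -1.30, the global minimum
print(descend(+2.0))  # converges near x = +1.13, a worse local minimum
```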
I would agree with that entirely in terms of the behavioural psychology — human beings are fundamentally irrational, and Kahneman and Tversky showed that so beautifully. But I would disagree slightly with the presumption of the question that all data is biased. All data is, if you will, "subjective," insofar as a person had to decide that this is the relevant thing to collect rather than something else. But the presumption of the question is that it's biased in the way human beings can be biased — prejudiced — and that is not always so, and it doesn't impugn AI writ large. I'll give two quick examples. The first is jet engine monitoring: we can all agree it would be a good thing to identify whether a jet engine is going to fail while it's in the air — predictive maintenance. We can monitor it and use AI to tell whether it's going to perform well or whether a maintenance issue might crop up. It's hard to imagine a bias there. Likewise, imagine counting blood cells in a pathology exam. Yes, a human being has chosen one way of looking at the data versus another, but it wouldn't be biased in the way the question presumes of humans — biased against a subgroup, and so on.

I think it's not just data that's biased: it can be feature engineering, it can be what we choose to select and what we choose to optimize for. And it's not even only about the places where we think we are being completely unbiased, because we could be optimizing against a system that had biases built into it beforehand. So I think it's really important — and perhaps I oversimplified — that we look at things and see whether the effects they have are something we can understand, and where the biases are. That's why we want to come back and ask: can we computationally look, from either a feature-engineering standpoint or a data standpoint, at what's causing these biases, and pull them out? I think that matters tremendously. You know, I have two hats: about half the time I'm in the corporate world and about half the time in the academic world. In the academic world we talk constantly about this, just as we are here today; in the corporate world, it's only within the last five or six months that I've ever heard anybody ask me about it, and then only in financial services and a couple of consulting firms. So why don't we talk about this? We can debate whether what we're doing right now and the efforts we're making are good enough, and we can come up with lots of ways to make them better, but we also have to educate people out there, because even in the systems used every day there are these kinds of issues, and I think the business world doesn't think about that right now.

I think we have time for one final question — yes, the hand in the back row.

[Audience member] Hi. I was wondering if you could elaborate a little — you touched on it in the introduction — on the role that law will play in this, and on the hostility right now of privacy law to the collection and use of data. Do you think that will inhibit AI from developing to that next stage, and if so, how do we get around it?

Should I start us off? Okay. It's so interesting: when we say "privacy," it really depends on who's saying it and where. Being in Britain in 2019, in Oxford, when we say privacy, what we're really talking about — what we're hearing, if you will — is the same ad following us around the internet for the red jumper we thought about buying on the 24th of December, and it's still around in March. It's an annoyance; it's behavioural targeting. It could be worse than that, but frankly we live under the rule of law, and we use that as a backstop for lots of areas in which we are complacent. If we were to say "privacy" in the Netherlands in 1941, it would mean something else. If we were to say "privacy" in Beijing, or as a Uyghur, it would mean something else again. So let's go to the elephant in the room: the social credit score in China, where facial recognition, among other techniques, is being used to identify where people are all the time in western China. That should shake us to the marrow of our bones, because it is completely anti-democratic: in an environment where you don't have individual rights, where you don't have any backstop in the sanctity of the individual vis-à-vis the state, it could be a terrible thing. If techniques like that were used for law enforcement in Britain, you might say, well, we operate under the rule of law and there will be constraints on power — but that would be a really thin reed to lean on. So I think we should be very cautious and have a much stronger debate on whether we would want that technique at all, because that really is the surveillance state writ large.

Let me leave that aside for a moment, having put it out there. When you look at how privacy rules — the GDPR among them — interact with research today, researchers who just want to do basic things that are pretty good for society often find the rules impede the use of data for beneficial purposes.
Whether it's trying to tackle climate change with personal data or, most importantly, healthcare — there really ought to be an exception, an exemption, a carve-out: if you are a certified researcher and you want to do something clearly socially beneficial with the data, it should be facilitated. When you say that, the pushback you get from the privacy regulators is that the law already says you have these exemptions — it's built in, it's the law. And they're wrong, because although the law sort of says that, the law elsewhere drops a ton of bricks from the sky onto the researcher's head. It creates a culture of non-use of data. Even if you could do it, you knock on the door of the general counsel of your company or your hospital or your newspaper, and the person basically says, "It's not worth the risk," or, "Okay, you can do it — write me a memo," which is basically the Kafkaesque version of "It's never going to happen in your lifetime." So there has to be a better way to balance it. What I would argue is that in the West, in places where we have these safeguards, we should facilitate the use of data more than we have, but we should be really mindful of the larger stakes — the position of the individual vis-à-vis the asymmetry of power with the state — to make sure that the sanctity of the individual, and the privacy of the individual, can somehow be preserved. I stress "somehow," because if you really connect the dots and think about it, it's frankly really hard to see how we're going to preserve our privacy if the state — even in Britain — and the police forces want to implement this technology for law enforcement. It could be very destructive to the personal privacy of the individual, and I don't have a very good solution, though I think long and hard about this.

I agree with everything Kenneth has said, and I am also concerned. On top of that, you can look at how much information we're giving away in the digital breadcrumbs we leave behind in our everyday lives. We walk around with our phones on us every day, and it takes only maybe four or five points of knowing where you are in your day-to-day routine to actually identify you.
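A toy sketch of why a few location points suffice — count how many people in a mobility table match a handful of observed (time, place) points; even in this made-up data, the candidate set collapses to one person very quickly:

```python
from collections import defaultdict

# Made-up mobility records: (person, hour-of-day, cell-tower id).
records = [
    ("alice", 8, "tower_12"), ("alice", 13, "tower_07"), ("alice", 19, "tower_31"),
    ("bob",   8, "tower_12"), ("bob",   13, "tower_02"), ("bob",   19, "tower_31"),
    ("carol", 9, "tower_44"), ("carol", 13, "tower_07"), ("carol", 19, "tower_31"),
]

visits = defaultdict(set)
for person, hour, cell in records:
    visits[person].add((hour, cell))

def matching_people(observations):
    """People whose traces contain every observed (hour, cell) point."""
    return [p for p, seen in visits.items() if observations <= seen]

# One point leaves several candidates; two already pin down one person.
print(matching_people({(19, "tower_31")}))                   # ['alice', 'bob', 'carol']
print(matching_people({(19, "tower_31"), (8, "tower_12")}))  # ['alice', 'bob']
print(matching_people({(8, "tower_12"), (13, "tower_07")}))  # ['alice']
```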
When we were doing Games With a Purpose in the beginning, Luis von Ahn at Carnegie Mellon collected some information about people as well, and we found out that it took only a handful of words to back-solve the demographic information of the people labelling photographs. Emergent data is a real problem — emergent private data is a real problem — and however much we regulate, people will be able to infer things from the other digital breadcrumbs we leave behind. It becomes very scary.

I think privacy is a very important issue, but what worries me is that the companies gathering the data give you these contracts where you simply have to say yes, and then they start collecting. I think it's an unfair way of collecting data, and we need to think about how we're going to manage it. The second thing is that when a company based in Silicon Valley starts gathering data in Burundi, for example — international law is not harmonized, but these companies operate in multiple countries — the people whose data is being taken do not have any say in what happens to it. That is a problem that needs to be sorted out, and it cannot be sorted out by a single country; we need to find a way to come together to tackle it. The other issue I want to raise is the socially beneficial data that Kenneth talked about. If a company holds a piece of data that was collected from a population that probably did not know how much it was giving away, and that data is beneficial to society, should the company be allowed to keep it in perpetuity? Or should we make some form of law that after, say, five years, that data should be put into the public domain — while protecting the owners of the data? If it is data about some form of medical condition, you put it into the public domain so that other people can use it to create technology that solves the problem that data is able to solve. It's almost akin to the concept of nationalization of data, and something like it happens already: if you invent something, you get a patent, and you are allowed to exploit that patent for — is it 17 years, 20 years? — and after that, other people should also have the chance of exploiting it. I think we should start thinking about that framework, especially for data that is beneficial to society: it should not be owned by a company in perpetuity.

Well, let me just build on that quickly, because there is some good news. There is research being done on how we can process data while preserving privacy — in fact, one of the leaders of a movement doing exactly that is in the audience tonight: Andrew Trask, who's getting his PhD here at Oxford. It is a way that you can interrogate the data and process it for AI without actually being able to tap into the underlying data. That's great, if it's used right.
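As a simple illustration of the same goal — answering questions about data without exposing any individual's record — here is a differential-privacy-style sketch using the Laplace mechanism. This is one family of privacy-preserving techniques, not necessarily the approach Trask's own research takes, and the data and privacy budget are invented:

```python
import numpy as np

rng = np.random.default_rng()

# Made-up sensitive values: e.g. 1 = has a condition, 0 = does not.
private_data = np.array([1, 0, 0, 1, 1, 0, 1, 0, 0, 1])

def dp_count(values, epsilon=0.5):
    """Release a count with Laplace noise calibrated to its sensitivity:
    adding or removing one person changes the count by at most 1."""
    true_count = values.sum()
    noise = rng.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Analysts see a useful aggregate; no single record can be read off it.
print(dp_count(private_data))  # e.g. 5.7 -- close to the true count of 5
```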
But for someone who doesn't want to go through the rigmarole of applying that process, it simply does not neutralize the bone-chilling concern that we're going to have sensors everywhere that can detect and identify who we are just by dint of who we are — by the pheromones coming up from our biochemistry. Our face is already a barcode with facial recognition, and it's going to get more and more elaborate at spotting us; of course it never forgets, and it can identify us even from partial views of our bodies. So that's not going to be the full answer. Part of the answer is going to be law, but there will have to be other creative ways to preserve privacy that we haven't thought of yet — and we had better start thinking.

A couple of things. I definitely think five years is a long time — data gets old within that period — but to do anything like that we're going to have to come up with and use these ways we're talking about of anonymizing data. And even when we do, as I was saying, there is this real emergent data you can get about a person: you can get a lot of information from only a tiny bit of data, and when you release a data set you can't fully anticipate what will be inferable about a person later on down the line. We all saw that happen with Netflix — everybody knows about the Netflix challenge. The first one happened, with the recommender systems, and then the second time they actually had to pull the challenge back, because people were able to figure out who individuals were from the review records. So if we look at data, we're going to see that we can pull out a lot of this emergent information.

Well, thank you — unfortunately that's all we have time for, but please join me in thanking Kenneth, Dr. Havasi and Professor Marwala for joining us today.
Info
Channel: OxfordUnion
Views: 53,112
Keywords: Oxford, Union, Oxford Union, Oxford Union Society, debate, debating, The Oxford Union, Oxford University
Id: uqeqnE7CLr8
Length: 57min 56sec (3476 seconds)
Published: Mon Feb 18 2019