The AI Revolution | Toronto Global Forum 2019 | Thursday, September 5 |

Video Statistics and Information

Captions
Okay. If anybody in this room thinks that I was even the least bit intimidated before accepting to do this interview, you would all be right. That said, we're going to try and have a great discussion here, and I know that twenty-five minutes from now people will walk away knowing quite a bit more than they knew when they came in. So let's get started. I thought it would be helpful, given that the two of you, with a colleague, just won the Turing Award for your discoveries on neural nets and deep learning, if you, Geoff, gave the audience a good understanding of what exactly deep learning is and what neural nets are.

Sixty years ago or longer, at the beginning of AI, there were two ideas about how you make intelligent systems. There was a logic-inspired idea, that you process strings of symbols using rules of inference, and there was the biologically inspired idea, that you mimic a big network of brain cells and learn the strengths of the connections. These were very different paradigms, and for a long, long time the neural net paradigm, based on trying to mimic the brain, didn't work very well, and we didn't really know why. In the end, it turned out it didn't work very well because we hadn't given it enough data and we didn't have enough computing power. Starting at the beginning of this century, we got more and more computing power and more and more data, and suddenly systems that learned how to do things, as opposed to systems you programmed, became effective. That's what's happened in the last ten years: we've seen them become much better at speech recognition, much better at recognizing things in images, much better at machine translation. All of this is done by taking a big network of simulated brain cells that have connections between them and modifying the connection strengths so the network behaves more the way you'd like it to behave. And so to get it to do something, you don't write a program that tells it how to do that particular
thing; you write a program that tells the big network how to learn. Then for any particular task you just give it some input data, you show it what the correct output data is, and it figures out how to change all the connection strengths so that if you give it those inputs, it gives you the right outputs. And it tends to generalize well to new examples. So if you've got any big data set and you want to predict anything, this is the way to go.

So essentially you're mimicking the way you believe our brains work?

At a very abstract level, we're mimicking how the brain works. All the details are different, but the general idea, that you learn from examples by adapting connection strengths, is how the brain works.

Yoshua, please motivate a bit more why learning is so important, and why the classical AI approach, based on symbols and rules and facts that were given by humans, didn't work.

Because there are so many things that we know how to do but we can't program computers to do; we don't have conscious access to that knowledge. You and I know how to recognize that this is a glass of water, but we can't tell the computer how to do that job. It turns out there's a lot about the abilities of our brain that we can't dissect into simple explanations we can give to computers. We can't even explain it to another human, because we don't have access to that knowledge; it's hidden in our brain. So the solution for computers to have that kind of knowledge is to learn from data, just like children do. That's why it's so important.

So is this the closest approach to mimicking our brain, as opposed to the logic approach?

It's not mimicking our brain, because neuroscientists would say the brain is very different from what we're doing, but it's clearly inspired by a lot of the things we know about the brain.

And your background was in cognitive psychology. I was curious about this: that combination of cognitive psychology and computer
science, was it that psychology background that led you to this way of thinking?

Actually, I didn't get along very well in psychology, and I quit. The inspiration was from thinking that what those cognitive psychologists were saying was completely wrong and would never work, so we had to come up with something else to figure it out.

For both of you: you worked in this field for many years, and Geoff, you a bit longer because you're older, kind of in the wilderness, meaning your approach was not being taken seriously. What keeps you going? I think it's so interesting about scientists and their curiosity. What keeps you pushing to get to the breakthrough when the mainstream computer scientists are kind of ignoring you, or even worse?

We were obviously right.

But it takes something, right, doesn't it?

Yeah. I mean, other than brainpower, I think if you want to be successful in research you have to be willing to do things that others are not doing, because research is exploration, it's discovery. Before you make a discovery, many other people might not believe in it, so you have to have that self-confidence to some extent, and a willingness to take risks, for this kind of thing to happen.

I think that's a big idea: being willing to do something that no one else is doing, particularly when it looks implausible. As Yoshua said, what people in conventional AI were doing was putting facts into the computer. They would look at the world, write down some facts, express them in some logical language the computer could process, and put the facts into the computer. The alternative was that the computer would derive all this knowledge just from data, where the knowledge wasn't at all explicit, and that seemed hard to do. In particular, the idea that you could get a computer with a lot of random connections in it and it would learn to do complicated things like
machine translation seemed utterly implausible to almost everybody.

Let me just add another thing to clarify the terms. AI is about building machines that might eventually be as intelligent as us. Machine learning is an approach to AI where we want computers to learn how to do things and understand the world. And deep learning, and neural nets, is a particular form of machine learning that is inspired by the brain. All of these terms can be confusing sometimes.

No, that is very helpful. What do you see as the most exciting initiatives going on today where deep learning is actually being applied? One of the biggest initiatives?

That's hard to answer, because there are a lot of different areas where it's being applied. For example, to save the planet we need to make solar panels more efficient, and to do that we need nanotechnology, and deep learning is now being applied to predicting the properties of materials. I think it may have a big impact there. If you could make solar panels ten percent more efficient, that would have a huge effect and could tip the scales on the viability of that. In fact, the same techniques could potentially be used (it hasn't been done yet) to build better carbon capture and better batteries. The applications to climate change are something new, but they have a lot of potential. And it's not just in materials. It's also, for example, in improving the efficiency of the use of electricity, in using forecasting to be able to use renewables more efficiently, and in better climate models, because it's very hard to predict a future that is changing. So there are lots of ways, but these are more where we're looking forward. Right now, most of the applications of deep learning are in companies that are using it to improve how they interact with their customers.

Predictions?

Yes. It could be, for example, search engines, recommendations, proposing things that customers need.

I
want to come back to that in one minute, the prediction machines and the ads and the proposing things customers want. Let me ask you about driverless cars, because that's one of the applications we hear so much about. Who wants to take a shot at telling us a bit about the status of driverless cars?

I think it's inevitable they will come, and I think when they come they'll save a lot of lives. I think it may well be that there's a transition, and this is just my own personal opinion, in how we view transport. Currently we have cars and we have trains. What if we could have things like trains, but that came whenever you wanted and went wherever you wanted? They're not things you own; they're things that are socially owned, and they're highly coordinated, so you have a lot of central coordination and you can get a lot of them traveling very close together, very fast, without problems. So I think there may be a transition, over a longer time period, in the whole way transport works.

Okay, so the concept that you own your personal car, that paradigm goes away?

I think that concept is going to disappear, but it'll take time.

And we're not there yet, right? So let's think about driverless cars at some level, even the ones that we see tooling around the Google campus. How long before you imagine those will be commercialized?

I think it's sometime between a few years and fifty years.

Okay, really? It's still that unknown?

I think it's very hard to predict the future. You can predict the future quite well for a few years, and then it's like fog: you can see quite clearly, and suddenly you hit this wall where you just don't know what's happening beyond it. I think predicting the future is like that. For driverless cars, I'm pretty confident that in the next ten years we'll have a lot of them, but we don't know for sure.

I'm curious, because I thought it was so much more advanced. You hear
about it as if it's coming tomorrow. What's creating the fogginess?

Think of it like an 80/20 issue. Making progress on driverless cars was very fast initially, and in fact somebody with a bit of technical know-how can, in a few months, cook up something that will kind of work, so long as the driving situation is easy. But then getting those things to be really safe, at the level of humans, is going to take a lot more work. It's really hard, so it's not necessarily as soon as we think. There's a lot of investment going on, but there's also a lot of uncertainty, because there are some basic challenges which remain to be dealt with.

Let me give you an example. Machine translation is now pretty good, but there are some things we're still quite a long way from being able to do, and we don't know when we'll be able to do them. For example, if I ask you to translate into French "the trophy would not fit in the suitcase because it was too small," you think the "it" refers to the suitcase, because it's too small. But if I say "the trophy would not fit in the suitcase because it was too big," you think the "it" refers to the trophy, because it's too big. In French they're different genders, so you have to know which it is to translate the "it," and Google Translate can't do it. If you try it on Google Translate, you won't get it right; I mean, they'll get it right half the time. That's acceptable for machine translation, where you can make the occasional mistake. It's not acceptable for self-driving cars. There are cases like that where you need a deep understanding of what's going on in order to make the right decision, and there are many different cases like that, all of which are rare, but you have to get them all right.

So do you imagine that by the time self-driving cars or vehicles are accepted, the assumption is they will be perfect in their performance?

No. I think the public
will want them to be better than humans, much better. But it's not sufficient for them to be better than humans on average. That is, if on average they're better than humans, if they kill far fewer people than humans, but they make mistakes that humans wouldn't have made, the public is going to be very unhappy.

This in itself could be a two-hour discussion, and I want to get to a bunch of things, but it does raise all kinds of questions, like who's responsible if someone gets killed. Is it the driverless car, the companies building the cars, the companies building the algorithms?

I think the responsibility is the developer's.

I like that. As I said, this in itself could be a two-hour discussion. Let me come back a bit to the part we know that's more proximate, which is the use of deep learning in companies that are essentially selling us things or services, Amazon, Google, or engaging us socially, Facebook, et cetera. They are in a huge race to get better and better, and they certainly seem to be in a race to accumulate as much data as they can.

And researchers.

And researchers, yes. So there are two parts: their dominance in accumulating the researchers, and their massive dominance in accumulating data. Help us understand where this goes. I imagine they want to know what we do, where we eat, where we sleep, who we're having relationships with, what we watch, et cetera. At what point do they have so much data about us that their ability to influence us overtakes our ability to influence ourselves? And should we be concerned with the larger issue of human agency as these prediction machines get so much better?

Definitely we don't want other organizations, whether these machines are controlled by a particular person, or a company, or a government, to have too much influence on us, to basically manipulate us. This is not morally acceptable, and I think politicians eventually will have to face decisions about where
we put social norms, laws, and regulations to clarify what is acceptable and what is not acceptable in terms of how AI, using a lot of data about each of us, can influence us. Personally, I would put the bar very low, to make sure that regulation would be high, meaning that I don't think we should allow the use of AI to influence people into doing things they clearly haven't chosen, that aren't in their own interest but maybe in the interest of some other organization that wants to sell you something, or whatever.

So does that require some kind of privacy laws or transparency laws? And I'm going to get to Geoff on this, because I know he has a view about transparency. Is that where you think it has to go, to some kind of regulation on our privacy?

It's not just privacy. Privacy is another issue, which is what data we allow others to use, and for what purposes. Part of it is that we don't want our data used against us, essentially, whether it's your medical data, or where you go and with whom. So it's related to privacy, but it's more about the limits we put on how AI can be used, what's morally acceptable.

I think with medical data it's clear that there's a big trade-off. I don't necessarily want people to know much about my medical conditions, but I'd really like to get better predictions and better treatment based on programs that learn from lots of other people's medical conditions. So there's a trade-off I have to make: am I going to get much better medical treatment if you can use a lot of data to make predictions? Things like blockchain will be very helpful there, so that I can control my data and make decisions like, okay, I want you to be able to predict well what treatment I need, so you can use my data for learning; and if I don't like that, I can say, okay, I'll accept much worse predictions, and you can't see all my personal data. I think it
would be very good if people could make that trade-off, and until recently that didn't seem feasible; it looked like a politician had to make the trade-off for everybody. But I think with blockchain you can give people actual control.

But you would assume most people, if you said, look, if you share your data, nobody knows it's you personally, and they can't use it against you, they can't not hire you, or refuse you a loan, or refuse to go out on a date with you, or whatever. If you share it on that basis, you're trusting the fact that that's what's going to happen, right?

So we need a mechanism to make sure that those wishes are actually satisfied, and maybe one possibility is to have neutral third parties, what we call data trusts, that would be defending my needs. I'm not a lawyer; I can't understand tens of pages of legal language about how my data is going to be used. But an organization which understands my needs, and the needs of millions of people, could have the means to sign on my behalf when my data is going to be used for some purpose.

So some intermediary between the user and the company. I'm a bit fascinated and interested in your view. We in the Western world have this concept of privacy; we think we're entitled to it. China doesn't have such an issue with privacy; in fact, people there seem to recognize they have no privacy, and the government knows everything they do. If we get engaged in trying to understand the proper levels of legislation and regulation, and China does not, does that potentially give them the ability to have a huge lead over us, in what their scientists can do? Is that a concern?

I don't think so. I think mostly it gives them a huge lead in controlling their population.

You don't think it gives them a lead in advancing their artificial intelligence, because they have more data? No concern?

My main concern there is, I have a friend who just came back from Western China, where he knew some Uyghurs, and what's happening there is
terrible, and it depends on mass surveillance. It worries me that AI can be used that way, and I think we need lots of protections to stop it being used that way in the West.

It's going to be difficult to do in the current international coordination framework, where states have a lot of autonomy in how they deal with their internal affairs. Ultimately, this and other issues like climate change raise the question of how we set up global rules for the planet, where human rights and the environment and fiscal equity, or whatever needs to be settled at the global level, can be dealt with. I don't think the current institutions we have, with the UN and so on, are sufficient for that.

It does feel like there's this asymmetry between the advancing world of AI, even though you say there's so much still to do, and our ability as societies to structure it and make decisions. I mean, the entire United States is still wrapped up in deciding whether or not abortion is legal, a case we thought was settled forty years ago, and they're still litigating it. They're having trouble accepting the fact that climate change is real. We see what's happening all over the world with different governments. Can you comment a bit on that asymmetry, between where science is and where we as humans and organized societies are?

I think society is going backwards while the technology is going forwards. Technology can greatly increase productivity, but whether that great increase in productivity benefits people in general is going to depend very much on political decisions, and it's very worrying that we're getting these populist governments that deceive the majority of the people. I don't like it.

You don't like it. As you said, and I know you've said many times, you're a scientist, not a social policy maker. I think you do try and think about social policy, but it feels to me like we need you to engage somehow in
the social policy discussions the whole society needs to engage in.

In Montreal we built this Montreal Declaration for the socially responsible development of AI, and we tried to bring around the table not just the AI scientists but the social scientists, the political scientists, legal and medical people, and ordinary citizens, who came to libraries to give us feedback and build a better set of principles for social norms regarding AI. I want to come back to your question. There's a phrase that I really like, connected to this discussion, which is the "wisdom race."

The wisdom race?

The wisdom race, right: a race between our collective wisdom on one hand and the power of the technologies that we're bringing into the world on the other. Think of it this way: if you give big guns to children, some bad things are going to happen; if you give them big bombs, it might be even worse. AI could be misused in big ways, and the more powerful it is, the more it can be misused by fewer and fewer people. So we need to take care of the wisdom part.

And I think you need to respect the Second Amendment rights of children.

This "wisdom race" is a beautiful phrase. Of all the things I'll take home today, that will be at the top of the list. I know we have many international guests here today, but we are here in Canada, and we do have the two of you as Canadians. I am curious as to your views on what we in Canada can do to build on this wonderful leadership that you have started to create. What's one of the most important things we can do in Canada to build on this lead?

Well, my own personal view is that the most important thing we could do is re-elect Trudeau.

Okay, well, there we go.

In addition, I think that Canada has a tradition of being a strong but neutral and positive, peaceful player on the international scene, and AI, among other questions, I think requires Canada's
presence at that table and taking leadership, and not just following our neighbours to the south. That would be good.

You brought up elections and politics, so I feel I have permission to wade into this. There is all this proof now that the last election was hacked, and hacking could be a whole discussion I'd love to have with you, the risks of hacking of AI systems. Is there any way to avoid that level of interference in the American, or any, election? But let's talk about the American election, because it is so important to the world. Is there any way to avoid that level of interference in the next election?

Machine learning researchers can do things about it. Cambridge Analytica was funded by a machine learning researcher called Bob Mercer, and other machine learning researchers didn't like what Cambridge Analytica did, and what Mercer had done became an embarrassment to the company he worked for, and he had to retire as the co-CEO of that company. In this cycle, so far as I can tell, and because of the embarrassment produced by other machine learning researchers' reaction, Bob Mercer is not going to be funding that. It's just a little thing we can do, but at least I think we've kept him out of the equation this time around.

So it is doable. I think there should be some big-time articles, like New York Times front-page articles, on what can be done. Last question, because I'm out of time, and I just want to ask you this: how close are we to the singularity?

That assumes there is a singularity.

Okay, so is that a word that should be discarded?

Yeah. I think the singularity is definitely well beyond the point where the fog stops you seeing anything.

All right, thank you. It is such an honor to be here.

[Applause]
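The learning procedure Hinton describes at the start of the conversation (show the network input data, show it the correct outputs, and let it adjust its connection strengths until the inputs produce the right outputs, then generalize to new examples) can be sketched in a few lines. This is purely an illustrative toy, not anything from the talk: the function name, the learning rate, and the example data (a single simulated neuron learning y = 2x + 1) are all invented for illustration.

```python
# A minimal sketch of learning from examples: instead of programming
# the answer, we show the system input/output pairs and repeatedly
# nudge its connection strengths (weights) so its outputs move toward
# the correct ones. Here one simulated neuron learns y = 2x + 1.

def train(examples, lr=0.05, epochs=2000):
    w, b = 0.0, 0.0  # connection strength and bias, initially untrained
    for _ in range(epochs):
        for x, y in examples:
            pred = w * x + b     # the network's current guess
            error = pred - y     # how wrong the guess is
            w -= lr * error * x  # adjust the connection to reduce error
            b -= lr * error
    return w, b

# Training examples: correct outputs for a handful of inputs.
examples = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0), (3.0, 7.0)]
w, b = train(examples)
print(round(w, 2), round(b, 2))  # close to 2.0 and 1.0
print(round(w * 10 + b, 1))      # generalizes to the unseen input x=10
```

The same loop, scaled up to millions of connections and layered nonlinear units, is the essence of the deep learning systems discussed above; the "program" only says how to learn, never how to do the task itself.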
Info
Channel: International Economic Forum of the Americas
Views: 6,710
Keywords: AI, Artificial Intelligence, Geoffrey Hinton, University of Toronto, Google, Université de Montréal, Montreal Institute for Learning Algorithms, MILA, Indigo
Id: e8FBi4icNgs
Length: 25min 58sec (1558 seconds)
Published: Thu Sep 05 2019