Geoff: This has been a week where concerns over the rapidly expanding use of artificial intelligence resonated loudly in Washington and around the world. Vice President Kamala Harris met with top executives from companies leading in AI development, including Microsoft and Google. The vice president discussed some of the growing risks and told the companies they had a moral obligation to develop AI safely. That meeting came just days after one of the leading voices in the field of AI announced he was quitting Google over his worries about the future of AI and what it could eventually lead to, unchecked. We hear about some of those concerns now from Dr. Geoffrey Hinton, who joins us from London. Thank you for joining us. What are you now free to express about artificial intelligence that you could not express freely when you were employed by Google?

>> It was not that I could not express it freely when employed by Google, but inevitably, if you work for a company, you tend to self-censor. You think about the impact it will have on the company. I want to be able to talk about what I now perceive to be the risks of superintelligent AI without having to think about the impact on Google.

Geoff: What are those risks?

>> There are quite a few different risks. There is the risk of producing a lot of fake news, so you do not know what is true anymore. There is the risk of encouraging polarization by getting people to click on things. There is the risk of putting people out of work. It ought to be that when we make things more productive and greatly increase productivity, it helps everyone, but there is the worry it might just help the rich. And then there is the risk I want to talk about. Many other people talk about the other risks, including bias and discrimination. I want to talk about a different risk: the risk of superintelligent AI taking over control from people.

Geoff: How do the two compare -- human intelligence and machine intelligence?

>> That is a very good question, and I have quite a long answer. Biological intelligence uses very little power. We only use about 30 watts. We have huge numbers of connections, about 100 trillion between neurons, and learning changes the strength of those connections. The digital intelligence we have created uses a lot of power when you are training it. It has far fewer connections, only about 1 trillion, but it can learn much, much more than any one person, which suggests that it has a better learning algorithm than the brain.

Geoff: What would smarter-than-human AI systems do? What is the concern that you have?

>> The question is, what will motivate them? They could easily manipulate us if they wanted to. Imagine yourself and a two-year-old child. You can ask, "Do you want the peas or the cauliflower?" and the child does not realize it does not have to have either. We know, for example, that you can invade a building in Washington without ever going there yourself, just by manipulating people. Imagine something that is much better at manipulating people than our current politicians.

Geoff: Why would AI want to do that? Would that not require some form of sentience?

>> Let's not get confused about that issue. I do not want to confuse the issue. Let me give you one example of why it might want to do that. Suppose you are getting an AI to do something. You give it a goal, and you give it the ability to create subgoals -- say, to get to the airport, it creates a subgoal of getting a taxi. One thing you will notice quickly is that there is one subgoal which, if you can achieve it, makes it easier to achieve all the other goals: get more control, get more power.
The more power you have, the easier it is to get things done. So the worry is that we give it a perfectly reasonable goal, and it decides that in order to achieve it, it will give itself more power. And because it is much smarter than us and trained on everything people ever did -- it has read every novel, it knows a lot about how to manipulate people -- there is the worry it might start manipulating us into giving it more power, and we might not have a clue what is going on.

Geoff: When you were at the forefront of this technology decades ago, what did you think it might do? What were the applications you had in mind?

>> There are a huge number of good applications, and it would be a mistake to stop developing it. It will be useful in medicine. Would you rather see a family doctor who has seen a few thousand patients or a doctor who has seen a few million patients, including many with the same rare disease you have? You could make better nanotechnology for solar panels. You could predict floods and earthquakes. You can do tremendous good with this.

Geoff: Is the problem then the technology, or is the problem the people behind it?

>> It is a combination of the two. Obviously, many of the organizations developing this are defense departments. Defense departments do not necessarily want to build in "Be nice to people" as the first rule. Some defense departments would like to build in "Kill people of a particular kind." We cannot expect them to have good intentions toward all people.

Geoff: There is the question about what to do about it. The technology is advancing faster than societies can keep pace with. The capabilities of this technology leap forward every few months, but writing and passing legislation takes years.

>> I have gone public to try to encourage many more creative scientists to get into this area. I think it is an area in which we can actually have international collaboration. The machines taking over is a threat for everybody. It is a threat for the Chinese, the Americans and the Europeans, just like a global nuclear war. And with global nuclear war, people did collaborate to reduce the chances of it.

Geoff: There are other experts in the field of AI who say the concerns you are raising about this dystopian future distract from the very real and immediate risks posed by artificial intelligence, some of which you mentioned -- disinformation, fraud.

>> I do not want to distract from those. They are very important concerns, and we should be working on those, too. I just want to add this other, existential threat of it taking over. One reason I want to do that is because that is an area in which I think we can get international collaboration.

Geoff: Is there any turning back? You say there will be a time when AI is more intelligent than us. Is there any coming back from that?

>> I do not know. We are entering a time of great uncertainty. We are dealing with things we have never dealt with before. It is as if aliens have landed, but we have not taken it in.

Geoff: How should we think differently about artificial intelligence?

>> We should realize that we are probably going to get things more intelligent than us quite soon, and they will be wonderful. They will be able to do all sorts of things very easily that we find difficult. There is huge positive potential. But of course, there are also huge negative possibilities. I think we should put more resources into both developing AI to make it more powerful and figuring out how to keep it under control and minimize bad side effects.
Geoff: Thank you so much for your time and for sharing your insights with us.

>> Thank you for inviting me.