Six months to save humanity from AI? - Part 2 Max Tegmark

Video Statistics and Information

Captions
Well, my guest today is Max Tegmark. He is a Swedish-American physicist, cosmologist, and machine learning researcher. He's a professor at the Massachusetts Institute of Technology and the scientific director of the Foundational Questions Institute. Is six months enough? You said "at least six months" in the letter. Is that just a time frame that you chose to have something, or why six months?

You have to start somewhere, and this would be the first time ever that we slowed down anything in this space and really gave people some breathing room, and we can go from there. I think we also need to not lose sight of the much bigger picture. This is a lot like the movie Don't Look Up. Have you seen it?

Sure.

There's an asteroid heading towards Earth, and people say there is no asteroid, don't look up, don't worry. Well, there's a much bigger asteroid-like threat that has kept scientists worried for about as long as they've worried about climate change: that if we build machines that are way smarter than humans, some humans could use that to kill our democracy and take over our civilization, and shortly thereafter, as the machines get smarter, it's quite likely that those humans would lose control of the machines entirely. We are basically building alien minds that are much smarter than us, which we're going to have to share the planet with, and it can be really inconvenient to have to share the planet with much smarter alien minds that don't care about us. We should make sure that we put in place the safety measures so that we can keep control over these machines, so that they help humanity flourish, rather than risk losing control over them. Despite very diligent technical work on so-called AI alignment, we have failed, I confess as an AI researcher myself, to solve this so far. We need a little bit more time. Otherwise there might simply not be any humans on the planet at all within a few decades.

Do you think this is just something that's hard for people to wrap their minds around because they haven't seen it to this point? Will they start to understand it as they see some of these applications? Do you think the urgency behind more policy will grow then?

Yes, but then it'll probably be too late, unfortunately, once people see AI that's way smarter than us. Right now is our chance to really get this right. And I don't want to sound too gloomy. We're not talking here about, say, nuclear war, where there's only downside and no upside. There's a huge upside if we can get this right. Everything I love about civilization is the product of human intelligence. So if we can amplify our intelligence with machine intelligence and use it to solve climate change, cure all the diseases, eliminate poverty, and help humanity flourish, not just for the next election cycle but for billions of years, that's awesome. Let's not squander all of those opportunities by being just a little too eager to release things a little bit too quickly. Let's do it deliberately, so we can do it safely and get all these benefits.

But it's fair to say you're generally pessimistic, given that companies are racing against one another, given that there's been no regulation at this point, given you said that even if there is more public awareness and regulation it's not going to be in time, given that nations are racing against each other, and obviously there are military applications for this. Fair to say you're pretty pessimistic when you look at this?

The pessimism isn't because we don't have ways of solving this. The pessimism is because basically everybody who's driving the race towards this cliff is in denial about there even being a cliff, or there even being an asteroid. That's why what you're doing is so valuable: you're helping people out there understand that this is not a race that anyone is ultimately going to win. If we do an out-of-control race, we're all going to lose. It doesn't matter whether the AI that we lose control over, that drives humanity extinct, is American or German or Chinese. It really doesn't matter. What matters is that nobody does it, and that we use all this amazing technology to help all humans on the planet get dramatically better off. This is not an arms race; it's a suicide race. And I think if we can educate people about that, then everybody is going to have the right incentives to stop racing and figure out how to make this all safe.

I want to move on to another aspect, but before that I have a technical question. You've compared this to nuclear proliferation and to bioweapons, for example. The kind of technology we're talking about here, can it proliferate, or is it so complicated and expensive that it's going to be state actors who are really in control of it?

It can very easily proliferate. Nuclear weapons don't proliferate so much because it's really hard to get hold of the plutonium or the uranium, whereas this is more like bioweapons: it's small, cheap stuff once you have the code. It's very expensive right now to develop the so-called large language models; companies spend hundreds of millions of dollars on them. But once someone has one, they can copy it if they get access to it. Software never respects borders, any more than COVID did. So what you have to do instead is make sure that you don't develop the most risky things in the first place. There's a lot of discussion happening right now in the policy space about regulating the compute side of it, because you can't hide six gigawatts of compute server power, which you can see from space, when you're building one of these things for a quarter million dollars or whatever. That's a good first place to start, and then we can gradually, of course, also use a lot of tools, including AI tools, to identify dangerous things and prevent proliferation. So the problem is not technical. The problem is really political and sociological, stemming from the fact that most people don't understand how big the downside is if nobody cooperates.

Okay, well, talking about different countries perhaps choosing different paths for AI, let's take a look at China briefly. China's Cyberspace Administration released draft rules this week designed to manage how companies develop generative artificial intelligence products like ChatGPT. These rules say the content generated by AI needs to reflect the core values of socialism and should not subvert state power.

What we're seeing here is that Europe, China, and the United States are taking different approaches to regulation. Europe has been out in the vanguard with the EU AI Act. Originally, some lobbyists had managed to put in a loophole saying that ChatGPT and GPT-4 would be entirely exempt. We've worked very hard to close that loophole, because it would be ridiculous. So I think that's going to help. The US has been very resistant towards regulating anything, I think because of very successful tech lobbying. The Chinese government actually has been quite interested in regulating this, because they see, I think very clearly, that this could cause them, the Chinese government, to lose control, and they don't want to lose control. So they're eliminating the freedom of companies to just experiment wildly with poorly understood stuff.

I was just going to ask you if maybe they're taking the approach that you're pushing for more than the West is, because it is such a threat to their system. But at the same time, they're in a military competition with the rest of the world. Won't that push them towards certain AI applications that could be dangerous and proliferate?

Yeah, there are all the risks there. But you know, the Soviet Union and the US were not exactly best buddies either, getting together for drinks and greatly trusting each other, but they were still able to avoid a nuclear war and come up with all sorts of useful ways to reduce the risk, because in that case they had all seen videos of nuclear explosions. They all knew that nobody wins a nuclear war. Once we get to the point with AI where more policymakers realize that this would be much worse than nuclear war, if we completely lose control of ultra-intelligent machines, I'm optimistic that the Chinese and US militaries will also realize that they can work together to prevent exactly what they don't want.

It'll be too late, right? Didn't you say it would be too late at that point anyway? Once there is the mass recognition event, it might be too late anyway.

No, I said it's going to be too late once those machines exist. So I'm hoping very much that through journalism like yours, we can help them realize the imminence of these machines before they actually get built. What we have now is still short of what's called artificial general intelligence, this holy grail of making machines that can outsmart us on basically all job tasks, but we're getting there at express speed. It turned out that it was much easier to build this than people thought ten years ago. Ten years ago, most people thought fifty years, maybe thirty years, maybe way longer. Now you see a lot of the top experts in the field giving much, much shorter timelines. We have arguably already passed the mastery of human language, for example. So I'm hoping this will make a lot of policymakers realize that this is not science fiction. Intelligence is not something mysterious that can only exist in human brains. It's something we can also build, and when we can build it, we can also very easily build things which are vastly beyond us, as far beyond us as we are beyond insects. And obviously, if we're building this, we should build AI for humanity, by humanity, not for the purpose of the machines having a great time later on. So making sure we really give ourselves the time to make sure we control these machines, or at least make sure that they have our values and do what we want, that is more important than any other choice that humanity is making right now.

It's just such a big topic, and we have a great expert with us, but we're going to have to leave it there. A big thank you to Max Tegmark. He's an AI expert and professor at MIT. And of course, thanks so much to our viewers for watching. If you've enjoyed this, check out one of our other DW business specials, and we'll see you next time.
Info
Channel: WarningNews
Views: 10,702
Id: OerXJ5FVmVI
Length: 10min 46sec (646 seconds)
Published: Fri May 19 2023