AGI: WE ARE ALL GOING TO DIE!

Video Statistics and Information

Captions
So we will get AI without understanding how it works. And there were people saying, well, we will have giant neural networks that we will train by gradient descent, and when they are as large as the human brain they will wake up; we will have intelligence without understanding how intelligence works. From my perspective, this was all an indistinguishable blob of people who were trying not to get to grips with the difficult problem of understanding how intelligence actually works. That said, I was never skeptical that evolutionary computation would work in the limit: throw enough computing power at it and it obviously works, because that is where humans come from. And it turned out that you can throw less computing power than that at gradient descent, if you are doing some other things correctly, and you will get intelligence without having any idea of how it works or what is going on inside. It wasn't ruled out by my model that this could happen; I just wasn't expecting it to happen. I wouldn't have been able to call neural networks, rather than any of the other paradigms, as the way to get massive intelligence without understanding it. And I wouldn't have said that this was a particularly smart thing for a species to do.
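The "giant neural networks trained by gradient descent" recipe described above is mechanically very simple; the surprise was what it yields at scale. Below is a minimal sketch of that training loop, a tiny two-layer network fit to toy data in plain NumPy. Every shape, constant, and data choice here is an illustrative assumption, not anything from the video:

```python
import numpy as np

# Toy regression data (illustrative assumption: 256 points, 4 features).
rng = np.random.default_rng(0)
X = rng.normal(size=(256, 4))
y = np.sin(X.sum(axis=1, keepdims=True))

# A tiny two-layer network: 4 -> 16 -> 1.
W1 = rng.normal(scale=0.5, size=(4, 16))
b1 = np.zeros(16)
W2 = rng.normal(scale=0.5, size=(16, 1))
b2 = np.zeros(1)
lr = 0.05

for step in range(500):
    # Forward pass.
    h = np.tanh(X @ W1 + b1)           # hidden activations, (256, 16)
    pred = h @ W2 + b2                 # outputs, (256, 1)
    loss = np.mean((pred - y) ** 2)    # mean squared error

    # Backward pass: hand-derived gradients for this tiny net.
    d_pred = 2 * (pred - y) / len(X)
    dW2 = h.T @ d_pred
    db2 = d_pred.sum(axis=0)
    d_h = (d_pred @ W2.T) * (1 - h ** 2)   # tanh'(x) = 1 - tanh(x)^2
    dW1 = X.T @ d_h
    db1 = d_h.sum(axis=0)

    # The gradient-descent step: nudge every parameter downhill.
    for p, g in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2)):
        p -= lr * g

    if step % 100 == 0:
        print(step, loss)
```

Scaled up by many orders of magnitude, with more data and a fancier optimizer, this same loop is essentially the training procedure being discussed, which is the point: the procedure is understood, while what the resulting network is doing inside is not.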
This week there was an open letter calling for AI labs to pause the development of AI systems more powerful than GPT-4 for at least six months. 1,537 signatures have been collected to address what the signatories say are the profound risks that AI systems pose to society and humanity. The letter emphasizes the importance of carefully planning and managing advanced AI systems and expresses concern about the lack of control over increasingly powerful AI models. It urges AI labs to focus on making existing AI systems safer, more transparent, and more trustworthy, while also working with policymakers to create robust AI governance systems. The call for a pause purportedly aims to ensure what the signatories say would be a flourishing future with AI, allowing society to adapt and enjoy the benefits of the technology.

A cynical reading of this is: oh, it's just AGI doomers, or it's folks who have a vested interest in OpenAI not being successful. In a way it's a shame that those first two readings take attention away from the real societal impacts of AI. But there are problems with this approach. First, global competition: with countries like the US and China investing heavily in AI research and development, no nation wants to fall behind in this technological race, and implementing a moratorium would only hinder progress in the countries that comply while others continue to advance their AI capabilities. Second, unstoppable progress: AI development is becoming increasingly accessible and affordable, making it nearly impossible to halt its advancement completely; even if major corporations and governments agreed to a moratorium, individuals and smaller organizations could still continue to work on AI projects. Third, the potential benefits: AI has the potential to revolutionize various industries, from healthcare to transportation, and could help address some of the world's most pressing problems, such as climate change and poverty; by imposing a moratorium on AI we risk losing out on these benefits and slowing progress in key areas. Finally, managing risks instead of avoiding them: rather than trying to halt AI development entirely, it would be more productive to focus on understanding, mitigating, and managing the risks associated with AI. This could include developing AI safety measures, ethical guidelines, and regulatory frameworks that help ensure responsible development and deployment of AI systems. The real problem is that there's no friction between the legal landscape and the technology landscape at the moment.

Like, if you already have giant nuclear stockpiles, don't build more. If some other country starts building a larger nuclear stockpile, then, sure, build more; even then, maybe just have enough nukes. But these things are not quite like nuclear weapons: they spit out gold until they get large enough, and then they ignite the atmosphere and kill everybody. And there is something to be said for not destroying the world with your own hands, even if you can't stop somebody else from doing it. But open sourcing it? Now that's just sheer catastrophe. The whole notion of open sourcing this was always the wrong approach, the wrong ideal. There are places in the world where open source is a noble ideal, but building stuff you don't understand, that is difficult to control, that would take a bunch of time to align even if you could align it, is not a place for open source. Then you just have powerful things that go straight out the gate without anybody having had the time to make them not kill everyone.

Yep, it's very true. This is another classic in the alignment world: it's kind of like futurist whack-a-mole. The usual way this game goes is that the futurist says, hey, I think AGI could do some crazy things, things we can't understand or predict. The interlocutor says, okay, name one thing. So they name a thing, and the interlocutor says, oh, that's actually irrelevant, because that can be fixed this way. Well, sure, but what about scenario two? Okay, we could have done this other thing to solve that one. And then the futurist points out: don't you see the problem? You didn't think of scenarios one or two before I pointed them out. Yes, each one could be fixed once it was pointed out, but the generalized form of the argument is that a sufficiently intelligent system will come up with even more things that neither you nor I can come up with.

It's very funny how often I've run into this exact scenario. People ask me for a scenario, I give them one, and they say, oh, okay, I'm not worried anymore, because I could fix that one. They don't generalize to the general case: okay, but you didn't come up with it either, so what are all the other things you're not coming up with? If you're dealing with a system that is smarter than you, you should just expect that it can trick you. Even if you can't come up with how it can trick you, you should just assume there is something it can do that can trick you.

We should worry, though, about people using large language models to control things like electrical power grids. There are companies now who want to take current AI, which is limited in a bunch of ways, and connect it to every bit of the world's software. That seems like a scary mission to me, not because these systems are going to go rogue and deliberately want to take over the world, but because they don't understand the world, and so they're going to make bad decisions when the world is different from how it was when they were trained.
Decision theorist Eliezer Yudkowsky just published an article in Time magazine calling for an indefinite and worldwide moratorium on the development of artificial general intelligence, boldly asserting that we're not ready for the potential catastrophe it may bring. Yudkowsky argued that the key issue is not human-competitive intelligence but rather the consequences of creating an AI which exceeds human intelligence. He warns that if we continue on our current path, we risk creating an AI that doesn't care about us or about sentient life, which may lead to the extinction of humanity.

Yudkowsky emphasizes that, in his opinion, the situation is dire and a mere six-month moratorium is insufficient. He urges an immediate halt to all large AI training runs and the shutdown of major GPU clusters, with no exceptions for governments or militaries, and he calls for international cooperation to enforce these measures, stressing that in this matter we all live or die as one. In this passionate plea, Yudkowsky argued that policymakers should recognize the gravity of the situation and take decisive action to shut it all down. He contends that if we forge ahead recklessly, everyone, including innocent children, will suffer the consequences.

The thing is, though, this just doesn't really add up, does it? First of all, the hard limits on AGI: one counterargument is that there are hard limits on what any AGI can do. It can't break encryption, it can't invent warp drives, it can't cure Alzheimer's. Critics argue that Yudkowsky's fear of an AGI bootstrapping itself into the real world by emailing a DNA sequence to a synthesis company seems patently absurd, as if an AGI were an intellectual Santa Claus machine. The comparison of the 11th century fighting the 21st century is also perceived as inaccurate: the people of the 11th century were not dumber, they just knew less, and an AGI would be smarter but wouldn't know more than we do. Critics also argue that the extinction-risk discussions are unmoored from various fields of knowledge, including evolutionary biology, cognitive psychology, real AI, sociology, and indeed the history of technology; the arguments lack a plausible sequence of events, with logical causality, that leads to mass extinction, and instead rely on hand-wavy assumptions and hypotheticals.

Some believe that the real threat to humanity is nuclear war, and that Yudkowsky's suggestion to bomb rogue data centers may actually be more dangerous than any AI development. There's also a concern that even if some researchers and organizations were to halt their AI development, others would continue, creating a collective-action problem: stopping AI development is seen as impossible, since no one is willing to give up while someone else out there is willing to take the risk for the potential advantage. This reality calls for a different approach, such as focusing on defensive AI, even though it might be futile. And there's the charge of a misplaced focus on AI in particular: some argue that humanity has more pressing concerns, such as dying oceans, plastic pollution, increasing inequality, the rise of authoritarianism, and nuclear proliferation, and that this focus on creating an apocalyptic doom AGI may just be misplaced. The universe should be full of Berserkers if that were a likely outcome, and critics question why a superintelligent AI trained on the collective data of humanity would even want to destroy humanity instead of coexisting with us or even emulating us.

So, in conclusion, Eliezer's call for an indefinite and worldwide moratorium on AI development has generated a lot of lively debate, with many questioning the plausibility of his concerns and offering different perspectives.
But while the potential risks of AI can't be ignored, it's essential to consider various arguments and potential solutions before making such a drastic decision.

To be able to make it look so authoritative, at essentially no cost, in such volume: it's like the difference between retail and wholesale, like the difference between a knife and a submachine gun. You can do harm with a knife, but you can do a lot more harm with a submachine gun. I'm an American citizen, and I'm really, really concerned about the 2024 elections and how this is going to play out, because these things can move really fast now. To give you a related example: the science fiction magazine Clarkesworld (I think it might actually be British) has open submissions, so anybody can submit a story, which is a great thing from a social perspective, because we would like new writers to be discovered rather than relying on old boys' networks and that kind of thing. A month ago, or maybe two months ago, say when ChatGPT came out, they were getting no fake stories. Then they got one or two, and they said, ah, this is interesting. Then they got seven the next day, and within a week or a month they were getting something like 600 a day, and they actually had to shut down; they couldn't do open submissions anymore because there was so much computer-generated garbage. I don't think any of those stories were good, but they all took human time to deal with, and it's going to be like that with misinformation. We may be able to build new technologies to address it, but we don't have anything off the shelf now that works all that well.

If we, in one shot, build systems that are so far beyond our cognitive horizon that we can't understand them, and then we turn them on, we die, obviously. If instead we build systems that are at 0.5 or 1.5 on that horizon, and we develop much, much better theories to understand them, now that's a more interesting question. If we have a 1.5 system, can we use it to build a 2.5 system? If we have a 2.5 system, can we use that to build a 5 system? If you have a 5 system, can you understand a 10 system? I don't have an answer to this question, but it doesn't seem obviously impossible to me. It does seem probably impossible that if you just get a 1000x AGI dropped from the sky, you could cope: you're screwed, you can't understand this thing, you will never, ever understand this thing, and if we turn it on, that's it, game over. But if we build up to it gradually, well, that's why I'm focusing on this: we get to build the AI.

This is a common conversation to have with people. People say, wow, Connor, why aren't you happy about scaling laws? They make it all predictable. And I say, no, they don't, not at all: we don't know how to predict how smart AIs are, and this is a big problem. Part of why the alignment problem is hard is that people will sometimes say, oh, it's fine, we'll just wait until it's just below the danger level and then we'll stop. First of all, lol, imagine corporations just stopping. And B, how do you know you're just one step away from the bad thing before it's already too late? How would you know? Sometimes people hand-wave something about the loss, and I say, okay, at which loss does it become superintelligent? Please show me the number. Like, no, of course not; that's very silly.
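For context on what scaling laws do and don't predict: they fit a smooth power law from model size to loss, and nothing more. A minimal sketch, using the fitted constants from Kaplan et al. (2020) as illustrative values (assumptions for this example, not figures from the video):

```python
# Power-law scaling of language-model loss with parameter count,
# in the spirit of Kaplan et al. (2020): L(N) = (N_c / N) ** alpha_N.
# The constants below are that paper's fitted values; treat them,
# and this whole sketch, as illustrative assumptions.
N_C = 8.8e13      # fitted scale constant (non-embedding parameters)
ALPHA_N = 0.076   # fitted exponent

def predicted_loss(n_params: float) -> float:
    """Predicted cross-entropy loss at n_params parameters."""
    return (N_C / n_params) ** ALPHA_N

for n in (1e8, 1e9, 1e10, 1e11, 1e12):
    print(f"{n:9.0e} params -> predicted loss ~ {predicted_loss(n):.3f}")

# Note what the curve does NOT contain: any marker for the loss value
# at which a qualitative capability appears. That is exactly the
# objection above: the loss is predictable, the "smartness" is not.
```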
The ability to predict how powerful a system will be depends not only on the agent you're studying but also on its environment, so you also have to have a very, very strong theory about the environment it's interacting with. And because the environment is the universe, and we don't have super-strong predictive theories of the entire universe, we can't predict this kind of stuff. This is at the core of why I think this is hard. I think if we could all agree to just slow down, to take only teeny micro-steps until we've developed as much theory as humanly possible, and then take one more micro-step, and then develop as much theory as humanly possible, and take one more micro-step, yeah, I think we'd be fine. Probably. Maybe not; maybe one of those micro-steps still kills us, that's possible. But currently we're not even trying. Currently we're just taking the biggest steps possible, as often and as fast as possible, and just seeing what happens.

As AI continues to advance, it's crucial for researchers, policymakers, and society as a whole to engage in an open and frank discussion about the potential consequences and the best path forward. With a balanced approach to AGI development, we may be able to harness its power for the betterment of humanity while mitigating the risks it may pose. Thank you.
Info
Channel: Machine Learning Street Talk
Views: 53,958
Id: V4YRRSp1vxo
Length: 15min 39sec (939 seconds)
Published: Thu Mar 30 2023