OpenAI Researcher BREAKS SILENCE: "AGI Is NOT SAFE"

Video Statistics and Information

Captions
Take a look at what he said, because if the safety lead of probably the most advanced AI company in the world right now has left over safety concerns, I think it's worth paying attention to.

He said: "It's been such a wild journey over the last three years. My team launched the first-ever LLM trained with reinforcement learning from human feedback, InstructGPT, published the first scalable oversight on LLMs, and pioneered automated interpretability and weak-to-strong generalization. More exciting stuff is coming out soon." So he does say there is more research coming out of this, and trust me, I've looked around, and some of the developments I've seen are pretty shocking even to me.

He also said that stepping away from this job has been one of the hardest things he has ever done, "because we urgently need to figure out how to steer and control AI systems much smarter than us." The wording here is very precise; he apparently took around 24 hours to write this thread, and the key word, "urgently", shouldn't be overlooked. Urgently means we need to do this now, and my guess, looking at the overall picture, is that OpenAI just isn't focusing on that. This is probably the most reputable source of information on this kind of technology, because there is no one you could ask about aligning superintelligent systems who knows more than the people working at OpenAI; they are genuinely ahead of the competition, and the people working on superalignment are presumably in those rooms regularly, watching how these advanced systems are evolving. So if he says we urgently need to figure this out, that is a real cause for concern.

A lot of people have mocked the AI safety crowd and pushed e/acc, the "effective accelerationism" thing, but there was a tweet I retweeted that deserves more attention. The point was that safety culture has somehow become synonymous with "woke" culture, and they are simply not the same. Safety is something we should all be focusing on, because when you actually look at the ways AI could be dangerous, it could impact every one of us, and not just through superintelligent AI: the bio risks, the social risks, the wealth disparity, a million different things I've looked at. It really is important, but I guess we might have to find out the hard way.

He goes on: "I joined because I thought OpenAI would be the best place in the world to do this research. However, I have been disagreeing with OpenAI leadership about the company's core priorities for quite some time, until we finally reached a breaking point." So it's clear that over his time at OpenAI there were concerns he raised that, it seems, weren't taken seriously, and that is rather interesting.
The phrase "for quite some time" shows this wasn't a one-time thing. He didn't just wake up one day and decide to quit; he has been disagreeing with leadership for a long while, and it has now reached a breaking point: he has raised his concerns, tried to fix things, and it doesn't seem possible, so the only option left was to step away entirely. That's understandable, but like I said, it's still concerning, because when top safety researchers hit a breaking point and have to leave, and the team completely disbands, it raises the question of what on earth OpenAI is actually prioritizing, for some of the most esteemed and knowledgeable people on AI safety to be walking away.

He says: "I believe much more of our bandwidth should be spent getting ready for the next generations of models, on security, monitoring, preparedness, safety, adversarial robustness, (super)alignment, confidentiality, societal impact, and related topics." I completely agree, because the ramifications of AI are hard to quantify: there are knock-on effects of knock-on effects, which humans are very bad at predicting, and every technology has unintended consequences. Social media, for example, contributes to body-image issues, social isolation, depression, and anxiety; there is a huge range of outcomes you simply can't predict in advance. So it's important to treat this as more of a priority than they apparently are.

And I get it. When you're trying to run a company like OpenAI and you're in this terminal race condition, competing against another giant like Google, these companies push themselves to ship systems faster and faster. OpenAI has committed to iterative deployment, but the problem is that this is a winner-takes-all market, and because OpenAI understands that, I think they're trying to speedrun their way to AGI, maybe because they believe, and I don't think this is right, that once they get to AGI they can use it to solve everything.

He also says these problems are quite hard to get right, and "I am concerned we aren't on a trajectory to get there." He's basically saying that the current strategy inside OpenAI isn't working, and that they don't currently have a path that gets them there safely. That is pretty concerning coming from someone resigning.

It reminds me of when Daniel, who was also at OpenAI, said some concerning things. That was alarming too, because he gave up a sizable stake, his stock compensation, in order to speak about this. Something that was only revealed today is that if you leave OpenAI and want to speak negatively about the company, you can only do so if you give up your stock compensation.
So the fact that a previous researcher actually did that just goes to show that some of this really does need to be talked about. He's saying he is concerned they aren't on a trajectory to get there safely, and as someone on the outside just watching researchers leave, that is concerning to me too.

He continues: "Over the past few months my team has been sailing against the wind," a metaphor for trying to move forward while forces push back against you, and "sometimes we were struggling for compute and it was getting harder and harder to get this crucial research done." One thing most people don't realize is that OpenAI has a compute shortage; they've talked about it quite a lot, which is partly why they partnered with Microsoft on the $10 billion deal. In the initial superalignment announcement, the team was supposed to get 20% of OpenAI's compute, leaving 80% for everything else, but since he says they were struggling for compute, I'm guessing the 20% that had been allocated was never actually delivered. And looking back, the superalignment team only published maybe one or two blog posts since its inception. It's been months, and I'm not claiming that kind of research is easy, it's genuinely hard, but my guess is they simply didn't have the compute to study how these systems work and do the research that was needed. That is surprising, to say the least, because there was an agreement on how things would be done and it seems it just didn't happen. The proof is in the pudding: they were struggling for compute, and it was getting harder and harder to get crucial research done.

He goes on: "Building smarter-than-human machines is an inherently dangerous endeavor. OpenAI is shouldering an enormous responsibility on behalf of all of humanity." This is something we do know, and I've explained it in previous videos. If you build something smarter than you, think of the gap between chimps and us: we are only marginally smarter than chimps, and yet we are sending rockets to Mars. If we build something truly smarter than us, will we even understand what it is doing? Will we be able to stop this creature, this tool, this entity, whatever you want to call it, from doing what it wants to do? Will we even see it coming? We could end up facing very unintended consequences. The chimp comparison probably isn't the best example, but it's all I can think of right now.
The point I'm trying to make is that creating something smarter than you has historically never worked out well for the less intelligent party. When a smarter species comes along, whatever it wants becomes the status quo, and the less intelligent beings simply have to live with whatever status quo the more intelligent ones set. That is a concerning thing when we are rapidly trying to develop smarter tools, or smarter beings, that could present enormous risks and dangers.

He then says: "But over the past years, safety culture and processes have taken a backseat to shiny products." Safety has basically taken a backseat to everything being shipped at OpenAI: newer models, new features, plugins, whatever it is people want. And I'm guessing that's because this is no longer just a research organization; it's now a business and a private company, which completely changes the ecosystem it grows in and the decisions it makes for long-term growth.

He adds: "We are long overdue in getting incredibly serious about the implications of AGI. We must prioritize preparing for them as best we can. Only then can we ensure AGI benefits all of humanity." "Long overdue" is not something I wanted to see. I'm not saying they already have AGI, but it means the timeline we're on has us playing catch-up; we're past the point where we should be, which is why some people argue we should pause AGI development so the safety research can catch up.

What's crazy is that I made a video yesterday where I genuinely assumed they had probably solved this; if the researchers were leaving, I presumed it was because their mission was complete and they had moved on to other projects. With this information, that is clearly not the case: these researchers weren't able to get their work done, and because of that they decided it was time to go elsewhere. He is saying we must prioritize preparing for AGI as best we can, because the implications are stark. I don't think people are really going to take this in; this video might only get a few views. But when the top leaders of OpenAI's superalignment division leave, saying in effect, "these guys aren't focusing on safety, we tried, but they aren't prioritizing it and we don't want to be a part of this," that is an alarm bell ringing, and something clearly needs to be done.
I'm now wondering whether there will be some kind of government intervention. Yes, right now OpenAI is just a private company and can do what it likes, but once they start building these super-powerful systems, does the government step in and treat this as a national security risk? Because if you aren't focused on safety and these systems go rogue, what on earth happens then? I'm not saying they'll hack the power grid like in some sci-fi world, but the societal implications of releasing such systems make me wonder what is truly going to happen.

He also says that "OpenAI must become a safety-first AGI company" if they are to succeed, and that makes complete sense, because if there are any unintended consequences, we are unfortunately going to see them come out of OpenAI first, and they are going to impact some people. I honestly hope I'm not one of them, but I do think it will happen, which is why all of these people are urging a focus on safety. And if something bad did happen with AGI-related development, would I even be surprised? Not at all.

Another thing I didn't realize, although I did speculate about it yesterday, is that the team actually did completely dissolve. The headline reads: "OpenAI dissolves team focused on long-term AI risks, less than one year after announcing it. OpenAI has disbanded its team focused on the long-term risks of artificial intelligence, a person familiar with the situation confirmed to CNBC. The news comes days after both team leaders, OpenAI co-founder Ilya Sutskever and Jan Leike, announced their departures from the Microsoft-backed startup."

So the team disbanded, guys. Like I showed in the previous video, looking at the members listed on earlier research papers and how they contributed to that research, their absence is a pretty surprising thing, because if the team has disbanded, that means there is currently no one working on superalignment. I'm guessing there will be statements coming soon, because this is a really big headline. It might seem understated right now, but trust me, this is important news, because I think government agencies and others are going to start looking at OpenAI and asking what on earth is going on: a safety team that completely disbanded, Ilya Sutskever leaving, Jan Leike leaving.

Sam Altman did respond, stating: "I'm super appreciative of Jan Leike's contributions to OpenAI's alignment research and safety culture, and very sad to see him leave. He's right we have a lot more to do; we are committed to doing it. I'll have a longer post in the next couple of days." So I'm guessing some kind of new team will be formed. I'm not sure what it will look like; maybe it will focus on completely different aspects of safety.
But I'm guessing Sam Altman has realized that people like Ilya Sutskever and Jan Leike can't just leave without him saying something, because it really does look bad for the industry as a whole. It looks terrible when someone who was such an integral part of your company leaves, especially given that safety is something they always said they would prioritize; someone leaving, and the team being disbanded, might prompt further criticism from across the industry.

Now, there was a tweet I can't verify, and it has since been deleted. It was from roon, an OpenAI employee, and it said: "My feeling (I speak for nobody but myself) is superalignment got plenty of attention, but then Ilya blew the whole thing up." Again, the tweet has been deleted, this is pure speculation, and it is just his personal opinion, but it is pretty wild, because right now OpenAI seems to be imploding again, and that is not something you want from arguably one of the most advanced AI companies in the world. People are saying the team didn't get any of its compute; it's just crazy that all of this is going on. As people say, what is OpenAI without the drama?

Elon Musk, of course, has an opinion. Quoting Leike's line that "OpenAI must become a safety-first AGI company," he wrote that by implication, safety is therefore not a top priority at OpenAI. In other words, if Jan Leike is saying they must become a safety-first AGI company, then it currently isn't one, which is surprising. I think Sam Altman is going to come up with something; he's a very intelligent individual, and whatever has happened over the last couple of days, there are going to be significant developments, probably including new hires, maybe some kind of AGI safety board, I have no idea. I know OpenAI does work on safety, but the fact that the superalignment team is gone is not good at all.

So let me know what you think. This is a really significant development, because so many people have left OpenAI, and I do wonder whether OpenAI will be the last AI company standing. The reason I say that is that business and AI are two hard things to merge: if you focus on safety, the problem is that the AI world moves so quickly that you can get left behind; if you get left behind, your company goes out of business, and if your company goes out of business, you literally don't have time to work on safety because you're insolvent, just like Stability AI (and of course they weren't working on dangerous systems). It's a difficult balance, but I do think they can probably get it right. With entrepreneurs, the instinct is to move fast, break things, break into new industries, and release new products, and that inherently doesn't align with how safety testing works.
Safety testing is arguably still why we don't have GPT-5; it takes an extensive amount of time, because if something isn't safe on release, the entire industry reacts with "whoa, how could this have been released?" and then everything slows down. So it's not easy, but like I've said before, there is a giant compute struggle at OpenAI right now. They are probably working on many different projects at once: GPT-4o, which most people didn't even predict and which came out of nowhere, Sora, GPT-5, arguably some agentic systems, and other things as well. With everything they're trying to push and all the innovation they're trying to do, I'm guessing safety just took a back seat.

It will be interesting to see what comes next from Sam Altman, because I know there are going to be some key updates. Let me know what you think. Are you concerned? I'm sure people focused on AI safety are going to say "we told you so, we told you this needs safety work," but let me know what you think, because this is pretty important news, and I will see you in the next video.
Info
Channel: TheAIGRID
Views: 30,833
Id: gLU-pkwm6MA
Length: 18min 51sec (1131 seconds)
Published: Fri May 17 2024